From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Subject: [RFC PATCH v2 05/10] smp, cpu hotplug: Fix on_each_cpu_*() to prevent CPU offline properly
To: tglx@linutronix.de, peterz@infradead.org, paulmck@linux.vnet.ibm.com,
	rusty@rustcorp.com.au, mingo@kernel.org, akpm@linux-foundation.org,
	namhyung@kernel.org, vincent.guittot@linaro.org, tj@kernel.org,
	oleg@redhat.com
Cc: sbw@mit.edu, amit.kucheria@linaro.org, rostedt@goodmis.org, rjw@sisk.pl,
	srivatsa.bhat@linux.vnet.ibm.com, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 06 Dec 2012 00:13:53 +0530
Message-ID: <20121205184350.3750.57621.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20121205184041.3750.64945.stgit@srivatsabhat.in.ibm.com>
References: <20121205184041.3750.64945.stgit@srivatsabhat.in.ibm.com>

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on preempt_disable() to prevent CPUs from going offline from
under us. Use the get/put_online_cpus_atomic_light() APIs to prevent
changes to the cpu_online_mask when invoked from atomic context.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/smp.c |   26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index abcc4d2..b258a92 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -688,12 +688,12 @@ int on_each_cpu(void (*func) (void *info), void *info, int wait)
 	unsigned long flags;
 	int ret = 0;
 
-	preempt_disable();
+	get_online_cpus_atomic_light();
 	ret = smp_call_function(func, info, wait);
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
-	preempt_enable();
+	put_online_cpus_atomic_light();
 	return ret;
 }
 EXPORT_SYMBOL(on_each_cpu);
@@ -715,7 +715,11 @@ EXPORT_SYMBOL(on_each_cpu);
 void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 			void *info, bool wait)
 {
-	int cpu = get_cpu();
+	int cpu;
+
+	get_online_cpus_atomic_light();
+
+	cpu = smp_processor_id();
 
 	smp_call_function_many(mask, func, info, wait);
 	if (cpumask_test_cpu(cpu, mask)) {
@@ -723,7 +727,7 @@ void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 		func(info);
 		local_irq_enable();
 	}
-	put_cpu();
+	put_online_cpus_atomic_light();
 }
 EXPORT_SYMBOL(on_each_cpu_mask);
@@ -748,8 +752,10 @@ EXPORT_SYMBOL(on_each_cpu_mask);
  * The function might sleep if the GFP flags indicates a non
  * atomic allocation is allowed.
  *
- * Preemption is disabled to protect against CPUs going offline but not online.
- * CPUs going online during the call will not be seen or sent an IPI.
+ * We use get/put_online_cpus_atomic_light() to have a stable online mask
+ * to work with, whose CPUs won't go offline in-between our operation.
+ * And we will skip those CPUs which have already begun their offline journey.
+ * CPUs coming online during the call will not be seen or sent an IPI.
  *
  * You must not call this function with disabled interrupts or
  * from a hardware interrupt handler or from a bottom half handler.
@@ -764,26 +770,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
 	might_sleep_if(gfp_flags & __GFP_WAIT);
 
 	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
-		preempt_disable();
+		get_online_cpus_atomic_light();
 		for_each_online_cpu(cpu)
 			if (cond_func(cpu, info))
 				cpumask_set_cpu(cpu, cpus);
 		on_each_cpu_mask(cpus, func, info, wait);
-		preempt_enable();
+		put_online_cpus_atomic_light();
 		free_cpumask_var(cpus);
 	} else {
 		/*
 		 * No free cpumask, bother. No matter, we'll
 		 * just have to IPI them one by one.
 		 */
-		preempt_disable();
+		get_online_cpus_atomic_light();
 		for_each_online_cpu(cpu)
 			if (cond_func(cpu, info)) {
 				ret = smp_call_function_single(cpu, func,
 							       info, wait);
 				WARN_ON_ONCE(!ret);
 			}
-		preempt_enable();
+		put_online_cpus_atomic_light();
 	}
 }
 EXPORT_SYMBOL(on_each_cpu_cond);
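---

For reference, the conversion this patch applies throughout follows the
pattern sketched below. This is a minimal illustration, not part of the
applied diff; it assumes the get/put_online_cpus_atomic_light() APIs
introduced earlier in this series:

	/*
	 * Old pattern: relies on preempt_disable() keeping CPUs online,
	 * which only holds as long as the CPU offline path goes through
	 * stop_machine().
	 */
	preempt_disable();
	smp_call_function(func, info, 1);	/* IPI all other online CPUs */
	preempt_enable();

	/*
	 * New pattern: explicitly synchronize with CPU hotplug from
	 * atomic context. The cpu_online_mask stays stable until the
	 * put; CPUs that have already begun going offline are simply
	 * not sent an IPI.
	 */
	get_online_cpus_atomic_light();
	smp_call_function(func, info, 1);
	put_online_cpus_atomic_light();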