* [PATCH v3 00/45] CPU hotplug: stop_machine()-free CPU hotplug, part 1
From: Srivatsa S. Bhat @ 2013-06-27 19:52 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel

Hi,

This patchset is a first step towards removing stop_machine() from the
CPU hotplug offline path. It introduces a set of APIs (as a replacement for
preempt_disable()/preempt_enable()) to synchronize with CPU hotplug from
atomic contexts.
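
To give a flavour of the kind of conversion this enables, here is a purely
hypothetical example (my_percpu_work() is a made-up name, not code from this
series):

	/* Before: relies on the side effect of disabling preemption */
	preempt_disable();
	for_each_online_cpu(cpu)
		my_percpu_work(cpu);
	preempt_enable();

	/* After: synchronizes with CPU offline explicitly */
	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		my_percpu_work(cpu);
	put_online_cpus_atomic();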

The motivation for getting rid of stop_machine() is to avoid its ill effects:
the performance penalties[1] and real-time latencies it inflicts on the system,
and the fact that code involving stop_machine() has often been notoriously hard
to debug. More generally, removing stop_machine() from CPU hotplug also makes
the CPU hotplug design itself considerably cleaner.

Getting rid of stop_machine() involves building the corresponding
infrastructure in the core CPU hotplug code and converting all the places that
depend on the semantics of stop_machine() to synchronize with CPU hotplug
explicitly.

This patchset builds the first-level base infrastructure on which tree-wide
conversions can be built, and also includes the conversions themselves.
We certainly need a few more careful tree-sweeps to complete the conversion,
but the goal of this patchset is to introduce the core pieces and to get the
first batch of conversions in, covering a reasonable portion of the affected
call-sites.

This patchset also includes a debug infrastructure to help with the
conversions: with the newly introduced CONFIG_DEBUG_HOTPLUG_CPU option turned
on, it prints warnings whenever the need for a conversion is detected.
Patches 4-7 build this framework. Needless to say, I'd really appreciate it if
people could test kernels with this option turned on and report omissions, or
better yet, send patches to contribute to this effort.
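
To give a rough idea of the kind of check involved, here is a conceptual
sketch (this is NOT the actual code from patches 4-7; hotplug_reader_held()
is a hypothetical helper used only for illustration):

	#ifdef CONFIG_DEBUG_HOTPLUG_CPU
	static inline void warn_if_hotplug_unsynchronized(void)
	{
		/*
		 * Flag atomic-context users of the online-CPU state that
		 * still rely on preempt_disable() alone instead of the
		 * new get_online_cpus_atomic() API.
		 */
		WARN_ONCE(!preemptible() && !hotplug_reader_held(),
			  "Unsynchronized access to CPU hotplug state\n");
	}
	#endif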

[Note that this patchset doesn't replace stop_machine() yet, so the immediate
risk from an unconverted (or converted) call-site is nil, since there is no
major functional change involved.]

Once the conversion is complete, we can finalize the design of the
stop_machine() replacement and use it in the core CPU hotplug code. We have
debated several different designs in past discussions[2]; we'll revisit them
with more ideas once this conversion is done.


This patchset applies on current tip:master. It is also available in the
following git branch:

git://github.com/srivatsabhat/linux.git  stop-mch-free-cpuhp-part1-v3


Thank you very much!


Changes in v3:
-------------
* Added _nocheck() variants of cpu accessor functions for use in RCU
  and percpu-counter code. Reworked the RCU and percpu-counter patches
  accordingly.
* Dropped the patch related to kvm-vmx, since it was unnecessary.
* Added changes to r4k_on_each_cpu() in MIPS, as pointed out by Ralf Baechle.
* Added Acks and rebased to current tip/master.

Changes in v2:
-------------
* Build-fix for !HOTPLUG_CPU case.
* Minor code cleanups, and added a few comments.


References:
----------

1. Performance difference between CPU Hotplug with and without
   stop_machine():
   http://article.gmane.org/gmane.linux.kernel/1435249

2. Links to discussions around alternative synchronization schemes to
   replace stop_machine() in the CPU Hotplug code:

   v6: http://lwn.net/Articles/538819/
   v5: http://lwn.net/Articles/533553/
   v4: https://lkml.org/lkml/2012/12/11/209
   v3: http://lwn.net/Articles/528541/
   v2: https://lkml.org/lkml/2012/12/5/322
   v1: https://lkml.org/lkml/2012/12/4/88
       http://lwn.net/Articles/527988/

3. Links to previous versions of this patchset:
   v2: http://thread.gmane.org/gmane.linux.kernel/1515396
   v1: http://lwn.net/Articles/556138/


--
 Srivatsa S. Bhat (45):
      CPU hotplug: Provide APIs to prevent CPU offline from atomic context
      CPU hotplug: Clarify the usage of different synchronization APIs
      Documentation, CPU hotplug: Recommend usage of get/put_online_cpus_atomic()
      CPU hotplug: Add infrastructure to check lacking hotplug synchronization
      CPU hotplug: Protect set_cpu_online() to avoid false-positives
      CPU hotplug: Sprinkle debugging checks to catch locking bugs
      CPU hotplug: Add _nocheck() variants of accessor functions
      CPU hotplug: Expose the new debug config option
      CPU hotplug: Convert preprocessor macros to static inline functions
      smp: Use get/put_online_cpus_atomic() to prevent CPU offline
      sched/core: Use get/put_online_cpus_atomic() to prevent CPU offline
      migration: Use raw_spin_lock/unlock since interrupts are already disabled
      sched/fair: Use get/put_online_cpus_atomic() to prevent CPU offline
      timer: Use get/put_online_cpus_atomic() to prevent CPU offline
      sched/rt: Use get/put_online_cpus_atomic() to prevent CPU offline
      rcu: Use cpu_is_offline_nocheck() to avoid false-positive warnings
      tick-broadcast: Use get/put_online_cpus_atomic() to prevent CPU offline
      time/clocksource: Use get/put_online_cpus_atomic() to prevent CPU offline
      softirq: Use get/put_online_cpus_atomic() to prevent CPU offline
      irq: Use get/put_online_cpus_atomic() to prevent CPU offline
      net: Use get/put_online_cpus_atomic() to prevent CPU offline
      block: Use get/put_online_cpus_atomic() to prevent CPU offline
      percpu_counter: Use _nocheck version of for_each_online_cpu()
      infiniband: ehca: Use get/put_online_cpus_atomic() to prevent CPU offline
      [SCSI] fcoe: Use get/put_online_cpus_atomic() to prevent CPU offline
      staging/octeon: Use get/put_online_cpus_atomic() to prevent CPU offline
      x86: Use get/put_online_cpus_atomic() to prevent CPU offline
      perf/x86: Use get/put_online_cpus_atomic() to prevent CPU offline
      KVM: Use get/put_online_cpus_atomic() to prevent CPU offline
      x86/xen: Use get/put_online_cpus_atomic() to prevent CPU offline
      alpha/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
      blackfin/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
      cris/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
      hexagon/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
      ia64: irq, perfmon: Use get/put_online_cpus_atomic() to prevent CPU offline
      ia64: smp, tlb: Use get/put_online_cpus_atomic() to prevent CPU offline
      m32r: Use get/put_online_cpus_atomic() to prevent CPU offline
      MIPS: Use get/put_online_cpus_atomic() to prevent CPU offline
      mn10300: Use get/put_online_cpus_atomic() to prevent CPU offline
      powerpc, irq: Use GFP_ATOMIC allocations in atomic context
      powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline
      powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning
      sh: Use get/put_online_cpus_atomic() to prevent CPU offline
      sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
      tile: Use get/put_online_cpus_atomic() to prevent CPU offline

 Documentation/cpu-hotplug.txt                 |   20 +++-
 arch/alpha/kernel/smp.c                       |   19 ++-
 arch/blackfin/mach-common/smp.c               |    4 -
 arch/cris/arch-v32/kernel/smp.c               |    5 +
 arch/hexagon/kernel/smp.c                     |    3 +
 arch/ia64/kernel/irq_ia64.c                   |   15 +++
 arch/ia64/kernel/perfmon.c                    |    8 +
 arch/ia64/kernel/smp.c                        |   12 +-
 arch/ia64/mm/tlb.c                            |    4 -
 arch/m32r/kernel/smp.c                        |   16 +--
 arch/mips/kernel/cevt-smtc.c                  |    7 +
 arch/mips/kernel/smp.c                        |   16 +--
 arch/mips/kernel/smtc.c                       |   12 ++
 arch/mips/mm/c-octeon.c                       |    4 -
 arch/mips/mm/c-r4k.c                          |    5 +
 arch/mn10300/mm/cache-smp.c                   |    3 +
 arch/mn10300/mm/tlb-smp.c                     |   17 ++-
 arch/powerpc/kernel/irq.c                     |    9 +-
 arch/powerpc/kernel/machine_kexec_64.c        |    4 -
 arch/powerpc/kernel/smp.c                     |    4 +
 arch/powerpc/kvm/book3s_hv.c                  |    5 +
 arch/powerpc/mm/mmu_context_nohash.c          |    3 +
 arch/powerpc/oprofile/cell/spu_profiler.c     |    3 +
 arch/powerpc/oprofile/cell/spu_task_sync.c    |    4 +
 arch/powerpc/oprofile/op_model_cell.c         |    6 +
 arch/sh/kernel/smp.c                          |   12 +-
 arch/sparc/kernel/smp_64.c                    |   12 +-
 arch/tile/kernel/module.c                     |    3 +
 arch/tile/kernel/tlb.c                        |   15 +++
 arch/tile/mm/homecache.c                      |    3 +
 arch/x86/kernel/apic/io_apic.c                |   21 +++-
 arch/x86/kernel/cpu/mcheck/therm_throt.c      |    4 -
 arch/x86/kernel/cpu/perf_event_intel_uncore.c |    6 +
 arch/x86/mm/tlb.c                             |   14 +--
 arch/x86/xen/mmu.c                            |    9 +-
 block/blk-softirq.c                           |    3 +
 drivers/infiniband/hw/ehca/ehca_irq.c         |    5 +
 drivers/scsi/fcoe/fcoe.c                      |    7 +
 drivers/staging/octeon/ethernet-rx.c          |    3 +
 include/linux/cpu.h                           |   29 +++++
 include/linux/cpumask.h                       |  118 +++++++++++++++++++++
 kernel/cpu.c                                  |  138 +++++++++++++++++++++++++
 kernel/irq/manage.c                           |    7 +
 kernel/irq/proc.c                             |    3 +
 kernel/rcutree.c                              |   12 ++
 kernel/sched/core.c                           |   27 ++++-
 kernel/sched/fair.c                           |   14 ++-
 kernel/sched/rt.c                             |   14 +++
 kernel/smp.c                                  |   52 +++++----
 kernel/softirq.c                              |    3 +
 kernel/time/clocksource.c                     |    5 +
 kernel/time/tick-broadcast.c                  |    8 +
 kernel/timer.c                                |    4 +
 lib/Kconfig.debug                             |    8 +
 lib/cpumask.c                                 |    8 +
 lib/percpu_counter.c                          |    9 +-
 net/core/dev.c                                |    9 +-
 virt/kvm/kvm_main.c                           |    8 +
 58 files changed, 674 insertions(+), 127 deletions(-)


Regards,
Srivatsa S. Bhat
IBM Linux Technology Center


* [PATCH v3 01/45] CPU hotplug: Provide APIs to prevent CPU offline from atomic context
From: Srivatsa S. Bhat @ 2013-06-27 19:52 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Andrew Morton, Tejun Heo,
	Rafael J. Wysocki, Yasuaki Ishimatsu, Srivatsa S. Bhat

The current CPU offline code uses stop_machine() internally, so disabling
preemption prevents stop_machine() from taking effect and thereby, as a side
effect, also prevents CPUs from going offline.

There are places where this side-effect of preempt_disable() (or equivalent)
is used to synchronize with CPU hotplug. Typically these are in atomic
sections of code, where they can't make use of get/put_online_cpus(), because
the latter set of APIs can sleep.

Going forward, we want to get rid of stop_machine() from the CPU hotplug
offline path. And then, with stop_machine() gone, disabling preemption will
no longer prevent CPUs from going offline.

So provide a set of APIs for such atomic hotplug readers, to prevent (any)
CPUs from going offline. For now, they simply map to preempt_disable() and
preempt_enable(), but having them in place helps us do the tree-wide
conversion, as a preparatory step to removing stop_machine() from CPU hotplug.

(Besides, it serves as good documentation too, since it clearly marks the
places where we synchronize with CPU hotplug, instead of conflating that
subtly with disabling preemption.)

In the future, when stop_machine() is actually removed, we will switch the
implementation of these APIs to a suitable synchronization scheme.
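
As an illustration (hypothetical caller, not part of this patch), the APIs
are used just like preempt_disable()/preempt_enable(), with the get-side
additionally returning the current CPU number for convenience:

	unsigned int cpu;

	cpu = get_online_cpus_atomic();	/* no CPU can go offline from here... */

	/* ... non-sleeping work involving 'cpu' or other online CPUs ... */

	put_online_cpus_atomic();	/* ... until here */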

Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/cpu.h |   20 ++++++++++++++++++++
 kernel/cpu.c        |   38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 9f3c7e8..a57b25a 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -17,6 +17,8 @@
 #include <linux/node.h>
 #include <linux/compiler.h>
 #include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/smp.h>
 
 struct device;
 
@@ -175,6 +177,8 @@ extern struct bus_type cpu_subsys;
 
 extern void get_online_cpus(void);
 extern void put_online_cpus(void);
+extern unsigned int get_online_cpus_atomic(void);
+extern void put_online_cpus_atomic(void);
 extern void cpu_hotplug_disable(void);
 extern void cpu_hotplug_enable(void);
 #define hotcpu_notifier(fn, pri)	cpu_notifier(fn, pri)
@@ -202,6 +206,22 @@ static inline void cpu_hotplug_driver_unlock(void)
 #define put_online_cpus()	do { } while (0)
 #define cpu_hotplug_disable()	do { } while (0)
 #define cpu_hotplug_enable()	do { } while (0)
+
+static inline unsigned int get_online_cpus_atomic(void)
+{
+	/*
+	 * Disable preemption to avoid getting complaints from the
+	 * debug_smp_processor_id() code.
+	 */
+	preempt_disable();
+	return smp_processor_id();
+}
+
+static inline void put_online_cpus_atomic(void)
+{
+	preempt_enable();
+}
+
 #define hotcpu_notifier(fn, pri)	do { (void)(fn); } while (0)
 /* These aren't inline functions due to a GCC bug. */
 #define register_hotcpu_notifier(nb)	({ (void)(nb); 0; })
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 198a388..2d03398 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -154,6 +154,44 @@ void cpu_hotplug_enable(void)
 	cpu_maps_update_done();
 }
 
+/*
+ * get_online_cpus_atomic - Prevent any CPU from going offline
+ *
+ * Atomic hotplug readers (tasks which wish to prevent CPUs from going
+ * offline during their critical section, but can't afford to sleep)
+ * can invoke this function to synchronize with CPU offline. This function
+ * can be called recursively, provided it is matched with an equal number
+ * of calls to put_online_cpus_atomic().
+ *
+ * Note: This does NOT prevent CPUs from coming online! It only prevents
+ * CPUs from going offline.
+ *
+ * Lock ordering rule: Strictly speaking, there is no lock ordering
+ * requirement here, but it is advisable to keep the locking consistent.
+ * As a simple rule-of-thumb, use these functions in the outer-most blocks
+ * of your critical sections, outside of other locks.
+ *
+ * Returns the current CPU number, with preemption disabled.
+ */
+unsigned int get_online_cpus_atomic(void)
+{
+	/*
+	 * The current CPU hotplug implementation uses stop_machine() in
+	 * the CPU offline path. And disabling preemption prevents
+	 * stop_machine() from taking effect. Thus, this prevents any CPU
+	 * from going offline.
+	 */
+	preempt_disable();
+	return smp_processor_id();
+}
+EXPORT_SYMBOL_GPL(get_online_cpus_atomic);
+
+void put_online_cpus_atomic(void)
+{
+	preempt_enable();
+}
+EXPORT_SYMBOL_GPL(put_online_cpus_atomic);
+
 #else /* #if CONFIG_HOTPLUG_CPU */
 static void cpu_hotplug_begin(void) {}
 static void cpu_hotplug_done(void) {}


* [PATCH v3 02/45] CPU hotplug: Clarify the usage of different synchronization APIs
From: Srivatsa S. Bhat @ 2013-06-27 19:52 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Andrew Morton, Yasuaki Ishimatsu,
	Rafael J. Wysocki, Srivatsa S. Bhat

We have quite a few APIs now that help synchronize with CPU hotplug.
Among them, get/put_online_cpus() is the oldest and the most well-known,
so no problems there. By extension, it's easy to comprehend the new
set: get/put_online_cpus_atomic().

But there is yet another set, which might appear tempting to use:
cpu_hotplug_disable()/cpu_hotplug_enable(). Add comments to clarify
that this latter set is NOT for general use and must be used only in
specific cases where the requirement is really to _disable_ hotplug
and not just to synchronize with it.
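
As a rough rule of thumb (illustrative sketch only; do_sleepable_work() and
do_atomic_work() are made-up names):

	/* Sleepable context: just synchronize with CPU hotplug */
	get_online_cpus();
	do_sleepable_work();
	put_online_cpus();

	/* Atomic context (can't sleep): use the new atomic variant */
	get_online_cpus_atomic();
	do_atomic_work();
	put_online_cpus_atomic();

	/* Genuinely need hotplug operations disabled (rare) */
	cpu_hotplug_disable();
	/* ... code that must not see any hotplug operation at all ... */
	cpu_hotplug_enable();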

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/cpu.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 2d03398..860f51a 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -139,6 +139,13 @@ static void cpu_hotplug_done(void)
  * the 'cpu_hotplug_disabled' flag. The same lock is also acquired by the
  * hotplug path before performing hotplug operations. So acquiring that lock
  * guarantees mutual exclusion from any currently running hotplug operations.
+ *
+ * Note: In most cases, this is *NOT* the function you need. If you simply
+ * want to avoid racing with CPU hotplug operations, use get/put_online_cpus()
+ * or get/put_online_cpus_atomic(), depending on the situation.
+ *
+ * This set of functions is reserved for cases where you really wish to
+ * _disable_ CPU hotplug and not just synchronize with it.
  */
 void cpu_hotplug_disable(void)
 {



* [PATCH v3 03/45] Documentation, CPU hotplug: Recommend usage of get/put_online_cpus_atomic()
From: Srivatsa S. Bhat @ 2013-06-27 19:52 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Rob Landley, linux-doc, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

So add documentation to recommend using the new get/put_online_cpus_atomic()
APIs to prevent CPUs from going offline, when executing in atomic context.

Cc: Rob Landley <rob@landley.net>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 Documentation/cpu-hotplug.txt |   20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index 9f40135..7b3ca60 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -113,13 +113,18 @@ Never use anything other than cpumask_t to represent bitmap of CPUs.
 	#include <linux/cpu.h>
 	get_online_cpus() and put_online_cpus():
 
-The above calls are used to inhibit cpu hotplug operations. While the
+The above calls are used to inhibit cpu hotplug operations, when invoked from
+non-atomic contexts (because the above functions can sleep). While the
 cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
-If you merely need to avoid cpus going away, you could also use
-preempt_disable() and preempt_enable() for those sections.
-Just remember the critical section cannot call any
-function that can sleep or schedule this process away. The preempt_disable()
-will work as long as stop_machine_run() is used to take a cpu down.
+
+However, if you are executing in atomic context (ie., you can't afford to
+sleep), and you merely need to avoid cpus going offline, you can use
+get_online_cpus_atomic() and put_online_cpus_atomic() for those sections.
+Just remember the critical section cannot call any function that can sleep or
+schedule this process away. Using preempt_disable() will also work, as long
+as stop_machine() is used to take a CPU down. But we are going to get rid of
+stop_machine() in the CPU offline path soon, so it is strongly recommended
+to use the APIs mentioned above.
 
 CPU Hotplug - Frequently Asked Questions.
 
@@ -360,6 +365,9 @@ A: There are two ways.  If your code can be run in interrupt context, use
 		return err;
 	}
 
+   If my_func_on_cpu() itself cannot block, use get/put_online_cpus_atomic()
+   instead of get/put_online_cpus(), to prevent CPUs from going offline.
+
 Q: How do we determine how many CPUs are available for hotplug.
 A: There is no clear spec defined way from ACPI that can give us that
    information today. Based on some input from Natalie of Unisys,
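
For the case mentioned in the FAQ hunk above, where my_func_on_cpu() itself
cannot block, a hypothetical caller would look roughly like this (sketch only,
not part of this patch; __my_remote_func is a made-up non-sleeping callback):

	int my_func_on_cpu(int cpu)
	{
		int err = -EINVAL;

		get_online_cpus_atomic();
		if (cpu_online(cpu))
			err = smp_call_function_single(cpu, __my_remote_func,
						       NULL, 1);
		put_online_cpus_atomic();

		return err;
	}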


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 04/45] CPU hotplug: Add infrastructure to check lacking hotplug synchronization
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:53   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:53 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Rusty Russell, Alex Shi, KOSAKI Motohiro,
	Tejun Heo, Thomas Gleixner, Andrew Morton, Yasuaki Ishimatsu,
	Rafael J. Wysocki, Srivatsa S. Bhat

Add a debugging infrastructure to warn if an atomic hotplug reader has not
invoked get_online_cpus_atomic() before traversing/accessing the
cpu_online_mask. Encapsulate these checks under a new debug config option
DEBUG_HOTPLUG_CPU.

This debugging infrastructure proves useful in the tree-wide conversion
of atomic hotplug readers from preempt_disable() to the new APIs, and
helps us catch the places we missed, well before we actually get rid of
stop_machine(). We can perhaps remove the debugging checks later on.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Alex Shi <alex.shi@intel.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/cpumask.h |   12 ++++++
 kernel/cpu.c            |   89 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index d08e4d2..9197ca4 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -101,6 +101,18 @@ extern const struct cpumask *const cpu_active_mask;
 #define cpu_active(cpu)		((cpu) == 0)
 #endif
 
+#ifdef CONFIG_DEBUG_HOTPLUG_CPU
+extern void check_hotplug_safe_cpumask(const struct cpumask *mask);
+extern void check_hotplug_safe_cpu(unsigned int cpu,
+				   const struct cpumask *mask);
+#else
+static inline void check_hotplug_safe_cpumask(const struct cpumask *mask) { }
+static inline void check_hotplug_safe_cpu(unsigned int cpu,
+					  const struct cpumask *mask)
+{
+}
+#endif
+
 /* verify cpu argument to cpumask_* operators */
 static inline unsigned int cpumask_check(unsigned int cpu)
 {
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 860f51a..5297ec1 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -63,6 +63,92 @@ static struct {
 	.refcount = 0,
 };
 
+#ifdef CONFIG_DEBUG_HOTPLUG_CPU
+
+static DEFINE_PER_CPU(unsigned long, atomic_reader_refcnt);
+
+static int current_is_hotplug_safe(const struct cpumask *mask)
+{
+
+	/* If we are not dealing with cpu_online_mask, don't complain. */
+	if (mask != cpu_online_mask)
+		return 1;
+
+	/* If this is the task doing hotplug, don't complain. */
+	if (unlikely(current == cpu_hotplug.active_writer))
+		return 1;
+
+	/* If we are in early boot, don't complain. */
+	if (system_state != SYSTEM_RUNNING)
+		return 1;
+
+	/*
+	 * Check if the current task is in atomic context and it has
+	 * invoked get_online_cpus_atomic() to synchronize with
+	 * CPU Hotplug.
+	 */
+	if (preempt_count() || irqs_disabled())
+		return this_cpu_read(atomic_reader_refcnt);
+	else
+		return 1; /* No checks for non-atomic contexts for now */
+}
+
+static inline void warn_hotplug_unsafe(void)
+{
+	WARN_ONCE(1, "Must use get/put_online_cpus_atomic() to synchronize"
+		     " with CPU hotplug\n");
+}
+
+/*
+ * Check if the task (executing in atomic context) has the required protection
+ * against CPU hotplug, while accessing the specified cpumask.
+ */
+void check_hotplug_safe_cpumask(const struct cpumask *mask)
+{
+	if (!current_is_hotplug_safe(mask))
+		warn_hotplug_unsafe();
+}
+EXPORT_SYMBOL_GPL(check_hotplug_safe_cpumask);
+
+/*
+ * Similar to check_hotplug_safe_cpumask(), except that we don't complain
+ * if the task (executing in atomic context) is testing whether the CPU it
+ * is executing on is online or not.
+ *
+ * (A task executing with preemption disabled on a CPU, automatically prevents
+ *  offlining that CPU, irrespective of the actual implementation of CPU
+ *  offline. So we don't enforce holding of get_online_cpus_atomic() for that
+ *  case).
+ */
+void check_hotplug_safe_cpu(unsigned int cpu, const struct cpumask *mask)
+{
+	if (!current_is_hotplug_safe(mask) && cpu != smp_processor_id())
+		warn_hotplug_unsafe();
+}
+EXPORT_SYMBOL_GPL(check_hotplug_safe_cpu);
+
+static inline void atomic_reader_refcnt_inc(void)
+{
+	this_cpu_inc(atomic_reader_refcnt);
+}
+
+static inline void atomic_reader_refcnt_dec(void)
+{
+	this_cpu_dec(atomic_reader_refcnt);
+}
+
+#else
+
+static inline void atomic_reader_refcnt_inc(void)
+{
+}
+
+static inline void atomic_reader_refcnt_dec(void)
+{
+}
+
+#endif
+
 void get_online_cpus(void)
 {
 	might_sleep();
@@ -189,12 +275,15 @@ unsigned int get_online_cpus_atomic(void)
 	 * from going offline.
 	 */
 	preempt_disable();
+	atomic_reader_refcnt_inc();
+
 	return smp_processor_id();
 }
 EXPORT_SYMBOL_GPL(get_online_cpus_atomic);
 
 void put_online_cpus_atomic(void)
 {
+	atomic_reader_refcnt_dec();
 	preempt_enable();
 }
 EXPORT_SYMBOL_GPL(put_online_cpus_atomic);
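
As a rough sketch of how these checks behave (illustrative code, not part of
this patch; the checks are only wired into the cpumask accessors by a later
patch in this series):

	static void debug_hotplug_example(void)
	{
		unsigned int cpu;

		/*
		 * Unconverted atomic reader: preempt_count() is non-zero
		 * but atomic_reader_refcnt is 0, so the check would issue
		 * the WARN_ONCE() splat.
		 */
		preempt_disable();
		cpu = cpumask_first(cpu_online_mask);
		preempt_enable();

		/*
		 * Converted reader: the per-cpu refcount is non-zero, so
		 * the check stays silent.
		 */
		get_online_cpus_atomic();
		cpu = cpumask_first(cpu_online_mask);
		put_online_cpus_atomic();

		(void)cpu;
	}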


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 05/45] CPU hotplug: Protect set_cpu_online() to avoid false-positives
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:53   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:53 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Andrew Morton, Yasuaki Ishimatsu,
	Rafael J. Wysocki, Srivatsa S. Bhat

When bringing a secondary CPU online, the task running on the CPU coming up
sets itself in the cpu_online_mask. This is safe even though this task is not
the hotplug writer task.

But it is hard to teach this to the CPU hotplug debug infrastructure, and
if we get it wrong, we risk making the debug code too lenient and thus prone
to false negatives.

Luckily, all architectures use set_cpu_online() to manipulate the
cpu_online_mask. So, to avoid false-positive warnings from the CPU hotplug
debug code, encapsulate the body of set_cpu_online() within
get/put_online_cpus_atomic().

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/cpu.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 5297ec1..35e7115 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -832,10 +832,14 @@ void set_cpu_present(unsigned int cpu, bool present)
 
 void set_cpu_online(unsigned int cpu, bool online)
 {
+	get_online_cpus_atomic();
+
 	if (online)
 		cpumask_set_cpu(cpu, to_cpumask(cpu_online_bits));
 	else
 		cpumask_clear_cpu(cpu, to_cpumask(cpu_online_bits));
+
+	put_online_cpus_atomic();
 }
 
 void set_cpu_active(unsigned int cpu, bool active)
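
For context, an illustrative bringup-side caller (simplified; the real
architecture code differs):

	/*
	 * The CPU coming up marks itself online from atomic context,
	 * without being the hotplug writer. Since set_cpu_online() now
	 * takes get/put_online_cpus_atomic() internally, this path does
	 * not trip the DEBUG_HOTPLUG_CPU checks.
	 */
	static void mark_self_online(void)
	{
		set_cpu_online(smp_processor_id(), true);
	}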


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 06/45] CPU hotplug: Sprinkle debugging checks to catch locking bugs
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:53   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:53 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Rusty Russell, Alex Shi, KOSAKI Motohiro,
	Tejun Heo, Andrew Morton, Joonsoo Kim, Srivatsa S. Bhat

Now that we have a debug infrastructure in place to detect cases where
get/put_online_cpus_atomic() should have been used, add these checks at the
right spots to help catch places where we missed converting to the new
APIs.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Alex Shi <alex.shi@intel.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/cpumask.h |   47 +++++++++++++++++++++++++++++++++++++++++++++--
 lib/cpumask.c           |    8 ++++++++
 2 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 9197ca4..06d2c36 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -169,6 +169,7 @@ static inline unsigned int cpumask_any_but(const struct cpumask *mask,
  */
 static inline unsigned int cpumask_first(const struct cpumask *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
 	return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
 }
 
@@ -184,6 +185,8 @@ static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
 	/* -1 is a legal arg here. */
 	if (n != -1)
 		cpumask_check(n);
+
+	check_hotplug_safe_cpumask(srcp);
 	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
@@ -199,6 +202,8 @@ static inline unsigned int cpumask_next_zero(int n, const struct cpumask *srcp)
 	/* -1 is a legal arg here. */
 	if (n != -1)
 		cpumask_check(n);
+
+	check_hotplug_safe_cpumask(srcp);
 	return find_next_zero_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
 }
 
@@ -288,8 +293,15 @@ static inline void cpumask_clear_cpu(int cpu, struct cpumask *dstp)
  *
  * No static inline type checking - see Subtlety (1) above.
  */
-#define cpumask_test_cpu(cpu, cpumask) \
-	test_bit(cpumask_check(cpu), cpumask_bits((cpumask)))
+#define cpumask_test_cpu(cpu, cpumask)				\
+({								\
+	int __ret;						\
+								\
+	check_hotplug_safe_cpu(cpu, cpumask);			\
+	__ret = test_bit(cpumask_check(cpu),			\
+				cpumask_bits((cpumask)));	\
+	__ret;							\
+})
 
 /**
  * cpumask_test_and_set_cpu - atomically test and set a cpu in a cpumask
@@ -349,6 +361,9 @@ static inline int cpumask_and(struct cpumask *dstp,
 			       const struct cpumask *src1p,
 			       const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	return bitmap_and(cpumask_bits(dstp), cpumask_bits(src1p),
 				       cpumask_bits(src2p), nr_cpumask_bits);
 }
@@ -362,6 +377,9 @@ static inline int cpumask_and(struct cpumask *dstp,
 static inline void cpumask_or(struct cpumask *dstp, const struct cpumask *src1p,
 			      const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	bitmap_or(cpumask_bits(dstp), cpumask_bits(src1p),
 				      cpumask_bits(src2p), nr_cpumask_bits);
 }
@@ -376,6 +394,9 @@ static inline void cpumask_xor(struct cpumask *dstp,
 			       const struct cpumask *src1p,
 			       const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	bitmap_xor(cpumask_bits(dstp), cpumask_bits(src1p),
 				       cpumask_bits(src2p), nr_cpumask_bits);
 }
@@ -392,6 +413,9 @@ static inline int cpumask_andnot(struct cpumask *dstp,
 				  const struct cpumask *src1p,
 				  const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	return bitmap_andnot(cpumask_bits(dstp), cpumask_bits(src1p),
 					  cpumask_bits(src2p), nr_cpumask_bits);
 }
@@ -404,6 +428,8 @@ static inline int cpumask_andnot(struct cpumask *dstp,
 static inline void cpumask_complement(struct cpumask *dstp,
 				      const struct cpumask *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
+
 	bitmap_complement(cpumask_bits(dstp), cpumask_bits(srcp),
 					      nr_cpumask_bits);
 }
@@ -416,6 +442,9 @@ static inline void cpumask_complement(struct cpumask *dstp,
 static inline bool cpumask_equal(const struct cpumask *src1p,
 				const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	return bitmap_equal(cpumask_bits(src1p), cpumask_bits(src2p),
 						 nr_cpumask_bits);
 }
@@ -428,6 +457,10 @@ static inline bool cpumask_equal(const struct cpumask *src1p,
 static inline bool cpumask_intersects(const struct cpumask *src1p,
 				     const struct cpumask *src2p)
 {
+
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	return bitmap_intersects(cpumask_bits(src1p), cpumask_bits(src2p),
 						      nr_cpumask_bits);
 }
@@ -442,6 +475,9 @@ static inline bool cpumask_intersects(const struct cpumask *src1p,
 static inline int cpumask_subset(const struct cpumask *src1p,
 				 const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	return bitmap_subset(cpumask_bits(src1p), cpumask_bits(src2p),
 						  nr_cpumask_bits);
 }
@@ -470,6 +506,12 @@ static inline bool cpumask_full(const struct cpumask *srcp)
  */
 static inline unsigned int cpumask_weight(const struct cpumask *srcp)
 {
+	/*
+	 * Often, we just want to have a rough estimate of the number of
+	 * online CPUs, without going to the trouble of synchronizing with
+	 * CPU hotplug. So don't invoke check_hotplug_safe_cpumask() here.
+	 */
+
 	return bitmap_weight(cpumask_bits(srcp), nr_cpumask_bits);
 }
 
@@ -507,6 +549,7 @@ static inline void cpumask_shift_left(struct cpumask *dstp,
 static inline void cpumask_copy(struct cpumask *dstp,
 				const struct cpumask *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
 	bitmap_copy(cpumask_bits(dstp), cpumask_bits(srcp), nr_cpumask_bits);
 }
 
diff --git a/lib/cpumask.c b/lib/cpumask.c
index d327b87..481df57 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -7,12 +7,14 @@
 
 int __first_cpu(const cpumask_t *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
 	return min_t(int, NR_CPUS, find_first_bit(srcp->bits, NR_CPUS));
 }
 EXPORT_SYMBOL(__first_cpu);
 
 int __next_cpu(int n, const cpumask_t *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
 	return min_t(int, NR_CPUS, find_next_bit(srcp->bits, NR_CPUS, n+1));
 }
 EXPORT_SYMBOL(__next_cpu);
@@ -20,6 +22,7 @@ EXPORT_SYMBOL(__next_cpu);
 #if NR_CPUS > 64
 int __next_cpu_nr(int n, const cpumask_t *srcp)
 {
+	check_hotplug_safe_cpumask(srcp);
 	return min_t(int, nr_cpu_ids,
 				find_next_bit(srcp->bits, nr_cpu_ids, n+1));
 }
@@ -37,6 +40,9 @@ EXPORT_SYMBOL(__next_cpu_nr);
 int cpumask_next_and(int n, const struct cpumask *src1p,
 		     const struct cpumask *src2p)
 {
+	check_hotplug_safe_cpumask(src1p);
+	check_hotplug_safe_cpumask(src2p);
+
 	while ((n = cpumask_next(n, src1p)) < nr_cpu_ids)
 		if (cpumask_test_cpu(n, src2p))
 			break;
@@ -57,6 +63,8 @@ int cpumask_any_but(const struct cpumask *mask, unsigned int cpu)
 	unsigned int i;
 
 	cpumask_check(cpu);
+	check_hotplug_safe_cpumask(mask);
+
 	for_each_cpu(i, mask)
 		if (i != cpu)
 			break;
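
A hedged sketch of the special case that check_hotplug_safe_cpu() allows
(the helper below is illustrative, not from this patchset):

	/*
	 * Testing whether the CPU we are currently running on is online
	 * is permitted without get_online_cpus_atomic(): running with
	 * preemption disabled already pins that CPU online, whatever the
	 * CPU-offline implementation, so cpumask_test_cpu() won't warn.
	 */
	static bool this_cpu_still_online(void)
	{
		bool ret;

		preempt_disable();
		ret = cpumask_test_cpu(smp_processor_id(), cpu_online_mask);
		preempt_enable();

		return ret;
	}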


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 07/45] CPU hotplug: Add _nocheck() variants of accessor functions
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:53   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:53 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Rusty Russell, Alex Shi, KOSAKI Motohiro,
	Tejun Heo, Andrew Morton, Joonsoo Kim, Srivatsa S. Bhat

Sometimes, we have situations where the synchronization design of a
particular subsystem handles CPU hotplug properly, but the details are
non-trivial, making it hard to teach this to the rudimentary hotplug
locking validator. In such cases, it would be useful to have a set of
_nocheck() variants of the cpu accessor functions, to avoid false-positive
warnings from the hotplug locking validator.

However, we won't go overboard with this: we'll add them only on a
case-by-case basis, and mandate that call-sites which use them carry a
comment explaining why the access is hotplug-safe, to justify the use
of the _nocheck() variants.

At the moment, the RCU and the percpu-counter code have legitimate
reasons to use the _nocheck() variants, so let's add them for
cpu_is_offline() and for_each_online_cpu(), for use in those subsystems
respectively.
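
To illustrate the intended usage pattern (this is only a sketch with a
made-up subsystem, not one of the actual conversions done later in the
series), a call-site whose own locking already serializes against its
CPU hotplug notifier would use the _nocheck() variant together with the
mandated comment, roughly like this:

	struct my_counter {
		spinlock_t	lock;
		s64		count;
		s64 __percpu	*counters;
	};

	static s64 my_counter_sum(struct my_counter *mc)
	{
		s64 sum = mc->count;
		int cpu;

		/*
		 * Hotplug-safe without get_online_cpus_atomic(): mc->lock
		 * is also taken in our CPU hotplug notifier, so the set of
		 * online CPUs cannot change under us here. Hence the
		 * _nocheck() variant.
		 */
		spin_lock(&mc->lock);
		for_each_online_cpu_nocheck(cpu)
			sum += *per_cpu_ptr(mc->counters, cpu);
		spin_unlock(&mc->lock);

		return sum;
	}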

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Alex Shi <alex.shi@intel.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/cpumask.h |   59 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 06d2c36..f577a7d 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -87,6 +87,7 @@ extern const struct cpumask *const cpu_active_mask;
 #define num_present_cpus()	cpumask_weight(cpu_present_mask)
 #define num_active_cpus()	cpumask_weight(cpu_active_mask)
 #define cpu_online(cpu)		cpumask_test_cpu((cpu), cpu_online_mask)
+#define cpu_online_nocheck(cpu)	cpumask_test_cpu_nocheck((cpu), cpu_online_mask)
 #define cpu_possible(cpu)	cpumask_test_cpu((cpu), cpu_possible_mask)
 #define cpu_present(cpu)	cpumask_test_cpu((cpu), cpu_present_mask)
 #define cpu_active(cpu)		cpumask_test_cpu((cpu), cpu_active_mask)
@@ -96,6 +97,7 @@ extern const struct cpumask *const cpu_active_mask;
 #define num_present_cpus()	1U
 #define num_active_cpus()	1U
 #define cpu_online(cpu)		((cpu) == 0)
+#define cpu_online_nocheck(cpu)	cpu_online((cpu))
 #define cpu_possible(cpu)	((cpu) == 0)
 #define cpu_present(cpu)	((cpu) == 0)
 #define cpu_active(cpu)		((cpu) == 0)
@@ -156,6 +158,8 @@ static inline unsigned int cpumask_any_but(const struct cpumask *mask,
 
 #define for_each_cpu(cpu, mask)			\
 	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
+#define for_each_cpu_nocheck(cpu, mask)		\
+			for_each_cpu((cpu), (mask))
 #define for_each_cpu_not(cpu, mask)		\
 	for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)
 #define for_each_cpu_and(cpu, mask, and)	\
@@ -191,6 +195,24 @@ static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
 }
 
 /**
+ * cpumask_next_nocheck - get the next cpu in a cpumask, without checking
+ * 			  for hotplug safety
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @srcp: the cpumask pointer
+ *
+ * Returns >= nr_cpu_ids if no further cpus set.
+ */
+static inline unsigned int cpumask_next_nocheck(int n,
+						const struct cpumask *srcp)
+{
+	/* -1 is a legal arg here. */
+	if (n != -1)
+		cpumask_check(n);
+
+	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n+1);
+}
+
+/**
  * cpumask_next_zero - get the next unset cpu in a cpumask
  * @n: the cpu prior to the place to search (ie. return will be > @n)
  * @srcp: the cpumask pointer
@@ -222,6 +244,21 @@ int cpumask_any_but(const struct cpumask *mask, unsigned int cpu);
 		(cpu) = cpumask_next((cpu), (mask)),	\
 		(cpu) < nr_cpu_ids;)
 
+
+/**
+ * for_each_cpu_nocheck - iterate over every cpu in a mask,
+ * 			  without checking for hotplug safety
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask: the cpumask pointer
+ *
+ * After the loop, cpu is >= nr_cpu_ids.
+ */
+#define for_each_cpu_nocheck(cpu, mask)				\
+	for ((cpu) = -1;					\
+		(cpu) = cpumask_next_nocheck((cpu), (mask)),	\
+		(cpu) < nr_cpu_ids;)
+
+
 /**
  * for_each_cpu_not - iterate over every cpu in a complemented mask
  * @cpu: the (optionally unsigned) integer iterator
@@ -304,6 +341,25 @@ static inline void cpumask_clear_cpu(int cpu, struct cpumask *dstp)
 })
 
 /**
+ * cpumask_test_cpu_nocheck - test for a cpu in a cpumask, without
+ * 			      checking for hotplug safety
+ * @cpu: cpu number (< nr_cpu_ids)
+ * @cpumask: the cpumask pointer
+ *
+ * Returns 1 if @cpu is set in @cpumask, else returns 0
+ *
+ * No static inline type checking - see Subtlety (1) above.
+ */
+#define cpumask_test_cpu_nocheck(cpu, cpumask)			\
+({								\
+	int __ret;						\
+								\
+	__ret = test_bit(cpumask_check(cpu),			\
+				cpumask_bits((cpumask)));	\
+	__ret;							\
+})
+
+/**
  * cpumask_test_and_set_cpu - atomically test and set a cpu in a cpumask
  * @cpu: cpu number (< nr_cpu_ids)
  * @cpumask: the cpumask pointer
@@ -775,6 +831,8 @@ extern const DECLARE_BITMAP(cpu_all_bits, NR_CPUS);
 
 #define for_each_possible_cpu(cpu) for_each_cpu((cpu), cpu_possible_mask)
 #define for_each_online_cpu(cpu)   for_each_cpu((cpu), cpu_online_mask)
+#define for_each_online_cpu_nocheck(cpu)				   \
+				for_each_cpu_nocheck((cpu), cpu_online_mask)
 #define for_each_present_cpu(cpu)  for_each_cpu((cpu), cpu_present_mask)
 
 /* Wrappers for arch boot code to manipulate normally-constant masks */
@@ -823,6 +881,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
 }
 
 #define cpu_is_offline(cpu)	unlikely(!cpu_online(cpu))
+#define cpu_is_offline_nocheck(cpu)	unlikely(!cpu_online_nocheck(cpu))
 
 #if NR_CPUS <= BITS_PER_LONG
 #define CPU_BITS_ALL						\


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 08/45] CPU hotplug: Expose the new debug config option
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:53   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:53 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Andrew Morton, Paul E. McKenney, Akinobu Mita,
	Catalin Marinas, Michel Lespinasse, Sergei Shtylyov,
	Srivatsa S. Bhat

Now that we have all the pieces of the CPU hotplug debug infrastructure
in place, expose the feature by growing a new Kconfig option,
CONFIG_DEBUG_HOTPLUG_CPU.
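
To recap what the option checks for, here is a rough sketch
(do_something() is just a placeholder) of an atomic-context access to
cpu_online_mask that the new option would warn about, and the converted
form that the hotplug locking validator accepts:

	/* Flagged with CONFIG_DEBUG_HOTPLUG_CPU=y: no hotplug synchronization */
	preempt_disable();
	for_each_online_cpu(cpu)
		do_something(cpu);
	preempt_enable();

	/* Expected form: */
	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		do_something(cpu);
	put_online_cpus_atomic();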

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Paul E. McKenney" <paul.mckenney@linaro.org>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 lib/Kconfig.debug |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 7154f79..42e17f0 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -512,6 +512,14 @@ config DEBUG_PREEMPT
 	  if kernel code uses it in a preemption-unsafe way. Also, the kernel
 	  will detect preemption count underflows.
 
+config DEBUG_HOTPLUG_CPU
+	bool "Debug CPU hotplug"
+	depends on HOTPLUG_CPU
+	help
+	  If you say Y here, the kernel will check all the accesses of
+	  cpu_online_mask from atomic contexts, and will print warnings if
+	  the task lacks appropriate synchronization with CPU hotplug.
+
 config DEBUG_RT_MUTEXES
 	bool "RT Mutex debugging, deadlock detection"
 	depends on DEBUG_KERNEL && RT_MUTEXES


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 09/45] CPU hotplug: Convert preprocessor macros to static inline functions
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Andrew Morton, Tejun Heo,
	Rafael J. Wysocki, Srivatsa S. Bhat

Convert the macros in the CPU hotplug code to static inline C functions.
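
One concrete difference this makes (illustration only; the names below
are placeholders, not code touched by this patch): a static inline stub
is a real function, so it is type-checked and can even be passed around
as a function pointer, which a function-like macro cannot:

	/* Fine with the static inline stubs: */
	void (*hotplug_sync_begin)(void) = get_online_cpus;
	void (*hotplug_sync_end)(void)   = put_online_cpus;

	hotplug_sync_begin();
	/* ... walk cpu_online_mask or similar ... */
	hotplug_sync_end();

	/*
	 * With "#define get_online_cpus() do { } while (0)", the
	 * assignments above would not compile: a function-like macro
	 * used without parentheses is left untouched by the
	 * preprocessor.
	 */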

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 include/linux/cpu.h |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index a57b25a..85431a0 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -202,10 +202,8 @@ static inline void cpu_hotplug_driver_unlock(void)
 
 #else		/* CONFIG_HOTPLUG_CPU */
 
-#define get_online_cpus()	do { } while (0)
-#define put_online_cpus()	do { } while (0)
-#define cpu_hotplug_disable()	do { } while (0)
-#define cpu_hotplug_enable()	do { } while (0)
+static inline void get_online_cpus(void) {}
+static inline void put_online_cpus(void) {}
 
 static inline unsigned int get_online_cpus_atomic(void)
 {
@@ -222,6 +220,9 @@ static inline void put_online_cpus_atomic(void)
 	preempt_enable();
 }
 
+static inline void cpu_hotplug_disable(void) {}
+static inline void cpu_hotplug_enable(void) {}
+
 #define hotcpu_notifier(fn, pri)	do { (void)(fn); } while (0)
 /* These aren't inline functions due to a GCC bug. */
 #define register_hotcpu_notifier(nb)	({ (void)(nb); 0; })


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Andrew Morton, Wang YanQing, Shaohua Li,
	Jan Beulich, liguang, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we operate on them from atomic context.
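
The conversion pattern is mechanical. As a sketch (do_work_on() is a
placeholder, not a function touched by this patch), a typical caller
changes from:

	int cpu = get_cpu();	/* disables preemption */
	do_work_on(cpu);
	put_cpu();

to:

	int cpu = get_online_cpus_atomic();	/* also returns this CPU's id */
	do_work_on(cpu);
	put_online_cpus_atomic();

As with get_cpu(), the section stays atomic, but it is additionally
synchronized against CPU offline once stop_machine() is no longer used.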

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Wang YanQing <udknight@gmail.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: liguang <lig.fnst@cn.fujitsu.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/smp.c |   52 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 30 insertions(+), 22 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index 4dba0f7..1f36d6d 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -232,7 +232,7 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 	 * prevent preemption and reschedule on another processor,
 	 * as well as CPU removal
 	 */
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	/*
 	 * Can deadlock when called with interrupts disabled.
@@ -264,7 +264,7 @@ int smp_call_function_single(int cpu, smp_call_func_t func, void *info,
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 
 	return err;
 }
@@ -294,7 +294,7 @@ int smp_call_function_any(const struct cpumask *mask,
 	int ret;
 
 	/* Try for same CPU (cheapest) */
-	cpu = get_cpu();
+	cpu = get_online_cpus_atomic();
 	if (cpumask_test_cpu(cpu, mask))
 		goto call;
 
@@ -310,7 +310,7 @@ int smp_call_function_any(const struct cpumask *mask,
 	cpu = cpumask_any_and(mask, cpu_online_mask);
 call:
 	ret = smp_call_function_single(cpu, func, info, wait);
-	put_cpu();
+	put_online_cpus_atomic();
 	return ret;
 }
 EXPORT_SYMBOL_GPL(smp_call_function_any);
@@ -331,7 +331,8 @@ void __smp_call_function_single(int cpu, struct call_single_data *csd,
 	unsigned int this_cpu;
 	unsigned long flags;
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
+
 	/*
 	 * Can deadlock when called with interrupts disabled.
 	 * We allow cpu's that are not yet online though, as no one else can
@@ -349,7 +350,8 @@ void __smp_call_function_single(int cpu, struct call_single_data *csd,
 		csd_lock(csd);
 		generic_exec_single(cpu, csd, wait);
 	}
-	put_cpu();
+
+	put_online_cpus_atomic();
 }
 
 /**
@@ -370,7 +372,9 @@ void smp_call_function_many(const struct cpumask *mask,
 			    smp_call_func_t func, void *info, bool wait)
 {
 	struct call_function_data *cfd;
-	int cpu, next_cpu, this_cpu = smp_processor_id();
+	int cpu, next_cpu, this_cpu;
+
+	this_cpu = get_online_cpus_atomic();
 
 	/*
 	 * Can deadlock when called with interrupts disabled.
@@ -388,7 +392,7 @@ void smp_call_function_many(const struct cpumask *mask,
 
 	/* No online cpus?  We're done. */
 	if (cpu >= nr_cpu_ids)
-		return;
+		goto out;
 
 	/* Do we have another CPU which isn't us? */
 	next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
@@ -398,7 +402,7 @@ void smp_call_function_many(const struct cpumask *mask,
 	/* Fastpath: do that cpu by itself. */
 	if (next_cpu >= nr_cpu_ids) {
 		smp_call_function_single(cpu, func, info, wait);
-		return;
+		goto out;
 	}
 
 	cfd = &__get_cpu_var(cfd_data);
@@ -408,7 +412,7 @@ void smp_call_function_many(const struct cpumask *mask,
 
 	/* Some callers race with other cpus changing the passed mask */
 	if (unlikely(!cpumask_weight(cfd->cpumask)))
-		return;
+		goto out;
 
 	/*
 	 * After we put an entry into the list, cfd->cpumask may be cleared
@@ -443,6 +447,9 @@ void smp_call_function_many(const struct cpumask *mask,
 			csd_lock_wait(csd);
 		}
 	}
+
+out:
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(smp_call_function_many);
 
@@ -463,9 +470,9 @@ EXPORT_SYMBOL(smp_call_function_many);
  */
 int smp_call_function(smp_call_func_t func, void *info, int wait)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	smp_call_function_many(cpu_online_mask, func, info, wait);
-	preempt_enable();
+	put_online_cpus_atomic();
 
 	return 0;
 }
@@ -565,12 +572,12 @@ int on_each_cpu(void (*func) (void *info), void *info, int wait)
 	unsigned long flags;
 	int ret = 0;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	ret = smp_call_function(func, info, wait);
 	local_irq_save(flags);
 	func(info);
 	local_irq_restore(flags);
-	preempt_enable();
+	put_online_cpus_atomic();
 	return ret;
 }
 EXPORT_SYMBOL(on_each_cpu);
@@ -592,7 +599,7 @@ EXPORT_SYMBOL(on_each_cpu);
 void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 			void *info, bool wait)
 {
-	int cpu = get_cpu();
+	unsigned int cpu = get_online_cpus_atomic();
 
 	smp_call_function_many(mask, func, info, wait);
 	if (cpumask_test_cpu(cpu, mask)) {
@@ -600,7 +607,7 @@ void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 		func(info);
 		local_irq_enable();
 	}
-	put_cpu();
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(on_each_cpu_mask);
 
@@ -625,8 +632,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
  * The function might sleep if the GFP flags indicates a non
  * atomic allocation is allowed.
  *
- * Preemption is disabled to protect against CPUs going offline but not online.
- * CPUs going online during the call will not be seen or sent an IPI.
+ * We use get/put_online_cpus_atomic() to protect against CPUs going
+ * offline but not online. CPUs going online during the call will
+ * not be seen or sent an IPI.
  *
  * You must not call this function with disabled interrupts or
  * from a hardware interrupt handler or from a bottom half handler.
@@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
 	might_sleep_if(gfp_flags & __GFP_WAIT);
 
 	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
-		preempt_disable();
+		get_online_cpus_atomic();
 		for_each_online_cpu(cpu)
 			if (cond_func(cpu, info))
 				cpumask_set_cpu(cpu, cpus);
 		on_each_cpu_mask(cpus, func, info, wait);
-		preempt_enable();
+		put_online_cpus_atomic();
 		free_cpumask_var(cpus);
 	} else {
 		/*
 		 * No free cpumask, bother. No matter, we'll
 		 * just have to IPI them one by one.
 		 */
-		preempt_disable();
+		get_online_cpus_atomic();
 		for_each_online_cpu(cpu)
 			if (cond_func(cpu, info)) {
 				ret = smp_call_function_single(cpu, func,
 								info, wait);
 				WARN_ON_ONCE(!ret);
 			}
-		preempt_enable();
+		put_online_cpus_atomic();
 	}
 }
 EXPORT_SYMBOL(on_each_cpu_cond);


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 11/45] sched/core: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these paths execute in atomic context.
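
To illustrate the conversion pattern, here is a minimal sketch built
around a hypothetical helper (poke_cpu_*() is made up for illustration;
cpu_online(), smp_send_reschedule() and the get/put_online_cpus_atomic()
pair introduced earlier in this series are the only real APIs used):

	/* Old scheme: preemption alone kept 'cpu' from going offline */
	static void poke_cpu_old(int cpu)
	{
		preempt_disable();
		if (cpu_online(cpu))
			smp_send_reschedule(cpu);
		preempt_enable();
	}

	/* New scheme: take the atomic hotplug reader-side lock instead */
	static void poke_cpu_new(int cpu)
	{
		get_online_cpus_atomic();
		if (cpu_online(cpu))
			smp_send_reschedule(cpu);
		put_online_cpus_atomic();
	}

The hunks below apply this pattern to kick_process() and to the wakeup
and migration paths in sched/core that pick CPUs or send IPIs from
atomic context.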

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/sched/core.c |   23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9b1f2e5..9d870bf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1160,11 +1160,11 @@ void kick_process(struct task_struct *p)
 {
 	int cpu;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu = task_cpu(p);
 	if ((cpu != smp_processor_id()) && task_curr(p))
 		smp_send_reschedule(cpu);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL_GPL(kick_process);
 #endif /* CONFIG_SMP */
@@ -1172,6 +1172,9 @@ EXPORT_SYMBOL_GPL(kick_process);
 #ifdef CONFIG_SMP
 /*
  * ->cpus_allowed is protected by both rq->lock and p->pi_lock
+ *
+ *  Must be called within get/put_online_cpus_atomic(), to prevent
+ *  CPUs from going offline from under us.
  */
 static int select_fallback_rq(int cpu, struct task_struct *p)
 {
@@ -1245,6 +1248,9 @@ out:
 
 /*
  * The caller (fork, wakeup) owns p->pi_lock, ->cpus_allowed is stable.
+ *
+ * Must be called within get/put_online_cpus_atomic(), to prevent
+ * CPUs from going offline from under us.
  */
 static inline
 int select_task_rq(struct task_struct *p, int sd_flags, int wake_flags)
@@ -1489,6 +1495,8 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	unsigned long flags;
 	int cpu, success = 0;
 
+	get_online_cpus_atomic();
+
 	smp_wmb();
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
 	if (!(p->state & state))
@@ -1531,6 +1539,7 @@ stat:
 out:
 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 
+	put_online_cpus_atomic();
 	return success;
 }
 
@@ -1744,6 +1753,8 @@ void wake_up_new_task(struct task_struct *p)
 	unsigned long flags;
 	struct rq *rq;
 
+	get_online_cpus_atomic();
+
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
 #ifdef CONFIG_SMP
 	/*
@@ -1766,6 +1777,8 @@ void wake_up_new_task(struct task_struct *p)
 		p->sched_class->task_woken(rq, p);
 #endif
 	task_rq_unlock(rq, p, &flags);
+
+	put_online_cpus_atomic();
 }
 
 #ifdef CONFIG_PREEMPT_NOTIFIERS
@@ -3879,6 +3892,8 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
 	unsigned long flags;
 	int yielded = 0;
 
+	get_online_cpus_atomic();
+
 	local_irq_save(flags);
 	rq = this_rq();
 
@@ -3924,6 +3939,8 @@ out_unlock:
 out_irq:
 	local_irq_restore(flags);
 
+	put_online_cpus_atomic();
+
 	if (yielded > 0)
 		schedule();
 
@@ -4324,9 +4341,11 @@ static int migration_cpu_stop(void *data)
 	 * The original target cpu might have gone down and we might
 	 * be on another cpu but it doesn't matter.
 	 */
+	get_online_cpus_atomic();
 	local_irq_disable();
 	__migrate_task(arg->task, raw_smp_processor_id(), arg->dest_cpu);
 	local_irq_enable();
+	put_online_cpus_atomic();
 	return 0;
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 12/45] migration: Use raw_spin_lock/unlock since interrupts are already disabled
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srivatsa S. Bhat

We need not use the raw_spin_lock_irqsave/restore primitives because
all CPU_DYING notifiers run with interrupts disabled. So just use
raw_spin_lock/unlock.
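
A minimal sketch of why this is safe (demo_cpu_callback() and demo_lock
are hypothetical, shown only for illustration): CPU_DYING notifiers run
on the dying CPU with interrupts already disabled, so a plain
raw_spin_lock() suffices and saving/restoring flags is redundant:

	static DEFINE_RAW_SPINLOCK(demo_lock);

	static int demo_cpu_callback(struct notifier_block *nfb,
				     unsigned long action, void *hcpu)
	{
		switch (action & ~CPU_TASKS_FROZEN) {
		case CPU_DYING:
			/* IRQs are off here, no need for _irqsave/_irqrestore */
			raw_spin_lock(&demo_lock);
			/* ... tear down per-CPU state for the dying CPU ... */
			raw_spin_unlock(&demo_lock);
			break;
		}
		return NOTIFY_OK;
	}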

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/sched/core.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9d870bf..ad8a554 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4675,14 +4675,14 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	case CPU_DYING:
 		sched_ttwu_pending();
 		/* Update our root-domain */
-		raw_spin_lock_irqsave(&rq->lock, flags);
+		raw_spin_lock(&rq->lock); /* IRQs already disabled */
 		if (rq->rd) {
 			BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span));
 			set_rq_offline(rq);
 		}
 		migrate_tasks(cpu);
 		BUG_ON(rq->nr_running != 1); /* the migration thread */
-		raw_spin_unlock_irqrestore(&rq->lock, flags);
+		raw_spin_unlock(&rq->lock);
 		break;
 
 	case CPU_DEAD:


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 13/45] sched/fair: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these paths execute in atomic context.
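
The locking order used in the load-balancer paths below is to take the
hotplug reader-side lock outside the RCU read-side section that walks
the sched domains. Condensed from the hunks below (the loop body is
elided; 'cpu' and 'sd' are as in the real code):

	get_online_cpus_atomic();
	rcu_read_lock();
	for_each_domain(cpu, sd) {
		/* balancing work; may send IPIs to other, still-online CPUs */
	}
	rcu_read_unlock();
	put_online_cpus_atomic();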

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/sched/fair.c |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f77f9c5..62d98dd 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3369,7 +3369,8 @@ done:
  *
  * Returns the target CPU number, or the same CPU if no balancing is needed.
  *
- * preempt must be disabled.
+ * Must be called within get/put_online_cpus_atomic(), to prevent CPUs
+ * from going offline from under us.
  */
 static int
 select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
@@ -5289,6 +5290,8 @@ void idle_balance(int this_cpu, struct rq *this_rq)
 	raw_spin_unlock(&this_rq->lock);
 
 	update_blocked_averages(this_cpu);
+
+	get_online_cpus_atomic();
 	rcu_read_lock();
 	for_each_domain(this_cpu, sd) {
 		unsigned long interval;
@@ -5312,6 +5315,7 @@ void idle_balance(int this_cpu, struct rq *this_rq)
 		}
 	}
 	rcu_read_unlock();
+	put_online_cpus_atomic();
 
 	raw_spin_lock(&this_rq->lock);
 
@@ -5338,6 +5342,7 @@ static int active_load_balance_cpu_stop(void *data)
 	struct rq *target_rq = cpu_rq(target_cpu);
 	struct sched_domain *sd;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irq(&busiest_rq->lock);
 
 	/* make sure the requested cpu hasn't gone down in the meantime */
@@ -5389,6 +5394,7 @@ static int active_load_balance_cpu_stop(void *data)
 out_unlock:
 	busiest_rq->active_balance = 0;
 	raw_spin_unlock_irq(&busiest_rq->lock);
+	put_online_cpus_atomic();
 	return 0;
 }
 
@@ -5549,6 +5555,7 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 
 	update_blocked_averages(cpu);
 
+	get_online_cpus_atomic();
 	rcu_read_lock();
 	for_each_domain(cpu, sd) {
 		if (!(sd->flags & SD_LOAD_BALANCE))
@@ -5597,6 +5604,7 @@ out:
 			break;
 	}
 	rcu_read_unlock();
+	put_online_cpus_atomic();
 
 	/*
 	 * next_balance will be updated only when there is a need.
@@ -5728,6 +5736,7 @@ static void run_rebalance_domains(struct softirq_action *h)
 	enum cpu_idle_type idle = this_rq->idle_balance ?
 						CPU_IDLE : CPU_NOT_IDLE;
 
+	get_online_cpus_atomic();
 	rebalance_domains(this_cpu, idle);
 
 	/*
@@ -5736,6 +5745,7 @@ static void run_rebalance_domains(struct softirq_action *h)
 	 * stopped.
 	 */
 	nohz_idle_balance(this_cpu, idle);
+	put_online_cpus_atomic();
 }
 
 static inline int on_null_domain(int cpu)
@@ -5753,8 +5763,10 @@ void trigger_load_balance(struct rq *rq, int cpu)
 	    likely(!on_null_domain(cpu)))
 		raise_softirq(SCHED_SOFTIRQ);
 #ifdef CONFIG_NO_HZ_COMMON
+	get_online_cpus_atomic();
 	if (nohz_kick_needed(rq, cpu) && likely(!on_null_domain(cpu)))
 		nohz_balancer_kick(cpu);
+	put_online_cpus_atomic();
 #endif
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 14/45] timer: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while invoking them from atomic context.
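
For illustration, the general shape of these conversions is roughly the
following (a minimal sketch, not part of the patch below; it assumes only
the get/put_online_cpus_atomic() APIs introduced earlier in this series):

#include <linux/cpu.h>

/* Hypothetical call-site that must see a stable set of online CPUs */
static void example_atomic_hotplug_reader(void)
{
        get_online_cpus_atomic();       /* CPUs cannot go offline in this section */

        /*
         * ... code that previously relied on preempt_disable() to keep
         * CPUs from going offline, e.g. locking a per-CPU timer base and
         * queueing a timer on it ...
         */

        put_online_cpus_atomic();
}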

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/timer.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/timer.c b/kernel/timer.c
index 15ffdb3..5db594c 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -729,6 +729,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 	timer_stats_timer_set_start_info(timer);
 	BUG_ON(!timer->function);
 
+	get_online_cpus_atomic();
 	base = lock_timer_base(timer, &flags);
 
 	ret = detach_if_pending(timer, base, false);
@@ -768,6 +769,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 
 out_unlock:
 	spin_unlock_irqrestore(&base->lock, flags);
+	put_online_cpus_atomic();
 
 	return ret;
 }
@@ -926,6 +928,7 @@ void add_timer_on(struct timer_list *timer, int cpu)
 
 	timer_stats_timer_set_start_info(timer);
 	BUG_ON(timer_pending(timer) || !timer->function);
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&base->lock, flags);
 	timer_set_base(timer, base);
 	debug_activate(timer, timer->expires);
@@ -940,6 +943,7 @@ void add_timer_on(struct timer_list *timer, int cpu)
 	 */
 	wake_up_nohz_cpu(cpu);
 	spin_unlock_irqrestore(&base->lock, flags);
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL_GPL(add_timer_on);
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 15/45] sched/rt: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:54   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:54 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while invoking them from atomic context.
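
As an illustration of the pattern used in this patch, walking a cpumask
from atomic context looks roughly like this (a sketch with a hypothetical
helper, not code taken verbatim from the patch below):

#include <linux/cpu.h>
#include <linux/cpumask.h>

static void example_visit_cpus(const struct cpumask *mask)
{
        int cpu;

        get_online_cpus_atomic();       /* the CPUs we visit cannot vanish below */

        for_each_cpu(cpu, mask) {
                if (!cpu_online(cpu))
                        continue;
                /* ... act on 'cpu', e.g. examine its runqueue ... */
        }

        put_online_cpus_atomic();
}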

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/sched/rt.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 01970c8..03d9f38 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -6,6 +6,7 @@
 #include "sched.h"
 
 #include <linux/slab.h>
+#include <linux/cpu.h>
 
 int sched_rr_timeslice = RR_TIMESLICE;
 
@@ -28,7 +29,9 @@ static enum hrtimer_restart sched_rt_period_timer(struct hrtimer *timer)
 		if (!overrun)
 			break;
 
+		get_online_cpus_atomic();
 		idle = do_sched_rt_period_timer(rt_b, overrun);
+		put_online_cpus_atomic();
 	}
 
 	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
@@ -547,6 +550,7 @@ static int do_balance_runtime(struct rt_rq *rt_rq)
 	int i, weight, more = 0;
 	u64 rt_period;
 
+	get_online_cpus_atomic();
 	weight = cpumask_weight(rd->span);
 
 	raw_spin_lock(&rt_b->rt_runtime_lock);
@@ -588,6 +592,7 @@ next:
 		raw_spin_unlock(&iter->rt_runtime_lock);
 	}
 	raw_spin_unlock(&rt_b->rt_runtime_lock);
+	put_online_cpus_atomic();
 
 	return more;
 }
@@ -1168,6 +1173,10 @@ static void yield_task_rt(struct rq *rq)
 #ifdef CONFIG_SMP
 static int find_lowest_rq(struct task_struct *task);
 
+/*
+ * Must be called within get/put_online_cpus_atomic(), to prevent CPUs
+ * from going offline from under us.
+ */
 static int
 select_task_rq_rt(struct task_struct *p, int sd_flag, int flags)
 {
@@ -1561,6 +1570,8 @@ retry:
 		return 0;
 	}
 
+	get_online_cpus_atomic();
+
 	/* We might release rq lock */
 	get_task_struct(next_task);
 
@@ -1611,6 +1622,7 @@ retry:
 out:
 	put_task_struct(next_task);
 
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -1630,6 +1642,7 @@ static int pull_rt_task(struct rq *this_rq)
 	if (likely(!rt_overloaded(this_rq)))
 		return 0;
 
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, this_rq->rd->rto_mask) {
 		if (this_cpu == cpu)
 			continue;
@@ -1695,6 +1708,7 @@ skip:
 		double_unlock_balance(this_rq, src_rq);
 	}
 
+	put_online_cpus_atomic();
 	return ret;
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 16/45] rcu: Use cpu_is_offline_nocheck() to avoid false-positive warnings
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:55   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:55 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Dipankar Sarma, Paul E. McKenney, Srivatsa S. Bhat

In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
while being protected by a spinlock. At first, it appears as if we need to
use the get/put_online_cpus_atomic() APIs to properly synchronize with CPU
hotplug, once we get rid of stop_machine(). However, RCU has adequate
synchronization with CPU hotplug, making that unnecessary. But since the
locking details are non-trivial, it is hard to teach this to the rudimentary
hotplug locking validator.

So use the _nocheck() variants of the cpu accessor functions to prevent false-
positive warnings from the CPU hotplug debug code. Also, add a comment
explaining the hotplug synchronization design used in RCU, so that it's easy
to see why using the _nocheck() variants is justified.
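
Roughly, the only difference between the two accessors is the debug check
(a sketch based on the behaviour described in this series, not code from
the patch below):

static bool example_cpu_gone(int cpu)
{
        /*
         * cpu_is_offline(cpu): with CONFIG_DEBUG_HOTPLUG_CPU enabled, this
         * can warn if the caller is in atomic context without being inside
         * a get/put_online_cpus_atomic() section.
         *
         * cpu_is_offline_nocheck(cpu): returns the same answer, but skips
         * that debug check, for call-sites (like this one in RCU) that
         * synchronize with hotplug by other means.
         */
        return cpu_is_offline_nocheck(cpu);
}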

Cc: Dipankar Sarma <dipankar@in.ibm.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/rcutree.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index cf3adc6..ced28a45 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -794,7 +794,17 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
 	if (ULONG_CMP_GE(rdp->rsp->gp_start + 2, jiffies))
 		return 0;  /* Grace period is not old enough. */
 	barrier();
-	if (cpu_is_offline(rdp->cpu)) {
+
+	/*
+	 * It is safe to use the _nocheck() version of cpu_is_offline() here
+	 * (to avoid false-positive warnings from CPU hotplug debug code),
+	 * because:
+	 * 1. rcu_gp_init() holds off CPU hotplug operations during grace
+	 *    period initialization.
+	 * 2. The current grace period has not ended yet.
+	 * So it is safe to sample the offline state without synchronization.
+	 */
+	if (cpu_is_offline_nocheck(rdp->cpu)) {
 		trace_rcu_fqs(rdp->rsp->name, rdp->gpnum, rdp->cpu, "ofl");
 		rdp->offline_fqs++;
 		return 1;


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 17/45] tick-broadcast: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:55   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:55 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while invoking them from atomic context.
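
The pattern applied below essentially takes a stable snapshot of the
online mask (a minimal sketch with a hypothetical helper, assuming only
the APIs introduced earlier in this series):

#include <linux/cpu.h>
#include <linux/cpumask.h>

static void example_snapshot_online(struct cpumask *dst)
{
        get_online_cpus_atomic();       /* cpu_online_mask cannot lose CPUs here */
        cpumask_copy(dst, cpu_online_mask);
        put_online_cpus_atomic();
}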

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/time/tick-broadcast.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 9d96a54..1e4ac45 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -227,12 +227,14 @@ static void tick_do_broadcast(struct cpumask *mask)
  */
 static void tick_do_periodic_broadcast(void)
 {
+	get_online_cpus_atomic();
 	raw_spin_lock(&tick_broadcast_lock);
 
 	cpumask_and(tmpmask, cpu_online_mask, tick_broadcast_mask);
 	tick_do_broadcast(tmpmask);
 
 	raw_spin_unlock(&tick_broadcast_lock);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -335,11 +337,13 @@ out:
  */
 void tick_broadcast_on_off(unsigned long reason, int *oncpu)
 {
+	get_online_cpus_atomic();
 	if (!cpumask_test_cpu(*oncpu, cpu_online_mask))
 		printk(KERN_ERR "tick-broadcast: ignoring broadcast for "
 		       "offline CPU #%d\n", *oncpu);
 	else
 		tick_do_broadcast_on_off(&reason);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -505,6 +509,7 @@ static void tick_handle_oneshot_broadcast(struct clock_event_device *dev)
 	ktime_t now, next_event;
 	int cpu, next_cpu = 0;
 
+	get_online_cpus_atomic();
 	raw_spin_lock(&tick_broadcast_lock);
 again:
 	dev->next_event.tv64 = KTIME_MAX;
@@ -562,6 +567,7 @@ again:
 			goto again;
 	}
 	raw_spin_unlock(&tick_broadcast_lock);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -756,6 +762,7 @@ void tick_broadcast_switch_to_oneshot(void)
 	struct clock_event_device *bc;
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
 
 	tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT;
@@ -764,6 +771,7 @@ void tick_broadcast_switch_to_oneshot(void)
 		tick_broadcast_setup_oneshot(bc);
 
 	raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
+	put_online_cpus_atomic();
 }
 
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 18/45] time/clocksource: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:55   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:55 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, John Stultz, Thomas Gleixner, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while invoking them from atomic context.
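
For illustration, the protected region below is essentially "pick the next
online CPU and arm a timer on it" (a sketch with a hypothetical helper,
not code from the patch itself):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/timer.h>

static void example_kick_timer(struct timer_list *t, int this_cpu)
{
        int next_cpu;

        get_online_cpus_atomic();
        next_cpu = cpumask_next(this_cpu, cpu_online_mask);
        if (next_cpu >= nr_cpu_ids)
                next_cpu = cpumask_first(cpu_online_mask);
        add_timer_on(t, next_cpu);      /* the chosen CPU stays online until here */
        put_online_cpus_atomic();
}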

Cc: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/time/clocksource.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index e713ef7..c4bbc25 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -30,6 +30,7 @@
 #include <linux/sched.h> /* for spin_unlock_irq() using preempt_count() m68k */
 #include <linux/tick.h>
 #include <linux/kthread.h>
+#include <linux/cpu.h>
 
 #include "tick-internal.h"
 
@@ -252,6 +253,7 @@ static void clocksource_watchdog(unsigned long data)
 	int64_t wd_nsec, cs_nsec;
 	int next_cpu, reset_pending;
 
+	get_online_cpus_atomic();
 	spin_lock(&watchdog_lock);
 	if (!watchdog_running)
 		goto out;
@@ -329,6 +331,7 @@ static void clocksource_watchdog(unsigned long data)
 	add_timer_on(&watchdog_timer, next_cpu);
 out:
 	spin_unlock(&watchdog_lock);
+	put_online_cpus_atomic();
 }
 
 static inline void clocksource_start_watchdog(void)
@@ -367,6 +370,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
 {
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&watchdog_lock, flags);
 	if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
 		/* cs is a clocksource to be watched. */
@@ -386,6 +390,7 @@ static void clocksource_enqueue_watchdog(struct clocksource *cs)
 	/* Check if the watchdog timer needs to be started. */
 	clocksource_start_watchdog();
 	spin_unlock_irqrestore(&watchdog_lock, flags);
+	put_online_cpus_atomic();
 }
 
 static void clocksource_dequeue_watchdog(struct clocksource *cs)


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 19/45] softirq: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Frederic Weisbecker, Thomas Gleixner,
	Andrew Morton, Sedat Dilek, Paul E. McKenney, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while invoking them from atomic context.
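
The point of the conversion below is that the cpu_online() check and the
IPI must sit in the same hotplug read-side section (a sketch of the same
shape, with a hypothetical wrapper name):

#include <linux/cpu.h>
#include <linux/smp.h>

static int example_ipi_if_online(int cpu, struct call_single_data *csd)
{
        int ret = 1;

        get_online_cpus_atomic();
        if (cpu_online(cpu)) {
                __smp_call_function_single(cpu, csd, 0);        /* 'cpu' cannot go away here */
                ret = 0;
        }
        put_online_cpus_atomic();

        return ret;
}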

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/softirq.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 3d6833f..c289722 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -644,14 +644,17 @@ static void remote_softirq_receive(void *data)
 
 static int __try_remote_softirq(struct call_single_data *cp, int cpu, int softirq)
 {
+	get_online_cpus_atomic();
 	if (cpu_online(cpu)) {
 		cp->func = remote_softirq_receive;
 		cp->info = &softirq;
 		cp->flags = 0;
 
 		__smp_call_function_single(cpu, cp, 0);
+		put_online_cpus_atomic();
 		return 0;
 	}
+	put_online_cpus_atomic();
 	return 1;
 }
 #else /* CONFIG_USE_GENERIC_SMP_HELPERS */


^ permalink raw reply related	[flat|nested] 187+ messages in thread


* [PATCH v3 19/45] softirq: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, zhong, linux-pm, Frederic Weisbecker,
	linux-kernel, rostedt, xiaoguangrong, sbw, wangyun,
	Srivatsa S. Bhat, netdev, Sedat Dilek, Thomas Gleixner,
	Paul E. McKenney, linuxppc-dev, Andrew Morton

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/softirq.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 3d6833f..c289722 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -644,14 +644,17 @@ static void remote_softirq_receive(void *data)
 
 static int __try_remote_softirq(struct call_single_data *cp, int cpu, int softirq)
 {
+	get_online_cpus_atomic();
 	if (cpu_online(cpu)) {
 		cp->func = remote_softirq_receive;
 		cp->info = &softirq;
 		cp->flags = 0;
 
 		__smp_call_function_single(cpu, cp, 0);
+		put_online_cpus_atomic();
 		return 0;
 	}
+	put_online_cpus_atomic();
 	return 1;
 }
 #else /* CONFIG_USE_GENERIC_SMP_HELPERS */

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 20/45] irq: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 kernel/irq/manage.c |    7 +++++++
 kernel/irq/proc.c   |    3 +++
 2 files changed, 10 insertions(+)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index e16caa8..4d89f19 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -18,6 +18,7 @@
 #include <linux/sched.h>
 #include <linux/sched/rt.h>
 #include <linux/task_work.h>
+#include <linux/cpu.h>
 
 #include "internals.h"
 
@@ -202,9 +203,11 @@ int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
 	if (!desc)
 		return -EINVAL;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&desc->lock, flags);
 	ret =  __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -343,9 +346,11 @@ int irq_select_affinity_usr(unsigned int irq, struct cpumask *mask)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&desc->lock, flags);
 	ret = setup_affinity(irq, desc, mask);
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -1128,7 +1133,9 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
 		}
 
 		/* Set default affinity mask once everything is setup */
+		get_online_cpus_atomic();
 		setup_affinity(irq, desc, mask);
+		put_online_cpus_atomic();
 
 	} else if (new->flags & IRQF_TRIGGER_MASK) {
 		unsigned int nmsk = new->flags & IRQF_TRIGGER_MASK;
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index 19ed5c4..47f9a74 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -7,6 +7,7 @@
  */
 
 #include <linux/irq.h>
+#include <linux/cpu.h>
 #include <linux/gfp.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
@@ -441,6 +442,7 @@ int show_interrupts(struct seq_file *p, void *v)
 	if (!desc)
 		return 0;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&desc->lock, flags);
 	for_each_online_cpu(j)
 		any_count |= kstat_irqs_cpu(i, j);
@@ -477,6 +479,7 @@ int show_interrupts(struct seq_file *p, void *v)
 	seq_putc(p, '\n');
 out:
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+	put_online_cpus_atomic();
 	return 0;
 }
 #endif


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 21/45] net: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David S. Miller, Eric Dumazet, Alexander Duyck,
	Cong Wang, Ben Hutchings, netdev, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 net/core/dev.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index faebb39..1530633 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3175,7 +3175,7 @@ int netif_rx(struct sk_buff *skb)
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu;
 
-		preempt_disable();
+		get_online_cpus_atomic();
 		rcu_read_lock();
 
 		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@@ -3185,7 +3185,7 @@ int netif_rx(struct sk_buff *skb)
 		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 
 		rcu_read_unlock();
-		preempt_enable();
+		put_online_cpus_atomic();
 	} else
 #endif
 	{
@@ -3604,6 +3604,7 @@ int netif_receive_skb(struct sk_buff *skb)
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu, ret;
 
+		get_online_cpus_atomic();
 		rcu_read_lock();
 
 		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@@ -3611,9 +3612,11 @@ int netif_receive_skb(struct sk_buff *skb)
 		if (cpu >= 0) {
 			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 			rcu_read_unlock();
+			put_online_cpus_atomic();
 			return ret;
 		}
 		rcu_read_unlock();
+		put_online_cpus_atomic();
 	}
 #endif
 	return __netif_receive_skb(skb);
@@ -3991,6 +3994,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 		local_irq_enable();
 
 		/* Send pending IPI's to kick RPS processing on remote cpus. */
+		get_online_cpus_atomic();
 		while (remsd) {
 			struct softnet_data *next = remsd->rps_ipi_next;
 
@@ -3999,6 +4003,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
 							   &remsd->csd, 0);
 			remsd = next;
 		}
+		put_online_cpus_atomic();
 	} else
 #endif
 		local_irq_enable();


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 22/45] block: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Jens Axboe, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 block/blk-softirq.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 467c8de..bbab3d3 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -58,6 +58,7 @@ static void trigger_softirq(void *data)
  */
 static int raise_blk_irq(int cpu, struct request *rq)
 {
+	get_online_cpus_atomic();
 	if (cpu_online(cpu)) {
 		struct call_single_data *data = &rq->csd;
 
@@ -66,8 +67,10 @@ static int raise_blk_irq(int cpu, struct request *rq)
 		data->flags = 0;
 
 		__smp_call_function_single(cpu, data, 0);
+		put_online_cpus_atomic();
 		return 0;
 	}
+	put_online_cpus_atomic();
 
 	return 1;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 23/45] percpu_counter: Use _nocheck version of for_each_online_cpu()
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Al Viro, Tejun Heo, Srivatsa S. Bhat

The percpu-counter-sum code does a for_each_online_cpu() protected
by a spinlock, which makes it look like it needs to use
get/put_online_cpus_atomic(), going forward. However, the code has
adequate synchronization with CPU hotplug, via a hotplug callback
and the fbc->lock.

So use for_each_online_cpu_nocheck() to avoid false-positive warnings
from the hotplug locking validator, and add a comment justifying this.
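
For reference, the kind of hotplug callback this refers to folds the dead
CPU's count back into fbc->count under the same fbc->lock that
__percpu_counter_sum() takes, which is what makes the sum safe without
get/put_online_cpus_atomic(). A paraphrased sketch, modeled loosely on
lib/percpu_counter.c of this era (names and details approximate, not part
of this patch):

	static int percpu_counter_hotcpu_callback(struct notifier_block *nb,
						  unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;
		struct percpu_counter *fbc;

		if (action != CPU_DEAD)
			return NOTIFY_OK;

		mutex_lock(&percpu_counters_lock);
		list_for_each_entry(fbc, &percpu_counters, list) {
			unsigned long flags;
			s32 *pcount;

			/* Same lock that __percpu_counter_sum() takes. */
			raw_spin_lock_irqsave(&fbc->lock, flags);
			pcount = per_cpu_ptr(fbc->counters, cpu);
			fbc->count += *pcount;
			*pcount = 0;
			raw_spin_unlock_irqrestore(&fbc->lock, flags);
		}
		mutex_unlock(&percpu_counters_lock);
		return NOTIFY_OK;
	}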

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 lib/percpu_counter.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index ba6085d..2d80e8a 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -98,9 +98,16 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc)
 	s64 ret;
 	int cpu;
 
+	/*
+	 * CPU hotplug synchronization is explicitly handled via the
+	 * hotplug callback, which synchronizes through fbc->lock.
+	 * So it is safe to use the _nocheck() version of
+	 * for_each_online_cpu() here (to avoid false-positive warnings
+	 * from the CPU hotplug debug code).
+	 */
 	raw_spin_lock(&fbc->lock);
 	ret = fbc->count;
-	for_each_online_cpu(cpu) {
+	for_each_online_cpu_nocheck(cpu) {
 		s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
 		ret += *pcount;
 	}


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 24/45] infiniband: ehca: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Hoang-Nam Nguyen, Christoph Raisch, Roland Dreier,
	Sean Hefty, Hal Rosenstock, linux-rdma

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Cc: Hoang-Nam Nguyen <hnguyen@de.ibm.com>
Cc: Christoph Raisch <raisch@de.ibm.com>
Cc: Roland Dreier <roland@kernel.org>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/infiniband/hw/ehca/ehca_irq.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/infiniband/hw/ehca/ehca_irq.c b/drivers/infiniband/hw/ehca/ehca_irq.c
index 8615d7c..ace901e 100644
--- a/drivers/infiniband/hw/ehca/ehca_irq.c
+++ b/drivers/infiniband/hw/ehca/ehca_irq.c
@@ -43,6 +43,7 @@
 
 #include <linux/slab.h>
 #include <linux/smpboot.h>
+#include <linux/cpu.h>
 
 #include "ehca_classes.h"
 #include "ehca_irq.h"
@@ -703,6 +704,7 @@ static void queue_comp_task(struct ehca_cq *__cq)
 	int cq_jobs;
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	cpu_id = find_next_online_cpu(pool);
 	BUG_ON(!cpu_online(cpu_id));
 
@@ -720,6 +722,7 @@ static void queue_comp_task(struct ehca_cq *__cq)
 		BUG_ON(!cct || !thread);
 	}
 	__queue_comp_task(__cq, cct, thread);
+	put_online_cpus_atomic();
 }
 
 static void run_comp_task(struct ehca_cpu_comp_task *cct)
@@ -759,6 +762,7 @@ static void comp_task_park(unsigned int cpu)
 	list_splice_init(&cct->cq_list, &list);
 	spin_unlock_irq(&cct->task_lock);
 
+	get_online_cpus_atomic();
 	cpu = find_next_online_cpu(pool);
 	target = per_cpu_ptr(pool->cpu_comp_tasks, cpu);
 	thread = *per_cpu_ptr(pool->cpu_comp_threads, cpu);
@@ -768,6 +772,7 @@ static void comp_task_park(unsigned int cpu)
 		__queue_comp_task(cq, target, thread);
 	}
 	spin_unlock_irq(&target->task_lock);
+	put_online_cpus_atomic();
 }
 
 static void comp_task_stop(unsigned int cpu, bool online)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 25/45] [SCSI] fcoe: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:56   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:56 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Robert Love, James E.J. Bottomley, devel,
	linux-scsi, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Cc: Robert Love <robert.w.love@intel.com>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: devel@open-fcoe.org
Cc: linux-scsi@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/scsi/fcoe/fcoe.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 32ae6c6..eaa390e 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1484,6 +1484,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	 * was originated, otherwise select cpu using rx exchange id
 	 * or fcoe_select_cpu().
 	 */
+	get_online_cpus_atomic();
 	if (ntoh24(fh->fh_f_ctl) & FC_FC_EX_CTX)
 		cpu = ntohs(fh->fh_ox_id) & fc_cpu_mask;
 	else {
@@ -1493,8 +1494,10 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 			cpu = ntohs(fh->fh_rx_id) & fc_cpu_mask;
 	}
 
-	if (cpu >= nr_cpu_ids)
+	if (cpu >= nr_cpu_ids) {
+		put_online_cpus_atomic();
 		goto err;
+	}
 
 	fps = &per_cpu(fcoe_percpu, cpu);
 	spin_lock(&fps->fcoe_rx_list.lock);
@@ -1514,6 +1517,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 		spin_lock(&fps->fcoe_rx_list.lock);
 		if (!fps->thread) {
 			spin_unlock(&fps->fcoe_rx_list.lock);
+			put_online_cpus_atomic();
 			goto err;
 		}
 	}
@@ -1535,6 +1539,7 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
 	if (fps->thread->state == TASK_INTERRUPTIBLE)
 		wake_up_process(fps->thread);
 	spin_unlock(&fps->fcoe_rx_list.lock);
+	put_online_cpus_atomic();
 
 	return 0;
 err:


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 26/45] staging/octeon: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David Daney, Greg Kroah-Hartman, devel,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we are operating in atomic context.

Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: devel@driverdev.osuosl.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/staging/octeon/ethernet-rx.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/staging/octeon/ethernet-rx.c b/drivers/staging/octeon/ethernet-rx.c
index 34afc16..8588b4d 100644
--- a/drivers/staging/octeon/ethernet-rx.c
+++ b/drivers/staging/octeon/ethernet-rx.c
@@ -36,6 +36,7 @@
 #include <linux/prefetch.h>
 #include <linux/ratelimit.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/interrupt.h>
 #include <net/dst.h>
 #ifdef CONFIG_XFRM
@@ -97,6 +98,7 @@ static void cvm_oct_enable_one_cpu(void)
 		return;
 
 	/* ... if a CPU is available, Turn on NAPI polling for that CPU.  */
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (!cpu_test_and_set(cpu, core_state.cpu_state)) {
 			v = smp_call_function_single(cpu, cvm_oct_enable_napi,
@@ -106,6 +108,7 @@ static void cvm_oct_enable_one_cpu(void)
 			break;
 		}
 	}
+	put_online_cpus_atomic();
 }
 
 static void cvm_oct_no_more_work(void)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 27/45] x86: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86,
	Tony Luck, Borislav Petkov, Konrad Rzeszutek Wilk,
	Sebastian Andrzej Siewior, Joerg Roedel, Jan Beulich,
	Joonsoo Kim, linux-edac, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while this code runs in atomic context.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Tony Luck <tony.luck@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: linux-edac@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/x86/kernel/apic/io_apic.c           |   21 ++++++++++++++++++---
 arch/x86/kernel/cpu/mcheck/therm_throt.c |    4 ++--
 arch/x86/mm/tlb.c                        |   14 +++++++-------
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 9ed796c..4c71c1e 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -25,6 +25,7 @@
 #include <linux/init.h>
 #include <linux/delay.h>
 #include <linux/sched.h>
+#include <linux/cpu.h>
 #include <linux/pci.h>
 #include <linux/mc146818rtc.h>
 #include <linux/compiler.h>
@@ -1169,9 +1170,11 @@ int assign_irq_vector(int irq, struct irq_cfg *cfg, const struct cpumask *mask)
 	int err;
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	err = __assign_irq_vector(irq, cfg, mask);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return err;
 }
 
@@ -1757,13 +1760,13 @@ __apicdebuginit(void) print_local_APICs(int maxcpu)
 	if (!maxcpu)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cpu >= maxcpu)
 			break;
 		smp_call_function_single(cpu, print_local_APIC, NULL, 1);
 	}
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 __apicdebuginit(void) print_PIC(void)
@@ -2153,10 +2156,12 @@ static int ioapic_retrigger_irq(struct irq_data *data)
 	unsigned long flags;
 	int cpu;
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	cpu = cpumask_first_and(cfg->domain, cpu_online_mask);
 	apic->send_IPI_mask(cpumask_of(cpu), cfg->vector);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 
 	return 1;
 }
@@ -2175,6 +2180,7 @@ void send_cleanup_vector(struct irq_cfg *cfg)
 {
 	cpumask_var_t cleanup_mask;
 
+	get_online_cpus_atomic();
 	if (unlikely(!alloc_cpumask_var(&cleanup_mask, GFP_ATOMIC))) {
 		unsigned int i;
 		for_each_cpu_and(i, cfg->old_domain, cpu_online_mask)
@@ -2185,6 +2191,7 @@ void send_cleanup_vector(struct irq_cfg *cfg)
 		free_cpumask_var(cleanup_mask);
 	}
 	cfg->move_in_progress = 0;
+	put_online_cpus_atomic();
 }
 
 asmlinkage void smp_irq_move_cleanup_interrupt(void)
@@ -2939,11 +2946,13 @@ unsigned int __create_irqs(unsigned int from, unsigned int count, int node)
 			goto out_irqs;
 	}
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	for (i = 0; i < count; i++)
 		if (__assign_irq_vector(irq + i, cfg[i], apic->target_cpus()))
 			goto out_vecs;
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < count; i++) {
 		irq_set_chip_data(irq + i, cfg[i]);
@@ -2957,6 +2966,7 @@ out_vecs:
 	for (i--; i >= 0; i--)
 		__clear_irq_vector(irq + i, cfg[i]);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 out_irqs:
 	for (i = 0; i < count; i++)
 		free_irq_at(irq + i, cfg[i]);
@@ -2994,9 +3004,11 @@ void destroy_irq(unsigned int irq)
 
 	free_remapped_irq(irq);
 
+	get_online_cpus_atomic();
 	raw_spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq, cfg);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	free_irq_at(irq, cfg);
 }
 
@@ -3365,8 +3377,11 @@ io_apic_setup_irq_pin(unsigned int irq, int node, struct io_apic_irq_attr *attr)
 	if (!cfg)
 		return -EINVAL;
 	ret = __add_pin_to_irq_node(cfg, node, attr->ioapic, attr->ioapic_pin);
-	if (!ret)
+	if (!ret) {
+		get_online_cpus_atomic();
 		setup_ioapic_irq(irq, cfg, attr);
+		put_online_cpus_atomic();
+	}
 	return ret;
 }
 
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index 2f3a799..3eea984 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -83,13 +83,13 @@ static ssize_t therm_throt_device_show_##event##_##name(		\
 	unsigned int cpu = dev->id;					\
 	ssize_t ret;							\
 									\
-	preempt_disable();	/* CPU hotplug */			\
+	get_online_cpus_atomic();	/* CPU hotplug */		\
 	if (cpu_online(cpu)) {						\
 		ret = sprintf(buf, "%lu\n",				\
 			      per_cpu(thermal_state, cpu).event.name);	\
 	} else								\
 		ret = 0;						\
-	preempt_enable();						\
+	put_online_cpus_atomic();					\
 									\
 	return ret;							\
 }
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 282375f..8126374 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -147,12 +147,12 @@ void flush_tlb_current_task(void)
 {
 	struct mm_struct *mm = current->mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	local_flush_tlb();
 	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
 		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*
@@ -187,7 +187,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	unsigned long addr;
 	unsigned act_entries, tlb_entries = 0;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if (current->active_mm != mm)
 		goto flush_all;
 
@@ -225,21 +225,21 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		if (cpumask_any_but(mm_cpumask(mm),
 				smp_processor_id()) < nr_cpu_ids)
 			flush_tlb_others(mm_cpumask(mm), mm, start, end);
-		preempt_enable();
+		put_online_cpus_atomic();
 		return;
 	}
 
 flush_all:
 	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
 		flush_tlb_others(mm_cpumask(mm), mm, 0UL, TLB_FLUSH_ALL);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if (current->active_mm == mm) {
 		if (current->mm)
@@ -251,7 +251,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long start)
 	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
 		flush_tlb_others(mm_cpumask(mm), mm, start, 0UL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void do_flush_tlb_all(void *info)


^ permalink raw reply related	[flat|nested] 187+ messages in thread
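
One detail worth calling out in the io_apic.c hunks above: the hotplug
protection is taken outside the irq-safe vector_lock critical section, i.e.
the ordering is get_online_cpus_atomic() first, then raw_spin_lock_irqsave().
A condensed sketch of that ordering follows; my_vector_lock and
assign_vector_sketch() are illustrative stand-ins, not the real io_apic
symbols.

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(my_vector_lock);	/* stand-in for vector_lock */

static int assign_vector_sketch(const struct cpumask *mask)
{
	unsigned long flags;
	int err = 0;

	/*
	 * Pin CPU hotplug first, then take the irq-safe spinlock, mirroring
	 * assign_irq_vector() in the patch above.
	 */
	get_online_cpus_atomic();
	raw_spin_lock_irqsave(&my_vector_lock, flags);
	/* ... scan cpu_online_mask restricted to 'mask' and pick a vector ... */
	raw_spin_unlock_irqrestore(&my_vector_lock, flags);
	put_online_cpus_atomic();

	return err;
}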

* [PATCH v3 28/45] perf/x86: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Peter Zijlstra, Paul Mackerras, Ingo Molnar,
	Arnaldo Carvalho de Melo, Thomas Gleixner, H. Peter Anvin, x86,
	Srivatsa S. Bhat

The CPU_DYING notifier modifies the per-cpu pointer pmu->box, and this can
race with functions such as uncore_pmu_to_box() and uncore_pci_remove() when
we remove stop_machine() from the CPU offline path. So protect them using
get/put_online_cpus_atomic().

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/x86/kernel/cpu/perf_event_intel_uncore.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
index 9dd9975..7c2a064 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
@@ -1,3 +1,4 @@
+#include <linux/cpu.h>
 #include "perf_event_intel_uncore.h"
 
 static struct intel_uncore_type *empty_uncore[] = { NULL, };
@@ -2630,6 +2631,7 @@ uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu)
 	if (box)
 		return box;
 
+	get_online_cpus_atomic();
 	raw_spin_lock(&uncore_box_lock);
 	list_for_each_entry(box, &pmu->box_list, list) {
 		if (box->phys_id == topology_physical_package_id(cpu)) {
@@ -2639,6 +2641,7 @@ uncore_pmu_to_box(struct intel_uncore_pmu *pmu, int cpu)
 		}
 	}
 	raw_spin_unlock(&uncore_box_lock);
+	put_online_cpus_atomic();
 
 	return *per_cpu_ptr(pmu->box, cpu);
 }
@@ -3229,6 +3232,7 @@ static void uncore_pci_remove(struct pci_dev *pdev)
 	list_del(&box->list);
 	raw_spin_unlock(&uncore_box_lock);
 
+	get_online_cpus_atomic();
 	for_each_possible_cpu(cpu) {
 		if (*per_cpu_ptr(pmu->box, cpu) == box) {
 			*per_cpu_ptr(pmu->box, cpu) = NULL;
@@ -3237,6 +3241,8 @@ static void uncore_pci_remove(struct pci_dev *pdev)
 	}
 
 	WARN_ON_ONCE(atomic_read(&box->refcnt) != 1);
+	put_online_cpus_atomic();
+
 	kfree(box);
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread
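
The race closed above is between readers dereferencing the per-cpu pmu->box
pointer and the CPU_DYING notifier rewriting it, which becomes possible once
stop_machine() no longer freezes every CPU during offline. A hedged sketch of
the reader side follows; struct demo_pmu and demo_box_of() are illustrative
types and names, not the real uncore structures.

#include <linux/cpu.h>
#include <linux/percpu.h>

struct demo_box { int id; };
struct demo_pmu { struct demo_box * __percpu *box; };

static struct demo_box *demo_box_of(struct demo_pmu *pmu, int cpu)
{
	struct demo_box *box;

	/*
	 * Pinning hotplug keeps the CPU_DYING notifier from running
	 * concurrently, so the per-cpu pointer cannot be cleared or
	 * repointed while we read it.
	 */
	get_online_cpus_atomic();
	box = *per_cpu_ptr(pmu->box, cpu);
	put_online_cpus_atomic();

	return box;
}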

* [PATCH v3 29/45] KVM: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Paolo Bonzini, Gleb Natapov, kvm, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while this code runs in atomic context.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 virt/kvm/kvm_main.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 302681c..5bbfa30 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -174,7 +174,7 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 
 	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 	kvm_for_each_vcpu(i, vcpu, kvm) {
 		kvm_make_request(req, vcpu);
 		cpu = vcpu->cpu;
@@ -192,7 +192,7 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
 		smp_call_function_many(cpus, ack_flush, NULL, 1);
 	else
 		called = false;
-	put_cpu();
+	put_online_cpus_atomic();
 	free_cpumask_var(cpus);
 	return called;
 }
@@ -1707,11 +1707,11 @@ void kvm_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 	if (cpu != me && (unsigned)cpu < nr_cpu_ids && cpu_online(cpu))
 		if (kvm_arch_vcpu_should_kick(vcpu))
 			smp_send_reschedule(cpu);
-	put_cpu();
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_kick);
 #endif /* !CONFIG_S390 */


^ permalink raw reply related	[flat|nested] 187+ messages in thread
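
Note the return-value change implied by the hunks above: get_online_cpus_atomic()
is used as a drop-in for get_cpu(), so (judging from the 'me = ...' assignments)
it returns the current CPU id while also pinning hotplug. A small sketch of
that usage follows; kick_cpu_sketch() is an illustrative name, not a real KVM
helper.

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

static void kick_cpu_sketch(int cpu)
{
	int me;

	/*
	 * Returns the current CPU like get_cpu(), and additionally keeps
	 * 'cpu' from going offline between the cpu_online() check and the
	 * reschedule IPI.
	 */
	me = get_online_cpus_atomic();
	if (cpu != me && (unsigned int)cpu < nr_cpu_ids && cpu_online(cpu))
		smp_send_reschedule(cpu);
	put_online_cpus_atomic();
}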

* [PATCH v3 30/45] x86/xen: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Konrad Rzeszutek Wilk, Jeremy Fitzhardinge,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, xen-devel,
	virtualization, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while this code runs in atomic context.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: xen-devel@lists.xensource.com
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/x86/xen/mmu.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index fdc3ba2..3229c4f 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -39,6 +39,7 @@
  * Jeremy Fitzhardinge <jeremy@xensource.com>, XenSource Inc, 2007
  */
 #include <linux/sched.h>
+#include <linux/cpu.h>
 #include <linux/highmem.h>
 #include <linux/debugfs.h>
 #include <linux/bug.h>
@@ -1163,9 +1164,13 @@ static void xen_drop_mm_ref(struct mm_struct *mm)
  */
 static void xen_exit_mmap(struct mm_struct *mm)
 {
-	get_cpu();		/* make sure we don't move around */
+	/*
+	 * Make sure we don't move around, and also prevent CPUs from
+	 * going offline.
+	 */
+	get_online_cpus_atomic();
 	xen_drop_mm_ref(mm);
-	put_cpu();
+	put_online_cpus_atomic();
 
 	spin_lock(&mm->page_table_lock);
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread
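
The new comment in the hunk above captures the dual role of the API here:
get_online_cpus_atomic() keeps the task from migrating (the job get_cpu() used
to do) and also keeps the CPUs holding references from going offline while
those references are dropped. A condensed sketch of that pattern follows;
drop_refs_sketch() and exit_mmap_sketch() are purely illustrative stand-ins
for the Xen routines.

#include <linux/cpu.h>

struct mm_struct;

/*
 * Illustrative stand-in for xen_drop_mm_ref(): would send cross-CPU calls to
 * make other CPUs drop their references to this mm.
 */
static void drop_refs_sketch(struct mm_struct *mm)
{
}

static void exit_mmap_sketch(struct mm_struct *mm)
{
	/*
	 * Stay on this CPU and pin the online mask for the duration of the
	 * cross-CPU reference drop.
	 */
	get_online_cpus_atomic();
	drop_refs_sketch(mm);
	put_online_cpus_atomic();
}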

* [PATCH v3 31/45] alpha/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
                     ` (2 preceding siblings ...)
  (?)
@ 2013-06-27 19:57   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:57 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Richard Henderson, Ivan Kokshaysky, Matt Turner,
	Thomas Gleixner, linux-alpha, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

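As with preempt_enable(), put_online_cpus_atomic() has to be issued on every
exit path, including the early return taken when the mm is only active
locally. A minimal sketch of that shape, with hypothetical helpers standing
in for the arch-specific flush code:

#include <linux/cpu.h>
#include <linux/mm_types.h>

/* Hypothetical stand-ins for the arch-specific flush primitives. */
extern bool my_mm_is_local_only(struct mm_struct *mm);
extern void my_flush_local(struct mm_struct *mm);
extern void my_flush_others(struct mm_struct *mm);	/* sends IPIs */

static void my_flush_mm(struct mm_struct *mm)
{
        get_online_cpus_atomic();

        if (my_mm_is_local_only(mm)) {
                my_flush_local(mm);
                put_online_cpus_atomic();	/* early-return path */
                return;
        }

        my_flush_others(mm);
        put_online_cpus_atomic();		/* common path */
}
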
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/alpha/kernel/smp.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index 7b60834..e147268 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -497,7 +497,6 @@ smp_cpus_done(unsigned int max_cpus)
 	       ((bogosum + 2500) / (5000/HZ)) % 100);
 }
 
-\f
 void
 smp_percpu_timer_interrupt(struct pt_regs *regs)
 {
@@ -681,7 +680,7 @@ ipi_flush_tlb_mm(void *x)
 void
 flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if (mm == current->active_mm) {
 		flush_tlb_current(mm);
@@ -693,7 +692,7 @@ flush_tlb_mm(struct mm_struct *mm)
 				if (mm->context[cpu])
 					mm->context[cpu] = 0;
 			}
-			preempt_enable();
+			put_online_cpus_atomic();
 			return;
 		}
 	}
@@ -702,7 +701,7 @@ flush_tlb_mm(struct mm_struct *mm)
 		printk(KERN_CRIT "flush_tlb_mm: timed out\n");
 	}
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(flush_tlb_mm);
 
@@ -730,7 +729,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 	struct flush_tlb_page_struct data;
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if (mm == current->active_mm) {
 		flush_tlb_current_page(mm, vma, addr);
@@ -742,7 +741,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 				if (mm->context[cpu])
 					mm->context[cpu] = 0;
 			}
-			preempt_enable();
+			put_online_cpus_atomic();
 			return;
 		}
 	}
@@ -755,7 +754,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 		printk(KERN_CRIT "flush_tlb_page: timed out\n");
 	}
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 EXPORT_SYMBOL(flush_tlb_page);
 
@@ -786,7 +785,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 	if ((vma->vm_flags & VM_EXEC) == 0)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if (mm == current->active_mm) {
 		__load_new_mm_context(mm);
@@ -798,7 +797,7 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 				if (mm->context[cpu])
 					mm->context[cpu] = 0;
 			}
-			preempt_enable();
+			put_online_cpus_atomic();
 			return;
 		}
 	}
@@ -807,5 +806,5 @@ flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
 		printk(KERN_CRIT "flush_icache_page: timed out\n");
 	}
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 32/45] blackfin/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Mike Frysinger, Bob Liu, Steven Miao,
	Thomas Gleixner, uclinux-dist-devel, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

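Disabling preemption was sufficient here only because stop_machine() based
offline cannot proceed while any CPU has preemption disabled; once
stop_machine() is removed, the cpu_online_mask snapshot has to be taken
under the new read-side protection instead. A rough equivalent using generic
helpers, where the stop handler is a hypothetical placeholder:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Hypothetical handler that parks the receiving CPU. */
static void my_stop_ipi(void *info)
{
}

static void my_send_stop(void)
{
        cpumask_t callmap;

        get_online_cpus_atomic();

        /* The snapshot cannot go stale while the reference is held. */
        cpumask_copy(&callmap, cpu_online_mask);
        cpumask_clear_cpu(smp_processor_id(), &callmap);
        if (!cpumask_empty(&callmap))
                smp_call_function_many(&callmap, my_stop_ipi, NULL, false);

        put_online_cpus_atomic();
}
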
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Bob Liu <lliubbo@gmail.com>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: uclinux-dist-devel@blackfin.uclinux.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/blackfin/mach-common/smp.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/blackfin/mach-common/smp.c b/arch/blackfin/mach-common/smp.c
index 1bc2ce6..11496cd 100644
--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -238,13 +238,13 @@ void smp_send_stop(void)
 {
 	cpumask_t callmap;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&callmap, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &callmap);
 	if (!cpumask_empty(&callmap))
 		send_ipi(&callmap, BFIN_IPI_CPU_STOP);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 
 	return;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 33/45] cris/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Jesper Nilsson, Mikael Starvik, Thomas Gleixner,
	linux-cris-kernel, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

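Note the ordering: the hotplug read-side is taken before the IRQ-disabling
spinlock and released only after the lock is dropped, so the whole IPI
sequence sees a stable set of online CPUs. A minimal sketch of that nesting;
the lock and the low-level IPI sender are hypothetical placeholders:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_tlbstate_lock);	/* hypothetical */

/* Hypothetical low-level sender, analogous to the arch send_ipi(). */
extern void my_send_flush_ipi(const struct cpumask *mask);

static void my_flush_tlb_others(void)
{
        unsigned long flags;
        cpumask_t cpu_mask;

        /* Hotplug read-side first, then the IRQ-disabling spinlock. */
        get_online_cpus_atomic();
        spin_lock_irqsave(&my_tlbstate_lock, flags);

        cpumask_copy(&cpu_mask, cpu_online_mask);
        cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
        my_send_flush_ipi(&cpu_mask);

        spin_unlock_irqrestore(&my_tlbstate_lock, flags);
        put_online_cpus_atomic();
}
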
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-cris-kernel@axis.com
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/cris/arch-v32/kernel/smp.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/cris/arch-v32/kernel/smp.c b/arch/cris/arch-v32/kernel/smp.c
index cdd1202..b2d4612 100644
--- a/arch/cris/arch-v32/kernel/smp.c
+++ b/arch/cris/arch-v32/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/timex.h>
 #include <linux/sched.h>
+#include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
@@ -222,6 +223,7 @@ void flush_tlb_common(struct mm_struct* mm, struct vm_area_struct* vma, unsigned
 	unsigned long flags;
 	cpumask_t cpu_mask;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&tlbstate_lock, flags);
 	cpu_mask = (mm == FLUSH_ALL ? cpu_all_mask : *mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
@@ -230,6 +232,7 @@ void flush_tlb_common(struct mm_struct* mm, struct vm_area_struct* vma, unsigned
 	flush_addr = addr;
 	send_ipi(IPI_FLUSH_TLB, 1, cpu_mask);
 	spin_unlock_irqrestore(&tlbstate_lock, flags);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
@@ -319,10 +322,12 @@ int smp_call_function(void (*func)(void *info), void *info, int wait)
 	data.info = info;
 	data.wait = wait;
 
+	get_online_cpus_atomic();
 	spin_lock(&call_lock);
 	call_data = &data;
 	ret = send_ipi(IPI_CALL, wait, cpu_mask);
 	spin_unlock(&call_lock);
+	put_online_cpus_atomic();
 
 	return ret;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 34/45] hexagon/smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Richard Kuo, Thomas Gleixner, linux-hexagon,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

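Unlike most conversions in this series, this one does not replace an
existing preempt_disable()/preempt_enable() pair; the original code took no
protection at all around the cpu_online_mask snapshot and the stop IPI. The
added calls close that window, roughly along these lines (the IPI sender is
a hypothetical placeholder):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Hypothetical low-level stop-IPI sender. */
extern void my_send_stop_ipi(const struct cpumask *mask);

static void my_smp_send_stop(void)
{
        struct cpumask targets;

        get_online_cpus_atomic();
        cpumask_copy(&targets, cpu_online_mask);
        cpumask_clear_cpu(smp_processor_id(), &targets);
        my_send_stop_ipi(&targets);
        put_online_cpus_atomic();
}
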
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-hexagon@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/hexagon/kernel/smp.c |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/hexagon/kernel/smp.c b/arch/hexagon/kernel/smp.c
index 0e364ca..30d4318 100644
--- a/arch/hexagon/kernel/smp.c
+++ b/arch/hexagon/kernel/smp.c
@@ -241,9 +241,12 @@ void smp_send_reschedule(int cpu)
 void smp_send_stop(void)
 {
 	struct cpumask targets;
+
+	get_online_cpus_atomic();
 	cpumask_copy(&targets, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &targets);
 	send_ipi(&targets, IPI_CPU_STOP);
+	put_online_cpus_atomic();
 }
 
 void arch_send_call_function_single_ipi(int cpu)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 35/45] ia64: irq, perfmon: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
                     ` (2 preceding siblings ...)
  (?)
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Tony Luck, Fenghua Yu, Andrew Morton,
	Eric W. Biederman, Thomas Gleixner, linux-ia64, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

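The perfmon hunk also illustrates the error-path discipline: once the
reference is taken, a failed spin_trylock() can no longer simply return; it
has to branch to a label that drops the reference first. A minimal sketch of
that pattern, with a hypothetical lock and function:

#include <linux/cpu.h>
#include <linux/errno.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_install_lock);	/* hypothetical */

static int my_install_handler(void)
{
        int ret;

        get_online_cpus_atomic();

        /* One install at a time; just fail the others. */
        if (!spin_trylock(&my_install_lock)) {
                ret = -EBUSY;
                goto out;	/* must still drop the hotplug reference */
        }

        /* ... register the handler on all online CPUs ... */
        ret = 0;

        spin_unlock(&my_install_lock);
out:
        put_online_cpus_atomic();
        return ret;
}
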
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/irq_ia64.c |   15 +++++++++++++++
 arch/ia64/kernel/perfmon.c  |    8 +++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 1034884..f58b162 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -25,6 +25,7 @@
 #include <linux/ptrace.h>
 #include <linux/signal.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/threads.h>
 #include <linux/bitops.h>
 #include <linux/irq.h>
@@ -160,9 +161,11 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __bind_irq_vector(irq, vector, domain);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -190,9 +193,11 @@ static void clear_irq_vector(int irq)
 {
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 int
@@ -204,6 +209,7 @@ ia64_native_assign_irq_vector (int irq)
 
 	vector = -ENOSPC;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -218,6 +224,7 @@ ia64_native_assign_irq_vector (int irq)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return vector;
 }
 
@@ -302,9 +309,11 @@ int irq_prepare_move(int irq, int cpu)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __irq_prepare_move(irq, cpu);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -320,11 +329,13 @@ void irq_complete_move(unsigned irq)
 	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
 		return;
 
+	get_online_cpus_atomic();
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
 	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
 	for_each_cpu_mask(i, cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
+	put_online_cpus_atomic();
 }
 
 static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
@@ -393,10 +404,12 @@ void destroy_and_reserve_irq(unsigned int irq)
 
 	dynamic_irq_cleanup(irq);
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	irq_status[irq] = IRQ_RSVD;
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -409,6 +422,7 @@ int create_irq(void)
 	cpumask_t domain = CPU_MASK_NONE;
 
 	irq = vector = -ENOSPC;
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -424,6 +438,7 @@ int create_irq(void)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	if (irq >= 0)
 		dynamic_irq_init(irq);
 	return irq;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 9ea25fc..16c8303 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6476,9 +6476,12 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	/* do the easy test first */
 	if (pfm_alt_intr_handler) return -EBUSY;
 
+	get_online_cpus_atomic();
+
 	/* one at a time in the install or remove, just fail the others */
 	if (!spin_trylock(&pfm_alt_install_check)) {
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
 	/* reserve our session */
@@ -6498,6 +6501,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	pfm_alt_intr_handler = hdl;
 
 	spin_unlock(&pfm_alt_install_check);
+	put_online_cpus_atomic();
 
 	return 0;
 
@@ -6510,6 +6514,8 @@ cleanup_reserve:
 	}
 
 	spin_unlock(&pfm_alt_install_check);
+out:
+	put_online_cpus_atomic();
 
 	return ret;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 35/45] ia64: irq, perfmon: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Tony Luck, Fenghua Yu, Eric W. Biederman,
	linux-ia64

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while running in atomic context.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/irq_ia64.c |   15 +++++++++++++++
 arch/ia64/kernel/perfmon.c  |    8 +++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 1034884..f58b162 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -25,6 +25,7 @@
 #include <linux/ptrace.h>
 #include <linux/signal.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/threads.h>
 #include <linux/bitops.h>
 #include <linux/irq.h>
@@ -160,9 +161,11 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __bind_irq_vector(irq, vector, domain);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -190,9 +193,11 @@ static void clear_irq_vector(int irq)
 {
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 int
@@ -204,6 +209,7 @@ ia64_native_assign_irq_vector (int irq)
 
 	vector = -ENOSPC;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -218,6 +224,7 @@ ia64_native_assign_irq_vector (int irq)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return vector;
 }
 
@@ -302,9 +309,11 @@ int irq_prepare_move(int irq, int cpu)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __irq_prepare_move(irq, cpu);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -320,11 +329,13 @@ void irq_complete_move(unsigned irq)
 	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
 		return;
 
+	get_online_cpus_atomic();
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
 	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
 	for_each_cpu_mask(i, cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
+	put_online_cpus_atomic();
 }
 
 static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
@@ -393,10 +404,12 @@ void destroy_and_reserve_irq(unsigned int irq)
 
 	dynamic_irq_cleanup(irq);
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	irq_status[irq] = IRQ_RSVD;
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -409,6 +422,7 @@ int create_irq(void)
 	cpumask_t domain = CPU_MASK_NONE;
 
 	irq = vector = -ENOSPC;
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -424,6 +438,7 @@ int create_irq(void)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	if (irq >= 0)
 		dynamic_irq_init(irq);
 	return irq;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 9ea25fc..16c8303 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6476,9 +6476,12 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	/* do the easy test first */
 	if (pfm_alt_intr_handler) return -EBUSY;
 
+	get_online_cpus_atomic();
+
 	/* one at a time in the install or remove, just fail the others */
 	if (!spin_trylock(&pfm_alt_install_check)) {
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
 	/* reserve our session */
@@ -6498,6 +6501,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	pfm_alt_intr_handler = hdl;
 
 	spin_unlock(&pfm_alt_install_check);
+	put_online_cpus_atomic();
 
 	return 0;
 
@@ -6510,6 +6514,8 @@ cleanup_reserve:
 	}
 
 	spin_unlock(&pfm_alt_install_check);
+out:
+	put_online_cpus_atomic();
 
 	return ret;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 35/45] ia64: irq, perfmon: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:58 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, Fenghua Yu, Tony Luck, linux-ia64, nikunj, zhong,
	linux-pm, fweisbec, linux-kernel, rostedt, xiaoguangrong, sbw,
	wangyun, Srivatsa S. Bhat, netdev, Andrew Morton, linuxppc-dev,
	Thomas Gleixner, Eric W. Biederman

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/irq_ia64.c |   15 +++++++++++++++
 arch/ia64/kernel/perfmon.c  |    8 +++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 1034884..f58b162 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -25,6 +25,7 @@
 #include <linux/ptrace.h>
 #include <linux/signal.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/threads.h>
 #include <linux/bitops.h>
 #include <linux/irq.h>
@@ -160,9 +161,11 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __bind_irq_vector(irq, vector, domain);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -190,9 +193,11 @@ static void clear_irq_vector(int irq)
 {
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 int
@@ -204,6 +209,7 @@ ia64_native_assign_irq_vector (int irq)
 
 	vector = -ENOSPC;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -218,6 +224,7 @@ ia64_native_assign_irq_vector (int irq)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return vector;
 }
 
@@ -302,9 +309,11 @@ int irq_prepare_move(int irq, int cpu)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __irq_prepare_move(irq, cpu);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -320,11 +329,13 @@ void irq_complete_move(unsigned irq)
 	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
 		return;
 
+	get_online_cpus_atomic();
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
 	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
 	for_each_cpu_mask(i, cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
+	put_online_cpus_atomic();
 }
 
 static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
@@ -393,10 +404,12 @@ void destroy_and_reserve_irq(unsigned int irq)
 
 	dynamic_irq_cleanup(irq);
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	irq_status[irq] = IRQ_RSVD;
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -409,6 +422,7 @@ int create_irq(void)
 	cpumask_t domain = CPU_MASK_NONE;
 
 	irq = vector = -ENOSPC;
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -424,6 +438,7 @@ int create_irq(void)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	if (irq >= 0)
 		dynamic_irq_init(irq);
 	return irq;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 9ea25fc..16c8303 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6476,9 +6476,12 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	/* do the easy test first */
 	if (pfm_alt_intr_handler) return -EBUSY;
 
+	get_online_cpus_atomic();
+
 	/* one at a time in the install or remove, just fail the others */
 	if (!spin_trylock(&pfm_alt_install_check)) {
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
 	/* reserve our session */
@@ -6498,6 +6501,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	pfm_alt_intr_handler = hdl;
 
 	spin_unlock(&pfm_alt_install_check);
+	put_online_cpus_atomic();
 
 	return 0;
 
@@ -6510,6 +6514,8 @@ cleanup_reserve:
 	}
 
 	spin_unlock(&pfm_alt_install_check);
+out:
+	put_online_cpus_atomic();
 
 	return ret;
 }

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 36/45] ia64: smp, tlb: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Tony Luck, Fenghua Yu, linux-ia64,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.
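
As a rough illustration of the conversion this patch performs (the
function below is hypothetical, not ia64 code), the old
preempt_disable()/smp_processor_id() pair collapses into a single
get_online_cpus_atomic() call, whose return value the hunks below also
use as a get_cpu()-style current-CPU id:

#include <linux/cpu.h>
#include <linux/smp.h>

/* Hypothetical cross-CPU notification, showing the before/after shape. */
static void example_notify_others(void (*ipi_one)(int cpu))
{
	int cpu, self;

	/*
	 * Old pattern:
	 *	preempt_disable();
	 *	self = smp_processor_id();
	 *	...
	 *	preempt_enable();
	 */
	self = get_online_cpus_atomic();	/* blocks CPU offline, returns this CPU */

	for_each_online_cpu(cpu)		/* cpu_online_mask is stable here */
		if (cpu != self)
			ipi_one(cpu);

	put_online_cpus_atomic();
}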

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/smp.c |   12 ++++++------
 arch/ia64/mm/tlb.c     |    4 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e6..25991ba 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -24,6 +24,7 @@
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/kernel_stat.h>
 #include <linux/mm.h>
 #include <linux/cache.h>
@@ -259,8 +260,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	cpumask_t cpumask = xcpumask;
 	int mycpu, cpu, flush_mycpu = 0;
 
-	preempt_disable();
-	mycpu = smp_processor_id();
+	mycpu = get_online_cpus_atomic();
 
 	for_each_cpu_mask(cpu, cpumask)
 		counts[cpu] = local_tlb_flush_counts[cpu].count & 0xffff;
@@ -280,7 +280,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 		while(counts[cpu] == (local_tlb_flush_counts[cpu].count & 0xffff))
 			udelay(FLUSH_DELAY);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void
@@ -293,12 +293,12 @@ void
 smp_flush_tlb_mm (struct mm_struct *mm)
 {
 	cpumask_var_t cpus;
-	preempt_disable();
+	get_online_cpus_atomic();
 	/* this happens for the common case of a single-threaded fork():  */
 	if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
 	{
 		local_finish_flush_tlb_mm(mm);
-		preempt_enable();
+		put_online_cpus_atomic();
 		return;
 	}
 	if (!alloc_cpumask_var(&cpus, GFP_ATOMIC)) {
@@ -313,7 +313,7 @@ smp_flush_tlb_mm (struct mm_struct *mm)
 	local_irq_disable();
 	local_finish_flush_tlb_mm(mm);
 	local_irq_enable();
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void arch_send_call_function_single_ipi(int cpu)
diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c
index ed61297..8c55ef5 100644
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -87,11 +87,11 @@ wrap_mmu_context (struct mm_struct *mm)
 	 * can't call flush_tlb_all() here because of race condition
 	 * with O(1) scheduler [EF]
 	 */
-	cpu = get_cpu(); /* prevent preemption/migration */
+	cpu = get_online_cpus_atomic(); /* prevent preemption/migration */
 	for_each_online_cpu(i)
 		if (i != cpu)
 			per_cpu(ia64_need_tlb_flush, i) = 1;
-	put_cpu();
+	put_online_cpus_atomic();
 	local_flush_tlb_all();
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 37/45] m32r: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Hirokazu Takata, linux-m32r, linux-m32r-ja,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.
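
The need for this is easiest to see in smp_flush_cache_all() below: it
snapshots cpu_online_mask, sends IPIs to the other CPUs, and then spins
until each of them acknowledges, so a CPU disappearing between the
snapshot and the wait would leave the loop waiting forever. A rough
sketch of that shape under the new API (the callbacks here are invented
stand-ins, not m32r internals):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/types.h>
#include <asm/processor.h>	/* cpu_relax() */

/* Illustrative only: send_ipi()/all_acked() are stand-in callbacks. */
static void example_flush_on_others(void (*send_ipi)(const cpumask_t *mask),
				    bool (*all_acked)(void))
{
	cpumask_t targets;

	get_online_cpus_atomic();		/* freeze the set of online CPUs */

	cpumask_copy(&targets, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &targets);

	send_ipi(&targets);			/* ask the other CPUs to flush */

	while (!all_acked())			/* safe: no target can go offline */
		cpu_relax();

	put_online_cpus_atomic();
}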

Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m32r-ja@ml.linux-m32r.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/m32r/kernel/smp.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/m32r/kernel/smp.c b/arch/m32r/kernel/smp.c
index ce7aea3..ffafdba 100644
--- a/arch/m32r/kernel/smp.c
+++ b/arch/m32r/kernel/smp.c
@@ -151,7 +151,7 @@ void smp_flush_cache_all(void)
 	cpumask_t cpumask;
 	unsigned long *mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpumask, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 	spin_lock(&flushcache_lock);
@@ -162,7 +162,7 @@ void smp_flush_cache_all(void)
 	while (flushcache_cpumask)
 		mb();
 	spin_unlock(&flushcache_lock);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void smp_flush_cache_all_interrupt(void)
@@ -197,12 +197,12 @@ void smp_flush_tlb_all(void)
 {
 	unsigned long flags;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	local_irq_save(flags);
 	__flush_tlb_all();
 	local_irq_restore(flags);
 	smp_call_function(flush_tlb_all_ipi, NULL, 1);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*
@@ -250,7 +250,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
 	unsigned long *mmc;
 	unsigned long flags;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu_id = smp_processor_id();
 	mmc = &mm->context[cpu_id];
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
@@ -268,7 +268,7 @@ void smp_flush_tlb_mm(struct mm_struct *mm)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, NULL, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*
@@ -320,7 +320,7 @@ void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	unsigned long *mmc;
 	unsigned long flags;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu_id = smp_processor_id();
 	mmc = &mm->context[cpu_id];
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
@@ -341,7 +341,7 @@ void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, vma, va);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*==========================================================================*


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 38/45] MIPS: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Ralf Baechle, David Daney, Yong Zhang,
	Thomas Gleixner, Sanjay Lal, Steven J. Hill, John Crispin,
	Florian Fainelli, linux-mips, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.
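
Most of the MIPS call sites follow the r4k_on_each_cpu() shape converted
below: run a function on every online CPU by IPI'ing the others and then
calling it locally. A generic sketch of that shape under the new API
(an illustration only, not the actual MIPS code):

#include <linux/cpu.h>
#include <linux/smp.h>

/* Generic sketch of the r4k_on_each_cpu()-style pattern. */
static void example_on_each_cpu(void (*func)(void *info), void *info)
{
	get_online_cpus_atomic();	/* keep the online set stable for the IPIs */

	smp_call_function(func, info, 1);	/* run on the other online CPUs, wait */
	func(info);				/* and run locally */

	put_online_cpus_atomic();
}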

Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Yong Zhang <yong.zhang0@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sanjay Lal <sanjayl@kymasys.com>
Cc: "Steven J. Hill" <sjhill@mips.com>
Cc: John Crispin <blogic@openwrt.org>
Cc: Florian Fainelli <florian@openwrt.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/mips/kernel/cevt-smtc.c |    7 +++++++
 arch/mips/kernel/smp.c       |   16 ++++++++--------
 arch/mips/kernel/smtc.c      |   12 ++++++++++++
 arch/mips/mm/c-octeon.c      |    4 ++--
 arch/mips/mm/c-r4k.c         |    5 +++--
 5 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/arch/mips/kernel/cevt-smtc.c b/arch/mips/kernel/cevt-smtc.c
index 9de5ed7..2e6c0cd 100644
--- a/arch/mips/kernel/cevt-smtc.c
+++ b/arch/mips/kernel/cevt-smtc.c
@@ -11,6 +11,7 @@
 #include <linux/interrupt.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/irq.h>
 
 #include <asm/smtc_ipi.h>
@@ -84,6 +85,8 @@ static int mips_next_event(unsigned long delta,
 	unsigned long nextcomp = 0L;
 	int vpe = current_cpu_data.vpe_id;
 	int cpu = smp_processor_id();
+
+	get_online_cpus_atomic();
 	local_irq_save(flags);
 	mtflags = dmt();
 
@@ -164,6 +167,7 @@ static int mips_next_event(unsigned long delta,
 	}
 	emt(mtflags);
 	local_irq_restore(flags);
+	put_online_cpus_atomic();
 	return 0;
 }
 
@@ -177,6 +181,7 @@ void smtc_distribute_timer(int vpe)
 	unsigned long nextstamp;
 	unsigned long reference;
 
+	get_online_cpus_atomic();
 
 repeat:
 	nextstamp = 0L;
@@ -229,6 +234,8 @@ repeat:
 			> (unsigned long)LONG_MAX)
 				goto repeat;
 	}
+
+	put_online_cpus_atomic();
 }
 
 
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 6e7862a..be152b6 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -250,12 +250,12 @@ static inline void smp_on_other_tlbs(void (*func) (void *info), void *info)
 
 static inline void smp_on_each_tlb(void (*func) (void *info), void *info)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	smp_on_other_tlbs(func, info);
 	func(info);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*
@@ -273,7 +273,7 @@ static inline void smp_on_each_tlb(void (*func) (void *info), void *info)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
@@ -287,7 +287,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -307,7 +307,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd = {
 			.vma = vma,
@@ -325,7 +325,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 		}
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -354,7 +354,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd = {
 			.vma = vma,
@@ -371,7 +371,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 		}
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index 75a4fd7..3cda8eb 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
 #include <linux/kernel_stat.h>
@@ -1143,6 +1144,8 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
 	 * for the current TC, so we ought not to have to do it explicitly here.
 	 */
 
+	get_online_cpus_atomic();
+
 	for_each_online_cpu(cpu) {
 		if (cpu_data[cpu].vpe_id != my_vpe)
 			continue;
@@ -1180,6 +1183,8 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
 		}
 	}
 
+	put_online_cpus_atomic();
+
 	return IRQ_HANDLED;
 }
 
@@ -1383,6 +1388,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	 * them, let's be really careful...
 	 */
 
+	get_online_cpus_atomic();
 	local_irq_save(flags);
 	if (smtc_status & SMTC_TLB_SHARED) {
 		mtflags = dvpe();
@@ -1438,6 +1444,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	else
 		emt(mtflags);
 	local_irq_restore(flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -1496,6 +1503,7 @@ void smtc_cflush_lockdown(void)
 {
 	int cpu;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cpu != smp_processor_id()) {
 			settc(cpu_data[cpu].tc_id);
@@ -1504,6 +1512,7 @@ void smtc_cflush_lockdown(void)
 		}
 	}
 	mips_ihb();
+	put_online_cpus_atomic();
 }
 
 /* It would be cheating to change the cpu_online states during a flush! */
@@ -1512,6 +1521,8 @@ void smtc_cflush_release(void)
 {
 	int cpu;
 
+	get_online_cpus_atomic();
+
 	/*
 	 * Start with a hazard barrier to ensure
 	 * that all CACHE ops have played through.
@@ -1525,4 +1536,5 @@ void smtc_cflush_release(void)
 		}
 	}
 	mips_ihb();
+	put_online_cpus_atomic();
 }
diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
index 8557fb5..8e1bcf6 100644
--- a/arch/mips/mm/c-octeon.c
+++ b/arch/mips/mm/c-octeon.c
@@ -73,7 +73,7 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	mb();
 	octeon_local_flush_icache();
 #ifdef CONFIG_SMP
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu = smp_processor_id();
 
 	/*
@@ -88,7 +88,7 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	for_each_cpu(cpu, &mask)
 		octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 #endif
 }
 
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index 21813be..86c8b3e 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -14,6 +14,7 @@
 #include <linux/linkage.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/bitops.h>
@@ -46,13 +47,13 @@
  */
 static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #if !defined(CONFIG_MIPS_MT_SMP) && !defined(CONFIG_MIPS_MT_SMTC)
 	smp_call_function(func, info, 1);
 #endif
 	func(info);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 #if defined(CONFIG_MIPS_CMP)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 38/45] MIPS: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-mips, David Daney, fweisbec, linux-arch, xiaoguangrong,
	Steven J. Hill, wangyun, nikunj, linux-pm, rostedt, Sanjay Lal,
	Thomas Gleixner, Florian Fainelli, John Crispin, zhong,
	Yong Zhang, netdev, linux-kernel, Ralf Baechle, sbw,
	Srivatsa S. Bhat, linuxppc-dev

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.

Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Yong Zhang <yong.zhang0@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sanjay Lal <sanjayl@kymasys.com>
Cc: "Steven J. Hill" <sjhill@mips.com>
Cc: John Crispin <blogic@openwrt.org>
Cc: Florian Fainelli <florian@openwrt.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/mips/kernel/cevt-smtc.c |    7 +++++++
 arch/mips/kernel/smp.c       |   16 ++++++++--------
 arch/mips/kernel/smtc.c      |   12 ++++++++++++
 arch/mips/mm/c-octeon.c      |    4 ++--
 arch/mips/mm/c-r4k.c         |    5 +++--
 5 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/arch/mips/kernel/cevt-smtc.c b/arch/mips/kernel/cevt-smtc.c
index 9de5ed7..2e6c0cd 100644
--- a/arch/mips/kernel/cevt-smtc.c
+++ b/arch/mips/kernel/cevt-smtc.c
@@ -11,6 +11,7 @@
 #include <linux/interrupt.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/irq.h>
 
 #include <asm/smtc_ipi.h>
@@ -84,6 +85,8 @@ static int mips_next_event(unsigned long delta,
 	unsigned long nextcomp = 0L;
 	int vpe = current_cpu_data.vpe_id;
 	int cpu = smp_processor_id();
+
+	get_online_cpus_atomic();
 	local_irq_save(flags);
 	mtflags = dmt();
 
@@ -164,6 +167,7 @@ static int mips_next_event(unsigned long delta,
 	}
 	emt(mtflags);
 	local_irq_restore(flags);
+	put_online_cpus_atomic();
 	return 0;
 }
 
@@ -177,6 +181,7 @@ void smtc_distribute_timer(int vpe)
 	unsigned long nextstamp;
 	unsigned long reference;
 
+	get_online_cpus_atomic();
 
 repeat:
 	nextstamp = 0L;
@@ -229,6 +234,8 @@ repeat:
 			> (unsigned long)LONG_MAX)
 				goto repeat;
 	}
+
+	put_online_cpus_atomic();
 }
 
 
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 6e7862a..be152b6 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -250,12 +250,12 @@ static inline void smp_on_other_tlbs(void (*func) (void *info), void *info)
 
 static inline void smp_on_each_tlb(void (*func) (void *info), void *info)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	smp_on_other_tlbs(func, info);
 	func(info);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /*
@@ -273,7 +273,7 @@ static inline void smp_on_each_tlb(void (*func) (void *info), void *info)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_on_other_tlbs(flush_tlb_mm_ipi, mm);
@@ -287,7 +287,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -307,7 +307,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd = {
 			.vma = vma,
@@ -325,7 +325,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned l
 		}
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -354,7 +354,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd = {
 			.vma = vma,
@@ -371,7 +371,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 		}
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index 75a4fd7..3cda8eb 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -21,6 +21,7 @@
 #include <linux/kernel.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
 #include <linux/kernel_stat.h>
@@ -1143,6 +1144,8 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
 	 * for the current TC, so we ought not to have to do it explicitly here.
 	 */
 
+	get_online_cpus_atomic();
+
 	for_each_online_cpu(cpu) {
 		if (cpu_data[cpu].vpe_id != my_vpe)
 			continue;
@@ -1180,6 +1183,8 @@ static irqreturn_t ipi_interrupt(int irq, void *dev_idm)
 		}
 	}
 
+	put_online_cpus_atomic();
+
 	return IRQ_HANDLED;
 }
 
@@ -1383,6 +1388,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	 * them, let's be really careful...
 	 */
 
+	get_online_cpus_atomic();
 	local_irq_save(flags);
 	if (smtc_status & SMTC_TLB_SHARED) {
 		mtflags = dvpe();
@@ -1438,6 +1444,7 @@ void smtc_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 	else
 		emt(mtflags);
 	local_irq_restore(flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -1496,6 +1503,7 @@ void smtc_cflush_lockdown(void)
 {
 	int cpu;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cpu != smp_processor_id()) {
 			settc(cpu_data[cpu].tc_id);
@@ -1504,6 +1512,7 @@ void smtc_cflush_lockdown(void)
 		}
 	}
 	mips_ihb();
+	put_online_cpus_atomic();
 }
 
 /* It would be cheating to change the cpu_online states during a flush! */
@@ -1512,6 +1521,8 @@ void smtc_cflush_release(void)
 {
 	int cpu;
 
+	get_online_cpus_atomic();
+
 	/*
 	 * Start with a hazard barrier to ensure
 	 * that all CACHE ops have played through.
@@ -1525,4 +1536,5 @@ void smtc_cflush_release(void)
 		}
 	}
 	mips_ihb();
+	put_online_cpus_atomic();
 }
diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
index 8557fb5..8e1bcf6 100644
--- a/arch/mips/mm/c-octeon.c
+++ b/arch/mips/mm/c-octeon.c
@@ -73,7 +73,7 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	mb();
 	octeon_local_flush_icache();
 #ifdef CONFIG_SMP
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpu = smp_processor_id();
 
 	/*
@@ -88,7 +88,7 @@ static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
 	for_each_cpu(cpu, &mask)
 		octeon_send_ipi_single(cpu, SMP_ICACHE_FLUSH);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 #endif
 }
 
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index 21813be..86c8b3e 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -14,6 +14,7 @@
 #include <linux/linkage.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/bitops.h>
@@ -46,13 +47,13 @@
  */
 static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #if !defined(CONFIG_MIPS_MT_SMP) && !defined(CONFIG_MIPS_MT_SMTC)
 	smp_call_function(func, info, 1);
 #endif
 	func(info);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 #if defined(CONFIG_MIPS_CMP)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 39/45] mn10300: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David Howells, Koichi Yasutake, linux-am33-list,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.
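
Note that, as used in this series, get_online_cpus_atomic() also disables
preemption and returns the current CPU number, so get_cpu()/put_cpu() pairs
convert directly. A minimal sketch (the handler name is illustrative; the
real conversion is in the smp_flush_tlb() hunk below):

	#include <linux/cpu.h>
	#include <linux/smp.h>
	#include <linux/printk.h>

	static void example_ipi_handler(void *unused)
	{
		unsigned long cpu_id;

		cpu_id = get_online_cpus_atomic();	/* was: cpu_id = get_cpu(); */

		/* ... per-CPU work that must see a stable cpu_online_mask ... */
		pr_debug("IPI handled on CPU %lu\n", cpu_id);

		put_online_cpus_atomic();		/* was: put_cpu(); */
	}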

Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: linux-am33-list@redhat.com
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/mn10300/mm/cache-smp.c |    3 +++
 arch/mn10300/mm/tlb-smp.c   |   17 +++++++++--------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/mn10300/mm/cache-smp.c b/arch/mn10300/mm/cache-smp.c
index 2d23b9e..406357d 100644
--- a/arch/mn10300/mm/cache-smp.c
+++ b/arch/mn10300/mm/cache-smp.c
@@ -13,6 +13,7 @@
 #include <linux/mman.h>
 #include <linux/threads.h>
 #include <linux/interrupt.h>
+#include <linux/cpu.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/processor.h>
@@ -91,6 +92,7 @@ void smp_cache_interrupt(void)
 void smp_cache_call(unsigned long opr_mask,
 		    unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	smp_cache_mask = opr_mask;
 	smp_cache_start = start;
 	smp_cache_end = end;
@@ -102,4 +104,5 @@ void smp_cache_call(unsigned long opr_mask,
 	while (!cpumask_empty(&smp_cache_ipi_map))
 		/* nothing. lockup detection does not belong here */
 		mb();
+	put_online_cpus_atomic();
 }
diff --git a/arch/mn10300/mm/tlb-smp.c b/arch/mn10300/mm/tlb-smp.c
index 3e57faf..8856fd3 100644
--- a/arch/mn10300/mm/tlb-smp.c
+++ b/arch/mn10300/mm/tlb-smp.c
@@ -23,6 +23,7 @@
 #include <linux/sched.h>
 #include <linux/profile.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <asm/tlbflush.h>
 #include <asm/bitops.h>
 #include <asm/processor.h>
@@ -61,7 +62,7 @@ void smp_flush_tlb(void *unused)
 {
 	unsigned long cpu_id;
 
-	cpu_id = get_cpu();
+	cpu_id = get_online_cpus_atomic();
 
 	if (!cpumask_test_cpu(cpu_id, &flush_cpumask))
 		/* This was a BUG() but until someone can quote me the line
@@ -82,7 +83,7 @@ void smp_flush_tlb(void *unused)
 	cpumask_clear_cpu(cpu_id, &flush_cpumask);
 	smp_mb__after_clear_bit();
 out:
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -144,7 +145,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -152,7 +153,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -163,7 +164,7 @@ void flush_tlb_current_task(void)
 	struct mm_struct *mm = current->mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -171,7 +172,7 @@ void flush_tlb_current_task(void)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -184,7 +185,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	struct mm_struct *mm = vma->vm_mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -192,7 +193,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, va);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 39/45] mn10300: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David Howells, Koichi Yasutake, linux-am33-list

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.

Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: linux-am33-list@redhat.com
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/mn10300/mm/cache-smp.c |    3 +++
 arch/mn10300/mm/tlb-smp.c   |   17 +++++++++--------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/mn10300/mm/cache-smp.c b/arch/mn10300/mm/cache-smp.c
index 2d23b9e..406357d 100644
--- a/arch/mn10300/mm/cache-smp.c
+++ b/arch/mn10300/mm/cache-smp.c
@@ -13,6 +13,7 @@
 #include <linux/mman.h>
 #include <linux/threads.h>
 #include <linux/interrupt.h>
+#include <linux/cpu.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/processor.h>
@@ -91,6 +92,7 @@ void smp_cache_interrupt(void)
 void smp_cache_call(unsigned long opr_mask,
 		    unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	smp_cache_mask = opr_mask;
 	smp_cache_start = start;
 	smp_cache_end = end;
@@ -102,4 +104,5 @@ void smp_cache_call(unsigned long opr_mask,
 	while (!cpumask_empty(&smp_cache_ipi_map))
 		/* nothing. lockup detection does not belong here */
 		mb();
+	put_online_cpus_atomic();
 }
diff --git a/arch/mn10300/mm/tlb-smp.c b/arch/mn10300/mm/tlb-smp.c
index 3e57faf..8856fd3 100644
--- a/arch/mn10300/mm/tlb-smp.c
+++ b/arch/mn10300/mm/tlb-smp.c
@@ -23,6 +23,7 @@
 #include <linux/sched.h>
 #include <linux/profile.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <asm/tlbflush.h>
 #include <asm/bitops.h>
 #include <asm/processor.h>
@@ -61,7 +62,7 @@ void smp_flush_tlb(void *unused)
 {
 	unsigned long cpu_id;
 
-	cpu_id = get_cpu();
+	cpu_id = get_online_cpus_atomic();
 
 	if (!cpumask_test_cpu(cpu_id, &flush_cpumask))
 		/* This was a BUG() but until someone can quote me the line
@@ -82,7 +83,7 @@ void smp_flush_tlb(void *unused)
 	cpumask_clear_cpu(cpu_id, &flush_cpumask);
 	smp_mb__after_clear_bit();
 out:
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -144,7 +145,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -152,7 +153,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -163,7 +164,7 @@ void flush_tlb_current_task(void)
 	struct mm_struct *mm = current->mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -171,7 +172,7 @@ void flush_tlb_current_task(void)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -184,7 +185,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	struct mm_struct *mm = vma->vm_mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -192,7 +193,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, va);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 39/45] mn10300: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, linux-am33-list, nikunj, zhong, linux-pm, fweisbec,
	linux-kernel, rostedt, xiaoguangrong, David Howells, sbw,
	wangyun, Srivatsa S. Bhat, netdev, Koichi Yasutake, linuxppc-dev

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.

Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: linux-am33-list@redhat.com
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/mn10300/mm/cache-smp.c |    3 +++
 arch/mn10300/mm/tlb-smp.c   |   17 +++++++++--------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/mn10300/mm/cache-smp.c b/arch/mn10300/mm/cache-smp.c
index 2d23b9e..406357d 100644
--- a/arch/mn10300/mm/cache-smp.c
+++ b/arch/mn10300/mm/cache-smp.c
@@ -13,6 +13,7 @@
 #include <linux/mman.h>
 #include <linux/threads.h>
 #include <linux/interrupt.h>
+#include <linux/cpu.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/processor.h>
@@ -91,6 +92,7 @@ void smp_cache_interrupt(void)
 void smp_cache_call(unsigned long opr_mask,
 		    unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	smp_cache_mask = opr_mask;
 	smp_cache_start = start;
 	smp_cache_end = end;
@@ -102,4 +104,5 @@ void smp_cache_call(unsigned long opr_mask,
 	while (!cpumask_empty(&smp_cache_ipi_map))
 		/* nothing. lockup detection does not belong here */
 		mb();
+	put_online_cpus_atomic();
 }
diff --git a/arch/mn10300/mm/tlb-smp.c b/arch/mn10300/mm/tlb-smp.c
index 3e57faf..8856fd3 100644
--- a/arch/mn10300/mm/tlb-smp.c
+++ b/arch/mn10300/mm/tlb-smp.c
@@ -23,6 +23,7 @@
 #include <linux/sched.h>
 #include <linux/profile.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <asm/tlbflush.h>
 #include <asm/bitops.h>
 #include <asm/processor.h>
@@ -61,7 +62,7 @@ void smp_flush_tlb(void *unused)
 {
 	unsigned long cpu_id;
 
-	cpu_id = get_cpu();
+	cpu_id = get_online_cpus_atomic();
 
 	if (!cpumask_test_cpu(cpu_id, &flush_cpumask))
 		/* This was a BUG() but until someone can quote me the line
@@ -82,7 +83,7 @@ void smp_flush_tlb(void *unused)
 	cpumask_clear_cpu(cpu_id, &flush_cpumask);
 	smp_mb__after_clear_bit();
 out:
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -144,7 +145,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -152,7 +153,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -163,7 +164,7 @@ void flush_tlb_current_task(void)
 	struct mm_struct *mm = current->mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -171,7 +172,7 @@ void flush_tlb_current_task(void)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**
@@ -184,7 +185,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	struct mm_struct *mm = vma->vm_mm;
 	cpumask_t cpu_mask;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	cpumask_copy(&cpu_mask, mm_cpumask(mm));
 	cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
 
@@ -192,7 +193,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
 	if (!cpumask_empty(&cpu_mask))
 		flush_tlb_others(cpu_mask, mm, va);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 /**

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 40/45] powerpc, irq: Use GFP_ATOMIC allocations in atomic context
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Michael Ellerman,
	Paul Mackerras, Ian Munsie, Steven Rostedt, Michael Ellerman,
	Li Zhong, linuxppc-dev, Srivatsa S. Bhat

The function migrate_irqs() is called with interrupts disabled,
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.
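
The underlying rule is the usual one: an allocation made with interrupts
disabled (or under a spinlock) must not sleep, so GFP_ATOMIC is required
and a failed allocation has to be tolerated. A minimal sketch of the
pattern (the helper name is illustrative):

	#include <linux/cpumask.h>
	#include <linux/gfp.h>

	/* Allocate a cpumask from a context that cannot sleep (e.g. IRQs off). */
	static int alloc_mask_in_atomic_ctx(cpumask_var_t *mask)
	{
		if (!alloc_cpumask_var(mask, GFP_ATOMIC))
			return -ENOMEM;	/* GFP_ATOMIC may fail; the caller must cope */
		return 0;
	}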

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ea185e0..ca39bac 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -412,7 +412,7 @@ void migrate_irqs(void)
 	cpumask_var_t mask;
 	const struct cpumask *map = cpu_online_mask;
 
-	alloc_cpumask_var(&mask, GFP_KERNEL);
+	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
 	for_each_irq_desc(irq, desc) {
 		struct irq_data *data;


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 40/45] powerpc, irq: Use GFP_ATOMIC allocations in atomic context
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Michael Ellerman,
	Paul Mackerras, Ian Munsie

The function migrate_irqs() is called with interrupts disabled,
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ea185e0..ca39bac 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -412,7 +412,7 @@ void migrate_irqs(void)
 	cpumask_var_t mask;
 	const struct cpumask *map = cpu_online_mask;
 
-	alloc_cpumask_var(&mask, GFP_KERNEL);
+	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
 	for_each_irq_desc(irq, desc) {
 		struct irq_data *data;

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 40/45] powerpc, irq: Use GFP_ATOMIC allocations in atomic context
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 19:59 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, Li Zhong, linux-pm, fweisbec, linux-kernel,
	Steven Rostedt, xiaoguangrong, sbw, Paul Mackerras, wangyun,
	Srivatsa S. Bhat, netdev, linuxppc-dev, Ian Munsie

The function migrate_irqs() is called with interrupts disabled,
and hence it's not safe to do GFP_KERNEL allocations inside it,
because they can sleep. So change the gfp mask to GFP_ATOMIC.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ian Munsie <imunsie@au1.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ea185e0..ca39bac 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -412,7 +412,7 @@ void migrate_irqs(void)
 	cpumask_var_t mask;
 	const struct cpumask *map = cpu_online_mask;
 
-	alloc_cpumask_var(&mask, GFP_KERNEL);
+	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
 	for_each_irq_desc(irq, desc) {
 		struct irq_data *data;

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 41/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Gleb Natapov,
	Alexander Graf, Rob Herring, Grant Likely, Kumar Gala,
	Zhao Chenhui, linuxppc-dev, kvm, kvm-ppc, oprofile-list,
	cbe-oss-dev, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.
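
Most of the hunks below follow a single pattern: any atomic-context walk of
cpu_online_mask is bracketed by the new APIs, so the mask cannot change under
the loop. A minimal sketch (the callback and function name are illustrative):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>
	#include <linux/smp.h>

	static void notify_all_other_cpus(void (*notify)(int cpu))
	{
		int cpu, me;

		get_online_cpus_atomic();	/* cpu_online_mask is now stable */
		me = smp_processor_id();
		for_each_online_cpu(cpu)
			if (cpu != me)
				notify(cpu);
		put_online_cpus_atomic();
	}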

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org
Cc: kvm-ppc@vger.kernel.org
Cc: oprofile-list@lists.sf.net
Cc: cbe-oss-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c                  |    7 ++++++-
 arch/powerpc/kernel/machine_kexec_64.c     |    4 ++--
 arch/powerpc/kernel/smp.c                  |    2 ++
 arch/powerpc/kvm/book3s_hv.c               |    5 +++--
 arch/powerpc/mm/mmu_context_nohash.c       |    3 +++
 arch/powerpc/oprofile/cell/spu_profiler.c  |    3 +++
 arch/powerpc/oprofile/cell/spu_task_sync.c |    4 ++++
 arch/powerpc/oprofile/op_model_cell.c      |    6 ++++++
 8 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ca39bac..41e9961 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -45,6 +45,7 @@
 #include <linux/irq.h>
 #include <linux/seq_file.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/profile.h>
 #include <linux/bitops.h>
 #include <linux/list.h>
@@ -410,7 +411,10 @@ void migrate_irqs(void)
 	unsigned int irq;
 	static int warned;
 	cpumask_var_t mask;
-	const struct cpumask *map = cpu_online_mask;
+	const struct cpumask *map;
+
+	get_online_cpus_atomic();
+	map = cpu_online_mask;
 
 	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
@@ -436,6 +440,7 @@ void migrate_irqs(void)
 	}
 
 	free_cpumask_var(mask);
+	put_online_cpus_atomic();
 
 	local_irq_enable();
 	mdelay(1);
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 611acdf..38f6d75 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -187,7 +187,7 @@ static void kexec_prepare_cpus_wait(int wait_state)
 	int my_cpu, i, notified=-1;
 
 	hw_breakpoint_disable();
-	my_cpu = get_cpu();
+	my_cpu = get_online_cpus_atomic();
 	/* Make sure each CPU has at least made it to the state we need.
 	 *
 	 * FIXME: There is a (slim) chance of a problem if not all of the CPUs
@@ -266,7 +266,7 @@ static void kexec_prepare_cpus(void)
 	 */
 	kexec_prepare_cpus_wait(KEXEC_STATE_REAL_MODE);
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 #else /* ! SMP */
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ee7ac5e..2123bec 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -277,9 +277,11 @@ void smp_send_debugger_break(void)
 	if (unlikely(!smp_ops))
 		return;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu)
 		if (cpu != me)
 			do_message_pass(cpu, PPC_MSG_DEBUGGER_BREAK);
+	put_online_cpus_atomic();
 }
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2efa9dd..9d8a973 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -28,6 +28,7 @@
 #include <linux/fs.h>
 #include <linux/anon_inodes.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/page-flags.h>
 #include <linux/srcu.h>
@@ -78,7 +79,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 
 	/* CPU points to the first thread of the core */
 	if (cpu != me && cpu >= 0 && cpu < nr_cpu_ids) {
@@ -88,7 +89,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		else if (cpu_online(cpu))
 			smp_send_reschedule(cpu);
 	}
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index e779642..c7bdcb4 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -194,6 +194,8 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	unsigned int i, id, cpu = smp_processor_id();
 	unsigned long *map;
 
+	get_online_cpus_atomic();
+
 	/* No lockless fast path .. yet */
 	raw_spin_lock(&context_lock);
 
@@ -280,6 +282,7 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	pr_hardcont(" -> %d\n", id);
 	set_context(id, next->pgd);
 	raw_spin_unlock(&context_lock);
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/oprofile/cell/spu_profiler.c b/arch/powerpc/oprofile/cell/spu_profiler.c
index b129d00..ab6e6c1 100644
--- a/arch/powerpc/oprofile/cell/spu_profiler.c
+++ b/arch/powerpc/oprofile/cell/spu_profiler.c
@@ -14,6 +14,7 @@
 
 #include <linux/hrtimer.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/slab.h>
 #include <asm/cell-pmu.h>
 #include <asm/time.h>
@@ -142,6 +143,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 	if (!spu_prof_running)
 		goto stop;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cbe_get_hw_thread_id(cpu))
 			continue;
@@ -177,6 +179,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 				       oprof_spu_smpl_arry_lck_flags);
 
 	}
+	put_online_cpus_atomic();
 	smp_wmb();	/* insure spu event buffer updates are written */
 			/* don't want events intermingled... */
 
diff --git a/arch/powerpc/oprofile/cell/spu_task_sync.c b/arch/powerpc/oprofile/cell/spu_task_sync.c
index 28f1af2..8464ef6 100644
--- a/arch/powerpc/oprofile/cell/spu_task_sync.c
+++ b/arch/powerpc/oprofile/cell/spu_task_sync.c
@@ -28,6 +28,7 @@
 #include <linux/oprofile.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/cpu.h>
 #include "pr_util.h"
 
 #define RELEASE_ALL 9999
@@ -448,11 +449,14 @@ static int number_of_online_nodes(void)
 {
         u32 cpu; u32 tmp;
         int nodes = 0;
+
+	get_online_cpus_atomic();
         for_each_online_cpu(cpu) {
                 tmp = cbe_cpu_to_node(cpu) + 1;
                 if (tmp > nodes)
                         nodes++;
         }
+	put_online_cpus_atomic();
         return nodes;
 }
 
diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c
index b9589c1..c9bb028 100644
--- a/arch/powerpc/oprofile/op_model_cell.c
+++ b/arch/powerpc/oprofile/op_model_cell.c
@@ -22,6 +22,7 @@
 #include <linux/oprofile.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/timer.h>
 #include <asm/cell-pmu.h>
@@ -463,6 +464,7 @@ static void cell_virtual_cntr(unsigned long data)
 	 * not both playing with the counters on the same node.
 	 */
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	prev_hdw_thread = hdw_thread;
@@ -550,6 +552,7 @@ static void cell_virtual_cntr(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	mod_timer(&timer_virt_cntr, jiffies + HZ / 10);
 }
@@ -608,6 +611,8 @@ static void spu_evnt_swap(unsigned long data)
 	/* Make sure spu event interrupt handler and spu event swap
 	 * don't access the counters simultaneously.
 	 */
+
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	cur_spu_evnt_phys_spu_indx = spu_evnt_phys_spu_indx;
@@ -673,6 +678,7 @@ static void spu_evnt_swap(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	/* swap approximately every 0.1 seconds */
 	mod_timer(&timer_spu_event_swap, jiffies + HZ / 25);


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 41/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Gleb Natapov,
	Alexander Graf, Rob Herring, Grant Likely, Kumar Gala,
	Zhao Chenhui

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org
Cc: kvm-ppc@vger.kernel.org
Cc: oprofile-list@lists.sf.net
Cc: cbe-oss-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c                  |    7 ++++++-
 arch/powerpc/kernel/machine_kexec_64.c     |    4 ++--
 arch/powerpc/kernel/smp.c                  |    2 ++
 arch/powerpc/kvm/book3s_hv.c               |    5 +++--
 arch/powerpc/mm/mmu_context_nohash.c       |    3 +++
 arch/powerpc/oprofile/cell/spu_profiler.c  |    3 +++
 arch/powerpc/oprofile/cell/spu_task_sync.c |    4 ++++
 arch/powerpc/oprofile/op_model_cell.c      |    6 ++++++
 8 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ca39bac..41e9961 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -45,6 +45,7 @@
 #include <linux/irq.h>
 #include <linux/seq_file.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/profile.h>
 #include <linux/bitops.h>
 #include <linux/list.h>
@@ -410,7 +411,10 @@ void migrate_irqs(void)
 	unsigned int irq;
 	static int warned;
 	cpumask_var_t mask;
-	const struct cpumask *map = cpu_online_mask;
+	const struct cpumask *map;
+
+	get_online_cpus_atomic();
+	map = cpu_online_mask;
 
 	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
@@ -436,6 +440,7 @@ void migrate_irqs(void)
 	}
 
 	free_cpumask_var(mask);
+	put_online_cpus_atomic();
 
 	local_irq_enable();
 	mdelay(1);
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 611acdf..38f6d75 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -187,7 +187,7 @@ static void kexec_prepare_cpus_wait(int wait_state)
 	int my_cpu, i, notified=-1;
 
 	hw_breakpoint_disable();
-	my_cpu = get_cpu();
+	my_cpu = get_online_cpus_atomic();
 	/* Make sure each CPU has at least made it to the state we need.
 	 *
 	 * FIXME: There is a (slim) chance of a problem if not all of the CPUs
@@ -266,7 +266,7 @@ static void kexec_prepare_cpus(void)
 	 */
 	kexec_prepare_cpus_wait(KEXEC_STATE_REAL_MODE);
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 #else /* ! SMP */
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ee7ac5e..2123bec 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -277,9 +277,11 @@ void smp_send_debugger_break(void)
 	if (unlikely(!smp_ops))
 		return;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu)
 		if (cpu != me)
 			do_message_pass(cpu, PPC_MSG_DEBUGGER_BREAK);
+	put_online_cpus_atomic();
 }
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2efa9dd..9d8a973 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -28,6 +28,7 @@
 #include <linux/fs.h>
 #include <linux/anon_inodes.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/page-flags.h>
 #include <linux/srcu.h>
@@ -78,7 +79,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 
 	/* CPU points to the first thread of the core */
 	if (cpu != me && cpu >= 0 && cpu < nr_cpu_ids) {
@@ -88,7 +89,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		else if (cpu_online(cpu))
 			smp_send_reschedule(cpu);
 	}
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index e779642..c7bdcb4 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -194,6 +194,8 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	unsigned int i, id, cpu = smp_processor_id();
 	unsigned long *map;
 
+	get_online_cpus_atomic();
+
 	/* No lockless fast path .. yet */
 	raw_spin_lock(&context_lock);
 
@@ -280,6 +282,7 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	pr_hardcont(" -> %d\n", id);
 	set_context(id, next->pgd);
 	raw_spin_unlock(&context_lock);
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/oprofile/cell/spu_profiler.c b/arch/powerpc/oprofile/cell/spu_profiler.c
index b129d00..ab6e6c1 100644
--- a/arch/powerpc/oprofile/cell/spu_profiler.c
+++ b/arch/powerpc/oprofile/cell/spu_profiler.c
@@ -14,6 +14,7 @@
 
 #include <linux/hrtimer.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/slab.h>
 #include <asm/cell-pmu.h>
 #include <asm/time.h>
@@ -142,6 +143,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 	if (!spu_prof_running)
 		goto stop;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cbe_get_hw_thread_id(cpu))
 			continue;
@@ -177,6 +179,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 				       oprof_spu_smpl_arry_lck_flags);
 
 	}
+	put_online_cpus_atomic();
 	smp_wmb();	/* insure spu event buffer updates are written */
 			/* don't want events intermingled... */
 
diff --git a/arch/powerpc/oprofile/cell/spu_task_sync.c b/arch/powerpc/oprofile/cell/spu_task_sync.c
index 28f1af2..8464ef6 100644
--- a/arch/powerpc/oprofile/cell/spu_task_sync.c
+++ b/arch/powerpc/oprofile/cell/spu_task_sync.c
@@ -28,6 +28,7 @@
 #include <linux/oprofile.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/cpu.h>
 #include "pr_util.h"
 
 #define RELEASE_ALL 9999
@@ -448,11 +449,14 @@ static int number_of_online_nodes(void)
 {
         u32 cpu; u32 tmp;
         int nodes = 0;
+
+	get_online_cpus_atomic();
         for_each_online_cpu(cpu) {
                 tmp = cbe_cpu_to_node(cpu) + 1;
                 if (tmp > nodes)
                         nodes++;
         }
+	put_online_cpus_atomic();
         return nodes;
 }
 
diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c
index b9589c1..c9bb028 100644
--- a/arch/powerpc/oprofile/op_model_cell.c
+++ b/arch/powerpc/oprofile/op_model_cell.c
@@ -22,6 +22,7 @@
 #include <linux/oprofile.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/timer.h>
 #include <asm/cell-pmu.h>
@@ -463,6 +464,7 @@ static void cell_virtual_cntr(unsigned long data)
 	 * not both playing with the counters on the same node.
 	 */
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	prev_hdw_thread = hdw_thread;
@@ -550,6 +552,7 @@ static void cell_virtual_cntr(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	mod_timer(&timer_virt_cntr, jiffies + HZ / 10);
 }
@@ -608,6 +611,8 @@ static void spu_evnt_swap(unsigned long data)
 	/* Make sure spu event interrupt handler and spu event swap
 	 * don't access the counters simultaneously.
 	 */
+
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	cur_spu_evnt_phys_spu_indx = spu_evnt_phys_spu_indx;
@@ -673,6 +678,7 @@ static void spu_evnt_swap(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	/* swap approximately every 0.1 seconds */
 	mod_timer(&timer_spu_event_swap, jiffies + HZ / 25);

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 41/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: Gleb Natapov, linux-arch, Zhao Chenhui, Alexander Graf,
	xiaoguangrong, oprofile-list, wangyun, fweisbec, Rob Herring,
	cbe-oss-dev, nikunj, linux-pm, rostedt, kvm-ppc, kvm, zhong,
	netdev, linux-kernel, sbw, Srivatsa S. Bhat, linuxppc-dev

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while these code paths run in atomic context.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org
Cc: kvm-ppc@vger.kernel.org
Cc: oprofile-list@lists.sf.net
Cc: cbe-oss-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c                  |    7 ++++++-
 arch/powerpc/kernel/machine_kexec_64.c     |    4 ++--
 arch/powerpc/kernel/smp.c                  |    2 ++
 arch/powerpc/kvm/book3s_hv.c               |    5 +++--
 arch/powerpc/mm/mmu_context_nohash.c       |    3 +++
 arch/powerpc/oprofile/cell/spu_profiler.c  |    3 +++
 arch/powerpc/oprofile/cell/spu_task_sync.c |    4 ++++
 arch/powerpc/oprofile/op_model_cell.c      |    6 ++++++
 8 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ca39bac..41e9961 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -45,6 +45,7 @@
 #include <linux/irq.h>
 #include <linux/seq_file.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/profile.h>
 #include <linux/bitops.h>
 #include <linux/list.h>
@@ -410,7 +411,10 @@ void migrate_irqs(void)
 	unsigned int irq;
 	static int warned;
 	cpumask_var_t mask;
-	const struct cpumask *map = cpu_online_mask;
+	const struct cpumask *map;
+
+	get_online_cpus_atomic();
+	map = cpu_online_mask;
 
 	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
@@ -436,6 +440,7 @@ void migrate_irqs(void)
 	}
 
 	free_cpumask_var(mask);
+	put_online_cpus_atomic();
 
 	local_irq_enable();
 	mdelay(1);
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 611acdf..38f6d75 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -187,7 +187,7 @@ static void kexec_prepare_cpus_wait(int wait_state)
 	int my_cpu, i, notified=-1;
 
 	hw_breakpoint_disable();
-	my_cpu = get_cpu();
+	my_cpu = get_online_cpus_atomic();
 	/* Make sure each CPU has at least made it to the state we need.
 	 *
 	 * FIXME: There is a (slim) chance of a problem if not all of the CPUs
@@ -266,7 +266,7 @@ static void kexec_prepare_cpus(void)
 	 */
 	kexec_prepare_cpus_wait(KEXEC_STATE_REAL_MODE);
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 #else /* ! SMP */
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ee7ac5e..2123bec 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -277,9 +277,11 @@ void smp_send_debugger_break(void)
 	if (unlikely(!smp_ops))
 		return;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu)
 		if (cpu != me)
 			do_message_pass(cpu, PPC_MSG_DEBUGGER_BREAK);
+	put_online_cpus_atomic();
 }
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2efa9dd..9d8a973 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -28,6 +28,7 @@
 #include <linux/fs.h>
 #include <linux/anon_inodes.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/page-flags.h>
 #include <linux/srcu.h>
@@ -78,7 +79,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 
 	/* CPU points to the first thread of the core */
 	if (cpu != me && cpu >= 0 && cpu < nr_cpu_ids) {
@@ -88,7 +89,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		else if (cpu_online(cpu))
 			smp_send_reschedule(cpu);
 	}
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index e779642..c7bdcb4 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -194,6 +194,8 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	unsigned int i, id, cpu = smp_processor_id();
 	unsigned long *map;
 
+	get_online_cpus_atomic();
+
 	/* No lockless fast path .. yet */
 	raw_spin_lock(&context_lock);
 
@@ -280,6 +282,7 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	pr_hardcont(" -> %d\n", id);
 	set_context(id, next->pgd);
 	raw_spin_unlock(&context_lock);
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/oprofile/cell/spu_profiler.c b/arch/powerpc/oprofile/cell/spu_profiler.c
index b129d00..ab6e6c1 100644
--- a/arch/powerpc/oprofile/cell/spu_profiler.c
+++ b/arch/powerpc/oprofile/cell/spu_profiler.c
@@ -14,6 +14,7 @@
 
 #include <linux/hrtimer.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/slab.h>
 #include <asm/cell-pmu.h>
 #include <asm/time.h>
@@ -142,6 +143,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 	if (!spu_prof_running)
 		goto stop;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cbe_get_hw_thread_id(cpu))
 			continue;
@@ -177,6 +179,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 				       oprof_spu_smpl_arry_lck_flags);
 
 	}
+	put_online_cpus_atomic();
 	smp_wmb();	/* insure spu event buffer updates are written */
 			/* don't want events intermingled... */
 
diff --git a/arch/powerpc/oprofile/cell/spu_task_sync.c b/arch/powerpc/oprofile/cell/spu_task_sync.c
index 28f1af2..8464ef6 100644
--- a/arch/powerpc/oprofile/cell/spu_task_sync.c
+++ b/arch/powerpc/oprofile/cell/spu_task_sync.c
@@ -28,6 +28,7 @@
 #include <linux/oprofile.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/cpu.h>
 #include "pr_util.h"
 
 #define RELEASE_ALL 9999
@@ -448,11 +449,14 @@ static int number_of_online_nodes(void)
 {
         u32 cpu; u32 tmp;
         int nodes = 0;
+
+	get_online_cpus_atomic();
         for_each_online_cpu(cpu) {
                 tmp = cbe_cpu_to_node(cpu) + 1;
                 if (tmp > nodes)
                         nodes++;
         }
+	put_online_cpus_atomic();
         return nodes;
 }
 
diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c
index b9589c1..c9bb028 100644
--- a/arch/powerpc/oprofile/op_model_cell.c
+++ b/arch/powerpc/oprofile/op_model_cell.c
@@ -22,6 +22,7 @@
 #include <linux/oprofile.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/timer.h>
 #include <asm/cell-pmu.h>
@@ -463,6 +464,7 @@ static void cell_virtual_cntr(unsigned long data)
 	 * not both playing with the counters on the same node.
 	 */
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	prev_hdw_thread = hdw_thread;
@@ -550,6 +552,7 @@ static void cell_virtual_cntr(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	mod_timer(&timer_virt_cntr, jiffies + HZ / 10);
 }
@@ -608,6 +611,8 @@ static void spu_evnt_swap(unsigned long data)
 	/* Make sure spu event interrupt handler and spu event swap
 	 * don't access the counters simultaneously.
 	 */
+
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	cur_spu_evnt_phys_spu_indx = spu_evnt_phys_spu_indx;
@@ -673,6 +678,7 @@ static void spu_evnt_swap(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	/* swap approximately every 0.1 seconds */
 	mod_timer(&timer_spu_event_swap, jiffies + HZ / 25);

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 42/45] powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
  (?)
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Paul Mackerras, Kumar Gala,
	Zhao Chenhui, Thomas Gleixner, linuxppc-dev, Srivatsa S. Bhat

Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task doing so (which is
running on the CPU coming online) is not the hotplug writer.

It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily powerpc is one of the few places
where the CPU coming online traverses the cpu_online_mask before fully
coming online. So wrap that part under get/put_online_cpus_atomic(), to
avoid false-positive warnings from the CPU hotplug debug code.
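
In other words, the debug checks simply flag any atomic-context reader of
cpu_online_mask that is not holding the new 'lock'; the expected idiom looks
roughly like this (the helper name is illustrative, not from the kernel):

	#include <linux/cpu.h>
	#include <linux/cpumask.h>

	static bool target_cpu_is_online(int cpu)
	{
		bool ret;

		get_online_cpus_atomic();	/* keeps the answer from going stale */
		ret = cpu_online(cpu);
		put_online_cpus_atomic();

		return ret;
	}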

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/smp.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2123bec..59c9a09 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -657,6 +657,7 @@ __cpuinit void start_secondary(void *unused)
 		cpumask_set_cpu(base + i, cpu_core_mask(cpu));
 	}
 	l2_cache = cpu_to_l2cache(cpu);
+	get_online_cpus_atomic();
 	for_each_online_cpu(i) {
 		struct device_node *np = cpu_to_l2cache(i);
 		if (!np)
@@ -667,6 +668,7 @@ __cpuinit void start_secondary(void *unused)
 		}
 		of_node_put(np);
 	}
+	put_online_cpus_atomic();
 	of_node_put(l2_cache);
 
 	local_irq_enable();


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 42/45] powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Paul Mackerras, Kumar Gala,
	Zhao Chenhui, Thomas Gleixner

Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task doing so (which is
running on the CPU coming online) is not the hotplug writer.

It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily powerpc is one of the few places
where the CPU coming online traverses the cpu_online_mask before fully
coming online. So wrap that part under get/put_online_cpus_atomic(), to
avoid false-positive warnings from the CPU hotplug debug code.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/smp.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2123bec..59c9a09 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -657,6 +657,7 @@ __cpuinit void start_secondary(void *unused)
 		cpumask_set_cpu(base + i, cpu_core_mask(cpu));
 	}
 	l2_cache = cpu_to_l2cache(cpu);
+	get_online_cpus_atomic();
 	for_each_online_cpu(i) {
 		struct device_node *np = cpu_to_l2cache(i);
 		if (!np)
@@ -667,6 +668,7 @@ __cpuinit void start_secondary(void *unused)
 		}
 		of_node_put(np);
 	}
+	put_online_cpus_atomic();
 	of_node_put(l2_cache);
 
 	local_irq_enable();

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 42/45] powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Paul Mackerras, Kumar Gala,
	Zhao Chenhui

Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task doing so (which is
running on the CPU coming online) is not the hotplug writer.

It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily powerpc is one of the few places
where the CPU coming online traverses the cpu_online_mask before fully
coming online. So wrap that part under get/put_online_cpus_atomic(), to
avoid false-positive warnings from the CPU hotplug debug code.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/smp.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2123bec..59c9a09 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -657,6 +657,7 @@ __cpuinit void start_secondary(void *unused)
 		cpumask_set_cpu(base + i, cpu_core_mask(cpu));
 	}
 	l2_cache = cpu_to_l2cache(cpu);
+	get_online_cpus_atomic();
 	for_each_online_cpu(i) {
 		struct device_node *np = cpu_to_l2cache(i);
 		if (!np)
@@ -667,6 +668,7 @@ __cpuinit void start_secondary(void *unused)
 		}
 		of_node_put(np);
 	}
+	put_online_cpus_atomic();
 	of_node_put(l2_cache);
 
 	local_irq_enable();


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 42/45] powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, zhong, linux-pm, fweisbec, linux-kernel,
	rostedt, xiaoguangrong, sbw, Paul Mackerras, wangyun,
	Srivatsa S. Bhat, netdev, Thomas Gleixner, linuxppc-dev,
	Zhao Chenhui

Bringing a secondary CPU online is a special case in which accessing
the cpu_online_mask is safe, even though the task doing so (which is
running on the CPU coming online) is not the hotplug writer.

It is a little hard to teach this to the debugging checks under
CONFIG_DEBUG_HOTPLUG_CPU. But luckily powerpc is one of the few places
where the CPU coming online traverses the cpu_online_mask before fully
coming online. So wrap that part under get/put_online_cpus_atomic(), to
avoid false-positive warnings from the CPU hotplug debug code.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/smp.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 2123bec..59c9a09 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -657,6 +657,7 @@ __cpuinit void start_secondary(void *unused)
 		cpumask_set_cpu(base + i, cpu_core_mask(cpu));
 	}
 	l2_cache = cpu_to_l2cache(cpu);
+	get_online_cpus_atomic();
 	for_each_online_cpu(i) {
 		struct device_node *np = cpu_to_l2cache(i);
 		if (!np)
@@ -667,6 +668,7 @@ __cpuinit void start_secondary(void *unused)
 		}
 		of_node_put(np);
 	}
+	put_online_cpus_atomic();
 	of_node_put(l2_cache);
 
 	local_irq_enable();

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
                     ` (2 preceding siblings ...)
  (?)
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Paul Mundt, Thomas Gleixner, linux-sh,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
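
The conversion is mechanical: the preempt_disable()/preempt_enable()
pair that used to bracket the cross-CPU operation becomes the new
hotplug reader bracket. A minimal sketch of the before/after shape (the
IPI callback and its argument are placeholders, not the actual sh code):

	#include <linux/cpu.h>
	#include <linux/smp.h>

	static void demo_ipi(void *info)
	{
		/* runs on the other online CPUs */
	}

	static void demo_cross_call(void *info)
	{
		/* was: preempt_disable(); */
		get_online_cpus_atomic();	/* CPUs cannot go offline from here... */

		smp_call_function(demo_ipi, info, 1);

		/* was: preempt_enable(); */
		put_online_cpus_atomic();	/* ...until here */
	}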

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Paul Mundt, Thomas Gleixner, linux-sh

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Paul Mundt, linux-sh

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, zhong, linux-pm, fweisbec, linux-sh,
	linux-kernel, rostedt, xiaoguangrong, sbw, Paul Mundt, wangyun,
	Srivatsa S. Bhat, netdev, Thomas Gleixner, linuxppc-dev

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 44/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
                     ` (2 preceding siblings ...)
  (?)
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David S. Miller, Sam Ravnborg, Thomas Gleixner,
	Dave Kleikamp, sparclinux, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
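
As the hunks below suggest, get_online_cpus_atomic() can also stand in
for get_cpu(): it returns the current CPU id while additionally pinning
CPU hotplug. A small illustrative sketch (demo_kick() and 'target' are
made-up names, not sparc code):

	static void demo_kick(int target)
	{
		int cpu = get_online_cpus_atomic();	/* like get_cpu(), but also blocks offline */

		if (target != cpu && cpu_online(target))
			smp_send_reschedule(target);	/* 'target' cannot vanish in between */

		put_online_cpus_atomic();		/* replaces put_cpu() */
	}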

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sparc/kernel/smp_64.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..4f71a95 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -792,7 +792,9 @@ static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 d
 /* Send cross call to all processors except self. */
 static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
 {
+	get_online_cpus_atomic();
 	smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
+	put_online_cpus_atomic();
 }
 
 extern unsigned long xcall_sync_tick;
@@ -896,7 +898,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	atomic_inc(&dcpage_flushes);
 #endif
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	if (cpu == this_cpu) {
 		__local_flush_dcache_page(page);
@@ -922,7 +924,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -933,7 +935,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	if (tlb_type == hypervisor)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
@@ -958,7 +960,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	}
 	__local_flush_dcache_page(page);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
@@ -1150,6 +1152,7 @@ void smp_capture(void)
 {
 	int result = atomic_add_ret(1, &smp_capture_depth);
 
+	get_online_cpus_atomic();
 	if (result == 1) {
 		int ncpus = num_online_cpus();
 
@@ -1166,6 +1169,7 @@ void smp_capture(void)
 		printk("done\n");
 #endif
 	}
+	put_online_cpus_atomic();
 }
 
 void smp_release(void)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 44/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David S. Miller, Sam Ravnborg, Thomas Gleixner,
	Dave Kleikamp, sparclinux

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sparc/kernel/smp_64.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..4f71a95 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -792,7 +792,9 @@ static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 d
 /* Send cross call to all processors except self. */
 static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
 {
+	get_online_cpus_atomic();
 	smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
+	put_online_cpus_atomic();
 }
 
 extern unsigned long xcall_sync_tick;
@@ -896,7 +898,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	atomic_inc(&dcpage_flushes);
 #endif
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	if (cpu == this_cpu) {
 		__local_flush_dcache_page(page);
@@ -922,7 +924,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -933,7 +935,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	if (tlb_type == hypervisor)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
@@ -958,7 +960,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	}
 	__local_flush_dcache_page(page);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
@@ -1150,6 +1152,7 @@ void smp_capture(void)
 {
 	int result = atomic_add_ret(1, &smp_capture_depth);
 
+	get_online_cpus_atomic();
 	if (result == 1) {
 		int ncpus = num_online_cpus();
 
@@ -1166,6 +1169,7 @@ void smp_capture(void)
 		printk("done\n");
 #endif
 	}
+	put_online_cpus_atomic();
 }
 
 void smp_release(void)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 44/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David S. Miller, Sam Ravnborg, Dave Kleikamp,
	sparclinux

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sparc/kernel/smp_64.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..4f71a95 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -792,7 +792,9 @@ static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 d
 /* Send cross call to all processors except self. */
 static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
 {
+	get_online_cpus_atomic();
 	smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
+	put_online_cpus_atomic();
 }
 
 extern unsigned long xcall_sync_tick;
@@ -896,7 +898,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	atomic_inc(&dcpage_flushes);
 #endif
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	if (cpu == this_cpu) {
 		__local_flush_dcache_page(page);
@@ -922,7 +924,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -933,7 +935,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	if (tlb_type == hypervisor)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
@@ -958,7 +960,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	}
 	__local_flush_dcache_page(page);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
@@ -1150,6 +1152,7 @@ void smp_capture(void)
 {
 	int result = atomic_add_ret(1, &smp_capture_depth);
 
+	get_online_cpus_atomic();
 	if (result == 1) {
 		int ncpus = num_online_cpus();
 
@@ -1166,6 +1169,7 @@ void smp_capture(void)
 		printk("done\n");
 #endif
 	}
+	put_online_cpus_atomic();
 }
 
 void smp_release(void)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 44/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, zhong, linux-pm, fweisbec, linux-kernel,
	rostedt, xiaoguangrong, Sam Ravnborg, sbw, Dave Kleikamp,
	wangyun, Srivatsa S. Bhat, netdev, sparclinux, Thomas Gleixner,
	linuxppc-dev, David S. Miller

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sparc/kernel/smp_64.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..4f71a95 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -792,7 +792,9 @@ static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 d
 /* Send cross call to all processors except self. */
 static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
 {
+	get_online_cpus_atomic();
 	smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
+	put_online_cpus_atomic();
 }
 
 extern unsigned long xcall_sync_tick;
@@ -896,7 +898,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	atomic_inc(&dcpage_flushes);
 #endif
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	if (cpu == this_cpu) {
 		__local_flush_dcache_page(page);
@@ -922,7 +924,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -933,7 +935,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	if (tlb_type == hypervisor)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
@@ -958,7 +960,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	}
 	__local_flush_dcache_page(page);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
@@ -1150,6 +1152,7 @@ void smp_capture(void)
 {
 	int result = atomic_add_ret(1, &smp_capture_depth);
 
+	get_online_cpus_atomic();
 	if (result == 1) {
 		int ncpus = num_online_cpus();
 
@@ -1166,6 +1169,7 @@ void smp_capture(void)
 		printk("done\n");
 #endif
 	}
+	put_online_cpus_atomic();
 }
 
 void smp_release(void)

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:52 ` Srivatsa S. Bhat
  (?)
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Chris Metcalf, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
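
These files did not previously use the hotplug APIs, so the patch also
adds the <linux/cpu.h> include. Each conversion has the same shape: hold
the reader bracket across the remote flush so the cpumask it operates on
stays stable. The module.c hunk below, reduced to a standalone sketch:

	#include <linux/cpu.h>

	static void demo_flush_all_online_icaches(void)
	{
		get_online_cpus_atomic();	/* cpu_online_mask is stable from here on */
		flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
			     0, 0, 0, NULL, NULL, 0);
		put_online_cpus_atomic();
	}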

Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/tile/kernel/module.c |    3 +++
 arch/tile/kernel/tlb.c    |   15 +++++++++++++++
 arch/tile/mm/homecache.c  |    3 +++
 3 files changed, 21 insertions(+)

diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c
index 4918d91..db7d858 100644
--- a/arch/tile/kernel/module.c
+++ b/arch/tile/kernel/module.c
@@ -20,6 +20,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/cpu.h>
 #include <asm/pgtable.h>
 #include <asm/homecache.h>
 #include <arch/opcode.h>
@@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region)
 	vfree(module_region);
 
 	/* Globally flush the L1 icache. */
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     0, 0, 0, NULL, NULL, 0);
+	put_online_cpus_atomic();
 
 	/*
 	 * FIXME: If module_region == mod->module_init, trim exception
diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c
index 3fd54d5..a32b9dd 100644
--- a/arch/tile/kernel/tlb.c
+++ b/arch/tile/kernel/tlb.c
@@ -14,6 +14,7 @@
  */
 
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/hugetlb.h>
 #include <asm/tlbflush.h>
@@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	HV_Remote_ASID asids[NR_CPUS];
 	int i = 0, cpu;
+
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, mm_cpumask(mm)) {
 		HV_Remote_ASID *asid = &asids[i++];
 		asid->y = cpu / smp_topology.width;
@@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm),
 		     0, 0, 0, NULL, asids, i);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_current_task(void)
@@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm,
 {
 	unsigned long size = vma_kernel_pagesize(vma);
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm),
 		     va, size, size, mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
@@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long size = vma_kernel_pagesize(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask, NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Chris Metcalf

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/tile/kernel/module.c |    3 +++
 arch/tile/kernel/tlb.c    |   15 +++++++++++++++
 arch/tile/mm/homecache.c  |    3 +++
 3 files changed, 21 insertions(+)

diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c
index 4918d91..db7d858 100644
--- a/arch/tile/kernel/module.c
+++ b/arch/tile/kernel/module.c
@@ -20,6 +20,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/cpu.h>
 #include <asm/pgtable.h>
 #include <asm/homecache.h>
 #include <arch/opcode.h>
@@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region)
 	vfree(module_region);
 
 	/* Globally flush the L1 icache. */
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     0, 0, 0, NULL, NULL, 0);
+	put_online_cpus_atomic();
 
 	/*
 	 * FIXME: If module_region == mod->module_init, trim exception
diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c
index 3fd54d5..a32b9dd 100644
--- a/arch/tile/kernel/tlb.c
+++ b/arch/tile/kernel/tlb.c
@@ -14,6 +14,7 @@
  */
 
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/hugetlb.h>
 #include <asm/tlbflush.h>
@@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	HV_Remote_ASID asids[NR_CPUS];
 	int i = 0, cpu;
+
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, mm_cpumask(mm)) {
 		HV_Remote_ASID *asid = &asids[i++];
 		asid->y = cpu / smp_topology.width;
@@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm),
 		     0, 0, 0, NULL, asids, i);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_current_task(void)
@@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm,
 {
 	unsigned long size = vma_kernel_pagesize(vma);
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm),
 		     va, size, size, mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
@@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long size = vma_kernel_pagesize(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask, NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 45/45] tile: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:00 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: linux-arch, nikunj, zhong, linux-pm, Chris Metcalf, fweisbec,
	linux-kernel, rostedt, xiaoguangrong, sbw, wangyun,
	Srivatsa S. Bhat, netdev, linuxppc-dev

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/tile/kernel/module.c |    3 +++
 arch/tile/kernel/tlb.c    |   15 +++++++++++++++
 arch/tile/mm/homecache.c  |    3 +++
 3 files changed, 21 insertions(+)

diff --git a/arch/tile/kernel/module.c b/arch/tile/kernel/module.c
index 4918d91..db7d858 100644
--- a/arch/tile/kernel/module.c
+++ b/arch/tile/kernel/module.c
@@ -20,6 +20,7 @@
 #include <linux/fs.h>
 #include <linux/string.h>
 #include <linux/kernel.h>
+#include <linux/cpu.h>
 #include <asm/pgtable.h>
 #include <asm/homecache.h>
 #include <arch/opcode.h>
@@ -79,8 +80,10 @@ void module_free(struct module *mod, void *module_region)
 	vfree(module_region);
 
 	/* Globally flush the L1 icache. */
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     0, 0, 0, NULL, NULL, 0);
+	put_online_cpus_atomic();
 
 	/*
 	 * FIXME: If module_region == mod->module_init, trim exception
diff --git a/arch/tile/kernel/tlb.c b/arch/tile/kernel/tlb.c
index 3fd54d5..a32b9dd 100644
--- a/arch/tile/kernel/tlb.c
+++ b/arch/tile/kernel/tlb.c
@@ -14,6 +14,7 @@
  */
 
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/hugetlb.h>
 #include <asm/tlbflush.h>
@@ -35,6 +36,8 @@ void flush_tlb_mm(struct mm_struct *mm)
 {
 	HV_Remote_ASID asids[NR_CPUS];
 	int i = 0, cpu;
+
+	get_online_cpus_atomic();
 	for_each_cpu(cpu, mm_cpumask(mm)) {
 		HV_Remote_ASID *asid = &asids[i++];
 		asid->y = cpu / smp_topology.width;
@@ -43,6 +46,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	flush_remote(0, HV_FLUSH_EVICT_L1I, mm_cpumask(mm),
 		     0, 0, 0, NULL, asids, i);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_current_task(void)
@@ -55,8 +59,11 @@ void flush_tlb_page_mm(struct vm_area_struct *vma, struct mm_struct *mm,
 {
 	unsigned long size = vma_kernel_pagesize(vma);
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm),
 		     va, size, size, mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long va)
@@ -71,13 +78,18 @@ void flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long size = vma_kernel_pagesize(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	int cache = (vma->vm_flags & VM_EXEC) ? HV_FLUSH_EVICT_L1I : 0;
+
+	get_online_cpus_atomic();
 	flush_remote(0, cache, mm_cpumask(mm), start, end - start, size,
 		     mm_cpumask(mm), NULL, 0);
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_all(void)
 {
 	int i;
+
+	get_online_cpus_atomic();
 	for (i = 0; ; ++i) {
 		HV_VirtAddrRange r = hv_inquire_virtual(i);
 		if (r.size == 0)
@@ -89,10 +101,13 @@ void flush_tlb_all(void)
 			     r.start, r.size, HPAGE_SIZE, cpu_online_mask,
 			     NULL, 0);
 	}
+	put_online_cpus_atomic();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L1I, cpu_online_mask,
 		     start, end - start, PAGE_SIZE, cpu_online_mask, NULL, 0);
+	put_online_cpus_atomic();
 }
diff --git a/arch/tile/mm/homecache.c b/arch/tile/mm/homecache.c
index 1ae9119..7ff5bf0 100644
--- a/arch/tile/mm/homecache.c
+++ b/arch/tile/mm/homecache.c
@@ -397,9 +397,12 @@ void homecache_change_page_home(struct page *page, int order, int home)
 	BUG_ON(page_count(page) > 1);
 	BUG_ON(page_mapcount(page) != 0);
 	kva = (unsigned long) page_address(page);
+
+	get_online_cpus_atomic();
 	flush_remote(0, HV_FLUSH_EVICT_L2, &cpu_cacheable_map,
 		     kva, pages * PAGE_SIZE, PAGE_SIZE, cpu_online_mask,
 		     NULL, 0);
+	put_online_cpus_atomic();
 
 	for (i = 0; i < pages; ++i, kva += PAGE_SIZE) {
 		pte_t *ptep = virt_to_pte(NULL, kva);

^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 35/45] ia64: irq, perfmon: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:58   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:10 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Tony Luck, Fenghua Yu, Eric W. Biederman,
	linux-ia64, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
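
One detail worth noting in the hunks below is the lock nesting: the
hotplug reader bracket is always taken outside vector_lock and released
after it, so each vector operation sees a stable set of online CPUs for
the whole critical section. Schematically (ordering sketch only, not
actual code):

	unsigned long flags;

	get_online_cpus_atomic();		/* outermost: pin CPU hotplug */
	spin_lock_irqsave(&vector_lock, flags);

	/* ... manipulate vector data, e.g. via for_each_online_cpu() ... */

	spin_unlock_irqrestore(&vector_lock, flags);
	put_online_cpus_atomic();		/* dropped last */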

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/irq_ia64.c |   15 +++++++++++++++
 arch/ia64/kernel/perfmon.c  |    8 +++++++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 1034884..f58b162 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -25,6 +25,7 @@
 #include <linux/ptrace.h>
 #include <linux/signal.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/threads.h>
 #include <linux/bitops.h>
 #include <linux/irq.h>
@@ -160,9 +161,11 @@ int bind_irq_vector(int irq, int vector, cpumask_t domain)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __bind_irq_vector(irq, vector, domain);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -190,9 +193,11 @@ static void clear_irq_vector(int irq)
 {
 	unsigned long flags;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 int
@@ -204,6 +209,7 @@ ia64_native_assign_irq_vector (int irq)
 
 	vector = -ENOSPC;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -218,6 +224,7 @@ ia64_native_assign_irq_vector (int irq)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return vector;
 }
 
@@ -302,9 +309,11 @@ int irq_prepare_move(int irq, int cpu)
 	unsigned long flags;
 	int ret;
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	ret = __irq_prepare_move(irq, cpu);
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	return ret;
 }
 
@@ -320,11 +329,13 @@ void irq_complete_move(unsigned irq)
 	if (unlikely(cpu_isset(smp_processor_id(), cfg->old_domain)))
 		return;
 
+	get_online_cpus_atomic();
 	cpumask_and(&cleanup_mask, &cfg->old_domain, cpu_online_mask);
 	cfg->move_cleanup_count = cpus_weight(cleanup_mask);
 	for_each_cpu_mask(i, cleanup_mask)
 		platform_send_ipi(i, IA64_IRQ_MOVE_VECTOR, IA64_IPI_DM_INT, 0);
 	cfg->move_in_progress = 0;
+	put_online_cpus_atomic();
 }
 
 static irqreturn_t smp_irq_move_cleanup_interrupt(int irq, void *dev_id)
@@ -393,10 +404,12 @@ void destroy_and_reserve_irq(unsigned int irq)
 
 	dynamic_irq_cleanup(irq);
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	__clear_irq_vector(irq);
 	irq_status[irq] = IRQ_RSVD;
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 }
 
 /*
@@ -409,6 +422,7 @@ int create_irq(void)
 	cpumask_t domain = CPU_MASK_NONE;
 
 	irq = vector = -ENOSPC;
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&vector_lock, flags);
 	for_each_online_cpu(cpu) {
 		domain = vector_allocation_domain(cpu);
@@ -424,6 +438,7 @@ int create_irq(void)
 	BUG_ON(__bind_irq_vector(irq, vector, domain));
  out:
 	spin_unlock_irqrestore(&vector_lock, flags);
+	put_online_cpus_atomic();
 	if (irq >= 0)
 		dynamic_irq_init(irq);
 	return irq;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 9ea25fc..16c8303 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -6476,9 +6476,12 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	/* do the easy test first */
 	if (pfm_alt_intr_handler) return -EBUSY;
 
+	get_online_cpus_atomic();
+
 	/* one at a time in the install or remove, just fail the others */
 	if (!spin_trylock(&pfm_alt_install_check)) {
-		return -EBUSY;
+		ret = -EBUSY;
+		goto out;
 	}
 
 	/* reserve our session */
@@ -6498,6 +6501,7 @@ pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
 	pfm_alt_intr_handler = hdl;
 
 	spin_unlock(&pfm_alt_install_check);
+	put_online_cpus_atomic();
 
 	return 0;
 
@@ -6510,6 +6514,8 @@ cleanup_reserve:
 	}
 
 	spin_unlock(&pfm_alt_install_check);
+out:
+	put_online_cpus_atomic();
 
 	return ret;
 }


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 36/45] ia64: smp, tlb: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 19:59   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:11 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Tony Luck, Fenghua Yu, linux-ia64,
	Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
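
Note that the region between get_online_cpus_atomic() and
put_online_cpus_atomic() is atomic, just like the preempt_disable()
section it replaces, so nothing in it may sleep; smp_flush_tlb_mm()
below keeps its GFP_ATOMIC cpumask allocation for exactly that reason.
A condensed sketch of that constraint (names and the elided flush logic
are illustrative):

	static void demo_flush(void)
	{
		cpumask_var_t cpus;

		get_online_cpus_atomic();	/* atomic context: no sleeping allowed */
		if (alloc_cpumask_var(&cpus, GFP_ATOMIC)) {	/* GFP_KERNEL would be a bug here */
			/* ... build the mask and send the flush IPIs ... */
			free_cpumask_var(cpus);
		}
		put_online_cpus_atomic();
	}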

Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/ia64/kernel/smp.c |   12 ++++++------
 arch/ia64/mm/tlb.c     |    4 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/ia64/kernel/smp.c b/arch/ia64/kernel/smp.c
index 9fcd4e6..25991ba 100644
--- a/arch/ia64/kernel/smp.c
+++ b/arch/ia64/kernel/smp.c
@@ -24,6 +24,7 @@
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/kernel_stat.h>
 #include <linux/mm.h>
 #include <linux/cache.h>
@@ -259,8 +260,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 	cpumask_t cpumask = xcpumask;
 	int mycpu, cpu, flush_mycpu = 0;
 
-	preempt_disable();
-	mycpu = smp_processor_id();
+	mycpu = get_online_cpus_atomic();
 
 	for_each_cpu_mask(cpu, cpumask)
 		counts[cpu] = local_tlb_flush_counts[cpu].count & 0xffff;
@@ -280,7 +280,7 @@ smp_flush_tlb_cpumask(cpumask_t xcpumask)
 		while(counts[cpu] == (local_tlb_flush_counts[cpu].count & 0xffff))
 			udelay(FLUSH_DELAY);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void
@@ -293,12 +293,12 @@ void
 smp_flush_tlb_mm (struct mm_struct *mm)
 {
 	cpumask_var_t cpus;
-	preempt_disable();
+	get_online_cpus_atomic();
 	/* this happens for the common case of a single-threaded fork():  */
 	if (likely(mm == current->active_mm && atomic_read(&mm->mm_users) == 1))
 	{
 		local_finish_flush_tlb_mm(mm);
-		preempt_enable();
+		put_online_cpus_atomic();
 		return;
 	}
 	if (!alloc_cpumask_var(&cpus, GFP_ATOMIC)) {
@@ -313,7 +313,7 @@ smp_flush_tlb_mm (struct mm_struct *mm)
 	local_irq_disable();
 	local_finish_flush_tlb_mm(mm);
 	local_irq_enable();
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void arch_send_call_function_single_ipi(int cpu)
diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c
index ed61297..8c55ef5 100644
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -87,11 +87,11 @@ wrap_mmu_context (struct mm_struct *mm)
 	 * can't call flush_tlb_all() here because of race condition
 	 * with O(1) scheduler [EF]
 	 */
-	cpu = get_cpu(); /* prevent preemption/migration */
+	cpu = get_online_cpus_atomic(); /* prevent preemption/migration */
 	for_each_online_cpu(i)
 		if (i != cpu)
 			per_cpu(ia64_need_tlb_flush, i) = 1;
-	put_cpu();
+	put_online_cpus_atomic();
 	local_flush_tlb_all();
 }
 


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 41/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:12 UTC (permalink / raw)
  To: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Benjamin Herrenschmidt, Gleb Natapov,
	Alexander Graf, Rob Herring, Grant Likely, Kumar Gala,
	Zhao Chenhui, linuxppc-dev, kvm, kvm-ppc, oprofile-list,
	cbe-oss-dev, Srivatsa S. Bhat

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.
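
The migrate_irqs() hunk below also shows a subtle part of the
conversion: the local snapshot of cpu_online_mask (the 'map' pointer) is
now taken only after the reader bracket is held, so the set of online
CPUs cannot change between the snapshot and its use. In schematic form
(illustrative only):

	const struct cpumask *map;

	get_online_cpus_atomic();
	map = cpu_online_mask;		/* sampled under the bracket */

	/* ... walk 'map' and retarget interrupts ... */

	put_online_cpus_atomic();	/* 'map' is stale beyond this point */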

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Zhao Chenhui <chenhui.zhao@freescale.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: kvm@vger.kernel.org
Cc: kvm-ppc@vger.kernel.org
Cc: oprofile-list@lists.sf.net
Cc: cbe-oss-dev@lists.ozlabs.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/irq.c                  |    7 ++++++-
 arch/powerpc/kernel/machine_kexec_64.c     |    4 ++--
 arch/powerpc/kernel/smp.c                  |    2 ++
 arch/powerpc/kvm/book3s_hv.c               |    5 +++--
 arch/powerpc/mm/mmu_context_nohash.c       |    3 +++
 arch/powerpc/oprofile/cell/spu_profiler.c  |    3 +++
 arch/powerpc/oprofile/cell/spu_task_sync.c |    4 ++++
 arch/powerpc/oprofile/op_model_cell.c      |    6 ++++++
 8 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index ca39bac..41e9961 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -45,6 +45,7 @@
 #include <linux/irq.h>
 #include <linux/seq_file.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/profile.h>
 #include <linux/bitops.h>
 #include <linux/list.h>
@@ -410,7 +411,10 @@ void migrate_irqs(void)
 	unsigned int irq;
 	static int warned;
 	cpumask_var_t mask;
-	const struct cpumask *map = cpu_online_mask;
+	const struct cpumask *map;
+
+	get_online_cpus_atomic();
+	map = cpu_online_mask;
 
 	alloc_cpumask_var(&mask, GFP_ATOMIC);
 
@@ -436,6 +440,7 @@ void migrate_irqs(void)
 	}
 
 	free_cpumask_var(mask);
+	put_online_cpus_atomic();
 
 	local_irq_enable();
 	mdelay(1);
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 611acdf..38f6d75 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -187,7 +187,7 @@ static void kexec_prepare_cpus_wait(int wait_state)
 	int my_cpu, i, notified=-1;
 
 	hw_breakpoint_disable();
-	my_cpu = get_cpu();
+	my_cpu = get_online_cpus_atomic();
 	/* Make sure each CPU has at least made it to the state we need.
 	 *
 	 * FIXME: There is a (slim) chance of a problem if not all of the CPUs
@@ -266,7 +266,7 @@ static void kexec_prepare_cpus(void)
 	 */
 	kexec_prepare_cpus_wait(KEXEC_STATE_REAL_MODE);
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 #else /* ! SMP */
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index ee7ac5e..2123bec 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -277,9 +277,11 @@ void smp_send_debugger_break(void)
 	if (unlikely(!smp_ops))
 		return;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu)
 		if (cpu != me)
 			do_message_pass(cpu, PPC_MSG_DEBUGGER_BREAK);
+	put_online_cpus_atomic();
 }
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2efa9dd..9d8a973 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -28,6 +28,7 @@
 #include <linux/fs.h>
 #include <linux/anon_inodes.h>
 #include <linux/cpumask.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/page-flags.h>
 #include <linux/srcu.h>
@@ -78,7 +79,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		++vcpu->stat.halt_wakeup;
 	}
 
-	me = get_cpu();
+	me = get_online_cpus_atomic();
 
 	/* CPU points to the first thread of the core */
 	if (cpu != me && cpu >= 0 && cpu < nr_cpu_ids) {
@@ -88,7 +89,7 @@ void kvmppc_fast_vcpu_kick(struct kvm_vcpu *vcpu)
 		else if (cpu_online(cpu))
 			smp_send_reschedule(cpu);
 	}
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index e779642..c7bdcb4 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -194,6 +194,8 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	unsigned int i, id, cpu = smp_processor_id();
 	unsigned long *map;
 
+	get_online_cpus_atomic();
+
 	/* No lockless fast path .. yet */
 	raw_spin_lock(&context_lock);
 
@@ -280,6 +282,7 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 	pr_hardcont(" -> %d\n", id);
 	set_context(id, next->pgd);
 	raw_spin_unlock(&context_lock);
+	put_online_cpus_atomic();
 }
 
 /*
diff --git a/arch/powerpc/oprofile/cell/spu_profiler.c b/arch/powerpc/oprofile/cell/spu_profiler.c
index b129d00..ab6e6c1 100644
--- a/arch/powerpc/oprofile/cell/spu_profiler.c
+++ b/arch/powerpc/oprofile/cell/spu_profiler.c
@@ -14,6 +14,7 @@
 
 #include <linux/hrtimer.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/slab.h>
 #include <asm/cell-pmu.h>
 #include <asm/time.h>
@@ -142,6 +143,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 	if (!spu_prof_running)
 		goto stop;
 
+	get_online_cpus_atomic();
 	for_each_online_cpu(cpu) {
 		if (cbe_get_hw_thread_id(cpu))
 			continue;
@@ -177,6 +179,7 @@ static enum hrtimer_restart profile_spus(struct hrtimer *timer)
 				       oprof_spu_smpl_arry_lck_flags);
 
 	}
+	put_online_cpus_atomic();
 	smp_wmb();	/* insure spu event buffer updates are written */
 			/* don't want events intermingled... */
 
diff --git a/arch/powerpc/oprofile/cell/spu_task_sync.c b/arch/powerpc/oprofile/cell/spu_task_sync.c
index 28f1af2..8464ef6 100644
--- a/arch/powerpc/oprofile/cell/spu_task_sync.c
+++ b/arch/powerpc/oprofile/cell/spu_task_sync.c
@@ -28,6 +28,7 @@
 #include <linux/oprofile.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/cpu.h>
 #include "pr_util.h"
 
 #define RELEASE_ALL 9999
@@ -448,11 +449,14 @@ static int number_of_online_nodes(void)
 {
         u32 cpu; u32 tmp;
         int nodes = 0;
+
+	get_online_cpus_atomic();
         for_each_online_cpu(cpu) {
                 tmp = cbe_cpu_to_node(cpu) + 1;
                 if (tmp > nodes)
                         nodes++;
         }
+	put_online_cpus_atomic();
         return nodes;
 }
 
diff --git a/arch/powerpc/oprofile/op_model_cell.c b/arch/powerpc/oprofile/op_model_cell.c
index b9589c1..c9bb028 100644
--- a/arch/powerpc/oprofile/op_model_cell.c
+++ b/arch/powerpc/oprofile/op_model_cell.c
@@ -22,6 +22,7 @@
 #include <linux/oprofile.h>
 #include <linux/percpu.h>
 #include <linux/smp.h>
+#include <linux/cpu.h>
 #include <linux/spinlock.h>
 #include <linux/timer.h>
 #include <asm/cell-pmu.h>
@@ -463,6 +464,7 @@ static void cell_virtual_cntr(unsigned long data)
 	 * not both playing with the counters on the same node.
 	 */
 
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	prev_hdw_thread = hdw_thread;
@@ -550,6 +552,7 @@ static void cell_virtual_cntr(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	mod_timer(&timer_virt_cntr, jiffies + HZ / 10);
 }
@@ -608,6 +611,8 @@ static void spu_evnt_swap(unsigned long data)
 	/* Make sure spu event interrupt handler and spu event swap
 	 * don't access the counters simultaneously.
 	 */
+
+	get_online_cpus_atomic();
 	spin_lock_irqsave(&cntr_lock, flags);
 
 	cur_spu_evnt_phys_spu_indx = spu_evnt_phys_spu_indx;
@@ -673,6 +678,7 @@ static void spu_evnt_swap(unsigned long data)
 	}
 
 	spin_unlock_irqrestore(&cntr_lock, flags);
+	put_online_cpus_atomic();
 
 	/* swap approximately every 0.1 seconds */
 	mod_timer(&timer_spu_event_swap, jiffies + HZ / 25);


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:12 UTC (permalink / raw)
  To: peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, Paul Mundt, Thomas Gleixner, linux-sh

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 				cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 				cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* Re: [PATCH v3 16/45] rcu: Use cpu_is_offline_nocheck() to avoid false-positive warnings
  2013-06-27 19:55   ` Srivatsa S. Bhat
@ 2013-06-27 20:12     ` Paul E. McKenney
  -1 siblings, 0 replies; 187+ messages in thread
From: Paul E. McKenney @ 2013-06-27 20:12 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: tglx, peterz, tj, oleg, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight, rostedt, wangyun,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Dipankar Sarma

On Fri, Jun 28, 2013 at 01:25:17AM +0530, Srivatsa S. Bhat wrote:
> In RCU code, rcu_implicit_dynticks_qs() checks if a CPU is offline,
> while being protected by a spinlock. At first, it appears as if we need to
> use the get/put_online_cpus_atomic() APIs to properly synchronize with CPU
> hotplug, once we get rid of stop_machine(). However, RCU has adequate
> synchronization with CPU hotplug, making that unnecessary. But since the
> locking details are non-trivial, it is hard to teach this to the rudimentary
> hotplug locking validator.
> 
> So use the _nocheck() variants of the cpu accessor functions to prevent false-
> positive warnings from the CPU hotplug debug code. Also, add a comment
> explaining the hotplug synchronization design used in RCU, so that its easy
> to see why it is justified to use the _nocheck() variants.
> 
> Cc: Dipankar Sarma <dipankar@in.ibm.com>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> ---
> 
>  kernel/rcutree.c |   12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index cf3adc6..ced28a45 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -794,7 +794,17 @@ static int rcu_implicit_dynticks_qs(struct rcu_data *rdp)
>  	if (ULONG_CMP_GE(rdp->rsp->gp_start + 2, jiffies))
>  		return 0;  /* Grace period is not old enough. */
>  	barrier();
> -	if (cpu_is_offline(rdp->cpu)) {
> +
> +	/*
> +	 * It is safe to use the _nocheck() version of cpu_is_offline() here
> +	 * (to avoid false-positive warnings from CPU hotplug debug code),
> +	 * because:
> +	 * 1. rcu_gp_init() holds off CPU hotplug operations during grace
> +	 *    period initialization.
> +	 * 2. The current grace period has not ended yet.
> +	 * So it is safe to sample the offline state without synchronization.
> +	 */
> +	if (cpu_is_offline_nocheck(rdp->cpu)) {
>  		trace_rcu_fqs(rdp->rsp->name, rdp->gpnum, rdp->cpu, "ofl");
>  		rdp->offline_fqs++;
>  		return 1;
> 


^ permalink raw reply	[flat|nested] 187+ messages in thread


* [PATCH v3 44/45] sparc: Use get/put_online_cpus_atomic() to prevent CPU offline
@ 2013-06-27 20:00   ` Srivatsa S. Bhat
  0 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-06-27 20:12 UTC (permalink / raw)
  To: peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung, walken,
	vincent.guittot, laijs, David.Laight
  Cc: rostedt, wangyun, xiaoguangrong, sbw, fweisbec, zhong, nikunj,
	srivatsa.bhat, linux-pm, linux-arch, linuxppc-dev, netdev,
	linux-kernel, David S. Miller, Sam Ravnborg, Thomas Gleixner,
	Dave Kleikamp, sparclinux

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline, while invoking from atomic context.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: sparclinux@vger.kernel.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 arch/sparc/kernel/smp_64.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..4f71a95 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -792,7 +792,9 @@ static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 d
 /* Send cross call to all processors except self. */
 static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
 {
+	get_online_cpus_atomic();
 	smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
+	put_online_cpus_atomic();
 }
 
 extern unsigned long xcall_sync_tick;
@@ -896,7 +898,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 	atomic_inc(&dcpage_flushes);
 #endif
 
-	this_cpu = get_cpu();
+	this_cpu = get_online_cpus_atomic();
 
 	if (cpu == this_cpu) {
 		__local_flush_dcache_page(page);
@@ -922,7 +924,7 @@ void smp_flush_dcache_page_impl(struct page *page, int cpu)
 		}
 	}
 
-	put_cpu();
+	put_online_cpus_atomic();
 }
 
 void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
@@ -933,7 +935,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	if (tlb_type == hypervisor)
 		return;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 
 #ifdef CONFIG_DEBUG_DCFLUSH
 	atomic_inc(&dcpage_flushes);
@@ -958,7 +960,7 @@ void flush_dcache_page_all(struct mm_struct *mm, struct page *page)
 	}
 	__local_flush_dcache_page(page);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)
@@ -1150,6 +1152,7 @@ void smp_capture(void)
 {
 	int result = atomic_add_ret(1, &smp_capture_depth);
 
+	get_online_cpus_atomic();
 	if (result == 1) {
 		int ncpus = num_online_cpus();
 
@@ -1166,6 +1169,7 @@ void smp_capture(void)
 		printk("done\n");
 #endif
 	}
+	put_online_cpus_atomic();
 }
 
 void smp_release(void)


^ permalink raw reply related	[flat|nested] 187+ messages in thread

* Re: [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-06-27 19:54   ` Srivatsa S. Bhat
@ 2013-07-02  5:32     ` Michael Wang
  -1 siblings, 0 replies; 187+ messages in thread
From: Michael Wang @ 2013-07-02  5:32 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight, rostedt,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Wang YanQing,
	Shaohua Li, Jan Beulich, liguang

Hi, Srivatsa

On 06/28/2013 03:54 AM, Srivatsa S. Bhat wrote:
[snip]
> @@ -625,8 +632,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>   * The function might sleep if the GFP flags indicates a non
>   * atomic allocation is allowed.
>   *
> - * Preemption is disabled to protect against CPUs going offline but not online.
> - * CPUs going online during the call will not be seen or sent an IPI.
> + * We use get/put_online_cpus_atomic() to protect against CPUs going
> + * offline but not online. CPUs going online during the call will
> + * not be seen or sent an IPI.

I was a little confused by this comment: if the offline CPU still has a
chance to come online, then there is a chance that we will pick it up in
for_each_online_cpu(), isn't there? Did I miss some point?

Regards,
Michael Wang

>   *
>   * You must not call this function with disabled interrupts or
>   * from a hardware interrupt handler or from a bottom half handler.
> @@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
>  	might_sleep_if(gfp_flags & __GFP_WAIT);
> 
>  	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
> -		preempt_disable();
> +		get_online_cpus_atomic();
>  		for_each_online_cpu(cpu)
>  			if (cond_func(cpu, info))
>  				cpumask_set_cpu(cpu, cpus);
>  		on_each_cpu_mask(cpus, func, info, wait);
> -		preempt_enable();
> +		put_online_cpus_atomic();
>  		free_cpumask_var(cpus);
>  	} else {
>  		/*
>  		 * No free cpumask, bother. No matter, we'll
>  		 * just have to IPI them one by one.
>  		 */
> -		preempt_disable();
> +		get_online_cpus_atomic();
>  		for_each_online_cpu(cpu)
>  			if (cond_func(cpu, info)) {
>  				ret = smp_call_function_single(cpu, func,
>  								info, wait);
>  				WARN_ON_ONCE(!ret);
>  			}
> -		preempt_enable();
> +		put_online_cpus_atomic();
>  	}
>  }
>  EXPORT_SYMBOL(on_each_cpu_cond);
> 


^ permalink raw reply	[flat|nested] 187+ messages in thread


* Re: [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-07-02  5:32     ` Michael Wang
@ 2013-07-02  8:25       ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-07-02  8:25 UTC (permalink / raw)
  To: Michael Wang
  Cc: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight, rostedt,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Wang YanQing,
	Shaohua Li, Jan Beulich, liguang

Hi Michael,

On 07/02/2013 11:02 AM, Michael Wang wrote:
> Hi, Srivatsa
> 
> On 06/28/2013 03:54 AM, Srivatsa S. Bhat wrote:
> [snip]
>> @@ -625,8 +632,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>>   * The function might sleep if the GFP flags indicates a non
>>   * atomic allocation is allowed.
>>   *
>> - * Preemption is disabled to protect against CPUs going offline but not online.
>> - * CPUs going online during the call will not be seen or sent an IPI.
>> + * We use get/put_online_cpus_atomic() to protect against CPUs going
>> + * offline but not online. CPUs going online during the call will
>> + * not be seen or sent an IPI.
> 
> I was a little confused by this comment: if the offline CPU still has a
> chance to come online, then there is a chance that we will pick it up in
> for_each_online_cpu(), isn't there? Did I miss some point?
>

Whether or not the newly onlined CPU is observed in our for_each_online_cpu()
loop, is dependent on timing. On top of that, there are 2 paths in the code:
one which uses a temporary cpumask and the other which doesn't. In the former
case, if a CPU comes online _after_ we populate the temporary cpumask, then
we won't send an IPI to that cpu, since the temporary cpumask doesn't contain
that CPU. Whereas, if we observe the newly onlined CPU in the for_each_online_cpu()
loop itself (either in the former or the latter case), then yes, we will send
the IPI to that CPU.
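
For illustration, a minimal sketch of the temporary-cpumask path being
described, with the window marked (sketch_cond_ipi(), my_cond() and
my_func() are made-up placeholder names; the hotplug, cpumask and IPI
helpers are the ones used in the quoted diff):

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/smp.h>

static bool my_cond(int cpu, void *info) { return true; }
static void my_func(void *info) { }

static void sketch_cond_ipi(void *info)
{
	cpumask_var_t cpus;
	unsigned int cpu;

	if (!zalloc_cpumask_var(&cpus, GFP_ATOMIC))
		return;

	get_online_cpus_atomic();
	for_each_online_cpu(cpu)	/* a CPU onlined before this loop may be seen */
		if (my_cond(cpu, info))
			cpumask_set_cpu(cpu, cpus);
	/*
	 * A CPU that comes online at this point is not in 'cpus', so
	 * on_each_cpu_mask() below never sends it an IPI.  Whether a
	 * newly onlined CPU gets the IPI thus depends on when exactly
	 * it was added to cpu_online_mask.
	 */
	on_each_cpu_mask(cpus, my_func, info, 1);
	put_online_cpus_atomic();

	free_cpumask_var(cpus);
}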

Regards,
Srivatsa S. Bhat

> 
>>   *
>>   * You must not call this function with disabled interrupts or
>>   * from a hardware interrupt handler or from a bottom half handler.
>> @@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
>>  	might_sleep_if(gfp_flags & __GFP_WAIT);
>>
>>  	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
>> -		preempt_disable();
>> +		get_online_cpus_atomic();
>>  		for_each_online_cpu(cpu)
>>  			if (cond_func(cpu, info))
>>  				cpumask_set_cpu(cpu, cpus);
>>  		on_each_cpu_mask(cpus, func, info, wait);
>> -		preempt_enable();
>> +		put_online_cpus_atomic();
>>  		free_cpumask_var(cpus);
>>  	} else {
>>  		/*
>>  		 * No free cpumask, bother. No matter, we'll
>>  		 * just have to IPI them one by one.
>>  		 */
>> -		preempt_disable();
>> +		get_online_cpus_atomic();
>>  		for_each_online_cpu(cpu)
>>  			if (cond_func(cpu, info)) {
>>  				ret = smp_call_function_single(cpu, func,
>>  								info, wait);
>>  				WARN_ON_ONCE(!ret);
>>  			}
>> -		preempt_enable();
>> +		put_online_cpus_atomic();
>>  	}
>>  }
>>  EXPORT_SYMBOL(on_each_cpu_cond);
>>
> 


^ permalink raw reply	[flat|nested] 187+ messages in thread


* Re: [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-07-02  8:25       ` Srivatsa S. Bhat
@ 2013-07-02  8:47         ` Michael Wang
  -1 siblings, 0 replies; 187+ messages in thread
From: Michael Wang @ 2013-07-02  8:47 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight, rostedt,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Wang YanQing,
	Shaohua Li, Jan Beulich, liguang

On 07/02/2013 04:25 PM, Srivatsa S. Bhat wrote:
> Hi Michael,
> 
> On 07/02/2013 11:02 AM, Michael Wang wrote:
>> Hi, Srivatsa
>>
>> On 06/28/2013 03:54 AM, Srivatsa S. Bhat wrote:
>> [snip]
>>> @@ -625,8 +632,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>>>   * The function might sleep if the GFP flags indicates a non
>>>   * atomic allocation is allowed.
>>>   *
>>> - * Preemption is disabled to protect against CPUs going offline but not online.
>>> - * CPUs going online during the call will not be seen or sent an IPI.
>>> + * We use get/put_online_cpus_atomic() to protect against CPUs going
>>> + * offline but not online. CPUs going online during the call will
>>> + * not be seen or sent an IPI.
>>
>> I was a little confused by this comment: if the offline CPU still has a
>> chance to come online, then there is a chance that we will pick it up in
>> for_each_online_cpu(), isn't there? Did I miss some point?
>>
> 
> Whether or not the newly onlined CPU is observed in our for_each_online_cpu()
> loop, is dependent on timing. On top of that, there are 2 paths in the code:
> one which uses a temporary cpumask and the other which doesn't. In the former
> case, if a CPU comes online _after_ we populate the temporary cpumask, then
> we won't send an IPI to that cpu, since the temporary cpumask doesn't contain
> that CPU. Whereas, if we observe the newly onlined CPU in the for_each_online_cpu()
> loop itself (either in the former or the latter case), then yes, we will send
> the IPI to that CPU.

So it is not 'during the call' but 'during the call of
on_each_cpu_mask()', correct?

The comment's position makes it seem to claim that, during the call of this
func, an onlining CPU won't be seen or sent an IPI...

Regards,
Michael Wang

> 
> Regards,
> Srivatsa S. Bhat
> 
>>
>>>   *
>>>   * You must not call this function with disabled interrupts or
>>>   * from a hardware interrupt handler or from a bottom half handler.
>>> @@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
>>>  	might_sleep_if(gfp_flags & __GFP_WAIT);
>>>
>>>  	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
>>> -		preempt_disable();
>>> +		get_online_cpus_atomic();
>>>  		for_each_online_cpu(cpu)
>>>  			if (cond_func(cpu, info))
>>>  				cpumask_set_cpu(cpu, cpus);
>>>  		on_each_cpu_mask(cpus, func, info, wait);
>>> -		preempt_enable();
>>> +		put_online_cpus_atomic();
>>>  		free_cpumask_var(cpus);
>>>  	} else {
>>>  		/*
>>>  		 * No free cpumask, bother. No matter, we'll
>>>  		 * just have to IPI them one by one.
>>>  		 */
>>> -		preempt_disable();
>>> +		get_online_cpus_atomic();
>>>  		for_each_online_cpu(cpu)
>>>  			if (cond_func(cpu, info)) {
>>>  				ret = smp_call_function_single(cpu, func,
>>>  								info, wait);
>>>  				WARN_ON_ONCE(!ret);
>>>  			}
>>> -		preempt_enable();
>>> +		put_online_cpus_atomic();
>>>  	}
>>>  }
>>>  EXPORT_SYMBOL(on_each_cpu_cond);
>>>
>>
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


^ permalink raw reply	[flat|nested] 187+ messages in thread


* Re: [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-07-02  8:47         ` Michael Wang
@ 2013-07-02  9:51           ` Srivatsa S. Bhat
  -1 siblings, 0 replies; 187+ messages in thread
From: Srivatsa S. Bhat @ 2013-07-02  9:51 UTC (permalink / raw)
  To: Michael Wang
  Cc: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight, rostedt,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Wang YanQing,
	Shaohua Li, Jan Beulich, liguang

On 07/02/2013 02:17 PM, Michael Wang wrote:
> On 07/02/2013 04:25 PM, Srivatsa S. Bhat wrote:
>> Hi Michael,
>>
>> On 07/02/2013 11:02 AM, Michael Wang wrote:
>>> Hi, Srivatsa
>>>
>>> On 06/28/2013 03:54 AM, Srivatsa S. Bhat wrote:
>>> [snip]
>>>> @@ -625,8 +632,9 @@ EXPORT_SYMBOL(on_each_cpu_mask);
>>>>   * The function might sleep if the GFP flags indicates a non
>>>>   * atomic allocation is allowed.
>>>>   *
>>>> - * Preemption is disabled to protect against CPUs going offline but not online.
>>>> - * CPUs going online during the call will not be seen or sent an IPI.
>>>> + * We use get/put_online_cpus_atomic() to protect against CPUs going
>>>> + * offline but not online. CPUs going online during the call will
>>>> + * not be seen or sent an IPI.
>>>
>>> I was a little confused by this comment: if the offline CPU still has a
>>> chance to come online, then there is a chance that we will pick it up in
>>> for_each_online_cpu(), isn't there? Did I miss some point?
>>>
>>
>> Whether or not the newly onlined CPU is observed in our for_each_online_cpu()
>> loop, is dependent on timing. On top of that, there are 2 paths in the code:
>> one which uses a temporary cpumask and the other which doesn't. In the former
>> case, if a CPU comes online _after_ we populate the temporary cpumask, then
>> we won't send an IPI to that cpu, since the temporary cpumask doesn't contain
>> that CPU. Whereas, if we observe the newly onlined CPU in the for_each_online_cpu()
>> loop itself (either in the former or the latter case), then yes, we will send
>> the IPI to that CPU.
> 
> So it is not 'during the call' but 'during the call of
> on_each_cpu_mask()', correct?
> 

Well, as I said, its timing dependent. We might miss the newly onlined CPU in
the for_each_online_cpu() loop itself, based on when exactly the CPU was added
to the cpu_online_mask. So you can't exactly pin-point the places where you'll
miss the CPU and where you won't. Besides, is it _that_ important? It is after
all unpredictable..

> The comment's position makes it seem to claim that, during the call of this
> func, an onlining CPU won't be seen or sent an IPI...
>

Doesn't matter, AFAICS. The key take-away from that whole comment is: nothing is
done to prevent CPUs from coming online while the function is running, whereas
the online CPUs are guaranteed to remain online throughout the function. In other
words, its a weaker form of get_online_cpus()/put_online_cpus(), providing a
one-way synchronization (CPU offline).

As long as that idea is conveyed properly, the purpose of that comment is served,
IMHO.
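
A minimal usage sketch of that one-way guarantee, assuming the made-up
helper names send_ipi_to_online_cpus() and do_nothing_ipi() (the APIs
themselves are the ones introduced by this series): every CPU observed
inside the critical section stays online until the matching put, while
a CPU coming online meanwhile may simply go unnoticed:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

static void do_nothing_ipi(void *info) { }

static void send_ipi_to_online_cpus(void *info)
{
	unsigned int cpu;

	/*
	 * Weaker than get_online_cpus(): only CPU offline is held off.
	 * Each CPU seen below is guaranteed to stay online until
	 * put_online_cpus_atomic(), but a CPU coming online right now
	 * might be missed by the loop, and that is tolerated.
	 */
	get_online_cpus_atomic();
	for_each_online_cpu(cpu)
		smp_call_function_single(cpu, do_nothing_ipi, info, 1);
	put_online_cpus_atomic();
}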

Regards,
Srivatsa S. Bhat


>>>>   *
>>>>   * You must not call this function with disabled interrupts or
>>>>   * from a hardware interrupt handler or from a bottom half handler.
>>>> @@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
>>>>  	might_sleep_if(gfp_flags & __GFP_WAIT);
>>>>
>>>>  	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
>>>> -		preempt_disable();
>>>> +		get_online_cpus_atomic();
>>>>  		for_each_online_cpu(cpu)
>>>>  			if (cond_func(cpu, info))
>>>>  				cpumask_set_cpu(cpu, cpus);
>>>>  		on_each_cpu_mask(cpus, func, info, wait);
>>>> -		preempt_enable();
>>>> +		put_online_cpus_atomic();
>>>>  		free_cpumask_var(cpus);
>>>>  	} else {
>>>>  		/*
>>>>  		 * No free cpumask, bother. No matter, we'll
>>>>  		 * just have to IPI them one by one.
>>>>  		 */
>>>> -		preempt_disable();
>>>> +		get_online_cpus_atomic();
>>>>  		for_each_online_cpu(cpu)
>>>>  			if (cond_func(cpu, info)) {
>>>>  				ret = smp_call_function_single(cpu, func,
>>>>  								info, wait);
>>>>  				WARN_ON_ONCE(!ret);
>>>>  			}
>>>> -		preempt_enable();
>>>> +		put_online_cpus_atomic();
>>>>  	}
>>>>  }
>>>>  EXPORT_SYMBOL(on_each_cpu_cond);
>>>>
>>>
>>


^ permalink raw reply	[flat|nested] 187+ messages in thread


* Re: [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline
  2013-07-02  9:51           ` Srivatsa S. Bhat
@ 2013-07-02 10:08             ` Michael Wang
  -1 siblings, 0 replies; 187+ messages in thread
From: Michael Wang @ 2013-07-02 10:08 UTC (permalink / raw)
  To: Srivatsa S. Bhat
  Cc: tglx, peterz, tj, oleg, paulmck, rusty, mingo, akpm, namhyung,
	walken, vincent.guittot, laijs, David.Laight, rostedt,
	xiaoguangrong, sbw, fweisbec, zhong, nikunj, linux-pm,
	linux-arch, linuxppc-dev, netdev, linux-kernel, Wang YanQing,
	Shaohua Li, Jan Beulich, liguang

On 07/02/2013 05:51 PM, Srivatsa S. Bhat wrote:
[snip]
> 
> Well, as I said, its timing dependent. We might miss the newly onlined CPU in
> the for_each_online_cpu() loop itself, based on when exactly the CPU was added
> to the cpu_online_mask. So you can't exactly pin-point the places where you'll
> miss the CPU and where you won't. Besides, is it _that_ important? It is after
> all unpredictable..

Sure, it's nothing important ;-)

I just think this comment:

+ * We use get/put_online_cpus_atomic() to protect against CPUs going
+ * offline but not online. CPUs going online during the call will
+ * not be seen or sent an IPI

It tells people that a CPU which comes online during the call won't get an
IPI, while actually it has a chance to get one; folks who haven't looked
inside may miss something when using it.

But it's just my own opinion, so let's drop the discussion :)

Regards,
Michael Wang

> 
>> The comment's position makes it seem to claim that, during the call of this
>> func, an onlining CPU won't be seen or sent an IPI...
>>
> 
> Doesn't matter, AFAICS. The key take-away from that whole comment is: nothing is
> done to prevent CPUs from coming online while the function is running, whereas
> the online CPUs are guaranteed to remain online throughout the function. In other
> words, its a weaker form of get_online_cpus()/put_online_cpus(), providing a
> one-way synchronization (CPU offline).
> 
> As long as that idea is conveyed properly, the purpose of that comment is served,
> IMHO.
> 
> Regards,
> Srivatsa S. Bhat
> 
> 
>>>>>   *
>>>>>   * You must not call this function with disabled interrupts or
>>>>>   * from a hardware interrupt handler or from a bottom half handler.
>>>>> @@ -641,26 +649,26 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
>>>>>  	might_sleep_if(gfp_flags & __GFP_WAIT);
>>>>>
>>>>>  	if (likely(zalloc_cpumask_var(&cpus, (gfp_flags|__GFP_NOWARN)))) {
>>>>> -		preempt_disable();
>>>>> +		get_online_cpus_atomic();
>>>>>  		for_each_online_cpu(cpu)
>>>>>  			if (cond_func(cpu, info))
>>>>>  				cpumask_set_cpu(cpu, cpus);
>>>>>  		on_each_cpu_mask(cpus, func, info, wait);
>>>>> -		preempt_enable();
>>>>> +		put_online_cpus_atomic();
>>>>>  		free_cpumask_var(cpus);
>>>>>  	} else {
>>>>>  		/*
>>>>>  		 * No free cpumask, bother. No matter, we'll
>>>>>  		 * just have to IPI them one by one.
>>>>>  		 */
>>>>> -		preempt_disable();
>>>>> +		get_online_cpus_atomic();
>>>>>  		for_each_online_cpu(cpu)
>>>>>  			if (cond_func(cpu, info)) {
>>>>>  				ret = smp_call_function_single(cpu, func,
>>>>>  								info, wait);
>>>>>  				WARN_ON_ONCE(!ret);
>>>>>  			}
>>>>> -		preempt_enable();
>>>>> +		put_online_cpus_atomic();
>>>>>  	}
>>>>>  }
>>>>>  EXPORT_SYMBOL(on_each_cpu_cond);
>>>>>
>>>>
>>>
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


^ permalink raw reply	[flat|nested] 187+ messages in thread


end of thread, other threads:[~2013-07-02 10:08 UTC | newest]

Thread overview: 187+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-06-27 19:52 [PATCH v3 00/45] CPU hotplug: stop_machine()-free CPU hotplug, part 1 Srivatsa S. Bhat
2013-06-27 19:52 ` Srivatsa S. Bhat
2013-06-27 19:52 ` [PATCH v3 01/45] CPU hotplug: Provide APIs to prevent CPU offline from atomic context Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52 ` [PATCH v3 02/45] CPU hotplug: Clarify the usage of different synchronization APIs Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52 ` [PATCH v3 03/45] Documentation, CPU hotplug: Recommend usage of get/put_online_cpus_atomic() Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:52   ` Srivatsa S. Bhat
2013-06-27 19:53 ` [PATCH v3 04/45] CPU hotplug: Add infrastructure to check lacking hotplug synchronization Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53 ` [PATCH v3 05/45] CPU hotplug: Protect set_cpu_online() to avoid false-positives Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53 ` [PATCH v3 06/45] CPU hotplug: Sprinkle debugging checks to catch locking bugs Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53 ` [PATCH v3 07/45] CPU hotplug: Add _nocheck() variants of accessor functions Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53 ` [PATCH v3 08/45] CPU hotplug: Expose the new debug config option Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:53   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 09/45] CPU hotplug: Convert preprocessor macros to static inline functions Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 10/45] smp: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-07-02  5:32   ` Michael Wang
2013-07-02  5:32     ` Michael Wang
2013-07-02  8:25     ` Srivatsa S. Bhat
2013-07-02  8:25       ` Srivatsa S. Bhat
2013-07-02  8:47       ` Michael Wang
2013-07-02  8:47         ` Michael Wang
2013-07-02  9:51         ` Srivatsa S. Bhat
2013-07-02  9:51           ` Srivatsa S. Bhat
2013-07-02 10:08           ` Michael Wang
2013-07-02 10:08             ` Michael Wang
2013-06-27 19:54 ` [PATCH v3 11/45] sched/core: " Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 12/45] migration: Use raw_spin_lock/unlock since interrupts are already disabled Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 13/45] sched/fair: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 14/45] timer: " Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54 ` [PATCH v3 15/45] sched/rt: " Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:54   ` Srivatsa S. Bhat
2013-06-27 19:55 ` [PATCH v3 16/45] rcu: Use cpu_is_offline_nocheck() to avoid false-positive warnings Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 20:12   ` Paul E. McKenney
2013-06-27 20:12     ` Paul E. McKenney
2013-06-27 19:55 ` [PATCH v3 17/45] tick-broadcast: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55 ` [PATCH v3 18/45] time/clocksource: " Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:55   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 19/45] softirq: " Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 20/45] irq: " Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 21/45] net: " Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 22/45] block: " Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 23/45] percpu_counter: Use _nocheck version of for_each_online_cpu() Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 24/45] infiniband: ehca: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56 ` [PATCH v3 25/45] [SCSI] fcoe: " Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:56   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 26/45] staging/octeon: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 27/45] x86: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 28/45] perf/x86: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 29/45] KVM: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 30/45] x86/xen: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57 ` [PATCH v3 31/45] alpha/smp: " Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:57   ` Srivatsa S. Bhat
2013-06-27 19:58 ` [PATCH v3 32/45] blackfin/smp: " Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58 ` [PATCH v3 33/45] cris/smp: " Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58 ` [PATCH v3 34/45] hexagon/smp: " Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58 ` [PATCH v3 35/45] ia64: irq, perfmon: " Srivatsa S. Bhat
2013-06-27 20:10   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:58   ` Srivatsa S. Bhat
2013-06-27 19:59 ` [PATCH v3 36/45] ia64: smp, tlb: " Srivatsa S. Bhat
2013-06-27 20:11   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59 ` [PATCH v3 37/45] m32r: " Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59 ` [PATCH v3 38/45] MIPS: " Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59 ` [PATCH v3 39/45] mn10300: " Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59 ` [PATCH v3 40/45] powerpc, irq: Use GFP_ATOMIC allocations in atomic context Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 19:59   ` Srivatsa S. Bhat
2013-06-27 20:00 ` [PATCH v3 41/45] powerpc: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 20:12   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00 ` [PATCH v3 42/45] powerpc: Use get/put_online_cpus_atomic() to avoid false-positive warning Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00 ` [PATCH v3 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline Srivatsa S. Bhat
2013-06-27 20:12   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00 ` [PATCH v3 44/45] sparc: " Srivatsa S. Bhat
2013-06-27 20:12   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00 ` [PATCH v3 45/45] tile: " Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
2013-06-27 20:00   ` Srivatsa S. Bhat
