* [RFC v0 0/9] Remove CPU_*_FROZEN
@ 2015-09-04 13:34 Daniel Wagner
  2015-09-04 13:34 ` [RFC v0 1/9] smpboot: Add a separate CPU state when a surviving CPU times out Daniel Wagner
                   ` (9 more replies)
  0 siblings, 10 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: Daniel Wagner

Hi

I was looking at Thomas' "CPU hotplug rework - episode I" series [1]
and noted the CPU_*_FROZEN bits in there.

In 2007 CPU_TASKS_FROZEN was introduced to allow subsystems to
distinguish between normal CPU hotplug events and CPU hotplug events
issued during system-wide suspend or resume operations [2]. As it
turns out, almost no subsystem is interested in this information.

So this raises the question of why we keep the additional complexity
in the CPU state handling instead of having an explicit function for
retrieving this information. Here is an attempt to rip out the
CPU_TASKS_FROZEN bits.
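
To illustrate the direction, here is a minimal sketch (not code from
the series; the notifier and do_expensive_rescan() are made up for
illustration, freeze_active() is the helper added in patch 2):

#include <linux/cpu.h>
#include <linux/notifier.h>
#include <linux/suspend.h>

static void do_expensive_rescan(void) { }	/* placeholder for real work */

/* Today: suspend/resume is encoded in the notifier action value. */
static int example_cpu_notify_today(struct notifier_block *nb,
				    unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {	/* mask out the FROZEN bit */
	case CPU_ONLINE:
		if (!(action & CPU_TASKS_FROZEN))
			do_expensive_rescan();	/* skip during resume */
		break;
	}
	return NOTIFY_OK;
}

/* After this series: plain actions; the few interested callbacks ask
 * the PM core instead. */
static int example_cpu_notify_converted(struct notifier_block *nb,
					unsigned long action, void *hcpu)
{
	switch (action) {
	case CPU_ONLINE:
		if (!freeze_active())
			do_expensive_rescan();
		break;
	}
	return NOTIFY_OK;
}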

Overall I think it is worth doing, but you might see it differently.
FWIW, the image size is slightly smaller too in my sample
configuration.

   text    data     bss     dec     hex filename
16794542        4462208 14954496        36211246        2288a2e vmlinux
16794267        4462208 14954496        36210971        228891b vmlinux-wo-frozen

Patch 1: I think this patch fixes a real bug. Even Paul agreed during
a chat at LinuxCon. He needed an additional state and just grabbed one
of the FROZEN ones.

Patch 2: Adds a new freeze_active() call which tells if PM is active
or not.

Patch 3, 4 and 5: Update the only users of FROZEN.

Patch 6: Is the refactoring patch from Thomas hotplug rework [1].

Patch 7: Remove all FROZEN references. It should contain only simple
changes. I did that manually. Probably some scripting could be done to
verify that the changes are correct. This patch could be split up and
the pieces applied one after the other.

Patch 8: Get rid of the definitions of FROZEN.

Patch 9: And finally update the documentation.

I stared at this code for a while and compiled it for different
architectures (x86, ARM, S390, powerpc). I also tested by executing
Steven's stress-cpu-hotplug script and then doing suspend-resume
cycles. Nothing exploded, but that is no real proof that all is okay.
So please have a close look at the changes to the FROZEN users.

Thanks,
Daniel

[1] https://lwn.net/Articles/535764/
[2] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8bb7844286fb8c9fce6f65d8288aeb09d03a5e0d


"H. Peter Anvin" <hpa@zytor.com>
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
"Rafael J. Wysocki" <rjw@rjwysocki.net>
Akinobu Mita <akinobu.mita@gmail.com>
Andrew Morton <akpm@linux-foundation.org>
Boris Ostrovsky <boris.ostrovsky@oracle.com>
Borislav Petkov <bp@alien8.de>
Chris Metcalf <cmetcalf@ezchip.com>
Daniel Wagner <daniel.wagner@bmw-carit.de>
David Hildenbrand <dahi@linux.vnet.ibm.com>
David Vrabel <david.vrabel@citrix.com>
Don Zickus <dzickus@redhat.com>
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ingo Molnar <mingo@redhat.com>
John Hubbard <jhubbard@nvidia.com>
Jonathan Corbet <corbet@lwn.net>
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Lai Jiangshan <laijs@cn.fujitsu.com>
Len Brown <len.brown@intel.com>
Luis R. Rodriguez <mcgrof@do-not-panic.com>
Mathias Krause <minipli@googlemail.com>
Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Paul Gortmaker <paul.gortmaker@windriver.com>
Pavel Machek <pavel@ucw.cz>
Peter Zijlstra <peterz@infradead.org>
Sudeep Holla <sudeep.holla@arm.com>
Thomas Gleixner <tglx@linutronix.de>
Tony Luck <tony.luck@intel.com>
Vitaly Kuznetsov <vkuznets@redhat.com>

Daniel Wagner (8):
  smpboot: Add a separate CPU state when a surviving CPU times out
  suspend: Add getter function to report if freezing is active
  x86: Use freeze_active() instead of CPU_*_FROZEN
  smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information
  sched: Use freeze_active() instead CPU_*_FROZEN state information
  cpu: Remove unused CPU_*_FROZEN states
  cpu: Do not set CPU_TASKS_FROZEN anymore
  doc: Update cpu-hotplug documents on removal of CPU_TASKS_FROZEN

Thomas Gleixner (1):
  cpu: Restructure FROZEN state handling

 Documentation/cpu-hotplug.txt                      | 12 ++--
 .../fault-injection/notifier-error-inject.txt      |  2 -
 Documentation/power/suspend-and-cpuhotplug.txt     | 13 +---
 arch/arm/kernel/hw_breakpoint.c                    |  2 +-
 arch/arm/kernel/perf_event.c                       |  2 +-
 arch/arm/kernel/smp_twd.c                          |  2 +-
 arch/arm/kvm/arm.c                                 |  1 -
 arch/arm/mm/cache-l2x0.c                           |  2 +-
 arch/arm/vfp/vfpmodule.c                           |  4 +-
 arch/arm64/kernel/armv8_deprecated.c               |  2 +-
 arch/arm64/kernel/fpsimd.c                         |  1 -
 arch/blackfin/kernel/perf_event.c                  |  2 +-
 arch/ia64/kernel/err_inject.c                      |  2 -
 arch/ia64/kernel/mca.c                             |  1 -
 arch/ia64/kernel/palinfo.c                         |  2 -
 arch/ia64/kernel/salinfo.c                         |  2 -
 arch/ia64/kernel/topology.c                        |  2 -
 arch/metag/kernel/perf/perf_event.c                |  2 +-
 arch/mips/loongson64/loongson-3/smp.c              |  3 -
 arch/mips/oprofile/op_model_loongson3.c            |  2 -
 arch/powerpc/kernel/sysfs.c                        |  2 -
 arch/powerpc/mm/mmu_context_nohash.c               |  3 -
 arch/powerpc/mm/numa.c                             |  3 -
 arch/powerpc/perf/core-book3s.c                    |  2 +-
 arch/powerpc/platforms/powermac/smp.c              |  2 -
 arch/s390/kernel/perf_cpum_cf.c                    |  2 +-
 arch/s390/kernel/perf_cpum_sf.c                    |  3 +-
 arch/s390/kernel/smp.c                             |  2 +-
 arch/s390/mm/fault.c                               |  2 +-
 arch/sh/kernel/perf_event.c                        |  2 +-
 arch/sparc/kernel/sysfs.c                          |  2 -
 arch/x86/entry/vdso/vma.c                          |  2 +-
 arch/x86/kernel/apic/x2apic_cluster.c              |  1 -
 arch/x86/kernel/cpu/mcheck/mce.c                   | 15 ++---
 arch/x86/kernel/cpu/mcheck/mce_amd.c               |  2 -
 arch/x86/kernel/cpu/mcheck/therm_throt.c           |  3 -
 arch/x86/kernel/cpu/microcode/core.c               | 12 ++--
 arch/x86/kernel/cpu/perf_event.c                   |  2 +-
 arch/x86/kernel/cpu/perf_event_amd_ibs.c           |  2 +-
 arch/x86/kernel/cpu/perf_event_amd_uncore.c        |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_cqm.c         |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_rapl.c        |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_uncore.c      |  4 +-
 arch/x86/kernel/cpuid.c                            |  1 -
 arch/x86/kernel/kvm.c                              |  2 -
 arch/x86/kernel/msr.c                              |  1 -
 arch/x86/pci/amd_bus.c                             |  1 -
 arch/x86/xen/smp.c                                 |  2 +-
 arch/xtensa/kernel/perf_event.c                    |  2 +-
 block/blk-iopoll.c                                 |  2 +-
 block/blk-mq.c                                     |  5 +-
 block/blk-softirq.c                                |  2 +-
 drivers/acpi/processor_driver.c                    |  1 -
 drivers/base/cacheinfo.c                           |  2 +-
 drivers/base/topology.c                            |  3 -
 drivers/bus/arm-cci.c                              |  2 +-
 drivers/bus/arm-ccn.c                              |  2 +-
 drivers/bus/mips_cdmm.c                            |  2 +-
 drivers/clocksource/arm_arch_timer.c               |  2 +-
 drivers/clocksource/arm_global_timer.c             |  2 +-
 drivers/clocksource/dummy_timer.c                  |  2 +-
 drivers/clocksource/exynos_mct.c                   |  2 +-
 drivers/clocksource/metag_generic.c                |  1 -
 drivers/clocksource/mips-gic-timer.c               |  2 +-
 drivers/clocksource/qcom-timer.c                   |  2 +-
 drivers/clocksource/time-armada-370-xp.c           |  2 +-
 drivers/clocksource/timer-atlas7.c                 |  2 +-
 drivers/cpufreq/acpi-cpufreq.c                     |  2 -
 drivers/cpufreq/cpufreq.c                          |  2 +-
 drivers/cpuidle/coupled.c                          |  4 +-
 drivers/cpuidle/cpuidle-powernv.c                  |  2 -
 drivers/cpuidle/cpuidle-pseries.c                  |  2 -
 drivers/hwtracing/coresight/coresight-etm3x.c      |  2 +-
 drivers/hwtracing/coresight/coresight-etm4x.c      |  2 +-
 drivers/idle/intel_idle.c                          |  2 +-
 drivers/irqchip/irq-armada-370-xp.c                |  4 +-
 drivers/irqchip/irq-gic-v3.c                       |  2 +-
 drivers/irqchip/irq-gic.c                          |  2 +-
 drivers/irqchip/irq-hip04.c                        |  2 +-
 drivers/leds/trigger/ledtrig-cpu.c                 |  2 +-
 drivers/md/raid5.c                                 |  2 -
 drivers/net/virtio_net.c                           |  2 +-
 drivers/oprofile/timer_int.c                       |  2 -
 drivers/pci/host/pci-xgene-msi.c                   |  2 -
 drivers/powercap/intel_rapl.c                      |  3 -
 drivers/scsi/bnx2fc/bnx2fc_fcoe.c                  |  2 -
 drivers/scsi/bnx2i/bnx2i_init.c                    |  2 -
 drivers/scsi/fcoe/fcoe.c                           |  2 -
 drivers/scsi/virtio_scsi.c                         |  2 -
 .../staging/lustre/lustre/libcfs/linux/linux-cpu.c |  4 +-
 fs/buffer.c                                        |  2 +-
 include/linux/cpu.h                                | 17 +----
 include/linux/suspend.h                            |  6 ++
 kernel/cpu.c                                       | 74 ++++++++--------------
 kernel/events/core.c                               |  2 +-
 kernel/padata.c                                    |  4 --
 kernel/profile.c                                   |  4 --
 kernel/rcu/tree.c                                  |  5 --
 kernel/relay.c                                     |  2 -
 kernel/sched/core.c                                | 59 +++++++++--------
 kernel/sched/fair.c                                |  2 +-
 kernel/smp.c                                       |  6 +-
 kernel/smpboot.c                                   | 34 +++++-----
 kernel/softirq.c                                   |  1 -
 kernel/time/hrtimer.c                              |  2 -
 kernel/time/tick-sched.c                           |  2 +-
 kernel/time/timer.c                                |  1 -
 kernel/trace/ring_buffer.c                         |  2 -
 kernel/workqueue.c                                 |  4 +-
 lib/cpu-notifier-error-inject.c                    |  2 -
 lib/percpu_counter.c                               |  2 +-
 lib/radix-tree.c                                   |  2 +-
 mm/memcontrol.c                                    |  2 +-
 mm/page-writeback.c                                |  2 +-
 mm/page_alloc.c                                    |  2 +-
 mm/slab.c                                          |  6 --
 mm/slub.c                                          |  2 -
 mm/vmscan.c                                        |  2 +-
 mm/vmstat.c                                        |  4 --
 net/core/dev.c                                     |  2 +-
 net/core/flow.c                                    |  2 -
 net/iucv/iucv.c                                    |  6 --
 virt/kvm/arm/arch_timer.c                          |  2 -
 virt/kvm/arm/vgic.c                                |  2 -
 virt/kvm/kvm_main.c                                |  1 -
 125 files changed, 172 insertions(+), 333 deletions(-)

-- 
2.4.3


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC v0 1/9] smpboot: Add a separate CPU state when a surviving CPU times out
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-04 13:34 ` Daniel Wagner
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, H. Peter Anvin, Paul E. McKenney, Andrew Morton,
	Boris Ostrovsky, Chris Metcalf, David Vrabel, Ingo Molnar,
	John Hubbard, Konrad Rzeszutek Wilk, Lai Jiangshan,
	Peter Zijlstra, Thomas Gleixner, x86, xen-devel

The CPU_DEAD_FROZEN state is abused to report to cpu_wait_death() that
the operation timed out. It has nothing to do with the PM freezing
process. Introduce a new state to allow a proper distinction between
the states; this also prepares the code for getting rid of all FROZEN
states.

This was introduced in

8038dad7e888581266c76df15d70ca457a3c5910 smpboot: Add common code for notification from dying CPU
2a442c9c6453d3d043dfd89f2e03a1deff8a6f06 x86: Use common outgoing-CPU-notification code

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/xen/smp.c  | 2 +-
 include/linux/cpu.h | 2 ++
 kernel/smpboot.c    | 4 ++--
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 8648438..7a8bc03 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -740,7 +740,7 @@ static int xen_hvm_cpu_up(unsigned int cpu, struct task_struct *tidle)
 	 * This can happen if CPU was offlined earlier and
 	 * offlining timed out in common_cpu_die().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
+	if (cpu_report_state(cpu) == CPU_DEAD_TIMEOUT) {
 		xen_smp_intr_free(cpu);
 		xen_uninit_lock_cpu(cpu);
 	}
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 23c30bd..381ea8a 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -101,6 +101,8 @@ enum {
 					* idle loop. */
 #define CPU_BROKEN		0x000C /* CPU (unsigned)v did not die properly,
 					* perhaps due to preemption. */
+#define CPU_DEAD_TIMEOUT	0x000D /* CPU (unsigned)v surviving CPU timed
+					  out */
 
 /* Used for CPU hotplug events occurring while tasks are frozen due to a suspend
  * operation in progress
diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index 7c434c3..e37efbf 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -505,7 +505,7 @@ update_state:
  * Called by the outgoing CPU to report its successful death.  Return
  * false if this report follows the surviving CPU's timing out.
  *
- * A separate "CPU_DEAD_FROZEN" is used when the surviving CPU
+ * A separate "CPU_DEAD_TIMEOUT" is used when the surviving CPU
  * timed out.  This approach allows architectures to omit calls to
  * cpu_check_up_prepare() and cpu_set_state_online() without defeating
  * the next cpu_wait_death()'s polling loop.
@@ -521,7 +521,7 @@ bool cpu_report_death(void)
 		if (oldstate != CPU_BROKEN)
 			newstate = CPU_DEAD;
 		else
-			newstate = CPU_DEAD_FROZEN;
+			newstate = CPU_DEAD_TIMEOUT;
 	} while (atomic_cmpxchg(&per_cpu(cpu_hotplug_state, cpu),
 				oldstate, newstate) != oldstate);
 	return newstate == CPU_DEAD;
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
  2015-09-04 13:34 ` [RFC v0 1/9] smpboot: Add a separate CPU state when a surviving CPU times out Daniel Wagner
  2015-09-04 13:34 ` Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-05  2:11   ` Rafael J. Wysocki
  2015-09-04 13:34 ` [RFC v0 3/9] x86: Use freeze_active() instead of CPU_*_FROZEN Daniel Wagner
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, Rafael J. Wysocki, Len Brown, Pavel Machek, linux-pm

Instead of encoding the FREEZE state in the CPU hotplug state, we
allow the interested subsystems (MCE, microcode) to query the power
subsystem directly. Most notifiers are not interested in this
information at all, so add explicit calls to freeze_active() in the
few interested places rather than impose the extra complexity on all
users of the CPU notifiers.
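
As a usage sketch, mirroring what patch 3 does for MCE (shown here
only to illustrate the intended calling convention):

	/* Only tear down CMCI for a real hotplug operation, not while
	 * tasks are frozen for a suspend/resume cycle. */
	if (!freeze_active())
		cmci_clear();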

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: linux-pm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/suspend.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 5efe743..5e15ade 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -216,6 +216,11 @@ static inline bool idle_should_freeze(void)
 	return unlikely(suspend_freeze_state == FREEZE_STATE_ENTER);
 }
 
+static inline bool freeze_active(void)
+{
+	return unlikely(suspend_freeze_state != FREEZE_STATE_NONE);
+}
+
 extern void freeze_set_ops(const struct platform_freeze_ops *ops);
 extern void freeze_wake(void);
 
@@ -244,6 +249,7 @@ extern int pm_suspend(suspend_state_t state);
 static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
 static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
 static inline bool idle_should_freeze(void) { return false; }
+static inline bool freeze_active(void) { return false; }
 static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {}
 static inline void freeze_wake(void) {}
 #endif /* !CONFIG_SUSPEND */
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 3/9] x86: Use freeze_active() instead of CPU_*_FROZEN
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (2 preceding siblings ...)
  2015-09-04 13:34 ` [RFC v0 2/9] suspend: Add getter function to report if freezing is active Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-04 13:34 ` [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information Daniel Wagner
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, Tony Luck, Borislav Petkov, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, x86, linux-edac

The CPU hotplug state encodes whether the hotplug operation happens
during a suspend or hibernate operation. Instead of looking at the
encoded bits in the CPU state value, ask the PM subsystem directly.

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: linux-edac@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/kernel/cpu/mcheck/mce.c     | 13 ++++++-------
 arch/x86/kernel/cpu/microcode/core.c | 10 ++++++----
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 9d014b82..1bd421b 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -18,6 +18,7 @@
 #include <linux/rcupdate.h>
 #include <linux/kobject.h>
 #include <linux/uaccess.h>
+#include <linux/suspend.h>
 #include <linux/kdebug.h>
 #include <linux/kernel.h>
 #include <linux/percpu.h>
@@ -2341,13 +2342,12 @@ static void mce_device_remove(unsigned int cpu)
 /* Make sure there are no machine checks on offlined CPUs. */
 static void mce_disable_cpu(void *h)
 {
-	unsigned long action = *(unsigned long *)h;
 	int i;
 
 	if (!mce_available(raw_cpu_ptr(&cpu_info)))
 		return;
 
-	if (!(action & CPU_TASKS_FROZEN))
+	if (!freeze_active())
 		cmci_clear();
 	for (i = 0; i < mca_cfg.banks; i++) {
 		struct mce_bank *b = &mce_banks[i];
@@ -2359,13 +2359,12 @@ static void mce_disable_cpu(void *h)
 
 static void mce_reenable_cpu(void *h)
 {
-	unsigned long action = *(unsigned long *)h;
 	int i;
 
 	if (!mce_available(raw_cpu_ptr(&cpu_info)))
 		return;
 
-	if (!(action & CPU_TASKS_FROZEN))
+	if (!freeze_active())
 		cmci_reenable();
 	for (i = 0; i < mca_cfg.banks; i++) {
 		struct mce_bank *b = &mce_banks[i];
@@ -2395,15 +2394,15 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		mce_intel_hcpu_update(cpu);
 
 		/* intentionally ignoring frozen here */
-		if (!(action & CPU_TASKS_FROZEN))
+		if (!freeze_active())
 			cmci_rediscover();
 		break;
 	case CPU_DOWN_PREPARE:
-		smp_call_function_single(cpu, mce_disable_cpu, &action, 1);
+		smp_call_function_single(cpu, mce_disable_cpu, NULL, 1);
 		del_timer_sync(t);
 		break;
 	case CPU_DOWN_FAILED:
-		smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
+		smp_call_function_single(cpu, mce_reenable_cpu, NULL, 1);
 		mce_start_timer(cpu, t);
 		break;
 	}
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index 9e3f3c7..e49ec2c 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -18,6 +18,7 @@
 #include <linux/platform_device.h>
 #include <linux/miscdevice.h>
 #include <linux/capability.h>
+#include <linux/suspend.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -442,6 +443,11 @@ mc_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu)
 		pr_debug("CPU%d removed\n", cpu);
 		break;
 
+	case CPU_UP_CANCELED:
+		/* The CPU refused to come up during a system resume */
+		if (freeze_active())
+			microcode_fini_cpu(cpu);
+		break;
 	/*
 	 * case CPU_DEAD:
 	 *
@@ -452,10 +458,6 @@ mc_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu)
 	 */
 	}
 
-	/* The CPU refused to come up during a system resume */
-	if (action == CPU_UP_CANCELED_FROZEN)
-		microcode_fini_cpu(cpu);
-
 	return NOTIFY_OK;
 }
 
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (3 preceding siblings ...)
  2015-09-04 13:34 ` [RFC v0 3/9] x86: Use freeze_active() instead of CPU_*_FROZEN Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-08  8:49   ` Daniel Wagner
  2015-09-04 13:34 ` [RFC v0 5/9] sched: Use freeze_active() instead CPU_*_FROZEN " Daniel Wagner
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, Andrew Morton, Chris Metcalf, Don Zickus,
	Ingo Molnar, Thomas Gleixner, Lai Jiangshan, Peter Zijlstra,
	Paul E. McKenney

In order to get rid of all CPU_*_FROZEN states we need to convert all
users first.

cpu_check_up_prepare() wants to report different errors depending on
whether a suspend is ongoing or not. freeze_active() reports whether
that is the case, so we don't have to rely on CPU_DEAD_FROZEN anymore.

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
---
 kernel/smpboot.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/kernel/smpboot.c b/kernel/smpboot.c
index e37efbf..49ce4e9 100644
--- a/kernel/smpboot.c
+++ b/kernel/smpboot.c
@@ -13,6 +13,7 @@
 #include <linux/percpu.h>
 #include <linux/kthread.h>
 #include <linux/smpboot.h>
+#include <linux/suspend.h>
 
 #include "smpboot.h"
 
@@ -407,26 +408,25 @@ int cpu_check_up_prepare(int cpu)
 	switch (atomic_read(&per_cpu(cpu_hotplug_state, cpu))) {
 
 	case CPU_POST_DEAD:
+		if (freeze_active()) {
+			/*
+			 * Timeout during CPU death, so let caller know.
+			 * The outgoing CPU completed its processing, but after
+			 * cpu_wait_death() timed out and reported the error. The
+			 * caller is free to proceed, in which case the state
+			 * will be reset properly by cpu_set_state_online().
+			 * Proceeding despite this -EBUSY return makes sense
+			 * for systems where the outgoing CPUs take themselves
+			 * offline, with no post-death manipulation required from
+			 * a surviving CPU.
+			 */
+			return -EBUSY;
+		}
 
 		/* The CPU died properly, so just start it up again. */
 		atomic_set(&per_cpu(cpu_hotplug_state, cpu), CPU_UP_PREPARE);
 		return 0;
 
-	case CPU_DEAD_FROZEN:
-
-		/*
-		 * Timeout during CPU death, so let caller know.
-		 * The outgoing CPU completed its processing, but after
-		 * cpu_wait_death() timed out and reported the error. The
-		 * caller is free to proceed, in which case the state
-		 * will be reset properly by cpu_set_state_online().
-		 * Proceeding despite this -EBUSY return makes sense
-		 * for systems where the outgoing CPUs take themselves
-		 * offline, with no post-death manipulation required from
-		 * a surviving CPU.
-		 */
-		return -EBUSY;
-
 	case CPU_BROKEN:
 
 		/*
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 5/9] sched: Use freeze_active() instead CPU_*_FROZEN state information
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (4 preceding siblings ...)
  2015-09-04 13:34 ` [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-04 13:34 ` [RFC v0 6/9] cpu: Restructure FROZEN state handling Daniel Wagner
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: Daniel Wagner, Ingo Molnar, Peter Zijlstra

In order to get rid of all CPU_*_FROZEN states we need to convert all
users first.

cpuset_cpu_active() tracks via num_cpus_frozen whether the current CPU
is the last one. So there is no need to track whether the individual
CPU is in a CPU_*_FROZEN state. Instead we can probe freeze_active()
to tell whether a suspend or resume is ongoing.

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
---
 kernel/sched/core.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5de2c9e..36b00eb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -74,6 +74,7 @@
 #include <linux/binfmts.h>
 #include <linux/context_tracking.h>
 #include <linux/compiler.h>
+#include <linux/suspend.h>
 
 #include <asm/switch_to.h>
 #include <asm/tlb.h>
@@ -7128,28 +7129,28 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
 			     void *hcpu)
 {
 	switch (action) {
-	case CPU_ONLINE_FROZEN:
-	case CPU_DOWN_FAILED_FROZEN:
-
-		/*
-		 * num_cpus_frozen tracks how many CPUs are involved in suspend
-		 * resume sequence. As long as this is not the last online
-		 * operation in the resume sequence, just build a single sched
-		 * domain, ignoring cpusets.
-		 */
-		num_cpus_frozen--;
-		if (likely(num_cpus_frozen)) {
-			partition_sched_domains(1, NULL, NULL);
-			break;
+	case CPU_ONLINE:
+	case CPU_DOWN_FAILED:
+		if (freeze_active()) {
+			/*
+			 * num_cpus_frozen tracks how many CPUs are
+			 * involved in suspend resume sequence. As
+			 * long as this is not the last online
+			 * operation in the resume sequence, just
+			 * build a single sched domain, ignoring
+			 * cpusets.
+			 */
+			num_cpus_frozen--;
+			if (likely(num_cpus_frozen)) {
+				partition_sched_domains(1, NULL, NULL);
+				break;
+			}
 		}
-
 		/*
-		 * This is the last CPU online operation. So fall through and
-		 * restore the original sched domains by considering the
-		 * cpuset configurations.
+		 * This is the last CPU online operation. Restore the
+		 * original sched domains by considering the cpuset
+		 * configurations.
 		 */
-
-	case CPU_ONLINE:
 		cpuset_update_active_cpus(true);
 		break;
 	default:
@@ -7169,6 +7170,11 @@ static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
 
 	switch (action) {
 	case CPU_DOWN_PREPARE:
+		if (freeze_active()) {
+			num_cpus_frozen++;
+			partition_sched_domains(1, NULL, NULL);
+			break;
+		}
 		rcu_read_lock_sched();
 		dl_b = dl_bw_of(cpu);
 
@@ -7183,10 +7189,6 @@ static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
 			return notifier_from_errno(-EBUSY);
 		cpuset_update_active_cpus(false);
 		break;
-	case CPU_DOWN_PREPARE_FROZEN:
-		num_cpus_frozen++;
-		partition_sched_domains(1, NULL, NULL);
-		break;
 	default:
 		return NOTIFY_DONE;
 	}
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 6/9] cpu: Restructure FROZEN state handling
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (5 preceding siblings ...)
  2015-09-04 13:34 ` [RFC v0 5/9] sched: Use freeze_active() instead CPU_*_FROZEN " Daniel Wagner
@ 2015-09-04 13:34 ` Daniel Wagner
  2015-09-04 13:35 ` [RFC v0 7/9] cpu: Remove unused CPU_*_FROZEN states Daniel Wagner
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:34 UTC (permalink / raw)
  To: linux-kernel
  Cc: Thomas Gleixner, Daniel Wagner, Paul E. McKenney, Ingo Molnar,
	Greg Kroah-Hartman, Paul Gortmaker, Vitaly Kuznetsov,
	Mathias Krause, David Hildenbrand

From: Thomas Gleixner <tglx@linutronix.de>

There are only a few callbacks which really care about FROZEN
vs. !FROZEN. No need to have extra states for this.

Publish the frozen state in an extra variable which is updated under
the hotplug lock and let the interested users deal with it without
imposing extra state checks on everyone.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
---
 kernel/cpu.c | 66 +++++++++++++++++++++++++-----------------------------------
 1 file changed, 27 insertions(+), 39 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 82cf9df..e37442d 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -29,6 +29,7 @@
 #ifdef CONFIG_SMP
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
+static bool cpuhp_tasks_frozen;
 
 /*
  * The following two APIs (cpu_maps_update_begin/done) must be used when
@@ -220,27 +221,30 @@ int __register_cpu_notifier(struct notifier_block *nb)
 	return raw_notifier_chain_register(&cpu_chain, nb);
 }
 
-static int __cpu_notify(unsigned long val, void *v, int nr_to_call,
+static int __cpu_notify(unsigned long val, unsigned int cpu, int nr_to_call,
 			int *nr_calls)
 {
+	unsigned long mod = cpuhp_tasks_frozen ? CPU_TASKS_FROZEN : 0;
+	void *hcpu = (void *)(long)cpu;
+
 	int ret;
 
-	ret = __raw_notifier_call_chain(&cpu_chain, val, v, nr_to_call,
+	ret = __raw_notifier_call_chain(&cpu_chain, val | mod, hcpu, nr_to_call,
 					nr_calls);
 
 	return notifier_to_errno(ret);
 }
 
-static int cpu_notify(unsigned long val, void *v)
+static int cpu_notify(unsigned long val, unsigned int cpu)
 {
-	return __cpu_notify(val, v, -1, NULL);
+	return __cpu_notify(val, cpu, -1, NULL);
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-static void cpu_notify_nofail(unsigned long val, void *v)
+static void cpu_notify_nofail(unsigned long val, unsigned int cpu)
 {
-	BUG_ON(cpu_notify(val, v));
+	BUG_ON(cpu_notify(val, cpu));
 }
 EXPORT_SYMBOL(register_cpu_notifier);
 EXPORT_SYMBOL(__register_cpu_notifier);
@@ -324,23 +328,17 @@ static inline void check_for_tasks(int dead_cpu)
 	read_unlock_irq(&tasklist_lock);
 }
 
-struct take_cpu_down_param {
-	unsigned long mod;
-	void *hcpu;
-};
-
 /* Take this CPU down. */
 static int take_cpu_down(void *_param)
 {
-	struct take_cpu_down_param *param = _param;
-	int err;
+	int err, cpu = smp_processor_id();
 
 	/* Ensure this CPU doesn't handle any more interrupts. */
 	err = __cpu_disable();
 	if (err < 0)
 		return err;
 
-	cpu_notify(CPU_DYING | param->mod, param->hcpu);
+	cpu_notify(CPU_DYING, cpu);
 	/* Give up timekeeping duties */
 	tick_handover_do_timer();
 	/* Park the stopper thread */
@@ -352,12 +350,6 @@ static int take_cpu_down(void *_param)
 static int _cpu_down(unsigned int cpu, int tasks_frozen)
 {
 	int err, nr_calls = 0;
-	void *hcpu = (void *)(long)cpu;
-	unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
-	struct take_cpu_down_param tcd_param = {
-		.mod = mod,
-		.hcpu = hcpu,
-	};
 
 	if (num_online_cpus() == 1)
 		return -EBUSY;
@@ -367,10 +359,12 @@ static int _cpu_down(unsigned int cpu, int tasks_frozen)
 
 	cpu_hotplug_begin();
 
-	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
+	cpuhp_tasks_frozen = tasks_frozen;
+
+	err = __cpu_notify(CPU_DOWN_PREPARE, cpu, -1, &nr_calls);
 	if (err) {
 		nr_calls--;
-		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
+		__cpu_notify(CPU_DOWN_FAILED, cpu, nr_calls, NULL);
 		pr_warn("%s: attempt to take down CPU %u failed\n",
 			__func__, cpu);
 		goto out_release;
@@ -402,10 +396,10 @@ static int _cpu_down(unsigned int cpu, int tasks_frozen)
 	/*
 	 * So now all preempt/rcu users must observe !cpu_active().
 	 */
-	err = stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
+	err = stop_machine(take_cpu_down, NULL, cpumask_of(cpu));
 	if (err) {
 		/* CPU didn't die: tell everyone.  Can't complain. */
-		cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
+		cpu_notify_nofail(CPU_DOWN_FAILED, cpu);
 		irq_unlock_sparse();
 		goto out_release;
 	}
@@ -432,14 +426,14 @@ static int _cpu_down(unsigned int cpu, int tasks_frozen)
 
 	/* CPU is completely dead: tell everyone.  Too late to complain. */
 	tick_cleanup_dead_cpu(cpu);
-	cpu_notify_nofail(CPU_DEAD | mod, hcpu);
+	cpu_notify_nofail(CPU_DEAD, cpu);
 
 	check_for_tasks(cpu);
 
 out_release:
 	cpu_hotplug_done();
 	if (!err)
-		cpu_notify_nofail(CPU_POST_DEAD | mod, hcpu);
+		cpu_notify_nofail(CPU_POST_DEAD, cpu);
 	return err;
 }
 
@@ -498,10 +492,8 @@ void smpboot_thread_init(void)
 /* Requires cpu_add_remove_lock to be held */
 static int _cpu_up(unsigned int cpu, int tasks_frozen)
 {
-	int ret, nr_calls = 0;
-	void *hcpu = (void *)(long)cpu;
-	unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
 	struct task_struct *idle;
+	int ret, nr_calls = 0;
 
 	cpu_hotplug_begin();
 
@@ -520,7 +512,9 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
 	if (ret)
 		goto out;
 
-	ret = __cpu_notify(CPU_UP_PREPARE | mod, hcpu, -1, &nr_calls);
+	cpuhp_tasks_frozen = tasks_frozen;
+
+	ret = __cpu_notify(CPU_UP_PREPARE, cpu, -1, &nr_calls);
 	if (ret) {
 		nr_calls--;
 		pr_warn("%s: attempt to bring up CPU %u failed\n",
@@ -536,11 +530,11 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
 	BUG_ON(!cpu_online(cpu));
 
 	/* Now call notifier in preparation. */
-	cpu_notify(CPU_ONLINE | mod, hcpu);
+	cpu_notify(CPU_ONLINE, cpu);
 
 out_notify:
 	if (ret != 0)
-		__cpu_notify(CPU_UP_CANCELED | mod, hcpu, nr_calls, NULL);
+		__cpu_notify(CPU_UP_CANCELED, cpu, nr_calls, NULL);
 out:
 	cpu_hotplug_done();
 
@@ -732,13 +726,7 @@ core_initcall(cpu_hotplug_pm_sync_init);
  */
 void notify_cpu_starting(unsigned int cpu)
 {
-	unsigned long val = CPU_STARTING;
-
-#ifdef CONFIG_PM_SLEEP_SMP
-	if (frozen_cpus != NULL && cpumask_test_cpu(cpu, frozen_cpus))
-		val = CPU_STARTING_FROZEN;
-#endif /* CONFIG_PM_SLEEP_SMP */
-	cpu_notify(val, (void *)(long)cpu);
+	cpu_notify(CPU_STARTING, cpu);
 }
 
 #endif /* CONFIG_SMP */
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 7/9] cpu: Remove unused CPU_*_FROZEN states
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (6 preceding siblings ...)
  2015-09-04 13:34 ` [RFC v0 6/9] cpu: Restructure FROZEN state handling Daniel Wagner
@ 2015-09-04 13:35 ` Daniel Wagner
  2015-09-04 13:35 ` [RFC v0 8/9] cpu: Do not set CPU_TASKS_FROZEN anymore Daniel Wagner
  2015-09-04 13:35 ` [RFC v0 9/9] doc: Update cpu-hotplug documents on removal of CPU_TASKS_FROZEN Daniel Wagner
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:35 UTC (permalink / raw)
  To: linux-kernel; +Cc: Daniel Wagner

There is no user left of the CPU_*_FROZEN states. Any subsystem
which needs to know if tasks are frozen due to a suspend
operation can ask directly via freeze_active().

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>

[This patch contains only things like

-	  if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
+	  if (action == CPU_ONLINE)

or

-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {

or

	-    case CPU_STARTING_FROZEN:

The more interesting changes are separate patches in order to get
enough eyes on them. Still, this patch is pretty big and touches
almost everything. I'll split it up if the rest of the series makes
sense.]
---
 arch/arm/kernel/hw_breakpoint.c                        |  2 +-
 arch/arm/kernel/perf_event.c                           |  2 +-
 arch/arm/kernel/smp_twd.c                              |  2 +-
 arch/arm/kvm/arm.c                                     |  1 -
 arch/arm/mm/cache-l2x0.c                               |  2 +-
 arch/arm/vfp/vfpmodule.c                               |  4 ++--
 arch/arm64/kernel/armv8_deprecated.c                   |  2 +-
 arch/arm64/kernel/fpsimd.c                             |  1 -
 arch/blackfin/kernel/perf_event.c                      |  2 +-
 arch/ia64/kernel/err_inject.c                          |  2 --
 arch/ia64/kernel/mca.c                                 |  1 -
 arch/ia64/kernel/palinfo.c                             |  2 --
 arch/ia64/kernel/salinfo.c                             |  2 --
 arch/ia64/kernel/topology.c                            |  2 --
 arch/metag/kernel/perf/perf_event.c                    |  2 +-
 arch/mips/loongson64/loongson-3/smp.c                  |  3 ---
 arch/mips/oprofile/op_model_loongson3.c                |  2 --
 arch/powerpc/kernel/sysfs.c                            |  2 --
 arch/powerpc/mm/mmu_context_nohash.c                   |  3 ---
 arch/powerpc/mm/numa.c                                 |  3 ---
 arch/powerpc/perf/core-book3s.c                        |  2 +-
 arch/powerpc/platforms/powermac/smp.c                  |  2 --
 arch/s390/kernel/perf_cpum_cf.c                        |  2 +-
 arch/s390/kernel/perf_cpum_sf.c                        |  3 +--
 arch/s390/kernel/smp.c                                 |  2 +-
 arch/s390/mm/fault.c                                   |  2 +-
 arch/sh/kernel/perf_event.c                            |  2 +-
 arch/sparc/kernel/sysfs.c                              |  2 --
 arch/x86/entry/vdso/vma.c                              |  2 +-
 arch/x86/kernel/apic/x2apic_cluster.c                  |  1 -
 arch/x86/kernel/cpu/mcheck/mce.c                       |  2 +-
 arch/x86/kernel/cpu/mcheck/mce_amd.c                   |  2 --
 arch/x86/kernel/cpu/mcheck/therm_throt.c               |  3 ---
 arch/x86/kernel/cpu/microcode/core.c                   |  2 +-
 arch/x86/kernel/cpu/perf_event.c                       |  2 +-
 arch/x86/kernel/cpu/perf_event_amd_ibs.c               |  2 +-
 arch/x86/kernel/cpu/perf_event_amd_uncore.c            |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_cqm.c             |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_rapl.c            |  2 +-
 arch/x86/kernel/cpu/perf_event_intel_uncore.c          |  4 ++--
 arch/x86/kernel/cpuid.c                                |  1 -
 arch/x86/kernel/kvm.c                                  |  2 --
 arch/x86/kernel/msr.c                                  |  1 -
 arch/x86/pci/amd_bus.c                                 |  1 -
 arch/xtensa/kernel/perf_event.c                        |  2 +-
 block/blk-iopoll.c                                     |  2 +-
 block/blk-mq.c                                         |  5 ++---
 block/blk-softirq.c                                    |  2 +-
 drivers/acpi/processor_driver.c                        |  1 -
 drivers/base/cacheinfo.c                               |  2 +-
 drivers/base/topology.c                                |  3 ---
 drivers/bus/arm-cci.c                                  |  2 +-
 drivers/bus/arm-ccn.c                                  |  2 +-
 drivers/bus/mips_cdmm.c                                |  2 +-
 drivers/clocksource/arm_arch_timer.c                   |  2 +-
 drivers/clocksource/arm_global_timer.c                 |  2 +-
 drivers/clocksource/dummy_timer.c                      |  2 +-
 drivers/clocksource/exynos_mct.c                       |  2 +-
 drivers/clocksource/metag_generic.c                    |  1 -
 drivers/clocksource/mips-gic-timer.c                   |  2 +-
 drivers/clocksource/qcom-timer.c                       |  2 +-
 drivers/clocksource/time-armada-370-xp.c               |  2 +-
 drivers/clocksource/timer-atlas7.c                     |  2 +-
 drivers/cpufreq/acpi-cpufreq.c                         |  2 --
 drivers/cpufreq/cpufreq.c                              |  2 +-
 drivers/cpuidle/coupled.c                              |  4 ++--
 drivers/cpuidle/cpuidle-powernv.c                      |  2 --
 drivers/cpuidle/cpuidle-pseries.c                      |  2 --
 drivers/hwtracing/coresight/coresight-etm3x.c          |  2 +-
 drivers/hwtracing/coresight/coresight-etm4x.c          |  2 +-
 drivers/idle/intel_idle.c                              |  2 +-
 drivers/irqchip/irq-armada-370-xp.c                    |  4 ++--
 drivers/irqchip/irq-gic-v3.c                           |  2 +-
 drivers/irqchip/irq-gic.c                              |  2 +-
 drivers/irqchip/irq-hip04.c                            |  2 +-
 drivers/leds/trigger/ledtrig-cpu.c                     |  2 +-
 drivers/md/raid5.c                                     |  2 --
 drivers/net/virtio_net.c                               |  2 +-
 drivers/oprofile/timer_int.c                           |  2 --
 drivers/pci/host/pci-xgene-msi.c                       |  2 --
 drivers/powercap/intel_rapl.c                          |  3 ---
 drivers/scsi/bnx2fc/bnx2fc_fcoe.c                      |  2 --
 drivers/scsi/bnx2i/bnx2i_init.c                        |  2 --
 drivers/scsi/fcoe/fcoe.c                               |  2 --
 drivers/scsi/virtio_scsi.c                             |  2 --
 drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c |  4 +---
 fs/buffer.c                                            |  2 +-
 kernel/events/core.c                                   |  2 +-
 kernel/padata.c                                        |  4 ----
 kernel/profile.c                                       |  4 ----
 kernel/rcu/tree.c                                      |  5 -----
 kernel/relay.c                                         |  2 --
 kernel/sched/core.c                                    | 11 ++++-------
 kernel/sched/fair.c                                    |  2 +-
 kernel/smp.c                                           |  6 +-----
 kernel/softirq.c                                       |  1 -
 kernel/time/hrtimer.c                                  |  2 --
 kernel/time/tick-sched.c                               |  2 +-
 kernel/time/timer.c                                    |  1 -
 kernel/trace/ring_buffer.c                             |  2 --
 kernel/workqueue.c                                     |  4 ++--
 lib/cpu-notifier-error-inject.c                        |  2 --
 lib/percpu_counter.c                                   |  2 +-
 lib/radix-tree.c                                       |  2 +-
 mm/memcontrol.c                                        |  2 +-
 mm/page-writeback.c                                    |  2 +-
 mm/page_alloc.c                                        |  2 +-
 mm/slab.c                                              |  6 ------
 mm/slub.c                                              |  2 --
 mm/vmscan.c                                            |  2 +-
 mm/vmstat.c                                            |  4 ----
 net/core/dev.c                                         |  2 +-
 net/core/flow.c                                        |  2 --
 net/iucv/iucv.c                                        |  6 ------
 virt/kvm/arm/arch_timer.c                              |  2 --
 virt/kvm/arm/vgic.c                                    |  2 --
 virt/kvm/kvm_main.c                                    |  1 -
 117 files changed, 74 insertions(+), 200 deletions(-)

diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
index dc7d0a9..98c76e7 100644
--- a/arch/arm/kernel/hw_breakpoint.c
+++ b/arch/arm/kernel/hw_breakpoint.c
@@ -1024,7 +1024,7 @@ out_mdbgen:
 static int dbg_reset_notify(struct notifier_block *self,
 				      unsigned long action, void *cpu)
 {
-	if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
+	if (action == CPU_ONLINE)
 		smp_call_function_single((int)cpu, reset_ctrl_regs, NULL, 1);
 
 	return NOTIFY_OK;
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 54272e0..4ae8e06 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -704,7 +704,7 @@ static int cpu_pmu_notify(struct notifier_block *b, unsigned long action,
 	int cpu = (unsigned long)hcpu;
 	struct arm_pmu *pmu = container_of(b, struct arm_pmu, hotplug_nb);
 
-	if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING)
+	if (action != CPU_STARTING)
 		return NOTIFY_DONE;
 
 	if (!cpumask_test_cpu(cpu, &pmu->supported_cpus))
diff --git a/arch/arm/kernel/smp_twd.c b/arch/arm/kernel/smp_twd.c
index e9035cd..76dcbcaf 100644
--- a/arch/arm/kernel/smp_twd.c
+++ b/arch/arm/kernel/smp_twd.c
@@ -313,7 +313,7 @@ static void twd_timer_setup(void)
 static int twd_timer_cpu_notify(struct notifier_block *self,
 				unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		twd_timer_setup();
 		break;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bc738d2..98be232 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -928,7 +928,6 @@ static int hyp_init_cpu_notify(struct notifier_block *self,
 {
 	switch (action) {
 	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
 		if (__hyp_get_vectors() == hyp_default_vectors)
 			cpu_init_hyp_mode(NULL);
 		break;
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index 71b3d33..19e26ee 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -599,7 +599,7 @@ static void l2c310_configure(void __iomem *base)
 
 static int l2c310_cpu_enable_flz(struct notifier_block *nb, unsigned long act, void *data)
 {
-	switch (act & ~CPU_TASKS_FROZEN) {
+	switch (act) {
 	case CPU_STARTING:
 		set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1));
 		break;
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 2a61e4b..09c785e 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -655,9 +655,9 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
 static int vfp_hotplug(struct notifier_block *b, unsigned long action,
 	void *hcpu)
 {
-	if (action == CPU_DYING || action == CPU_DYING_FROZEN)
+	if (action == CPU_DYING)
 		vfp_current_hw_state[(long)hcpu] = NULL;
-	else if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+	else if (action == CPU_STARTING)
 		vfp_enable(NULL);
 	return NOTIFY_OK;
 }
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7922c2e..981058f 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -625,7 +625,7 @@ static int insn_cpu_hotplug_notify(struct notifier_block *b,
 			      unsigned long action, void *hcpu)
 {
 	int rc = 0;
-	if ((action & ~CPU_TASKS_FROZEN) == CPU_STARTING)
+	if (action == CPU_STARTING)
 		rc = run_all_insn_set_hw_mode((unsigned long)hcpu);
 
 	return notifier_from_errno(rc);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 44d6f75..c5c2e0c 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -306,7 +306,6 @@ static int fpsimd_cpu_hotplug_notifier(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		per_cpu(fpsimd_last_state, cpu) = NULL;
 		break;
 	}
diff --git a/arch/blackfin/kernel/perf_event.c b/arch/blackfin/kernel/perf_event.c
index 1e9c8b0..d7815af 100644
--- a/arch/blackfin/kernel/perf_event.c
+++ b/arch/blackfin/kernel/perf_event.c
@@ -465,7 +465,7 @@ bfin_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		bfin_write_PFCTL(0);
 		bfin_pmu_setup(cpu);
diff --git a/arch/ia64/kernel/err_inject.c b/arch/ia64/kernel/err_inject.c
index 0c161ed..721991d 100644
--- a/arch/ia64/kernel/err_inject.c
+++ b/arch/ia64/kernel/err_inject.c
@@ -244,11 +244,9 @@ static int err_inject_cpu_callback(struct notifier_block *nfb,
 	sys_dev = get_cpu_device(cpu);
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		err_inject_add_dev(sys_dev);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		err_inject_remove_dev(sys_dev);
 		break;
 	}
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 2889412..cb3ba65 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1908,7 +1908,6 @@ static int mca_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		smp_call_function_single(hotcpu, ia64_mca_cmc_vector_adjust,
 					 NULL, 0);
 		break;
diff --git a/arch/ia64/kernel/palinfo.c b/arch/ia64/kernel/palinfo.c
index c39c3cd..8ba2d8f 100644
--- a/arch/ia64/kernel/palinfo.c
+++ b/arch/ia64/kernel/palinfo.c
@@ -969,11 +969,9 @@ static int palinfo_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		create_palinfo_proc_entries(hotcpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		remove_palinfo_proc_entries(hotcpu);
 		break;
 	}
diff --git a/arch/ia64/kernel/salinfo.c b/arch/ia64/kernel/salinfo.c
index 1eeffb7..8f91669 100644
--- a/arch/ia64/kernel/salinfo.c
+++ b/arch/ia64/kernel/salinfo.c
@@ -576,7 +576,6 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 	struct salinfo_data *data;
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		spin_lock_irqsave(&data_saved_lock, flags);
 		for (i = 0, data = salinfo_data;
 		     i < ARRAY_SIZE(salinfo_data);
@@ -587,7 +586,6 @@ salinfo_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu
 		spin_unlock_irqrestore(&data_saved_lock, flags);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		spin_lock_irqsave(&data_saved_lock, flags);
 		for (i = 0, data = salinfo_data;
 		     i < ARRAY_SIZE(salinfo_data);
diff --git a/arch/ia64/kernel/topology.c b/arch/ia64/kernel/topology.c
index c01fe89..58c2f50 100644
--- a/arch/ia64/kernel/topology.c
+++ b/arch/ia64/kernel/topology.c
@@ -432,11 +432,9 @@ static int cache_cpu_callback(struct notifier_block *nfb,
 	sys_dev = get_cpu_device(cpu);
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		cache_add_dev(sys_dev);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		cache_remove_dev(sys_dev);
 		break;
 	}
diff --git a/arch/metag/kernel/perf/perf_event.c b/arch/metag/kernel/perf/perf_event.c
index 2478ec6..41c1841 100644
--- a/arch/metag/kernel/perf/perf_event.c
+++ b/arch/metag/kernel/perf/perf_event.c
@@ -809,7 +809,7 @@ static int metag_pmu_cpu_notify(struct notifier_block *b, unsigned long action,
 	unsigned int cpu = (unsigned int)hcpu;
 	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
 
-	if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING)
+	if (action != CPU_STARTING)
 		return NOTIFY_DONE;
 
 	memset(cpuc, 0, sizeof(struct cpu_hw_events));
diff --git a/arch/mips/loongson64/loongson-3/smp.c b/arch/mips/loongson64/loongson-3/smp.c
index 1a4738a..b23b867 100644
--- a/arch/mips/loongson64/loongson-3/smp.c
+++ b/arch/mips/loongson64/loongson-3/smp.c
@@ -609,7 +609,6 @@ void loongson3_enable_clock(int cpu)
 	}
 }
 
-#define CPU_POST_DEAD_FROZEN	(CPU_POST_DEAD | CPU_TASKS_FROZEN)
 static int loongson3_cpu_callback(struct notifier_block *nfb,
 	unsigned long action, void *hcpu)
 {
@@ -617,12 +616,10 @@ static int loongson3_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_POST_DEAD:
-	case CPU_POST_DEAD_FROZEN:
 		pr_info("Disable clock for CPU#%d\n", cpu);
 		loongson3_disable_clock(cpu);
 		break;
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		pr_info("Enable clock for CPU#%d\n", cpu);
 		loongson3_enable_clock(cpu);
 		break;
diff --git a/arch/mips/oprofile/op_model_loongson3.c b/arch/mips/oprofile/op_model_loongson3.c
index 8bcf7fc..bd62285 100644
--- a/arch/mips/oprofile/op_model_loongson3.c
+++ b/arch/mips/oprofile/op_model_loongson3.c
@@ -173,12 +173,10 @@ static int loongson3_cpu_callback(struct notifier_block *nfb,
 {
 	switch (action) {
 	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
 		write_c0_perflo1(reg.control1);
 		write_c0_perflo2(reg.control2);
 		break;
 	case CPU_DYING:
-	case CPU_DYING_FROZEN:
 		write_c0_perflo1(0xc0000000);
 		write_c0_perflo2(0x40000000);
 		break;
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 692873b..54d5239 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -892,12 +892,10 @@ static int sysfs_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		register_cpu_online(cpu);
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		unregister_cpu_online(cpu);
 		break;
 #endif
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 986afbc..1866e9b 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -383,15 +383,12 @@ static int mmu_context_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		pr_devel("MMU: Allocating stale context map for CPU %d\n", cpu);
 		stale_map[cpu] = kzalloc(CTX_MAP_SIZE, GFP_KERNEL);
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		pr_devel("MMU: Freeing stale context map for CPU %d\n", cpu);
 		kfree(stale_map[cpu]);
 		stale_map[cpu] = NULL;
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 5e80621..fd0188b 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -582,16 +582,13 @@ static int cpu_numa_callback(struct notifier_block *nfb, unsigned long action,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		nid = numa_setup_cpu(lcpu);
 		verify_cpu_node_mapping((int)lcpu, nid);
 		ret = NOTIFY_OK;
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 		unmap_cpu_from_node(lcpu);
 		ret = NOTIFY_OK;
 		break;
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index d90893b..ec01aa0 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -2149,7 +2149,7 @@ power_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		power_pmu_setup(cpu);
 		break;
diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c
index 28a147c..88f739c 100644
--- a/arch/powerpc/platforms/powermac/smp.c
+++ b/arch/powerpc/platforms/powermac/smp.c
@@ -859,7 +859,6 @@ static int smp_core99_cpu_notify(struct notifier_block *self,
 
 	switch(action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		/* Open i2c bus if it was used for tb sync */
 		if (pmac_tb_clock_chip_host) {
 			rc = pmac_i2c_open(pmac_tb_clock_chip_host, 1);
@@ -870,7 +869,6 @@ static int smp_core99_cpu_notify(struct notifier_block *self,
 		}
 		break;
 	case CPU_ONLINE:
-	case CPU_UP_CANCELED:
 		/* Close i2c bus if it was used for tb sync */
 		if (pmac_tb_clock_chip_host)
 			pmac_i2c_close(pmac_tb_clock_chip_host);
diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
index 56fdad4..891b659 100644
--- a/arch/s390/kernel/perf_cpum_cf.c
+++ b/arch/s390/kernel/perf_cpum_cf.c
@@ -639,7 +639,7 @@ static int cpumf_pmu_notifier(struct notifier_block *self, unsigned long action,
 	unsigned int cpu = (long) hcpu;
 	int flags;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		flags = PMC_INIT;
 		smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
index b973972..b687697 100644
--- a/arch/s390/kernel/perf_cpum_sf.c
+++ b/arch/s390/kernel/perf_cpum_sf.c
@@ -1514,9 +1514,8 @@ static int cpumf_pmu_notifier(struct notifier_block *self,
 	if (!atomic_read(&num_events))
 		return NOTIFY_OK;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		flags = PMC_INIT;
 		smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
 		break;
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index c6355e6..a5a9454 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -1061,7 +1061,7 @@ static int smp_cpu_notify(struct notifier_block *self, unsigned long action,
 	struct device *s = &per_cpu(cpu_device, cpu)->dev;
 	int err = 0;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		err = sysfs_create_group(&s->kobj, &cpu_online_attr_group);
 		break;
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index f985856..b9357055 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -715,7 +715,7 @@ static int pfault_cpu_notify(struct notifier_block *self, unsigned long action,
 	struct thread_struct *thread, *next;
 	struct task_struct *tsk;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DEAD:
 		spin_lock_irq(&pfault_lock);
 		list_for_each_entry_safe(thread, next, &pfault_list, list) {
diff --git a/arch/sh/kernel/perf_event.c b/arch/sh/kernel/perf_event.c
index 7cfd7f1..abb5d97 100644
--- a/arch/sh/kernel/perf_event.c
+++ b/arch/sh/kernel/perf_event.c
@@ -364,7 +364,7 @@ sh_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		sh_pmu_setup(cpu);
 		break;
diff --git a/arch/sparc/kernel/sysfs.c b/arch/sparc/kernel/sysfs.c
index 7f41d40..0576ec7 100644
--- a/arch/sparc/kernel/sysfs.c
+++ b/arch/sparc/kernel/sysfs.c
@@ -253,12 +253,10 @@ static int sysfs_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		register_cpu_online(cpu);
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		unregister_cpu_online(cpu);
 		break;
 #endif
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 4345431..4645ea7 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -275,7 +275,7 @@ vgetcpu_cpu_notifier(struct notifier_block *n, unsigned long action, void *arg)
 {
 	long cpu = (long)arg;
 
-	if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN)
+	if (action == CPU_ONLINE)
 		smp_call_function_single(cpu, vgetcpu_cpu_init, NULL, 1);
 
 	return NOTIFY_DONE;
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index cc8311c..4aa7e73 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -166,7 +166,6 @@ update_clusterinfo(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		}
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
 		for_each_online_cpu(cpu) {
 			if (x2apic_cluster(this_cpu) != x2apic_cluster(cpu))
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index 1bd421b..08b161d 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -2381,7 +2381,7 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	unsigned int cpu = (unsigned long)hcpu;
 	struct timer_list *t = &per_cpu(mce_timer, cpu);
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		mce_device_create(cpu);
 		if (threshold_cpu_callback)
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
index e99b150..0310ea5 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
@@ -847,11 +847,9 @@ amd_64_threshold_cpu_callback(unsigned long action, unsigned int cpu)
 {
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		threshold_create_device(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		threshold_remove_device(cpu);
 		break;
 	default:
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index 1af51b1..017f549 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -285,14 +285,11 @@ thermal_throttle_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		err = thermal_throttle_add_dev(dev, cpu);
 		WARN_ON(err);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		thermal_throttle_remove_dev(dev);
 		break;
 	}
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index e49ec2c..818652e 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -423,7 +423,7 @@ mc_cpu_callback(struct notifier_block *nb, unsigned long action, void *hcpu)
 
 	dev = get_cpu_device(cpu);
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		microcode_update_cpu(cpu);
 		pr_debug("CPU%d added\n", cpu);
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 66dd3fe9..30efef5 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1461,7 +1461,7 @@ x86_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
 	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
 	int i, ret = NOTIFY_OK;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		for (i = 0 ; i < X86_PERF_KFREE_MAX; i++)
 			cpuc->kfree_on_online[i] = NULL;
diff --git a/arch/x86/kernel/cpu/perf_event_amd_ibs.c b/arch/x86/kernel/cpu/perf_event_amd_ibs.c
index 989d3c2..a42d7a0 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_ibs.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_ibs.c
@@ -911,7 +911,7 @@ static inline void perf_ibs_pm_init(void) { }
 static int
 perf_ibs_cpu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		setup_APIC_ibs(NULL);
 		break;
diff --git a/arch/x86/kernel/cpu/perf_event_amd_uncore.c b/arch/x86/kernel/cpu/perf_event_amd_uncore.c
index cc6cedb..a4fc015 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_uncore.c
@@ -465,7 +465,7 @@ amd_uncore_cpu_notifier(struct notifier_block *self, unsigned long action,
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		if (amd_uncore_cpu_up_prepare(cpu))
 			return notifier_from_errno(-ENOMEM);
diff --git a/arch/x86/kernel/cpu/perf_event_intel_cqm.c b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
index 377e8f8..d1b4508 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_cqm.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_cqm.c
@@ -1295,7 +1295,7 @@ static int intel_cqm_cpu_notifier(struct notifier_block *nb,
 {
 	unsigned int cpu  = (unsigned long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		intel_cqm_cpu_exit(cpu);
 		break;
diff --git a/arch/x86/kernel/cpu/perf_event_intel_rapl.c b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
index 81431c0..0d905a3 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_rapl.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_rapl.c
@@ -666,7 +666,7 @@ static int rapl_cpu_notifier(struct notifier_block *self,
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		rapl_cpu_prepare(cpu);
 		break;
diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
index 560e525..d9f7280 100644
--- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
@@ -1150,7 +1150,7 @@ static int uncore_cpu_notifier(struct notifier_block *self,
 	unsigned int cpu = (long)hcpu;
 
 	/* allocate/free data structure for uncore box */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		uncore_cpu_prepare(cpu, -1);
 		break;
@@ -1170,7 +1170,7 @@ static int uncore_cpu_notifier(struct notifier_block *self,
 	}
 
 	/* select the cpu that collects uncore events */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_FAILED:
 	case CPU_STARTING:
 		uncore_event_init_cpu(cpu);
diff --git a/arch/x86/kernel/cpuid.c b/arch/x86/kernel/cpuid.c
index bd3507d..edd8816 100644
--- a/arch/x86/kernel/cpuid.c
+++ b/arch/x86/kernel/cpuid.c
@@ -162,7 +162,6 @@ static int cpuid_class_cpu_callback(struct notifier_block *nfb,
 		err = cpuid_device_create(cpu);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
 		cpuid_device_destroy(cpu);
 		break;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 47190bd..07c4313 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -453,11 +453,9 @@ static int kvm_cpu_notify(struct notifier_block *self, unsigned long action,
 	switch (action) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
-	case CPU_ONLINE_FROZEN:
 		smp_call_function_single(cpu, kvm_guest_cpu_online, NULL, 0);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		smp_call_function_single(cpu, kvm_guest_cpu_offline, NULL, 1);
 		break;
 	default:
diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c
index 113e707..99674fe 100644
--- a/arch/x86/kernel/msr.c
+++ b/arch/x86/kernel/msr.c
@@ -227,7 +227,6 @@ static int msr_class_cpu_callback(struct notifier_block *nfb,
 		err = msr_device_create(cpu);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
 		msr_device_destroy(cpu);
 		break;
diff --git a/arch/x86/pci/amd_bus.c b/arch/x86/pci/amd_bus.c
index c20d2cc..e819cf6 100644
--- a/arch/x86/pci/amd_bus.c
+++ b/arch/x86/pci/amd_bus.c
@@ -343,7 +343,6 @@ static int amd_cpu_notify(struct notifier_block *self, unsigned long action,
 	int cpu = (long)hcpu;
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		smp_call_function_single(cpu, enable_pci_io_ecs, NULL, 0);
 		break;
 	default:
diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
index 54f0118..5615397 100644
--- a/arch/xtensa/kernel/perf_event.c
+++ b/arch/xtensa/kernel/perf_event.c
@@ -418,7 +418,7 @@ static void xtensa_pmu_setup(void)
 static int xtensa_pmu_notifier(struct notifier_block *self,
 			       unsigned long action, void *data)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		xtensa_pmu_setup();
 		break;
diff --git a/block/blk-iopoll.c b/block/blk-iopoll.c
index 0736729..aa1dc01 100644
--- a/block/blk-iopoll.c
+++ b/block/blk-iopoll.c
@@ -193,7 +193,7 @@ static int blk_iopoll_cpu_notify(struct notifier_block *self,
 	 * If a CPU goes away, splice its entries to the current CPU
 	 * and trigger a run of the softirq
 	 */
-	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+	if (action == CPU_DEAD) {
 		int cpu = (unsigned long) hcpu;
 
 		local_irq_disable();
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7d842db..cbfa5bb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1618,7 +1618,7 @@ static int blk_mq_hctx_notify(void *data, unsigned long action,
 {
 	struct blk_mq_hw_ctx *hctx = data;
 
-	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
+	if (action == CPU_DEAD)
 		return blk_mq_hctx_cpu_offline(hctx, cpu);
 
 	/*
@@ -2114,8 +2114,7 @@ static int blk_mq_queue_reinit_notify(struct notifier_block *nb,
 	 * offline CPUs to first hardware queue. We will re-init the queue
 	 * below to get optimal settings.
 	 */
-	if (action != CPU_DEAD && action != CPU_DEAD_FROZEN &&
-	    action != CPU_ONLINE && action != CPU_ONLINE_FROZEN)
+	if (action != CPU_DEAD && action != CPU_ONLINE)
 		return NOTIFY_OK;
 
 	mutex_lock(&all_q_mutex);
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index 53b1737..5a21131 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -85,7 +85,7 @@ static int blk_cpu_notify(struct notifier_block *self, unsigned long action,
 	 * If a CPU goes away, splice its entries to the current CPU
 	 * and trigger a run of the softirq
 	 */
-	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+	if (action == CPU_DEAD) {
 		int cpu = (unsigned long) hcpu;
 
 		local_irq_disable();
diff --git a/drivers/acpi/processor_driver.c b/drivers/acpi/processor_driver.c
index d9f7158..4fcbd67 100644
--- a/drivers/acpi/processor_driver.c
+++ b/drivers/acpi/processor_driver.c
@@ -120,7 +120,6 @@ static int acpi_cpu_soft_notify(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct acpi_processor *pr = per_cpu(processors, cpu);
 	struct acpi_device *device;
-	action &= ~CPU_TASKS_FROZEN;
 
 	/*
 	 * CPU_STARTING and CPU_DYING must not sleep. Return here since
diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index 764280a..7441ca5 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -506,7 +506,7 @@ static int cacheinfo_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	int rc = 0;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		rc = detect_cache_attributes(cpu);
 		if (!rc)
diff --git a/drivers/base/topology.c b/drivers/base/topology.c
index 8b7d7f8..a30d92f 100644
--- a/drivers/base/topology.c
+++ b/drivers/base/topology.c
@@ -120,13 +120,10 @@ static int topology_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		rc = topology_add_dev(cpu);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		topology_remove_dev(cpu);
 		break;
 	}
diff --git a/drivers/bus/arm-cci.c b/drivers/bus/arm-cci.c
index 577cc4b..33f2be3 100644
--- a/drivers/bus/arm-cci.c
+++ b/drivers/bus/arm-cci.c
@@ -1309,7 +1309,7 @@ static int cci_pmu_cpu_notifier(struct notifier_block *self,
 	unsigned int cpu = (long)hcpu;
 	unsigned int target;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		if (!cpumask_test_and_clear_cpu(cpu, &cci_pmu->cpus))
 			break;
diff --git a/drivers/bus/arm-ccn.c b/drivers/bus/arm-ccn.c
index 7d9879e..f55a045 100644
--- a/drivers/bus/arm-ccn.c
+++ b/drivers/bus/arm-ccn.c
@@ -1179,7 +1179,7 @@ static int arm_ccn_pmu_cpu_notifier(struct notifier_block *nb,
 	unsigned int cpu = (long)hcpu; /* for (long) see kernel/cpu.c */
 	unsigned int target;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		if (!cpumask_test_and_clear_cpu(cpu, &dt->cpu))
 			break;
diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
index ab3bde1..bba990e8 100644
--- a/drivers/bus/mips_cdmm.c
+++ b/drivers/bus/mips_cdmm.c
@@ -662,7 +662,7 @@ static int mips_cdmm_cpu_notify(struct notifier_block *nb,
 {
 	unsigned int cpu = (unsigned int)data;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
 		work_on_cpu(cpu, mips_cdmm_bus_up, &cpu);
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index d6e3e49..dea3118 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -508,7 +508,7 @@ static int arch_timer_cpu_notify(struct notifier_block *self,
 	 * Grab cpu pointer in each case to avoid spurious
 	 * preemptible warnings
 	 */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		arch_timer_setup(this_cpu_ptr(arch_timer_evt));
 		break;
diff --git a/drivers/clocksource/arm_global_timer.c b/drivers/clocksource/arm_global_timer.c
index 29ea50a..0019bd8 100644
--- a/drivers/clocksource/arm_global_timer.c
+++ b/drivers/clocksource/arm_global_timer.c
@@ -222,7 +222,7 @@ static void __init gt_clocksource_init(void)
 static int gt_cpu_notify(struct notifier_block *self, unsigned long action,
 			 void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		gt_clockevents_init(this_cpu_ptr(gt_evt));
 		break;
diff --git a/drivers/clocksource/dummy_timer.c b/drivers/clocksource/dummy_timer.c
index 776b6c8..58e8af6 100644
--- a/drivers/clocksource/dummy_timer.c
+++ b/drivers/clocksource/dummy_timer.c
@@ -34,7 +34,7 @@ static void dummy_timer_setup(void)
 static int dummy_timer_cpu_notify(struct notifier_block *self,
 				      unsigned long action, void *hcpu)
 {
-	if ((action & ~CPU_TASKS_FROZEN) == CPU_STARTING)
+	if (action == CPU_STARTING)
 		dummy_timer_setup();
 
 	return NOTIFY_OK;
diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
index 029f96a..7e0e7d4 100644
--- a/drivers/clocksource/exynos_mct.c
+++ b/drivers/clocksource/exynos_mct.c
@@ -492,7 +492,7 @@ static int exynos4_mct_cpu_notify(struct notifier_block *self,
 	 * Grab cpu pointer in each case to avoid spurious
 	 * preemptible warnings
 	 */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		mevt = this_cpu_ptr(&percpu_mct_tick);
 		exynos4_local_timer_setup(mevt);
diff --git a/drivers/clocksource/metag_generic.c b/drivers/clocksource/metag_generic.c
index bcd5c0d..7b8e2eb 100644
--- a/drivers/clocksource/metag_generic.c
+++ b/drivers/clocksource/metag_generic.c
@@ -141,7 +141,6 @@ static int arch_timer_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
 		arch_timer_setup(cpu);
 		break;
 	}
diff --git a/drivers/clocksource/mips-gic-timer.c b/drivers/clocksource/mips-gic-timer.c
index c3810b6..7eb2d78 100644
--- a/drivers/clocksource/mips-gic-timer.c
+++ b/drivers/clocksource/mips-gic-timer.c
@@ -75,7 +75,7 @@ static void gic_clockevent_cpu_exit(struct clock_event_device *cd)
 static int gic_cpu_notifier(struct notifier_block *nb, unsigned long action,
 				void *data)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		gic_clockevent_cpu_init(this_cpu_ptr(&gic_clockevent_device));
 		break;
diff --git a/drivers/clocksource/qcom-timer.c b/drivers/clocksource/qcom-timer.c
index f8e09f9..e2c2a00 100644
--- a/drivers/clocksource/qcom-timer.c
+++ b/drivers/clocksource/qcom-timer.c
@@ -148,7 +148,7 @@ static int msm_timer_cpu_notify(struct notifier_block *self,
 	 * Grab cpu pointer in each case to avoid spurious
 	 * preemptible warnings
 	 */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		msm_local_timer_setup(this_cpu_ptr(msm_evt));
 		break;
diff --git a/drivers/clocksource/time-armada-370-xp.c b/drivers/clocksource/time-armada-370-xp.c
index 2162796..8f43645 100644
--- a/drivers/clocksource/time-armada-370-xp.c
+++ b/drivers/clocksource/time-armada-370-xp.c
@@ -211,7 +211,7 @@ static int armada_370_xp_timer_cpu_notify(struct notifier_block *self,
 	 * Grab cpu pointer in each case to avoid spurious
 	 * preemptible warnings
 	 */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		armada_370_xp_timer_setup(this_cpu_ptr(armada_370_xp_evt));
 		break;
diff --git a/drivers/clocksource/timer-atlas7.c b/drivers/clocksource/timer-atlas7.c
index 27fa136..03bfe07 100644
--- a/drivers/clocksource/timer-atlas7.c
+++ b/drivers/clocksource/timer-atlas7.c
@@ -222,7 +222,7 @@ static int sirfsoc_cpu_notify(struct notifier_block *self,
 	 * Grab cpu pointer in each case to avoid spurious
 	 * preemptible warnings
 	 */
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		sirfsoc_local_timer_setup(this_cpu_ptr(sirfsoc_clockevent));
 		break;
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index 0136dfc..a7b349e 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -535,12 +535,10 @@ static int boost_notify(struct notifier_block *nb, unsigned long action,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		boost_set_msrs(acpi_cpufreq_driver.boost_enabled, cpumask);
 		break;
 
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		boost_set_msrs(1, cpumask);
 		break;
 
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 9bb09ce..59ff6e2 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2389,7 +2389,7 @@ static int cpufreq_cpu_callback(struct notifier_block *nfb,
 
 	dev = get_cpu_device(cpu);
 	if (dev) {
-		switch (action & ~CPU_TASKS_FROZEN) {
+		switch (action) {
 		case CPU_ONLINE:
 			cpufreq_add_dev(dev, NULL);
 			break;
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 7936dce..7664f1d 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -745,7 +745,7 @@ static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
 	int cpu = (unsigned long)hcpu;
 	struct cpuidle_device *dev;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_DOWN_PREPARE:
 	case CPU_ONLINE:
@@ -763,7 +763,7 @@ static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
 	if (!dev || !dev->coupled)
 		goto out;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 	case CPU_DOWN_PREPARE:
 		cpuidle_coupled_prevent_idle(dev->coupled);
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 845bafc..7039df0 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -115,14 +115,12 @@ static int powernv_cpuidle_add_cpu_notifier(struct notifier_block *n,
 	if (dev && cpuidle_get_driver()) {
 		switch (action) {
 		case CPU_ONLINE:
-		case CPU_ONLINE_FROZEN:
 			cpuidle_pause_and_lock();
 			cpuidle_enable_device(dev);
 			cpuidle_resume_and_unlock();
 			break;
 
 		case CPU_DEAD:
-		case CPU_DEAD_FROZEN:
 			cpuidle_pause_and_lock();
 			cpuidle_disable_device(dev);
 			cpuidle_resume_and_unlock();
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 07135e0..bc45457 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -181,14 +181,12 @@ static int pseries_cpuidle_add_cpu_notifier(struct notifier_block *n,
 	if (dev && cpuidle_get_driver()) {
 		switch (action) {
 		case CPU_ONLINE:
-		case CPU_ONLINE_FROZEN:
 			cpuidle_pause_and_lock();
 			cpuidle_enable_device(dev);
 			cpuidle_resume_and_unlock();
 			break;
 
 		case CPU_DEAD:
-		case CPU_DEAD_FROZEN:
 			cpuidle_pause_and_lock();
 			cpuidle_disable_device(dev);
 			cpuidle_resume_and_unlock();
diff --git a/drivers/hwtracing/coresight/coresight-etm3x.c b/drivers/hwtracing/coresight/coresight-etm3x.c
index bf2476e..8885d20 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x.c
@@ -1633,7 +1633,7 @@ static int etm_cpu_callback(struct notifier_block *nfb, unsigned long action,
 	if (!etmdrvdata[cpu])
 		goto out;
 
-	switch (action & (~CPU_TASKS_FROZEN)) {
+	switch (action) {
 	case CPU_STARTING:
 		spin_lock(&etmdrvdata[cpu]->spinlock);
 		if (!etmdrvdata[cpu]->os_unlock) {
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
index 254a81a..cde5f0a 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
@@ -2548,7 +2548,7 @@ static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action,
 	if (!etmdrvdata[cpu])
 		goto out;
 
-	switch (action & (~CPU_TASKS_FROZEN)) {
+	switch (action) {
 	case CPU_STARTING:
 		spin_lock(&etmdrvdata[cpu]->spinlock);
 		if (!etmdrvdata[cpu]->os_unlock) {
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index 2a36a95..397942a 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -716,7 +716,7 @@ static int cpu_hotplug_notify(struct notifier_block *n,
 	int hotcpu = (unsigned long)hcpu;
 	struct cpuidle_device *dev;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 
 		if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE)
diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
index 39b72da..b2ea69b 100644
--- a/drivers/irqchip/irq-armada-370-xp.c
+++ b/drivers/irqchip/irq-armada-370-xp.c
@@ -378,7 +378,7 @@ static void armada_mpic_send_doorbell(const struct cpumask *mask,
 static int armada_xp_mpic_secondary_init(struct notifier_block *nfb,
 					 unsigned long action, void *hcpu)
 {
-	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) {
+	if (action == CPU_STARTING) {
 		armada_xp_mpic_perf_init();
 		armada_xp_mpic_smp_cpu_init();
 	}
@@ -394,7 +394,7 @@ static struct notifier_block armada_370_xp_mpic_cpu_notifier = {
 static int mpic_cascaded_secondary_init(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) {
+	if (action == CPU_STARTING) {
 		armada_xp_mpic_perf_init();
 		enable_percpu_irq(parent_irq, IRQ_TYPE_NONE);
 	}
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 7deed6e..a17f97a 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -544,7 +544,7 @@ static void gic_cpu_init(void)
 static int gic_secondary_init(struct notifier_block *nfb,
 			      unsigned long action, void *hcpu)
 {
-	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+	if (action == CPU_STARTING)
 		gic_cpu_init();
 	return NOTIFY_OK;
 }
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index e6b7ed5..d2b3136 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -947,7 +947,7 @@ static int gic_irq_domain_xlate(struct irq_domain *d,
 static int gic_secondary_init(struct notifier_block *nfb, unsigned long action,
 			      void *hcpu)
 {
-	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+	if (action == CPU_STARTING)
 		gic_cpu_init(&gic_data[0]);
 	return NOTIFY_OK;
 }
diff --git a/drivers/irqchip/irq-hip04.c b/drivers/irqchip/irq-hip04.c
index a0128c7..8b5f78b 100644
--- a/drivers/irqchip/irq-hip04.c
+++ b/drivers/irqchip/irq-hip04.c
@@ -347,7 +347,7 @@ static int hip04_irq_secondary_init(struct notifier_block *nfb,
 				    unsigned long action,
 				    void *hcpu)
 {
-	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+	if (action == CPU_STARTING)
 		hip04_irq_cpu_init(&hip04_data);
 	return NOTIFY_OK;
 }
diff --git a/drivers/leds/trigger/ledtrig-cpu.c b/drivers/leds/trigger/ledtrig-cpu.c
index aec0f02..4297b97 100644
--- a/drivers/leds/trigger/ledtrig-cpu.c
+++ b/drivers/leds/trigger/ledtrig-cpu.c
@@ -96,7 +96,7 @@ static struct syscore_ops ledtrig_cpu_syscore_ops = {
 static int ledtrig_cpu_notify(struct notifier_block *self,
 					   unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		ledtrig_cpu(CPU_LED_START);
 		break;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index f757023..bd7ddd4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6331,7 +6331,6 @@ static int raid456_cpu_notify(struct notifier_block *nfb, unsigned long action,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		if (alloc_scratch_buffer(conf, percpu)) {
 			pr_err("%s: failed memory allocation for cpu%ld\n",
 			       __func__, cpu);
@@ -6339,7 +6338,6 @@ static int raid456_cpu_notify(struct notifier_block *nfb, unsigned long action,
 		}
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		free_scratch_buffer(conf, per_cpu_ptr(conf->percpu, cpu));
 		break;
 	default:
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 237f8e5..8e4ec21d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1291,7 +1291,7 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
 {
 	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
 
-	switch(action & ~CPU_TASKS_FROZEN) {
+	switch(action) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
 	case CPU_DEAD:
diff --git a/drivers/oprofile/timer_int.c b/drivers/oprofile/timer_int.c
index bdef916..d40d417 100644
--- a/drivers/oprofile/timer_int.c
+++ b/drivers/oprofile/timer_int.c
@@ -81,12 +81,10 @@ static int oprofile_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		smp_call_function_single(cpu, __oprofile_hrtimer_start,
 					 NULL, 1);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		__oprofile_hrtimer_stop(cpu);
 		break;
 	}
diff --git a/drivers/pci/host/pci-xgene-msi.c b/drivers/pci/host/pci-xgene-msi.c
index 996327c..a851a9b 100644
--- a/drivers/pci/host/pci-xgene-msi.c
+++ b/drivers/pci/host/pci-xgene-msi.c
@@ -450,11 +450,9 @@ static int xgene_msi_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		xgene_msi_hwirq_alloc(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		xgene_msi_hwirq_free(cpu);
 		break;
 	default:
diff --git a/drivers/powercap/intel_rapl.c b/drivers/powercap/intel_rapl.c
index 482b22d..38d15cb 100644
--- a/drivers/powercap/intel_rapl.c
+++ b/drivers/powercap/intel_rapl.c
@@ -1476,9 +1476,7 @@ static int rapl_cpu_callback(struct notifier_block *nfb,
 	phy_package_id = topology_physical_package_id(cpu);
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
 		rp = find_package_by_id(phy_package_id);
 		if (rp)
 			++rp->nr_cpus;
@@ -1486,7 +1484,6 @@ static int rapl_cpu_callback(struct notifier_block *nfb,
 			rapl_add_package(cpu);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		rp = find_package_by_id(phy_package_id);
 		if (!rp)
 			break;
diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
index 98d06d1..20fdad7 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
@@ -2518,12 +2518,10 @@ static int bnx2fc_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		printk(PFX "CPU %x online: Create Rx thread\n", cpu);
 		bnx2fc_percpu_thread_create(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		printk(PFX "CPU %x offline: Remove Rx thread\n", cpu);
 		bnx2fc_percpu_thread_destroy(cpu);
 		break;
diff --git a/drivers/scsi/bnx2i/bnx2i_init.c b/drivers/scsi/bnx2i/bnx2i_init.c
index c8b410c..257dcca 100644
--- a/drivers/scsi/bnx2i/bnx2i_init.c
+++ b/drivers/scsi/bnx2i/bnx2i_init.c
@@ -480,13 +480,11 @@ static int bnx2i_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		printk(KERN_INFO "bnx2i: CPU %x online: Create Rx thread\n",
 			cpu);
 		bnx2i_percpu_thread_create(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		printk(KERN_INFO "CPU %x offline: Remove Rx thread\n", cpu);
 		bnx2i_percpu_thread_destroy(cpu);
 		break;
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index ec193a8..6ccc97c 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1379,12 +1379,10 @@ static int fcoe_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		FCOE_DBG("CPU %x online: Create Rx thread\n", cpu);
 		fcoe_percpu_thread_create(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		FCOE_DBG("CPU %x offline: Remove Rx thread\n", cpu);
 		fcoe_percpu_thread_destroy(cpu);
 		break;
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 7dbbb29..a76141c 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -858,9 +858,7 @@ static int virtscsi_cpu_callback(struct notifier_block *nfb,
 	struct virtio_scsi *vscsi = container_of(nfb, struct virtio_scsi, nb);
 	switch(action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		__virtscsi_set_affinity(vscsi, true);
 		break;
 	default:
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
index f926224..436bf1e 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
@@ -952,14 +952,12 @@ cfs_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu)
 
 	switch (action) {
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		spin_lock(&cpt_data.cpt_lock);
 		cpt_data.cpt_version++;
 		spin_unlock(&cpt_data.cpt_lock);
 	default:
-		if (action != CPU_DEAD && action != CPU_DEAD_FROZEN) {
+		if (action != CPU_DEAD) {
 			CDEBUG(D_INFO, "CPU changed [cpu %u action %lx]\n",
 			       cpu, action);
 			break;
diff --git a/fs/buffer.c b/fs/buffer.c
index 1cf7a53..339adac 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3384,7 +3384,7 @@ static void buffer_exit_cpu(int cpu)
 static int buffer_cpu_notify(struct notifier_block *self,
 			      unsigned long action, void *hcpu)
 {
-	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN)
+	if (action == CPU_DEAD)
 		buffer_exit_cpu((unsigned long)hcpu);
 	return NOTIFY_OK;
 }
diff --git a/kernel/events/core.c b/kernel/events/core.c
index ae16867..2b9d351 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9085,7 +9085,7 @@ perf_cpu_notify(struct notifier_block *self, unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 
 	case CPU_UP_PREPARE:
 	case CPU_DOWN_FAILED:
diff --git a/kernel/padata.c b/kernel/padata.c
index b38bea9..a5d004e 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -844,9 +844,7 @@ static int padata_cpu_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
 		if (!pinst_has_cpu(pinst, cpu))
 			break;
 		mutex_lock(&pinst->lock);
@@ -857,9 +855,7 @@ static int padata_cpu_callback(struct notifier_block *nfb,
 		break;
 
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 		if (!pinst_has_cpu(pinst, cpu))
 			break;
 		mutex_lock(&pinst->lock);
diff --git a/kernel/profile.c b/kernel/profile.c
index a7bcd28..932e377 100644
--- a/kernel/profile.c
+++ b/kernel/profile.c
@@ -335,7 +335,6 @@ static int profile_cpu_callback(struct notifier_block *info,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		node = cpu_to_mem(cpu);
 		per_cpu(cpu_profile_flip, cpu) = 0;
 		if (!per_cpu(cpu_profile_hits, cpu)[1]) {
@@ -361,14 +360,11 @@ out_free:
 		__free_page(page);
 		return notifier_from_errno(-ENOMEM);
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		if (prof_cpu_mask != NULL)
 			cpumask_set_cpu(cpu, prof_cpu_mask);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		if (prof_cpu_mask != NULL)
 			cpumask_clear_cpu(cpu, prof_cpu_mask);
 		if (per_cpu(cpu_profile_hits, cpu)[0]) {
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9f75f25..04adfb0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3953,20 +3953,17 @@ int rcu_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		rcu_prepare_cpu(cpu);
 		rcu_prepare_kthreads(cpu);
 		rcu_spawn_all_nocb_kthreads(cpu);
 		break;
 	case CPU_ONLINE:
-	case CPU_DOWN_FAILED:
 		rcu_boost_kthread_setaffinity(rnp, -1);
 		break;
 	case CPU_DOWN_PREPARE:
 		rcu_boost_kthread_setaffinity(rnp, cpu);
 		break;
 	case CPU_DYING:
-	case CPU_DYING_FROZEN:
 		for_each_rcu_flavor(rsp)
 			rcu_cleanup_dying_cpu(rsp);
 		break;
@@ -3976,9 +3973,7 @@ int rcu_cpu_notify(struct notifier_block *self,
 		}
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 		for_each_rcu_flavor(rsp) {
 			rcu_cleanup_dead_cpu(cpu, rsp);
 			do_nocb_deferred_wakeup(per_cpu_ptr(rsp->rda, cpu));
diff --git a/kernel/relay.c b/kernel/relay.c
index 0b4570c..f1d1b8f 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -522,7 +522,6 @@ static int relay_hotcpu_callback(struct notifier_block *nb,
 
 	switch(action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		mutex_lock(&relay_channels_mutex);
 		list_for_each_entry(chan, &relay_channels, list) {
 			if (chan->buf[hotcpu])
@@ -539,7 +538,6 @@ static int relay_hotcpu_callback(struct notifier_block *nb,
 		mutex_unlock(&relay_channels_mutex);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		/* No need to flush the cpu : will be flushed upon
 		 * final relay_flush() call. */
 		break;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 36b00eb..5f08f4e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -389,11 +389,8 @@ hotplug_hrtick(struct notifier_block *nfb, unsigned long action, void *hcpu)
 
 	switch (action) {
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		hrtick_clear(cpu_rq(cpu));
 		return NOTIFY_OK;
 	}
@@ -5424,7 +5421,7 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 	unsigned long flags;
 	struct rq *rq = cpu_rq(cpu);
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 
 	case CPU_UP_PREPARE:
 		rq->calc_load_update = calc_load_update;
@@ -5486,7 +5483,7 @@ static void set_cpu_rq_start_time(void)
 static int sched_cpu_active(struct notifier_block *nfb,
 				      unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_STARTING:
 		set_cpu_rq_start_time();
 		return NOTIFY_OK;
@@ -5509,7 +5506,7 @@ static int sched_cpu_active(struct notifier_block *nfb,
 static int sched_cpu_inactive(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		set_cpu_active((long)hcpu, false);
 		return NOTIFY_OK;
@@ -6710,7 +6707,7 @@ static int sched_domains_numa_masks_update(struct notifier_block *nfb,
 {
 	int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 		sched_domains_numa_masks_set(cpu);
 		break;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e2e348..34d667c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7497,7 +7497,7 @@ void nohz_balance_enter_idle(int cpu)
 static int sched_ilb_notifier(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DYING:
 		nohz_balance_exit_idle(smp_processor_id());
 		return NOTIFY_OK;
diff --git a/kernel/smp.c b/kernel/smp.c
index 0785447..264d6e6 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -41,7 +41,6 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
 				cpu_to_node(cpu)))
 			return notifier_from_errno(-ENOMEM);
@@ -54,17 +53,14 @@ hotplug_cfd(struct notifier_block *nfb, unsigned long action, void *hcpu)
 
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
-		/* Fall-through to the CPU_DEAD[_FROZEN] case. */
+		/* Fall-through to the CPU_DEAD case. */
 
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		free_cpumask_var(cfd->cpumask);
 		free_percpu(cfd->csd);
 		break;
 
 	case CPU_DYING:
-	case CPU_DYING_FROZEN:
 		/*
 		 * The IPIs for the smp-call-function callbacks queued by other
 		 * CPUs might arrive late, either due to hardware latencies or
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 479e443..6374fe4 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -732,7 +732,6 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
 	switch (action) {
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		takeover_tasklets((unsigned long)hcpu);
 		break;
 #endif /* CONFIG_HOTPLUG_CPU */
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 435b885..cd02a1d 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1668,13 +1668,11 @@ static int hrtimer_cpu_notify(struct notifier_block *self,
 	switch (action) {
 
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		init_hrtimers_cpu(scpu);
 		break;
 
 #ifdef CONFIG_HOTPLUG_CPU
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		migrate_hrtimers(scpu);
 		break;
 #endif
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 7c7ec45..d8817ad 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -295,7 +295,7 @@ static int tick_nohz_cpu_down_callback(struct notifier_block *nfb,
 {
 	unsigned int cpu = (unsigned long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		/*
 		 * The boot CPU handles housekeeping duty (unbound timers,
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 84190f0..5b70cdc 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1598,7 +1598,6 @@ static int timer_cpu_notify(struct notifier_block *self,
 {
 	switch (action) {
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		migrate_timers((long)hcpu);
 		break;
 	default:
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 6260717..b35759b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4609,7 +4609,6 @@ static int rb_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		if (cpumask_test_cpu(cpu, buffer->cpumask))
 			return NOTIFY_OK;
 
@@ -4639,7 +4638,6 @@ static int rb_cpu_notify(struct notifier_block *self,
 		cpumask_set_cpu(cpu, buffer->cpumask);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		/*
 		 * Do nothing.
 		 *  If we were to free the buffer, then the user would
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 811edb7..7b13657 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4501,7 +4501,7 @@ static int workqueue_cpu_up_callback(struct notifier_block *nfb,
 	struct workqueue_struct *wq;
 	int pi;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_UP_PREPARE:
 		for_each_cpu_worker_pool(pool, cpu) {
 			if (pool->nr_workers)
@@ -4548,7 +4548,7 @@ static int workqueue_cpu_down_callback(struct notifier_block *nfb,
 	struct work_struct unbind_work;
 	struct workqueue_struct *wq;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_DOWN_PREPARE:
 		/* unbinding per-cpu workers should happen on the local CPU */
 		INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
diff --git a/lib/cpu-notifier-error-inject.c b/lib/cpu-notifier-error-inject.c
index 707ca24..7f651e3 100644
--- a/lib/cpu-notifier-error-inject.c
+++ b/lib/cpu-notifier-error-inject.c
@@ -11,9 +11,7 @@ MODULE_PARM_DESC(priority, "specify cpu notifier priority");
 static struct notifier_err_inject cpu_notifier_err_inject = {
 	.actions = {
 		{ NOTIFIER_ERR_INJECT_ACTION(CPU_UP_PREPARE) },
-		{ NOTIFIER_ERR_INJECT_ACTION(CPU_UP_PREPARE_FROZEN) },
 		{ NOTIFIER_ERR_INJECT_ACTION(CPU_DOWN_PREPARE) },
-		{ NOTIFIER_ERR_INJECT_ACTION(CPU_DOWN_PREPARE_FROZEN) },
 		{}
 	}
 };
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index f051d69..44bb9f4 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -173,7 +173,7 @@ static int percpu_counter_hotcpu_callback(struct notifier_block *nb,
 	struct percpu_counter *fbc;
 
 	compute_batch_value();
-	if (action != CPU_DEAD && action != CPU_DEAD_FROZEN)
+	if (action != CPU_DEAD)
 		return NOTIFY_OK;
 
 	cpu = (unsigned long)hcpu;
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index f9ebe1c..b4e9302 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -1471,7 +1471,7 @@ static int radix_tree_callback(struct notifier_block *nfb,
        struct radix_tree_node *node;
 
        /* Free per-cpu pool of perloaded nodes */
-       if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+       if (action == CPU_DEAD) {
                rtp = &per_cpu(radix_tree_preloads, cpu);
                while (rtp->nr) {
 			node = rtp->nodes;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index acb93c5..7feada6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2211,7 +2211,7 @@ static int memcg_cpu_hotplug_callback(struct notifier_block *nb,
 	if (action == CPU_ONLINE)
 		return NOTIFY_OK;
 
-	if (action != CPU_DEAD && action != CPU_DEAD_FROZEN)
+	if (action != CPU_DEAD)
 		return NOTIFY_OK;
 
 	stock = &per_cpu(memcg_stock, cpu);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 5cccc12..e1f3c9d 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2028,7 +2028,7 @@ ratelimit_handler(struct notifier_block *self, unsigned long action,
 		  void *hcpu)
 {
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 	case CPU_ONLINE:
 	case CPU_DEAD:
 		writeback_set_ratelimit();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b5240b..4973bc2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6026,7 +6026,7 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
 {
 	int cpu = (unsigned long)hcpu;
 
-	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+	if (action == CPU_DEAD) {
 		lru_add_drain_cpu(cpu);
 		drain_pages(cpu);
 
diff --git a/mm/slab.c b/mm/slab.c
index bbd0b47..7b268c2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1236,18 +1236,15 @@ static int cpuup_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		mutex_lock(&slab_mutex);
 		err = cpuup_prepare(cpu);
 		mutex_unlock(&slab_mutex);
 		break;
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		start_cpu_timer(cpu);
 		break;
 #ifdef CONFIG_HOTPLUG_CPU
   	case CPU_DOWN_PREPARE:
-  	case CPU_DOWN_PREPARE_FROZEN:
 		/*
 		 * Shutdown cache reaper. Note that the slab_mutex is
 		 * held so that if cache_reap() is invoked it cannot do
@@ -1259,11 +1256,9 @@ static int cpuup_callback(struct notifier_block *nfb,
 		per_cpu(slab_reap_work, cpu).work.func = NULL;
   		break;
   	case CPU_DOWN_FAILED:
-  	case CPU_DOWN_FAILED_FROZEN:
 		start_cpu_timer(cpu);
   		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		/*
 		 * Even if all the cpus of a node are down, we don't free the
 		 * kmem_cache_node of any cache. This to avoid a race between
@@ -1275,7 +1270,6 @@ static int cpuup_callback(struct notifier_block *nfb,
 		/* fall through */
 #endif
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 		mutex_lock(&slab_mutex);
 		cpuup_canceled(cpu);
 		mutex_unlock(&slab_mutex);
diff --git a/mm/slub.c b/mm/slub.c
index f68c0e5..2432cb5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3783,9 +3783,7 @@ static int slab_cpuup_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		mutex_lock(&slab_mutex);
 		list_for_each_entry(s, &slab_caches, list) {
 			local_irq_save(flags);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8286938..f97563f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3553,7 +3553,7 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
 {
 	int nid;
 
-	if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN) {
+	if (action == CPU_ONLINE) {
 		for_each_node_state(nid, N_MEMORY) {
 			pg_data_t *pgdat = NODE_DATA(nid);
 			const struct cpumask *mask;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4f5cd97..dcca6c1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1489,22 +1489,18 @@ static int vmstat_cpuup_callback(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 		refresh_zone_stat_thresholds();
 		node_set_state(cpu_to_node(cpu), N_CPU);
 		cpumask_set_cpu(cpu, cpu_stat_off);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
 		cpumask_clear_cpu(cpu, cpu_stat_off);
 		break;
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
 		cpumask_set_cpu(cpu, cpu_stat_off);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		refresh_zone_stat_thresholds();
 		vmstat_cpu_dead(cpu_to_node(cpu));
 		break;
diff --git a/net/core/dev.c b/net/core/dev.c
index a8e4dd4..3bf474c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7252,7 +7252,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu, oldcpu = (unsigned long)ocpu;
 	struct softnet_data *sd, *oldsd;
 
-	if (action != CPU_DEAD && action != CPU_DEAD_FROZEN)
+	if (action != CPU_DEAD)
 		return NOTIFY_OK;
 
 	local_irq_disable();
diff --git a/net/core/flow.c b/net/core/flow.c
index 1033725..5468724 100644
--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -419,13 +419,11 @@ static int flow_cache_cpu(struct notifier_block *nfb,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		res = flow_cache_cpu_prepare(fc, cpu);
 		if (res)
 			return notifier_from_errno(res);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		__flow_cache_shrink(fc, fcp, 0);
 		break;
 	}
diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
index 2a6a1fd..85d92aa 100644
--- a/net/iucv/iucv.c
+++ b/net/iucv/iucv.c
@@ -665,26 +665,20 @@ static int iucv_cpu_notify(struct notifier_block *self,
 
 	switch (action) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		if (alloc_iucv_data(cpu))
 			return notifier_from_errno(-ENOMEM);
 		break;
 	case CPU_UP_CANCELED:
-	case CPU_UP_CANCELED_FROZEN:
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		free_iucv_data(cpu);
 		break;
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
 		if (!iucv_path_table)
 			break;
 		smp_call_function_single(cpu, iucv_declare_cpu, NULL, 1);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		if (!iucv_path_table)
 			break;
 		cpumask_copy(&cpumask, &iucv_buffer_cpumask);
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 98c95f2..eb55847 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -252,11 +252,9 @@ static int kvm_timer_cpu_notify(struct notifier_block *self,
 {
 	switch (action) {
 	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
 		kvm_timer_init_interrupt(NULL);
 		break;
 	case CPU_DYING:
-	case CPU_DYING_FROZEN:
 		disable_percpu_irq(host_vtimer_irq);
 		break;
 	}
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index bc40137..11fd236 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -2069,11 +2069,9 @@ static int vgic_cpu_notify(struct notifier_block *self,
 {
 	switch (action) {
 	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
 		vgic_init_maintenance_interrupt(NULL);
 		break;
 	case CPU_DYING:
-	case CPU_DYING_FROZEN:
 		disable_percpu_irq(vgic->maint_irq);
 		break;
 	}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d8db2f8f..af5a398 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3024,7 +3024,6 @@ static int hardware_enable_all(void)
 static int kvm_cpu_hotplug(struct notifier_block *notifier, unsigned long val,
 			   void *v)
 {
-	val &= ~CPU_TASKS_FROZEN;
 	switch (val) {
 	case CPU_DYING:
 		hardware_disable();
-- 
2.4.3



* [RFC v0 8/9] cpu: Do not set CPU_TASKS_FROZEN anymore
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (7 preceding siblings ...)
  2015-09-04 13:35 ` [RFC v0 7/9] cpu: Remove unused CPU_*_FROZEN states Daniel Wagner
@ 2015-09-04 13:35 ` Daniel Wagner
  2015-09-04 13:35 ` [RFC v0 9/9] doc: Update cpu-hotplug documents on removal of CPU_TASKS_FROZEN Daniel Wagner
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, Paul E. McKenney, Andrew Morton,
	David Hildenbrand, Greg Kroah-Hartman, Ingo Molnar,
	Mathias Krause, Nicolas Iooss, Paul Gortmaker, Sudeep Holla,
	Thomas Gleixner, Vitaly Kuznetsov

There is no user left of CPU_TASKS_FROZEN, so we can stop propagating
this information.
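
Purely as illustration (not part of this patch): notifiers now match
only the plain action codes, and a subsystem that still wants to treat
a CPU coming up during resume specially queries the PM core directly.
A minimal sketch, using the foobar_*() placeholder handler from
Documentation/cpu-hotplug.txt and the freeze_active() helper
introduced earlier in this series:

	#include <linux/suspend.h>

	/* called from the CPU_ONLINE notifier instead of CPU_ONLINE_FROZEN */
	static void foobar_cpu_online(unsigned int cpu)
	{
		if (freeze_active()) {
			/* system-wide suspend/resume in progress, keep it cheap */
			return;
		}
		foobar_online_action(cpu);
	}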

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: linux-kernel@vger.kernel.org
---
 include/linux/cpu.h | 15 ---------------
 kernel/cpu.c        | 22 ++++++++--------------
 2 files changed, 8 insertions(+), 29 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 381ea8a..ebd07e7 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -104,21 +104,6 @@ enum {
 #define CPU_DEAD_TIMEOUT	0x000D /* CPU (unsigned)v surviving CPU timed
 					  out */
 
-/* Used for CPU hotplug events occurring while tasks are frozen due to a suspend
- * operation in progress
- */
-#define CPU_TASKS_FROZEN	0x0010
-
-#define CPU_ONLINE_FROZEN	(CPU_ONLINE | CPU_TASKS_FROZEN)
-#define CPU_UP_PREPARE_FROZEN	(CPU_UP_PREPARE | CPU_TASKS_FROZEN)
-#define CPU_UP_CANCELED_FROZEN	(CPU_UP_CANCELED | CPU_TASKS_FROZEN)
-#define CPU_DOWN_PREPARE_FROZEN	(CPU_DOWN_PREPARE | CPU_TASKS_FROZEN)
-#define CPU_DOWN_FAILED_FROZEN	(CPU_DOWN_FAILED | CPU_TASKS_FROZEN)
-#define CPU_DEAD_FROZEN		(CPU_DEAD | CPU_TASKS_FROZEN)
-#define CPU_DYING_FROZEN	(CPU_DYING | CPU_TASKS_FROZEN)
-#define CPU_STARTING_FROZEN	(CPU_STARTING | CPU_TASKS_FROZEN)
-
-
 #ifdef CONFIG_SMP
 /* Need to know about CPUs going up/down? */
 #if defined(CONFIG_HOTPLUG_CPU) || !defined(MODULE)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index e37442d..1f0408c 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -29,7 +29,6 @@
 #ifdef CONFIG_SMP
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
-static bool cpuhp_tasks_frozen;
 
 /*
  * The following two APIs (cpu_maps_update_begin/done) must be used when
@@ -224,12 +223,11 @@ int __register_cpu_notifier(struct notifier_block *nb)
 static int __cpu_notify(unsigned long val, unsigned int cpu, int nr_to_call,
 			int *nr_calls)
 {
-	unsigned long mod = cpuhp_tasks_frozen ? CPU_TASKS_FROZEN : 0;
 	void *hcpu = (void *)(long)cpu;
 
 	int ret;
 
-	ret = __raw_notifier_call_chain(&cpu_chain, val | mod, hcpu, nr_to_call,
+	ret = __raw_notifier_call_chain(&cpu_chain, val, hcpu, nr_to_call,
 					nr_calls);
 
 	return notifier_to_errno(ret);
@@ -347,7 +345,7 @@ static int take_cpu_down(void *_param)
 }
 
 /* Requires cpu_add_remove_lock to be held */
-static int _cpu_down(unsigned int cpu, int tasks_frozen)
+static int _cpu_down(unsigned int cpu)
 {
 	int err, nr_calls = 0;
 
@@ -359,8 +357,6 @@ static int _cpu_down(unsigned int cpu, int tasks_frozen)
 
 	cpu_hotplug_begin();
 
-	cpuhp_tasks_frozen = tasks_frozen;
-
 	err = __cpu_notify(CPU_DOWN_PREPARE, cpu, -1, &nr_calls);
 	if (err) {
 		nr_calls--;
@@ -448,7 +444,7 @@ int cpu_down(unsigned int cpu)
 		goto out;
 	}
 
-	err = _cpu_down(cpu, 0);
+	err = _cpu_down(cpu);
 
 out:
 	cpu_maps_update_done();
@@ -465,7 +461,7 @@ static int smpboot_thread_call(struct notifier_block *nfb,
 {
 	int cpu = (long)hcpu;
 
-	switch (action & ~CPU_TASKS_FROZEN) {
+	switch (action) {
 
 	case CPU_DOWN_FAILED:
 	case CPU_ONLINE:
@@ -490,7 +486,7 @@ void smpboot_thread_init(void)
 }
 
 /* Requires cpu_add_remove_lock to be held */
-static int _cpu_up(unsigned int cpu, int tasks_frozen)
+static int _cpu_up(unsigned int cpu)
 {
 	struct task_struct *idle;
 	int ret, nr_calls = 0;
@@ -512,8 +508,6 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen)
 	if (ret)
 		goto out;
 
-	cpuhp_tasks_frozen = tasks_frozen;
-
 	ret = __cpu_notify(CPU_UP_PREPARE, cpu, -1, &nr_calls);
 	if (ret) {
 		nr_calls--;
@@ -565,7 +559,7 @@ int cpu_up(unsigned int cpu)
 		goto out;
 	}
 
-	err = _cpu_up(cpu, 0);
+	err = _cpu_up(cpu);
 
 out:
 	cpu_maps_update_done();
@@ -593,7 +587,7 @@ int disable_nonboot_cpus(void)
 		if (cpu == first_cpu)
 			continue;
 		trace_suspend_resume(TPS("CPU_OFF"), cpu, true);
-		error = _cpu_down(cpu, 1);
+		error = _cpu_down(cpu);
 		trace_suspend_resume(TPS("CPU_OFF"), cpu, false);
 		if (!error)
 			cpumask_set_cpu(cpu, frozen_cpus);
@@ -643,7 +637,7 @@ void enable_nonboot_cpus(void)
 
 	for_each_cpu(cpu, frozen_cpus) {
 		trace_suspend_resume(TPS("CPU_ON"), cpu, true);
-		error = _cpu_up(cpu, 1);
+		error = _cpu_up(cpu);
 		trace_suspend_resume(TPS("CPU_ON"), cpu, false);
 		if (!error) {
 			pr_info("CPU%d is up\n", cpu);
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC v0 9/9] doc: Update cpu-hotplug documents on removal of CPU_TASKS_FROZEN
  2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
                   ` (8 preceding siblings ...)
  2015-09-04 13:35 ` [RFC v0 8/9] cpu: Do not set CPU_TASKS_FROZEN anymore Daniel Wagner
@ 2015-09-04 13:35 ` Daniel Wagner
  9 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-04 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Daniel Wagner, Rafael J. Wysocki, Akinobu Mita, Jonathan Corbet,
	Len Brown, Pavel Machek, linux-doc

The CPU_*_FROZEN states are gone, so update the documentation accordingly.

I am not completely convinced that the listed known race condition
can be removed as this patch does. A suspend/resume operation that is
ongoing while cpu_{down|up}() is called renders at least the 'always
passing 0 as tasks_frozen in cpu_up()' argument false.

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 Documentation/cpu-hotplug.txt                           | 12 +++++-------
 Documentation/fault-injection/notifier-error-inject.txt |  2 --
 Documentation/power/suspend-and-cpuhotplug.txt          | 13 ++-----------
 3 files changed, 7 insertions(+), 20 deletions(-)

diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
index f9ad5e0..e362f42 100644
--- a/Documentation/cpu-hotplug.txt
+++ b/Documentation/cpu-hotplug.txt
@@ -244,9 +244,9 @@ Q: What happens when a CPU is being logically offlined?
 A: The following happen, listed in no particular order :-)
 
 - A notification is sent to in-kernel registered modules by sending an event
-  CPU_DOWN_PREPARE or CPU_DOWN_PREPARE_FROZEN, depending on whether or not the
-  CPU is being offlined while tasks are frozen due to a suspend operation in
-  progress
+  CPU_DOWN_PREPARE. Via freeze_active() it is possible to inquire whether a
+  suspend operation is ongoing, that is, whether the CPU is being offlined
+  while tasks are frozen.
 - All processes are migrated away from this outgoing CPU to new CPUs.
   The new CPU is chosen from each process' current cpuset, which may be
   a subset of all online CPUs.
@@ -255,8 +255,8 @@ A: The following happen, listed in no particular order :-)
 - Once all services are migrated, kernel calls an arch specific routine
   __cpu_disable() to perform arch specific cleanup.
 - Once this is successful, an event for successful cleanup is sent by an event
-  CPU_DEAD (or CPU_DEAD_FROZEN if tasks are frozen due to a suspend while the
-  CPU is being offlined).
+  CPU_DEAD (whether tasks are frozen due to a suspend while the
+  CPU is being offlined can be probed via freeze_active()).
 
   "It is expected that each service cleans up when the CPU_DOWN_PREPARE
   notifier is called, when CPU_DEAD is called its expected there is nothing
@@ -274,11 +274,9 @@ A: This is what you would need in your kernel code to receive notifications.
 
 		switch (action) {
 		case CPU_ONLINE:
-		case CPU_ONLINE_FROZEN:
 			foobar_online_action(cpu);
 			break;
 		case CPU_DEAD:
-		case CPU_DEAD_FROZEN:
 			foobar_dead_action(cpu);
 			break;
 		}
diff --git a/Documentation/fault-injection/notifier-error-inject.txt b/Documentation/fault-injection/notifier-error-inject.txt
index 09adabe..1ec4e84 100644
--- a/Documentation/fault-injection/notifier-error-inject.txt
+++ b/Documentation/fault-injection/notifier-error-inject.txt
@@ -23,9 +23,7 @@ the error code to debugfs interface
 Possible CPU notifier events to be failed are:
 
  * CPU_UP_PREPARE
- * CPU_UP_PREPARE_FROZEN
  * CPU_DOWN_PREPARE
- * CPU_DOWN_PREPARE_FROZEN
 
 Example1: Inject CPU offline error (-1 == -EPERM)
 
diff --git a/Documentation/power/suspend-and-cpuhotplug.txt b/Documentation/power/suspend-and-cpuhotplug.txt
index 2fc9095..ab657cd 100644
--- a/Documentation/power/suspend-and-cpuhotplug.txt
+++ b/Documentation/power/suspend-and-cpuhotplug.txt
@@ -232,7 +232,7 @@ d. Handling microcode update during suspend/hibernate:
    hibernate/restore cycle.]
 
    In the current design of the kernel however, during a CPU offline operation
-   as part of the suspend/hibernate cycle (the CPU_DEAD_FROZEN notification),
+   as part of the suspend/hibernate cycle (see freeze_active()),
    the existing copy of microcode image in the kernel is not freed up.
    And during the CPU online operations (during resume/restore), since the
    kernel finds that it already has copies of the microcode images for all the
@@ -248,16 +248,7 @@ III. Are there any known problems when regular CPU hotplug and suspend race
 
 Yes, they are listed below:
 
-1. When invoking regular CPU hotplug, the 'tasks_frozen' argument passed to
-   the _cpu_down() and _cpu_up() functions is *always* 0.
-   This might not reflect the true current state of the system, since the
-   tasks could have been frozen by an out-of-band event such as a suspend
-   operation in progress. Hence, it will lead to wrong notifications being
-   sent during the cpu online/offline events (eg, CPU_ONLINE notification
-   instead of CPU_ONLINE_FROZEN) which in turn will lead to execution of
-   inappropriate code by the callbacks registered for such CPU hotplug events.
-
-2. If a regular CPU hotplug stress test happens to race with the freezer due
+1. If a regular CPU hotplug stress test happens to race with the freezer due
    to a suspend operation in progress at the same time, then we could hit the
    situation described below:
 
-- 
2.4.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-04 13:34 ` [RFC v0 2/9] suspend: Add getter function to report if freezing is active Daniel Wagner
@ 2015-09-05  2:11   ` Rafael J. Wysocki
  2015-09-07  8:55       ` Daniel Wagner
  0 siblings, 1 reply; 19+ messages in thread
From: Rafael J. Wysocki @ 2015-09-05  2:11 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-kernel, Len Brown, Pavel Machek, linux-pm

On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
> Instead of encoding the FREEZE state via the CPU state, we allow the
> interested subsystems (MCE, microcode) to query the power
> subsystem directly.

A use case, please.

> Most notifiers are not interested at all
> in this information, so rather have explicit calls to freeze_active()
> instead of adding complexity to the rest of the users of the CPU
> notifiers.

Why does it have anything to do with CPU notifiers?  We don't offline
CPUs for suspend-to-idle.


> Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Len Brown <len.brown@intel.com>
> Cc: Pavel Machek <pavel@ucw.cz>
> Cc: linux-pm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/suspend.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/include/linux/suspend.h b/include/linux/suspend.h
> index 5efe743..5e15ade 100644
> --- a/include/linux/suspend.h
> +++ b/include/linux/suspend.h
> @@ -216,6 +216,11 @@ static inline bool idle_should_freeze(void)
>  	return unlikely(suspend_freeze_state == FREEZE_STATE_ENTER);
>  }
>  
> +static inline bool freeze_active(void)
> +{
> +	return unlikely(suspend_freeze_state != FREEZE_STATE_NONE);
> +}
> +
>  extern void freeze_set_ops(const struct platform_freeze_ops *ops);
>  extern void freeze_wake(void);
>  
> @@ -244,6 +249,7 @@ extern int pm_suspend(suspend_state_t state);
>  static inline void suspend_set_ops(const struct platform_suspend_ops *ops) {}
>  static inline int pm_suspend(suspend_state_t state) { return -ENOSYS; }
>  static inline bool idle_should_freeze(void) { return false; }
> +static inline bool freeze_active(void) { return false; }
>  static inline void freeze_set_ops(const struct platform_freeze_ops *ops) {}
>  static inline void freeze_wake(void) {}
>  #endif /* !CONFIG_SUSPEND */
> 

-- 
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-05  2:11   ` Rafael J. Wysocki
@ 2015-09-07  8:55       ` Daniel Wagner
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-07  8:55 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: linux-kernel, Len Brown, Pavel Machek, linux-pm

On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
> On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
>> Instead of encoding the FREEZE state via the CPU state, we allow the
>> interested subsystems (MCE, microcode) to query the power
>> subsystem directly.
> 
> A use case, please.

The motivation for this change is to reduce the complexity in the
hotplug code. As I tried to point out in the cover letter, the FROZEN
bits have only a handful of users after all those years (since 2007). So
is it worth having all the notifier users handle the FROZEN state?

I don't know if that counts as a use case.

>> Most notifiers are not interested at all
>> in this information, so rather have explicit calls to freeze_active()
>> instead of adding complexity to the rest of the users of the CPU
>> notifiers.
> 
> Why does it have anything to do with CPU notifiers?

cpu_{down|up} will call the notifiers with the CPU_TASKS_FROZEN bit set
and so most notifiers are doing

	switch (action & ~CPU_TASKS_FROZEN)

to filter it out because they don't need to handle the system-wide
ongoing freeze operations.
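
To spell it out (sketch only; foobar_*() are the placeholder handlers
from Documentation/cpu-hotplug.txt), a notifier that does not care
about suspend at all still has to carry the mask today:

	#include <linux/cpu.h>
	#include <linux/notifier.h>

	static int foobar_cpu_callback(struct notifier_block *nfb,
				       unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;

		/* strip the bit so e.g. CPU_ONLINE_FROZEN is handled too */
		switch (action & ~CPU_TASKS_FROZEN) {
		case CPU_ONLINE:
			foobar_online_action(cpu);
			break;
		case CPU_DEAD:
			foobar_dead_action(cpu);
			break;
		}
		return NOTIFY_OK;
	}

With the series applied the mask goes away and the switch is done
simply on 'action'.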

> We don't offline CPUs for suspend-to-idle.

Sure. As I said the motivation is to reduce the complexity in the
hotplug code.

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-07  8:55       ` Daniel Wagner
  (?)
@ 2015-09-07 13:42       ` Rafael J. Wysocki
  -1 siblings, 0 replies; 19+ messages in thread
From: Rafael J. Wysocki @ 2015-09-07 13:42 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-kernel, Len Brown, Pavel Machek, linux-pm

On Monday, September 07, 2015 10:55:43 AM Daniel Wagner wrote:
> On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
> > On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
> >> Instead of encoding the FREEZE state via the CPU state, we allow the
> >> interested subsystems (MCE, microcode) to query the power
> >> subsystem directly.
> > 
> > A use case, please.
> 
> The motivation for this change is to reduce the complexity in the
> hotplug code. As I tried to point out in the cover letter, the FROZEN
> bits have only a handful of users after all those years (since 2007). So
> is it worth having all the notifier users handle the FROZEN state?
> 
> I don't know if that counts as a use case.
> 
> >> Most notifiers are not interested at all
> >> in this information, so rather have explicit calls to freeze_active()
> >> instead of adding complexity to the rest of the users of the CPU
> >> notifiers.
> > 
> > Why does it have anything to do with CPU notifiers?
> 
> cpu_{down|up} will call the notifiers with the CPU_TASKS_FROZEN bit set
> and so most notifiers are doing
> 
> 	switch (action & ~CPU_TASKS_FROZEN)
> 
> to filter it out because they don't need to handle the system-wide
> ongoing freeze operations.
> 
> > We don't offline CPUs for suspend-to-idle.
> 
> Sure. As I said the motivation is to reduce the complexity in the
> hotplug code.

Well, it looks like I confused two things.

Let me look at this again.

Thanks,
Rafael


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-07  8:55       ` Daniel Wagner
  (?)
  (?)
@ 2015-09-07 21:44       ` Rafael J. Wysocki
  2015-09-08  8:19           ` Daniel Wagner
  -1 siblings, 1 reply; 19+ messages in thread
From: Rafael J. Wysocki @ 2015-09-07 21:44 UTC (permalink / raw)
  To: Daniel Wagner; +Cc: linux-kernel, Len Brown, Pavel Machek, linux-pm

On Monday, September 07, 2015 10:55:43 AM Daniel Wagner wrote:
> On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
> > On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
> >> Instead of encoding the FREEZE state via the CPU state, we allow the
> >> interested subsystems (MCE, microcode) to query the power
> >> subsystem directly.
> > 
> > A use case, please.
> 
> The motivation for this change is to reduce the complexity in the
> hotplug code. As I tried to point out in the cover letter, the FROZEN
> bits have only a handful of users after all those years (since 2007). So
> is it worth having all the notifier users handle the FROZEN state?
> 
> I don't know if that counts as a use case.

Well, the code you're changing has nothing to do with CPU hotplug and
CPU_TASKS_FROZEN.  It is about suspend-to-idle.

Please grep for suspend_freeze_state and see what it is used for.

There is some confusion in the naming, but that is about the freezing of
the whole system, while CPU_TASKS_FROZEN is about the freezing of user space.

> >> Most notifiers are not interested at all
> >> in this information, so rather have explicit calls to freeze_active()
> >> instead of adding complexity to the rest of the users of the CPU
> >> notifiers.
> > 
> > Why does it have anything to do with CPU notifiers?
> 
> cpu_{down|up} will call the notifiers with the CPU_TASKS_FROZEN bit set
> and so most notifiers are doing
> 
> 	switch (action & ~CPU_TASKS_FROZEN)
> 
> to filter it out because they don't need to handle the system-wide
> ongoing freeze operations.
> 
> > We don't offline CPUs for suspend-to-idle.
> 
> Sure. As I said the motivation is to reduce the complexity in the
> hotplug code.

You need to find a different way to do that.

Thanks,
Rafael


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v0 2/9] suspend: Add getter function to report if freezing is active
  2015-09-07 21:44       ` Rafael J. Wysocki
@ 2015-09-08  8:19           ` Daniel Wagner
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-08  8:19 UTC (permalink / raw)
  To: Rafael J. Wysocki; +Cc: linux-kernel, Len Brown, Pavel Machek, linux-pm

On 09/07/2015 11:44 PM, Rafael J. Wysocki wrote:
> On Monday, September 07, 2015 10:55:43 AM Daniel Wagner wrote:
>> On 09/05/2015 04:11 AM, Rafael J. Wysocki wrote:
>>> On Friday, September 04, 2015 03:34:55 PM Daniel Wagner wrote:
>>>> Instead of encoding the FREEZE state via the CPU state, we allow the
>>>> interested subsystems (MCE, microcode) to query the power
>>>> subsystem directly.
>>>
>>> A use case, please.
>>
>> The motivation for this change is to reduce the complexity in the
>> hotplug code. As I tried to point out in the cover letter, the FROZEN
>> bits have only a handful of users after all those years (since 2007). So
>> is it worth having all the notifier users handle the FROZEN state?
>>
>> I don't know if that counts as a use case.
> 
> Well, the code you're changing has nothing to do with CPU hotplug and
> CPU_TASKS_FROZEN.  It is about suspend-to-idle.
> 
> Please grep for suspend_freeze_state and see what it is used for.
> 
> There is some confusion in the naming, but that is about the freezing of
> the whole system, while CPU_TASKS_FROZEN is about the freezing of user space.

You are right. I got confused by all those frozen/freezing naming schemes.

>>>> Most notifiers are not interested at all
>>>> in this information, so rather have explicit calls to freeze_active()
>>>> instead of adding complexity to the rest of the users of the CPU
>>>> notifiers.
>>>
>>> Why does it have anything to do with CPU notifiers?
>>
>> cpu_{down|up} will call the notifiers with the CPU_TASKS_FROZEN bit set
>> and so most notifiers are doing
>>
>> 	switch (action & ~CPU_TASKS_FROZEN)
>>
>> to filter it out because they don't need to handle the system-wide
>> ongoing freeze operations.
>>
>>> We don't offline CPUs for suspend-to-idle.
>>
>> Sure. As I said the motivation is to reduce the complexity in the
>> hotplug code.
> 
> You need to find a different way to do that.

I'll try something else.

Thanks for taking the time to explain!

cheers,
Daniel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information
  2015-09-04 13:34 ` [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information Daniel Wagner
@ 2015-09-08  8:49   ` Daniel Wagner
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Wagner @ 2015-09-08  8:49 UTC (permalink / raw)
  To: Daniel Wagner, linux-kernel
  Cc: Andrew Morton, Chris Metcalf, Don Zickus, Ingo Molnar,
	Thomas Gleixner, Lai Jiangshan, Peter Zijlstra, Paul E. McKenney

On 09/04/2015 03:34 PM, Daniel Wagner wrote:
> In order to get rid of all CPU_*_FROZEN states we need to convert all
> users first.
> 
> cpu_check_up_prepare() wants to report different errors depending on
> whether a suspend is ongoing or not. freeze_active() reports back if that
> is the case, so we don't have to rely on CPU_DEAD_FROZEN anymore.

Just realized, this patch doesn't make sense. The FROZEN bits in the
current code have nothing to do with CPU_TASKS_FROZEN. Instead it should
use CPU_DEAD_TIMEOUT as explained in patch 01.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2015-09-08  8:49 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-04 13:34 [RFC v0 0/9] Remove CPU_*_FROZEN Daniel Wagner
2015-09-04 13:34 ` [RFC v0 1/9] smpboot: Add a separate CPU state when a surviving CPU times out Daniel Wagner
2015-09-04 13:34 ` Daniel Wagner
2015-09-04 13:34 ` [RFC v0 2/9] suspend: Add getter function to report if freezing is active Daniel Wagner
2015-09-05  2:11   ` Rafael J. Wysocki
2015-09-07  8:55     ` Daniel Wagner
2015-09-07  8:55       ` Daniel Wagner
2015-09-07 13:42       ` Rafael J. Wysocki
2015-09-07 21:44       ` Rafael J. Wysocki
2015-09-08  8:19         ` Daniel Wagner
2015-09-08  8:19           ` Daniel Wagner
2015-09-04 13:34 ` [RFC v0 3/9] x86: Use freeze_active() instead of CPU_*_FROZEN Daniel Wagner
2015-09-04 13:34 ` [RFC v0 4/9] smpboot: Use freeze_active() instead CPU_DEAD_FROZEN state information Daniel Wagner
2015-09-08  8:49   ` Daniel Wagner
2015-09-04 13:34 ` [RFC v0 5/9] sched: Use freeze_active() instead CPU_*_FROZEN " Daniel Wagner
2015-09-04 13:34 ` [RFC v0 6/9] cpu: Restructure FROZEN state handling Daniel Wagner
2015-09-04 13:35 ` [RFC v0 7/9] cpu: Remove unused CPU_*_FROZEN states Daniel Wagner
2015-09-04 13:35 ` [RFC v0 8/9] cpu: Do not set CPU_TASKS_FROZEN anymore Daniel Wagner
2015-09-04 13:35 ` [RFC v0 9/9] doc: Update cpu-hotplug documents on removal of CPU_TASKS_FROZEN Daniel Wagner
