* [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version
@ 2012-06-06 21:27 Thomas Gleixner
  2012-06-06 21:27 ` [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE Thomas Gleixner
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

The following series fixes a few interesting bugs (found by review in
the context of the CMCI poll effort), cleans up the timer/hotplug
code, and finishes with a consolidated version of the CMCI poll
implementation. This series is based on

  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git

which contains the bugfix for the dropped timer interval init.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  2012-06-06 21:38   ` Thomas Gleixner
  2012-06-06 21:27 ` [patch 2/6] x86: mce: Disable preemption when calling raise_local() Thomas Gleixner
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-cleanup-notifiers.patch --]
[-- Type: text/plain, Size: 788 bytes --]

This makes the threshold_cpu_callback for AMD actually work and makes
the code symmetric vs. teardown.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: tip/arch/x86/kernel/cpu/mcheck/mce.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce.c
@@ -2256,8 +2256,8 @@ mce_cpu_callback(struct notifier_block *
 	struct timer_list *t = &per_cpu(mce_timer, cpu);
 
 	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
+	case CPU_UP_PREPARE:
+	case CPU_UP_PREPARE_FROZEN:
 		mce_device_create(cpu);
 		if (threshold_cpu_callback)
 			threshold_cpu_callback(action, cpu);




* [patch 2/6] x86: mce: Disable preemption when calling raise_local()
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
  2012-06-06 21:27 ` [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  2012-06-06 21:27 ` [patch 3/6] x86: mce: Serialize mce injection Thomas Gleixner
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-disable-preemption-when-calling-raise-local.patch --]
[-- Type: text/plain, Size: 923 bytes --]

raise_mce() has a code path which does not disable preemption when
raise_local() is called. The per-CPU variable access in raise_local()
depends on preemption being disabled to work correctly. So that code
path was either never tested, or never tested with CONFIG_DEBUG_PREEMPT
enabled.

Add the missing preempt_disable/enable() pair around the call.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce-inject.c |    4 ++++
 1 file changed, 4 insertions(+)

Index: tip/arch/x86/kernel/cpu/mcheck/mce-inject.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce-inject.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce-inject.c
@@ -194,7 +194,11 @@ static void raise_mce(struct mce *m)
 		put_online_cpus();
 	} else
 #endif
+	{
+		preempt_disable();
 		raise_local();
+		preempt_enable();
+	}
 }
 
 /* Error injection interface */




* [patch 3/6] x86: mce: Serialize mce injection
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
  2012-06-06 21:27 ` [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE Thomas Gleixner
  2012-06-06 21:27 ` [patch 2/6] x86: mce: Disable preemption when calling raise_local() Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  2012-06-06 21:27 ` [patch 4/6] x86: mce: Split timer init Thomas Gleixner
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-serialize-mce-injection.patch --]
[-- Type: text/plain, Size: 1006 bytes --]

raise_mce() fiddles with global state, but lacks any kind of
serialization.

Add a mutex around the raise_mce() call, so concurrent writers do not
stomp on each other's toes.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce-inject.c |    4 ++++
 1 file changed, 4 insertions(+)

Index: tip/arch/x86/kernel/cpu/mcheck/mce-inject.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce-inject.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce-inject.c
@@ -78,6 +78,7 @@ static void raise_exception(struct mce *
 }
 
 static cpumask_var_t mce_inject_cpumask;
+static DEFINE_MUTEX(mce_inject_mutex);
 
 static int mce_raise_notify(unsigned int cmd, struct pt_regs *regs)
 {
@@ -229,7 +230,10 @@ static ssize_t mce_write(struct file *fi
 	 * so do it a jiffie or two later everywhere.
 	 */
 	schedule_timeout(2);
+
+	mutex_lock(&mce_inject_mutex);
 	raise_mce(&m);
+	mutex_unlock(&mce_inject_mutex);
 	return usize;
 }
 




* [patch 4/6] x86: mce: Split timer init
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
                   ` (2 preceding siblings ...)
  2012-06-06 21:27 ` [patch 3/6] x86: mce: Serialize mce injection Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  2012-06-06 21:27 ` [patch 6/6] x86: mce: Add cmci poll mode Thomas Gleixner
  2012-06-06 21:27 ` [patch 5/6] x86: mce: Remove the frozen cases in the hotplug code Thomas Gleixner
  5 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-split-timer-init.patch --]
[-- Type: text/plain, Size: 1904 bytes --]

Split timer init function into the init and the start part, so the
start part can replace the open coded version in CPU_DOWN_FAILED.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce.c |   25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

Index: tip/arch/x86/kernel/cpu/mcheck/mce.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1554,23 +1554,28 @@ static void __mcheck_cpu_init_vendor(str
 	}
 }
 
-static void __mcheck_cpu_init_timer(void)
+static void mce_start_timer(unsigned int cpu, struct timer_list *t)
 {
-	struct timer_list *t = &__get_cpu_var(mce_timer);
 	unsigned long iv = check_interval * HZ;
 
-	setup_timer(t, mce_timer_fn, smp_processor_id());
+	__this_cpu_write(mce_next_interval, iv);
 
-	if (mce_ignore_ce)
+	if (mce_ignore_ce || !iv)
 		return;
 
-	__this_cpu_write(mce_next_interval, iv);
-	if (!iv)
-		return;
 	t->expires = round_jiffies(jiffies + iv);
 	add_timer_on(t, smp_processor_id());
 }
 
+static void __mcheck_cpu_init_timer(void)
+{
+	struct timer_list *t = &__get_cpu_var(mce_timer);
+	unsigned int cpu = smp_processor_id();
+
+	setup_timer(t, mce_timer_fn, cpu);
+	mce_start_timer(cpu, t);
+}
+
 /* Handle unconfigured int18 (should never happen) */
 static void unexpected_machine_check(struct pt_regs *regs, long error_code)
 {
@@ -2275,12 +2280,8 @@ mce_cpu_callback(struct notifier_block *
 		break;
 	case CPU_DOWN_FAILED:
 	case CPU_DOWN_FAILED_FROZEN:
-		if (!mce_ignore_ce && check_interval) {
-			t->expires = round_jiffies(jiffies +
-					per_cpu(mce_next_interval, cpu));
-			add_timer_on(t, cpu);
-		}
 		smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
+		mce_start_timer(cpu, t);
 		break;
 	case CPU_POST_DEAD:
 		/* intentionally ignoring frozen here */




* [patch 6/6] x86: mce: Add cmci poll mode
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
                   ` (3 preceding siblings ...)
  2012-06-06 21:27 ` [patch 4/6] x86: mce: Split timer init Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  2012-06-06 21:27 ` [patch 5/6] x86: mce: Remove the frozen cases in the hotplug code Thomas Gleixner
  5 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-cmci-poll-mode.patch --]
[-- Type: text/plain, Size: 7321 bytes --]

Still waits for explanation :)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce-internal.h |   10 +++
 arch/x86/kernel/cpu/mcheck/mce.c          |   46 +++++++++++++--
 arch/x86/kernel/cpu/mcheck/mce_intel.c    |   88 +++++++++++++++++++++++++++++-
 3 files changed, 137 insertions(+), 7 deletions(-)

Index: tip/arch/x86/kernel/cpu/mcheck/mce-internal.h
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce-internal.h
+++ tip/arch/x86/kernel/cpu/mcheck/mce-internal.h
@@ -28,6 +28,16 @@ extern int mce_ser;
 
 extern struct mce_bank *mce_banks;
 
+#ifdef CONFIG_X86_MCE_INTEL
+unsigned long mce_intel_adjust_timer(unsigned long interval);
+void mce_intel_cmci_poll(void);
+#else
+# define mce_intel_adjust_timer mce_adjust_timer_default
+static inline void mce_intel_cmci_poll(void) { }
+#endif
+
+void mce_timer_kick(unsigned long interval);
+
 #ifdef CONFIG_ACPI_APEI
 int apei_write_mce(struct mce *m);
 ssize_t apei_read_mce(struct mce *m, u64 *record_id);
Index: tip/arch/x86/kernel/cpu/mcheck/mce.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1256,6 +1256,14 @@ static unsigned long check_interval = 5 
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
 
+static unsigned long mce_adjust_timer_default(unsigned long interval)
+{
+	return interval;
+}
+
+static unsigned long (*mce_adjust_timer)(unsigned long interval) =
+	mce_adjust_timer_default;
+
 static void mce_timer_fn(unsigned long data)
 {
 	struct timer_list *t = &__get_cpu_var(mce_timer);
@@ -1266,6 +1274,7 @@ static void mce_timer_fn(unsigned long d
 	if (mce_available(__this_cpu_ptr(&cpu_info))) {
 		machine_check_poll(MCP_TIMESTAMP,
 				&__get_cpu_var(mce_poll_banks));
+		mce_intel_cmci_poll();
 	}
 
 	/*
@@ -1273,14 +1282,38 @@ static void mce_timer_fn(unsigned long d
 	 * polling interval, otherwise increase the polling interval.
 	 */
 	iv = __this_cpu_read(mce_next_interval);
-	if (mce_notify_irq())
+	if (mce_notify_irq()) {
 		iv = max(iv / 2, (unsigned long) HZ/100);
-	else
+	} else {
 		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
+		iv = mce_adjust_timer(iv);
+	}
 	__this_cpu_write(mce_next_interval, iv);
+	/* Might have become 0 after CMCI storm subsided */
+	if (iv) {
+		t->expires = jiffies + iv;
+		add_timer_on(t, smp_processor_id());
+	}
+}
 
-	t->expires = jiffies + iv;
-	add_timer_on(t, smp_processor_id());
+/*
+ * Ensure that the timer is firing in @interval from now.
+ */
+void mce_timer_kick(unsigned long interval)
+{
+	struct timer_list *t = &__get_cpu_var(mce_timer);
+	unsigned long when = jiffies + interval;
+	unsigned long iv = __this_cpu_read(mce_next_interval);
+
+	if (timer_pending(t)) {
+		if (time_before(when, t->expires))
+			mod_timer_pinned(t, when);
+	} else {
+		t->expires = round_jiffies(when);
+		add_timer_on(t, smp_processor_id());
+	}
+	if (interval < iv)
+		__this_cpu_write(mce_next_interval, interval);
 }
 
 /* Must not be called in IRQ context where del_timer_sync() can deadlock */
@@ -1545,6 +1578,7 @@ static void __mcheck_cpu_init_vendor(str
 	switch (c->x86_vendor) {
 	case X86_VENDOR_INTEL:
 		mce_intel_feature_init(c);
+		mce_adjust_timer = mce_intel_adjust_timer;
 		break;
 	case X86_VENDOR_AMD:
 		mce_amd_feature_init(c);
@@ -1556,7 +1590,7 @@ static void __mcheck_cpu_init_vendor(str
 
 static void mce_start_timer(unsigned int cpu, struct timer_list *t)
 {
-	unsigned long iv = check_interval * HZ;
+	unsigned long iv = mce_adjust_timer(check_interval * HZ);
 
 	__this_cpu_write(mce_next_interval, iv);
 
@@ -2272,8 +2306,8 @@ mce_cpu_callback(struct notifier_block *
 		mce_device_remove(cpu);
 		break;
 	case CPU_DOWN_PREPARE:
-		del_timer_sync(t);
 		smp_call_function_single(cpu, mce_disable_cpu, &action, 1);
+		del_timer_sync(t);
 		break;
 	case CPU_DOWN_FAILED:
 		smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
Index: tip/arch/x86/kernel/cpu/mcheck/mce_intel.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce_intel.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce_intel.c
@@ -15,6 +15,8 @@
 #include <asm/msr.h>
 #include <asm/mce.h>
 
+#include "mce-internal.h"
+
 /*
  * Support for Intel Correct Machine Check Interrupts. This allows
  * the CPU to raise an interrupt when a corrected machine check happened.
@@ -30,7 +32,22 @@ static DEFINE_PER_CPU(mce_banks_t, mce_b
  */
 static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
 
-#define CMCI_THRESHOLD 1
+#define CMCI_THRESHOLD		1
+#define CMCI_POLL_INTERVAL	(30 * HZ)
+#define CMCI_STORM_INTERVAL	(1 * HZ)
+#define CMCI_STORM_TRESHOLD	5
+
+static DEFINE_PER_CPU(unsigned long, cmci_time_stamp);
+static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt);
+static DEFINE_PER_CPU(unsigned int, cmci_storm_state);
+
+enum {
+	CMCI_STORM_NONE,
+	CMCI_STORM_ACTIVE,
+	CMCI_STORM_SUBSIDED,
+};
+
+static atomic_t cmci_storm_on_cpus;
 
 static int cmci_supported(int *banks)
 {
@@ -53,6 +70,73 @@ static int cmci_supported(int *banks)
 	return !!(cap & MCG_CMCI_P);
 }
 
+void mce_intel_cmci_poll(void)
+{
+	if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE)
+		return;
+	machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned));
+}
+
+unsigned long mce_intel_adjust_timer(unsigned long interval)
+{
+	if (interval < CMCI_POLL_INTERVAL)
+		return interval;
+
+	switch (__this_cpu_read(cmci_storm_state)) {
+	case CMCI_STORM_ACTIVE:
+		/*
+		 * We switch back to interrupt mode once the poll timer has
+		 * silenced itself. That means no events recorded and the
+		 * timer interval is back to our poll interval.
+		 */
+		__this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED);
+		atomic_dec(&cmci_storm_on_cpus);
+
+	case CMCI_STORM_SUBSIDED:
+		/*
+		 * We wait for all cpus to go back to SUBSIDED
+		 * state. When that happens we switch back to
+		 * interrupt mode.
+		 */
+		if (!atomic_read(&cmci_storm_on_cpus)) {
+			__this_cpu_write(cmci_storm_state, CMCI_STORM_NONE);
+			cmci_reenable();
+			cmci_recheck();
+		}
+		return CMCI_POLL_INTERVAL;
+	default:
+		/*
+		 * We have shiny weather, let the poll do whatever it
+		 * thinks.
+		 */
+		return interval;
+	}
+}
+
+static bool cmci_storm_detect(void)
+{
+	unsigned int cnt = __this_cpu_read(cmci_storm_cnt);
+	unsigned long ts = __this_cpu_read(cmci_time_stamp);
+	unsigned long now = jiffies;
+
+	if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) {
+		cnt++;
+	} else {
+		cnt = 1;
+		__this_cpu_write(cmci_time_stamp, now);
+	}
+	__this_cpu_write(cmci_storm_cnt, cnt);
+
+	if (cnt <= CMCI_STORM_TRESHOLD)
+		return false;
+
+	cmci_clear();
+	__this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE);
+	atomic_inc(&cmci_storm_on_cpus);
+	mce_timer_kick(CMCI_POLL_INTERVAL);
+	return true;
+}
+
 /*
  * The interrupt handler. This is called on every event.
  * Just call the poller directly to log any events.
@@ -61,6 +145,8 @@ static int cmci_supported(int *banks)
  */
 static void intel_threshold_interrupt(void)
 {
+	if (cmci_storm_detect())
+		return;
 	machine_check_poll(MCP_TIMESTAMP, &__get_cpu_var(mce_banks_owned));
 	mce_notify_irq();
 }




* [patch 5/6] x86: mce: Remove the frozen cases in the hotplug code
  2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
                   ` (4 preceding siblings ...)
  2012-06-06 21:27 ` [patch 6/6] x86: mce: Add cmci poll mode Thomas Gleixner
@ 2012-06-06 21:27 ` Thomas Gleixner
  5 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:27 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

[-- Attachment #1: x86-mce-remove-frozen-cases.patch --]
[-- Type: text/plain, Size: 2143 bytes --]

No point in having double cases if we can simply mask the FROZEN bit
out.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/cpu/mcheck/mce.c     |   12 +++++-------
 arch/x86/kernel/cpu/mcheck/mce_amd.c |    6 ++----
 2 files changed, 7 insertions(+), 11 deletions(-)

Index: tip/arch/x86/kernel/cpu/mcheck/mce.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce.c
@@ -2260,34 +2260,32 @@ mce_cpu_callback(struct notifier_block *
 	unsigned int cpu = (unsigned long)hcpu;
 	struct timer_list *t = &per_cpu(mce_timer, cpu);
 
-	switch (action) {
+	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_UP_PREPARE:
-	case CPU_UP_PREPARE_FROZEN:
 		mce_device_create(cpu);
 		if (threshold_cpu_callback)
 			threshold_cpu_callback(action, cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		if (threshold_cpu_callback)
 			threshold_cpu_callback(action, cpu);
 		mce_device_remove(cpu);
 		break;
 	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 		del_timer_sync(t);
 		smp_call_function_single(cpu, mce_disable_cpu, &action, 1);
 		break;
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
 		smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
 		mce_start_timer(cpu, t);
 		break;
-	case CPU_POST_DEAD:
+	}
+
+	if (action == CPU_POST_DEAD) {
 		/* intentionally ignoring frozen here */
 		cmci_rediscover(cpu);
-		break;
 	}
+
 	return NOTIFY_OK;
 }
 
Index: tip/arch/x86/kernel/cpu/mcheck/mce_amd.c
===================================================================
--- tip.orig/arch/x86/kernel/cpu/mcheck/mce_amd.c
+++ tip/arch/x86/kernel/cpu/mcheck/mce_amd.c
@@ -748,13 +748,11 @@ static void threshold_remove_device(unsi
 static void __cpuinit
 amd_64_threshold_cpu_callback(unsigned long action, unsigned int cpu)
 {
-	switch (action) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_UP_PREPARE:
 		threshold_create_device(cpu);
 		break;
 	case CPU_DEAD:
-	case CPU_DEAD_FROZEN:
 		threshold_remove_device(cpu);
 		break;
 	default:




* Re: [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE
  2012-06-06 21:27 ` [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE Thomas Gleixner
@ 2012-06-06 21:38   ` Thomas Gleixner
  2012-06-06 21:42     ` Borislav Petkov
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:38 UTC (permalink / raw)
  To: LKML; +Cc: Tony Luck, Borislav Petkov, Chen Gong, x86, Peter Zijlstra

On Wed, 6 Jun 2012, Thomas Gleixner wrote:

> This makes the threshold_cpu_callback for AMD actually work and makes
> the code symmetric vs. teardown.

Ignore that. I sent out the wrong version of this lot. Sorry for the noise


* Re: [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE
  2012-06-06 21:38   ` Thomas Gleixner
@ 2012-06-06 21:42     ` Borislav Petkov
  2012-06-06 21:52       ` Thomas Gleixner
  0 siblings, 1 reply; 10+ messages in thread
From: Borislav Petkov @ 2012-06-06 21:42 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: LKML, Tony Luck, Chen Gong, x86, Peter Zijlstra

On Wed, Jun 06, 2012 at 11:38:11PM +0200, Thomas Gleixner wrote:
> On Wed, 6 Jun 2012, Thomas Gleixner wrote:
> 
> > This makes the threshold_cpu_callback for AMD actually work and makes
> > the code symmetric vs. teardown.
> 
> Ignore that. I sent out the wrong version of this lot. Sorry for the noise

By this you mean the chunk at the end of 5/6, right?

There's a chunk touching amd_64_threshold_cpu_callback...

-- 
Regards/Gruss,
Boris.

Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach
GM: Alberto Bozzo
Reg: Dornach, Landkreis Muenchen
HRB Nr. 43632 WEEE Registernr: 129 19551


* Re: [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE
  2012-06-06 21:42     ` Borislav Petkov
@ 2012-06-06 21:52       ` Thomas Gleixner
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Gleixner @ 2012-06-06 21:52 UTC (permalink / raw)
  To: Borislav Petkov; +Cc: LKML, Tony Luck, Chen Gong, x86, Peter Zijlstra

On Wed, 6 Jun 2012, Borislav Petkov wrote:

> On Wed, Jun 06, 2012 at 11:38:11PM +0200, Thomas Gleixner wrote:
> > On Wed, 6 Jun 2012, Thomas Gleixner wrote:
> > 
> > > This makes the threshold_cpu_callback for AMD actually work and makes
> > > the code symmetric vs. teardown.
> > 
> > Ignore that. I sent out the wrong version of this lot. Sorry for the noise
> 
> By this you mean the chunk at the end of 5/6, right?
> 
> There's a chunk touching amd_64_threshold_cpu_callback...

Both 1/6 and 5/6 are wrong. That's a leftover from a previous
iteration. About to send the fixed version.



end of thread, other threads:[~2012-06-06 21:53 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-06 21:27 [patch 0/6] x86: mce: Bugfixes, cleanups and a new CMCI poll version Thomas Gleixner
2012-06-06 21:27 ` [patch 1/6] x86: mce: Create devices in CPU_UP_PREPARE Thomas Gleixner
2012-06-06 21:38   ` Thomas Gleixner
2012-06-06 21:42     ` Borislav Petkov
2012-06-06 21:52       ` Thomas Gleixner
2012-06-06 21:27 ` [patch 2/6] x86: mce: Disable preemption when calling raise_local() Thomas Gleixner
2012-06-06 21:27 ` [patch 3/6] x86: mce: Serialize mce injection Thomas Gleixner
2012-06-06 21:27 ` [patch 4/6] x86: mce: Split timer init Thomas Gleixner
2012-06-06 21:27 ` [patch 6/6] x86: mce: Add cmci poll mode Thomas Gleixner
2012-06-06 21:27 ` [patch 5/6] x86: mce: Remove the frozen cases in the hotplug code Thomas Gleixner
