linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/2] Handle AMD threshold interrupt storms
@ 2022-02-17 14:16 Smita Koralahalli
  2022-02-17 14:16 ` [RFC PATCH 1/2] x86/mce: " Smita Koralahalli
  2022-02-17 14:16 ` [RFC PATCH 2/2] x86/mce: Simplify code in log_and_reset_block() Smita Koralahalli
  0 siblings, 2 replies; 18+ messages in thread
From: Smita Koralahalli @ 2022-02-17 14:16 UTC (permalink / raw)
  To: x86, linux-edac, linux-kernel
  Cc: Tony Luck, H . Peter Anvin, Dave Hansen, Yazen Ghannam,
	Smita Koralahalli

This series of patches handles interrupt storms in the AMD threshold interrupt
handler. Patch 1 takes care of handling the storms, and Patch 2 does some
refactoring and simplification to reuse an existing function.

Smita Koralahalli (2):
  x86/mce: Handle AMD threshold interrupt storms
  x86/mce: Simplify code in log_and_reset_block()

 arch/x86/kernel/cpu/mce/amd.c      | 131 ++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/mce/core.c     |  16 +++-
 arch/x86/kernel/cpu/mce/intel.c    |   2 +-
 arch/x86/kernel/cpu/mce/internal.h |   8 +-
 4 files changed, 148 insertions(+), 9 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms
  2022-02-17 14:16 [RFC PATCH 0/2] Handle AMD threshold interrupt storms Smita Koralahalli
@ 2022-02-17 14:16 ` Smita Koralahalli
  2022-02-17 17:28   ` Luck, Tony
  2022-02-17 14:16 ` [RFC PATCH 2/2] x86/mce: Simplify code in log_and_reset_block() Smita Koralahalli
  1 sibling, 1 reply; 18+ messages in thread
From: Smita Koralahalli @ 2022-02-17 14:16 UTC (permalink / raw)
  To: x86, linux-edac, linux-kernel
  Cc: Tony Luck, H . Peter Anvin, Dave Hansen, Yazen Ghannam,
	Smita Koralahalli

Extend the logic of handling CMCI storms to AMD threshold interrupts.

Similar to CMCI storm handling, keep track of the rate at which each
processor sees interrupts. If it exceeds a threshold, disable interrupts
and switch to polling of the machine check banks.

But unlike CMCI, re-enable threshold interrupts per CPU because MCA
exceptions and interrupts are directed to a single CPU on AMD systems.
As the threshold interrupts are per CPU, a global counter is not required
to keep the count of all CPUs in the storm.
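
The rate check described above can be modelled outside the kernel. The
sketch below is a minimal userspace analogue of the per-CPU detection
logic (count interrupts arriving within a one-"second" window; more than
a threshold's worth declares a storm); the names and the STORM_* values
are illustrative stand-ins, not the kernel symbols:

```c
#include <assert.h>
#include <stdbool.h>

#define STORM_INTERVAL   100  /* "jiffies" per window (HZ stand-in) */
#define STORM_THRESHOLD  15   /* events in one window that mean a storm */

struct storm_tracker {
	unsigned long time_stamp; /* start of the current window */
	unsigned int  cnt;        /* events seen in this window */
	bool          active;     /* storm declared */
};

/*
 * Called on every simulated interrupt. Returns true once a storm is
 * declared; the real handler would then disable the threshold
 * interrupt and fall back to polling.
 */
static bool storm_check(struct storm_tracker *st, unsigned long now)
{
	if (st->active)
		return true;

	if (now <= st->time_stamp + STORM_INTERVAL) {
		st->cnt++;
	} else {
		/* Window expired: restart the count from this event. */
		st->cnt = 1;
		st->time_stamp = now;
	}

	if (st->cnt <= STORM_THRESHOLD)
		return false;

	st->active = true;
	return true;
}
```

Sixteen events inside one window trip the threshold of 15; a single
event does not.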

Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
---
This patch mostly inherits the existing Intel CMCI storm logic and is not a
per-bank approach. Commit 7bee1ef01f38395 ("x86/mce: Add per-bank
CMCI storm mitigation") in Tony Luck's Linux tree makes the existing
CMCI storm handling more fine-grained and adds a hook into machine_check_poll()
to keep track of per-CPU, per-bank corrected error logs.
---
 arch/x86/kernel/cpu/mce/amd.c      | 126 +++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/mce/core.c     |  16 +++-
 arch/x86/kernel/cpu/mce/intel.c    |   2 +-
 arch/x86/kernel/cpu/mce/internal.h |   8 +-
 4 files changed, 147 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 1940d305db1c..53d9320d1470 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -478,6 +478,129 @@ static void mce_threshold_block_init(struct threshold_block *b, int offset)
 	threshold_restart_bank(&tr);
 };
 
+#define MCI_STORM_INTERVAL	(HZ)
+#define MCI_STORM_THRESHOLD	15
+
+enum {
+	MCI_STORM_NONE,
+	MCI_STORM_ACTIVE,
+};
+
+static DEFINE_PER_CPU(unsigned long, mci_time_stamp);
+static DEFINE_PER_CPU(unsigned int, mci_storm_cnt);
+static DEFINE_PER_CPU(unsigned int, mci_storm_state);
+
+static DEFINE_PER_CPU(int, mci_backoff_cnt);
+
+static void _reset_block(struct threshold_block *block)
+{
+	struct thresh_restart tr;
+
+	memset(&tr, 0, sizeof(tr));
+	tr.b = block;
+	threshold_restart_bank(&tr);
+}
+
+static void toggle_interrupt_reset_block(struct threshold_block *block, bool on)
+{
+	if (!block)
+		return;
+
+	block->interrupt_enable = !!on;
+	_reset_block(block);
+}
+
+static void mci_toggle_interrupt_mode(bool on)
+{
+	struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL;
+	struct threshold_bank **bp = this_cpu_read(threshold_banks);
+	unsigned long flags;
+	unsigned int bank;
+
+	if (!bp)
+		return;
+
+	local_irq_save(flags);
+
+	for (bank = 0; bank < this_cpu_read(mce_num_banks); bank++) {
+		if (!(this_cpu_read(bank_map) & (1 << bank)))
+			continue;
+
+		first_block = bp[bank]->blocks;
+		if (!first_block)
+			continue;
+
+		toggle_interrupt_reset_block(first_block, on);
+
+		list_for_each_entry_safe(block, tmp, &first_block->miscj, miscj)
+			toggle_interrupt_reset_block(block, on);
+	}
+
+	local_irq_restore(flags);
+}
+
+bool mce_amd_mci_poll(bool err_seen)
+{
+	if (__this_cpu_read(mci_storm_state) == MCI_STORM_NONE)
+		return false;
+
+	if (err_seen)
+		this_cpu_write(mci_backoff_cnt, INITIAL_CHECK_INTERVAL);
+	else
+		this_cpu_dec(mci_backoff_cnt);
+
+	return true;
+}
+
+unsigned long mci_amd_adjust_timer(unsigned long interval)
+{
+	if (__this_cpu_read(mci_storm_state) == MCI_STORM_ACTIVE) {
+		if (this_cpu_read(mci_backoff_cnt) > 0) {
+			mce_notify_irq();
+			return MCI_STORM_INTERVAL;
+		}
+
+		__this_cpu_write(mci_storm_state, MCI_STORM_NONE);
+		pr_notice("Storm subsided on CPU %d: switching to interrupt mode\n",
+			  smp_processor_id());
+		mci_toggle_interrupt_mode(true);
+	}
+
+	return interval;
+}
+
+static bool storm_detect(void)
+{
+	unsigned int cnt = this_cpu_read(mci_storm_cnt);
+	unsigned long ts = this_cpu_read(mci_time_stamp);
+	unsigned long now = jiffies;
+
+	if (__this_cpu_read(mci_storm_state) != MCI_STORM_NONE)
+		return true;
+
+	if (time_before_eq(now, ts + MCI_STORM_INTERVAL)) {
+		cnt++;
+	} else {
+		cnt = 1;
+		this_cpu_write(mci_time_stamp, now);
+	}
+
+	this_cpu_write(mci_storm_cnt, cnt);
+
+	if (cnt <= MCI_STORM_THRESHOLD)
+		return false;
+
+	mci_toggle_interrupt_mode(false);
+	__this_cpu_write(mci_storm_state, MCI_STORM_ACTIVE);
+	mce_timer_kick(MCI_STORM_INTERVAL);
+	this_cpu_write(mci_backoff_cnt, INITIAL_CHECK_INTERVAL);
+
+	pr_notice("Storm detected on CPU %d: switching to poll mode\n",
+		  smp_processor_id());
+
+	return true;
+}
+
 static int setup_APIC_mce_threshold(int reserved, int new)
 {
 	if (reserved < 0 && !setup_APIC_eilvt(new, THRESHOLD_APIC_VECTOR,
@@ -868,6 +991,9 @@ static void amd_threshold_interrupt(void)
 	struct threshold_bank **bp = this_cpu_read(threshold_banks);
 	unsigned int bank, cpu = smp_processor_id();
 
+	if (storm_detect())
+		return;
+
 	/*
 	 * Validate that the threshold bank has been initialized already. The
 	 * handler is installed at boot time, but on a hotplug event the
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 4c31656503bd..ec89b1115889 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1554,6 +1554,13 @@ static unsigned long mce_adjust_timer_default(unsigned long interval)
 
 static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
 
+static bool mce_mci_poll_default(bool err_seen)
+{
+	return false;
+}
+
+static bool (*mce_mci_poll)(bool err_seen) = mce_mci_poll_default;
+
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
 	unsigned long when = jiffies + interval;
@@ -1577,9 +1584,11 @@ static void mce_timer_fn(struct timer_list *t)
 	iv = __this_cpu_read(mce_next_interval);
 
 	if (mce_available(this_cpu_ptr(&cpu_info))) {
-		machine_check_poll(0, this_cpu_ptr(&mce_poll_banks));
+		bool err_seen;
+
+		err_seen = machine_check_poll(0, this_cpu_ptr(&mce_poll_banks));
 
-		if (mce_intel_cmci_poll()) {
+		if (mce_mci_poll(err_seen)) {
 			iv = mce_adjust_timer(iv);
 			goto done;
 		}
@@ -1938,10 +1947,13 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 	case X86_VENDOR_INTEL:
 		mce_intel_feature_init(c);
 		mce_adjust_timer = cmci_intel_adjust_timer;
+		mce_mci_poll = mce_intel_cmci_poll;
 		break;
 
 	case X86_VENDOR_AMD: {
 		mce_amd_feature_init(c);
+		mce_adjust_timer = mci_amd_adjust_timer;
+		mce_mci_poll = mce_amd_mci_poll;
 		break;
 		}
 
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 95275a5e57e0..6f8006d9620d 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -127,7 +127,7 @@ static bool lmce_supported(void)
 	return tmp & FEAT_CTL_LMCE_ENABLED;
 }
 
-bool mce_intel_cmci_poll(void)
+bool mce_intel_cmci_poll(bool err_seen)
 {
 	if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE)
 		return false;
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index a04b61e27827..aa03107a72b5 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -42,7 +42,7 @@ extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
 unsigned long cmci_intel_adjust_timer(unsigned long interval);
-bool mce_intel_cmci_poll(void);
+bool mce_intel_cmci_poll(bool err_seen);
 void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
@@ -51,7 +51,7 @@ void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
 # define cmci_intel_adjust_timer mce_adjust_timer_default
-static inline bool mce_intel_cmci_poll(void) { return false; }
+# define mce_intel_cmci_poll mce_mci_poll_default
 static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
@@ -186,8 +186,12 @@ enum mca_msr {
 extern bool filter_mce(struct mce *m);
 
 #ifdef CONFIG_X86_MCE_AMD
+unsigned long mci_amd_adjust_timer(unsigned long interval);
 extern bool amd_filter_mce(struct mce *m);
+bool mce_amd_mci_poll(bool err_seen);
 #else
+# define mci_amd_adjust_timer mce_adjust_timer_default
+# define mce_amd_mci_poll mce_mci_poll_default
 static inline bool amd_filter_mce(struct mce *m) { return false; }
 #endif
 
-- 
2.17.1



* [RFC PATCH 2/2] x86/mce: Simplify code in log_and_reset_block()
  2022-02-17 14:16 [RFC PATCH 0/2] Handle AMD threshold interrupt storms Smita Koralahalli
  2022-02-17 14:16 ` [RFC PATCH 1/2] x86/mce: " Smita Koralahalli
@ 2022-02-17 14:16 ` Smita Koralahalli
  1 sibling, 0 replies; 18+ messages in thread
From: Smita Koralahalli @ 2022-02-17 14:16 UTC (permalink / raw)
  To: x86, linux-edac, linux-kernel
  Cc: Tony Luck, H . Peter Anvin, Dave Hansen, Yazen Ghannam,
	Smita Koralahalli

Reuse the existing _reset_block() to reset the threshold block after
logging an error in log_and_reset_block().

Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
---
 arch/x86/kernel/cpu/mce/amd.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
index 53d9320d1470..823733468973 100644
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -960,7 +960,6 @@ static void log_error_thresholding(unsigned int bank, u64 misc)
 
 static void log_and_reset_block(struct threshold_block *block)
 {
-	struct thresh_restart tr;
 	u32 low = 0, high = 0;
 
 	if (!block)
@@ -976,9 +975,7 @@ static void log_and_reset_block(struct threshold_block *block)
 	log_error_thresholding(block->bank, ((u64)high << 32) | low);
 
 	/* Reset threshold block after logging error. */
-	memset(&tr, 0, sizeof(tr));
-	tr.b = block;
-	threshold_restart_bank(&tr);
+	_reset_block(block);
 }
 
 /*
-- 
2.17.1



* Re: [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms
  2022-02-17 14:16 ` [RFC PATCH 1/2] x86/mce: " Smita Koralahalli
@ 2022-02-17 17:28   ` Luck, Tony
  2022-02-17 17:35     ` [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code Luck, Tony
                       ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Luck, Tony @ 2022-02-17 17:28 UTC (permalink / raw)
  To: Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

On Thu, Feb 17, 2022 at 08:16:08AM -0600, Smita Koralahalli wrote:
> Extend the logic of handling CMCI storms to AMD threshold interrupts.
> 
> Similar to CMCI storm handling, keep track of the rate at which each
> processor sees interrupts. If it exceeds threshold, disable interrupts
> and switch to polling of machine check banks.

I've been sitting on some partially done patches to re-work
storm handling for Intel ... which rips out all the existing
storm bits and replaces with something all new. I'll post the
2-part series as replies to this.

Two-part motivation:

1) Disabling CMCI globally is an overly big hammer (as you note
in your patches, which do a more gentle per-CPU disable).
2) Intel signals some UNCORRECTED errors using CMCI (yes, turns
out that was a poorly chosen name given the later evolution of
the architecture). Since we don't want to miss those, the proposed
storm code just bumps the threshold to (almost) maximum to mitigate,
but not eliminate, the storm. Note that the threshold only applies
to corrected errors.

-Tony


* [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code
  2022-02-17 17:28   ` Luck, Tony
@ 2022-02-17 17:35     ` Luck, Tony
  2022-02-24 15:14       ` Borislav Petkov
  2022-02-17 17:36     ` [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation Luck, Tony
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Luck, Tony @ 2022-02-17 17:35 UTC (permalink / raw)
  To: Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

When a "storm" of CMCI is detected, this code mitigates by
disabling CMCI interrupt signalling from all of the banks
owned by the CPU that saw the storm.

There are problems with this approach:

1) It is very coarse grained. In all likelihood only one of the
banks was generating the interrupts, but CMCI is disabled for all.
This means Linux may delay seeing and processing errors logged
from other banks.

2) Although CMCI stands for Corrected Machine Check Interrupt, it
is also used to signal when an uncorrected error is logged. This
is a problem because these errors should be handled in a timely
manner.

Delete all this code in preparation for a finer grained solution.

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/mce/core.c     |  20 +---
 arch/x86/kernel/cpu/mce/intel.c    | 145 -----------------------------
 arch/x86/kernel/cpu/mce/internal.h |   6 --
 3 files changed, 1 insertion(+), 170 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6ed365337a3b..4f9abb66520d 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1510,13 +1510,6 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
 
-static unsigned long mce_adjust_timer_default(unsigned long interval)
-{
-	return interval;
-}
-
-static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
-
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
 	unsigned long when = jiffies + interval;
@@ -1539,15 +1532,9 @@ static void mce_timer_fn(struct timer_list *t)
 
 	iv = __this_cpu_read(mce_next_interval);
 
-	if (mce_available(this_cpu_ptr(&cpu_info))) {
+	if (mce_available(this_cpu_ptr(&cpu_info)))
 		machine_check_poll(0, this_cpu_ptr(&mce_poll_banks));
 
-		if (mce_intel_cmci_poll()) {
-			iv = mce_adjust_timer(iv);
-			goto done;
-		}
-	}
-
 	/*
 	 * Alert userspace if needed. If we logged an MCE, reduce the polling
 	 * interval, otherwise increase the polling interval.
@@ -1557,7 +1544,6 @@ static void mce_timer_fn(struct timer_list *t)
 	else
 		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
 
-done:
 	__this_cpu_write(mce_next_interval, iv);
 	__start_timer(t, iv);
 }
@@ -1887,7 +1873,6 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
 
 	intel_init_cmci();
 	intel_init_lmce();
-	mce_adjust_timer = cmci_intel_adjust_timer;
 }
 
 static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c)
@@ -1900,7 +1885,6 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 	switch (c->x86_vendor) {
 	case X86_VENDOR_INTEL:
 		mce_intel_feature_init(c);
-		mce_adjust_timer = cmci_intel_adjust_timer;
 		break;
 
 	case X86_VENDOR_AMD: {
@@ -2559,8 +2543,6 @@ static void mce_reenable_cpu(void)
 
 static int mce_cpu_dead(unsigned int cpu)
 {
-	mce_intel_hcpu_update(cpu);
-
 	/* intentionally ignoring frozen here */
 	if (!cpuhp_tasks_frozen)
 		cmci_rediscover();
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index bb9a46a804bf..cee9d989f791 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -41,15 +41,6 @@
  */
 static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
 
-/*
- * CMCI storm detection backoff counter
- *
- * During storm, we reset this counter to INITIAL_CHECK_INTERVAL in case we've
- * encountered an error. If not, we decrement it by one. We signal the end of
- * the CMCI storm when it reaches 0.
- */
-static DEFINE_PER_CPU(int, cmci_backoff_cnt);
-
 /*
  * cmci_discover_lock protects against parallel discovery attempts
  * which could race against each other.
@@ -57,21 +48,6 @@ static DEFINE_PER_CPU(int, cmci_backoff_cnt);
 static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
 
 #define CMCI_THRESHOLD		1
-#define CMCI_POLL_INTERVAL	(30 * HZ)
-#define CMCI_STORM_INTERVAL	(HZ)
-#define CMCI_STORM_THRESHOLD	15
-
-static DEFINE_PER_CPU(unsigned long, cmci_time_stamp);
-static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt);
-static DEFINE_PER_CPU(unsigned int, cmci_storm_state);
-
-enum {
-	CMCI_STORM_NONE,
-	CMCI_STORM_ACTIVE,
-	CMCI_STORM_SUBSIDED,
-};
-
-static atomic_t cmci_storm_on_cpus;
 
 static int cmci_supported(int *banks)
 {
@@ -127,124 +103,6 @@ static bool lmce_supported(void)
 	return tmp & FEAT_CTL_LMCE_ENABLED;
 }
 
-bool mce_intel_cmci_poll(void)
-{
-	if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE)
-		return false;
-
-	/*
-	 * Reset the counter if we've logged an error in the last poll
-	 * during the storm.
-	 */
-	if (machine_check_poll(0, this_cpu_ptr(&mce_banks_owned)))
-		this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL);
-	else
-		this_cpu_dec(cmci_backoff_cnt);
-
-	return true;
-}
-
-void mce_intel_hcpu_update(unsigned long cpu)
-{
-	if (per_cpu(cmci_storm_state, cpu) == CMCI_STORM_ACTIVE)
-		atomic_dec(&cmci_storm_on_cpus);
-
-	per_cpu(cmci_storm_state, cpu) = CMCI_STORM_NONE;
-}
-
-static void cmci_toggle_interrupt_mode(bool on)
-{
-	unsigned long flags, *owned;
-	int bank;
-	u64 val;
-
-	raw_spin_lock_irqsave(&cmci_discover_lock, flags);
-	owned = this_cpu_ptr(mce_banks_owned);
-	for_each_set_bit(bank, owned, MAX_NR_BANKS) {
-		rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
-
-		if (on)
-			val |= MCI_CTL2_CMCI_EN;
-		else
-			val &= ~MCI_CTL2_CMCI_EN;
-
-		wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
-	}
-	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
-}
-
-unsigned long cmci_intel_adjust_timer(unsigned long interval)
-{
-	if ((this_cpu_read(cmci_backoff_cnt) > 0) &&
-	    (__this_cpu_read(cmci_storm_state) == CMCI_STORM_ACTIVE)) {
-		mce_notify_irq();
-		return CMCI_STORM_INTERVAL;
-	}
-
-	switch (__this_cpu_read(cmci_storm_state)) {
-	case CMCI_STORM_ACTIVE:
-
-		/*
-		 * We switch back to interrupt mode once the poll timer has
-		 * silenced itself. That means no events recorded and the timer
-		 * interval is back to our poll interval.
-		 */
-		__this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED);
-		if (!atomic_sub_return(1, &cmci_storm_on_cpus))
-			pr_notice("CMCI storm subsided: switching to interrupt mode\n");
-
-		fallthrough;
-
-	case CMCI_STORM_SUBSIDED:
-		/*
-		 * We wait for all CPUs to go back to SUBSIDED state. When that
-		 * happens we switch back to interrupt mode.
-		 */
-		if (!atomic_read(&cmci_storm_on_cpus)) {
-			__this_cpu_write(cmci_storm_state, CMCI_STORM_NONE);
-			cmci_toggle_interrupt_mode(true);
-			cmci_recheck();
-		}
-		return CMCI_POLL_INTERVAL;
-	default:
-
-		/* We have shiny weather. Let the poll do whatever it thinks. */
-		return interval;
-	}
-}
-
-static bool cmci_storm_detect(void)
-{
-	unsigned int cnt = __this_cpu_read(cmci_storm_cnt);
-	unsigned long ts = __this_cpu_read(cmci_time_stamp);
-	unsigned long now = jiffies;
-	int r;
-
-	if (__this_cpu_read(cmci_storm_state) != CMCI_STORM_NONE)
-		return true;
-
-	if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) {
-		cnt++;
-	} else {
-		cnt = 1;
-		__this_cpu_write(cmci_time_stamp, now);
-	}
-	__this_cpu_write(cmci_storm_cnt, cnt);
-
-	if (cnt <= CMCI_STORM_THRESHOLD)
-		return false;
-
-	cmci_toggle_interrupt_mode(false);
-	__this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE);
-	r = atomic_add_return(1, &cmci_storm_on_cpus);
-	mce_timer_kick(CMCI_STORM_INTERVAL);
-	this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL);
-
-	if (r == 1)
-		pr_notice("CMCI storm detected: switching to poll mode\n");
-	return true;
-}
-
 /*
  * The interrupt handler. This is called on every event.
  * Just call the poller directly to log any events.
@@ -253,9 +111,6 @@ static bool cmci_storm_detect(void)
  */
 static void intel_threshold_interrupt(void)
 {
-	if (cmci_storm_detect())
-		return;
-
 	machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned));
 }
 
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index acd61c41846c..f01d6cbeb809 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,18 +41,12 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
-unsigned long cmci_intel_adjust_timer(unsigned long interval);
-bool mce_intel_cmci_poll(void);
-void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
-# define cmci_intel_adjust_timer mce_adjust_timer_default
-static inline bool mce_intel_cmci_poll(void) { return false; }
-static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
-- 
2.35.1



* [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation
  2022-02-17 17:28   ` Luck, Tony
  2022-02-17 17:35     ` [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code Luck, Tony
@ 2022-02-17 17:36     ` Luck, Tony
  2022-02-23 22:11       ` Koralahalli Channabasappa, Smita
  2022-03-07 13:31       ` Borislav Petkov
  2022-02-18 11:07     ` [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms Borislav Petkov
  2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
  3 siblings, 2 replies; 18+ messages in thread
From: Luck, Tony @ 2022-02-17 17:36 UTC (permalink / raw)
  To: Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

Add a hook into machine_check_poll() to keep track of per-CPU, per-bank
corrected error logs.

Maintain a bitmap history for each bank showing whether the bank
logged a corrected error or not each time it is polled.

In normal operation the interval between polls of these banks
determines how far to shift the history. The 64-bit width corresponds
to about one second.

When a storm is observed, the rate of interrupts is reduced by setting
a large threshold value for this bank in IA32_MCi_CTL2. This bank is
added to the bitmap of banks for this CPU to poll. The polling rate
is increased to once per second.
During a storm each bit in the history indicates the status of the
bank each time it is polled, so the history covers just over a minute.

Declare a storm for that bank if the number of corrected errors
seen in that history is above some threshold (5 in this RFC code for
ease of testing; likely to move to 15 for compatibility with previous
storm detection).

A storm on a bank ends if enough consecutive polls of the bank show
no corrected errors (currently 30, may also change). That resets the
threshold in IA32_MCi_CTL2 back to 1, removes the bank from the bitmap
for polling, and changes the polling rate back to the default.

If a CPU with banks in storm mode is taken offline, the new CPU
that inherits ownership of those banks takes over management of
storm(s) in the inherited bank(s).
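
The history-bitmap scheme above can be sketched in a few lines of
userspace C. This is a simplified model of the storm-mode case only
(one history bit per poll; the HZBITS scaling for normal operation is
omitted), and the struct and function names are illustrative, not the
symbols used in the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STORM_BEGIN 5   /* set bits in history that start a storm */
#define STORM_END   30  /* consecutive clean polls that end it */

struct bank_state {
	uint64_t history; /* one bit per poll, newest in bit 0 */
	bool     storm;
};

/* Portable popcount; the kernel would use hweight64(). */
static int popcount64(uint64_t v)
{
	int n = 0;

	for (; v; v &= v - 1)
		n++;
	return n;
}

/* Record one poll result and update the storm state for this bank. */
static void bank_track(struct bank_state *b, bool err_seen)
{
	b->history = (b->history << 1) | (err_seen ? 1 : 0);

	if (b->storm) {
		/* Storm ends after STORM_END clean polls in a row. */
		if ((b->history & ((1ull << STORM_END) - 1)) == 0) {
			b->history = 0;
			b->storm = false;
		}
	} else {
		if (popcount64(b->history) >= STORM_BEGIN)
			b->storm = true;
	}
}
```

Five errors anywhere in the 64-poll window start a storm; 30 clean
polls in a row end it and discard the history, mirroring what
cmci_storm_end() does in the patch.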

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/mce/core.c     |  26 ++++--
 arch/x86/kernel/cpu/mce/intel.c    | 124 ++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/mce/internal.h |   4 +-
 3 files changed, 143 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 4f9abb66520d..1f3e7c074182 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -714,6 +714,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
 		barrier();
 		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
 
+		mce_intel_storm_tracker(i, m.status);
+
 		/* If this entry is not valid, ignore it */
 		if (!(m.status & MCI_STATUS_VAL))
 			continue;
@@ -1509,6 +1511,7 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
 
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
+static DEFINE_PER_CPU(bool, storm_poll_mode);
 
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
@@ -1544,22 +1547,29 @@ static void mce_timer_fn(struct timer_list *t)
 	else
 		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
 
-	__this_cpu_write(mce_next_interval, iv);
-	__start_timer(t, iv);
+	if (__this_cpu_read(storm_poll_mode)) {
+		__start_timer(t, HZ);
+	} else {
+		__this_cpu_write(mce_next_interval, iv);
+		__start_timer(t, iv);
+	}
 }
 
 /*
- * Ensure that the timer is firing in @interval from now.
+ * When a storm starts on any bank on this CPU, switch to polling
+ * once per second. When the storm ends, revert to the default
+ * polling interval.
  */
-void mce_timer_kick(unsigned long interval)
+void mce_timer_kick(bool storm)
 {
 	struct timer_list *t = this_cpu_ptr(&mce_timer);
-	unsigned long iv = __this_cpu_read(mce_next_interval);
 
-	__start_timer(t, interval);
+	__this_cpu_write(storm_poll_mode, storm);
 
-	if (interval < iv)
-		__this_cpu_write(mce_next_interval, interval);
+	if (storm)
+		__start_timer(t, HZ);
+	else
+		__this_cpu_write(mce_next_interval, check_interval * HZ);
 }
 
 /* Must not be called in IRQ context where del_timer_sync() can deadlock */
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index cee9d989f791..2ed5634ec277 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -47,8 +47,48 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
  */
 static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
 
+/*
+ * CMCI storm tracking state
+ */
+static DEFINE_PER_CPU(int, stormy_bank_count);
+static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history);
+static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm);
+static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp);
+static int cmci_threshold[MAX_NR_BANKS];
+
 #define CMCI_THRESHOLD		1
 
+/*
+ * High threshold to limit CMCI rate during storms. Max supported is
+ * 0x7FFF. Use this slightly smaller value so it has a distinctive
+ * signature when someone asks "Why am I not seeing all corrected errors?"
+ */
+#define CMCI_STORM_THRESHOLD	0x7FED
+
+/*
+ * How many errors within the history buffer mark the start of a storm
+ */
+#define STORM_BEGIN	5
+
+/*
+ * How many polls of machine check bank without an error before declaring
+ * the storm is over
+ */
+#define STORM_END	30
+
+/*
+ * If there is no poll data for a bank for this amount of time, just
+ * discard the history.
+ */
+#define STORM_INTERVAL (1 * HZ)
+
+/*
+ * When there is no storm each "bit" in the history represents
+ * this many jiffies. When there is a storm every poll() takes
+ * one history bit.
+ */
+#define HZBITS (HZ / 64)
+
 static int cmci_supported(int *banks)
 {
 	u64 cap;
@@ -103,6 +143,70 @@ static bool lmce_supported(void)
 	return tmp & FEAT_CTL_LMCE_ENABLED;
 }
 
+/*
+ * Set a new CMCI threshold value. Preserve the state of the
+ * MCI_CTL2_CMCI_EN bit in case this happens during a
+ * cmci_rediscover() operation.
+ */
+static void cmci_set_threshold(int bank, int thresh)
+{
+	unsigned long flags;
+	u64 val;
+
+	raw_spin_lock_irqsave(&cmci_discover_lock, flags);
+	rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
+	val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
+	wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh);
+	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
+}
+
+static void cmci_storm_begin(int bank)
+{
+	__set_bit(bank, this_cpu_ptr(mce_poll_banks));
+	this_cpu_write(bank_storm[bank], true);
+	if (this_cpu_inc_return(stormy_bank_count) == 1)
+		mce_timer_kick(true);
+}
+
+static void cmci_storm_end(int bank)
+{
+	__clear_bit(bank, this_cpu_ptr(mce_poll_banks));
+	this_cpu_write(bank_history[bank], 0ull);
+	this_cpu_write(bank_storm[bank], false);
+	if (this_cpu_dec_return(stormy_bank_count) == 0)
+		mce_timer_kick(false);
+}
+
+void mce_intel_storm_tracker(int bank, u64 status)
+{
+	unsigned long now = jiffies, delta;
+	unsigned int shift;
+	u64 history;
+
+	delta = now - this_cpu_read(bank_time_stamp[bank]);
+	shift = this_cpu_read(bank_storm[bank]) ? 1 : (delta + HZBITS) / HZBITS;
+	history = (shift < 64) ? this_cpu_read(bank_history[bank]) << shift : 0;
+	this_cpu_write(bank_time_stamp[bank], now);
+
+	if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL)
+		history |= 1;
+	this_cpu_write(bank_history[bank], history);
+
+	if (this_cpu_read(bank_storm[bank])) {
+		if (history & GENMASK_ULL(STORM_END - 1, 0))
+			return;
+		pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank);
+		cmci_set_threshold(bank, cmci_threshold[bank]);
+		cmci_storm_end(bank);
+	} else {
+		if (hweight64(history) < STORM_BEGIN)
+			return;
+		pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank);
+		cmci_set_threshold(bank, CMCI_STORM_THRESHOLD);
+		cmci_storm_begin(bank);
+	}
+}
+
 /*
  * The interrupt handler. This is called on every event.
  * Just call the poller directly to log any events.
@@ -147,6 +251,9 @@ static void cmci_discover(int banks)
 			continue;
 		}
 
+		if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
+			goto storm;
+
 		if (!mca_cfg.bios_cmci_threshold) {
 			val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
 			val |= CMCI_THRESHOLD;
@@ -159,7 +266,7 @@ static void cmci_discover(int banks)
 			bios_zero_thresh = 1;
 			val |= CMCI_THRESHOLD;
 		}
-
+storm:
 		val |= MCI_CTL2_CMCI_EN;
 		wrmsrl(MSR_IA32_MCx_CTL2(i), val);
 		rdmsrl(MSR_IA32_MCx_CTL2(i), val);
@@ -167,7 +274,14 @@ static void cmci_discover(int banks)
 		/* Did the enable bit stick? -- the bank supports CMCI */
 		if (val & MCI_CTL2_CMCI_EN) {
 			set_bit(i, owned);
-			__clear_bit(i, this_cpu_ptr(mce_poll_banks));
+			if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) {
+				pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), i);
+				this_cpu_write(bank_history[i], ~0ull);
+				this_cpu_write(bank_time_stamp[i], jiffies);
+				cmci_storm_begin(i);
+			} else {
+				__clear_bit(i, this_cpu_ptr(mce_poll_banks));
+			}
 			/*
 			 * We are able to set thresholds for some banks that
 			 * had a threshold of 0. This means the BIOS has not
@@ -177,6 +291,10 @@ static void cmci_discover(int banks)
 			if (mca_cfg.bios_cmci_threshold && bios_zero_thresh &&
 					(val & MCI_CTL2_CMCI_THRESHOLD_MASK))
 				bios_wrong_thresh = 1;
+
+			/* Save default threshold for each bank */
+			if (cmci_threshold[i] == 0)
+				cmci_threshold[i] = val & MCI_CTL2_CMCI_THRESHOLD_MASK;
 		} else {
 			WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks)));
 		}
@@ -218,6 +336,8 @@ static void __cmci_disable_bank(int bank)
 	val &= ~MCI_CTL2_CMCI_EN;
 	wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
 	__clear_bit(bank, this_cpu_ptr(mce_banks_owned));
+	if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
+		cmci_storm_end(bank);
 }
 
 /*
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index f01d6cbeb809..6c7480bce977 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,12 +41,14 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
+void mce_intel_storm_tracker(int bank, u64 status);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
+static inline void mce_intel_storm_tracker(int bank, u64 status) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
@@ -54,7 +56,7 @@ static inline void intel_clear_lmce(void) { }
 static inline bool intel_filter_mce(struct mce *m) { return false; }
 #endif
 
-void mce_timer_kick(unsigned long interval);
+void mce_timer_kick(bool storm);
 
 #ifdef CONFIG_ACPI_APEI
 int apei_write_mce(struct mce *m);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms
  2022-02-17 17:28   ` Luck, Tony
  2022-02-17 17:35     ` [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code Luck, Tony
  2022-02-17 17:36     ` [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation Luck, Tony
@ 2022-02-18 11:07     ` Borislav Petkov
  2022-02-23 22:22       ` Koralahalli Channabasappa, Smita
  2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
  3 siblings, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2022-02-18 11:07 UTC (permalink / raw)
  To: Luck, Tony, Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

On Thu, Feb 17, 2022 at 09:28:09AM -0800, Luck, Tony wrote:
> I've been sitting on some partially done patches to re-work
> storm handling for Intel ... which rips out all the existing
> storm bits and replaces with something all new. I'll post the
> 2-part series as replies to this.

Which begs the obvious question: how much of that code can be shared
between the two?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation
  2022-02-17 17:36     ` [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation Luck, Tony
@ 2022-02-23 22:11       ` Koralahalli Channabasappa, Smita
  2022-03-07 13:31       ` Borislav Petkov
  1 sibling, 0 replies; 18+ messages in thread
From: Koralahalli Channabasappa, Smita @ 2022-02-23 22:11 UTC (permalink / raw)
  To: Luck, Tony, Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

Hi,

After reviewing Intel's new implementation to handle storms and
comparing it against the patch here:
https://lore.kernel.org/all/20220217141609.119453-2-Smita.KoralahalliChannabasappa@amd.com/

AMD's current implementation already does per-CPU disabling of
interrupts, and the storm handling is done for corrected errors only.

However, we are not taking a per-bank approach to handling storms. So,
it's a good idea to incorporate this in our code too.

It looked to me that most of Intel's current implementation can
be shared, except in a few places where we use different registers
(MCx_MISC) for setting the threshold limits that fire interrupts.

Other questions/comments below..

On 2/17/22 11:36 AM, Luck, Tony wrote:

> Add a hook into machine_check_poll() to keep track of per-CPU, per-bank
> corrected error logs.
>
> Maintain a bitmap history for each bank showing whether the bank
> logged a corrected error or not each time it is polled.
>
> In normal operation the interval between polls of this bank
> determines how far to shift the history. The 64 bit width corresponds
> to about one second.

I did not quite get this paragraph and the associated sentence below
..."Thus the history covers just over a minute".

I'm thinking these timing calculations relate to the statements above?

+	delta = now - this_cpu_read(bank_time_stamp[bank]);
+	shift = this_cpu_read(bank_storm[bank]) ? 1 : (delta + HZBITS) / HZBITS;
+	history = (shift < 64) ? this_cpu_read(bank_history[bank]) << shift : 0;
+	this_cpu_write(bank_time_stamp[bank], now);

Can you please elaborate?
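For reference, a standalone sketch of what that calculation appears to be doing (the names, the HZ value, and keeping state in plain variables are all stand-ins here; the real code uses per-CPU arrays):

```c
#include <stdint.h>

/* Assumed values for this sketch only; the kernel uses its build-time HZ. */
#define SKETCH_HZ  250
#define HZBITS     (SKETCH_HZ / 64)  /* jiffies per history bit, ~1/64 second */

/*
 * Shift the 64-bit history left by one bit per HZBITS elapsed jiffies
 * (exactly one bit per poll during a storm), then record in bit 0
 * whether this poll saw a corrected error. A gap of about a second or
 * more shifts everything out, i.e. discards the history.
 */
static uint64_t update_history(uint64_t history, unsigned long delta,
			       int in_storm, int saw_corrected)
{
	unsigned int shift = in_storm ? 1 : (delta + HZBITS) / HZBITS;

	history = (shift < 64) ? history << shift : 0;
	if (saw_corrected)
		history |= 1;
	return history;
}
```

So outside a storm the 64 bits span roughly one second of wall time, while during a storm each once-per-second poll consumes one bit, which would be why the history then covers about a minute.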

>
> When a storm is observed the rate of interrupts is reduced by setting
> a large threshold value for this bank in IA32_MCi_CTL2. This bank is
> added to the bitmap of banks for this CPU to poll. The polling rate
> is increased to once per second.
> During a storm each bit in the history indicates the status of the
> bank each time it is polled. Thus the history covers just over a minute.
>
> Declare a storm for that bank if the number of corrected interrupts
> seen in that history is above some threshold (5 in this RFC code for
> ease of testing, likely move to 15 for compatibility with previous
> storm detection).
>
> A storm on a bank ends if enough consecutive polls of the bank show
> no corrected errors (currently 30, may also change). That resets the
> threshold in IA32_MCi_CTL2 back to 1, removes the bank from the bitmap
> for polling, and changes the polling rate back to the default.
>
> If a CPU with banks in storm mode is taken offline, the new CPU
> that inherits ownership of those banks takes over management of
> storm(s) in the inherited bank(s).
>
> Signed-off-by: Tony Luck <tony.luck@intel.com>
> ---
>   arch/x86/kernel/cpu/mce/core.c     |  26 ++++--
>   arch/x86/kernel/cpu/mce/intel.c    | 124 ++++++++++++++++++++++++++++-
>   arch/x86/kernel/cpu/mce/internal.h |   4 +-
>   3 files changed, 143 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
> index 4f9abb66520d..1f3e7c074182 100644
> --- a/arch/x86/kernel/cpu/mce/core.c
> +++ b/arch/x86/kernel/cpu/mce/core.c
> @@ -714,6 +714,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
>   		barrier();
>   		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
>   
> +		mce_intel_storm_tracker(i, m.status);
> +
>   		/* If this entry is not valid, ignore it */
>   		if (!(m.status & MCI_STATUS_VAL))
>   			continue;
> @@ -1509,6 +1511,7 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
>   
>   static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
>   static DEFINE_PER_CPU(struct timer_list, mce_timer);
> +static DEFINE_PER_CPU(bool, storm_poll_mode);
>   
>   static void __start_timer(struct timer_list *t, unsigned long interval)
>   {
> @@ -1544,22 +1547,29 @@ static void mce_timer_fn(struct timer_list *t)
>   	else
>   		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
>   
> -	__this_cpu_write(mce_next_interval, iv);
> -	__start_timer(t, iv);
> +	if (__this_cpu_read(storm_poll_mode)) {
> +		__start_timer(t, HZ);

So, this is where the timer is fired to poll once per second on
a storm?

> +	} else {
> +		__this_cpu_write(mce_next_interval, iv);
> +		__start_timer(t, iv);
> +	}
>   }
>   
>   /*
> - * Ensure that the timer is firing in @interval from now.
> + * When a storm starts on any bank on this CPU, switch to polling
> + * once per second. When the storm ends, revert to the default
> + * polling interval.
>    */
> -void mce_timer_kick(unsigned long interval)
> +void mce_timer_kick(bool storm)
>   {
>   	struct timer_list *t = this_cpu_ptr(&mce_timer);
> -	unsigned long iv = __this_cpu_read(mce_next_interval);
>   
> -	__start_timer(t, interval);
> +	__this_cpu_write(storm_poll_mode, storm);
>   
> -	if (interval < iv)
> -		__this_cpu_write(mce_next_interval, interval);
> +	if (storm)
> +		__start_timer(t, HZ);

.. and here?

> +	else
> +		__this_cpu_write(mce_next_interval, check_interval * HZ);
>   }
>   
>   /* Must not be called in IRQ context where del_timer_sync() can deadlock */
> diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
> index cee9d989f791..2ed5634ec277 100644
> --- a/arch/x86/kernel/cpu/mce/intel.c
> +++ b/arch/x86/kernel/cpu/mce/intel.c
> @@ -47,8 +47,48 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
>    */
>   static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
>   
> +/*
> + * CMCI storm tracking state
> + */
> +static DEFINE_PER_CPU(int, stormy_bank_count);
> +static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history);
> +static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm);

So, we maintain two bitmaps.
bank_history: per bank bitmap which contains history of whether
the bank logged corrected error or not each time it is polled.

bank_storm: bitmap of banks in a storm.
Am I right?
  

> +static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp);
> +static int cmci_threshold[MAX_NR_BANKS];
> +
>   #define CMCI_THRESHOLD		1
>   
> +/*
> + * High threshold to limit CMCI rate during storms. Max supported is
> + * 0x7FFF. Use this slightly smaller value so it has a distinctive
> + * signature when some asks "Why am I not seeing all corrected errors?"
> + */
> +#define CMCI_STORM_THRESHOLD	0x7FED
> +
> +/*
> + * How many errors within the history buffer mark the start of a storm
> + */
> +#define STORM_BEGIN	5
> +
> +/*
> + * How many polls of machine check bank without an error before declaring
> + * the storm is over
> + */
> +#define STORM_END	30
> +
> +/*
> + * If there is no poll data for a bank for this amount of time, just
> + * discard the history.
> + */
> +#define STORM_INTERVAL (1 * HZ)

Looks like STORM_INTERVAL isn't used anywhere.

> +
> +/*
> + * When there is no storm each "bit" in the history represents
> + * this many jiffies. When there is a storm every poll() takes
> + * one history bit.
> + */
> +#define HZBITS (HZ / 64)
> +
>   static int cmci_supported(int *banks)
>   {
>   	u64 cap;
> @@ -103,6 +143,70 @@ static bool lmce_supported(void)
>   	return tmp & FEAT_CTL_LMCE_ENABLED;
>   }
>   
> +/*
> + * Set a new CMCI threshold value. Preserve the state of the
> + * MCI_CTL2_CMCI_EN bit in case this happens during a
> + * cmci_rediscover() operation.
> + */
> +static void cmci_set_threshold(int bank, int thresh)
> +{
> +	unsigned long flags;
> +	u64 val;
> +
> +	raw_spin_lock_irqsave(&cmci_discover_lock, flags);

Why not local_irq_save(flags) instead?

> +	rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
> +	val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
> +	wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh);
> +	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
> +}
> +
> +static void cmci_storm_begin(int bank)
> +{
> +	__set_bit(bank, this_cpu_ptr(mce_poll_banks));
> +	this_cpu_write(bank_storm[bank], true);
> +	if (this_cpu_inc_return(stormy_bank_count) == 1)

What is the significance of stormy_bank_count?

> +		mce_timer_kick(true);
> +}
> +
> +static void cmci_storm_end(int bank)
> +{
> +	__clear_bit(bank, this_cpu_ptr(mce_poll_banks));
> +	this_cpu_write(bank_history[bank], 0ull);
> +	this_cpu_write(bank_storm[bank], false);
> +	if (this_cpu_dec_return(stormy_bank_count) == 0)
> +		mce_timer_kick(false);
> +}
> +
> +void mce_intel_storm_tracker(int bank, u64 status)
> +{
> +	unsigned long now = jiffies, delta;
> +	unsigned int shift;
> +	u64 history;
> +
> +	delta = now - this_cpu_read(bank_time_stamp[bank]);
> +	shift = this_cpu_read(bank_storm[bank]) ? 1 : (delta + HZBITS) / HZBITS;
> +	history = (shift < 64) ? this_cpu_read(bank_history[bank]) << shift : 0;
> +	this_cpu_write(bank_time_stamp[bank], now);
> +
> +	if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL)
> +		history |= 1;
> +	this_cpu_write(bank_history[bank], history);
> +
> +	if (this_cpu_read(bank_storm[bank])) {
> +		if (history & GENMASK_ULL(STORM_END - 1, 0))
> +			return;
> +		pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank);
> +		cmci_set_threshold(bank, cmci_threshold[bank]);
> +		cmci_storm_end(bank);
> +	} else {
> +		if (hweight64(history) < STORM_BEGIN)
> +			return;
> +		pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank);
> +		cmci_set_threshold(bank, CMCI_STORM_THRESHOLD);
> +		cmci_storm_begin(bank);
> +	}
> +}
> +
>   /*
>    * The interrupt handler. This is called on every event.
>    * Just call the poller directly to log any events.
> @@ -147,6 +251,9 @@ static void cmci_discover(int banks)
>   			continue;
>   		}
>   
> +		if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
> +			goto storm;
> +
>   		if (!mca_cfg.bios_cmci_threshold) {
>   			val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
>   			val |= CMCI_THRESHOLD;
> @@ -159,7 +266,7 @@ static void cmci_discover(int banks)
>   			bios_zero_thresh = 1;
>   			val |= CMCI_THRESHOLD;
>   		}
> -
> +storm:
>   		val |= MCI_CTL2_CMCI_EN;
>   		wrmsrl(MSR_IA32_MCx_CTL2(i), val);
>   		rdmsrl(MSR_IA32_MCx_CTL2(i), val);
> @@ -167,7 +274,14 @@ static void cmci_discover(int banks)
>   		/* Did the enable bit stick? -- the bank supports CMCI */
>   		if (val & MCI_CTL2_CMCI_EN) {
>   			set_bit(i, owned);
> -			__clear_bit(i, this_cpu_ptr(mce_poll_banks));
> +			if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) {

Why is val checked twice? Before the goto storm and after?

> +				pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), i);
> +				this_cpu_write(bank_history[i], ~0ull);
> +				this_cpu_write(bank_time_stamp[i], jiffies);
> +				cmci_storm_begin(i);
> +			} else {
> +				__clear_bit(i, this_cpu_ptr(mce_poll_banks));
> +			}
>   			/*
>   			 * We are able to set thresholds for some banks that
>   			 * had a threshold of 0. This means the BIOS has not
> @@ -177,6 +291,10 @@ static void cmci_discover(int banks)
>   			if (mca_cfg.bios_cmci_threshold && bios_zero_thresh &&
>   					(val & MCI_CTL2_CMCI_THRESHOLD_MASK))
>   				bios_wrong_thresh = 1;
> +
> +			/* Save default threshold for each bank */
> +			if (cmci_threshold[i] == 0)
> +				cmci_threshold[i] = val & MCI_CTL2_CMCI_THRESHOLD_MASK;

So, this is the default threshold value for interrupts to fire after the
storm subsides?

Is this defaulted to 1, as the commit message reads:
"That resets the threshold in IA32_MCi_CTL2 back to 1"?

>   		} else {
>   			WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks)));
>   		}
> @@ -218,6 +336,8 @@ static void __cmci_disable_bank(int bank)
>   	val &= ~MCI_CTL2_CMCI_EN;
>   	wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
>   	__clear_bit(bank, this_cpu_ptr(mce_banks_owned));
> +	if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
> +		cmci_storm_end(bank);
>   }
>   
>   /*
> diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
> index f01d6cbeb809..6c7480bce977 100644
> --- a/arch/x86/kernel/cpu/mce/internal.h
> +++ b/arch/x86/kernel/cpu/mce/internal.h
> @@ -41,12 +41,14 @@ struct dentry *mce_get_debugfs_dir(void);
>   extern mce_banks_t mce_banks_ce_disabled;
>   
>   #ifdef CONFIG_X86_MCE_INTEL
> +void mce_intel_storm_tracker(int bank, u64 status);
>   void cmci_disable_bank(int bank);
>   void intel_init_cmci(void);
>   void intel_init_lmce(void);
>   void intel_clear_lmce(void);
>   bool intel_filter_mce(struct mce *m);
>   #else
> +static inline void mce_intel_storm_tracker(int bank, u64 status) { }
>   static inline void cmci_disable_bank(int bank) { }
>   static inline void intel_init_cmci(void) { }
>   static inline void intel_init_lmce(void) { }
> @@ -54,7 +56,7 @@ static inline void intel_clear_lmce(void) { }
>   static inline bool intel_filter_mce(struct mce *m) { return false; }
>   #endif
>   
> -void mce_timer_kick(unsigned long interval);
> +void mce_timer_kick(bool storm);
>   
>   #ifdef CONFIG_ACPI_APEI
>   int apei_write_mce(struct mce *m);

Seems to me that most of the code can be shared except in a few places.

Should I come up with shared code, keeping Tony's patch as reference
and incorporating AMD's changes into it?

Thanks,
Smita




* Re: [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms
  2022-02-18 11:07     ` [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms Borislav Petkov
@ 2022-02-23 22:22       ` Koralahalli Channabasappa, Smita
  2022-02-23 23:03         ` Luck, Tony
  0 siblings, 1 reply; 18+ messages in thread
From: Koralahalli Channabasappa, Smita @ 2022-02-23 22:22 UTC (permalink / raw)
  To: Borislav Petkov, Luck, Tony, Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

On 2/18/22 5:07 AM, Borislav Petkov wrote:
> On Thu, Feb 17, 2022 at 09:28:09AM -0800, Luck, Tony wrote:
>> I've been sitting on some partially done patches to re-work
>> storm handling for Intel ... which rips out all the existing
>> storm bits and replaces with something all new. I'll post the
>> 2-part series as replies to this.
> Which begs the obvious question: how much of that code can be shared
> between the two?
>
It looks to me like most of the code can be shared except in a few places
where AMD and Intel use different registers to set error thresholds.
And there is the fact that AMD's threshold interrupts handle just
corrected errors, unlike CMCI.

I'm thinking of coming up with shared code for both by keeping
Intel's new storm handling code as the base, incorporating the AMD
changes on top, and sending it for review.

Let me know if that's okay.

Thanks,
Smita



* RE: [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms
  2022-02-23 22:22       ` Koralahalli Channabasappa, Smita
@ 2022-02-23 23:03         ` Luck, Tony
  0 siblings, 0 replies; 18+ messages in thread
From: Luck, Tony @ 2022-02-23 23:03 UTC (permalink / raw)
  To: Koralahalli Channabasappa, Smita, Borislav Petkov, Smita Koralahalli
  Cc: x86, linux-edac, linux-kernel, H . Peter Anvin, Dave Hansen,
	Yazen Ghannam

> It looks to me like most of the code can be shared except in a few places
> where AMD and Intel use different registers to set error thresholds.

Hopefully easy to abstract.

> And there is the fact that AMD's threshold interrupts handle just
> corrected errors, unlike CMCI.

That makes your life much simpler than mine :-)

> I'm thinking of coming up with shared code for both by keeping
> Intel's new storm handling code as the base, incorporating the AMD
> changes on top, and sending it for review.
>
> Let me know if that's okay.

My new Intel code hasn't had Boris look through it yet to point
out all the bits where I have bugs, or just make things more complex
than they need to be.

So it would be helpful if Boris could do at least a quick scan to
say my code is a worthy base. I'd hate to see you waste time
building a merged version and then have Boris say "Nack".

-Tony


* Re: [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code
  2022-02-17 17:35     ` [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code Luck, Tony
@ 2022-02-24 15:14       ` Borislav Petkov
  0 siblings, 0 replies; 18+ messages in thread
From: Borislav Petkov @ 2022-02-24 15:14 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Smita Koralahalli, x86, linux-edac, linux-kernel,
	H . Peter Anvin, Dave Hansen, Yazen Ghannam

On Thu, Feb 17, 2022 at 09:35:52AM -0800, Luck, Tony wrote:
> When a "storm" of CMCI is detected this code mitigates by
> disabling CMCI interrupt signalling from all of the banks
> owned by the CPU that saw the storm.
> 
> There are problems with this approach:
> 
> 1) It is very coarse grained. In all liklihood only one of the

Unknown word [liklihood] in commit message.
Suggestions: ['likelihood', 'livelihood']

> banks was geenrating the interrupts, but CMCI is disabled for all.

Unknown word [geenrating] in commit message.
Suggestions: ['generating', 'penetrating', 'germinating', 'entreating', 'ingratiating']

Do I need to give you the whole spiel about using a spellchecker?

:)

> This means Linux may delay seeing and processing errors logged
> from other banks.
> 
> 2) Although CMCI stands for Corrected Machine Check Interrupt, it
> is also used to signal when an uncorrected error is logged. This
> is a problem because these errors should be handled in a timely
> manner.
> 
> Delete all this code in preparation for a finer grained solution.
> 
> Signed-off-by: Tony Luck <tony.luck@intel.com>
> ---
>  arch/x86/kernel/cpu/mce/core.c     |  20 +---
>  arch/x86/kernel/cpu/mce/intel.c    | 145 -----------------------------
>  arch/x86/kernel/cpu/mce/internal.h |   6 --
>  3 files changed, 1 insertion(+), 170 deletions(-)

Yah, can't complain about diffstats like that.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation
  2022-02-17 17:36     ` [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation Luck, Tony
  2022-02-23 22:11       ` Koralahalli Channabasappa, Smita
@ 2022-03-07 13:31       ` Borislav Petkov
  2022-03-07 20:04         ` Luck, Tony
  1 sibling, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2022-03-07 13:31 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Smita Koralahalli, x86, linux-edac, linux-kernel,
	H . Peter Anvin, Dave Hansen, Yazen Ghannam

On Thu, Feb 17, 2022 at 09:36:50AM -0800, Luck, Tony wrote:
> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
> index 4f9abb66520d..1f3e7c074182 100644
> --- a/arch/x86/kernel/cpu/mce/core.c
> +++ b/arch/x86/kernel/cpu/mce/core.c
> @@ -714,6 +714,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
>  		barrier();
>  		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
>  
> +		mce_intel_storm_tracker(i, m.status);

Why is this called before the VALID bit check?

Because you want to still run the tracker on each polling - not only
when it sees a valid error?

> diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
> index cee9d989f791..2ed5634ec277 100644
> --- a/arch/x86/kernel/cpu/mce/intel.c
> +++ b/arch/x86/kernel/cpu/mce/intel.c
> @@ -47,8 +47,48 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
>   */
>  static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
>  
> +/*
> + * CMCI storm tracking state
> + */

Those could use some comments explaining what is tracking what:

> +static DEFINE_PER_CPU(int, stormy_bank_count);
> +static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history);
> +static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm);

AFAICT, this says whether a bank is in storm mode?

> +static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp);

This looks like it collects the jiffies when the bank was looked at in
the storm tracker.

> +static int cmci_threshold[MAX_NR_BANKS];
> +
>  #define CMCI_THRESHOLD		1
>  
> +/*
> + * High threshold to limit CMCI rate during storms. Max supported is
> + * 0x7FFF. Use this slightly smaller value so it has a distinctive
> + * signature when some asks "Why am I not seeing all corrected errors?"
> + */
> +#define CMCI_STORM_THRESHOLD	0x7FED

Why is a "threshold" in hex?

> +
> +/*
> + * How many errors within the history buffer mark the start of a storm
> + */
> +#define STORM_BEGIN	5

That looks like a STORM_BEGIN_THRESHOLD to me.

> +
> +/*
> + * How many polls of machine check bank without an error before declaring
> + * the storm is over
> + */
> +#define STORM_END	30

Similarly:

STORM_END_POLL_THRESHOLD

> +
> +/*
> + * If there is no poll data for a bank for this amount of time, just
> + * discard the history.
> + */
> +#define STORM_INTERVAL (1 * HZ)

That looks unused.

> +static void cmci_storm_begin(int bank)
> +{
> +	__set_bit(bank, this_cpu_ptr(mce_poll_banks));
> +	this_cpu_write(bank_storm[bank], true);
> +	if (this_cpu_inc_return(stormy_bank_count) == 1)

s/ == 1//

> +		mce_timer_kick(true);
> +}
> +
> +static void cmci_storm_end(int bank)
> +{
> +	__clear_bit(bank, this_cpu_ptr(mce_poll_banks));
> +	this_cpu_write(bank_history[bank], 0ull);
> +	this_cpu_write(bank_storm[bank], false);
> +	if (this_cpu_dec_return(stormy_bank_count) == 0)

	if (!...

> +		mce_timer_kick(false);
> +}
> +
> +void mce_intel_storm_tracker(int bank, u64 status)

Function name needs a verb.

> +{
> +	unsigned long now = jiffies, delta;
> +	unsigned int shift;
> +	u64 history;
> +
> +	delta = now - this_cpu_read(bank_time_stamp[bank]);
> +	shift = this_cpu_read(bank_storm[bank]) ? 1 : (delta + HZBITS) / HZBITS;

Do

	shift = 1;

on function entry to simplify this assignment.

Also, I'm having trouble with this shift calculation. The laptop here has
HZ=250. Let's say delta is 2000 jiffies.

So if this bank wasn't in storm mode, I'd have

shift = (2000 + (250 / 64)) / (250 / 64) = 513

...

Aaaha, so only when the diff is < 250 in my case, i.e., it polls the
same bank within a second, only then it would shift the history. I.e.,
what you mean with that "The 64 bit width corresponds to about one
second."

> +	history = (shift < 64) ? this_cpu_read(bank_history[bank]) << shift : 0;
> +	this_cpu_write(bank_time_stamp[bank], now);
> +
> +	if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL)
> +		history |= 1;
> +	this_cpu_write(bank_history[bank], history);
> +
> +	if (this_cpu_read(bank_storm[bank])) {
> +		if (history & GENMASK_ULL(STORM_END - 1, 0))
> +			return;

Aha, under STORM_END polls you don't declare the storm as being over.

> +		pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank);
> +		cmci_set_threshold(bank, cmci_threshold[bank]);
> +		cmci_storm_end(bank);
> +	} else {
> +		if (hweight64(history) < STORM_BEGIN)
> +			return;

Aha, so you need STORM_BEGIN errors within the last second to cause the
storm polling. Ok I guess.

So all in all I can't find anything eeewy in this - it would need to
have a lot more documentation, though, as this is not the most trivial
thing to stare at.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation
  2022-03-07 13:31       ` Borislav Petkov
@ 2022-03-07 20:04         ` Luck, Tony
  0 siblings, 0 replies; 18+ messages in thread
From: Luck, Tony @ 2022-03-07 20:04 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Smita Koralahalli, x86, linux-edac, linux-kernel,
	H . Peter Anvin, Dave Hansen, Yazen Ghannam

On Mon, Mar 07, 2022 at 02:31:21PM +0100, Borislav Petkov wrote:

> So all in all I can't find anything eeewy in this - it would need to
> have a lot more documentation, though, as this is not the most trivial
> thing to stare at.

Thanks for the review. I'll see about re-naming things and adding comments
to make this easier to read.

> >  		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
> >  
> > +		mce_intel_storm_tracker(i, m.status);
> 
> Why is this called before the VALID bit check?
> 
> Because you want to still run the tracker on each polling - not only
> when it sees a valid error?

Yes. The tracker cares both about polls that find errors, and polls
that don't. Storm detection is triggered with some threshold of positive
scans, even if they are mixed in with some negative ones. End of storm
is determined by a number of consecutive negative polls.
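As a rough illustration of those two rules, using the RFC's thresholds of 5 and 30 and portable stand-ins for the kernel's hweight64() and GENMASK_ULL() helpers (names here are illustrative only):

```c
#include <stdint.h>

#define STORM_BEGIN  5   /* errors in the history that start a storm */
#define STORM_END   30   /* consecutive clean polls that end one */

/* Portable equivalent of the kernel's hweight64(): count set bits. */
static unsigned int popcount64(uint64_t x)
{
	unsigned int n = 0;

	for (; x; x >>= 1)
		n += x & 1;
	return n;
}

/* Portable equivalent of GENMASK_ULL(h, l): bits l..h set. */
static uint64_t genmask_ull(unsigned int h, unsigned int l)
{
	return ((~0ULL) >> (63 - h)) & ~((1ULL << l) - 1);
}

/* A storm begins once enough error bits accumulate anywhere in the
 * history, even with clean polls mixed in between. */
static int storm_should_begin(uint64_t history)
{
	return popcount64(history) >= STORM_BEGIN;
}

/* A storm ends only after STORM_END consecutive clean polls, i.e. when
 * the low 30 bits of the history are all zero. */
static int storm_should_end(uint64_t history)
{
	return !(history & genmask_ull(STORM_END - 1, 0));
}
```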

> > diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
> > index cee9d989f791..2ed5634ec277 100644
> > --- a/arch/x86/kernel/cpu/mce/intel.c
> > +++ b/arch/x86/kernel/cpu/mce/intel.c
> > @@ -47,8 +47,48 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
> >   */
> >  static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
> >  
> > +/*
> > + * CMCI storm tracking state
> > + */
> 
> Those could use some comments explaining what is tracking what:
> 
> > +static DEFINE_PER_CPU(int, stormy_bank_count);

This one is a count of how many banks on this CPU are in storm mode.

> > +static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history);

Bitmask history of the most recent 64 polls for each bank.

> > +static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm);
> 
> AFAICT, this says whether a bank is in storm mode?

Yes.

> > +static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp);
> 
> This looks like it collects the jiffies when the bank was looked at in
> the storm tracker.

Yes.

> > +static int cmci_threshold[MAX_NR_BANKS];
> > +
> >  #define CMCI_THRESHOLD		1
> >  
> > +/*
> > + * High threshold to limit CMCI rate during storms. Max supported is
> > + * 0x7FFF. Use this slightly smaller value so it has a distinctive
> > + * signature when some asks "Why am I not seeing all corrected errors?"
> > + */
> > +#define CMCI_STORM_THRESHOLD	0x7FED
> 
> Why is a "threshold" in hex?

For debugging ... I was using the rdmsr tool to read the MCi_CTL2 machine
check bank MSRs. It could be defined in decimal.

> > +
> > +/*
> > + * How many errors within the history buffer mark the start of a storm
> > + */
> > +#define STORM_BEGIN	5
> 
> That looks like a STORM_BEGIN_THRESHOLD to me.

Yes.

> > +
> > +/*
> > + * How many polls of machine check bank without an error before declaring
> > + * the storm is over
> > + */
> > +#define STORM_END	30
> 
> Similarly:
> 
> STORM_END_POLL_THRESHOLD

Ditto yes.

> > +
> > +/*
> > + * If there is no poll data for a bank for this amount of time, just
> > + * discard the history.
> > + */
> > +#define STORM_INTERVAL (1 * HZ)
> 
> That looks unused.

It was at one point a replacement for CMCI_STORM_INTERVAL deleted
in patch 0001. But I must have dropped the place it was used. Will
delete.

> > +static void cmci_storm_begin(int bank)
> > +{
> > +	__set_bit(bank, this_cpu_ptr(mce_poll_banks));
> > +	this_cpu_write(bank_storm[bank], true);
> > +	if (this_cpu_inc_return(stormy_bank_count) == 1)
> 
> s/ == 1//

Not sure about this. The stormy_bank_count variable keeps
track of how many banks on this CPU are in storm mode. So
the transition from zero to one is significant. Need to start
polling. But for 1->2, 2->3 etc. nothing needs to happen. The
CPU is already polling.
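A minimal model of that transition logic (hypothetical names, plain globals standing in for the kernel's per-CPU state and timer):

```c
#include <stdbool.h>

/* Count of banks on this (notional) CPU currently in storm mode. Only
 * the 0 -> 1 and 1 -> 0 transitions change the polling cadence; other
 * banks entering or leaving a storm leave the timer alone. */
static int stormy_bank_count;
static bool fast_poll;   /* stand-in for mce_timer_kick(true/false) */

static void storm_begin(void)
{
	if (++stormy_bank_count == 1)   /* first stormy bank: 1s polling */
		fast_poll = true;
}

static void storm_end(void)
{
	if (--stormy_bank_count == 0)   /* last storm over: default interval */
		fast_poll = false;
}
```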

> > +		mce_timer_kick(true);
> > +}
> > +
> > +static void cmci_storm_end(int bank)
> > +{
> > +	__clear_bit(bank, this_cpu_ptr(mce_poll_banks));
> > +	this_cpu_write(bank_history[bank], 0ull);
> > +	this_cpu_write(bank_storm[bank], false);
> > +	if (this_cpu_dec_return(stormy_bank_count) == 0)
> 
> 	if (!...

OK.

> > +		mce_timer_kick(false);
> > +}
> > +
> > +void mce_intel_storm_tracker(int bank, u64 status)
> 
> Function name needs a verb.

How about: track_cmci_storm(int bank, u64 status) ?

> > +{
> > +	unsigned long now = jiffies, delta;
> > +	unsigned int shift;
> > +	u64 history;
> > +
> > +	delta = now - this_cpu_read(bank_time_stamp[bank]);
> > +	shift = this_cpu_read(bank_storm[bank]) ? 1 : (delta + HZBITS) / HZBITS;
> 
> Do
> 
> 	shift = 1;
> 
> on function entry to simplify this assignment.


Yes. That will be easier to read.

> Also, I'm having trouble with this shift calculation. The laptop here has
> HZ=250. Let's say delta is 2000 jiffies.
> 
> So if this bank wasn't in storm mode, I'd have
> 
> shift = (2000 + (250 / 64)) / (250 / 64) = 513
> 
> ...
> 
> Aaaha, so only when the diff is < 250 in my case, i.e., it polls the
> same bank within a second, only then it would shift the history. I.e.,
> what you mean with that "The 64 bit width corresponds to about one
> second."

Yes, you got it ... but I'll add a comment so code readers don't have
to redo that deduction every time.

> > +	history = (shift < 64) ? this_cpu_read(bank_history[bank]) << shift : 0;
> > +	this_cpu_write(bank_time_stamp[bank], now);
> > +
> > +	if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL)
> > +		history |= 1;
> > +	this_cpu_write(bank_history[bank], history);
> > +
> > +	if (this_cpu_read(bank_storm[bank])) {
> > +		if (history & GENMASK_ULL(STORM_END - 1, 0))
> > +			return;
> 
> Aha, under STORM_END polls you don't declare the storm as being over.
> 
> > +		pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank);
> > +		cmci_set_threshold(bank, cmci_threshold[bank]);
> > +		cmci_storm_end(bank);
> > +	} else {
> > +		if (hweight64(history) < STORM_BEGIN)
> > +			return;
> 
> Aha, so you need STORM_BEGIN errors within the last second to cause the
> storm polling. Ok I guess.

Agreed. This needs comments to explain what is going on.

> So all in all I can't find anything eeewy in this - it would need to
> have a lot more documentation, though, as this is not the most trivial
> thing to stare at.

Thanks again for the review.

-Tony
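
The shift arithmetic discussed above can be sanity-checked outside the kernel. Below is a minimal user-space sketch (an editorial illustration, not code from the patches in this thread) of the history-shift rule, with HZ fixed at 250 to match the laptop example; `history_shift()` is a hypothetical stand-in for the calculation in the tracker function:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical user-space stand-ins for the values discussed above.
 * HZ is fixed at 250 to match the laptop example in the review.
 */
#define HZ	250
#define HZBITS	(HZ / 64)	/* integer division: 3 jiffies per history bit */

/*
 * Mirror of the shift rule: while a bank is in storm mode every poll
 * consumes exactly one history bit; otherwise the shift scales with
 * the jiffies elapsed since the bank was last polled, so the 64-bit
 * history spans roughly one second of wall-clock time.
 */
static unsigned int history_shift(bool in_storm, unsigned long delta)
{
	if (in_storm)
		return 1;
	return (delta + HZBITS) / HZBITS;
}
```

With integer arithmetic HZBITS is 3, so a 2000-jiffy gap yields a shift of 667 rather than the back-of-envelope 513 above (which used real division); either way the shift exceeds 64 and the history is simply discarded.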

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs
  2022-02-17 17:28   ` Luck, Tony
                       ` (2 preceding siblings ...)
  2022-02-18 11:07     ` [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms Borislav Petkov
@ 2022-03-15 18:15     ` Tony Luck
  2022-03-15 18:15       ` [PATCH v2 1/2] x86/mce: Remove old CMCI storm mitigation code Tony Luck
                         ` (2 more replies)
  3 siblings, 3 replies; 18+ messages in thread
From: Tony Luck @ 2022-03-15 18:15 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Smita Koralahalli, hpa, Dave Hansen, Yazen Ghannam, linux-edac,
	linux-kernel, patches, Tony Luck

Two-part motivation:

1) Disabling CMCI globally is an overly big hammer

2) Intel signals some UNCORRECTED errors using CMCI (yes, turns
out that was a poorly chosen name given the later evolution of
the architecture). Since we don't want to miss those, the proposed
storm code just bumps the threshold to (almost) maximum to mitigate,
but not eliminate the storm. Note that the threshold only applies
to corrected errors.

Patch 1 deletes the parts of the old storm code that are no
longer needed.

Patch 2 adds the new per-bank mitigation.

Smita: Unless Boris finds some more stuff for me to fix, this
version will be a better starting point to merge with your changes.

Changes since v1 (based on feedback from Boris)

- Spelling fixes in commit message
- Many more comments explaining what is going on
- Change name of function that does tracking
- Change names for #defines for storm BEGIN/END
- #define for high threshold in decimal, not hex

Tony Luck (2):
  x86/mce: Remove old CMCI storm mitigation code
  x86/mce: Add per-bank CMCI storm mitigation

 arch/x86/kernel/cpu/mce/core.c     |  46 +++---
 arch/x86/kernel/cpu/mce/intel.c    | 241 ++++++++++++++---------------
 arch/x86/kernel/cpu/mce/internal.h |  10 +-
 3 files changed, 141 insertions(+), 156 deletions(-)


base-commit: ffb217a13a2eaf6d5bd974fc83036a53ca69f1e2
-- 
2.35.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 1/2] x86/mce: Remove old CMCI storm mitigation code
  2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
@ 2022-03-15 18:15       ` Tony Luck
  2022-03-15 18:15       ` [PATCH v2 2/2] x86/mce: Add per-bank CMCI storm mitigation Tony Luck
  2022-03-15 18:34       ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Borislav Petkov
  2 siblings, 0 replies; 18+ messages in thread
From: Tony Luck @ 2022-03-15 18:15 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Smita Koralahalli, hpa, Dave Hansen, Yazen Ghannam, linux-edac,
	linux-kernel, patches, Tony Luck

When a "storm" of CMCI is detected this code mitigates by
disabling CMCI interrupt signalling from all of the banks
owned by the CPU that saw the storm.

There are problems with this approach:

1) It is very coarse grained. In all likelihood only one of the
banks was generating the interrupts, but CMCI is disabled for all.
This means Linux may delay seeing and processing errors logged
from other banks.

2) Although CMCI stands for Corrected Machine Check Interrupt, it
is also used to signal when an uncorrected error is logged. This
is a problem because these errors should be handled in a timely
manner.

Delete all this code in preparation for a finer grained solution.

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/mce/core.c     |  20 +---
 arch/x86/kernel/cpu/mce/intel.c    | 145 -----------------------------
 arch/x86/kernel/cpu/mce/internal.h |   6 --
 3 files changed, 1 insertion(+), 170 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 5818b837fd4d..396484141ee1 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1572,13 +1572,6 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
 
-static unsigned long mce_adjust_timer_default(unsigned long interval)
-{
-	return interval;
-}
-
-static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
-
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
 	unsigned long when = jiffies + interval;
@@ -1601,15 +1594,9 @@ static void mce_timer_fn(struct timer_list *t)
 
 	iv = __this_cpu_read(mce_next_interval);
 
-	if (mce_available(this_cpu_ptr(&cpu_info))) {
+	if (mce_available(this_cpu_ptr(&cpu_info)))
 		machine_check_poll(0, this_cpu_ptr(&mce_poll_banks));
 
-		if (mce_intel_cmci_poll()) {
-			iv = mce_adjust_timer(iv);
-			goto done;
-		}
-	}
-
 	/*
 	 * Alert userspace if needed. If we logged an MCE, reduce the polling
 	 * interval, otherwise increase the polling interval.
@@ -1619,7 +1606,6 @@ static void mce_timer_fn(struct timer_list *t)
 	else
 		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
 
-done:
 	__this_cpu_write(mce_next_interval, iv);
 	__start_timer(t, iv);
 }
@@ -1949,7 +1935,6 @@ static void mce_zhaoxin_feature_init(struct cpuinfo_x86 *c)
 
 	intel_init_cmci();
 	intel_init_lmce();
-	mce_adjust_timer = cmci_intel_adjust_timer;
 }
 
 static void mce_zhaoxin_feature_clear(struct cpuinfo_x86 *c)
@@ -1962,7 +1947,6 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
 	switch (c->x86_vendor) {
 	case X86_VENDOR_INTEL:
 		mce_intel_feature_init(c);
-		mce_adjust_timer = cmci_intel_adjust_timer;
 		break;
 
 	case X86_VENDOR_AMD: {
@@ -2621,8 +2605,6 @@ static void mce_reenable_cpu(void)
 
 static int mce_cpu_dead(unsigned int cpu)
 {
-	mce_intel_hcpu_update(cpu);
-
 	/* intentionally ignoring frozen here */
 	if (!cpuhp_tasks_frozen)
 		cmci_rediscover();
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index baafbb37be67..7fa5aafb860a 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -41,15 +41,6 @@
  */
 static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
 
-/*
- * CMCI storm detection backoff counter
- *
- * During storm, we reset this counter to INITIAL_CHECK_INTERVAL in case we've
- * encountered an error. If not, we decrement it by one. We signal the end of
- * the CMCI storm when it reaches 0.
- */
-static DEFINE_PER_CPU(int, cmci_backoff_cnt);
-
 /*
  * cmci_discover_lock protects against parallel discovery attempts
  * which could race against each other.
@@ -57,21 +48,6 @@ static DEFINE_PER_CPU(int, cmci_backoff_cnt);
 static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
 
 #define CMCI_THRESHOLD		1
-#define CMCI_POLL_INTERVAL	(30 * HZ)
-#define CMCI_STORM_INTERVAL	(HZ)
-#define CMCI_STORM_THRESHOLD	15
-
-static DEFINE_PER_CPU(unsigned long, cmci_time_stamp);
-static DEFINE_PER_CPU(unsigned int, cmci_storm_cnt);
-static DEFINE_PER_CPU(unsigned int, cmci_storm_state);
-
-enum {
-	CMCI_STORM_NONE,
-	CMCI_STORM_ACTIVE,
-	CMCI_STORM_SUBSIDED,
-};
-
-static atomic_t cmci_storm_on_cpus;
 
 static int cmci_supported(int *banks)
 {
@@ -127,124 +103,6 @@ static bool lmce_supported(void)
 	return tmp & FEAT_CTL_LMCE_ENABLED;
 }
 
-bool mce_intel_cmci_poll(void)
-{
-	if (__this_cpu_read(cmci_storm_state) == CMCI_STORM_NONE)
-		return false;
-
-	/*
-	 * Reset the counter if we've logged an error in the last poll
-	 * during the storm.
-	 */
-	if (machine_check_poll(0, this_cpu_ptr(&mce_banks_owned)))
-		this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL);
-	else
-		this_cpu_dec(cmci_backoff_cnt);
-
-	return true;
-}
-
-void mce_intel_hcpu_update(unsigned long cpu)
-{
-	if (per_cpu(cmci_storm_state, cpu) == CMCI_STORM_ACTIVE)
-		atomic_dec(&cmci_storm_on_cpus);
-
-	per_cpu(cmci_storm_state, cpu) = CMCI_STORM_NONE;
-}
-
-static void cmci_toggle_interrupt_mode(bool on)
-{
-	unsigned long flags, *owned;
-	int bank;
-	u64 val;
-
-	raw_spin_lock_irqsave(&cmci_discover_lock, flags);
-	owned = this_cpu_ptr(mce_banks_owned);
-	for_each_set_bit(bank, owned, MAX_NR_BANKS) {
-		rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
-
-		if (on)
-			val |= MCI_CTL2_CMCI_EN;
-		else
-			val &= ~MCI_CTL2_CMCI_EN;
-
-		wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
-	}
-	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
-}
-
-unsigned long cmci_intel_adjust_timer(unsigned long interval)
-{
-	if ((this_cpu_read(cmci_backoff_cnt) > 0) &&
-	    (__this_cpu_read(cmci_storm_state) == CMCI_STORM_ACTIVE)) {
-		mce_notify_irq();
-		return CMCI_STORM_INTERVAL;
-	}
-
-	switch (__this_cpu_read(cmci_storm_state)) {
-	case CMCI_STORM_ACTIVE:
-
-		/*
-		 * We switch back to interrupt mode once the poll timer has
-		 * silenced itself. That means no events recorded and the timer
-		 * interval is back to our poll interval.
-		 */
-		__this_cpu_write(cmci_storm_state, CMCI_STORM_SUBSIDED);
-		if (!atomic_sub_return(1, &cmci_storm_on_cpus))
-			pr_notice("CMCI storm subsided: switching to interrupt mode\n");
-
-		fallthrough;
-
-	case CMCI_STORM_SUBSIDED:
-		/*
-		 * We wait for all CPUs to go back to SUBSIDED state. When that
-		 * happens we switch back to interrupt mode.
-		 */
-		if (!atomic_read(&cmci_storm_on_cpus)) {
-			__this_cpu_write(cmci_storm_state, CMCI_STORM_NONE);
-			cmci_toggle_interrupt_mode(true);
-			cmci_recheck();
-		}
-		return CMCI_POLL_INTERVAL;
-	default:
-
-		/* We have shiny weather. Let the poll do whatever it thinks. */
-		return interval;
-	}
-}
-
-static bool cmci_storm_detect(void)
-{
-	unsigned int cnt = __this_cpu_read(cmci_storm_cnt);
-	unsigned long ts = __this_cpu_read(cmci_time_stamp);
-	unsigned long now = jiffies;
-	int r;
-
-	if (__this_cpu_read(cmci_storm_state) != CMCI_STORM_NONE)
-		return true;
-
-	if (time_before_eq(now, ts + CMCI_STORM_INTERVAL)) {
-		cnt++;
-	} else {
-		cnt = 1;
-		__this_cpu_write(cmci_time_stamp, now);
-	}
-	__this_cpu_write(cmci_storm_cnt, cnt);
-
-	if (cnt <= CMCI_STORM_THRESHOLD)
-		return false;
-
-	cmci_toggle_interrupt_mode(false);
-	__this_cpu_write(cmci_storm_state, CMCI_STORM_ACTIVE);
-	r = atomic_add_return(1, &cmci_storm_on_cpus);
-	mce_timer_kick(CMCI_STORM_INTERVAL);
-	this_cpu_write(cmci_backoff_cnt, INITIAL_CHECK_INTERVAL);
-
-	if (r == 1)
-		pr_notice("CMCI storm detected: switching to poll mode\n");
-	return true;
-}
-
 /*
  * The interrupt handler. This is called on every event.
  * Just call the poller directly to log any events.
@@ -253,9 +111,6 @@ static bool cmci_storm_detect(void)
  */
 static void intel_threshold_interrupt(void)
 {
-	if (cmci_storm_detect())
-		return;
-
 	machine_check_poll(MCP_TIMESTAMP, this_cpu_ptr(&mce_banks_owned));
 }
 
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index 52c633950b38..dfbd0bca67a0 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,18 +41,12 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
-unsigned long cmci_intel_adjust_timer(unsigned long interval);
-bool mce_intel_cmci_poll(void);
-void mce_intel_hcpu_update(unsigned long cpu);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
-# define cmci_intel_adjust_timer mce_adjust_timer_default
-static inline bool mce_intel_cmci_poll(void) { return false; }
-static inline void mce_intel_hcpu_update(unsigned long cpu) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 2/2] x86/mce: Add per-bank CMCI storm mitigation
  2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
  2022-03-15 18:15       ` [PATCH v2 1/2] x86/mce: Remove old CMCI storm mitigation code Tony Luck
@ 2022-03-15 18:15       ` Tony Luck
  2022-03-15 18:34       ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Borislav Petkov
  2 siblings, 0 replies; 18+ messages in thread
From: Tony Luck @ 2022-03-15 18:15 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Smita Koralahalli, hpa, Dave Hansen, Yazen Ghannam, linux-edac,
	linux-kernel, patches, Tony Luck

Add a hook into machine_check_poll() to keep track of per-CPU, per-bank
corrected error logs.

Maintain a bitmap history for each bank showing whether the bank
logged a corrected error or not each time it is polled.

In normal operation the interval between polls of each bank
determines how far to shift the history. The 64-bit width corresponds
to about one second.

When a storm is observed, the rate of interrupts is reduced by setting
a large threshold value for this bank in IA32_MCi_CTL2. This bank is
added to the bitmap of banks for this CPU to poll. The polling rate
is increased to once per second.

During a storm, each bit in the history indicates the status of the
bank each time it is polled. Thus the history covers just over a minute.

Declare a storm for that bank if the number of corrected errors
seen in that history reaches some threshold (5 in this RFC code for
ease of testing; this will likely move to 15 for compatibility with
the previous storm detection).

A storm on a bank ends if enough consecutive polls of the bank show
no corrected errors (currently 30, may also change). That resets the
threshold in IA32_MCi_CTL2 back to 1, removes the bank from the bitmap
for polling, and changes the polling rate back to the default.

If a CPU with banks in storm mode is taken offline, the new CPU
that inherits ownership of those banks takes over management of
storm(s) in the inherited bank(s).
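
The begin/end decisions described above can be sketched in user space as follows (a hedged illustration using the patch's constants, not code from the patch itself; `popcount64()` is a hypothetical stand-in for the kernel's hweight64(), and GENMASK_ULL() is reproduced for 0 <= l <= h <= 63):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Thresholds as defined in the patch */
#define STORM_BEGIN_THRESHOLD		5
#define STORM_END_POLL_THRESHOLD	30

/* GENMASK_ULL(h, l): bits l..h set, as in the kernel (0 <= l <= h <= 63) */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

/* Portable stand-in for the kernel's hweight64() */
static int popcount64(uint64_t v)
{
	int n = 0;

	for (; v; v &= v - 1)
		n++;
	return n;
}

/* A storm begins once enough corrected-error bits accumulate
 * anywhere in the (roughly one second wide) history window. */
static bool storm_begins(uint64_t history)
{
	return popcount64(history) >= STORM_BEGIN_THRESHOLD;
}

/* A storm ends after STORM_END_POLL_THRESHOLD consecutive error-free
 * polls, i.e. when the low 30 bits of the history are all clear. */
static bool storm_ends(uint64_t history)
{
	return !(history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0));
}
```

Note the asymmetry: starting a storm counts set bits anywhere in the 64-bit window, while ending one only requires the 30 most recent polls to be clean; older bits beyond that range are ignored.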

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/mce/core.c     |  26 +++--
 arch/x86/kernel/cpu/mce/intel.c    | 146 ++++++++++++++++++++++++++++-
 arch/x86/kernel/cpu/mce/internal.h |   4 +-
 3 files changed, 165 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 396484141ee1..6e62140eed97 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -726,6 +726,8 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
 		barrier();
 		m.status = mce_rdmsrl(mca_msr_reg(i, MCA_STATUS));
 
+		track_cmci_storm(i, m.status);
+
 		/* If this entry is not valid, ignore it */
 		if (!(m.status & MCI_STATUS_VAL))
 			continue;
@@ -1571,6 +1573,7 @@ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
 
 static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
 static DEFINE_PER_CPU(struct timer_list, mce_timer);
+static DEFINE_PER_CPU(bool, storm_poll_mode);
 
 static void __start_timer(struct timer_list *t, unsigned long interval)
 {
@@ -1606,22 +1609,29 @@ static void mce_timer_fn(struct timer_list *t)
 	else
 		iv = min(iv * 2, round_jiffies_relative(check_interval * HZ));
 
-	__this_cpu_write(mce_next_interval, iv);
-	__start_timer(t, iv);
+	if (__this_cpu_read(storm_poll_mode)) {
+		__start_timer(t, HZ);
+	} else {
+		__this_cpu_write(mce_next_interval, iv);
+		__start_timer(t, iv);
+	}
 }
 
 /*
- * Ensure that the timer is firing in @interval from now.
+ * When a storm starts on any bank on this CPU, switch to polling
+ * once per second. When the storm ends, revert to the default
+ * polling interval.
  */
-void mce_timer_kick(unsigned long interval)
+void mce_timer_kick(bool storm)
 {
 	struct timer_list *t = this_cpu_ptr(&mce_timer);
-	unsigned long iv = __this_cpu_read(mce_next_interval);
 
-	__start_timer(t, interval);
+	__this_cpu_write(storm_poll_mode, storm);
 
-	if (interval < iv)
-		__this_cpu_write(mce_next_interval, interval);
+	if (storm)
+		__start_timer(t, HZ);
+	else
+		__this_cpu_write(mce_next_interval, check_interval * HZ);
 }
 
 /* Must not be called in IRQ context where del_timer_sync() can deadlock */
diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
index 7fa5aafb860a..0856f463e863 100644
--- a/arch/x86/kernel/cpu/mce/intel.c
+++ b/arch/x86/kernel/cpu/mce/intel.c
@@ -47,8 +47,47 @@ static DEFINE_PER_CPU(mce_banks_t, mce_banks_owned);
  */
 static DEFINE_RAW_SPINLOCK(cmci_discover_lock);
 
+/*
+ * CMCI storm tracking state
+ *	stormy_bank_count: per-cpu count of MC banks in storm state
+ *	bank_history: bitmask tracking of corrected errors seen in each bank
+ *	bank_time_stamp: last time (in jiffies) that each bank was polled
+ *	cmci_threshold: MCi_CTL2 threshold for each bank when there is no storm
+ */
+static DEFINE_PER_CPU(int, stormy_bank_count);
+static DEFINE_PER_CPU(u64 [MAX_NR_BANKS], bank_history);
+static DEFINE_PER_CPU(bool [MAX_NR_BANKS], bank_storm);
+static DEFINE_PER_CPU(unsigned long [MAX_NR_BANKS], bank_time_stamp);
+static int cmci_threshold[MAX_NR_BANKS];
+
+/* Linux non-storm CMCI threshold (may be overridden by BIOS) */
 #define CMCI_THRESHOLD		1
 
+/*
+ * High threshold to limit CMCI rate during storms. Max supported is
+ * 0x7FFF. Use this slightly smaller value so it has a distinctive
+ * signature when someone asks "Why am I not seeing all corrected errors?"
+ */
+#define CMCI_STORM_THRESHOLD	32749
+
+/*
+ * How many errors within the history buffer mark the start of a storm
+ */
+#define STORM_BEGIN_THRESHOLD	5
+
+/*
+ * How many polls of machine check bank without an error before declaring
+ * the storm is over
+ */
+#define STORM_END_POLL_THRESHOLD	30
+
+/*
+ * When there is no storm each "bit" in the history represents
+ * this many jiffies. When there is a storm every poll() takes
+ * one history bit.
+ */
+#define HZBITS (HZ / 64)
+
 static int cmci_supported(int *banks)
 {
 	u64 cap;
@@ -103,6 +142,93 @@ static bool lmce_supported(void)
 	return tmp & FEAT_CTL_LMCE_ENABLED;
 }
 
+/*
+ * Set a new CMCI threshold value. Preserve the state of the
+ * MCI_CTL2_CMCI_EN bit in case this happens during a
+ * cmci_rediscover() operation.
+ */
+static void cmci_set_threshold(int bank, int thresh)
+{
+	unsigned long flags;
+	u64 val;
+
+	raw_spin_lock_irqsave(&cmci_discover_lock, flags);
+	rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
+	val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
+	wrmsrl(MSR_IA32_MCx_CTL2(bank), val | thresh);
+	raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
+}
+
+static void cmci_storm_begin(int bank)
+{
+	__set_bit(bank, this_cpu_ptr(mce_poll_banks));
+	this_cpu_write(bank_storm[bank], true);
+
+	/*
+	 * If this is the first bank on this CPU to enter storm mode
+	 * start polling
+	 */
+	if (this_cpu_inc_return(stormy_bank_count) == 1)
+		mce_timer_kick(true);
+}
+
+static void cmci_storm_end(int bank)
+{
+	__clear_bit(bank, this_cpu_ptr(mce_poll_banks));
+	this_cpu_write(bank_history[bank], 0ull);
+	this_cpu_write(bank_storm[bank], false);
+
+	/* If no banks left in storm mode, stop polling */
+	if (!this_cpu_dec_return(stormy_bank_count))
+		mce_timer_kick(false);
+}
+
+void track_cmci_storm(int bank, u64 status)
+{
+	unsigned long now = jiffies, delta;
+	unsigned int shift = 1;
+	u64 history;
+
+	/*
+	 * When a bank is in storm mode, the history mask covers about
+	 * one second of elapsed time. Check how long it has been since
+	 * this bank was last polled, and compute a shift value to update
+	 * the history bitmask.  When not in storm mode, each consecutive
+	 * poll of the bank is logged in the next history bit, so shift
+	 * is kept at "1".
+	 */
+	if (this_cpu_read(bank_storm[bank])) {
+		delta = now - this_cpu_read(bank_time_stamp[bank]);
+		shift = (delta + HZBITS) / HZBITS;
+	}
+
+	/* If it has been a long time since the last poll, clear the history */
+	if (shift >= 64)
+		history = 0;
+	else
+		history = this_cpu_read(bank_history[bank]) << shift;
+	this_cpu_write(bank_time_stamp[bank], now);
+
+	/* History keeps track of corrected errors. VAL=1 && UC=0 */
+	if ((status & (MCI_STATUS_VAL | MCI_STATUS_UC)) == MCI_STATUS_VAL)
+		history |= 1;
+	this_cpu_write(bank_history[bank], history);
+
+	if (this_cpu_read(bank_storm[bank])) {
+		if (history & GENMASK_ULL(STORM_END_POLL_THRESHOLD - 1, 0))
+			return;
+		pr_notice("CPU%d BANK%d CMCI storm subsided\n", smp_processor_id(), bank);
+		cmci_set_threshold(bank, cmci_threshold[bank]);
+		cmci_storm_end(bank);
+	} else {
+		if (hweight64(history) < STORM_BEGIN_THRESHOLD)
+			return;
+		pr_notice("CPU%d BANK%d CMCI storm detected\n", smp_processor_id(), bank);
+		cmci_set_threshold(bank, CMCI_STORM_THRESHOLD);
+		cmci_storm_begin(bank);
+	}
+}
+
 /*
  * The interrupt handler. This is called on every event.
  * Just call the poller directly to log any events.
@@ -147,6 +273,9 @@ static void cmci_discover(int banks)
 			continue;
 		}
 
+		if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
+			goto storm;
+
 		if (!mca_cfg.bios_cmci_threshold) {
 			val &= ~MCI_CTL2_CMCI_THRESHOLD_MASK;
 			val |= CMCI_THRESHOLD;
@@ -159,7 +288,7 @@ static void cmci_discover(int banks)
 			bios_zero_thresh = 1;
 			val |= CMCI_THRESHOLD;
 		}
-
+storm:
 		val |= MCI_CTL2_CMCI_EN;
 		wrmsrl(MSR_IA32_MCx_CTL2(i), val);
 		rdmsrl(MSR_IA32_MCx_CTL2(i), val);
@@ -167,7 +296,14 @@ static void cmci_discover(int banks)
 		/* Did the enable bit stick? -- the bank supports CMCI */
 		if (val & MCI_CTL2_CMCI_EN) {
 			set_bit(i, owned);
-			__clear_bit(i, this_cpu_ptr(mce_poll_banks));
+			if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD) {
+				pr_notice("CPU%d BANK%d CMCI inherited storm\n", smp_processor_id(), i);
+				this_cpu_write(bank_history[i], ~0ull);
+				this_cpu_write(bank_time_stamp[i], jiffies);
+				cmci_storm_begin(i);
+			} else {
+				__clear_bit(i, this_cpu_ptr(mce_poll_banks));
+			}
 			/*
 			 * We are able to set thresholds for some banks that
 			 * had a threshold of 0. This means the BIOS has not
@@ -177,6 +313,10 @@ static void cmci_discover(int banks)
 			if (mca_cfg.bios_cmci_threshold && bios_zero_thresh &&
 					(val & MCI_CTL2_CMCI_THRESHOLD_MASK))
 				bios_wrong_thresh = 1;
+
+			/* Save default threshold for each bank */
+			if (cmci_threshold[i] == 0)
+				cmci_threshold[i] = val & MCI_CTL2_CMCI_THRESHOLD_MASK;
 		} else {
 			WARN_ON(!test_bit(i, this_cpu_ptr(mce_poll_banks)));
 		}
@@ -218,6 +358,8 @@ static void __cmci_disable_bank(int bank)
 	val &= ~MCI_CTL2_CMCI_EN;
 	wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
 	__clear_bit(bank, this_cpu_ptr(mce_banks_owned));
+	if ((val & MCI_CTL2_CMCI_THRESHOLD_MASK) == CMCI_STORM_THRESHOLD)
+		cmci_storm_end(bank);
 }
 
 /*
diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
index dfbd0bca67a0..4822fd0ab477 100644
--- a/arch/x86/kernel/cpu/mce/internal.h
+++ b/arch/x86/kernel/cpu/mce/internal.h
@@ -41,12 +41,14 @@ struct dentry *mce_get_debugfs_dir(void);
 extern mce_banks_t mce_banks_ce_disabled;
 
 #ifdef CONFIG_X86_MCE_INTEL
+void track_cmci_storm(int bank, u64 status);
 void cmci_disable_bank(int bank);
 void intel_init_cmci(void);
 void intel_init_lmce(void);
 void intel_clear_lmce(void);
 bool intel_filter_mce(struct mce *m);
 #else
+static inline void track_cmci_storm(int bank, u64 status) { }
 static inline void cmci_disable_bank(int bank) { }
 static inline void intel_init_cmci(void) { }
 static inline void intel_init_lmce(void) { }
@@ -54,7 +56,7 @@ static inline void intel_clear_lmce(void) { }
 static inline bool intel_filter_mce(struct mce *m) { return false; }
 #endif
 
-void mce_timer_kick(unsigned long interval);
+void mce_timer_kick(bool storm);
 
 #ifdef CONFIG_ACPI_APEI
 int apei_write_mce(struct mce *m);
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs
  2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
  2022-03-15 18:15       ` [PATCH v2 1/2] x86/mce: Remove old CMCI storm mitigation code Tony Luck
  2022-03-15 18:15       ` [PATCH v2 2/2] x86/mce: Add per-bank CMCI storm mitigation Tony Luck
@ 2022-03-15 18:34       ` Borislav Petkov
  2022-03-15 21:46         ` Koralahalli Channabasappa, Smita
  2 siblings, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2022-03-15 18:34 UTC (permalink / raw)
  To: Tony Luck
  Cc: Smita Koralahalli, hpa, Dave Hansen, Yazen Ghannam, linux-edac,
	linux-kernel, patches

On Tue, Mar 15, 2022 at 11:15:07AM -0700, Tony Luck wrote:
> Smita: Unless Boris finds some more stuff for me to fix, this
> version will be a better starting point to merge with your changes.

Right, I'm wondering if AMD can use the same scheme so that abstracting
out the hw-specific accesses (MSR writes, etc) would be enough...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs
  2022-03-15 18:34       ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Borislav Petkov
@ 2022-03-15 21:46         ` Koralahalli Channabasappa, Smita
  0 siblings, 0 replies; 18+ messages in thread
From: Koralahalli Channabasappa, Smita @ 2022-03-15 21:46 UTC (permalink / raw)
  To: Borislav Petkov, Tony Luck
  Cc: Smita Koralahalli, hpa, Dave Hansen, Yazen Ghannam, linux-edac,
	linux-kernel, patches

On 3/15/22 1:34 PM, Borislav Petkov wrote:

> On Tue, Mar 15, 2022 at 11:15:07AM -0700, Tony Luck wrote:
>> Smita: Unless Boris finds some more stuff for me to fix, this
>> version will be a better starting point to merge with your changes.
> Right, I'm wondering if AMD can use the same scheme so that abstracting
> out the hw-specific accesses (MSR writes, etc) would be enough...

Thanks Tony.

Agreed. Most of this would apply for AMD's threshold interrupts too.

Will come up with a merged patch and move the storm handling to
mce/core.c and just keep the hw-specific accesses separate for
Intel and AMD in their respective files.

Thanks
Smita.



^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2022-03-15 21:46 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-17 14:16 [RFC PATCH 0/2] Handle AMD threshold interrupt storms Smita Koralahalli
2022-02-17 14:16 ` [RFC PATCH 1/2] x86/mce: " Smita Koralahalli
2022-02-17 17:28   ` Luck, Tony
2022-02-17 17:35     ` [PATCH 1/2] x86/mce: Remove old CMCI storm mitigation code Luck, Tony
2022-02-24 15:14       ` Borislav Petkov
2022-02-17 17:36     ` [PATCH 2/2] x86/mce: Add per-bank CMCI storm mitigation Luck, Tony
2022-02-23 22:11       ` Koralahalli Channabasappa, Smita
2022-03-07 13:31       ` Borislav Petkov
2022-03-07 20:04         ` Luck, Tony
2022-02-18 11:07     ` [RFC PATCH 1/2] x86/mce: Handle AMD threshold interrupt storms Borislav Petkov
2022-02-23 22:22       ` Koralahalli Channabasappa, Smita
2022-02-23 23:03         ` Luck, Tony
2022-03-15 18:15     ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Tony Luck
2022-03-15 18:15       ` [PATCH v2 1/2] x86/mce: Remove old CMCI storm mitigation code Tony Luck
2022-03-15 18:15       ` [PATCH v2 2/2] x86/mce: Add per-bank CMCI storm mitigation Tony Luck
2022-03-15 18:34       ` [PATCH v2 0/2] New CMCI storm mitigation for Intel CPUs Borislav Petkov
2022-03-15 21:46         ` Koralahalli Channabasappa, Smita
2022-02-17 14:16 ` [RFC PATCH 2/2] x86/mce: Simplify code in log_and_reset_block() Smita Koralahalli
