linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
@ 2020-10-02 12:22 Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 1/7] RAS/CEC: Replace the macro PFN with ELEM_NO Shiju Jose
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

On ARM64 hardware platforms, for example our Kunpeng platforms, CPU L1/L2
cache corrected errors are reported in the ARM processor error section.
It is not unlikely that such CPU corrected errors are reported too often,
in which case the affected CPU core may need to be isolated to prevent
more serious faults. This is important for early fault prediction.

Extend the existing RAS CEC to check the error count against a threshold
over a short time period. The decay interval is divided into a number of
time slots for these elements. The CEC calculates the average error count
for each element at the end of each decay interval, and that average is
then subtracted from the total count in each of the following time slots
within the decay interval. The work function for the decay interval is
scheduled with a reduced period = decay interval / number of time slots.
When a new CE count for a CPU is added, the CEC tries to offline the
element once the sum of the most recent CE counts exceeds the CE
threshold value. More implementation details are in the source file.
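
For example (numbers for illustration only): with the default 24 hour decay
interval and 10 time slots, the work function runs every 2.4 hours. A CPU
that accumulated 50 corrected errors over the previous interval gets an
average of 5 stored, and 5 is then subtracted from its running count in
each of the following slots, so only an error rate well above that baseline
can push the count over the threshold within a short period.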

Add collection of CPU correctable errors, for example ARM64 L1/L2 cache
errors, and isolation of the CPU core when the error count within the
short time interval exceeds the threshold value.

Open questions based on the feedback from Boris:

1. ARM processor error types are cache/TLB/bus errors.
   [Reference: N2.4.4.1 ARM Processor Error Information, UEFI Spec v2.8]
   Should any of the above error types be excluded from the error
   collection and CPU core isolation?

2. If disabling the entire CPU core is not acceptable, please suggest a
   method to disable the L1 and L2 cache on an ARM64 core.

Shiju Jose (7):
  RAS/CEC: Replace the macro PFN with ELEM_NO
  RAS/CEC: Replace pfns_poisoned with elems_poisoned
  RAS/CEC: Move X86 MCE specific code under CONFIG_X86_MCE
  RAS/CEC: Modify cec_mod_work() for common use
  RAS/CEC: Add support for errors count check on short time period
  RAS/CEC: Add CPU Correctable Error Collector to isolate an erroneous
    CPU core
  ACPI / APEI: Add reporting ARM64 CPU correctable errors to the CEC

 arch/arm64/ras/Kconfig   |  17 ++
 drivers/acpi/apei/ghes.c |  36 +++-
 drivers/ras/Kconfig      |   1 +
 drivers/ras/cec.c        | 399 +++++++++++++++++++++++++++++++++++----
 include/linux/ras.h      |   9 +
 5 files changed, 418 insertions(+), 44 deletions(-)
 create mode 100644 arch/arm64/ras/Kconfig

-- 
2.17.1




* [RFC PATCH 1/7] RAS/CEC: Replace the macro PFN with ELEM_NO
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 2/7] RAS/CEC: Replace pfns_poisoned with elems_poisoned Shiju Jose
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

Replace the macro PFN() with a generic ELEM_NO() macro, which takes the
element shift as a parameter, for common use with other error sources.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/ras/cec.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index 569d9ad2c594..22d11c66c266 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -86,7 +86,7 @@
  * u64: [ 63 ... 12 | DECAY_BITS | COUNT_BITS ]
  */
 
-#define PFN(e)			((e) >> PAGE_SHIFT)
+#define ELEM_NO(e, shift)	((e) >> (shift))
 #define DECAY(e)		(((e) >> COUNT_BITS) & DECAY_MASK)
 #define COUNT(e)		((unsigned int)(e) & COUNT_MASK)
 #define FULL_COUNT(e)		((e) & (PAGE_SIZE - 1))
@@ -113,6 +113,10 @@ static struct ce_array {
 					 * Times we did spring cleaning.
 					 */
 
+	u8 id_shift;			/*
+					 * shift for element id.
+					 */
+
 	union {
 		struct {
 			__u32	disabled : 1,	/* cmdline disabled */
@@ -191,7 +195,7 @@ static int __find_elem(struct ce_array *ca, u64 pfn, unsigned int *to)
 	while (min <= max) {
 		int i = (min + max) >> 1;
 
-		this_pfn = PFN(ca->array[i]);
+		this_pfn = ELEM_NO(ca->array[i], ca->id_shift);
 
 		if (this_pfn < pfn)
 			min = i + 1;
@@ -258,7 +262,7 @@ static u64 del_lru_elem_unlocked(struct ce_array *ca)
 
 	del_elem(ca, min_idx);
 
-	return PFN(ca->array[min_idx]);
+	return ELEM_NO(ca->array[min_idx], ca->id_shift);
 }
 
 /*
@@ -287,7 +291,7 @@ static bool sanity_check(struct ce_array *ca)
 	int i;
 
 	for (i = 0; i < ca->n; i++) {
-		u64 this = PFN(ca->array[i]);
+		u64 this = ELEM_NO(ca->array[i], ca->id_shift);
 
 		if (WARN(prev > this, "prev: 0x%016llx <-> this: 0x%016llx\n", prev, this))
 			ret = true;
@@ -300,7 +304,7 @@ static bool sanity_check(struct ce_array *ca)
 
 	pr_info("Sanity check dump:\n{ n: %d\n", ca->n);
 	for (i = 0; i < ca->n; i++) {
-		u64 this = PFN(ca->array[i]);
+		u64 this = ELEM_NO(ca->array[i], ca->id_shift);
 
 		pr_info(" %03d: [%016llx|%03llx]\n", i, this, FULL_COUNT(ca->array[i]));
 	}
@@ -444,7 +448,7 @@ static int array_dump(struct seq_file *m, void *v)
 
 	seq_printf(m, "{ n: %d\n", ca->n);
 	for (i = 0; i < ca->n; i++) {
-		u64 this = PFN(ca->array[i]);
+		u64 this = ELEM_NO(ca->array[i], ca->id_shift);
 
 		seq_printf(m, " %3d: [%016llx|%s|%03llx]\n",
 			   i, this, bins[DECAY(ca->array[i])], COUNT(ca->array[i]));
@@ -569,6 +573,7 @@ static void __init cec_init(void)
 		return;
 	}
 
+	ce_arr.id_shift = PAGE_SHIFT;
 	INIT_DELAYED_WORK(&cec_work, cec_work_fn);
 	schedule_delayed_work(&cec_work, CEC_DECAY_DEFAULT_INTERVAL);
 
-- 
2.17.1




* [RFC PATCH 2/7] RAS/CEC: Replace pfns_poisoned with elems_poisoned
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 1/7] RAS/CEC: Replace the macro PFN with ELEM_NO Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 3/7] RAS/CEC: Move X86 MCE specific code under CONFIG_X86_MCE Shiju Jose
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

Rename the variable pfns_poisoned to elems_poisoned
for common use with other error sources.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/ras/cec.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index 22d11c66c266..f20da1103f27 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -100,8 +100,8 @@ static struct ce_array {
 					 * since the last spring cleaning.
 					 */
 
-	u64 pfns_poisoned;		/*
-					 * number of PFNs which got poisoned.
+	u64 elems_poisoned;		/*
+					 * number of elements which got poisoned.
 					 */
 
 	u64 ces_entered;		/*
@@ -362,7 +362,7 @@ static int cec_add_elem(u64 pfn)
 			/* We have reached max count for this page, soft-offline it. */
 			pr_err("Soft-offlining pfn: 0x%llx\n", pfn);
 			memory_failure_queue(pfn, MF_SOFT_OFFLINE);
-			ca->pfns_poisoned++;
+			ca->elems_poisoned++;
 		}
 
 		del_elem(ca, to);
@@ -457,7 +457,7 @@ static int array_dump(struct seq_file *m, void *v)
 	seq_printf(m, "}\n");
 
 	seq_printf(m, "Stats:\nCEs: %llu\nofflined pages: %llu\n",
-		   ca->ces_entered, ca->pfns_poisoned);
+		   ca->ces_entered, ca->elems_poisoned);
 
 	seq_printf(m, "Flags: 0x%x\n", ca->flags);
 
-- 
2.17.1




* [RFC PATCH 3/7] RAS/CEC: Move X86 MCE specific code under CONFIG_X86_MCE
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 1/7] RAS/CEC: Replace the macro PFN with ELEM_NO Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 2/7] RAS/CEC: Replace pfns_poisoned with elems_poisoned Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 4/7] RAS/CEC: Modify cec_mod_work() for common use Shiju Jose
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

The CEC may need to support other architectures such as ARM64.
Move the x86 MCE specific code under CONFIG_X86_MCE so that the
collector can be built for other architectures.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/ras/cec.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index f20da1103f27..803e641d8e5c 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -8,7 +8,9 @@
 #include <linux/kernel.h>
 #include <linux/workqueue.h>
 
+#if defined(CONFIG_X86_MCE)
 #include <asm/mce.h>
+#endif
 
 #include "debugfs.h"
 
@@ -511,6 +513,7 @@ static int __init create_debugfs_nodes(void)
 	if (!IS_ENABLED(CONFIG_RAS_CEC_DEBUG))
 		return 0;
 
+#if defined(CONFIG_X86_MCE)
 	pfn = debugfs_create_file("pfn", S_IRUSR | S_IWUSR, d, &dfs_pfn, &pfn_ops);
 	if (!pfn) {
 		pr_warn("Error creating pfn debugfs node!\n");
@@ -522,6 +525,7 @@ static int __init create_debugfs_nodes(void)
 		pr_warn("Error creating array debugfs node!\n");
 		goto err;
 	}
+#endif
 
 	return 0;
 
@@ -531,6 +535,7 @@ static int __init create_debugfs_nodes(void)
 	return 1;
 }
 
+#if defined(CONFIG_X86_MCE)
 static int cec_notifier(struct notifier_block *nb, unsigned long val,
 			void *data)
 {
@@ -556,28 +561,33 @@ static struct notifier_block cec_nb = {
 	.notifier_call	= cec_notifier,
 	.priority	= MCE_PRIO_CEC,
 };
+#endif
 
 static void __init cec_init(void)
 {
 	if (ce_arr.disabled)
 		return;
 
+#if defined(CONFIG_X86_MCE)
 	ce_arr.array = (void *)get_zeroed_page(GFP_KERNEL);
 	if (!ce_arr.array) {
 		pr_err("Error allocating CE array page!\n");
 		return;
 	}
+#endif
 
 	if (create_debugfs_nodes()) {
 		free_page((unsigned long)ce_arr.array);
 		return;
 	}
 
+#if defined(CONFIG_X86_MCE)
 	ce_arr.id_shift = PAGE_SHIFT;
 	INIT_DELAYED_WORK(&cec_work, cec_work_fn);
 	schedule_delayed_work(&cec_work, CEC_DECAY_DEFAULT_INTERVAL);
 
 	mce_register_decode_chain(&cec_nb);
+#endif
 
 	pr_info("Correctable Errors collector initialized.\n");
 }
-- 
2.17.1




* [RFC PATCH 4/7] RAS/CEC: Modify cec_mod_work() for common use
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
                   ` (2 preceding siblings ...)
  2020-10-02 12:22 ` [RFC PATCH 3/7] RAS/CEC: Move X86 MCE specific code under CONFIG_X86_MCE Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 5/7] RAS/CEC: Add support for errors count check on short time period Shiju Jose
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

Modify the function cec_mod_work() to take the delayed work as a
parameter, for common use with other error sources.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/ras/cec.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index 803e641d8e5c..f869e7a270b8 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -167,12 +167,12 @@ static void do_spring_cleaning(struct ce_array *ca)
 /*
  * @interval in seconds
  */
-static void cec_mod_work(unsigned long interval)
+static void cec_mod_work(struct delayed_work *dwork, unsigned long interval)
 {
 	unsigned long iv;
 
 	iv = interval * HZ;
-	mod_delayed_work(system_wq, &cec_work, round_jiffies(iv));
+	mod_delayed_work(system_wq, dwork, round_jiffies(iv));
 }
 
 static void cec_work_fn(struct work_struct *work)
@@ -181,7 +181,7 @@ static void cec_work_fn(struct work_struct *work)
 	do_spring_cleaning(&ce_arr);
 	mutex_unlock(&ce_mutex);
 
-	cec_mod_work(decay_interval);
+	cec_mod_work(&cec_work, decay_interval);
 }
 
 /*
@@ -420,7 +420,7 @@ static int decay_interval_set(void *data, u64 val)
 	*(u64 *)data   = val;
 	decay_interval = val;
 
-	cec_mod_work(decay_interval);
+	cec_mod_work(&cec_work, decay_interval);
 
 	return 0;
 }
-- 
2.17.1




* [RFC PATCH 5/7] RAS/CEC: Add support for errors count check on short time period
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
                   ` (3 preceding siblings ...)
  2020-10-02 12:22 ` [RFC PATCH 4/7] RAS/CEC: Modify cec_mod_work() for common use Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 6/7] RAS/CEC: Add CPU Correctable Error Collector to isolate an erroneous CPU core Shiju Jose
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

Some types of elements, for example a CPU core, should be isolated
when corrected errors are reported too often. This is used for early
fault prediction and helps to prevent serious faults by taking
corrective action.
Modify the CEC to check the error count against a threshold over a
short time period. Implementation details are in the source file.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/ras/cec.c | 125 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 109 insertions(+), 16 deletions(-)

diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index f869e7a270b8..ca52917d514c 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -119,6 +119,23 @@ static struct ce_array {
 					 * shift for element id.
 					 */
 
+	struct delayed_work work;	/*
+					 * delayed work.
+					 */
+
+	bool short_period;		/* Indicates threshold check for the error count
+					 * over short time period.
+					 */
+
+	u8 time_slot;			/*
+					 * time slot's number within the decay interval.
+					 */
+
+	union {
+		struct mutex	mutex;
+		spinlock_t	spin_lock;
+	};
+
 	union {
 		struct {
 			__u32	disabled : 1,	/* cmdline disabled */
@@ -128,7 +145,6 @@ static struct ce_array {
 	};
 } ce_arr;
 
-static DEFINE_MUTEX(ce_mutex);
 static u64 dfs_pfn;
 
 /* Amount of errors after which we offline */
@@ -138,9 +154,35 @@ static u64 action_threshold = COUNT_MASK;
 #define CEC_DECAY_DEFAULT_INTERVAL	24 * 60 * 60	/* 24 hrs */
 #define CEC_DECAY_MIN_INTERVAL		 1 * 60 * 60	/* 1h */
 #define CEC_DECAY_MAX_INTERVAL	   30 *	24 * 60 * 60	/* one month */
-static struct delayed_work cec_work;
 static u64 decay_interval = CEC_DECAY_DEFAULT_INTERVAL;
 
+/* Definitions for elements (for example a CPU) for which the
+ * error count over a short time period is checked against a threshold.
+ *
+ * An element such as a CPU core may need to be isolated when a large number
+ * of correctable errors are reported on that element too often, i.e. when
+ * the CE count exceeds the threshold value within a short time period.
+ *
+ * The decay interval is divided into a number of time slots. The CE collector
+ * calculates the average error count at the end of each decay interval. Then
+ * the average count is subtracted from the total count in each following
+ * time slot. The work function for the decay interval is scheduled with a
+ * reduced period = decay interval / number of time slots. When a new
+ * CE count for a CPU is added, the element is offlined when the sum of
+ * the most recent CE counts exceeds the CE threshold value.
+ */
+
+/*
+ * u64: [ 63 ELEM ID 23 | ELEM_STATUS_BIT 22 | 21 AVG_COUNT_BITS 12 | 11 DECAY_BITS 10 | 9 COUNT_BITS 0]
+ */
+
+/* Number of time slots in the decay interval */
+#define RAS_CEC_NUM_TIME_SLOTS	10
+
+#define AVG_COUNT_SHIFT	(DECAY_BITS + COUNT_BITS)
+#define ELEM_STATUS_BIT	BIT(22)	/* Indicates an element offlined by CEC */
+#define ELEM_ID_SHIFT	(1 + AVG_COUNT_SHIFT + COUNT_BITS)
+
 /*
  * Decrement decay value. We're using DECAY_BITS bits to denote decay of an
  * element in the array. On insertion and any access, it gets reset to max.
@@ -177,11 +219,62 @@ static void cec_mod_work(struct delayed_work *dwork, unsigned long interval)
 
 static void cec_work_fn(struct work_struct *work)
 {
-	mutex_lock(&ce_mutex);
-	do_spring_cleaning(&ce_arr);
-	mutex_unlock(&ce_mutex);
+	struct ce_array *ca;
+	unsigned long flags;
+	u64 avg_count;
+	int i, time_slots = 1;
+	struct delayed_work *d_work = container_of(work, struct delayed_work, work);
+
+	if (!d_work)
+		return;
+
+	ca = container_of(d_work, struct ce_array, work);
+	if (!ca->array || ca->disabled)
+		return;
 
-	cec_mod_work(&cec_work, decay_interval);
+	if (!ca->short_period) {
+		mutex_lock(&ca->mutex);
+		do_spring_cleaning(ca);
+		mutex_unlock(&ca->mutex);
+	} else {
+		time_slots = RAS_CEC_NUM_TIME_SLOTS;
+		spin_lock_irqsave(&ca->spin_lock, flags);
+		ca->time_slot = (ca->time_slot + 1) % RAS_CEC_NUM_TIME_SLOTS;
+
+		for (i = 0; i < ca->n; i++) {
+			if (ca->array[i] & ELEM_STATUS_BIT)
+				continue;
+
+			/* Clear the old error count approximately by subtracting the avg count
+			 * from the total error count.
+			 */
+			avg_count = (ca->array[i] >> AVG_COUNT_SHIFT) & COUNT_MASK;
+			ca->array[i] -= avg_count;
+		}
+
+		if (ca->time_slot) {
+			spin_unlock_irqrestore(&ca->spin_lock, flags);
+			goto exit;
+		}
+
+		for (i = 0; i < ca->n; i++) {
+			if (ca->array[i] & ELEM_STATUS_BIT)
+				continue;
+
+			/* calculate average error count for the completed time period */
+			avg_count = COUNT(ca->array[i]) / RAS_CEC_NUM_TIME_SLOTS;
+			ca->array[i] -= (COUNT(ca->array[i]) % RAS_CEC_NUM_TIME_SLOTS);
+			/* store average error count */
+			ca->array[i] &= ~(COUNT_MASK << AVG_COUNT_SHIFT);
+			ca->array[i] |= (avg_count << AVG_COUNT_SHIFT);
+		}
+
+		do_spring_cleaning(ca);
+		spin_unlock_irqrestore(&ca->spin_lock, flags);
+	}
+
+exit:
+	cec_mod_work(&ca->work, decay_interval/time_slots);
 }
 
 /*
@@ -279,9 +372,9 @@ static u64 __maybe_unused del_lru_elem(void)
 	if (!ca->n)
 		return 0;
 
-	mutex_lock(&ce_mutex);
+	mutex_lock(&ca->mutex);
 	pfn = del_lru_elem_unlocked(ca);
-	mutex_unlock(&ce_mutex);
+	mutex_unlock(&ca->mutex);
 
 	return pfn;
 }
@@ -328,7 +421,7 @@ static int cec_add_elem(u64 pfn)
 	if (!ce_arr.array || ce_arr.disabled)
 		return -ENODEV;
 
-	mutex_lock(&ce_mutex);
+	mutex_lock(&ca->mutex);
 
 	ca->ces_entered++;
 
@@ -386,7 +479,7 @@ static int cec_add_elem(u64 pfn)
 	WARN_ON_ONCE(sanity_check(ca));
 
 unlock:
-	mutex_unlock(&ce_mutex);
+	mutex_unlock(&ca->mutex);
 
 	return ret;
 }
@@ -420,7 +513,7 @@ static int decay_interval_set(void *data, u64 val)
 	*(u64 *)data   = val;
 	decay_interval = val;
 
-	cec_mod_work(&cec_work, decay_interval);
+	cec_mod_work(&ce_arr.work, decay_interval);
 
 	return 0;
 }
@@ -446,7 +539,7 @@ static int array_dump(struct seq_file *m, void *v)
 	struct ce_array *ca = &ce_arr;
 	int i;
 
-	mutex_lock(&ce_mutex);
+	mutex_lock(&ca->mutex);
 
 	seq_printf(m, "{ n: %d\n", ca->n);
 	for (i = 0; i < ca->n; i++) {
@@ -468,7 +561,7 @@ static int array_dump(struct seq_file *m, void *v)
 
 	seq_printf(m, "Action threshold: %lld\n", action_threshold);
 
-	mutex_unlock(&ce_mutex);
+	mutex_unlock(&ca->mutex);
 
 	return 0;
 }
@@ -583,9 +676,9 @@ static void __init cec_init(void)
 
 #if defined(CONFIG_X86_MCE)
 	ce_arr.id_shift = PAGE_SHIFT;
-	INIT_DELAYED_WORK(&cec_work, cec_work_fn);
-	schedule_delayed_work(&cec_work, CEC_DECAY_DEFAULT_INTERVAL);
-
+	mutex_init(&ce_arr.mutex);
+	INIT_DELAYED_WORK(&ce_arr.work, cec_work_fn);
+	schedule_delayed_work(&ce_arr.work, CEC_DECAY_DEFAULT_INTERVAL);
 	mce_register_decode_chain(&cec_nb);
 #endif
 
-- 
2.17.1




* [RFC PATCH 6/7] RAS/CEC: Add CPU Correctable Error Collector to isolate an erroneous CPU core
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
                   ` (4 preceding siblings ...)
  2020-10-02 12:22 ` [RFC PATCH 5/7] RAS/CEC: Add support for errors count check on short time period Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:22 ` [RFC PATCH 7/7] ACPI / APEI: Add reporting ARM64 CPU correctable errors to the CEC Shiju Jose
  2020-10-02 12:43 ` [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Borislav Petkov
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

When CPU correctable errors, for example L1/L2 cache errors, are
reported on an ARM64 CPU core too often, that core should be isolated.
Add a CPU correctable error collector to store the CPU correctable
error count.

When the correctable error count for a CPU exceeds the threshold
value within a short time period, the collector tries to isolate the CPU core.

If disabling the entire CPU core is not acceptable, please suggest a
method to disable the L1 and L2 cache on an ARM64 core.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 arch/arm64/ras/Kconfig |  17 +++
 drivers/ras/Kconfig    |   1 +
 drivers/ras/cec.c      | 231 +++++++++++++++++++++++++++++++++++++++--
 include/linux/ras.h    |   9 ++
 4 files changed, 247 insertions(+), 11 deletions(-)
 create mode 100644 arch/arm64/ras/Kconfig

diff --git a/arch/arm64/ras/Kconfig b/arch/arm64/ras/Kconfig
new file mode 100644
index 000000000000..bfa14157cd2e
--- /dev/null
+++ b/arch/arm64/ras/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0
+config RAS_CEC
+        bool "Correctable Errors Collector"
+        depends on ARM64 && HOTPLUG_CPU && DEBUG_FS
+        help
+          This is a small cache which collects correctable CPU errors and
+          counts their repeated occurrence. Once the counter for a CPU
+          overflows in a short time period, we try to offline that CPU
+          as we take it to mean that it has reached a relatively high error
+          count and would probably be best if we don't use it anymore.
+
+          Presently CPU error collection is enabled for the ARM64 platform only.
+
+config RAS_CEC_DEBUG
+        bool "CEC debugging machinery"
+        default n
+        depends on RAS_CEC
diff --git a/drivers/ras/Kconfig b/drivers/ras/Kconfig
index c2a236f2e846..d2f877e5f7ad 100644
--- a/drivers/ras/Kconfig
+++ b/drivers/ras/Kconfig
@@ -32,5 +32,6 @@ menuconfig RAS
 if RAS
 
 source "arch/x86/ras/Kconfig"
+source "arch/arm64/ras/Kconfig"
 
 endif
diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c
index ca52917d514c..408bf2ac2461 100644
--- a/drivers/ras/cec.c
+++ b/drivers/ras/cec.c
@@ -7,6 +7,8 @@
 #include <linux/ras.h>
 #include <linux/kernel.h>
 #include <linux/workqueue.h>
+#include <linux/cpu.h>
+#include <linux/slab.h>
 
 #if defined(CONFIG_X86_MCE)
 #include <asm/mce.h>
@@ -143,7 +145,7 @@ static struct ce_array {
 		};
 		__u32 flags;
 	};
-} ce_arr;
+} ce_arr, cpu_ce_arr;
 
 static u64 dfs_pfn;
 
@@ -156,6 +158,8 @@ static u64 action_threshold = COUNT_MASK;
 #define CEC_DECAY_MAX_INTERVAL	   30 *	24 * 60 * 60	/* one month */
 static u64 decay_interval = CEC_DECAY_DEFAULT_INTERVAL;
 
+static const char * const bins[] = { "00", "01", "10", "11" };
+
 /* Definitions for elements (for example CPU) for which
  * error count on shrot time period is checked with threshold.
  *
@@ -484,6 +488,172 @@ static int cec_add_elem(u64 pfn)
 	return ret;
 }
 
+struct cec_elem_offline {
+	struct work_struct work;
+	struct ce_array *ca;
+	int array_index;
+	int elem_id;
+};
+
+/*
+ * Work function to offline a CPU, because the offlining needs to be done
+ * in process context.
+ */
+static void cec_cpu_offline_work_fn(struct work_struct *work)
+{
+	int rc, cpu;
+	struct cec_elem_offline *elem;
+	struct ce_array *ca;
+
+	elem = container_of(work, struct cec_elem_offline, work);
+
+	cpu = elem->elem_id;
+	if (!cpu_online(cpu))
+		return;
+
+	rc = remove_cpu(cpu);
+	if (rc) {
+		pr_warn("Failed to offline CPU%d, error %d\n", cpu, rc);
+	} else {
+		ca = elem->ca;
+		ca->array[elem->array_index] |= ELEM_STATUS_BIT;
+	}
+
+	kfree(elem);
+}
+
+int cec_cpu_add_elem(int cpu, u64 ce_count)
+{
+	struct ce_array *ca = &cpu_ce_arr;
+	unsigned int to = 0;
+	int count, ret = 0;
+	unsigned long flags;
+	struct cec_elem_offline *elem;
+
+	/*
+	 * We can be called very early on the identify_cpu() path where we are
+	 * not initialized yet. We ignore the error for simplicity.
+	 */
+	if (!ca->array || ca->disabled || !cpu_online(cpu))
+		return -ENODEV;
+
+	spin_lock_irqsave(&ca->spin_lock, flags);
+
+	ca->ces_entered++;
+
+	ret = find_elem(ca, cpu, &to);
+	if (ret < 0) {
+		/*
+		 * Shift range [to-end] to make room for one more element.
+		 */
+		memmove((void *)&ca->array[to + 1],
+			(void *)&ca->array[to],
+			(ca->n - to) * sizeof(u64));
+
+		ca->array[to] = cpu << ca->id_shift;
+		ca->n++;
+	}
+
+	/* Error received for a CPU previously offlined by the CEC which was
+	 * later brought back online. Reset the entry.
+	 */
+	if (ca->array[to] & ELEM_STATUS_BIT) {
+		ca->array[to] &= ~(ELEM_STATUS_BIT);
+		ca->array[to] &= ~(COUNT_MASK);
+	}
+
+	/* Add/refresh element generation and increment count */
+	ca->array[to] |= DECAY_MASK << COUNT_BITS;
+	ca->array[to] += ce_count;
+
+	/* Check action threshold and offline, if reached. */
+	count = COUNT(ca->array[to]);
+	if (count >= action_threshold) {
+		if (!cpu_online(cpu)) {
+			pr_warn("CEC: Invalid cpu: %d\n", cpu);
+		} else {
+			/* We have reached max count for this cpu, offline it. */
+			ca->elems_poisoned++;
+			/* schedule work function to offline the cpu */
+			elem = kmalloc(sizeof(*elem), GFP_NOWAIT);
+			if (elem) {
+				pr_info("CEC: offlining cpu: %d\n", cpu);
+				elem->ca = ca;
+				elem->array_index = to;
+				elem->elem_id = cpu;
+				INIT_WORK(&elem->work, cec_cpu_offline_work_fn);
+				schedule_work(&elem->work);
+			} else
+				pr_warn("CEC: offlining cpu: out of memory %d\n", cpu);
+		}
+
+		/*
+		 * Return a >0 value to callers, to denote that we've reached
+		 * the offlining threshold.
+		 */
+		ret = 1;
+
+		goto unlock;
+	}
+
+	ca->decay_count++;
+
+	/* Do we need to call spring cleaning for the modules(eg CPU) with
+	 * small number of elements?
+	 */
+	if (ca->decay_count >= (num_present_cpus() >> DECAY_BITS))
+		do_spring_cleaning(ca);
+
+	WARN_ON_ONCE(sanity_check(ca));
+
+unlock:
+	spin_unlock_irqrestore(&ca->spin_lock, flags);
+
+	return ret;
+}
+
+static int cec_cpu_stats_show(struct seq_file *seq, void *v)
+{
+	struct ce_array *ca = &cpu_ce_arr;
+	unsigned long flags;
+	int i;
+
+	spin_lock_irqsave(&cpu_ce_arr.spin_lock, flags);
+	seq_puts(seq, "CEC CPU Stats:\n");
+
+	seq_printf(seq, "{ n: %d\n", ca->n);
+	for (i = 0; i < ca->n; i++) {
+		int cpu = ELEM_NO(ca->array[i], ca->id_shift);
+
+	seq_printf(seq, "cpu=%d: %03llx\n",
+		   cpu, ca->array[i]);
+
+	seq_printf(seq, " %3d: [%d|%s|%03lld|%s]\n",
+		   i, cpu, bins[DECAY(ca->array[i])],
+		   COUNT(ca->array[i]),
+		   cpu_online(cpu) ? "online" :
+		   (ca->array[i] & ELEM_STATUS_BIT) ?
+		   "offlined-by-cec" : "offline");
+	}
+
+	seq_printf(seq, "}\n");
+
+	seq_printf(seq, "Stats:\nCEs: %llu\nofflined CPUs: %llu\n",
+		   ca->ces_entered, ca->elems_poisoned);
+
+	seq_printf(seq, "Flags: 0x%x\n", ca->flags);
+
+	seq_printf(seq, "Decay interval: %lld seconds\n", decay_interval);
+	seq_printf(seq, "Decays: %lld\n", ca->decays_done);
+
+	seq_printf(seq, "Action threshold: %lld\n", action_threshold);
+
+	spin_unlock_irqrestore(&cpu_ce_arr.spin_lock, flags);
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(cec_cpu_stats);
+
 static int u64_get(void *data, u64 *val)
 {
 	*val = *(u64 *)data;
@@ -514,6 +684,7 @@ static int decay_interval_set(void *data, u64 val)
 	decay_interval = val;
 
 	cec_mod_work(&ce_arr.work, decay_interval);
+	cec_mod_work(&cpu_ce_arr.work, decay_interval/RAS_CEC_NUM_TIME_SLOTS);
 
 	return 0;
 }
@@ -532,8 +703,6 @@ static int action_threshold_set(void *data, u64 val)
 }
 DEFINE_DEBUGFS_ATTRIBUTE(action_threshold_ops, u64_get, action_threshold_set, "%lld\n");
 
-static const char * const bins[] = { "00", "01", "10", "11" };
-
 static int array_dump(struct seq_file *m, void *v)
 {
 	struct ce_array *ca = &ce_arr;
@@ -620,6 +789,14 @@ static int __init create_debugfs_nodes(void)
 	}
 #endif
 
+#if defined(CONFIG_ARM64)
+	array = debugfs_create_file("cpu_stats", 0400, d, NULL, &cec_cpu_stats_fops);
+	if (!array) {
+		pr_warn("Error creating cpu_stats debugfs node!\n");
+		goto err;
+	}
+#endif
+
 	return 0;
 
 err:
@@ -658,21 +835,26 @@ static struct notifier_block cec_nb = {
 
 static void __init cec_init(void)
 {
-	if (ce_arr.disabled)
+	if (ce_arr.disabled && cpu_ce_arr.disabled)
 		return;
 
 #if defined(CONFIG_X86_MCE)
 	ce_arr.array = (void *)get_zeroed_page(GFP_KERNEL);
 	if (!ce_arr.array) {
 		pr_err("Error allocating CE array page!\n");
-		return;
+		goto error;
 	}
 #endif
 
-	if (create_debugfs_nodes()) {
-		free_page((unsigned long)ce_arr.array);
-		return;
-	}
+#if defined(CONFIG_ARM64)
+	cpu_ce_arr.array = kcalloc(num_present_cpus(), sizeof(*(cpu_ce_arr.array)),
+				   GFP_KERNEL);
+	if (!cpu_ce_arr.array)
+		goto error;
+#endif
+
+	if (create_debugfs_nodes())
+		goto error;
 
 #if defined(CONFIG_X86_MCE)
 	ce_arr.id_shift = PAGE_SHIFT;
@@ -682,22 +864,49 @@ static void __init cec_init(void)
 	mce_register_decode_chain(&cec_nb);
 #endif
 
+#if defined(CONFIG_ARM64)
+	cpu_ce_arr.short_period = true;
+	cpu_ce_arr.id_shift = ELEM_ID_SHIFT;
+	spin_lock_init(&cpu_ce_arr.spin_lock);
+	INIT_DELAYED_WORK(&cpu_ce_arr.work, cec_work_fn);
+	schedule_delayed_work(&cpu_ce_arr.work, CEC_DECAY_DEFAULT_INTERVAL/RAS_CEC_NUM_TIME_SLOTS);
+#endif
+
 	pr_info("Correctable Errors collector initialized.\n");
+	return;
+error:
+#if defined(CONFIG_ARM64)
+	kfree(cpu_ce_arr.array);
+#endif
+	if (ce_arr.array)
+		free_page((unsigned long)ce_arr.array);
+
 }
 late_initcall(cec_init);
 
 int __init parse_cec_param(char *str)
 {
+	bool match = false;
+
 	if (!str)
 		return 0;
 
 	if (*str == '=')
 		str++;
 
-	if (!strcmp(str, "cec_disable"))
+	if (!strcmp(str, "cec_disable")) {
 		ce_arr.disabled = 1;
+		match = true;
+	}
+
+	if (!strcmp(str, "cec_cpu_disable")) {
+		cpu_ce_arr.disabled = 1;
+		match = true;
+	}
+
+	if (match)
+		return 1;
 	else
 		return 0;
 
-	return 1;
 }
diff --git a/include/linux/ras.h b/include/linux/ras.h
index 1f4048bf2674..43d91298f1e3 100644
--- a/include/linux/ras.h
+++ b/include/linux/ras.h
@@ -18,6 +18,15 @@ static inline int ras_add_daemon_trace(void) { return 0; }
 
 #ifdef CONFIG_RAS_CEC
 int __init parse_cec_param(char *str);
+/**
+ * cec_cpu_add_elem - add the count of CPU correctable errors to the
+ * CEC(correctable errors collector).
+ * @cpu: CPU index.
+ * @ce_count: CPU correctable errors count.
+ */
+int cec_cpu_add_elem(int cpu, u64 ce_count);
+#else
+static inline int cec_cpu_add_elem(int cpu, u64 ce_count) { return -ENODEV; }
 #endif
 
 #ifdef CONFIG_RAS
-- 
2.17.1




* [RFC PATCH 7/7] ACPI / APEI: Add reporting ARM64 CPU correctable errors to the CEC
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
                   ` (5 preceding siblings ...)
  2020-10-02 12:22 ` [RFC PATCH 6/7] RAS/CEC: Add CPU Correctable Error Collector to isolate an erroneous CPU core Shiju Jose
@ 2020-10-02 12:22 ` Shiju Jose
  2020-10-02 12:43 ` [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Borislav Petkov
  7 siblings, 0 replies; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 12:22 UTC (permalink / raw)
  To: linux-edac, linux-acpi, linux-kernel, bp, tony.luck, rjw,
	james.morse, lenb
  Cc: linuxarm, shiju.jose

Report ARM64 CPU correctable errors to the RAS correctable
errors collector (CEC).

ARM processor error types are cache/TLB/bus errors.
Should any of the above error types be excluded from the
error collection and CPU core isolation?

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 drivers/acpi/apei/ghes.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 81bf71b10d44..3cecb457d352 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -511,6 +511,38 @@ static void ghes_handle_aer(struct acpi_hest_generic_data *gdata)
 #endif
 }
 
+static void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata)
+{
+	struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+	struct cper_arm_err_info *err_info;
+	int sec_sev;
+	int cpu, i, ret;
+
+	log_arm_hw_error(err);
+
+	sec_sev = ghes_severity(gdata->error_severity);
+	if (sec_sev != GHES_SEV_CORRECTED)
+		return;
+
+#if defined(CONFIG_ARM64)
+	cpu = get_logical_index(err->mpidr);
+	if (cpu == -EINVAL)
+		return;
+
+	/* ARM processor error types are cache/TLB/bus errors.
+	 * Should any of the above error types be excluded from the
+	 * error collection and CPU core isolation?
+	 */
+	err_info = (struct cper_arm_err_info *)(err + 1);
+	for (i = 0; i < err->err_info_num; i++) {
+		ret = cec_cpu_add_elem(cpu, err_info->multiple_error + 1);
+		if (ret)
+			break;
+		err_info += 1;
+	}
+#endif
+}
+
 static bool ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
@@ -543,9 +575,7 @@ static bool ghes_do_proc(struct ghes *ghes,
 			ghes_handle_aer(gdata);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-			struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
-
-			log_arm_hw_error(err);
+			ghes_handle_arm_hw_error(gdata);
 		} else {
 			void *err = acpi_hest_get_payload(gdata);
 
-- 
2.17.1




* Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
                   ` (6 preceding siblings ...)
  2020-10-02 12:22 ` [RFC PATCH 7/7] ACPI / APEI: Add reporting ARM64 CPU correctable errors to the CEC Shiju Jose
@ 2020-10-02 12:43 ` Borislav Petkov
  2020-10-02 15:38   ` Shiju Jose
  2020-10-02 16:04   ` Luck, Tony
  7 siblings, 2 replies; 15+ messages in thread
From: Borislav Petkov @ 2020-10-02 12:43 UTC (permalink / raw)
  To: Shiju Jose
  Cc: linux-edac, linux-acpi, linux-kernel, tony.luck, rjw,
	james.morse, lenb, linuxarm

On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
> Open Questions based on the feedback from Boris,
> 1. ARM processor error types are cache/TLB/bus errors.
>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
> Any of the above error types should not be consider for the
> error collection and CPU core isolation?
> 
> 2.If disabling entire CPU core is not acceptable,
> please suggest method to disable L1 and L2 cache on ARM64 core?

More open questions:

> This requirement is the part of the early fault prediction by taking
> action when large number of corrected errors reported on a CPU core
> before it causing serious faults.

And do you know of actual real-life examples where this is really the
case? Do you have any users who report a large error count on ARM CPUs,
originating from the caches and that something like that would really
help?

Because from my x86 CPUs limited experience, the cache arrays are mostly
fine and errors reported there are not something that happens very
frequently so we don't even need to collect and count those.

So is this something which you need to have in order to check a box
somewhere that there is some functionality or is there an actual
real-life use case behind it which a customer has requested?

Open question from James with my reply to it:

On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
> If the corrected-count is available somewhere, can't this policy be
> made in user-space?

You mean rasdaemon goes and offlines CPUs when certain thresholds are
reached? Sure. It would be much more flexible too.

First we answer questions and discuss, then we code.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* RE: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 12:43 ` [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Borislav Petkov
@ 2020-10-02 15:38   ` Shiju Jose
  2020-10-02 17:33     ` James Morse
  2020-10-02 16:04   ` Luck, Tony
  1 sibling, 1 reply; 15+ messages in thread
From: Shiju Jose @ 2020-10-02 15:38 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-edac, linux-acpi, linux-kernel, tony.luck, rjw,
	james.morse, lenb, Linuxarm

Hi Boris, Hi James,

>-----Original Message-----
>From: Borislav Petkov [mailto:bp@alien8.de]
>Sent: 02 October 2020 13:44
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
>kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>james.morse@arm.com; lenb@kernel.org; Linuxarm
><linuxarm@huawei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>> Open Questions based on the feedback from Boris, 1. ARM processor
>> error types are cache/TLB/bus errors.
>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>> Any of the above error types should not be consider for the error
>> collection and CPU core isolation?
>>
>> 2.If disabling entire CPU core is not acceptable, please suggest
>> method to disable L1 and L2 cache on ARM64 core?
>
>More open questions:
>
>> This requirement is the part of the early fault prediction by taking
>> action when large number of corrected errors reported on a CPU core
>> before it causing serious faults.
>
>And do you know of actual real-life examples where this is really the case? Do
>you have any users who report a large error count on ARM CPUs, originating
>from the caches and that something like that would really help?
>
>Because from my x86 CPUs limited experience, the cache arrays are mostly
>fine and errors reported there are not something that happens very
>frequently so we don't even need to collect and count those.
>
>So is this something which you need to have in order to check a box
>somewhere that there is some functionality or is there an actual real-life use
>case behind it which a customer has requested?
We do not have a real-life example for this case. However, rare errors
like this can sometimes occur frequently at scale, which would cause
more serious issues if not handled.
>
>Open question from James with my reply to it:
>
>On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>> If the corrected-count is available somewhere, can't this policy be
>> made in user-space?
The error count is present in struct cper_arm_err_info, but the fields of
this structure are not reported to user-space through trace events.
Presently only the fields of struct cper_sec_proc_arm are reported
to user-space, through the ARM trace event.
Also, there can be multiple cper_arm_err_info entries per cper_sec_proc_arm.
Thus I think this needs to be reported through a new trace event.

Also, the logical index of the CPU needs to be extracted from the 'mpidr' field of
struct cper_sec_proc_arm using the platform dependent kernel function get_logical_index().
Thus the CPU index also needs to be reported to user space.
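
For discussion, a hypothetical sketch of such a trace event (the event name
and field layout are only illustrative; the cpu would be the logical index
obtained from get_logical_index(mpidr)):

TRACE_EVENT(arm_err_info_event,

	TP_PROTO(u8 type, u16 multiple_error, u64 error_info, int cpu),

	TP_ARGS(type, multiple_error, error_info, cpu),

	TP_STRUCT__entry(
		__field(u8, type)
		__field(u16, multiple_error)
		__field(u64, error_info)
		__field(int, cpu)
	),

	TP_fast_assign(
		__entry->type = type;
		__entry->multiple_error = multiple_error;
		__entry->error_info = error_info;
		__entry->cpu = cpu;
	),

	TP_printk("cpu: %d; type: %u; multiple_error: %u; error_info: %016llx",
		  __entry->cpu, __entry->type, __entry->multiple_error,
		  __entry->error_info)
);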
>
>You mean rasdaemon goes and offlines CPUs when certain thresholds are
>reached? Sure. It would be much more flexible too.
I think adding the CPU error collection to the kernel
has the following advantages:
    1. The CPU error collection and isolation would not be active if
       rasdaemon stopped running or is not running on a machine.
    2. Receiving errors and isolating a CPU core from user-space would
       probably be delayed further when a large number of errors are reported.
    3. Supporting an interface for configuration parameters, error statistics,
       etc. is probably easier to implement in the kernel.
    4. The interface for disabling a CPU is easy to use from the kernel level.

>
>First we answer questions and discuss, then we code.
>
>--
>Regards/Gruss,
>    Boris.
>

Thanks,
Shiju

>https://people.kernel.org/tglx/notes-about-netiquette


* RE: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 12:43 ` [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Borislav Petkov
  2020-10-02 15:38   ` Shiju Jose
@ 2020-10-02 16:04   ` Luck, Tony
  1 sibling, 0 replies; 15+ messages in thread
From: Luck, Tony @ 2020-10-02 16:04 UTC (permalink / raw)
  To: Borislav Petkov, Shiju Jose
  Cc: linux-edac, linux-acpi, linux-kernel, rjw, james.morse, lenb, linuxarm

> Because from my x86 CPUs limited experience, the cache arrays are mostly
> fine and errors reported there are not something that happens very
> frequently so we don't even need to collect and count those.

On Intel X86 we leave the counting and threshold decisions about cache
health to the hardware. When a cache reaches the limit, it logs a "yellow"
status instead of "green" in the machine check bank (error is still marked
as "corrected"). The mcelog(8) daemon may attempt to take CPUs that share
that cache offline.

See Intel SDM volume 3B "15.4 Enhanced Cache Error Reporting"

-Tony




* Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 15:38   ` Shiju Jose
@ 2020-10-02 17:33     ` James Morse
  2020-10-02 18:02       ` Borislav Petkov
  2020-10-06 16:13       ` Shiju Jose
  0 siblings, 2 replies; 15+ messages in thread
From: James Morse @ 2020-10-02 17:33 UTC (permalink / raw)
  To: Shiju Jose
  Cc: Borislav Petkov, linux-edac, linux-acpi, linux-kernel, tony.luck,
	rjw, lenb, Linuxarm

Hi Shiju,

On 02/10/2020 16:38, Shiju Jose wrote:
>> -----Original Message-----
>> From: Borislav Petkov [mailto:bp@alien8.de]
>> Sent: 02 October 2020 13:44
>> To: Shiju Jose <shiju.jose@huawei.com>
>> Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
>> kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>> james.morse@arm.com; lenb@kernel.org; Linuxarm
>> <linuxarm@huawei.com>
>> Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>> short time period
>>
>> On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>>> Open Questions based on the feedback from Boris, 1. ARM processor
>>> error types are cache/TLB/bus errors.
>>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>>> Any of the above error types should not be consider for the error
>>> collection and CPU core isolation?

Boris' earlier example was that Bus errors have very little to do with the CPU. It may
just be that this CPU is handling the IRQs for a faulty device, and is thus receiving the
errors. irqbalance could change that anytime.

I'd prefer we just stick with the caches for now.


>>> 2.If disabling entire CPU core is not acceptable, please suggest
>>> method to disable L1 and L2 cache on ARM64 core?

This is not something Linux can do. It may not be possible to do at all.


>> More open questions:
>>
>>> This requirement is the part of the early fault prediction by taking
>>> action when large number of corrected errors reported on a CPU core
>>> before it causing serious faults.
>>
>> And do you know of actual real-life examples where this is really the case? Do
>> you have any users who report a large error count on ARM CPUs, originating
>>from the caches and that something like that would really help?
>>
>> Because from my x86 CPUs limited experience, the cache arrays are mostly
>> fine and errors reported there are not something that happens very
>> frequently so we don't even need to collect and count those.
>>
>> So is this something which you need to have in order to check a box
>> somewhere that there is some functionality or is there an actual real-life use
>> case behind it which a customer has requested?

> We have not got a real-life example for this case. However rare errors
> like this can occur frequently sometimes at scale, which would cause
> more serious issues if not handled.

Don't you need to look across all your 'at scale' machines to know what normal looks like?

I can't see how a reasonable prediction can be made from just one machine's behaviour
since boot. These are corrected errors, nothing has gone wrong.


>> Open question from James with my reply to it:
>>
>> On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>>> If the corrected-count is available somewhere, can't this policy be
>>> made in user-space?

> The error count is present in the struct cper_arm_err_info, the fields of
> this structure  are not reported to the user-space through trace events?

> Presently the fields of table struct cper_sec_proc_arm only are reported 
> to the user-space through trace-arm-event.
> Also there can be multiple cper_arm_err_info per cper_sec_proc_arm.
> Thus I think this need reporting through a new trace event?

I think it would be more useful to feed this into edac like ghes.c already does for memory
errors. These would end up as corrected error counts on devices for L3 or whatever.

This saves tying your user-space component to the arm specific CPER record format, or
even to firmware-first, meaning it's useful to the widest number of people.


> Also the logical index of a CPU which I think need to extract from the 'mpidr' field of
> struct cper_sec_proc_arm using platform dependent kernel function get_logical_index().    
> Thus cpu index also need to report to the user space.

I thought you were talking about caches. These structures have a 'level' for cache errors.

Certainly you need a way of knowing which cache it is, and from that number you should
also be able to work out which CPUs it is attached to.

x86 already has a way of doing this:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/resctrl_ui.rst#n327

arm64 doesn't have anything equivalent, but my current proposal for MPAM is here:
https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git/commit/?h=mpam/snapshot/feb&id=ce3148bd39509ac8b12f5917f0f92ce014a5b22f

I was hoping the PPTT table would grow something we could use as an ID, but I've not seen
anything yet.


>> You mean rasdaemon goes and offlines CPUs when certain thresholds are
>> reached? Sure. It would be much more flexible too.

> I think adding the CPU error collection to the kernel
> has the following advantages,
>     1. The CPU error collection and isolation would not be active if the
>          rasdaemon stopped running or not running on a machine.

Having CPUs disappear when nothing has gone wrong is deeply surprising. This is going to
be very specific to a small number of people. I bet they want to calculate the threshold
cluster-wide.

Having this policy in user-space means you have the ability to do something much more
useful... e.g move your corrected-error-intolerant workload off the CPU that seems to be
affected.
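
For example (illustrative only), keeping a task off a suspect CPU 3 is just:
| taskset -cp 0-2,4-7 <pid>
or moving it into a cpuset that excludes that CPU.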

(If someone who needs to solve this problem by offlining CPUs could chime in, that would
really help)


>     2. Receiving errors and isolating a CPU core from the user-space would
>         probably delayed more,  when large number of errors are reported.

These are corrected errors. Nothing has gone wrong.


>    3. Supporting the interface for configuration parameters and  error statistics etc
>         probably easy to implement in the kernel.

I disagree! Once it's added as a kernel interface, we can't change it. It's much better for
these policy-specific algorithms and thresholds to live in user-space. The kernel can just
deal with the unchanging work of making the counter available.


>    4. The interface given for disabling a CPU is easy to use from the kernel level.

It's even easier for privileged user-space:
| echo 0 > /sys/devices/system/cpu/cpu0/online


Thanks,

James


* Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 17:33     ` James Morse
@ 2020-10-02 18:02       ` Borislav Petkov
  2020-10-06 16:13       ` Shiju Jose
  1 sibling, 0 replies; 15+ messages in thread
From: Borislav Petkov @ 2020-10-02 18:02 UTC (permalink / raw)
  To: Shiju Jose
  Cc: James Morse, linux-edac, linux-acpi, linux-kernel, tony.luck,
	rjw, lenb, Linuxarm

On Fri, Oct 02, 2020 at 06:33:17PM +0100, James Morse wrote:
> > I think adding the CPU error collection to the kernel
> > has the following advantages,
> >     1. The CPU error collection and isolation would not be active if the
> >          rasdaemon stopped running or not running on a machine.

Wasn't there this thing called systemd which promised that it would
restart daemons when they fail? And even if it is not there, you can
always do your own cronjob which checks rasdaemon presence and restarts
it if it has died and sends a mail to the admin to check why it had
died.

Everything else I've trimmed but James has put it a lot more eloquently
than me and I cannot agree more with what he says. Doing this in
userspace is better in every aspect you can think of.

The current CEC thing runs in the kernel because it has a completely
different purpose - to limit corrected error reports which turn into
very expensive support calls for errors which were corrected but people
simply don't get that they were corrected. Instead, they throw hands in
the air and go "OMG, my hardware is failing".

Where those are, as James says:

> These are corrected errors. Nothing has gone wrong.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* RE: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-02 17:33     ` James Morse
  2020-10-02 18:02       ` Borislav Petkov
@ 2020-10-06 16:13       ` Shiju Jose
  2020-10-07 16:45         ` James Morse
  1 sibling, 1 reply; 15+ messages in thread
From: Shiju Jose @ 2020-10-06 16:13 UTC (permalink / raw)
  To: James Morse
  Cc: Borislav Petkov, linux-edac, linux-acpi, linux-kernel, tony.luck,
	rjw, lenb, Linuxarm, Jonathan Cameron

Hi James,

Thanks for the reply and the information shared.

>-----Original Message-----
>From: James Morse [mailto:james.morse@arm.com]
>Sent: 02 October 2020 18:33
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: Borislav Petkov <bp@alien8.de>; linux-edac@vger.kernel.org; linux-
>acpi@vger.kernel.org; linux-kernel@vger.kernel.org; tony.luck@intel.com;
>rjw@rjwysocki.net; lenb@kernel.org; Linuxarm <linuxarm@huawei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>Hi Shiju,
>
>On 02/10/2020 16:38, Shiju Jose wrote:
>>> -----Original Message-----
>>> From: Borislav Petkov [mailto:bp@alien8.de]
>>> Sent: 02 October 2020 13:44
>>> To: Shiju Jose <shiju.jose@huawei.com>
>>> Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
>>> kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>>> james.morse@arm.com; lenb@kernel.org; Linuxarm
><linuxarm@huawei.com>
>>> Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count
>>> check on short time period
>>>
>>> On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>>>> Open Questions based on the feedback from Boris, 1. ARM processor
>>>> error types are cache/TLB/bus errors.
>>>>    [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec
>>>> v2.8] Any of the above error types should not be consider for the
>>>> error collection and CPU core isolation?
>
>Boris' earlier example was that Bus errors have very little to do with the CPU.
>It may just be that this CPU is handling the IRQs for a fault device, and thus
>receiving the errors. irqbalance could change that anytime.
>
>I'd prefer we just stick with the caches for now.
>
[...]

>
>>> Open question from James with my reply to it:
>>>
>>> On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>>>> If the corrected-count is available somewhere, can't this policy be
>>>> made in user-space?
>
>> The error count is present in the struct cper_arm_err_info, the fields
>> of this structure  are not reported to the user-space through trace events?
>
>> Presently the fields of table struct cper_sec_proc_arm only are
>> reported to the user-space through trace-arm-event.
>> Also there can be multiple cper_arm_err_info per cper_sec_proc_arm.
>> Thus I think this need reporting through a new trace event?
>
>I think it would be more useful to feed this into edac like ghes.c already does
>for memory errors. These would end up as corrected errors counts on devices
>for L3 or whatever.
>
>This saves fixing your user-space component to the arm specific CPER record
>format, or even firmware-first, meaning its useful to the widest number of
>people.
>
>
>> Also the logical index of a CPU which I think need to extract from the
>'mpidr' field of
>> struct cper_sec_proc_arm using platform dependent kernel function
>get_logical_index().
>> Thus cpu index also need to report to the user space.
>
>I thought you were talking about caches. These structures have a 'level' for
>cache errors.
>
>Certainly you need a way of knowing which cache it is, and from that number
>you should also be able to work out which the CPUs it is attached to.
>
>x86 already has a way of doing this:
>https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Docu
>mentation/x86/resctrl_ui.rst#n327
>
>arm64 doesn't have anything equivalent, but my current proposal for MPAM
>is here:
>https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git/commit/?h=
>mpam/snapshot/feb&id=ce3148bd39509ac8b12f5917f0f92ce014a5b22f
>
>I was hoping the PPTT table would grow something we could use as an ID, but
>I've not seen anything yet.

Please find below the pseudo code we added for the kernel side, to make sure
we have correctly understood your suggestions.

1. Create edac device and edac device sysfs entries for the online CPU caches.
/drivers/edac/edac_device.c
struct edac_device_ctl_info *edac_device_add_cache(unsigned int id, u8 level, u8 type)
{
	...
	/* Check edac entry for cache already present */       
	edev_cache = find_edac_device_cache(id, level, type);
	if (edev_cache)
		return edev_cache;
 
	edev_cache = edac_device_alloc_ctrl_info(...);
 	if (!edev_cache)
		return NULL;

	rc = edac_device_add_device(edev_cache);
 	if (rc)
		goto exit;

 	/* store edev_cache for future use */
 	...
	return edev_cache;

 exit:
	...
	return NULL; 
 }

/drivers/base/cacheinfo.c
int cache_create_edac_entries(u64 mpidr, u8 cache_level, u8 cache_type)
{ 
	...
	/* Get cacheinfo for each online cpus */
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
		if (!cpu_ci || !cpu_ci->id)
			continue;
        		... 
		/*Add  the edac entry for the CPU cache */
		edev_cache = edac_device_add_cache(cpu_ci->id, cpu_ci->level, cpu_ci->type);
		if (!edev_cache)
			break;
		...
	}
	...	
}
     
unsigned int cache_get_cache_id(u64 proc_id, u8 cache_level, u8 cache_type)
{ 
	unsigned int cache_id = 0;
	...
	/* Walk looking for matching cache node */   
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
		if (!cpu_ci || !cpu_ci->id)
			continue;

		id = CONV(proc_id);  /* need to check */
		if((id == cpu_ci->id) && (cache_level == cpu_ci->level) && (cache_type == cpu_ci->type))  {
			cache_id = cpu_ci->id;
			break;
		}
	}
	return cache_id;
}

2. Store CPU CE count in the edac sysfs entry for the CPU cache.

drivers/edac/ghes_edac.c
void ghes_edac_report_cpu_error(int cache_id, u8 cache_level, u8 cache_type , uint32 ce_count)
{
	...
	/* Check edac entry for cache already present, if not add new entry */       
	edev_cache = find_edac_device_cache(cache_id, cache_level, cache_type);
	if (!edev_cache) {
		/*Add  the edac entry for the cache */
		edev_cache = edac_device_add_cache(cache_id, cache_level, cache_type);
		if (!edev_cache)
			return;
	}

	/* Store the ce_count to /sys/devices/system/edac/ cpu/cpu<no>/L<N>cache/ce_count */
	edac_device_handle_ce_count(edev_cache, ce_count, ...)
}
 
drivers/acpi/apei/ghes.c
void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata) {
 	...
 	if (sec_sev != GHES_SEV_CORRECTED)
 		return;
 	mpidr = cper_sec_proc_arm->mpidr;    
 	for(i = 0; i < cper_sec_proc_arm->err_info_num; i++) {
 		if(cper_sec_proc_info->type != CPER_ARM_CACHE_ERROR) 
 			continue; 
 		ce_count = cper_arm_err_info->multiple_error + 1;
		cache_type = cper_arm_err_info->type;
		cache_level = cper_arm_err_info->error_info<24: 22>;  
		cache_id = cache_get_cache_id(mpidr, cache_level, cache_type);
 		if (!cache_id)
 			return;
		ghes_edac_report_cpu_error(cache_id, cache_level, cache_type , ce_count);
	}
              ...
	return;	
}

>
>
>>> You mean rasdaemon goes and offlines CPUs when certain thresholds are
>>> reached? Sure. It would be much more flexible too.
>
[...]
>
>
>Thanks,
>
>James

Thanks,
Shiju


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period
  2020-10-06 16:13       ` Shiju Jose
@ 2020-10-07 16:45         ` James Morse
  0 siblings, 0 replies; 15+ messages in thread
From: James Morse @ 2020-10-07 16:45 UTC (permalink / raw)
  To: Shiju Jose
  Cc: Borislav Petkov, linux-edac, linux-acpi, linux-kernel, tony.luck,
	rjw, lenb, Linuxarm, Jonathan Cameron

Hi Shiju,

On 06/10/2020 17:13, Shiju Jose wrote:

[...]

> Please find below the pseudo code we drafted for the kernel side, to make sure
> we have correctly understood your suggestions.
> 
> 1. Create edac device and edac device sysfs entries for the online CPU caches.
> /drivers/edac/edac_device.c
> struct edac_device_ctl_info  *edac_device_add_cache(unsigned int id, u8 level, u8 type) {

Eh? Ah, you are adding helpers for devices that are a cache. As far as I can see, edac
only cares about 'devices'; I don't think this is needed unless there are multiple users,
or it makes a visible difference to user-space.

Otherwise this could just go into ghes_edac.c


How this looks to user-space probably needs discussing. We should avoid inventing anything
new. I'd expect user-space to see something like the structure described at the top of
edac_device.h... but I can't spot a driver using this.
(it's a shame it's not the other way up, to avoid duplicating shared caches)

Some archaeology may be needed!

(If there is some existing structure, I agree it should be wrapped up in helpers, to make
it easiest for things to end up the same. This may be what an edac_device_block is...)
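
For illustration only, a sketch of how the existing edac_device hierarchy might map onto
caches, assuming one "cpu_cache" device with an instance per CPU and a block per cache
level. The device name, the parent_dev pointer and the cpu/level variables at the call
site are invented for the example; this is not an agreed ABI:

	/* allocate "cpu_cache": one instance per CPU, blocks named L1..L3 */
	edev = edac_device_alloc_ctl_info(0, "cpu_cache", num_possible_cpus(),
					  "L", 3, 1, NULL, 0,
					  edac_device_alloc_index());
	if (!edev)
		return -ENOMEM;

	edev->dev = parent_dev;			/* hypothetical parent device */
	edev->mod_name = "ghes_edac";
	edev->ctl_name = "cpu_cache";

	if (edac_device_add_device(edev)) {
		edac_device_free_ctl_info(edev);
		return -ENODEV;
	}

	/* later, on a corrected error on CPU 'cpu' at cache level 'level': */
	edac_device_handle_ce(edev, cpu, level - 1, "corrected cache error");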


>  }


> /drivers/base/cacheinfo.c
> int cache_create_edac_entries(u64 mpidr, u8 cache_level, u8 cache_type)
> { 
> 	...
> 	/* Get cacheinfo for each online cpus */
> 	for_each_online_cpu(i) {
> 		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);

I agree the structure of the caches should come from cacheinfo, and you spotted that it
only works for online CPUs! (This means there is an interaction with cpuhp here.)


> 		if (!cpu_ci || !cpu_ci->id)

cpu_ci->id? 0 is a valid id; there is an attribute flag to say whether the id is valid.
This field exists in struct cacheinfo, not struct cpu_cacheinfo.
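
A minimal sketch of what I mean, assuming the current struct cacheinfo layout, where the
CACHE_ID attribute bit says whether ->id was populated:

	struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(cpu);
	struct cacheinfo *leaf;
	unsigned int i;

	for (i = 0; i < cpu_ci->num_leaves; i++) {
		leaf = &cpu_ci->info_list[i];
		/* only trust leaf->id if cacheinfo populated one */
		if (!(leaf->attributes & CACHE_ID))
			continue;
		/* leaf->id, leaf->level and leaf->type are usable here */
	}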


> 			continue;
>         		... 
> 		/*Add  the edac entry for the CPU cache */
> 		edev_cache = edac_device_add_cache(cpu_ci->id, cpu_ci ->level, cpu_ci ->type)
> 		if (!edev_cache)
> 			break;

This would break all other edac users.
The edac driver for the platform should take care of creating this stuff, not the core
cacheinfo code.

The edac driver for the platform may know that L2 doesn't report RAS errors, so there is
no point exposing it.

For firmware-first, we can't know this until an error shows up, so we have to create
everything. This stuff should only be created/exported when ghes_edac.c is determined to
be this platform's edac driver. This code should live in ghes_edac.c.


> 		...
> 	}
> 	...	
> }

> unsigned int cache_get_cache_id(u64 proc_id, u8 cache_level, u8 cache_type)

See get_cpu_cacheinfo_id(int cpu, int level) in -next. (Something very similar to this
lived in arch/x86; the bits of the MPAM tree that moved it got queued for -next.)
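
From memory, the helper in -next has roughly this shape (details may differ, treat it as
a sketch):

	static inline int get_cpu_cacheinfo_id(int cpu, int level)
	{
		struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
		int i;

		for (i = 0; i < ci->num_leaves; i++) {
			if (ci->info_list[i].level == level) {
				if (ci->info_list[i].attributes & CACHE_ID)
					return ci->info_list[i].id;
				return -1;
			}
		}

		return -1;
	}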


> { 
> 	unsigned int cache_id = 0;
> 	...
> 	/* Walk looking for matching cache node */   
> 	for_each_online_cpu(i) {

(there is an interaction with cpuhp here)


> 		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
> 		if (!cpu_ci || !cpu_ci->id)
> 			continue;


> 		id = CONV(proc_id);  /* need to check */

No idea what is going on here.

(Deriving an ID from the CPU_s_ that are attached to the cache is arm64 specific. This has
to work for x86 too.
The MPAM out-of-tree code does this as we don't have anything else. Feedback when it was
posted as RFC was that the id values should be compacted; I was hoping we would get
something like an id from the PPTT before needing this value as resctrl ABI for MPAM.)


> 		if((id == cpu_ci->id) && (cache_level == cpu_ci->level) && (cache_type == cpu_ci->type))  {
> 			cache_id = cpu_ci->id;
> 			break;
> 		}
> 	}
> 	return cache_id;
> }


> 2. Store CPU CE count in the edac sysfs entry for the CPU cache.
> 
> drivers/edac/ghes_edac.c
> void ghes_edac_report_cpu_error(int cache_id, u8 cache_level, u8 cache_type , uint32 ce_count)
> {
> 	...
> 	/* Check edac entry for cache already present, if not add new entry */       

You can't create devices at runtime! The notification comes in irq context, and
edac_device_add_device() takes a mutex, and you need to allocate memory.

This could be deferred to process context - but I bet it's a nuisance for user-space to not
know what counters are available until errors show up.


> 	edev_cache = find_edac_device_cache(cache_id, cache_level, cache_type);
> 	if (!edev_cache) {
> 		/*Add  the edac entry for the cache */
> 		edev_cache = edac_device_add_cache(cache_id, cache_level, cache_type);
> 		if (!edev_cache)
> 			return;
> 	}

Better would be to look up the device based on the CPER. (It already looks up the DIMM
based on the CPER.)


> 	/* Store the ce_count to /sys/devices/system/edac/ cpu/cpu<no>/L<N>cache/ce_count */
> 	edac_device_handle_ce_count(edev_cache, ce_count, ...)
> }
>  
> drivers/acpi/apei/ghes.c
> void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata) {
>  	...
>  	if (sec_sev != GHES_SEV_CORRECTED)
>  		return;

(Xiaofei Tan is looking at supporting some others of these, so you need to interact with
that work too.)


>  	mpidr = cper_sec_proc_arm->mpidr;    

You want an arch-specific call to convert this to the linux CPU number, and then use that
everywhere. This makes the code more re-usable, and less surprising to other readers.

arm64's get_logical_index() looks like it does what you want.
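
Something like this at the ghes.c call site, assuming the CPER mpidr only needs the
affinity bits masked off (an assumption worth checking against the spec):

	int cpu = get_logical_index(mpidr & MPIDR_HWID_BITMASK);

	if (cpu < 0)
		return;		/* no CPU known to Linux has this MPIDR */
	/* pass 'cpu' around from here on, not the raw mpidr */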


>  	for(i = 0; i < cper_sec_proc_arm->err_info_num; i++) {
>  		if(cper_sec_proc_info->type != CPER_ARM_CACHE_ERROR) 
>  			continue; 
>  		ce_count = cper_arm_err_info->multiple_error + 1;
> 		cache_type = cper_arm_err_info->type;
> 		cache_level = cper_arm_err_info->error_info<24: 22>;  

> 		cache_id = cache_get_cache_id(mpidr, cache_level, cache_type);

This needs to be architecture agnostic; it must take the CPU number.


>  		if (!cache_id)
>  			return;

0 is a valid id.


> 		ghes_edac_report_cpu_error(cache_id, cache_level, cache_type , ce_count);
> 	}
>               ...
> 	return;	
> }


Thanks,

James

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread

Thread overview: 15+ messages
2020-10-02 12:22 [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 1/7] RAS/CEC: Replace the macro PFN with ELEM_NO Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 2/7] RAS/CEC: Replace pfns_poisoned with elems_poisoned Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 3/7] RAS/CEC: Move X86 MCE specific code under CONFIG_X86_MCE Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 4/7] RAS/CEC: Modify cec_mod_work() for common use Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 5/7] RAS/CEC: Add support for errors count check on short time period Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 6/7] RAS/CEC: Add CPU Correctable Error Collector to isolate an erroneous CPU core Shiju Jose
2020-10-02 12:22 ` [RFC PATCH 7/7] ACPI / APEI: Add reporting ARM64 CPU correctable errors to the CEC Shiju Jose
2020-10-02 12:43 ` [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period Borislav Petkov
2020-10-02 15:38   ` Shiju Jose
2020-10-02 17:33     ` James Morse
2020-10-02 18:02       ` Borislav Petkov
2020-10-06 16:13       ` Shiju Jose
2020-10-07 16:45         ` James Morse
2020-10-02 16:04   ` Luck, Tony
