* [PATCH v1 0/3] Introduce per NUMA node memory error statistics
@ 2023-01-16 19:38 Jiaqi Yan
  2023-01-16 19:39 ` [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs Jiaqi Yan
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-16 19:38 UTC (permalink / raw)
  To: tony.luck, naoya.horiguchi
  Cc: jiaqiyan, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

Background
==========
In the RFC for Kernel Support of Memory Error Detection [1], one advantage
of software-based scanning over hardware patrol scrubber is the ability
to make statistics visible to system administrators. The statistics
include 2 categories:
* Memory error statistics, for example, how many memory errors are
  encountered, and how many of them are recovered by the kernel. Note these
  memory errors are non-fatal to the kernel: during machine check
  exception (MCE) handling, the kernel has already classified the MCE's
  severity as not requiring a panic (though recovery action is either
  required or optional).
* Scanner statistics, for example, how many times the scanner has fully
  scanned a NUMA node, and how many errors are first detected by the scanner.

The memory error statistics are useful to userspace, are actually not
specific to scanner-detected memory errors, and are the focus of this RFC.

Motivation
==========
Memory error stats are important to userspace but insufficient in the
kernel today. Datacenter administrators can better monitor a machine's
memory health with visible stats. For example, while memory errors are
inevitable on servers with 10+ TB of memory, starting server maintenance
when there are only 1~2 recovered memory errors could be overreacting;
in a cloud production environment, maintenance usually means live-migrating
all the workloads running on the server, and this usually causes nontrivial
disruption to the customer. Providing insight into the scope of memory
errors on a system helps to determine the appropriate follow-up action.
In addition, the kernel's existing memory error stats need to be
standardized so that userspace can reliably count on their usefulness.

Today the kernel provides the following memory error info to userspace,
but each is insufficient or has disadvantages:
* HardwareCorrupted in /proc/meminfo: the total number of bytes poisoned,
  but no per-NUMA-node stats
* ras:memory_failure_event: only available after being explicitly enabled
* /dev/mcelog provides much useful info about the MCEs, but doesn't
  capture how memory_failure recovered memory MCEs
* kernel logs: userspace needs to process log text

Exposing memory error stats is also a good start for the in-kernel memory
error detector. Today the data sources of memory error stats are either
direct memory error consumption or hardware patrol scrubber detection
(when signaled as UCNA; those signaled as SRAO are not handled by
memory_failure). Once the in-kernel memory scanner is implemented, it will
be the main source, as it is usually configured to scan memory DIMMs
constantly and faster than the hardware patrol scrubber.

How Implemented
===============
As Naoya pointed out [2], exposing memory error statistics to userspace
is useful independent of software or hardware scanner. Therefore we
implement the memory error statistics independent of the in-kernel memory
error detector. It exposes the following per NUMA node memory error
counters:

  /sys/devices/system/node/node${X}/memory_failure/pages_poisoned
  /sys/devices/system/node/node${X}/memory_failure/pages_recovered
  /sys/devices/system/node/node${X}/memory_failure/pages_ignored
  /sys/devices/system/node/node${X}/memory_failure/pages_failed
  /sys/devices/system/node/node${X}/memory_failure/pages_delayed

These counters describe how many raw pages are poisoned and, after the
attempted recoveries by the kernel, their resolutions: how many are
recovered, ignored, failed, or delayed, respectively. This approach is
easier to extend for future use cases than /proc/meminfo, trace events,
and logs. The following math holds for the statistics:
* pages_poisoned = pages_recovered + pages_ignored + pages_failed +
  pages_delayed
* pages_poisoned * page_size = /proc/meminfo/HardwareCorrupted
These memory error stats are reset during machine boot.
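
For illustration, a minimal userspace sketch (not part of this series)
that reads node 0's counters and checks the first invariant; the fixed
node id and the bare-bones error handling are assumptions for brevity:

  /* read_mf_stats.c: print node 0's memory_failure counters and check
   * pages_poisoned == recovered + ignored + failed + delayed.
   * Build: cc -o read_mf_stats read_mf_stats.c
   */
  #include <stdio.h>

  static unsigned long read_counter(const char *name)
  {
          char path[256];
          unsigned long val = 0;
          FILE *f;

          snprintf(path, sizeof(path),
                   "/sys/devices/system/node/node0/memory_failure/%s", name);
          f = fopen(path, "r");
          if (!f)
                  return 0;   /* entry absent, e.g. !CONFIG_MEMORY_FAILURE */
          if (fscanf(f, "%lu", &val) != 1)
                  val = 0;
          fclose(f);
          return val;
  }

  int main(void)
  {
          unsigned long poisoned = read_counter("pages_poisoned");
          unsigned long sum = read_counter("pages_recovered") +
                              read_counter("pages_ignored") +
                              read_counter("pages_failed") +
                              read_counter("pages_delayed");

          printf("pages_poisoned=%lu, sum of resolutions=%lu (%s)\n",
                 poisoned, sum, poisoned == sum ? "consistent" : "mismatch");
          return 0;
  }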

The 1st commit introduces these sysfs entries. The 2nd commit populates
memory error stats every time memory_failure finishes memory error
recovery. The 3rd commit adds documentation for the introduced stats.

[1] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#mc22959244f5388891c523882e61163c6e4d703af
[2] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#m52d8d7a333d8536bd7ce74253298858b1c0c0ac6

Jiaqi Yan (3):
  mm: memory-failure: Add memory failure stats to sysfs
  mm: memory-failure: Bump memory failure stats to pglist_data
  mm: memory-failure: Document memory failure stats

 Documentation/ABI/stable/sysfs-devices-node | 39 +++++++++++
 drivers/base/node.c                         |  3 +
 include/linux/mm.h                          |  5 ++
 include/linux/mmzone.h                      | 28 ++++++++
 mm/memory-failure.c                         | 71 +++++++++++++++++++++
 5 files changed, 146 insertions(+)

-- 
2.39.0.314.g84b9a713c41-goog




* [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs
  2023-01-16 19:38 [PATCH v1 0/3] Introduce per NUMA node memory error statistics Jiaqi Yan
@ 2023-01-16 19:39 ` Jiaqi Yan
  2023-01-16 20:15   ` Andrew Morton
  2023-01-17  9:02   ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-16 19:39 ` [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data Jiaqi Yan
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-16 19:39 UTC (permalink / raw)
  To: tony.luck, naoya.horiguchi
  Cc: jiaqiyan, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

Today the kernel provides the following memory error info to userspace,
but each has its own disadvantages:
* HardwareCorrupted in /proc/meminfo: the total number of bytes poisoned,
  but no per-NUMA-node stats
* ras:memory_failure_event: only available after being explicitly enabled
* /dev/mcelog provides much useful info about the MCEs, but
  doesn't capture how memory_failure recovered memory MCEs
* kernel logs: userspace needs to process log text

Expose per-NUMA-node memory error stats as the following sysfs entries:

  /sys/devices/system/node/node${X}/memory_failure/pages_poisoned
  /sys/devices/system/node/node${X}/memory_failure/pages_recovered
  /sys/devices/system/node/node${X}/memory_failure/pages_ignored
  /sys/devices/system/node/node${X}/memory_failure/pages_failed
  /sys/devices/system/node/node${X}/memory_failure/pages_delayed

These counters describe how many raw pages are poisoned and, after the
attempted recoveries by the kernel, their resolutions: how many are
recovered, ignored, failed, or delayed, respectively.

The following math holds for the statistics:
* pages_poisoned = pages_recovered + pages_ignored + pages_failed +
                   pages_delayed
* pages_poisoned * PAGE_SIZE = /proc/meminfo/HardwareCorrupted

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 drivers/base/node.c    |  3 +++
 include/linux/mm.h     |  5 +++++
 include/linux/mmzone.h | 28 ++++++++++++++++++++++++++++
 mm/memory-failure.c    | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 71 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index faf3597a96da..b46db17124f3 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -586,6 +586,9 @@ static const struct attribute_group *node_dev_groups[] = {
 	&node_dev_group,
 #ifdef CONFIG_HAVE_ARCH_NODE_DEV_GROUP
 	&arch_node_dev_group,
+#endif
+#ifdef CONFIG_MEMORY_FAILURE
+	&memory_failure_attr_group,
 #endif
 	NULL
 };
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f3f196e4d66d..888576884eb9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3521,6 +3521,11 @@ enum mf_action_page_type {
 	MF_MSG_UNKNOWN,
 };
 
+/*
+ * Sysfs entries for memory failure handling statistics.
+ */
+extern const struct attribute_group memory_failure_attr_group;
+
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
 extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index cd28a100d9e4..0a14b35a96da 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1110,6 +1110,31 @@ struct deferred_split {
 };
 #endif
 
+#ifdef CONFIG_MEMORY_FAILURE
+/*
+ * Per NUMA node memory failure handling statistics.
+ */
+struct memory_failure_stats {
+	/*
+	 * Number of pages poisoned.
+	 * Cases not accounted: memory outside kernel control, offline page,
+	 * arch-specific memory_failure (SGX), and hwpoison_filter()
+	 * filtered error events.
+	 */
+	unsigned long pages_poisoned;
+	/*
+	 * Recovery results of poisoned pages handled by memory_failure,
+	 * in sync with mf_result.
+	 * pages_poisoned = pages_ignored + pages_failed +
+	 *		    pages_delayed + pages_recovered
+	 */
+	unsigned long pages_ignored;
+	unsigned long pages_failed;
+	unsigned long pages_delayed;
+	unsigned long pages_recovered;
+};
+#endif
+
 /*
  * On NUMA machines, each NUMA node would have a pg_data_t to describe
  * it's memory layout. On UMA machines there is a single pglist_data which
@@ -1253,6 +1278,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_NUMA
 	struct memory_tier __rcu *memtier;
 #endif
+#ifdef CONFIG_MEMORY_FAILURE
+	struct memory_failure_stats mf_stats;
+#endif
 } pg_data_t;
 
 #define node_present_pages(nid)	(NODE_DATA(nid)->node_present_pages)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index c77a9e37e27e..cb782fa552d5 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -87,6 +87,41 @@ inline void num_poisoned_pages_sub(unsigned long pfn, long i)
 		memblk_nr_poison_sub(pfn, i);
 }
 
+/**
+ * MF_ATTR_RO - Create a sysfs entry for each memory failure statistic.
+ * @_name: name of the file in the per-NUMA-node sysfs directory.
+ */
+#define MF_ATTR_RO(_name)					\
+static ssize_t _name##_show(struct device *dev,		\
+			    struct device_attribute *attr,	\
+			    char *buf)				\
+{								\
+	struct memory_failure_stats *mf_stats =			\
+		&NODE_DATA(dev->id)->mf_stats;			\
+	return sprintf(buf, "%lu\n", mf_stats->_name);		\
+}								\
+static DEVICE_ATTR_RO(_name)
+
+MF_ATTR_RO(pages_poisoned);
+MF_ATTR_RO(pages_ignored);
+MF_ATTR_RO(pages_failed);
+MF_ATTR_RO(pages_delayed);
+MF_ATTR_RO(pages_recovered);
+
+static struct attribute *memory_failure_attr[] = {
+	&dev_attr_pages_poisoned.attr,
+	&dev_attr_pages_ignored.attr,
+	&dev_attr_pages_failed.attr,
+	&dev_attr_pages_delayed.attr,
+	&dev_attr_pages_recovered.attr,
+	NULL,
+};
+
+const struct attribute_group memory_failure_attr_group = {
+	.name = "memory_failure",
+	.attrs = memory_failure_attr,
+};
+
 /*
  * Return values:
  *   1:   the page is dissolved (if needed) and taken off from buddy,
-- 
2.39.0.314.g84b9a713c41-goog




* [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-16 19:38 [PATCH v1 0/3] Introduce per NUMA node memory error statistics Jiaqi Yan
  2023-01-16 19:39 ` [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs Jiaqi Yan
@ 2023-01-16 19:39 ` Jiaqi Yan
  2023-01-16 20:16   ` Andrew Morton
  2023-01-16 19:39 ` [PATCH v1 3/3] mm: memory-failure: Document memory failure stats Jiaqi Yan
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-16 19:39 UTC (permalink / raw)
  To: tony.luck, naoya.horiguchi
  Cc: jiaqiyan, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

Right before memory_failure finishes its handling, accumulate the poisoned
page's resolution counter into pglist_data's memory_failure_stats, so that
the corresponding sysfs entries are updated.

Tested:
1) Start an application to allocate memory buffer chunks
2) Convert random memory buffer addresses to physical addresses
3) Inject memory errors using EINJ at chosen physical addresses
4) Access poisoned memory buffer and recover from SIGBUS (see sketch below)
5) Check counter values under
   /sys/devices/system/node/node*/memory_failure/pages_*
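
A minimal sketch of step 4 (not the actual test program; the EINJ
injection of step 3 happens outside this snippet, and the plain malloc
buffer is an assumption for brevity):

  /* sigbus_recover.c: survive the SIGBUS raised by touching poisoned
   * memory.  Build: cc -o sigbus_recover sigbus_recover.c
   */
  #include <setjmp.h>
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static sigjmp_buf recover_point;

  static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
  {
          (void)sig;
          (void)info;      /* info->si_addr holds the poisoned address */
          (void)ucontext;
          siglongjmp(recover_point, 1);   /* jump past the faulting access */
  }

  int main(void)
  {
          struct sigaction sa;
          char *buf = malloc(4096);

          memset(&sa, 0, sizeof(sa));
          sa.sa_sigaction = sigbus_handler;
          sa.sa_flags = SA_SIGINFO;
          sigaction(SIGBUS, &sa, NULL);

          /* ... inject an error at buf's physical address via EINJ ... */

          if (sigsetjmp(recover_point, 1) == 0)
                  (void)*(volatile char *)buf;  /* faults if page poisoned */
          else
                  printf("recovered from SIGBUS\n");
          return 0;
  }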

Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 mm/memory-failure.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index cb782fa552d5..c90417cfcda4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
 #undef slab
 #undef reserved
 
+static void update_per_node_mf_stats(unsigned long pfn,
+				     enum mf_result result)
+{
+	int nid = MAX_NUMNODES;
+	struct memory_failure_stats *mf_stats = NULL;
+
+	nid = pfn_to_nid(pfn);
+	if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
+		WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
+		return;
+	}
+
+	mf_stats = &NODE_DATA(nid)->mf_stats;
+	switch (result) {
+	case MF_IGNORED:
+		++mf_stats->pages_ignored;
+		break;
+	case MF_FAILED:
+		++mf_stats->pages_failed;
+		break;
+	case MF_DELAYED:
+		++mf_stats->pages_delayed;
+		break;
+	case MF_RECOVERED:
+		++mf_stats->pages_recovered;
+		break;
+	default:
+		WARN_ONCE(1, "Memory failure: mf_result=%d is not properly handled", result);
+		break;
+	}
+	++mf_stats->pages_poisoned;
+}
+
 /*
  * "Dirty/Clean" indication is not 100% accurate due to the possibility of
  * setting PG_dirty outside page lock. See also comment above set_page_dirty().
@@ -1237,6 +1270,9 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
 	trace_memory_failure_event(pfn, type, result);
 
 	num_poisoned_pages_inc(pfn);
+
+	update_per_node_mf_stats(pfn, result);
+
 	pr_err("%#lx: recovery action for %s: %s\n",
 		pfn, action_page_types[type], action_name[result]);
 
-- 
2.39.0.314.g84b9a713c41-goog




* [PATCH v1 3/3] mm: memory-failure: Document memory failure stats
  2023-01-16 19:38 [PATCH v1 0/3] Introduce per NUMA node memory error statistics Jiaqi Yan
  2023-01-16 19:39 ` [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs Jiaqi Yan
  2023-01-16 19:39 ` [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data Jiaqi Yan
@ 2023-01-16 19:39 ` Jiaqi Yan
  2023-01-16 20:13 ` [PATCH v1 0/3] Introduce per NUMA node memory error statistics Andrew Morton
  2023-01-17  9:18 ` HORIGUCHI NAOYA(堀口 直也)
  4 siblings, 0 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-16 19:39 UTC (permalink / raw)
  To: tony.luck, naoya.horiguchi
  Cc: jiaqiyan, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

Add documentation for memory_failure's per NUMA node sysfs entries.

Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 Documentation/ABI/stable/sysfs-devices-node | 39 +++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-devices-node b/Documentation/ABI/stable/sysfs-devices-node
index 8db67aa472f1..a2fefb0e61a7 100644
--- a/Documentation/ABI/stable/sysfs-devices-node
+++ b/Documentation/ABI/stable/sysfs-devices-node
@@ -182,3 +182,42 @@ Date:		November 2021
 Contact:	Jarkko Sakkinen <jarkko@kernel.org>
 Description:
 		The total amount of SGX physical memory in bytes.
+
+What:		/sys/devices/system/node/nodeX/memory_failure/pages_poisoned
+Date:		January 2023
+Contact:	Jiaqi Yan <jiaqiyan@google.com>
+Description:
+		The total number of raw poisoned pages (pages containing
+		corrupted data due to memory errors) on a NUMA node.
+
+What:		/sys/devices/system/node/nodeX/memory_failure/pages_ignored
+Date:		January 2023
+Contact:	Jiaqi Yan <jiaqiyan@google.com>
+Description:
+		Of the raw poisoned pages on a NUMA node, how many pages are
+		ignored by the memory error recovery attempt, usually because
+		support for this type of page is unavailable, and the kernel
+		gives up on the recovery.
+
+What:		/sys/devices/system/node/nodeX/memory_failure/pages_failed
+Date:		January 2023
+Contact:	Jiaqi Yan <jiaqiyan@google.com>
+Description:
+		Of the raw poisoned pages on a NUMA node, how many pages failed
+		to be recovered by the memory error recovery attempt. This
+		usually means a key recovery operation failed.
+
+What:		/sys/devices/system/node/nodeX/memory_failure/pages_delayed
+Date:		January 2023
+Contact:	Jiaqi Yan <jiaqiyan@google.com>
+Description:
+		Of the raw poisoned pages on a NUMA node, how many pages have
+		their recovery delayed by the memory error recovery attempt.
+		Delayed poisoned pages will usually be retried by the kernel.
+
+What:		/sys/devices/system/node/nodeX/memory_failure/pages_recovered
+Date:		January 2023
+Contact:	Jiaqi Yan <jiaqiyan@google.com>
+Description:
+		Of the raw poisoned pages on a NUMA node, how many pages are
+		recovered by the memory error recovery attempt.
-- 
2.39.0.314.g84b9a713c41-goog




* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-16 19:38 [PATCH v1 0/3] Introduce per NUMA node memory error statistics Jiaqi Yan
                   ` (2 preceding siblings ...)
  2023-01-16 19:39 ` [PATCH v1 3/3] mm: memory-failure: Document memory failure stats Jiaqi Yan
@ 2023-01-16 20:13 ` Andrew Morton
  2023-01-16 21:52   ` Jiaqi Yan
  2023-01-17  9:18 ` HORIGUCHI NAOYA(堀口 直也)
  4 siblings, 1 reply; 22+ messages in thread
From: Andrew Morton @ 2023-01-16 20:13 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: tony.luck, naoya.horiguchi, duenwen, rientjes, linux-mm,
	shy828301, wangkefeng.wang

On Mon, 16 Jan 2023 19:38:59 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:

> Background
> ==========
> In the RFC for Kernel Support of Memory Error Detection [1], one advantage
> of software-based scanning over hardware patrol scrubber is the ability
> to make statistics visible to system administrators. The statistics
> include 2 categories:
> * Memory error statistics, for example, how many memory error are
>   encountered, how many of them are recovered by the kernel. Note these
>   memory errors are non-fatal to kernel: during the machine check
>   exception (MCE) handling kernel already classified MCE's severity to
>   be unnecessary to panic (but either action required or optional).
> * Scanner statistics, for example how many times the scanner have fully
>   scanned a NUMA node, how many errors are first detected by the scanner.
> 
> The memory error statistics are useful to userspace and actually not
> specific to scanner detected memory errors, and are the focus of this RFC.

I assume this is a leftover and this is no longer "RFC".

I'd normally sit back and await reviewer input, but this series is
simple, so I'll slurp it up so we get some testing while that review is
ongoing.





* Re: [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs
  2023-01-16 19:39 ` [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs Jiaqi Yan
@ 2023-01-16 20:15   ` Andrew Morton
  2023-01-17  9:14     ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-17  9:02   ` HORIGUCHI NAOYA(堀口 直也)
  1 sibling, 1 reply; 22+ messages in thread
From: Andrew Morton @ 2023-01-16 20:15 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: tony.luck, naoya.horiguchi, duenwen, rientjes, linux-mm,
	shy828301, wangkefeng.wang

On Mon, 16 Jan 2023 19:39:00 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:

> +/*
> + * Per NUMA node memory failure handling statistics.
> + */
> +struct memory_failure_stats {
> +	/*
> +	 * Number of pages poisoned.
> +	 * Cases not accounted: memory outside kernel control, offline page,
> +	 * arch-specific memory_failure (SGX), and hwpoison_filter()
> +	 * filtered error events.
> +	 */
> +	unsigned long pages_poisoned;
> +	/*
> +	 * Recovery results of poisoned pages handled by memory_failure,
> +	 * in sync with mf_result.
> +	 * pages_poisoned = pages_ignored + pages_failed +
> +	 *		    pages_delayed + pages_recovered
> +	 */
> +	unsigned long pages_ignored;
> +	unsigned long pages_failed;
> +	unsigned long pages_delayed;
> +	unsigned long pages_recovered;
> +};

I don't think the "pages_" prefix here adds much value - a simple code
comment saying "everything counts in pages" should suffice.  If you're feeling
deprived of columnar space, maybe just remove?  Or not ;)



* Re: [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-16 19:39 ` [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data Jiaqi Yan
@ 2023-01-16 20:16   ` Andrew Morton
  2023-01-17  9:03     ` HORIGUCHI NAOYA(堀口 直也)
  0 siblings, 1 reply; 22+ messages in thread
From: Andrew Morton @ 2023-01-16 20:16 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: tony.luck, naoya.horiguchi, duenwen, rientjes, linux-mm,
	shy828301, wangkefeng.wang

On Mon, 16 Jan 2023 19:39:01 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:

> Right before memory_failure finishes its handling, accumulate poisoned
> page's resolution counters to pglist_data's memory_failure_stats, so as
> to update the corresponding sysfs entries.
> 
> Tested:
> 1) Start an application to allocate memory buffer chunks
> 2) Convert random memory buffer addresses to physical addresses
> 3) Inject memory errors using EINJ at chosen physical addresses
> 4) Access poisoned memory buffer and recover from SIGBUS
> 5) Check counter values under
>    /sys/devices/system/node/node*/memory_failure/pages_*
> 
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
>  #undef slab
>  #undef reserved
>  
> +static void update_per_node_mf_stats(unsigned long pfn,
> +				     enum mf_result result)
> +{
> +	int nid = MAX_NUMNODES;
> +	struct memory_failure_stats *mf_stats = NULL;
> +
> +	nid = pfn_to_nid(pfn);
> +	if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
> +		WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
> +		return;
> +	}
> +
> +	mf_stats = &NODE_DATA(nid)->mf_stats;
> +	switch (result) {
> +	case MF_IGNORED:
> +		++mf_stats->pages_ignored;

What is the locking here, to prevent concurrent increments?

> +		break;
> +	case MF_FAILED:
> +		++mf_stats->pages_failed;
> +		break;
> +	case MF_DELAYED:
> +		++mf_stats->pages_delayed;
> +		break;
> +	case MF_RECOVERED:
> +		++mf_stats->pages_recovered;
> +		break;
> +	default:
> +		WARN_ONCE(1, "Memory failure: mf_result=%d is not properly handled", result);
> +		break;
> +	}
> +	++mf_stats->pages_poisoned;
> +}




* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-16 20:13 ` [PATCH v1 0/3] Introduce per NUMA node memory error statistics Andrew Morton
@ 2023-01-16 21:52   ` Jiaqi Yan
  0 siblings, 0 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-16 21:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: tony.luck, naoya.horiguchi, duenwen, rientjes, linux-mm,
	shy828301, wangkefeng.wang


On Mon, Jan 16, 2023 at 12:13 PM Andrew Morton <akpm@linux-foundation.org>
wrote:

> On Mon, 16 Jan 2023 19:38:59 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
>
> > Background
> > ==========
> > In the RFC for Kernel Support of Memory Error Detection [1], one
> advantage
> > of software-based scanning over hardware patrol scrubber is the ability
> > to make statistics visible to system administrators. The statistics
> > include 2 categories:
> > * Memory error statistics, for example, how many memory error are
> >   encountered, how many of them are recovered by the kernel. Note these
> >   memory errors are non-fatal to kernel: during the machine check
> >   exception (MCE) handling kernel already classified MCE's severity to
> >   be unnecessary to panic (but either action required or optional).
> > * Scanner statistics, for example how many times the scanner have fully
> >   scanned a NUMA node, how many errors are first detected by the scanner.
> >
> > The memory error statistics are useful to userspace and actually not
> > specific to scanner detected memory errors, and are the focus of this
> RFC.
>
> I assume this is a leftover and this is no longer "RFC".
>
> I'd normally sit back and await reviewer input, but this series is
> simple, so I'll slurp it up so we get some testing while that review is
> ongoing.
>

Ah, yes, my typo; my intent was PATCH.
I did test the patches on several test hosts I have, but more testing is
always better. Thanks, Andrew!



* Re: [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs
  2023-01-16 19:39 ` [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs Jiaqi Yan
  2023-01-16 20:15   ` Andrew Morton
@ 2023-01-17  9:02   ` HORIGUCHI NAOYA(堀口 直也)
  1 sibling, 0 replies; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-17  9:02 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: tony.luck, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

On Mon, Jan 16, 2023 at 07:39:00PM +0000, Jiaqi Yan wrote:
> Today kernel provides following memory error info to userspace, but each
> has its own disadvantage
> * HardwareCorrupted in /proc/meminfo: number of bytes poisoned in total,
>   not per NUMA node stats though
> * ras:memory_failure_event: only available after explicitly enabled
> * /dev/mcelog provides many useful info about the MCEs, but
>   doesn't capture how memory_failure recovered memory MCEs
> * kernel logs: userspace needs to process log text
> 
> Exposes per NUMA node memory error stats as sysfs entries:
> 
>   /sys/devices/system/node/node${X}/memory_failure/pages_poisoned
>   /sys/devices/system/node/node${X}/memory_failure/pages_recovered
>   /sys/devices/system/node/node${X}/memory_failure/pages_ignored
>   /sys/devices/system/node/node${X}/memory_failure/pages_failed
>   /sys/devices/system/node/node${X}/memory_failure/pages_delayed
> 
> These counters describe how many raw pages are poisoned and after the
> attempted recoveries by the kernel, their resolutions: how many are
> recovered, ignored, failed, or delayed respectively.
> 
> The following math holds for the statistics:
> * pages_poisoned = pages_recovered + pages_ignored + pages_failed +
>                    pages_delayed
> * pages_poisoned * PAGE_SIZE = /proc/meminfo/HardwareCorrupted
> 
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
...

> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index cd28a100d9e4..0a14b35a96da 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1110,6 +1110,31 @@ struct deferred_split {
>  };
>  #endif
>  
> +#ifdef CONFIG_MEMORY_FAILURE
> +/*
> + * Per NUMA node memory failure handling statistics.
> + */
> +struct memory_failure_stats {
> +	/*
> +	 * Number of pages poisoned.
> +	 * Cases not accounted: memory outside kernel control, offline page,
> +	 * arch-specific memory_failure (SGX), and hwpoison_filter()
> +	 * filtered error events.
> +	 */

Yes, this comment is important.  So the sum of the pages_poisoned counters
over NUMA nodes can mismatch the global counter shown in /proc/meminfo.
But this makes the code simple, and maybe the new stats info is useful
enough even without supporting the special cases.  So I'm OK with this.

BTW, maybe "unpoison" can also be mentioned here?

Thanks,
Naoya Horiguchi


* Re: [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-16 20:16   ` Andrew Morton
@ 2023-01-17  9:03     ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-18 23:05       ` Jiaqi Yan
  0 siblings, 1 reply; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-17  9:03 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Jiaqi Yan, tony.luck, duenwen, rientjes, linux-mm, shy828301,
	wangkefeng.wang

On Mon, Jan 16, 2023 at 12:16:13PM -0800, Andrew Morton wrote:
> On Mon, 16 Jan 2023 19:39:01 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
...
> > @@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
> >  #undef slab
> >  #undef reserved
> >  
> > +static void update_per_node_mf_stats(unsigned long pfn,
> > +				     enum mf_result result)
> > +{
> > +	int nid = MAX_NUMNODES;
> > +	struct memory_failure_stats *mf_stats = NULL;
> > +
> > +	nid = pfn_to_nid(pfn);
> > +	if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
> > +		WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
> > +		return;
> > +	}
> > +
> > +	mf_stats = &NODE_DATA(nid)->mf_stats;
> > +	switch (result) {
> > +	case MF_IGNORED:
> > +		++mf_stats->pages_ignored;
> 
> What is the locking here, to prevent concurrent increments?

mf_mutex should prevent the race.

Thanks,
Naoya Horiguchi


* Re: [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs
  2023-01-16 20:15   ` Andrew Morton
@ 2023-01-17  9:14     ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-19 21:16       ` Jiaqi Yan
  0 siblings, 1 reply; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-17  9:14 UTC (permalink / raw)
  To: Jiaqi Yan, Andrew Morton
  Cc: tony.luck, duenwen, rientjes, linux-mm, shy828301, wangkefeng.wang

On Mon, Jan 16, 2023 at 12:15:33PM -0800, Andrew Morton wrote:
> On Mon, 16 Jan 2023 19:39:00 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
> 
> > +/*
> > + * Per NUMA node memory failure handling statistics.
> > + */
> > +struct memory_failure_stats {
> > +	/*
> > +	 * Number of pages poisoned.
> > +	 * Cases not accounted: memory outside kernel control, offline page,
> > +	 * arch-specific memory_failure (SGX), and hwpoison_filter()
> > +	 * filtered error events.
> > +	 */
> > +	unsigned long pages_poisoned;
> > +	/*
> > +	 * Recovery results of poisoned pages handled by memory_failure,
> > +	 * in sync with mf_result.
> > +	 * pages_poisoned = pages_ignored + pages_failed +
> > +	 *		    pages_delayed + pages_recovered
> > +	 */
> > +	unsigned long pages_ignored;
> > +	unsigned long pages_failed;
> > +	unsigned long pages_delayed;
> > +	unsigned long pages_recovered;
> > +};
> 
> I don't think the "pages_" here add much value - a simple code comment
> saying "everything counts in pages" should suffice.  If you're feeling
> deprived of columnar space, maybe just remove?  Or not ;)

I personally feel more like removing the "pages_".  And I feel that the
main counter can be renamed to pages_total or total, because that shows the
relationship among the counters, and the structure name "memory_failure_stats"
already implies that the counters are about hwpoisoned pages.

Thanks,
Naoya Horiguchi


* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-16 19:38 [PATCH v1 0/3] Introduce per NUMA node memory error statistics Jiaqi Yan
                   ` (3 preceding siblings ...)
  2023-01-16 20:13 ` [PATCH v1 0/3] Introduce per NUMA node memory error statistics Andrew Morton
@ 2023-01-17  9:18 ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-17 17:51   ` Jiaqi Yan
  4 siblings, 1 reply; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-17  9:18 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: tony.luck, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

On Mon, Jan 16, 2023 at 07:38:59PM +0000, Jiaqi Yan wrote:
> Background
> ==========
> In the RFC for Kernel Support of Memory Error Detection [1], one advantage
> of software-based scanning over hardware patrol scrubber is the ability
> to make statistics visible to system administrators. The statistics
> include 2 categories:
> * Memory error statistics, for example, how many memory error are
>   encountered, how many of them are recovered by the kernel. Note these
>   memory errors are non-fatal to kernel: during the machine check
>   exception (MCE) handling kernel already classified MCE's severity to
>   be unnecessary to panic (but either action required or optional).
> * Scanner statistics, for example how many times the scanner have fully
>   scanned a NUMA node, how many errors are first detected by the scanner.
> 
> The memory error statistics are useful to userspace and actually not
> specific to scanner detected memory errors, and are the focus of this RFC.
> 
> Motivation
> ==========
> Memory error stats are important to userspace but insufficient in kernel
> today. Datacenter administrators can better monitor a machine's memory
> health with the visible stats. For example, while memory errors are
> inevitable on servers with 10+ TB memory, starting server maintenance
> when there are only 1~2 recovered memory errors could be overreacting;
> in cloud production environment maintenance usually means live migrate
> all the workload running on the server and this usually causes nontrivial
> disruption to the customer. Providing insight into the scope of memory
> errors on a system helps to determine the appropriate follow-up action.
> In addition, the kernel's existing memory error stats need to be
> standardized so that userspace can reliably count on their usefulness.
> 
> Today kernel provides following memory error info to userspace, but they
> are not sufficient or have disadvantages:
> * HardwareCorrupted in /proc/meminfo: number of bytes poisoned in total,
>   not per NUMA node stats though
> * ras:memory_failure_event: only available after explicitly enabled
> * /dev/mcelog provides many useful info about the MCEs, but doesn't
>   capture how memory_failure recovered memory MCEs
> * kernel logs: userspace needs to process log text
> 
> Exposing memory error stats is also a good start for the in-kernel memory
> error detector. Today the data source of memory error stats are either
> direct memory error consumption, or hardware patrol scrubber detection
> (when signaled as UCNA; these signaled as SRAO are not handled by
> memory_failure).

Sorry, I don't follow this "(...)" part, so let me ask a question.  I thought
that SRAO events are handled by memory_failure and UCNA events are not, so
does this say the opposite?

Other than that, the whole description sounds nice and convincing to me.
Thank you for your work.

- Naoya Horiguchi

> Once in-kernel memory scanner is implemented, it will be
> the main source as it is usually configured to scan memory DIMMs constantly
> and faster than hardware patrol scrubber.
> 
> How Implemented
> ===============
> As Naoya pointed out [2], exposing memory error statistics to userspace
> is useful independent of software or hardware scanner. Therefore we
> implement the memory error statistics independent of the in-kernel memory
> error detector. It exposes the following per NUMA node memory error
> counters:
> 
>   /sys/devices/system/node/node${X}/memory_failure/pages_poisoned
>   /sys/devices/system/node/node${X}/memory_failure/pages_recovered
>   /sys/devices/system/node/node${X}/memory_failure/pages_ignored
>   /sys/devices/system/node/node${X}/memory_failure/pages_failed
>   /sys/devices/system/node/node${X}/memory_failure/pages_delayed
> 
> These counters describe how many raw pages are poisoned and after the
> attempted recoveries by the kernel, their resolutions: how many are
> recovered, ignored, failed, or delayed respectively. This approach can be
> easier to extend for future use cases than /proc/meminfo, trace event,
> and log. The following math holds for the statistics:
> * pages_poisoned = pages_recovered + pages_ignored + pages_failed +
>   pages_delayed
> * pages_poisoned * page_size = /proc/meminfo/HardwareCorrupted
> These memory error stats are reset during machine boot.
> 
> The 1st commit introduces these sysfs entries. The 2nd commit populates
> memory error stats every time memory_failure finishes memory error
> recovery. The 3rd commit adds documentations for introduced stats.
> 
> [1] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#mc22959244f5388891c523882e61163c6e4d703af
> [2] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#m52d8d7a333d8536bd7ce74253298858b1c0c0ac6
> 
> Jiaqi Yan (3):
>   mm: memory-failure: Add memory failure stats to sysfs
>   mm: memory-failure: Bump memory failure stats to pglist_data
>   mm: memory-failure: Document memory failure stats
> 
>  Documentation/ABI/stable/sysfs-devices-node | 39 +++++++++++
>  drivers/base/node.c                         |  3 +
>  include/linux/mm.h                          |  5 ++
>  include/linux/mmzone.h                      | 28 ++++++++
>  mm/memory-failure.c                         | 71 +++++++++++++++++++++
>  5 files changed, 146 insertions(+)
> 
> -- 
> 2.39.0.314.g84b9a713c41-goog


* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-17  9:18 ` HORIGUCHI NAOYA(堀口 直也)
@ 2023-01-17 17:51   ` Jiaqi Yan
  2023-01-17 18:33     ` Luck, Tony
  0 siblings, 1 reply; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-17 17:51 UTC (permalink / raw)
  To: HORIGUCHI NAOYA(堀口 直也)
  Cc: tony.luck, duenwen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

On Tue, Jan 17, 2023 at 1:19 AM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@nec.com> wrote:
>
> On Mon, Jan 16, 2023 at 07:38:59PM +0000, Jiaqi Yan wrote:
> > Background
> > ==========
> > In the RFC for Kernel Support of Memory Error Detection [1], one advantage
> > of software-based scanning over hardware patrol scrubber is the ability
> > to make statistics visible to system administrators. The statistics
> > include 2 categories:
> > * Memory error statistics, for example, how many memory error are
> >   encountered, how many of them are recovered by the kernel. Note these
> >   memory errors are non-fatal to kernel: during the machine check
> >   exception (MCE) handling kernel already classified MCE's severity to
> >   be unnecessary to panic (but either action required or optional).
> > * Scanner statistics, for example how many times the scanner have fully
> >   scanned a NUMA node, how many errors are first detected by the scanner.
> >
> > The memory error statistics are useful to userspace and actually not
> > specific to scanner detected memory errors, and are the focus of this RFC.
> >
> > Motivation
> > ==========
> > Memory error stats are important to userspace but insufficient in kernel
> > today. Datacenter administrators can better monitor a machine's memory
> > health with the visible stats. For example, while memory errors are
> > inevitable on servers with 10+ TB memory, starting server maintenance
> > when there are only 1~2 recovered memory errors could be overreacting;
> > in cloud production environment maintenance usually means live migrate
> > all the workload running on the server and this usually causes nontrivial
> > disruption to the customer. Providing insight into the scope of memory
> > errors on a system helps to determine the appropriate follow-up action.
> > In addition, the kernel's existing memory error stats need to be
> > standardized so that userspace can reliably count on their usefulness.
> >
> > Today kernel provides following memory error info to userspace, but they
> > are not sufficient or have disadvantages:
> > * HardwareCorrupted in /proc/meminfo: number of bytes poisoned in total,
> >   not per NUMA node stats though
> > * ras:memory_failure_event: only available after explicitly enabled
> > * /dev/mcelog provides many useful info about the MCEs, but doesn't
> >   capture how memory_failure recovered memory MCEs
> > * kernel logs: userspace needs to process log text
> >
> > Exposing memory error stats is also a good start for the in-kernel memory
> > error detector. Today the data source of memory error stats are either
> > direct memory error consumption, or hardware patrol scrubber detection
> > (when signaled as UCNA; these signaled as SRAO are not handled by
> > memory_failure).
>
> Sorry, I don't follow this "(...)" part, so let me question. I thought that
> SRAO events are handled by memory_failure and UCNA events are not, so does
> this say the opposite?

I think UCNA is definitely handled by memory_failure, but I was not
correct about SRAO. According to the Intel® 64 and IA-32 Architectures
Software Developer's Manual, Volume 3B: System Programming Guide, Part
2, Section 15.6.3: SRAO can be signaled **either via MCE or via
CMCI**.

For SRAO signaled via a **machine check exception**, my reading of the
current x86 MCE code is this:
1) kill_current_task is initialized to 0, and as long as the restart IP
is valid (MCG_STATUS_RIPV = 1), it remains 0:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n1473
2) after classifying severity, worst should be MCE_AO_SEVERITY:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n1496
3) therefore, do_machine_check just skips kill_me_now or
kill_me_maybe, and directly goes to out:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n1539

For UCNA and SRAO signaled via CMCI, the CMCI handler should eventually
call into memory_failure via uc_decode_notifier
(MCE_UCNA_SEVERITY == MCE_DEFERRED_SEVERITY):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n579

So it seems the signaling mechanism matters.

>
> Other than that, the whole description sounds nice and convincing to me.
> Thank you for your work.
>
> - Naoya Horiguchi
>
> > Once in-kernel memory scanner is implemented, it will be
> > the main source as it is usually configured to scan memory DIMMs constantly
> > and faster than hardware patrol scrubber.
> >
> > How Implemented
> > ===============
> > As Naoya pointed out [2], exposing memory error statistics to userspace
> > is useful independent of software or hardware scanner. Therefore we
> > implement the memory error statistics independent of the in-kernel memory
> > error detector. It exposes the following per NUMA node memory error
> > counters:
> >
> >   /sys/devices/system/node/node${X}/memory_failure/pages_poisoned
> >   /sys/devices/system/node/node${X}/memory_failure/pages_recovered
> >   /sys/devices/system/node/node${X}/memory_failure/pages_ignored
> >   /sys/devices/system/node/node${X}/memory_failure/pages_failed
> >   /sys/devices/system/node/node${X}/memory_failure/pages_delayed
> >
> > These counters describe how many raw pages are poisoned and after the
> > attempted recoveries by the kernel, their resolutions: how many are
> > recovered, ignored, failed, or delayed respectively. This approach can be
> > easier to extend for future use cases than /proc/meminfo, trace event,
> > and log. The following math holds for the statistics:
> > * pages_poisoned = pages_recovered + pages_ignored + pages_failed +
> >   pages_delayed
> > * pages_poisoned * page_size = /proc/meminfo/HardwareCorrupted
> > These memory error stats are reset during machine boot.
> >
> > The 1st commit introduces these sysfs entries. The 2nd commit populates
> > memory error stats every time memory_failure finishes memory error
> > recovery. The 3rd commit adds documentations for introduced stats.
> >
> > [1] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#mc22959244f5388891c523882e61163c6e4d703af
> > [2] https://lore.kernel.org/linux-mm/7E670362-C29E-4626-B546-26530D54F937@gmail.com/T/#m52d8d7a333d8536bd7ce74253298858b1c0c0ac6
> >
> > Jiaqi Yan (3):
> >   mm: memory-failure: Add memory failure stats to sysfs
> >   mm: memory-failure: Bump memory failure stats to pglist_data
> >   mm: memory-failure: Document memory failure stats
> >
> >  Documentation/ABI/stable/sysfs-devices-node | 39 +++++++++++
> >  drivers/base/node.c                         |  3 +
> >  include/linux/mm.h                          |  5 ++
> >  include/linux/mmzone.h                      | 28 ++++++++
> >  mm/memory-failure.c                         | 71 +++++++++++++++++++++
> >  5 files changed, 146 insertions(+)
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog



* RE: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-17 17:51   ` Jiaqi Yan
@ 2023-01-17 18:33     ` Luck, Tony
  2023-01-18 17:31       ` Jiaqi Yan
  0 siblings, 1 reply; 22+ messages in thread
From: Luck, Tony @ 2023-01-17 18:33 UTC (permalink / raw)
  To: Jiaqi Yan, HORIGUCHI NAOYA(堀口 直也)
  Cc: Hsiao, Duen-wen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

> For SRAO signaled via **machine check exception**, my reading of the
> current x86 MCE code is this:
...
> 3) therefore, do_machine_check just skips kill_me_now or
> kill_me_maybe, and directly goto out:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n1539

That does appear to be what we do. But it looks like a regression from older
behavior. An SRAO machine check *ought* to call memory_failure() without
the MF_ACTION_REQUIRED bit set in flags.

-Tony



* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-17 18:33     ` Luck, Tony
@ 2023-01-18 17:31       ` Jiaqi Yan
  2023-01-18 17:50         ` Luck, Tony
  0 siblings, 1 reply; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-18 17:31 UTC (permalink / raw)
  To: Luck, Tony
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hsiao, Duen-wen, rientjes, linux-mm, shy828301, akpm,
	wangkefeng.wang

On Tue, Jan 17, 2023 at 10:34 AM Luck, Tony <tony.luck@intel.com> wrote:
>
> > For SRAO signaled via **machine check exception**, my reading of the
> > current x86 MCE code is this:
> ...
> > 3) therefore, do_machine_check just skips kill_me_now or
> > kill_me_maybe, and directly goto out:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/kernel/cpu/mce/core.c#n1539
>
> That does appear to be what we do. But it looks like a regression from older
> behavior. An SRAO machine check *ought* to call memory_failure() without
> the MF_ACTION_REQUIRED bit set in flags.
>
> -Tony
>

Oh, maybe SRAO signaled via MCE calls memory_failure() with these
async code paths?

1. __mc_scan_banks => mce_log => mce_gen_pool_add + irq_work_queue(mce_irq_work)

2. mce_irq_work_cb => mce_schedule_work => schedule_work(&mce_work)

3. mce_work => mce_gen_pool_process =>
blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, mce)
  => mce_uc_nb => uc_decode_notifier => memory_failure



* RE: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-18 17:31       ` Jiaqi Yan
@ 2023-01-18 17:50         ` Luck, Tony
  2023-01-18 23:33           ` Jiaqi Yan
  0 siblings, 1 reply; 22+ messages in thread
From: Luck, Tony @ 2023-01-18 17:50 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hsiao, Duen-wen, rientjes, linux-mm, shy828301, akpm,
	wangkefeng.wang

> Oh, maybe SRAO signaled via MCE calls memory_failure() with these
> async code paths?
>
> 1. __mc_scan_banks => mce_log => mce_gen_pool_add + irq_work_queue(mce_irq_work)
>
> 2. mce_irq_work_cb => mce_schedule_work => schedule_work(&mce_work)
>
> 3. mce_work => mce_gen_pool_process =>
> blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, mce)
>  => mce_uc_nb => uc_decode_notifier => memory_failure

Yes. That's right. Both SRAO (#MC) and UCNA (CMCI) will follow that path.
So memory_failure() is called from the kthread context for that notifier.

-Tony


* Re: [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-17  9:03     ` HORIGUCHI NAOYA(堀口 直也)
@ 2023-01-18 23:05       ` Jiaqi Yan
  2023-01-19  6:40         ` HORIGUCHI NAOYA(堀口 直也)
  0 siblings, 1 reply; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-18 23:05 UTC (permalink / raw)
  To: HORIGUCHI NAOYA(堀口 直也)
  Cc: Andrew Morton, tony.luck, duenwen, rientjes, linux-mm, shy828301,
	wangkefeng.wang

On Tue, Jan 17, 2023 at 1:03 AM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@nec.com> wrote:
>
> On Mon, Jan 16, 2023 at 12:16:13PM -0800, Andrew Morton wrote:
> > On Mon, 16 Jan 2023 19:39:01 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
> ...
> > > @@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
> > >  #undef slab
> > >  #undef reserved
> > >
> > > +static void update_per_node_mf_stats(unsigned long pfn,
> > > +                                enum mf_result result)
> > > +{
> > > +   int nid = MAX_NUMNODES;
> > > +   struct memory_failure_stats *mf_stats = NULL;
> > > +
> > > +   nid = pfn_to_nid(pfn);
> > > +   if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
> > > +           WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
> > > +           return;
> > > +   }
> > > +
> > > +   mf_stats = &NODE_DATA(nid)->mf_stats;
> > > +   switch (result) {
> > > +   case MF_IGNORED:
> > > +           ++mf_stats->pages_ignored;
> >
> > What is the locking here, to prevent concurrent increments?
>
> mf_mutex should prevent the race.

Since we are only incrementing these counters, I wonder if it makes
sense to convert them into atomic_long_t (encapsulated inside
memory_failure_stats) and do atomic_long_add, instead of introducing a
struct mutex?
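
Roughly the following sketch (untested; the names follow patch 1):

  /* the counters become atomic_long_t instead of unsigned long */
  struct memory_failure_stats {
          atomic_long_t pages_poisoned;
          atomic_long_t pages_ignored;
          atomic_long_t pages_failed;
          atomic_long_t pages_delayed;
          atomic_long_t pages_recovered;
  };

  /* each arm of the switch in update_per_node_mf_stats(), e.g.: */
  case MF_IGNORED:
          atomic_long_inc(&mf_stats->pages_ignored);
          break;

  /* and the show function generated by MF_ATTR_RO would read: */
  return sprintf(buf, "%ld\n", atomic_long_read(&mf_stats->_name));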

>
> Thanks,
> Naoya Horiguchi



* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-18 17:50         ` Luck, Tony
@ 2023-01-18 23:33           ` Jiaqi Yan
  2023-01-19  4:52             ` HORIGUCHI NAOYA(堀口 直也)
  0 siblings, 1 reply; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-18 23:33 UTC (permalink / raw)
  To: Luck, Tony
  Cc: HORIGUCHI NAOYA(堀口 直也),
	Hsiao, Duen-wen, rientjes, linux-mm, shy828301, akpm,
	wangkefeng.wang

Awesome, thanks Tony, I will correct my cover letter.

On Wed, Jan 18, 2023 at 9:51 AM Luck, Tony <tony.luck@intel.com> wrote:
>
> > Oh, maybe SRAO signaled via MCE calls memory_failure() with these
> > async code paths?
> >
> > 1. __mc_scan_banks => mce_log => mce_gen_pool_add + irq_work_queue(mce_irq_work)
> >
> > 2. mce_irq_work_cb => mce_schedule_work => schedule_work(&mce_work)
> >
> > 3. mce_work => mce_gen_pool_process =>
> > blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, mce)
> >  => mce_uc_nb => uc_decode_notifier => memory_failure
>
> Yes. That's right. Both SRAO (#MC) and UCNA (CMCI) will follow that path.
> So memory_failure() is called from the kthread context for that notifier.
>
> -Tony



* Re: [PATCH v1 0/3] Introduce per NUMA node memory error statistics
  2023-01-18 23:33           ` Jiaqi Yan
@ 2023-01-19  4:52             ` HORIGUCHI NAOYA(堀口 直也)
  0 siblings, 0 replies; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-19  4:52 UTC (permalink / raw)
  To: Jiaqi Yan, Luck, Tony
  Cc: Hsiao, Duen-wen, rientjes, linux-mm, shy828301, akpm, wangkefeng.wang

On Wed, Jan 18, 2023 at 03:33:00PM -0800, Jiaqi Yan wrote:
> Awesome, thanks Tony, I will correct my cover letter.
> 
> On Wed, Jan 18, 2023 at 9:51 AM Luck, Tony <tony.luck@intel.com> wrote:
> >
> > > Oh, maybe SRAO signaled via MCE calls memory_failure() with these
> > > async code paths?
> > >
> > > 1. __mc_scan_banks => mce_log => mce_gen_pool_add + irq_work_queue(mce_irq_work)
> > >
> > > 2. mce_irq_work_cb => mce_schedule_work => schedule_work(&mce_work)
> > >
> > > 3. mce_work => mce_gen_pool_process =>
> > > blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, mce)
> > >  => mce_uc_nb => uc_decode_notifier => memory_failure
> >
> > Yes. That's right. Both SRAO (#MC) and UCNA (CMCI) will follow that path.
> > So memory_failure() is called from the kthread context for that notifier.

I misunderstood the behavior of UCNA (CMCI), so thank you both for
confirming the behavior.

- Naoya Horiguchi


* Re: [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-18 23:05       ` Jiaqi Yan
@ 2023-01-19  6:40         ` HORIGUCHI NAOYA(堀口 直也)
  2023-01-19 18:05           ` Jiaqi Yan
  0 siblings, 1 reply; 22+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2023-01-19  6:40 UTC (permalink / raw)
  To: Jiaqi Yan
  Cc: Andrew Morton, tony.luck, duenwen, rientjes, linux-mm, shy828301,
	wangkefeng.wang

On Wed, Jan 18, 2023 at 03:05:21PM -0800, Jiaqi Yan wrote:
> On Tue, Jan 17, 2023 at 1:03 AM HORIGUCHI NAOYA(堀口 直也)
> <naoya.horiguchi@nec.com> wrote:
> >
> > On Mon, Jan 16, 2023 at 12:16:13PM -0800, Andrew Morton wrote:
> > > On Mon, 16 Jan 2023 19:39:01 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
> > ...
> > > > @@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
> > > >  #undef slab
> > > >  #undef reserved
> > > >
> > > > +static void update_per_node_mf_stats(unsigned long pfn,
> > > > +                                enum mf_result result)
> > > > +{
> > > > +   int nid = MAX_NUMNODES;
> > > > +   struct memory_failure_stats *mf_stats = NULL;
> > > > +
> > > > +   nid = pfn_to_nid(pfn);
> > > > +   if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
> > > > +           WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
> > > > +           return;
> > > > +   }
> > > > +
> > > > +   mf_stats = &NODE_DATA(nid)->mf_stats;
> > > > +   switch (result) {
> > > > +   case MF_IGNORED:
> > > > +           ++mf_stats->pages_ignored;
> > >
> > > What is the locking here, to prevent concurrent increments?
> >
> > mf_mutex should prevent the race.
> 
> Since we are only incrementing these counters, I wonder if it makes
> sense to convert them into atomic_long_t (encapsulated inside
> memory_failure_stats) and do atomic_long_add, instead of introducing a
> struct mutex?

I thought that all callers of action_result() (which is the only caller of
update_per_node_mf_stats()) run while holding mf_mutex, taken at the
beginning of memory_failure(), so concurrent increments should not
happen on the new counters.  But writing the counters with an atomic
type/API is fine with me.
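
For reference, the serialization described here is roughly the following
(a heavily simplified sketch of mm/memory-failure.c, not the exact code):

  static DEFINE_MUTEX(mf_mutex);

  int memory_failure(unsigned long pfn, int flags)
  {
          int res;

          mutex_lock(&mf_mutex);
          /* ... page lookup and recovery actions ... */
          res = action_result(pfn, type, result); /* counters bumped here,
                                                   * under mf_mutex */
          mutex_unlock(&mf_mutex);
          return res;
  }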

Thanks,
Naoya Horiguchi


* Re: [PATCH v1 2/3] mm: memory-failure: Bump memory failure stats to pglist_data
  2023-01-19  6:40         ` HORIGUCHI NAOYA(堀口 直也)
@ 2023-01-19 18:05           ` Jiaqi Yan
  0 siblings, 0 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-19 18:05 UTC (permalink / raw)
  To: HORIGUCHI NAOYA(堀口 直也)
  Cc: Andrew Morton, tony.luck, duenwen, rientjes, linux-mm, shy828301,
	wangkefeng.wang

Oh, sorry, I misunderstood; I thought you were suggesting introducing a
new mutex for these counters.
With the existing mf_mutex, I don't think atomics are really necessary,
so I will just leave them as unsigned long.
Thanks for the clarification.

On Wed, Jan 18, 2023 at 10:40 PM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@nec.com> wrote:
>
> On Wed, Jan 18, 2023 at 03:05:21PM -0800, Jiaqi Yan wrote:
> > On Tue, Jan 17, 2023 at 1:03 AM HORIGUCHI NAOYA(堀口 直也)
> > <naoya.horiguchi@nec.com> wrote:
> > >
> > > On Mon, Jan 16, 2023 at 12:16:13PM -0800, Andrew Morton wrote:
> > > > On Mon, 16 Jan 2023 19:39:01 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
> > > ...
> > > > > @@ -1227,6 +1227,39 @@ static struct page_state error_states[] = {
> > > > >  #undef slab
> > > > >  #undef reserved
> > > > >
> > > > > +static void update_per_node_mf_stats(unsigned long pfn,
> > > > > +                                enum mf_result result)
> > > > > +{
> > > > > +   int nid = MAX_NUMNODES;
> > > > > +   struct memory_failure_stats *mf_stats = NULL;
> > > > > +
> > > > > +   nid = pfn_to_nid(pfn);
> > > > > +   if (unlikely(nid < 0 || nid >= MAX_NUMNODES)) {
> > > > > +           WARN_ONCE(1, "Memory failure: pfn=%#lx, invalid nid=%d", pfn, nid);
> > > > > +           return;
> > > > > +   }
> > > > > +
> > > > > +   mf_stats = &NODE_DATA(nid)->mf_stats;
> > > > > +   switch (result) {
> > > > > +   case MF_IGNORED:
> > > > > +           ++mf_stats->pages_ignored;
> > > >
> > > > What is the locking here, to prevent concurrent increments?
> > >
> > > mf_mutex should prevent the race.
> >
> > Since we are only incrementing these counters, I wonder if it makes
> > sense to convert them into atomic_long_t (encapsulated inside
> > memory_failure_stats) and do atomic_long_add, instead of introducing a
> > struct mutex?
>
> I thought that all callers of action_result() (which is an only caller of
> update_per_node_mf_stats()) are called with holding mf_mutex at the
> beginning of memory_failure(), so the concurrent increments should not
> happen on the new counters.  But writing counters with atomic type/API is
> fine to me.
>
> Thanks,
> Naoya Horiguchi



* Re: [PATCH v1 1/3] mm: memory-failure: Add memory failure stats to sysfs
  2023-01-17  9:14     ` HORIGUCHI NAOYA(堀口 直也)
@ 2023-01-19 21:16       ` Jiaqi Yan
  0 siblings, 0 replies; 22+ messages in thread
From: Jiaqi Yan @ 2023-01-19 21:16 UTC (permalink / raw)
  To: HORIGUCHI NAOYA(堀口 直也)
  Cc: Andrew Morton, tony.luck, duenwen, rientjes, linux-mm, shy828301,
	wangkefeng.wang

On Tue, Jan 17, 2023 at 1:14 AM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@nec.com> wrote:
>
> On Mon, Jan 16, 2023 at 12:15:33PM -0800, Andrew Morton wrote:
> > On Mon, 16 Jan 2023 19:39:00 +0000 Jiaqi Yan <jiaqiyan@google.com> wrote:
> >
> > > +/*
> > > + * Per NUMA node memory failure handling statistics.
> > > + */
> > > +struct memory_failure_stats {
> > > +   /*
> > > +    * Number of pages poisoned.
> > > +    * Cases not accounted: memory outside kernel control, offline page,
> > > +    * arch-specific memory_failure (SGX), and hwpoison_filter()
> > > +    * filtered error events.
> > > +    */
> > > +   unsigned long pages_poisoned;
> > > +   /*
> > > +    * Recovery results of poisoned pages handled by memory_failure,
> > > +    * in sync with mf_result.
> > > +    * pages_poisoned = pages_ignored + pages_failed +
> > > +    *                  pages_delayed + pages_recovered
> > > +    */
> > > +   unsigned long pages_ignored;
> > > +   unsigned long pages_failed;
> > > +   unsigned long pages_delayed;
> > > +   unsigned long pages_recovered;
> > > +};
> >
> > I don't think the "pages_" here add much value - a simple code comment
> > saying "everything counts in pages" should suffice.  If you're feeling
> > deprived of columnar space, maybe just remove?  Or not ;)
>
> I personally feel more like removing the "pages_".  And I feel that the
> main counter can be renamed with pages_total or total, because it show the
> relationship among counters, and the structure name "memory_failure_stats"
> already implies that the counter is about hwpoisoned pages.
>
> Thanks,
> Naoya Horiguchi

Thanks, v2 will 1) rename pages_poisoned to total and 2) remove the
"pages_" prefix from the rest of the counters.


