* [PATCH v2 0/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors
@ 2020-05-01 16:45 James Morse
  2020-05-01 16:45 ` [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick() James Morse
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: James Morse @ 2020-05-01 16:45 UTC (permalink / raw)
  To: linux-mm, linux-acpi, linux-arm-kernel, Naoya Horiguchi
  Cc: Andrew Morton, Rafael Wysocki, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi, James Morse

Hello!

These are the remaining patches from the SDEI series[0] that fix
a race between memory_failure() and user-space re-triggering the error,
which takes us back into ghes.c.


ghes_handle_memory_failure() calls memory_failure_queue() from
IRQ context to schedule memory_failure()'s work, as memory_failure()
needs to sleep. Once the GHES machinery returns from the IRQ, the
kernel may return to user-space before memory_failure() runs.

If the error that kicked all this off is specific to user-space, e.g. a
load from corrupted memory, we may find ourselves taking the error
again. If the user-space task is scheduled out, and memory_failure() runs,
the same user-space task may be scheduled in on another CPU, which could
also take the same error.

These lead to exaggerated error counters, which may cause some threshold
to be reached early.

This can happen with any error that causes a Synchronous External Abort
on arm64. I can't see why the same wouldn't happen with a machine-check
handled firmware-first on x86.


This series adds a memory_failure_queue_kick() helper to
memory-failure.c, and calls it as task-work before returning to
user-space.
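
To summarise, the flow this series ends up with is roughly the following.
This is a simplified sketch of patches 1 and 2 (error handling and
free()ing the estatus node back to the pool are omitted):

        /* NMI-like notification, processed from irq_work (ghes_proc_in_irq()) */
        memory_failure_queue(pfn, flags);       /* work queued on this CPU */
        estatus_node->task_work_cpu = smp_processor_id();
        task_work_add(current, &estatus_node->task_work, true);

        /* task_work callback: process context, runs before ret_to_user */
        static void ghes_kick_task_work(struct callback_head *head)
        {
                struct ghes_estatus_node *estatus_node;

                estatus_node = container_of(head, struct ghes_estatus_node,
                                            task_work);
                memory_failure_queue_kick(estatus_node->task_work_cpu);
        }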

Currently arm64 papers over this problem by ignoring ghes_notify_sea()'s
return code, as it knows there is still work to do. arm64 generates its
own signal to user-space, which means the first task to discover an
error will always be killed, even if the error was later handled
(which is no improvement on the no-RAS behaviour).

As a final piece, arm64 can try to process the irq work queued by
ghes_notify_sea() while it's still in the external-abort handler. A successful
return value here now means the memory_failure() work will be done before we
return to user-space, so we no longer need to generate our own signal.
This lets the original task survive the error if memory_failure() can
recover the corrupted memory.

Based on v5.7-rc3. I'm afraid it touches three different trees.
$subject says ACPI as that is where the bulk of the diffstat is.

This series may conflict in arm64 with a series from Mark Rutland to
clean up the daif/PMR toggling.

Changes since v1:
 * Removed spurious 'ghes' parameter.
 * Collected tags.

Known issues:
 * arm64's apei_claim_sea() may unwittingly re-enable debug if it takes
   an external-abort from debug context. Patch 3 makes this worse
   instead of fixing it. The fix would make use of helpers from Mark R's
   series.


Thanks,

James Morse (3):
  mm/memory-failure: Add memory_failure_queue_kick()
  ACPI / APEI: Kick the memory_failure() queue for synchronous errors
  arm64: acpi: Make apei_claim_sea() synchronise with APEI's irq work


[0] https://lore.kernel.org/linux-arm-kernel/20190129184902.102850-1-james.morse@arm.com/
[1] https://lore.kernel.org/linux-acpi/1506516620-20033-3-git-send-email-xiexiuqi@huawei.com/

 arch/arm64/kernel/acpi.c | 25 +++++++++++++++
 arch/arm64/mm/fault.c    | 12 ++++---
 drivers/acpi/apei/ghes.c | 67 +++++++++++++++++++++++++++++++++-------
 include/acpi/ghes.h      |  3 ++
 include/linux/mm.h       |  1 +
 mm/memory-failure.c      | 15 ++++++++-
 6 files changed, 106 insertions(+), 17 deletions(-)

-- 
2.26.1




* [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick()
  2020-05-01 16:45 [PATCH v2 0/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors James Morse
@ 2020-05-01 16:45 ` James Morse
  2020-05-18 12:45   ` Rafael J. Wysocki
  2020-05-01 16:45 ` [PATCH v2 2/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors James Morse
  2020-05-01 16:45 ` [PATCH v2 3/3] arm64: acpi: Make apei_claim_sea() synchronise with APEI's irq work James Morse
  2 siblings, 1 reply; 8+ messages in thread
From: James Morse @ 2020-05-01 16:45 UTC (permalink / raw)
  To: linux-mm, linux-acpi, linux-arm-kernel, Naoya Horiguchi
  Cc: Andrew Morton, Rafael Wysocki, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi, James Morse

The GHES code calls memory_failure_queue() from IRQ context to schedule
work on the current CPU so that memory_failure() can sleep.

For synchronous memory errors the arch code needs to know any signals
that memory_failure() will trigger are pending before it returns to
user-space, possibly when exiting from the IRQ.

Add a helper to kick the memory failure queue, to ensure the scheduled
work has happened. This has to be called from process context, so may
have been migrated from the original cpu. Pass the cpu the work was
queued on.

Change memory_failure_work_func() to permit being called on the 'wrong'
cpu.

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
---
 include/linux/mm.h  |  1 +
 mm/memory-failure.c | 15 ++++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..c606dbbfa5e1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3012,6 +3012,7 @@ enum mf_flags {
 };
 extern int memory_failure(unsigned long pfn, int flags);
 extern void memory_failure_queue(unsigned long pfn, int flags);
+extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern int get_hwpoison_page(struct page *page);
 #define put_hwpoison_page(page)	put_page(page)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a96364be8ab4..c4afb407bf0f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
 	unsigned long proc_flags;
 	int gotten;
 
-	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
+	mf_cpu = container_of(work, struct memory_failure_cpu, work);
 	for (;;) {
 		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
 		gotten = kfifo_get(&mf_cpu->fifo, &entry);
@@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
+/*
+ * Process memory_failure work queued on the specified CPU.
+ * Used to avoid return-to-userspace racing with the memory_failure workqueue.
+ */
+void memory_failure_queue_kick(int cpu)
+{
+	struct memory_failure_cpu *mf_cpu;
+
+	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
+	cancel_work_sync(&mf_cpu->work);
+	memory_failure_work_func(&mf_cpu->work);
+}
+
 static int __init memory_failure_init(void)
 {
 	struct memory_failure_cpu *mf_cpu;
-- 
2.26.1




* [PATCH v2 2/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors
  2020-05-01 16:45 [PATCH v2 0/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors James Morse
  2020-05-01 16:45 ` [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick() James Morse
@ 2020-05-01 16:45 ` James Morse
  2020-05-01 16:45 ` [PATCH v2 3/3] arm64: acpi: Make apei_claim_sea() synchronise with APEI's irq work James Morse
  2 siblings, 0 replies; 8+ messages in thread
From: James Morse @ 2020-05-01 16:45 UTC (permalink / raw)
  To: linux-mm, linux-acpi, linux-arm-kernel, Naoya Horiguchi
  Cc: Andrew Morton, Rafael Wysocki, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi, James Morse

memory_failure() offlines or repairs pages of memory that have been
discovered to be corrupt. These may be detected by an external
component, (e.g. the memory controller), and notified via an IRQ.
In this case the work is queued, as not all of memory_failure()'s work
can happen in IRQ context.

If the error was detected as a result of user-space accessing a
corrupt memory location the CPU may take an abort instead. On arm64
this is a 'synchronous external abort', and on a firmware first
system it is replayed using NOTIFY_SEA.

This notification has NMI-like properties (it can interrupt
IRQ-masked code), so the memory_failure() work is queued. If we
return to user-space before the queued memory_failure() work is
processed, we will take the fault again. This loop may cause platform
firmware to exceed some threshold and reboot when Linux could have
recovered from this error.

For NMI-like notifications, keep track of whether memory_failure() work
was queued, and add task_work to flush out the queue before returning
to user-space. To save memory allocations, the task_work is allocated as
part of the ghes_estatus_node, and free()ing it back to the pool is
deferred until the task_work has run.

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>

---
current->mm == &init_mm ? I couldn't find a helper for this.
The intent is not to set TIF flags on kernel threads. What happens
if a kernel-thread takes one of these? It's just one of the many
not-handled-very-well cases we have already, as memory_failure()
puts it: "try to be lucky".

I assume that if NOTIFY_NMI is coming from SMM it must suffer from
this problem too.
---
 drivers/acpi/apei/ghes.c | 67 +++++++++++++++++++++++++++++++++-------
 include/acpi/ghes.h      |  3 ++
 2 files changed, 59 insertions(+), 11 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 24c9642e8fc7..5abca09455ad 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -40,6 +40,7 @@
 #include <linux/sched/clock.h>
 #include <linux/uuid.h>
 #include <linux/ras.h>
+#include <linux/task_work.h>
 
 #include <acpi/actbl1.h>
 #include <acpi/ghes.h>
@@ -414,23 +415,46 @@ static void ghes_clear_estatus(struct ghes *ghes,
 		ghes_ack_error(ghes->generic_v2);
 }
 
-static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int sev)
+/*
+ * Called as task_work before returning to user-space.
+ * Ensure any queued work has been done before we return to the context that
+ * triggered the notification.
+ */
+static void ghes_kick_task_work(struct callback_head *head)
+{
+	struct acpi_hest_generic_status *estatus;
+	struct ghes_estatus_node *estatus_node;
+	u32 node_len;
+
+	estatus_node = container_of(head, struct ghes_estatus_node, task_work);
+	if (IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
+		memory_failure_queue_kick(estatus_node->task_work_cpu);
+
+	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
+	node_len = GHES_ESTATUS_NODE_LEN(cper_estatus_len(estatus));
+	gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node, node_len);
+}
+
+static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
+				       int sev)
 {
-#ifdef CONFIG_ACPI_APEI_MEMORY_FAILURE
 	unsigned long pfn;
 	int flags = -1;
 	int sec_sev = ghes_severity(gdata->error_severity);
 	struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
 
+	if (!IS_ENABLED(CONFIG_ACPI_APEI_MEMORY_FAILURE))
+		return false;
+
 	if (!(mem_err->validation_bits & CPER_MEM_VALID_PA))
-		return;
+		return false;
 
 	pfn = mem_err->physical_addr >> PAGE_SHIFT;
 	if (!pfn_valid(pfn)) {
 		pr_warn_ratelimited(FW_WARN GHES_PFX
 		"Invalid address in generic error data: %#llx\n",
 		mem_err->physical_addr);
-		return;
+		return false;
 	}
 
 	/* iff following two events can be handled properly by now */
@@ -440,9 +464,12 @@ static void ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata, int
 	if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
 		flags = 0;
 
-	if (flags != -1)
+	if (flags != -1) {
 		memory_failure_queue(pfn, flags);
-#endif
+		return true;
+	}
+
+	return false;
 }
 
 /*
@@ -490,7 +517,7 @@ static void ghes_handle_aer(struct acpi_hest_generic_data *gdata)
 #endif
 }
 
-static void ghes_do_proc(struct ghes *ghes,
+static bool ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
 	int sev, sec_sev;
@@ -498,6 +525,7 @@ static void ghes_do_proc(struct ghes *ghes,
 	guid_t *sec_type;
 	const guid_t *fru_id = &guid_null;
 	char *fru_text = "";
+	bool queued = false;
 
 	sev = ghes_severity(estatus->error_severity);
 	apei_estatus_for_each_section(estatus, gdata) {
@@ -515,7 +543,7 @@ static void ghes_do_proc(struct ghes *ghes,
 			ghes_edac_report_mem_error(sev, mem_err);
 
 			arch_apei_report_mem_error(sev, mem_err);
-			ghes_handle_memory_failure(gdata, sev);
+			queued = ghes_handle_memory_failure(gdata, sev);
 		}
 		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
 			ghes_handle_aer(gdata);
@@ -532,6 +560,8 @@ static void ghes_do_proc(struct ghes *ghes,
 					       gdata->error_data_length);
 		}
 	}
+
+	return queued;
 }
 
 static void __ghes_print_estatus(const char *pfx,
@@ -827,7 +857,9 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 	struct ghes_estatus_node *estatus_node;
 	struct acpi_hest_generic *generic;
 	struct acpi_hest_generic_status *estatus;
+	bool task_work_pending;
 	u32 len, node_len;
+	int ret;
 
 	llnode = llist_del_all(&ghes_estatus_llist);
 	/*
@@ -842,14 +874,26 @@ static void ghes_proc_in_irq(struct irq_work *irq_work)
 		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 		len = cper_estatus_len(estatus);
 		node_len = GHES_ESTATUS_NODE_LEN(len);
-		ghes_do_proc(estatus_node->ghes, estatus);
+		task_work_pending = ghes_do_proc(estatus_node->ghes, estatus);
 		if (!ghes_estatus_cached(estatus)) {
 			generic = estatus_node->generic;
 			if (ghes_print_estatus(NULL, generic, estatus))
 				ghes_estatus_cache_add(generic, estatus);
 		}
-		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
-			      node_len);
+
+		if (task_work_pending && current->mm != &init_mm) {
+			estatus_node->task_work.func = ghes_kick_task_work;
+			estatus_node->task_work_cpu = smp_processor_id();
+			ret = task_work_add(current, &estatus_node->task_work,
+					    true);
+			if (ret)
+				estatus_node->task_work.func = NULL;
+		}
+
+		if (!estatus_node->task_work.func)
+			gen_pool_free(ghes_estatus_pool,
+				      (unsigned long)estatus_node, node_len);
+
 		llnode = next;
 	}
 }
@@ -909,6 +953,7 @@ static int ghes_in_nmi_queue_one_entry(struct ghes *ghes,
 
 	estatus_node->ghes = ghes;
 	estatus_node->generic = ghes->generic;
+	estatus_node->task_work.func = NULL;
 	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
 
 	if (__ghes_read_estatus(estatus, buf_paddr, fixmap_idx, len)) {
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index e3f1cddb4ac8..517a5231cc1b 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -33,6 +33,9 @@ struct ghes_estatus_node {
 	struct llist_node llnode;
 	struct acpi_hest_generic *generic;
 	struct ghes *ghes;
+
+	int task_work_cpu;
+	struct callback_head task_work;
 };
 
 struct ghes_estatus_cache {
-- 
2.26.1




* [PATCH v2 3/3] arm64: acpi: Make apei_claim_sea() synchronise with APEI's irq work
  2020-05-01 16:45 [PATCH v2 0/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors James Morse
  2020-05-01 16:45 ` [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick() James Morse
  2020-05-01 16:45 ` [PATCH v2 2/3] ACPI / APEI: Kick the memory_failure() queue for synchronous errors James Morse
@ 2020-05-01 16:45 ` James Morse
  2 siblings, 0 replies; 8+ messages in thread
From: James Morse @ 2020-05-01 16:45 UTC (permalink / raw)
  To: linux-mm, linux-acpi, linux-arm-kernel, Naoya Horiguchi
  Cc: Andrew Morton, Rafael Wysocki, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi, James Morse

APEI is unable to do all of its error handling work in nmi-context, so
it defers non-fatal work onto the irq_work queue. arch_irq_work_raise()
sends an IPI to the calling cpu, but this is not guaranteed to be taken
before returning to user-space.

Unless the exception interrupted a context with irqs-masked,
irq_work_run() can run immediately. Otherwise return -EINPROGRESS to
indicate ghes_notify_sea() found some work to do, but it hasn't
finished yet.

With this, apei_claim_sea() returning '0' means this external-abort was
also a notification of a firmware-first RAS error, and that APEI has
processed the CPER records.

Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/acpi.c | 25 +++++++++++++++++++++++++
 arch/arm64/mm/fault.c    | 12 +++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index a100483b47c4..46ec402e97ed 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -19,6 +19,7 @@
 #include <linux/init.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
+#include <linux/irq_work.h>
 #include <linux/memblock.h>
 #include <linux/of_fdt.h>
 #include <linux/smp.h>
@@ -269,6 +270,7 @@ pgprot_t __acpi_get_mem_attribute(phys_addr_t addr)
 int apei_claim_sea(struct pt_regs *regs)
 {
 	int err = -ENOENT;
+	bool return_to_irqs_enabled;
 	unsigned long current_flags;
 
 	if (!IS_ENABLED(CONFIG_ACPI_APEI_GHES))
@@ -276,6 +278,12 @@ int apei_claim_sea(struct pt_regs *regs)
 
 	current_flags = local_daif_save_flags();
 
+	/* current_flags isn't useful here as daif doesn't tell us about pNMI */
+	return_to_irqs_enabled = !irqs_disabled_flags(arch_local_save_flags());
+
+	if (regs)
+		return_to_irqs_enabled = interrupts_enabled(regs);
+
 	/*
 	 * SEA can interrupt SError, mask it and describe this as an NMI so
 	 * that APEI defers the handling.
@@ -284,6 +292,23 @@ int apei_claim_sea(struct pt_regs *regs)
 	nmi_enter();
 	err = ghes_notify_sea();
 	nmi_exit();
+
+	/*
+	 * APEI NMI-like notifications are deferred to irq_work. Unless
+	 * we interrupted irqs-masked code, we can do that now.
+	 */
+	if (!err) {
+		if (return_to_irqs_enabled) {
+			local_daif_restore(DAIF_PROCCTX_NOIRQ);
+			__irq_enter();
+			irq_work_run();
+			__irq_exit();
+		} else {
+			pr_warn_ratelimited("APEI work queued but not completed");
+			err = -EINPROGRESS;
+		}
+	}
+
 	local_daif_restore(current_flags);
 
 	return err;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c9cedc0432d2..dff2d72b0883 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -635,11 +635,13 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 
 	inf = esr_to_fault_info(esr);
 
-	/*
-	 * Return value ignored as we rely on signal merging.
-	 * Future patches will make this more robust.
-	 */
-	apei_claim_sea(regs);
+	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
+		/*
+		 * APEI claimed this as a firmware-first notification.
+		 * Some processing deferred to task_work before ret_to_user().
+		 */
+		return 0;
+	}
 
 	if (esr & ESR_ELx_FnV)
 		siaddr = NULL;
-- 
2.26.1




* Re: [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick()
  2020-05-01 16:45 ` [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick() James Morse
@ 2020-05-18 12:45   ` Rafael J. Wysocki
  2020-05-18 19:58     ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael J. Wysocki @ 2020-05-18 12:45 UTC (permalink / raw)
  To: James Morse
  Cc: linux-mm, linux-acpi, linux-arm-kernel, Naoya Horiguchi,
	Andrew Morton, Len Brown, Tony Luck, Borislav Petkov,
	Catalin Marinas, Will Deacon, Mark Rutland, Tyler Baicar,
	Xie XiuQi

On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> The GHES code calls memory_failure_queue() from IRQ context to schedule
> work on the current CPU so that memory_failure() can sleep.
> 
> For synchronous memory errors the arch code needs to know any signals
> that memory_failure() will trigger are pending before it returns to
> user-space, possibly when exiting from the IRQ.
> 
> Add a helper to kick the memory failure queue, to ensure the scheduled
> work has happened. This has to be called from process context, so may
> have been migrated from the original cpu. Pass the cpu the work was
> queued on.
> 
> Change memory_failure_work_func() to permit being called on the 'wrong'
> cpu.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
> ---
>  include/linux/mm.h  |  1 +
>  mm/memory-failure.c | 15 ++++++++++++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 5a323422d783..c606dbbfa5e1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3012,6 +3012,7 @@ enum mf_flags {
>  };
>  extern int memory_failure(unsigned long pfn, int flags);
>  extern void memory_failure_queue(unsigned long pfn, int flags);
> +extern void memory_failure_queue_kick(int cpu);
>  extern int unpoison_memory(unsigned long pfn);
>  extern int get_hwpoison_page(struct page *page);
>  #define put_hwpoison_page(page)	put_page(page)
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a96364be8ab4..c4afb407bf0f 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
>  	unsigned long proc_flags;
>  	int gotten;
>  
> -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
>  	for (;;) {
>  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
>  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
>  	}
>  }
>  
> +/*
> + * Process memory_failure work queued on the specified CPU.
> + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> + */
> +void memory_failure_queue_kick(int cpu)
> +{
> +	struct memory_failure_cpu *mf_cpu;
> +
> +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> +	cancel_work_sync(&mf_cpu->work);
> +	memory_failure_work_func(&mf_cpu->work);
> +}
> +
>  static int __init memory_failure_init(void)
>  {
>  	struct memory_failure_cpu *mf_cpu;
> 

I could apply this provided an ACK from the mm people.

Thanks!






* Re: [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick()
  2020-05-18 12:45   ` Rafael J. Wysocki
@ 2020-05-18 19:58     ` Andrew Morton
  2020-05-19  3:15       ` HORIGUCHI NAOYA(堀口 直也)
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2020-05-18 19:58 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: James Morse, linux-mm, linux-acpi, linux-arm-kernel,
	Naoya Horiguchi, Len Brown, Tony Luck, Borislav Petkov,
	Catalin Marinas, Will Deacon, Mark Rutland, Tyler Baicar,
	Xie XiuQi

On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:

> On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > work on the current CPU so that memory_failure() can sleep.
> > 
> > For synchronous memory errors the arch code needs to know any signals
> > that memory_failure() will trigger are pending before it returns to
> > user-space, possibly when exiting from the IRQ.
> > 
> > Add a helper to kick the memory failure queue, to ensure the scheduled
> > work has happened. This has to be called from process context, so may
> > have been migrated from the original cpu. Pass the cpu the work was
> > queued on.
> > 
> > Change memory_failure_work_func() to permit being called on the 'wrong'
> > cpu.
> > 
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -3012,6 +3012,7 @@ enum mf_flags {
> >  };
> >  extern int memory_failure(unsigned long pfn, int flags);
> >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > +extern void memory_failure_queue_kick(int cpu);
> >  extern int unpoison_memory(unsigned long pfn);
> >  extern int get_hwpoison_page(struct page *page);
> >  #define put_hwpoison_page(page)	put_page(page)
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index a96364be8ab4..c4afb407bf0f 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> >  	unsigned long proc_flags;
> >  	int gotten;
> >  
> > -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
> >  	for (;;) {
> >  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> >  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> >  	}
> >  }
> >  
> > +/*
> > + * Process memory_failure work queued on the specified CPU.
> > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > + */
> > +void memory_failure_queue_kick(int cpu)
> > +{
> > +	struct memory_failure_cpu *mf_cpu;
> > +
> > +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > +	cancel_work_sync(&mf_cpu->work);
> > +	memory_failure_work_func(&mf_cpu->work);
> > +}
> > +
> >  static int __init memory_failure_init(void)
> >  {
> >  	struct memory_failure_cpu *mf_cpu;
> > 
> 
> I could apply this provided an ACK from the mm people.
> 

Naoya Horiguchi is the memory-failure.c person.  A review would be
appreciated please?

I'm struggling with it a bit.  memory_failure_queue_kick() should be
called on the cpu which is identified by arg `cpu', yes? 
memory_failure_work_func() appears to assume this.

If that's right then a) why bother passing in the `cpu' arg?  and b)
what keeps this thread pinned to that CPU?  cancel_work_sync() can
schedule.




* Re: [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick()
  2020-05-18 19:58     ` Andrew Morton
@ 2020-05-19  3:15       ` HORIGUCHI NAOYA(堀口 直也)
  2020-05-19 17:53         ` Rafael J. Wysocki
  0 siblings, 1 reply; 8+ messages in thread
From: HORIGUCHI NAOYA(堀口 直也) @ 2020-05-19  3:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Rafael J. Wysocki, James Morse, linux-mm, linux-acpi,
	linux-arm-kernel, Naoya Horiguchi, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi

On Mon, May 18, 2020 at 12:58:28PM -0700, Andrew Morton wrote:
> On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> 
> > On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > > work on the current CPU so that memory_failure() can sleep.
> > > 
> > > For synchronous memory errors the arch code needs to know any signals
> > > that memory_failure() will trigger are pending before it returns to
> > > user-space, possibly when exiting from the IRQ.
> > > 
> > > Add a helper to kick the memory failure queue, to ensure the scheduled
> > > work has happened. This has to be called from process context, so may
> > > have been migrated from the original cpu. Pass the cpu the work was
> > > queued on.
> > > 
> > > Change memory_failure_work_func() to permit being called on the 'wrong'
> > > cpu.
> > > 
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -3012,6 +3012,7 @@ enum mf_flags {
> > >  };
> > >  extern int memory_failure(unsigned long pfn, int flags);
> > >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > > +extern void memory_failure_queue_kick(int cpu);
> > >  extern int unpoison_memory(unsigned long pfn);
> > >  extern int get_hwpoison_page(struct page *page);
> > >  #define put_hwpoison_page(page)	put_page(page)
> > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > index a96364be8ab4..c4afb407bf0f 100644
> > > --- a/mm/memory-failure.c
> > > +++ b/mm/memory-failure.c
> > > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> > >  	unsigned long proc_flags;
> > >  	int gotten;
> > >  
> > > -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > > +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
> > >  	for (;;) {
> > >  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> > >  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> > >  	}
> > >  }
> > >  
> > > +/*
> > > + * Process memory_failure work queued on the specified CPU.
> > > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > > + */
> > > +void memory_failure_queue_kick(int cpu)
> > > +{
> > > +	struct memory_failure_cpu *mf_cpu;
> > > +
> > > +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > > +	cancel_work_sync(&mf_cpu->work);
> > > +	memory_failure_work_func(&mf_cpu->work);
> > > +}
> > > +
> > >  static int __init memory_failure_init(void)
> > >  {
> > >  	struct memory_failure_cpu *mf_cpu;
> > > 
> > 
> > I could apply this provided an ACK from the mm people.
> > 
> 
> Naoya Horiguchi is the memory-failure.c person.  A review would be
> appreciated please?
> 
> I'm struggling with it a bit.  memory_failure_queue_kick() should be
> called on the cpu which is identified by arg `cpu', yes? 
> memory_failure_work_func() appears to assume this.
> 
> If that's right then a) why bother passing in the `cpu' arg?  and b)
> what keeps this thread pinned to that CPU?  cancel_work_sync() can
> schedule.

If I read correctly, memory_failure work is queued on the CPU on which the
user process ran when it touched the corrupted memory, and the process can
be scheduled on another CPU by the time the kernel returns to userspace after
handling the GHES event.  So we need to remember where the memory_failure
event was queued in order to flush the proper work queue.  So I feel that
this properly implements it.
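
For illustration, the scenario is roughly the following (a sketch, CPU
numbers are arbitrary):

        /* CPU 0: user task loads from poisoned memory, SEA -> GHES irq_work */
        memory_failure_queue(pfn, flags);       /* queued on CPU 0's per-cpu work */
        estatus_node->task_work_cpu = smp_processor_id();       /* == 0 */
        task_work_add(current, &estatus_node->task_work, true);

        /* the task migrates; its task_work runs on CPU 1 */
        ghes_kick_task_work()
          -> memory_failure_queue_kick(0);      /* flush CPU 0's queue, not this_cpu's */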

Considering the effect on the other caller, currently memory_failure_queue()
has 2 callers, ghes_handle_memory_failure() and cec_add_elem(). The former
is what we try to change now.  And the latter is to execute soft offline
(which is related to corrected non-fatal errors), so that's not affected by
the reported issue.  So I don't think that this change breaks the other
caller.

So I'm fine with the suggested change.

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Thanks,
Naoya Horiguchi


* Re: [PATCH v2 1/3] mm/memory-failure: Add memory_failure_queue_kick()
  2020-05-19  3:15       ` HORIGUCHI NAOYA(堀口 直也)
@ 2020-05-19 17:53         ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2020-05-19 17:53 UTC (permalink / raw)
  To: HORIGUCHI NAOYA(堀口 直也), James Morse
  Cc: Andrew Morton, Rafael J. Wysocki, linux-mm, linux-acpi,
	linux-arm-kernel, Naoya Horiguchi, Len Brown, Tony Luck,
	Borislav Petkov, Catalin Marinas, Will Deacon, Mark Rutland,
	Tyler Baicar, Xie XiuQi

On Tue, May 19, 2020 at 5:15 AM HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@nec.com> wrote:
>
> On Mon, May 18, 2020 at 12:58:28PM -0700, Andrew Morton wrote:
> > On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> >
> > > On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > > > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > > > work on the current CPU so that memory_failure() can sleep.
> > > >
> > > > For synchronous memory errors the arch code needs to know any signals
> > > > that memory_failure() will trigger are pending before it returns to
> > > > user-space, possibly when exiting from the IRQ.
> > > >
> > > > Add a helper to kick the memory failure queue, to ensure the scheduled
> > > > work has happened. This has to be called from process context, so may
> > > > have been migrated from the original cpu. Pass the cpu the work was
> > > > queued on.
> > > >
> > > > Change memory_failure_work_func() to permit being called on the 'wrong'
> > > > cpu.
> > > >
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -3012,6 +3012,7 @@ enum mf_flags {
> > > >  };
> > > >  extern int memory_failure(unsigned long pfn, int flags);
> > > >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > > > +extern void memory_failure_queue_kick(int cpu);
> > > >  extern int unpoison_memory(unsigned long pfn);
> > > >  extern int get_hwpoison_page(struct page *page);
> > > >  #define put_hwpoison_page(page)  put_page(page)
> > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > index a96364be8ab4..c4afb407bf0f 100644
> > > > --- a/mm/memory-failure.c
> > > > +++ b/mm/memory-failure.c
> > > > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> > > >   unsigned long proc_flags;
> > > >   int gotten;
> > > >
> > > > - mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > > > + mf_cpu = container_of(work, struct memory_failure_cpu, work);
> > > >   for (;;) {
> > > >           spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> > > >           gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > > > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> > > >   }
> > > >  }
> > > >
> > > > +/*
> > > > + * Process memory_failure work queued on the specified CPU.
> > > > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > > > + */
> > > > +void memory_failure_queue_kick(int cpu)
> > > > +{
> > > > + struct memory_failure_cpu *mf_cpu;
> > > > +
> > > > + mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > > > + cancel_work_sync(&mf_cpu->work);
> > > > + memory_failure_work_func(&mf_cpu->work);
> > > > +}
> > > > +
> > > >  static int __init memory_failure_init(void)
> > > >  {
> > > >   struct memory_failure_cpu *mf_cpu;
> > > >
> > >
> > > I could apply this provided an ACK from the mm people.
> > >
> >
> > Naoya Horiguchi is the memory-failure.c person.  A review would be
> > appreciated please?
> >
> > I'm struggling with it a bit.  memory_failure_queue_kick() should be
> > called on the cpu which is identified by arg `cpu', yes?
> > memory_failure_work_func() appears to assume this.
> >
> > If that's right then a) why bother passing in the `cpu' arg?  and b)
> > what keeps this thread pinned to that CPU?  cancel_work_sync() can
> > schedule.
>
> If I read correctly, memory_failure work is queue on the CPU on which the
> user process ran when it touched the corrupted memory, and the process can
> be scheduled on another CPU when the kernel returned back to userspace after
> handling the GHES event.  So we need to remember where the memory_failure
> event is queued to flush proper work queue.  So I feel that this properly
> implements it.
>
> Considering the effect to the other caller, currently memory_failure_queue()
> has 2 callers, ghes_handle_memory_failure() and cec_add_elem(). The former
> is what we try to change now.  And the latter is to execute soft offline
> (which is related to corrected non-fatal errors), so that's not affected by
> the reported issue.  So I don't think that this change breaks the other
> caller.
>
> So I'm fine with the suggested change.
>
> Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

OK, thanks!

So because patch [1/3] has been ACKed already, I'm applying this
series as 5.8 material.

Thanks everyone!

