linux-kernel.vger.kernel.org archive mirror
* [PATCH -next v8 0/4] arm64: add machine check safe support
@ 2022-12-19 12:00 Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 1/4] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Tong Tiangen @ 2022-12-19 12:00 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi, Tong Tiangen

With the increase of memory capacity and density, the probability of
memory errors increases. The growing size and density of server RAM in
data centers and the cloud has led to more uncorrectable memory
errors.

Currently, the kernel has a mechanism to recover from hardware memory
errors. This patchset provides a new recovery mechanism.

For arm64, hardware memory errors are handled by do_sea(), which
divides them into two cases:
 1. The memory error is consumed in user mode; the solution is to kill
    the user process and isolate the error page.
 2. The memory error is consumed in kernel mode; the solution is to
    panic.

For case 2, an undifferentiated panic may not be the optimal choice;
it can be handled better. In some scenarios we can avoid the panic,
such as uaccess: if a uaccess operation fails due to a memory error,
only the user process is affected, so killing the user process and
isolating the user page with hardware memory errors is a better
choice.
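
As a simplified sketch (function names are from the patches below;
fault decoding omitted), the intended do_sea() flow after this series
is:

	static int do_sea(unsigned long far, unsigned long esr,
			  struct pt_regs *regs)
	{
		/* ... decode the fault, compute siaddr ... */

		/*
		 * Kernel-mode errors now try a machine-check-safe
		 * exception table fixup first; only when no fixup
		 * applies do we fall back to the fatal notification.
		 */
		if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
			arm64_notify_die(inf->name, regs, inf->sig,
					 inf->code, siaddr, esr);
		return 0;
	}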

Since V7:
 There are now patches supporting recovery from poison consumption
 for the cow scenario[1]. Therefore, supporting the cow scenario under
 the arm64 architecture only needs to modify the relevant code under
 arch/.
 [1]https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/

Since V6:
 Resend the patches from V6 that were not merged into mainline.

Since V5:
 1. Add patch2/3 to add uaccess assembly helpers.
 2. Optimize the implementation logic of arm64_do_kernel_sea() in patch8.
 3. Remove kernel access fixup in patch9.
 All suggestions are from Mark.

Since V4:
 1. According to Michael's suggestion, add patch5.
 2. According to Mark's suggestion, do some restructuring of the arm64
 extable, then make a new adaptation of machine check safe support
 based on this.
 3. According to Mark's suggestion, support machine check safe in
 do_mte() in the cow scenario.
 4. In V4, two patches were merged into -next, so V5 does not include
 these two patches.

Since V3:
 1. According to Robin's suggestion, directly modify user_ldst and
 user_ldp in asm-uaccess.h and modify mte.S.
 2. Add a new macro USER_MC in asm-uaccess.h, used in copy_from_user.S
 and copy_to_user.S.
 3. According to Robin's suggestion, use a macro in copy_page_mc.S to
 simplify the code.
 4. According to KeFeng's suggestion, modify the powerpc code in patch1.
 5. According to KeFeng's suggestion, modify mm/extable.c and make some
 code optimizations.

Since V2:
 1. According to Mark's suggestion, all uaccess failures due to memory
    errors can now be recovered.
 2. Scenario pagecache reading is also supported as part of uaccess
    (copy_to_user()) and the code duplication problem is also solved.
    Thanks for Robin's suggestion.
 3. According to Mark's suggestion, update the commit message of patch 2/5.
 4. According to Borislav's suggestion, update the commit message of patch 1/5.

Since V1:
 1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
    ARM64_UCE_KERNEL_RECOVERY.
 2. Add two new scenarios: cow and pagecache reading.
 3. Fix two small bugs (the first two patches).

V1 is here:
https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/

Tong Tiangen (4):
  uaccess: add generic fallback version of copy_mc_to_user()
  arm64: add support for machine check error safe
  arm64: add uaccess to machine check safe
  arm64: add cow to machine check safe

 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h |  5 ++
 arch/arm64/include/asm/assembler.h   |  4 ++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/include/asm/mte.h         |  4 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 82 ++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 | 19 +++++++
 arch/arm64/mm/copypage.c             | 42 ++++++++++++--
 arch/arm64/mm/extable.c              | 25 +++++++++
 arch/arm64/mm/fault.c                | 29 +++++++++-
 arch/powerpc/include/asm/uaccess.h   |  1 +
 arch/x86/include/asm/uaccess.h       |  1 +
 include/linux/highmem.h              |  2 +
 include/linux/uaccess.h              |  9 +++
 16 files changed, 230 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S

-- 
2.25.1



* [PATCH -next v8 1/4] uaccess: add generic fallback version of copy_mc_to_user()
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
@ 2022-12-19 12:00 ` Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 2/4] arm64: add support for machine check error safe Tong Tiangen
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2022-12-19 12:00 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi, Tong Tiangen

x86 and powerpc each have their own implementation of copy_mc_to_user();
add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 3ddc65c63a49..82dc55707c4b 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -357,6 +357,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 1d2c79246681..71a4d7bf9e38 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -559,6 +559,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 46680189d761..8726260e5508 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -198,6 +198,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
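
Like copy_to_user(), copy_mc_to_user() returns the number of bytes
that could not be copied. A minimal, hypothetical caller (not part of
this patch) would look like:

	/* Hypothetical: copy a kernel buffer out, -EFAULT on any failure. */
	static int example_copy_out(void *dst_user, const void *src, size_t len)
	{
		unsigned long left = copy_mc_to_user(dst_user, src, len);

		if (left)	/* page fault or consumed memory error */
			return -EFAULT;
		return 0;
	}
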
-- 
2.25.1



* [PATCH -next v8 2/4] arm64: add support for machine check error safe
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 1/4] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
@ 2022-12-19 12:00 ` Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 3/4] arm64: add uaccess to machine check safe Tong Tiangen
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2022-12-19 12:00 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi, Tong Tiangen

During the handling of arm64 kernel hardware memory errors (do_sea()),
if the error is consumed in the kernel, the current behaviour is to
panic. However, that is not optimal.

Take uaccess for example: if a uaccess operation fails due to a memory
error, only the user process is affected, so killing the user process
and isolating the user page with hardware memory errors is a better
choice.

This patch only enables the machine check error framework; it adds an
exception fixup before the kernel panic in do_sea().

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 16 ++++++++++++++++
 arch/arm64/mm/fault.c            | 29 ++++++++++++++++++++++++++++-
 4 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 03934808b2ed..cb0adee2eb8f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..478e639f8680 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	/*
+	 * This is not complete. More machine check safe extable types
+	 * can be processed here.
+	 */
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 1d832c92cbe8..3021047873d6 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -713,6 +713,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr,
+				     struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs))
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	if (current->flags & PF_KTHREAD)
+		return true;
+
+	set_thread_esr(0, esr);
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected memory error on access to user memory\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -738,7 +763,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr  = untagged_addr(current->mm, far);
 	}
-	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
+
+	if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
 }
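
Note that at this point fixup_exception_mc() still recovers nothing;
the later patches in this series plug extable types into it. A sketch
of the dispatch it grows into (see patches 3 and 4):

	switch (ex->type) {
	case EX_TYPE_UACCESS_ERR_ZERO:	/* added by patch 3 */
		return ex_handler_uaccess_err_zero(ex, regs);
	case EX_TYPE_COPY_MC_PAGE:	/* added by patch 4 */
		return ex_handler_fixup(ex, regs);
	}
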
-- 
2.25.1



* [PATCH -next v8 3/4] arm64: add uaccess to machine check safe
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 1/4] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 2/4] arm64: add support for machine check error safe Tong Tiangen
@ 2022-12-19 12:00 ` Tong Tiangen
  2022-12-19 12:00 ` [PATCH -next v8 4/4] arm64: add cow " Tong Tiangen
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2022-12-19 12:00 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi, Tong Tiangen

If a user access fails due to a hardware memory error, only the
relevant process is affected, so killing the user process and
isolating the error page with hardware memory errors is a more
reasonable choice than a kernel panic.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/mm/extable.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 478e639f8680..28ec35e3d210 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	if (!ex)
 		return false;
 
-	/*
-	 * This is not complete. More machine check safe extable types
-	 * can be processed here.
-	 */
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
 
 	return false;
 }
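
The practical effect can be illustrated with a write()-style path (a
hypothetical sketch; this helper is not part of the patch):

	/* Hypothetical sketch: pull data in from a user buffer. */
	static ssize_t example_write(void *kbuf, const void __user *ubuf,
				     size_t len)
	{
		/*
		 * If the user page backing ubuf carries an uncorrectable
		 * memory error, the uaccess load raises an SEA in kernel
		 * mode. With this patch the EX_TYPE_UACCESS_ERR_ZERO
		 * fixup is honoured: the copy reports uncopied bytes and
		 * only the current process is killed, instead of the
		 * whole kernel panicking.
		 */
		if (copy_from_user(kbuf, ubuf, len))
			return -EFAULT;
		return len;
	}
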
-- 
2.25.1



* [PATCH -next v8 4/4] arm64: add cow to machine check safe
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
                   ` (2 preceding siblings ...)
  2022-12-19 12:00 ` [PATCH -next v8 3/4] arm64: add uaccess to machine check safe Tong Tiangen
@ 2022-12-19 12:00 ` Tong Tiangen
  2023-04-11 16:45   ` Catalin Marinas
  2023-01-11  1:31 ` [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
  2023-03-25  3:38 ` Tong Tiangen
  5 siblings, 1 reply; 9+ messages in thread
From: Tong Tiangen @ 2022-12-19 12:00 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi, Tong Tiangen

At present, recovery from poison consumption during copy-on-write is
supported[1]; arm64 should also support this mechanism.

Add a new helper copy_mc_page() which provides a machine-check-safe
page copy implementation. At present it is only used in cow; in the
future it can be extended to more scenarios. As long as the
consequences of a page copy failure are not fatal (e.g. only a user
process is affected), this helper can be used.

The copy_mc_page() in copy_mc_page.S largely borrows from copy_page()
in copy_page.S; the main difference is that copy_mc_page() adds an
extable entry to every load/store instruction to support machine check
safety. This is done largely to keep the patch simple; if needed,
optimizations can be folded in later.

Add a new extable type, EX_TYPE_COPY_MC_PAGE, which is used in
copy_mc_page().

[1]https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h |  5 ++
 arch/arm64/include/asm/assembler.h   |  4 ++
 arch/arm64/include/asm/mte.h         |  4 ++
 arch/arm64/include/asm/page.h        | 10 ++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 82 ++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                 | 19 +++++++
 arch/arm64/mm/copypage.c             | 42 ++++++++++++--
 arch/arm64/mm/extable.c              |  9 +++
 include/linux/highmem.h              |  2 +
 10 files changed, 173 insertions(+), 6 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..32625c2839fb 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,6 +10,7 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	2
 #define EX_TYPE_KACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_COPY_MC_PAGE		5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -59,6 +60,10 @@
 	_ASM_EXTABLE_UACCESS(\insn, \fixup)
 	.endm
 
+	.macro          _asm_extable_copy_mc_page, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_COPY_MC_PAGE, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 376a980f2bad..547ab2f85888 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -154,6 +154,10 @@ lr	.req	x30		// link register
 #define CPU_LE(code...) code
 #endif
 
+#define CPY_MC(l, x...)		\
+9999:   x;			\
+	_asm_extable_copy_mc_page    9999b, l
+
 /*
  * Define a macro that constructs a 64-bit value by concatenating two
  * 32-bit registers. Note that on big endian systems the order of the
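
The CPY_MC() annotation follows the same pattern as the existing
extable annotations: it labels the covered instruction and records an
extable entry of the new type pointing at the fixup label. As an
expansion sketch:

	/* CPY_MC(9998f, ldp x2, x3, [x1]) effectively emits: */
	9999:	ldp	x2, x3, [x1]
		_asm_extable_copy_mc_page 9999b, 9998f
	/* i.e. __ASM_EXTABLE_RAW(9999b, 9998f, EX_TYPE_COPY_MC_PAGE, 0) */
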
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 20dd06d70af5..a7a888ef9dbf 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,7 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t old_pte, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+void mte_copy_mc_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +129,9 @@ static inline void mte_sync_tags(pte_t old_pte, pte_t pte)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline void mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 993a27ea6f54..0780ac57ac27 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+extern void copy_mc_page(void *to, const void *from);
+void copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..03d657a182f6
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check safety
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+SYM_FUNC_START(__pi_copy_mc_page)
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, ldp	x2, x3, [x1])
+CPY_MC(9998f, ldp	x4, x5, [x1, #16])
+CPY_MC(9998f, ldp	x6, x7, [x1, #32])
+CPY_MC(9998f, ldp	x8, x9, [x1, #48])
+CPY_MC(9998f, ldp	x10, x11, [x1, #64])
+CPY_MC(9998f, ldp	x12, x13, [x1, #80])
+CPY_MC(9998f, ldp	x14, x15, [x1, #96])
+CPY_MC(9998f, ldp	x16, x17, [x1, #112])
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
+CPY_MC(9998f, ldp	x2, x3, [x1])
+CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, ldp	x4, x5, [x1, #16])
+CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, ldp	x6, x7, [x1, #32])
+CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, ldp	x8, x9, [x1, #48])
+CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, ldp	x10, x11, [x1, #64])
+CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, ldp	x12, x13, [x1, #80])
+CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, ldp	x14, x15, [x1, #96])
+CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
+CPY_MC(9998f, ldp	x16, x17, [x1, #112])
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+CPY_MC(9998f, stnp	x2, x3, [x0, #-256])
+CPY_MC(9998f, stnp	x4, x5, [x0, #16 - 256])
+CPY_MC(9998f, stnp	x6, x7, [x0, #32 - 256])
+CPY_MC(9998f, stnp	x8, x9, [x0, #48 - 256])
+CPY_MC(9998f, stnp	x10, x11, [x0, #64 - 256])
+CPY_MC(9998f, stnp	x12, x13, [x0, #80 - 256])
+CPY_MC(9998f, stnp	x14, x15, [x0, #96 - 256])
+CPY_MC(9998f, stnp	x16, x17, [x0, #112 - 256])
+
+9998:	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..bf4dd861c41c 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ *   x0 - address of the destination page
+ *   x1 - address of the source page
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+CPY_MC(2f, ldgm	x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+2:	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 8dd5a8fe64b4..005ee2a3cb4e 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,21 +14,30 @@
 #include <asm/cpufeature.h>
 #include <asm/mte.h>
 
-void copy_highpage(struct page *to, struct page *from)
+static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc)
 {
-	void *kto = page_address(to);
-	void *kfrom = page_address(from);
-
-	copy_page(kto, kfrom);
 
 	if (system_supports_mte() && page_mte_tagged(from)) {
 		page_kasan_tag_reset(to);
 		/* It's a new page, shouldn't have been tagged yet */
 		WARN_ON_ONCE(!try_page_mte_tagging(to));
-		mte_copy_page_tags(kto, kfrom);
+		if (mc)
+			mte_copy_mc_page_tags(kto, kfrom);
+		else
+			mte_copy_page_tags(kto, kfrom);
+
 		set_page_mte_tagged(to);
 	}
 }
+
+void copy_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom, false);
+}
 EXPORT_SYMBOL(copy_highpage);
 
 void copy_user_highpage(struct page *to, struct page *from,
@@ -38,3 +47,24 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+void copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_mc_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom, true);
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_mc_highpage(to, from);
+	flush_dcache_page(to);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 28ec35e3d210..0fdab18f2f07 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex)
 	return ((unsigned long)&ex->fixup + ex->fixup);
 }
 
+static bool ex_handler_fixup(const struct exception_table_entry *ex,
+			     struct pt_regs *regs)
+{
+	regs->pc = get_ex_fixup(ex);
+	return true;
+}
+
 static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
 					struct pt_regs *regs)
 {
@@ -88,6 +95,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_ERR_ZERO:
 		return ex_handler_uaccess_err_zero(ex, regs);
+	case EX_TYPE_COPY_MC_PAGE:
+		return ex_handler_fixup(ex, regs);
 	}
 
 	return false;
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 44242268f53b..3ad39d4d81d5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -319,6 +319,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 #ifdef copy_mc_to_kernel
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -344,6 +345,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	return 0;
 }
 #endif
+#endif
 
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
-- 
2.25.1



* Re: [PATCH -next v8 0/4] arm64: add machine check safe support
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
                   ` (3 preceding siblings ...)
  2022-12-19 12:00 ` [PATCH -next v8 4/4] arm64: add cow " Tong Tiangen
@ 2023-01-11  1:31 ` Tong Tiangen
  2023-03-25  3:38 ` Tong Tiangen
  5 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2023-01-11  1:31 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi

Hi Catalin and Will:

Kindly ping...

This RAS enhancement is important in the server field and helps to
improve the reliability of equipment. Similar features have already
been applied on other architectures (such as x86 and PowerPC).
In addition, in previous versions of this patchset, Mark gave some
very good suggestions, and corresponding improvements have been made
in this version.

Thanks,
Tong.

On 2022/12/19 20:00, Tong Tiangen wrote:
> [...]


* Re: [PATCH -next v8 0/4] arm64: add machine check safe support
  2022-12-19 12:00 [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
                   ` (4 preceding siblings ...)
  2023-01-11  1:31 ` [PATCH -next v8 0/4] arm64: add machine check safe support Tong Tiangen
@ 2023-03-25  3:38 ` Tong Tiangen
  5 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2023-03-25  3:38 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi

Hi all maintainers,

With the increase of memory capacity and density, memory errors are
increasing; according to statistics from Intel [1], memory errors are
a main factor causing server system downtime.

For arm64 servers the situation is more serious without a
machine-check-safe copy feature; this is feedback from our products,
and it is why we want to support it on arm64.

Also, new kinds of low-reliability memory such as HBM (high bandwidth,
but lower reliability) have been introduced, which makes this more of
a problem.

We hope this patch set can be incorporated upstream. Could you give us
some follow-up suggestions?

[1] 
https://www.intel.com.tw/content/www/tw/zh/software/intel-memory-failure-prediction-jd-cloud.html

Thanks.
Tong.

On 2022/12/19 20:00, Tong Tiangen wrote:
> [...]


* Re: [PATCH -next v8 4/4] arm64: add cow to machine check safe
  2022-12-19 12:00 ` [PATCH -next v8 4/4] arm64: add cow " Tong Tiangen
@ 2023-04-11 16:45   ` Catalin Marinas
  2023-04-12  2:31     ` Tong Tiangen
  0 siblings, 1 reply; 9+ messages in thread
From: Catalin Marinas @ 2023-04-11 16:45 UTC (permalink / raw)
  To: Tong Tiangen
  Cc: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Will Deacon, Alexander Viro, x86, H . Peter Anvin,
	linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi

On Mon, Dec 19, 2022 at 12:00:08PM +0000, Tong Tiangen wrote:
> [...]

This series needs rebasing onto a newer kernel. Some random comments
below.

> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
> new file mode 100644
> index 000000000000..03d657a182f6
> --- /dev/null
> +++ b/arch/arm64/lib/copy_mc_page.S
> @@ -0,0 +1,82 @@
[...]
> +SYM_FUNC_START(__pi_copy_mc_page)
> +alternative_if ARM64_HAS_NO_HW_PREFETCH
> +	// Prefetch three cache lines ahead.
> +	prfm	pldl1strm, [x1, #128]
> +	prfm	pldl1strm, [x1, #256]
> +	prfm	pldl1strm, [x1, #384]
> +alternative_else_nop_endif
> +
> +CPY_MC(9998f, ldp	x2, x3, [x1])
> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
[...]
[...]
> +9998:	ret

What I don't understand, is there any error returned here or the bytes
not copied? I can see its return value is never used in this series.

Also, do we need to distinguish between fault on the source or the
destination?

> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..bf4dd861c41c 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags)
>  	ret
>  SYM_FUNC_END(mte_copy_page_tags)
>  
> +/*
> + * Copy the tags from the source page to the destination one with machine check safe
> + *   x0 - address of the destination page
> + *   x1 - address of the source page
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +	mov	x2, x0
> +	mov	x3, x1
> +	multitag_transfer_size x5, x6
> +1:
> +CPY_MC(2f, ldgm	x4, [x3])
> +	stgm	x4, [x2]
> +	add	x2, x2, x5
> +	add	x3, x3, x5
> +	tst	x2, #(PAGE_SIZE - 1)
> +	b.ne	1b
> +2:	ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)

While the data copy above handles errors on both source and destination,
here you skip the destination. Any reason?

> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 8dd5a8fe64b4..005ee2a3cb4e 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
[...]
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +void copy_mc_highpage(struct page *to, struct page *from)
> +{
> +	void *kto = page_address(to);
> +	void *kfrom = page_address(from);
> +
> +	copy_mc_page(kto, kfrom);
> +	do_mte(to, from, kto, kfrom, true);
> +}
> +EXPORT_SYMBOL(copy_mc_highpage);
> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +			unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	copy_mc_highpage(to, from);
> +	flush_dcache_page(to);
> +	return 0;
> +}

This one always returns 0. Does it actually catch any memory failures?

> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 28ec35e3d210..0fdab18f2f07 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex)
>  	return ((unsigned long)&ex->fixup + ex->fixup);
>  }
>  
> +static bool ex_handler_fixup(const struct exception_table_entry *ex,
> +			     struct pt_regs *regs)
> +{
> +	regs->pc = get_ex_fixup(ex);
> +	return true;
> +}

Should we prepare some error here like -EFAULT? That's done in
ex_handler_uaccess_err_zero().

-- 
Catalin


* Re: [PATCH -next v8 4/4] arm64: add cow to machine check safe
  2023-04-11 16:45   ` Catalin Marinas
@ 2023-04-12  2:31     ` Tong Tiangen
  0 siblings, 0 replies; 9+ messages in thread
From: Tong Tiangen @ 2023-04-12  2:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Will Deacon, Alexander Viro, x86, H . Peter Anvin,
	linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Guohanjun,
	Xie XiuQi



On 2023/4/12 0:45, Catalin Marinas wrote:
> On Mon, Dec 19, 2022 at 12:00:08PM +0000, Tong Tiangen wrote:
>> [...]
> 
> This series needs rebasing onto a newer kernel. Some random comments
> below.

OK, very willing to do it :)

> 
>> diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
>> new file mode 100644
>> index 000000000000..03d657a182f6
>> --- /dev/null
>> +++ b/arch/arm64/lib/copy_mc_page.S
>> @@ -0,0 +1,82 @@
> [...]
>> +SYM_FUNC_START(__pi_copy_mc_page)
>> +alternative_if ARM64_HAS_NO_HW_PREFETCH
>> +	// Prefetch three cache lines ahead.
>> +	prfm	pldl1strm, [x1, #128]
>> +	prfm	pldl1strm, [x1, #256]
>> +	prfm	pldl1strm, [x1, #384]
>> +alternative_else_nop_endif
>> +
>> +CPY_MC(9998f, ldp	x2, x3, [x1])
>> +CPY_MC(9998f, ldp	x4, x5, [x1, #16])
>> +CPY_MC(9998f, ldp	x6, x7, [x1, #32])
>> +CPY_MC(9998f, ldp	x8, x9, [x1, #48])
>> +CPY_MC(9998f, ldp	x10, x11, [x1, #64])
>> +CPY_MC(9998f, ldp	x12, x13, [x1, #80])
>> +CPY_MC(9998f, ldp	x14, x15, [x1, #96])
>> +CPY_MC(9998f, ldp	x16, x17, [x1, #112])
> [...]
> [...]
>> +9998:	ret
> 
> What I don't understand, is there any error returned here or the bytes
> not copied? I can see its return value is never used in this series.
> 
> Also, do we need to distinguish between fault on the source or the
> destination?

Oh, I missed that. This should return the number of bytes not copied;
it will be fixed in the next version.

> 
>> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
>> index 5018ac03b6bf..bf4dd861c41c 100644
>> --- a/arch/arm64/lib/mte.S
>> +++ b/arch/arm64/lib/mte.S
>> @@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags)
>>   	ret
>>   SYM_FUNC_END(mte_copy_page_tags)
>>   
>> +/*
>> + * Copy the tags from the source page to the destination one with machine check safe
>> + *   x0 - address of the destination page
>> + *   x1 - address of the source page
>> + */
>> +SYM_FUNC_START(mte_copy_mc_page_tags)
>> +	mov	x2, x0
>> +	mov	x3, x1
>> +	multitag_transfer_size x5, x6
>> +1:
>> +CPY_MC(2f, ldgm	x4, [x3])
>> +	stgm	x4, [x2]
>> +	add	x2, x2, x5
>> +	add	x3, x3, x5
>> +	tst	x2, #(PAGE_SIZE - 1)
>> +	b.ne	1b
>> +2:	ret
>> +SYM_FUNC_END(mte_copy_mc_page_tags)
> 
> While the data copy above handles errors on both source and destination,
> here you skip the destination. Any reason?

Oh, my fault, I missed the destination.


> 
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index 8dd5a8fe64b4..005ee2a3cb4e 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
> [...]
>> +#ifdef CONFIG_ARCH_HAS_COPY_MC
>> +void copy_mc_highpage(struct page *to, struct page *from)
>> +{
>> +	void *kto = page_address(to);
>> +	void *kfrom = page_address(from);
>> +
>> +	copy_mc_page(kto, kfrom);
>> +	do_mte(to, from, kto, kfrom, true);
>> +}
>> +EXPORT_SYMBOL(copy_mc_highpage);
>> +
>> +int copy_mc_user_highpage(struct page *to, struct page *from,
>> +			unsigned long vaddr, struct vm_area_struct *vma)
>> +{
>> +	copy_mc_highpage(to, from);
>> +	flush_dcache_page(to);
>> +	return 0;
>> +}
> 
> This one always returns 0. Does it actually catch any memory failures?

Yes, it will be fixed in the next version.

> 
>> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
>> +#endif
>> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
>> index 28ec35e3d210..0fdab18f2f07 100644
>> --- a/arch/arm64/mm/extable.c
>> +++ b/arch/arm64/mm/extable.c
>> @@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex)
>>   	return ((unsigned long)&ex->fixup + ex->fixup);
>>   }
>>   
>> +static bool ex_handler_fixup(const struct exception_table_entry *ex,
>> +			     struct pt_regs *regs)
>> +{
>> +	regs->pc = get_ex_fixup(ex);
>> +	return true;
>> +}
> 
> Should we prepare some error here like -EFAULT? That's done in
> ex_handler_uaccess_err_zero().

Yes, it should be done here; it will be fixed in the next version.

Thank you for these great suggestions.
Tong.

> 

