* [RFC PATCH -next V3 0/6] arm64: add machine check safe support
@ 2022-04-12  7:25 ` Tong Tiangen
  0 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

As memory capacity and density increase, so does the probability of
memory errors. The growing size and density of server RAM in data
centers and clouds has led to a corresponding rise in uncorrectable
memory errors.

Currently, the kernel has mechanisms to recover from hardware memory
errors. This patchset provides a new recovery mechanism.

For arm64, hardware memory errors are handled by do_sea(), which divides
them into two cases:
 1. The error is consumed in user mode: the solution is to kill the user
    process and isolate the faulty page.
 2. The error is consumed in kernel mode: the solution is to panic.

For case 2, an undifferentiated panic may not be the optimal choice; it
can be handled better. In some scenarios we can avoid the panic, such as
uaccess: if a uaccess operation fails due to a memory error, only the
user process is affected, so killing the user process and isolating the
faulty user page is a better choice.
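
Concretely, the recovery path added in patch 3 has roughly this shape (a
simplified sketch with a hypothetical function name, not the literal
kernel code):

    /*
     * Simplified from patch 3: recover only when the error was consumed
     * in kernel mode on behalf of a user process and a machine-check-safe
     * fixup exists; otherwise keep the existing panic behaviour.
     */
    static bool recover_kernel_sea(struct pt_regs *regs)
    {
    	if (user_mode(regs) || !current->mm)
    		return false;
    	if (!fixup_exception_mc(regs))
    		return false;
    	force_sig(SIGBUS);	/* kill only the affected process */
    	return true;
    }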

This patchset can be divided into three parts:
 1. Patches 1/2   - minor fixes to the associated code.
 2. Patch 3       - add the machine check safe framework for arm64.
 3. Patches 4/5/6 - make arm64 uaccess and COW machine check safe.

Since V3:
 1. Per Mark's suggestion, all uaccess can now be recovered from memory
    errors.
 2. Pagecache reading is now also supported as part of uaccess
    (copy_to_user()), and the code-duplication problem is solved.
    Thanks to Robin for the suggestion.
 3. Per Mark's suggestion, updated the commit message of patch 2/5.
 4. Per Borislav's suggestion, updated the commit message of patch 1/5.

Since V2:
 1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
    ARM64_UCE_KERNEL_RECOVERY.
 2. Add two new scenarios: COW and pagecache reading.
 3. Fix two small bugs (the first two patches).

V1 is here:
https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/

Tong Tiangen (6):
  x86: fix function define in copy_mc_to_user
  arm64: fix types in copy_highpage()
  arm64: add support for machine check error safe
  arm64: add copy_{to, from}_user to machine check safe
  arm64: add {get, put}_user to machine check safe
  arm64: add cow to machine check safe

 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 30 +++++++++
 arch/arm64/include/asm/asm-uaccess.h | 16 +++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/include/asm/page.h        | 10 +++
 arch/arm64/include/asm/uaccess.h     |  4 +-
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_from_user.S      | 15 +++--
 arch/arm64/lib/copy_page_mc.S        | 99 ++++++++++++++++++++++++++++
 arch/arm64/lib/copy_to_user.S        | 25 ++++---
 arch/arm64/mm/copypage.c             | 36 ++++++++--
 arch/arm64/mm/extable.c              | 22 +++++++
 arch/arm64/mm/fault.c                | 28 ++++++++
 arch/x86/include/asm/uaccess.h       |  1 +
 include/linux/highmem.h              |  8 +++
 include/linux/uaccess.h              |  8 +++
 mm/memory.c                          |  2 +-
 17 files changed, 286 insertions(+), 22 deletions(-)
 create mode 100644 arch/arm64/lib/copy_page_mc.S

-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 1/6] x86: fix function define in copy_mc_to_user
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

x86 has its own implementation of copy_mc_to_user(), but does not use a
#define to declare it.

This may cause problems. For example, if another architecture enables
CONFIG_ARCH_HAS_COPY_MC but wants to use copy_mc_to_user() outside the
architecture, the code added to include/linux/uaccess.h would be as
follows:

    #ifndef copy_mc_to_user
    static inline unsigned long __must_check
    copy_mc_to_user(void *dst, const void *src, size_t cnt)
    {
	    ...
    }
    #endif

This definition would then conflict with the x86 implementation and
cause compilation errors.
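
The fix is the usual kernel idiom: the architecture declares a macro of
the same name alongside its prototype, so the generic #ifndef fallback
compiles out. Combining this patch's x86 side with the generic fallback
added in patch 3, the pattern is:

    /* arch/x86/include/asm/uaccess.h: prototype plus same-named macro */
    unsigned long __must_check
    copy_mc_to_user(void *to, const void *from, unsigned len);
    #define copy_mc_to_user copy_mc_to_user

    /* include/linux/uaccess.h: used only when the arch did not define it */
    #ifndef copy_mc_to_user
    static inline unsigned long __must_check
    copy_mc_to_user(void *dst, const void *src, size_t cnt)
    {
    	return raw_copy_to_user(dst, src, cnt);
    }
    #endif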

Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()")
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/x86/include/asm/uaccess.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index f78e2b3501a1..e18c5f098025 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 2/6] arm64: fix types in copy_highpage()
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

In copy_highpage() the `kto` and `kfrom` local variables are pointers to
struct page, but these are used to hold arbitrary pointers to kernel
memory. Each call to page_address() returns a void pointer to memory
associated with the relevant page, and copy_page() expects void pointers
to this memory.

This inconsistency was introduced in commit 2563776b41c3 ("arm64: mte:
Tags-aware copy_{user_,}highpage() implementations") and while this
doesn't appear to be harmful in practice it is clearly wrong.

Correct this by making `kto` and `kfrom` void pointers.

Fixes: 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations")
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/mm/copypage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index b5447e53cd73..0dea80bf6de4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -16,8 +16,8 @@
 
 void copy_highpage(struct page *to, struct page *from)
 {
-	struct page *kto = page_address(to);
-	struct page *kfrom = page_address(from);
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
 
 	copy_page(kto, kfrom);
 
-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 3/6] arm64: add support for machine check error safe
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

During arm64 kernel hardware memory error processing (do_sea()), if the
error is consumed in the kernel, the current handling is to panic.
However, this is not optimal.

Take uaccess as an example: if a uaccess operation fails due to a memory
error, only the user process is affected, so killing the user process
and isolating the faulty user page is a better choice than a kernel
panic.

This patch only enables the machine check safe framework: it adds
exception fixup before the kernel panic in do_sea(), limited to hardware
memory errors consumed in kernel mode that were triggered by user-mode
processes. If the fixup succeeds, the panic can be avoided.

Consistent with PPC/x86, this is implemented via CONFIG_ARCH_HAS_COPY_MC.

Also add copy_mc_to_user() to include/linux/uaccess.h; this helper is
called when CONFIG_ARCH_HAS_COPY_MC is enabled.
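
The contract matches copy_to_user(): the return value is the number of
bytes not copied. A minimal sketch of a hypothetical caller (the
function and parameter names are illustrative; note that this prototype
is not __user-annotated):

    /*
     * Hypothetical caller: a short copy caused by an uncorrectable
     * memory error is reported like any other failed uaccess, so the
     * caller can fail gracefully instead of the kernel panicking.
     */
    static int copy_record_to_user(void *ubuf, const void *kbuf, size_t len)
    {
    	if (copy_mc_to_user(ubuf, kbuf, len))
    		return -EFAULT;
    	return 0;
    }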

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/extable.h |  1 +
 arch/arm64/mm/extable.c          | 18 ++++++++++++++++++
 arch/arm64/mm/fault.c            | 28 ++++++++++++++++++++++++++++
 include/linux/uaccess.h          |  8 ++++++++
 5 files changed, 56 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d9325dd95eba..012e38309955 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -19,6 +19,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..f80ebd0addfd 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_mc(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 489455309695..5de256a25464 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -9,6 +9,7 @@
 
 #include <asm/asm-extable.h>
 #include <asm/ptrace.h>
+#include <asm/esr.h>
 
 static inline unsigned long
 get_ex_fixup(const struct exception_table_entry *ex)
@@ -73,6 +74,7 @@ bool fixup_exception(struct pt_regs *regs)
 
 	switch (ex->type) {
 	case EX_TYPE_FIXUP:
+	case EX_TYPE_UACCESS_MC:
 		return ex_handler_fixup(ex, regs);
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
@@ -84,3 +86,19 @@ bool fixup_exception(struct pt_regs *regs)
 
 	BUG();
 }
+
+bool fixup_exception_mc(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_MC:
+		return ex_handler_fixup(ex, regs);
+	}
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..56b13cf8bf1d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+static bool arm64_process_kernel_sea(unsigned long addr, unsigned int esr,
+				     struct pt_regs *regs, int sig, int code)
+{
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
+		return false;
+
+	if (user_mode(regs) || !current->mm)
+		return false;
+
+	if (apei_claim_sea(regs) < 0)
+		return false;
+
+	current->thread.fault_address = 0;
+	current->thread.fault_code = esr;
+
+	if (!fixup_exception_mc(regs))
+		return false;
+
+	arm64_force_sig_fault(sig, code, addr,
+		"Uncorrected hardware memory error in kernel-access\n");
+
+	return true;
+}
+
 static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
@@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
 		 */
 		siaddr  = untagged_addr(far);
 	}
+
+	if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
+		return 0;
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa..dd952aeecdc1 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

Make copy_{to,from}_user() machine check safe.

If the copy fails due to a hardware memory error, only the relevant
process is affected, so killing the user process and isolating the
faulty user page is a more reasonable choice than a kernel panic.

Add a new extable type, EX_TYPE_UACCESS_MC, which can be used for
uaccess code that can recover from hardware memory errors.
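
For example, the ldrb1 load below expands, via user_ldst_mc, into
roughly the following (concrete registers and labels chosen for
illustration):

    8888:	ldtrb	w4, [x1]	// user load that may consume the error
    	add	x1, x1, #1	// post-increment done separately
    	// on an unhandled fault (including a SEA), branch to label 9998
    	_asm_extable_uaccess_mc	8888b, 9998f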

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h | 11 +++++++++++
 arch/arm64/include/asm/asm-uaccess.h | 16 ++++++++++++++++
 arch/arm64/lib/copy_from_user.S      | 15 ++++++++++-----
 arch/arm64/lib/copy_to_user.S        | 25 +++++++++++++++++--------
 4 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index c39f2437e08e..8af4e7cc9578 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -8,6 +8,9 @@
 #define EX_TYPE_UACCESS_ERR_ZERO	3
 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
 
+/* _MC indicates that the fixup can recover from machine check errors */
+#define EX_TYPE_UACCESS_MC		5
+
 #ifdef __ASSEMBLY__
 
 #define __ASM_EXTABLE_RAW(insn, fixup, type, data)	\
@@ -27,6 +30,14 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0)
 	.endm
 
+/*
+ * Create an exception table entry for `insn`, which will branch to `fixup`
+ * when an unhandled fault (including a SEA fault) is taken.
+ */
+	.macro          _asm_extable_uaccess_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_UACCESS_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 0557af834e03..bb17f0829042 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -92,4 +92,20 @@ alternative_else_nop_endif
 
 		_asm_extable	8888b,\l;
 	.endm
+
+	.macro user_ldp_mc l, reg1, reg2, addr, post_inc
+8888:		ldtr	\reg1, [\addr];
+8889:		ldtr	\reg2, [\addr, #8];
+		add	\addr, \addr, \post_inc;
+
+		_asm_extable_uaccess_mc	8888b, \l;
+		_asm_extable_uaccess_mc	8889b, \l;
+	.endm
+
+	.macro user_ldst_mc l, inst, reg, addr, post_inc
+8888:		\inst		\reg, [\addr];
+		add		\addr, \addr, \post_inc;
+
+		_asm_extable_uaccess_mc	8888b, \l;
+	.endm
 #endif
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 34e317907524..e32c0747a5f1 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -21,7 +21,7 @@
  */
 
 	.macro ldrb1 reg, ptr, val
-	user_ldst 9998f, ldtrb, \reg, \ptr, \val
+	user_ldst_mc 9998f, ldtrb, \reg, \ptr, \val
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -29,7 +29,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	user_ldst 9997f, ldtrh, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtrh, \reg, \ptr, \val
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -37,7 +37,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	user_ldst 9997f, ldtr, \reg, \ptr, \val
+	user_ldst_mc 9997f, ldtr, \reg, \ptr, \val
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -45,7 +45,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	user_ldp 9997f, \reg1, \reg2, \ptr, \val
+	user_ldp_mc 9997f, \reg1, \reg2, \ptr, \val
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -54,6 +54,7 @@
 
 end	.req	x5
 srcin	.req	x15
+esr	.req	x16
 SYM_FUNC_START(__arch_copy_from_user)
 	add	end, x0, x2
 	mov	srcin, x1
@@ -62,7 +63,11 @@ SYM_FUNC_START(__arch_copy_from_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+9997:	mrs esr, esr_el1			// Check exception first
+	and esr, esr, #ESR_ELx_FSC
+	cmp esr, #ESR_ELx_FSC_EXTABT
+	b.eq 9998f
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 USER(9998f, ldtrb tmp1w, [srcin])
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..afb53e45a21f 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,31 +20,35 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	1000:	ldrb  \reg, [\ptr], \val
+	_asm_extable_uaccess_mc 1000b, 9998f;
 	.endm
 
 	.macro strb1 reg, ptr, val
-	user_ldst 9998f, sttrb, \reg, \ptr, \val
+	user_ldst_mc 9998f, sttrb, \reg, \ptr, \val
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	1001:	ldrh  \reg, [\ptr], \val
+	_asm_extable_uaccess_mc 1001b, 9998f;
 	.endm
 
 	.macro strh1 reg, ptr, val
-	user_ldst 9997f, sttrh, \reg, \ptr, \val
+	user_ldst_mc 9997f, sttrh, \reg, \ptr, \val
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	1002:	ldr \reg, [\ptr], \val
+	_asm_extable_uaccess_mc 1002b, 9998f;
 	.endm
 
 	.macro str1 reg, ptr, val
-	user_ldst 9997f, sttr, \reg, \ptr, \val
+	user_ldst_mc 9997f, sttr, \reg, \ptr, \val
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	1003:	ldp \reg1, \reg2, [\ptr], \val
+	_asm_extable_uaccess_mc 1003b, 9998f;
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -53,6 +57,7 @@
 
 end	.req	x5
 srcin	.req	x15
+esr	.req	x16
 SYM_FUNC_START(__arch_copy_to_user)
 	add	end, x0, x2
 	mov	srcin, x1
@@ -61,7 +66,11 @@ SYM_FUNC_START(__arch_copy_to_user)
 	ret
 
 	// Exception fixups
-9997:	cmp	dst, dstin
+9997:	mrs esr, esr_el1			// Check exception first
+	and esr, esr, #ESR_ELx_FSC
+	cmp esr, #ESR_ELx_FSC_EXTABT
+	b.eq 9998f
+	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
 	ldrb	tmp1w, [srcin]
-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 5/6] arm64: add {get, put}_user to machine check safe
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

Make {get,put}_user() machine check safe.

If a get/put fails due to a hardware memory error, only the relevant
process is affected, so killing the user process and isolating the
faulty user page is a more reasonable choice than a kernel panic.

Add a new extable type, EX_TYPE_UACCESS_MC_ERR_ZERO, which can be used
for uaccess code that can recover from hardware memory errors. The
difference from EX_TYPE_UACCESS_MC is that this type additionally
encodes two target registers: one that receives the error code and one
that must be zeroed.
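
The caller-visible contract is unchanged, as in this sketch of a
hypothetical caller: on any fault, including an uncorrectable memory
error, the error register is set and the destination is zeroed; on a
memory error the process additionally gets SIGBUS instead of the kernel
panicking.

    /*
     * Hypothetical caller: unchanged on the success and -EFAULT paths;
     * on an uncorrectable memory error the same fixup now runs, so the
     * kernel no longer panics.
     */
    static int read_user_word(unsigned long __user *uptr, unsigned long *out)
    {
    	unsigned long val;

    	if (get_user(val, uptr))
    		return -EFAULT;	/* val was zeroed by the fixup */
    	*out = val;
    	return 0;
    }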

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h | 14 ++++++++++++++
 arch/arm64/include/asm/uaccess.h     |  4 ++--
 arch/arm64/mm/extable.c              |  3 +++
 3 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 8af4e7cc9578..62eafb651773 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -10,6 +10,7 @@
 
 /* _MC indicates that the fixup can recover from machine check errors */
 #define EX_TYPE_UACCESS_MC		5
+#define EX_TYPE_UACCESS_MC_ERR_ZERO	6
 
 #ifdef __ASSEMBLY__
 
@@ -75,6 +76,15 @@
 #define EX_DATA_REG(reg, gpr)						\
 	"((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")"
 
+#define _ASM_EXTABLE_UACCESS_MC_ERR_ZERO(insn, fixup, err, zero)		\
+	__DEFINE_ASM_GPR_NUMS							\
+	__ASM_EXTABLE_RAW(#insn, #fixup,					\
+			  __stringify(EX_TYPE_UACCESS_MC_ERR_ZERO),		\
+			  "("							\
+			    EX_DATA_REG(ERR, err) " | "				\
+			    EX_DATA_REG(ZERO, zero)				\
+			  ")")
+
 #define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero)		\
 	__DEFINE_ASM_GPR_NUMS						\
 	__ASM_EXTABLE_RAW(#insn, #fixup, 				\
@@ -87,6 +97,10 @@
 #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err)			\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr)
 
+
+#define _ASM_EXTABLE_UACCESS_MC_ERR(insn, fixup, err)			\
+	_ASM_EXTABLE_UACCESS_MC_ERR_ZERO(insn, fixup, err, wzr)
+
 #define EX_DATA_REG_DATA_SHIFT	0
 #define EX_DATA_REG_DATA	GENMASK(4, 0)
 #define EX_DATA_REG_ADDR_SHIFT	5
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index e8dce0cc5eaa..e41b47df48b0 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -236,7 +236,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	asm volatile(							\
 	"1:	" load "	" reg "1, [%2]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1)			\
+	_ASM_EXTABLE_UACCESS_MC_ERR_ZERO(1b, 2b, %w0, %w1)		\
 	: "+r" (err), "=&r" (x)						\
 	: "r" (addr))
 
@@ -325,7 +325,7 @@ do {									\
 	asm volatile(							\
 	"1:	" store "	" reg "1, [%2]\n"			\
 	"2:\n"								\
-	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)				\
+	_ASM_EXTABLE_UACCESS_MC_ERR(1b, 2b, %w0)			\
 	: "+r" (err)							\
 	: "r" (x), "r" (addr))
 
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 5de256a25464..ca7388f3923b 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -79,6 +79,7 @@ bool fixup_exception(struct pt_regs *regs)
 	case EX_TYPE_BPF:
 		return ex_handler_bpf(ex, regs);
 	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_UACCESS_MC_ERR_ZERO:
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
@@ -98,6 +99,8 @@ bool fixup_exception_mc(struct pt_regs *regs)
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_MC:
 		return ex_handler_fixup(ex, regs);
+	case EX_TYPE_UACCESS_MC_ERR_ZERO:
+		return ex_handler_uaccess_err_zero(ex, regs);
 	}
 
 	return false;
-- 
2.18.0.huawei.25


* [RFC PATCH -next V3 6/6] arm64: add cow to machine check safe
  2022-04-12  7:25 ` Tong Tiangen
@ 2022-04-12  7:25   ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-12  7:25 UTC (permalink / raw)
  To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi,
	Tong Tiangen

COW (copy-on-write) processing copies the data of a user process. When a
hardware memory error is encountered during the copy, only the relevant
process is affected, so killing the user process and isolating the
faulty user page is a more reasonable choice than a kernel panic.

Add a new helper, copy_page_mc(), which provides a machine-check-safe
page copy implementation. At present it is only used for COW, but more
scenarios can be supported in the future: as long as the consequences of
a page-copy failure are not fatal (e.g. only a user process is
affected), this helper can be used, as sketched below.
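
For illustration, a hypothetical future user (not part of this series)
would only need the copy itself to be non-fatal:

    /*
     * Hypothetical future caller: if the source page has an
     * uncorrectable error, the copy is aborted and the owning process
     * is killed by the SEA handler; the kernel itself keeps running.
     */
    static void copy_user_page_safe(struct page *dst, struct page *src,
    				unsigned long vaddr,
    				struct vm_area_struct *vma)
    {
    	copy_user_highpage_mc(dst, src, vaddr, vma);
    }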

copy_page_mc() in copy_page_mc.S borrows heavily from copy_page() in
copy_page.S; the main difference is that copy_page_mc() adds extable
entries to support machine check safety. This is largely to keep the
patch simple; if needed, the copy_page() optimizations can be folded in.

Add a new extable type, EX_TYPE_COPY_PAGE_MC, used by copy_page_mc().

This type is only processed in fixup_exception_mc(). The reason is that
copy_page_mc() is identical to copy_page() except for the machine check
handling, and copy_page() does not need any exception fixup.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/asm-extable.h |  5 ++
 arch/arm64/include/asm/page.h        | 10 +++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_page_mc.S        | 99 ++++++++++++++++++++++++++++
 arch/arm64/mm/copypage.c             | 36 ++++++++--
 arch/arm64/mm/extable.c              |  1 +
 include/linux/highmem.h              |  8 +++
 mm/memory.c                          |  2 +-
 8 files changed, 156 insertions(+), 7 deletions(-)
 create mode 100644 arch/arm64/lib/copy_page_mc.S

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 62eafb651773..274bd7edcff6 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -11,6 +11,7 @@
 /* _MC indicates that the fixup can recover from machine check errors */
 #define EX_TYPE_UACCESS_MC		5
 #define EX_TYPE_UACCESS_MC_ERR_ZERO	6
+#define EX_TYPE_COPY_PAGE_MC		7
 
 #ifdef __ASSEMBLY__
 
@@ -39,6 +40,10 @@
 	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_UACCESS_MC, 0)
 	.endm
 
+	.macro          _asm_extable_copy_page_mc, insn, fixup
+	__ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_COPY_PAGE_MC, 0)
+	.endm
+
 /*
  * Create an exception table entry for `insn` if `fixup` is provided. Otherwise
  * do nothing.
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 993a27ea6f54..832571a7dddb 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+extern void copy_page_mc(void *to, const void *from);
+void copy_highpage_mc(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_HIGHPAGE_MC
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#endif
+
 struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..0d9f292ef68a 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S
new file mode 100644
index 000000000000..93b4203bdf45
--- /dev/null
+++ b/arch/arm64/lib/copy_page_mc.S
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with machine check
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+SYM_FUNC_START(__pi_copy_page_mc)
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+100:	ldp	x2, x3, [x1]
+101:	ldp	x4, x5, [x1, #16]
+102:	ldp	x6, x7, [x1, #32]
+103:	ldp	x8, x9, [x1, #48]
+104:	ldp	x10, x11, [x1, #64]
+105:	ldp	x12, x13, [x1, #80]
+106:	ldp	x14, x15, [x1, #96]
+107:	ldp	x16, x17, [x1, #112]
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+alternative_if ARM64_HAS_NO_HW_PREFETCH
+	prfm	pldl1strm, [x1, #384]
+alternative_else_nop_endif
+
+	stnp	x2, x3, [x0, #-256]
+200:	ldp	x2, x3, [x1]
+	stnp	x4, x5, [x0, #16 - 256]
+201:	ldp	x4, x5, [x1, #16]
+	stnp	x6, x7, [x0, #32 - 256]
+202:	ldp	x6, x7, [x1, #32]
+	stnp	x8, x9, [x0, #48 - 256]
+203:	ldp	x8, x9, [x1, #48]
+	stnp	x10, x11, [x0, #64 - 256]
+204:	ldp	x10, x11, [x1, #64]
+	stnp	x12, x13, [x0, #80 - 256]
+205:	ldp	x12, x13, [x1, #80]
+	stnp	x14, x15, [x0, #96 - 256]
+206:	ldp	x14, x15, [x1, #96]
+	stnp	x16, x17, [x0, #112 - 256]
+207:	ldp	x16, x17, [x1, #112]
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
+
+300:	ret
+
+_asm_extable_copy_page_mc 100b, 300b
+_asm_extable_copy_page_mc 101b, 300b
+_asm_extable_copy_page_mc 102b, 300b
+_asm_extable_copy_page_mc 103b, 300b
+_asm_extable_copy_page_mc 104b, 300b
+_asm_extable_copy_page_mc 105b, 300b
+_asm_extable_copy_page_mc 106b, 300b
+_asm_extable_copy_page_mc 107b, 300b
+_asm_extable_copy_page_mc 200b, 300b
+_asm_extable_copy_page_mc 201b, 300b
+_asm_extable_copy_page_mc 202b, 300b
+_asm_extable_copy_page_mc 203b, 300b
+_asm_extable_copy_page_mc 204b, 300b
+_asm_extable_copy_page_mc 205b, 300b
+_asm_extable_copy_page_mc 206b, 300b
+_asm_extable_copy_page_mc 207b, 300b
+
+SYM_FUNC_END(__pi_copy_page_mc)
+SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc)
+EXPORT_SYMBOL(copy_page_mc)
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 0dea80bf6de4..0f28edfcb234 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -14,13 +14,8 @@
 #include <asm/cpufeature.h>
 #include <asm/mte.h>
 
-void copy_highpage(struct page *to, struct page *from)
+static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom)
 {
-	void *kto = page_address(to);
-	void *kfrom = page_address(from);
-
-	copy_page(kto, kfrom);
-
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
 		page_kasan_tag_reset(to);
@@ -35,6 +30,15 @@ void copy_highpage(struct page *to, struct page *from)
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
+
+void copy_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
 EXPORT_SYMBOL(copy_highpage);
 
 void copy_user_highpage(struct page *to, struct page *from,
@@ -44,3 +48,23 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+void copy_highpage_mc(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+
+	copy_page_mc(kto, kfrom);
+	do_mte(to, from, kto, kfrom);
+}
+EXPORT_SYMBOL(copy_highpage_mc);
+
+void copy_user_highpage_mc(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_highpage_mc(to, from);
+	flush_dcache_page(to);
+}
+EXPORT_SYMBOL_GPL(copy_user_highpage_mc);
+#endif
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index ca7388f3923b..7ee67fcf9e81 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -98,6 +98,7 @@ bool fixup_exception_mc(struct pt_regs *regs)
 
 	switch (ex->type) {
 	case EX_TYPE_UACCESS_MC:
+	case EX_TYPE_COPY_PAGE_MC:
 		return ex_handler_fixup(ex, regs);
 	case EX_TYPE_UACCESS_MC_ERR_ZERO:
 		return ex_handler_uaccess_err_zero(ex, regs);
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9c..a9dbf331b038 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -283,6 +283,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC
+#define copy_user_highpage_mc copy_user_highpage
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
@@ -298,6 +302,10 @@ static inline void copy_highpage(struct page *to, struct page *from)
 
 #endif
 
+#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC
+#define copy_highpage_mc copy_highpage
+#endif
+
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
 			       size_t len)
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..d5f62234152d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2767,7 +2767,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
+		copy_user_highpage_mc(dst, src, addr, vma);
 		return true;
 	}
 
-- 
2.18.0.huawei.25


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 1/6] x86: fix function define in copy_mc_to_user
  2022-04-12  7:25   ` Tong Tiangen
@ 2022-04-12 11:49     ` Kefeng Wang
  -1 siblings, 0 replies; 36+ messages in thread
From: Kefeng Wang @ 2022-04-12 11:49 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Xie XiuQi


On 2022/4/12 15:25, Tong Tiangen wrote:
> X86 has its own implementation of copy_mc_to_user() but does not use
> #define to declare it.
>
> This may cause problems; for example, if another architecture enables
> CONFIG_ARCH_HAS_COPY_MC but wants to use copy_mc_to_user() outside the
> architecture, the code added to include/linux/uaccess.h is as follows:
>
>      #ifndef copy_mc_to_user
>      static inline unsigned long __must_check
>      copy_mc_to_user(void *dst, const void *src, size_t cnt)
>      {
> 	    ...
>      }
>      #endif
>
> Then this definition will conflict with the implementation of X86 and cause
> compilation errors.
Does powerpc need this define?
>
> Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()")
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>   arch/x86/include/asm/uaccess.h | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index f78e2b3501a1..e18c5f098025 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>   
>   unsigned long __must_check
>   copy_mc_to_user(void *to, const void *from, unsigned len);
> +#define copy_mc_to_user copy_mc_to_user
>   #endif
>   
>   /*

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 2/6] arm64: fix types in copy_highpage()
  2022-04-12  7:25   ` Tong Tiangen
@ 2022-04-12 11:50     ` Kefeng Wang
  -1 siblings, 0 replies; 36+ messages in thread
From: Kefeng Wang @ 2022-04-12 11:50 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Xie XiuQi


On 2022/4/12 15:25, Tong Tiangen wrote:
> In copy_highpage() the `kto` and `kfrom` local variables are pointers to
> struct page, but these are used to hold arbitrary pointers to kernel
> memory. Each call to page_address() returns a void pointer to memory
> associated with the relevant page, and copy_page() expects void pointers
> to this memory.
>
> This inconsistency was introduced in commit 2563776b41c3 ("arm64: mte:
> Tags-aware copy_{user_,}highpage() implementations") and while this
> doesn't appear to be harmful in practice it is clearly wrong.
>
> Correct this by making `kto` and `kfrom` void pointers.
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>

> Fixes: 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations")
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> ---
>   arch/arm64/mm/copypage.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index b5447e53cd73..0dea80bf6de4 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -16,8 +16,8 @@
>   
>   void copy_highpage(struct page *to, struct page *from)
>   {
> -	struct page *kto = page_address(to);
> -	struct page *kfrom = page_address(from);
> +	void *kto = page_address(to);
> +	void *kfrom = page_address(from);
>   
>   	copy_page(kto, kfrom);
>   

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 3/6] arm64: add support for machine check error safe
  2022-04-12  7:25   ` Tong Tiangen
@ 2022-04-12 13:08     ` Kefeng Wang
  -1 siblings, 0 replies; 36+ messages in thread
From: Kefeng Wang @ 2022-04-12 13:08 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Xie XiuQi


On 2022/4/12 15:25, Tong Tiangen wrote:
> During the processing of arm64 kernel hardware memory errors (do_sea()),
> if the error is consumed in the kernel, the current handling is to panic.
> However, this is not optimal.
>
> Take uaccess for example: if a uaccess operation fails due to a memory
> error, only the user process will be affected, so killing the user process
> and isolating the user page with hardware memory errors is a better choice.
>
> This patch only enables the machine check error safe framework: it adds
> exception fixup before the kernel panic in do_sea(), limited to hardware
> memory errors consumed in kernel mode that were triggered by user-mode
> processes. If the fixup is successful, the panic can be avoided.
>
> Consistent with PPC/x86, it is implemented by CONFIG_ARCH_HAS_COPY_MC.
>
> Also add copy_mc_to_user() in include/linux/uaccess.h; this helper is
> called when CONFIG_ARCH_HAS_COPY_MC is enabled.
>
> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
> ---
>   arch/arm64/Kconfig               |  1 +
>   arch/arm64/include/asm/extable.h |  1 +
>   arch/arm64/mm/extable.c          | 18 ++++++++++++++++++
>   arch/arm64/mm/fault.c            | 28 ++++++++++++++++++++++++++++
>   include/linux/uaccess.h          |  8 ++++++++
>   5 files changed, 56 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index d9325dd95eba..012e38309955 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -19,6 +19,7 @@ config ARM64
>   	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
>   	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
>   	select ARCH_HAS_CACHE_LINE_SIZE
> +	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
>   	select ARCH_HAS_CURRENT_STACK_POINTER
>   	select ARCH_HAS_DEBUG_VIRTUAL
>   	select ARCH_HAS_DEBUG_VM_PGTABLE
> diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
> index 72b0e71cc3de..f80ebd0addfd 100644
> --- a/arch/arm64/include/asm/extable.h
> +++ b/arch/arm64/include/asm/extable.h
> @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
>   #endif /* !CONFIG_BPF_JIT */
>   
>   bool fixup_exception(struct pt_regs *regs);
> +bool fixup_exception_mc(struct pt_regs *regs);
>   #endif
> diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
> index 489455309695..5de256a25464 100644
> --- a/arch/arm64/mm/extable.c
> +++ b/arch/arm64/mm/extable.c
> @@ -9,6 +9,7 @@
>   
>   #include <asm/asm-extable.h>
>   #include <asm/ptrace.h>
> +#include <asm/esr.h>
>   
>   static inline unsigned long
>   get_ex_fixup(const struct exception_table_entry *ex)
> @@ -73,6 +74,7 @@ bool fixup_exception(struct pt_regs *regs)
>   
>   	switch (ex->type) {
>   	case EX_TYPE_FIXUP:
> +	case EX_TYPE_UACCESS_MC:
>   		return ex_handler_fixup(ex, regs);
>   	case EX_TYPE_BPF:
>   		return ex_handler_bpf(ex, regs);
> @@ -84,3 +86,19 @@ bool fixup_exception(struct pt_regs *regs)
>   
>   	BUG();
>   }
> +
> +bool fixup_exception_mc(struct pt_regs *regs)
> +{
> +	const struct exception_table_entry *ex;
> +
> +	ex = search_exception_tables(instruction_pointer(regs));
> +	if (!ex)
> +		return false;
> +
> +	switch (ex->type) {
> +	case EX_TYPE_UACCESS_MC:
> +		return ex_handler_fixup(ex, regs);
> +	}
> +
> +	return false;
> +}

The definition of EX_TYPE_UACCESS_MC is in patch 4, please fix that. Also,
if the arm64 exception table were sorted by exception type, we could drop
fixup_exception_mc(), right?

> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 77341b160aca..56b13cf8bf1d 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
>   	return 1; /* "fault" */
>   }
>   
> +static bool arm64_process_kernel_sea(unsigned long addr, unsigned int esr,
> +				     struct pt_regs *regs, int sig, int code)
> +{
> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
> +		return false;
> +
> +	if (user_mode(regs) || !current->mm)
> +		return false;
> +
> +	if (apei_claim_sea(regs) < 0)
> +		return false;
> +
> +	current->thread.fault_address = 0;
> +	current->thread.fault_code = esr;
> +
Use set_thread_esr(0, esr) and move it after fixup_exception_mc();
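i.e. roughly (sketch only, untested):

	if (!fixup_exception_mc(regs))
		return false;

	set_thread_esr(0, esr);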
> +	if (!fixup_exception_mc(regs))
> +		return false;
> +
> +	arm64_force_sig_fault(sig, code, addr,
> +		"Uncorrected hardware memory error in kernel-access\n");
> +
> +	return true;
> +}
> +
>   static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
>   {
>   	const struct fault_info *inf;
> @@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)
>   		 */
>   		siaddr  = untagged_addr(far);
>   	}
> +
> +	if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, inf->code))
> +		return 0;
> +

Rename arm64_process_kernel_sea() to arm64_do_kernel_sea() ?

if (!arm64_do_kernel_sea())

     arm64_notify_die();

>   	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
>   
>   	return 0;
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 546179418ffa..dd952aeecdc1 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
>   }
>   #endif
>   
> +#ifndef copy_mc_to_user
> +static inline unsigned long __must_check
> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> +{
Add check_object_size(src, cnt, true), which would make HARDENED_USERCOPY
work here.
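A rough sketch of the fallback with that added (argument order as per the
check_object_size() prototype; illustrative only):

	static inline unsigned long __must_check
	copy_mc_to_user(void *dst, const void *src, size_t cnt)
	{
		check_object_size(src, cnt, true);
		return raw_copy_to_user(dst, src, cnt);
	}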
> +	return raw_copy_to_user(dst, src, cnt);
> +}
> +#endif
> +
>   static __always_inline void pagefault_disabled_inc(void)
>   {
>   	current->pagefault_disabled++;

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 6/6] arm64: add cow to machine check safe
  2022-04-12  7:25   ` Tong Tiangen
@ 2022-04-12 16:39     ` Robin Murphy
  -1 siblings, 0 replies; 36+ messages in thread
From: Robin Murphy @ 2022-04-12 16:39 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi

On 12/04/2022 8:25 am, Tong Tiangen wrote:
[...]
> +100:	ldp	x2, x3, [x1]
> +101:	ldp	x4, x5, [x1, #16]
> +102:	ldp	x6, x7, [x1, #32]
> +103:	ldp	x8, x9, [x1, #48]
> +104:	ldp	x10, x11, [x1, #64]
> +105:	ldp	x12, x13, [x1, #80]
> +106:	ldp	x14, x15, [x1, #96]
> +107:	ldp	x16, x17, [x1, #112]
> +
> +	add	x0, x0, #256
> +	add	x1, x1, #128
> +1:
> +	tst	x0, #(PAGE_SIZE - 1)
> +
> +alternative_if ARM64_HAS_NO_HW_PREFETCH
> +	prfm	pldl1strm, [x1, #384]
> +alternative_else_nop_endif
> +
> +	stnp	x2, x3, [x0, #-256]
> +200:	ldp	x2, x3, [x1]
> +	stnp	x4, x5, [x0, #16 - 256]
> +201:	ldp	x4, x5, [x1, #16]
> +	stnp	x6, x7, [x0, #32 - 256]
> +202:	ldp	x6, x7, [x1, #32]
> +	stnp	x8, x9, [x0, #48 - 256]
> +203:	ldp	x8, x9, [x1, #48]
> +	stnp	x10, x11, [x0, #64 - 256]
> +204:	ldp	x10, x11, [x1, #64]
> +	stnp	x12, x13, [x0, #80 - 256]
> +205:	ldp	x12, x13, [x1, #80]
> +	stnp	x14, x15, [x0, #96 - 256]
> +206:	ldp	x14, x15, [x1, #96]
> +	stnp	x16, x17, [x0, #112 - 256]
> +207:	ldp	x16, x17, [x1, #112]
> +
> +	add	x0, x0, #128
> +	add	x1, x1, #128
> +
> +	b.ne	1b
> +
> +	stnp	x2, x3, [x0, #-256]
> +	stnp	x4, x5, [x0, #16 - 256]
> +	stnp	x6, x7, [x0, #32 - 256]
> +	stnp	x8, x9, [x0, #48 - 256]
> +	stnp	x10, x11, [x0, #64 - 256]
> +	stnp	x12, x13, [x0, #80 - 256]
> +	stnp	x14, x15, [x0, #96 - 256]
> +	stnp	x16, x17, [x0, #112 - 256]
> +
> +300:	ret
> +
> +_asm_extable_copy_page_mc 100b, 300b
> +_asm_extable_copy_page_mc 101b, 300b
> +_asm_extable_copy_page_mc 102b, 300b
> +_asm_extable_copy_page_mc 103b, 300b
> +_asm_extable_copy_page_mc 104b, 300b
> +_asm_extable_copy_page_mc 105b, 300b
> +_asm_extable_copy_page_mc 106b, 300b
> +_asm_extable_copy_page_mc 107b, 300b
> +_asm_extable_copy_page_mc 200b, 300b
> +_asm_extable_copy_page_mc 201b, 300b
> +_asm_extable_copy_page_mc 202b, 300b
> +_asm_extable_copy_page_mc 203b, 300b
> +_asm_extable_copy_page_mc 204b, 300b
> +_asm_extable_copy_page_mc 205b, 300b
> +_asm_extable_copy_page_mc 206b, 300b
> +_asm_extable_copy_page_mc 207b, 300b


Please add a USER_MC() macro to parallel the existing USER() one (we can 
worry about names and eventually consolidating things later), then use 
that to save all the label mess here.
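Something along these lines, paralleling the existing USER() definition
(the macro name and extable flavour here are illustrative, not a final
form):

#define USER_MC(l, x...)			\
9999:	x;					\
	_asm_extable_copy_page_mc	9999b, l

so each annotated load above becomes e.g. USER_MC(300f, ldp x2, x3, [x1]).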

Thanks,
Robin.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12  7:25   ` Tong Tiangen
@ 2022-04-12 17:08     ` Robin Murphy
  -1 siblings, 0 replies; 36+ messages in thread
From: Robin Murphy @ 2022-04-12 17:08 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi

On 12/04/2022 8:25 am, Tong Tiangen wrote:
[...]
> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index 0557af834e03..bb17f0829042 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -92,4 +92,20 @@ alternative_else_nop_endif
>   
>   		_asm_extable	8888b,\l;
>   	.endm
> +
> +	.macro user_ldp_mc l, reg1, reg2, addr, post_inc
> +8888:		ldtr	\reg1, [\addr];
> +8889:		ldtr	\reg2, [\addr, #8];
> +		add	\addr, \addr, \post_inc;
> +
> +		_asm_extable_uaccess_mc	8888b, \l;
> +		_asm_extable_uaccess_mc	8889b, \l;
> +	.endm

You're replacing the only user of this, so please just 
s/_asm_extable/_asm_extable_uaccess_mc/ in the existing macro and save 
the rest of the churn.
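i.e. the existing macro would simply become (sketch of the suggested
substitution):

	.macro user_ldp l, reg1, reg2, addr, post_inc
8888:		ldtr	\reg1, [\addr];
8889:		ldtr	\reg2, [\addr, #8];
		add	\addr, \addr, \post_inc;

		_asm_extable_uaccess_mc	8888b, \l;
		_asm_extable_uaccess_mc	8889b, \l;
	.endm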

Furthermore, how come you're not similarly updating user_stp, given that 
you *are* updating the other stores in copy_to_user?

> +
> +	.macro user_ldst_mc l, inst, reg, addr, post_inc
> +8888:		\inst		\reg, [\addr];
> +		add		\addr, \addr, \post_inc;
> +
> +		_asm_extable_uaccess_mc	8888b, \l;
> +	.endm

Similarly, I think we can just update user_ldst itself. The two 
instances that you're not replacing here are bogus anyway, and deserve 
to be fixed with the patch below first.
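The same one-line substitution would then apply to user_ldst itself, i.e.
(again just a sketch):

	.macro user_ldst l, inst, reg, addr, post_inc
8888:		\inst		\reg, [\addr];
		add		\addr, \addr, \post_inc;

		_asm_extable_uaccess_mc	8888b, \l;
	.endm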

[...]
> @@ -62,7 +63,11 @@ SYM_FUNC_START(__arch_copy_from_user)
>   	ret
>   
>   	// Exception fixups
> -9997:	cmp	dst, dstin
> +9997:	mrs esr, esr_el1			// Check exception first
> +	and esr, esr, #ESR_ELx_FSC
> +	cmp esr, #ESR_ELx_FSC_EXTABT

Should we be checking EC to make sure it's a data abort - and thus FSC 
is valid - in the first place? I'm a little fuzzy on all the possible 
paths into fixup_exception(), and it's not entirely obvious whether this 
is actually safe or not.
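If it is needed, I'd imagine something along these lines ahead of the FSC
check (register reuse purely illustrative, since the FSC comparison would
still need the unshifted value):

	mrs	esr, esr_el1
	lsr	esr, esr, #ESR_ELx_EC_SHIFT
	cmp	esr, #ESR_ELx_EC_DABT_CUR	// data abort from current EL?
	b.ne	9998f				// if not, plain fixup path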

Thanks,
Robin.

----->8-----
Subject: [PATCH] arm64: mte: Clean up user tag accessors

Invoking user_ldst to explicitly add a post-increment of 0 is silly.
Just use a normal USER() annotation and save the redundant instruction.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
  arch/arm64/lib/mte.S | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 8590af3c98c0..eeb9e45bcce8 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -93,7 +93,7 @@ SYM_FUNC_START(mte_copy_tags_from_user)
  	mov	x3, x1
  	cbz	x2, 2f
  1:
-	user_ldst 2f, ldtrb, w4, x1, 0
+USER(2f, ldtrb	w4, [x1])
  	lsl	x4, x4, #MTE_TAG_SHIFT
  	stg	x4, [x0], #MTE_GRANULE_SIZE
  	add	x1, x1, #1
@@ -120,7 +120,7 @@ SYM_FUNC_START(mte_copy_tags_to_user)
  1:
  	ldg	x4, [x1]
  	ubfx	x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE
-	user_ldst 2f, sttrb, w4, x0, 0
+USER(2f, sttrb	w4, [x0])
  	add	x0, x0, #1
  	add	x1, x1, #MTE_GRANULE_SIZE
  	subs	x2, x2, #1
-- 
2.28.0.dirty

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12 17:08     ` Robin Murphy
@ 2022-04-12 17:17       ` Robin Murphy
  -1 siblings, 0 replies; 36+ messages in thread
From: Robin Murphy @ 2022-04-12 17:17 UTC (permalink / raw)
  To: Tong Tiangen, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi

On 12/04/2022 6:08 pm, Robin Murphy wrote:
[...]
>> @@ -62,7 +63,11 @@ SYM_FUNC_START(__arch_copy_from_user)
>>       ret
>>       // Exception fixups
>> -9997:    cmp    dst, dstin
>> +9997:    mrs esr, esr_el1            // Check exception first
>> +    and esr, esr, #ESR_ELx_FSC
>> +    cmp esr, #ESR_ELx_FSC_EXTABT
> 
> Should we be checking EC to make sure it's a data abort - and thus FSC 
> is valid - in the first place? I'm a little fuzzy on all the possible 
> paths into fixup_exception(), and it's not entirely obvious whether this 
> is actually safe or not.

In fact, thinking some more about that, I don't think there should be 
any need for this sort of logic in these handlers at all. The 
fixup_exception() machinery should already know enough about the 
exception that's happened and the extable entry to figure this out and 
not bother calling the handler at all.
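One way to picture that, with a purely hypothetical signature:

	bool fixup_exception_mc(struct pt_regs *regs, unsigned long esr);

i.e. have do_sea() pass the ESR down so that this kind of validity check
lives in the extable machinery rather than in each assembly fixup.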

Thanks,
Robin.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 1/6] x86: fix function define in copy_mc_to_user
  2022-04-12 11:49     ` Kefeng Wang
@ 2022-04-13  6:01       ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-13  6:01 UTC (permalink / raw)
  To: Kefeng Wang, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Xie XiuQi



On 2022/4/12 19:49, Kefeng Wang wrote:
> 
> On 2022/4/12 15:25, Tong Tiangen wrote:
>> X86 has its own implementation of copy_mc_to_user() but does not use
>> #define to declare it.
>>
>> This may cause problems; for example, if another architecture enables
>> CONFIG_ARCH_HAS_COPY_MC but wants to use copy_mc_to_user() outside the
>> architecture, the code added to include/linux/uaccess.h is as follows:
>>
>>      #ifndef copy_mc_to_user
>>      static inline unsigned long __must_check
>>      copy_mc_to_user(void *dst, const void *src, size_t cnt)
>>      {
>>         ...
>>      }
>>      #endif
>>
>> Then this definition will conflict with the implementation of X86 and 
>> cause
>> compilation errors.
> Does powerpc need this define?

Oh, I missed that - will fix it in the next version.
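Presumably it would be the same one-liner next to powerpc's existing
inline implementation (sketch, path from memory):

	/* arch/powerpc/include/asm/uaccess.h */
	#define copy_mc_to_user copy_mc_to_user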
>>
>> Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to 
>> copy_mc_to_{user, kernel}()")
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>> ---
>>   arch/x86/include/asm/uaccess.h | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/include/asm/uaccess.h 
>> b/arch/x86/include/asm/uaccess.h
>> index f78e2b3501a1..e18c5f098025 100644
>> --- a/arch/x86/include/asm/uaccess.h
>> +++ b/arch/x86/include/asm/uaccess.h
>> @@ -415,6 +415,7 @@ copy_mc_to_kernel(void *to, const void *from, 
>> unsigned len);
>>   unsigned long __must_check
>>   copy_mc_to_user(void *to, const void *from, unsigned len);
>> +#define copy_mc_to_user copy_mc_to_user
>>   #endif
>>   /*
> .

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12 17:08     ` Robin Murphy
@ 2022-04-13  6:36       ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-13  6:36 UTC (permalink / raw)
  To: Robin Murphy, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi



On 2022/4/13 1:08, Robin Murphy wrote:
> On 12/04/2022 8:25 am, Tong Tiangen wrote:
> [...]
>> diff --git a/arch/arm64/include/asm/asm-uaccess.h 
>> b/arch/arm64/include/asm/asm-uaccess.h
>> index 0557af834e03..bb17f0829042 100644
>> --- a/arch/arm64/include/asm/asm-uaccess.h
>> +++ b/arch/arm64/include/asm/asm-uaccess.h
>> @@ -92,4 +92,20 @@ alternative_else_nop_endif
>>           _asm_extable    8888b,\l;
>>       .endm
>> +
>> +    .macro user_ldp_mc l, reg1, reg2, addr, post_inc
>> +8888:        ldtr    \reg1, [\addr];
>> +8889:        ldtr    \reg2, [\addr, #8];
>> +        add    \addr, \addr, \post_inc;
>> +
>> +        _asm_extable_uaccess_mc    8888b, \l;
>> +        _asm_extable_uaccess_mc    8889b, \l;
>> +    .endm
> 
> You're replacing the only user of this, so please just 
> s/_asm_extable/_asm_extable_uaccess_mc/ in the existing macro and save 
> the rest of the churn.

Agreed - the *user_ldp* name already clearly describes the scenarios where
this macro is used, so it is more appropriate to modify it directly.

> 
> Furthermore, how come you're not similarly updating user_stp, given that 
> you *are* updating the other stores in copy_to_user?
> 
>> +
>> +    .macro user_ldst_mc l, inst, reg, addr, post_inc
>> +8888:        \inst        \reg, [\addr];
>> +        add        \addr, \addr, \post_inc;
>> +
>> +        _asm_extable_uaccess_mc    8888b, \l;
>> +    .endm
> 
> Similarly, I think we can just update user_ldst itself. The two 
> instances that you're not replacing here are bogus anyway, and deserve 
> to be fixed with the patch below first.

OK, great, thanks. Will do in the next version.

> 
> [...]
>> @@ -62,7 +63,11 @@ SYM_FUNC_START(__arch_copy_from_user)
>>       ret
>>       // Exception fixups
>> -9997:    cmp    dst, dstin
>> +9997:    mrs esr, esr_el1            // Check exception first
>> +    and esr, esr, #ESR_ELx_FSC
>> +    cmp esr, #ESR_ELx_FSC_EXTABT
> 
> Should we be checking EC to make sure it's a data abort - and thus FSC 
> is valid - in the first place? I'm a little fuzzy on all the possible 
> paths into fixup_exception(), and it's not entirely obvious whether this 
> is actually safe or not.
> 
> Thanks,
> Robin.

I think checking the EC here would make the logic more rigorous, and it
doesn't appear to be harmful.

Admittedly it is not ideal to re-check the ESR at this stage (it has
already been checked where the exception processing starts), but at
present I haven't thought of a better way. If anyone has a better one,
please reply to me :)

Thanks Robin.
Tong.

> 
> ----->8-----
> Subject: [PATCH] arm64: mte: Clean up user tag accessors
> 
> Invoking user_ldst to explicitly add a post-increment of 0 is silly.
> Just use a normal USER() annotation and save the redundant instruction.
> 
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>   arch/arm64/lib/mte.S | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 8590af3c98c0..eeb9e45bcce8 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -93,7 +93,7 @@ SYM_FUNC_START(mte_copy_tags_from_user)
>       mov    x3, x1
>       cbz    x2, 2f
>   1:
> -    user_ldst 2f, ldtrb, w4, x1, 0
> +USER(2f, ldtrb    w4, [x1])
>       lsl    x4, x4, #MTE_TAG_SHIFT
>       stg    x4, [x0], #MTE_GRANULE_SIZE
>       add    x1, x1, #1
> @@ -120,7 +120,7 @@ SYM_FUNC_START(mte_copy_tags_to_user)
>   1:
>       ldg    x4, [x1]
>       ubfx    x4, x4, #MTE_TAG_SHIFT, #MTE_TAG_SIZE
> -    user_ldst 2f, sttrb, w4, x0, 0
> +USER(2f, sttrb    w4, [x0])
>       add    x0, x0, #1
>       add    x1, x1, #MTE_GRANULE_SIZE
>       subs    x2, x2, #1

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12 17:08     ` Robin Murphy
@ 2022-04-13  7:30       ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-13  7:30 UTC (permalink / raw)
  To: Robin Murphy, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi



On 2022/4/13 1:08, Robin Murphy wrote:
> On 12/04/2022 8:25 am, Tong Tiangen wrote:
> [...]
>> diff --git a/arch/arm64/include/asm/asm-uaccess.h 
>> b/arch/arm64/include/asm/asm-uaccess.h
>> index 0557af834e03..bb17f0829042 100644
>> --- a/arch/arm64/include/asm/asm-uaccess.h
>> +++ b/arch/arm64/include/asm/asm-uaccess.h
>> @@ -92,4 +92,20 @@ alternative_else_nop_endif
>>           _asm_extable    8888b,\l;
>>       .endm
>> +
>> +    .macro user_ldp_mc l, reg1, reg2, addr, post_inc
>> +8888:        ldtr    \reg1, [\addr];
>> +8889:        ldtr    \reg2, [\addr, #8];
>> +        add    \addr, \addr, \post_inc;
>> +
>> +        _asm_extable_uaccess_mc    8888b, \l;
>> +        _asm_extable_uaccess_mc    8889b, \l;
>> +    .endm
> 
> You're replacing the only user of this, so please just 
> s/_asm_extable/_asm_extable_uaccess_mc/ in the existing macro and save 
> the rest of the churn.
> 
> Furthermore, how come you're not similarly updating user_stp, given that 
> you *are* updating the other stores in copy_to_user?

I think all load/store instructions should be handled.

Generally speaking, a load will receive an SEA when it consumes a 
hardware memory error, while a store will not; ultimately this depends 
on the chip's behavior.

So there is no harm in adding the store-class instructions to the set 
that is handled.

If there is any problem with my understanding, please correct me.

Thanks,
Tong.

> 
>> +
>> +    .macro user_ldst_mc l, inst, reg, addr, post_inc
>> +8888:        \inst        \reg, [\addr];
>> +        add        \addr, \addr, \post_inc;
>> +
>> +        _asm_extable_uaccess_mc    8888b, \l;
>> +    .endm
>
[...]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 3/6] arm64: add support for machine check error safe
  2022-04-12 13:08     ` Kefeng Wang
@ 2022-04-13 14:41       ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-13 14:41 UTC (permalink / raw)
  To: Kefeng Wang, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy,
	Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Xie XiuQi



On 2022/4/12 21:08, Kefeng Wang wrote:
[...]
>> +
>> +bool fixup_exception_mc(struct pt_regs *regs)
>> +{
>> +    const struct exception_table_entry *ex;
>> +
>> +    ex = search_exception_tables(instruction_pointer(regs));
>> +    if (!ex)
>> +        return false;
>> +
>> +    switch (ex->type) {
>> +    case EX_TYPE_UACCESS_MC:
>> +        return ex_handler_fixup(ex, regs);
>> +    }
>> +
>> +    return false;
>> +}
> 
> The definition of EX_TYPE_UACCESS_MC is in patch4, please fix it, and if 
> arm64 exception table

OK, will do in the next version.

> 
> is sorted by exception type, we could drop fixup_exception_mc(), right?

In sort_relative_table_with_data(), the table seems to be sorted by insn 
and data, not by exception type.
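
To illustrate what that means for lookup, here is a simplified, 
illustrative C sketch (not the kernel's actual sorttable code; the real 
arm64 table stores 32-bit relative offsets plus a type/data word):

	struct ex_entry {
		unsigned long insn;	/* faulting address: the sort key */
		unsigned long fixup;	/* landing pad */
		int type;		/* e.g. EX_TYPE_UACCESS_MC */
	};

	/* qsort()/bsearch() comparator: orders by address only */
	static int ex_cmp(const void *a, const void *b)
	{
		unsigned long x = ((const struct ex_entry *)a)->insn;
		unsigned long y = ((const struct ex_entry *)b)->insn;

		return (x > y) - (x < y);
	}

Since the type is not part of the sort key, an entry found by 
search_exception_tables() still has to be dispatched on ex->type, as 
fixup_exception_mc() does above.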

> 
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 77341b160aca..56b13cf8bf1d 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -695,6 +695,30 @@ static int do_bad(unsigned long far, unsigned int 
>> esr, struct pt_regs *regs)
>>       return 1; /* "fault" */
>>   }
>> +static bool arm64_process_kernel_sea(unsigned long addr, unsigned int 
>> esr,
>> +                     struct pt_regs *regs, int sig, int code)
>> +{
>> +    if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC))
>> +        return false;
>> +
>> +    if (user_mode(regs) || !current->mm)
>> +        return false;
>> +
>> +    if (apei_claim_sea(regs) < 0)
>> +        return false;
>> +
>> +    current->thread.fault_address = 0;
>> +    current->thread.fault_code = esr;
>> +
> Use set_thread_esr(0, esr) and move it after fixup_exception_mc();
>> +    if (!fixup_exception_mc(regs))
>> +        return false;
>> +
>> +    arm64_force_sig_fault(sig, code, addr,
>> +        "Uncorrected hardware memory error in kernel-access\n");
>> +
>> +    return true;
>> +}
>> +
>>   static int do_sea(unsigned long far, unsigned int esr, struct 
>> pt_regs *regs)
>>   {
>>       const struct fault_info *inf;
>> @@ -720,6 +744,10 @@ static int do_sea(unsigned long far, unsigned int 
>> esr, struct pt_regs *regs)
>>            */
>>           siaddr  = untagged_addr(far);
>>       }
>> +
>> +    if (arm64_process_kernel_sea(siaddr, esr, regs, inf->sig, 
>> inf->code))
>> +        return 0;
>> +
> 
> Rename arm64_process_kernel_sea() to arm64_do_kernel_sea() 
> 
> if (!arm64_do_kernel_sea())
> 
>      arm64_notify_die();
> 

Agreed, will do in the next version.

>>       arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, 
>> esr);
>>       return 0;
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index 546179418ffa..dd952aeecdc1 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -174,6 +174,14 @@ copy_mc_to_kernel(void *dst, const void *src, 
>> size_t cnt)
>>   }
>>   #endif
>> +#ifndef copy_mc_to_user
>> +static inline unsigned long __must_check
>> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
>> +{
> Add check_object_size(cnt, src, true);  which could make 
> HARDENED_USERCOPY works.

Agreed, will do in the next version.
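
For reference, the resulting generic fallback might then look something 
like the sketch below. Note that check_object_size() takes 
(ptr, n, to_user), so the call would be check_object_size(src, cnt, true):

	#ifndef copy_mc_to_user
	static inline unsigned long __must_check
	copy_mc_to_user(void *dst, const void *src, size_t cnt)
	{
		/* let HARDENED_USERCOPY validate the kernel-side object */
		check_object_size(src, cnt, true);

		return raw_copy_to_user(dst, src, cnt);
	}
	#endif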

Thanks KeFeng,
Tong.

>> +    return raw_copy_to_user(dst, src, cnt);
>> +}
>> +#endif
>> +
>>   static __always_inline void pagefault_disabled_inc(void)
>>   {
>>       current->pagefault_disabled++;
> .

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe
  2022-04-12 17:17       ` Robin Murphy
@ 2022-04-16  7:41         ` Tong Tiangen
  -1 siblings, 0 replies; 36+ messages in thread
From: Tong Tiangen @ 2022-04-16  7:41 UTC (permalink / raw)
  To: Robin Murphy, Mark Rutland, James Morse, Andrew Morton,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Catalin Marinas, Will Deacon, Alexander Viro, x86,
	H . Peter Anvin
  Cc: linux-arm-kernel, linux-kernel, linux-mm, Kefeng Wang, Xie XiuQi



On 2022/4/13 1:17, Robin Murphy wrote:
> On 12/04/2022 6:08 pm, Robin Murphy wrote:
> [...]
>>> @@ -62,7 +63,11 @@ SYM_FUNC_START(__arch_copy_from_user)
>>>       ret
>>>       // Exception fixups
>>> -9997:    cmp    dst, dstin
>>> +9997:    mrs esr, esr_el1            // Check exception first
>>> +    and esr, esr, #ESR_ELx_FSC
>>> +    cmp esr, #ESR_ELx_FSC_EXTABT
>>
>> Should we be checking EC to make sure it's a data abort - and thus FSC 
>> is valid - in the first place? I'm a little fuzzy on all the possible 
>> paths into fixup_exception(), and it's not entirely obvious whether 
>> this is actually safe or not.
> 
> In fact, thinking some more about that, I don't think there should be 
> any need for this sort of logic in these handlers at all. The 
> fixup_exception() machinery should already know enough about the 
> exception that's happened and the extable entry to figure this out and 
> not bother calling the handler at all.
> 
> Thanks,
> Robin.
> .

Hi Robin,
As you said, it seems that checking the ESR here is not a good idea. How 
about the following approach instead? I'd appreciate your suggestions :)

+#define FIXUP_TYPE_NORMAL	0
+#define FIXUP_TYPE_MC		1

arch/arm64/mm/extable.c
static bool ex_handler_fixup(const struct exception_table_entry *ex,
-	struct pt_regs *regs)
+	struct pt_regs *regs, int fixuptype)
{
+	regs->regs[16] = fixuptype;
	[...]
}

bool fixup_exception(struct pt_regs *regs)
{
	[...]
	switch (ex->type) {
	case EX_TYPE_UACCESS_MC:
-		return ex_handler_fixup(ex, regs);
+		return ex_handler_fixup(ex, regs, FIXUP_TYPE_NORMAL);
	}
	[...]
}

bool fixup_exception_mc(struct pt_regs *regs)
{
	[...]
	switch (ex->type) {
	case EX_TYPE_UACCESS_MC:
-		return ex_handler_fixup(ex, regs);
+		return ex_handler_fixup(ex, regs, FIXUP_TYPE_MC);
	}
	[...]
}

arch/arm64/lib/copy_from_user.S
arch/arm64/lib/copy_to_user.S

+fixup_type      .req    x16

// Exception fixups
// x16: fixup type written by ex_handler_fixup
-9997:  cmp     dst, dstin
+9997:	cmp fixup_type, #FIXUP_TYPE_MC
+	b.eq 9998f
+ 	cmp     dst, dstin
  	b.ne    9998f

Thanks,
Tong.

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2022-04-16  7:42 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-12  7:25 [RFC PATCH -next V3 0/6]arm64: add machine check safe support Tong Tiangen
2022-04-12  7:25 ` [RFC PATCH -next V3 1/6] x86: fix function define in copy_mc_to_user Tong Tiangen
2022-04-12 11:49   ` Kefeng Wang
2022-04-13  6:01     ` Tong Tiangen
2022-04-12  7:25 ` [RFC PATCH -next V3 2/6] arm64: fix types in copy_highpage() Tong Tiangen
2022-04-12 11:50   ` Kefeng Wang
2022-04-12  7:25 ` [RFC PATCH -next V3 3/6] arm64: add support for machine check error safe Tong Tiangen
2022-04-12 13:08   ` Kefeng Wang
2022-04-13 14:41     ` Tong Tiangen
2022-04-12  7:25 ` [RFC PATCH -next V3 4/6] arm64: add copy_{to, from}_user to machine check safe Tong Tiangen
2022-04-12 17:08   ` Robin Murphy
2022-04-12 17:17     ` Robin Murphy
2022-04-16  7:41       ` Tong Tiangen
2022-04-13  6:36     ` Tong Tiangen
2022-04-13  7:30     ` Tong Tiangen
2022-04-12  7:25 ` [RFC PATCH -next V3 5/6] arm64: add {get, put}_user " Tong Tiangen
2022-04-12  7:25 ` [RFC PATCH -next V3 6/6] arm64: add cow " Tong Tiangen
2022-04-12 16:39   ` Robin Murphy
