* [PATCH 0/2] kdump: simplify code
@ 2021-12-10 13:35 ` Tiezhu Yang
0 siblings, 0 replies; 36+ messages in thread
From: Tiezhu Yang @ 2021-12-10 13:35 UTC (permalink / raw)
To: Dave Young, Baoquan He, Vivek Goyal, Andrew Morton
Cc: linux-arm-kernel, linux-ia64, linux-mips, linuxppc-dev,
linux-riscv, linux-sh, x86, linux-fsdevel, kexec, linux-kernel
Tiezhu Yang (2):
kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
kdump: crashdump: use copy_to() to simplify the related code
arch/arm/kernel/crash_dump.c | 10 ++--------
arch/arm64/kernel/crash_dump.c | 10 ++--------
arch/ia64/kernel/crash_dump.c | 10 ++++------
arch/mips/kernel/crash_dump.c | 9 ++-------
arch/powerpc/kernel/crash_dump.c | 7 ++-----
arch/riscv/kernel/crash_dump.c | 9 ++-------
arch/sh/kernel/crash_dump.c | 9 ++-------
arch/x86/kernel/crash_dump_32.c | 9 ++-------
arch/x86/kernel/crash_dump_64.c | 9 ++-------
fs/proc/vmcore.c | 14 --------------
include/linux/uaccess.h | 14 ++++++++++++++
11 files changed, 34 insertions(+), 76 deletions(-)
--
2.1.0
^ permalink raw reply [flat|nested] 36+ messages in thread
* [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
2021-12-10 13:35 ` Tiezhu Yang
@ 2021-12-10 13:36 ` Tiezhu Yang
-1 siblings, 0 replies; 36+ messages in thread
From: Tiezhu Yang @ 2021-12-10 13:36 UTC (permalink / raw)
To: Dave Young, Baoquan He, Vivek Goyal, Andrew Morton
Cc: linux-arm-kernel, linux-ia64, linux-mips, linuxppc-dev,
linux-riscv, linux-sh, x86, linux-fsdevel, kexec, linux-kernel
The copy_oldmem_page() implementations in arch/*/kernel/crash_dump*.c
all open-code the same copy-to-user-or-kernel logic. Move the copy_to()
helper from vmcore.c to uaccess.h so that it can be shared, then use it
to simplify that duplicated code.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
fs/proc/vmcore.c | 14 --------------
include/linux/uaccess.h | 14 ++++++++++++++
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 509f851..c5976a8 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
}
-/*
- * Copy to either kernel or user space
- */
-static int copy_to(void *target, void *src, size_t size, int userbuf)
-{
- if (userbuf) {
- if (copy_to_user((char __user *) target, src, size))
- return -EFAULT;
- } else {
- memcpy(target, src, size);
- }
- return 0;
-}
-
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
{
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac03940..4a6c3e4 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
return n;
}
+/*
+ * Copy to either kernel or user space
+ */
+static inline int copy_to(void *target, void *src, size_t size, int userbuf)
+{
+ if (userbuf) {
+ if (copy_to_user((char __user *) target, src, size))
+ return -EFAULT;
+ } else {
+ memcpy(target, src, size);
+ }
+ return 0;
+}
+
#ifndef copy_mc_to_kernel
/*
* Without arch opt-in this generic copy_mc_to_kernel() will not handle
--
2.1.0
^ permalink raw reply related [flat|nested] 36+ messages in thread
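[Editor's note] As a rough illustration of the helper this patch moves into include/linux/uaccess.h, here is a userspace sketch of copy_to(). It is not the kernel code: fake_copy_to_user() is an invented stand-in for the kernel's copy_to_user(), which cannot run outside the kernel, and plain memcpy() replaces the user-space access machinery.

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/*
 * Stand-in for the kernel's copy_to_user(): returns the number of
 * bytes that could NOT be copied (0 on success).  In userspace a
 * plain memcpy always succeeds, so this always returns 0.
 */
static unsigned long fake_copy_to_user(void *to, const void *from,
				       unsigned long n)
{
	memcpy(to, from, n);
	return 0;
}

/*
 * Mirror of the copy_to() helper being moved to uaccess.h: a single
 * entry point that copies to either a user buffer (userbuf != 0) or
 * a kernel buffer (userbuf == 0), returning 0 or -EFAULT.
 */
static int copy_to(void *target, void *src, size_t size, int userbuf)
{
	if (userbuf) {
		/* kernel version: copy_to_user((char __user *)target, ...) */
		if (fake_copy_to_user(target, src, size))
			return -EFAULT;
	} else {
		memcpy(target, src, size);
	}
	return 0;
}
```

Because both branches report success as 0, every per-arch caller can collapse its two-branch copy into a single `if (copy_to(...))` check, which is what patch 2/2 does.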
* [PATCH 2/2] kdump: crashdump: use copy_to() to simplify the related code
2021-12-10 13:35 ` Tiezhu Yang
@ 2021-12-10 13:36 ` Tiezhu Yang
-1 siblings, 0 replies; 36+ messages in thread
From: Tiezhu Yang @ 2021-12-10 13:36 UTC (permalink / raw)
To: Dave Young, Baoquan He, Vivek Goyal, Andrew Morton
Cc: linux-arm-kernel, linux-ia64, linux-mips, linuxppc-dev,
linux-riscv, linux-sh, x86, linux-fsdevel, kexec, linux-kernel
Use the copy_to() helper to simplify the duplicated user/kernel copy
logic in the copy_oldmem_page() implementations in
arch/*/kernel/crash_dump*.c.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
arch/arm/kernel/crash_dump.c | 10 ++--------
arch/arm64/kernel/crash_dump.c | 10 ++--------
arch/ia64/kernel/crash_dump.c | 10 ++++------
arch/mips/kernel/crash_dump.c | 9 ++-------
arch/powerpc/kernel/crash_dump.c | 7 ++-----
arch/riscv/kernel/crash_dump.c | 9 ++-------
arch/sh/kernel/crash_dump.c | 9 ++-------
arch/x86/kernel/crash_dump_32.c | 9 ++-------
arch/x86/kernel/crash_dump_64.c | 9 ++-------
9 files changed, 20 insertions(+), 62 deletions(-)
diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
index 53cb924..6491f1d 100644
--- a/arch/arm/kernel/crash_dump.c
+++ b/arch/arm/kernel/crash_dump.c
@@ -40,14 +40,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user(buf, vaddr + offset, csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
iounmap(vaddr);
return csize;
diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
index 58303a9..496e6a5 100644
--- a/arch/arm64/kernel/crash_dump.c
+++ b/arch/arm64/kernel/crash_dump.c
@@ -38,14 +38,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
memunmap(vaddr);
diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
index 0ed3c3d..20f4c4e 100644
--- a/arch/ia64/kernel/crash_dump.c
+++ b/arch/ia64/kernel/crash_dump.c
@@ -39,13 +39,11 @@ copy_oldmem_page(unsigned long pfn, char *buf,
if (!csize)
return 0;
+
vaddr = __va(pfn<<PAGE_SHIFT);
- if (userbuf) {
- if (copy_to_user(buf, (vaddr + offset), csize)) {
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ return -EFAULT;
+
return csize;
}
diff --git a/arch/mips/kernel/crash_dump.c b/arch/mips/kernel/crash_dump.c
index 2e50f551..80704dc 100644
--- a/arch/mips/kernel/crash_dump.c
+++ b/arch/mips/kernel/crash_dump.c
@@ -24,13 +24,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
return 0;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
kunmap_local(vaddr);
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index 5693e1c67..43b2658 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -71,11 +71,8 @@ void __init setup_kdump_trampoline(void)
static size_t copy_oldmem_vaddr(void *vaddr, char *buf, size_t csize,
unsigned long offset, int userbuf)
{
- if (userbuf) {
- if (copy_to_user((char __user *)buf, (vaddr + offset), csize))
- return -EFAULT;
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ return -EFAULT;
return csize;
}
diff --git a/arch/riscv/kernel/crash_dump.c b/arch/riscv/kernel/crash_dump.c
index 86cc0ad..707fbc1 100644
--- a/arch/riscv/kernel/crash_dump.c
+++ b/arch/riscv/kernel/crash_dump.c
@@ -33,13 +33,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
memunmap(vaddr);
return csize;
diff --git a/arch/sh/kernel/crash_dump.c b/arch/sh/kernel/crash_dump.c
index 5b41b59..2af9286 100644
--- a/arch/sh/kernel/crash_dump.c
+++ b/arch/sh/kernel/crash_dump.c
@@ -33,13 +33,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
- if (userbuf) {
- if (copy_to_user((void __user *)buf, (vaddr + offset), csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
iounmap(vaddr);
return csize;
diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
index 5fcac46..731658b 100644
--- a/arch/x86/kernel/crash_dump_32.c
+++ b/arch/x86/kernel/crash_dump_32.c
@@ -54,13 +54,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
return -EFAULT;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
kunmap_local(vaddr);
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index a7f617a..8e7c192 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -29,13 +29,8 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
- iounmap((void __iomem *)vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
set_iounmap_nonlazy();
iounmap((void __iomem *)vaddr);
--
2.1.0
^ permalink raw reply related [flat|nested] 36+ messages in thread
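[Editor's note] The per-arch conversions above share one shape: on copy failure, set csize to -EFAULT and fall through to a single unmap-and-return path instead of unmapping separately in the error branch. A rough userspace sketch of that shape, under stated assumptions: fake_oldmem, fake_unmap() and unmap_calls are invented stand-ins for the mapped old-memory page and ioremap()/iounmap()-style teardown, and memcpy() stands in for copy_to_user().

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

static int unmap_calls;	/* counts teardown calls (kernel: iounmap etc.) */
static char fake_oldmem[4096] = "crash dump page contents";

static void fake_unmap(void *vaddr)
{
	(void)vaddr;
	unmap_calls++;
}

/* copy_to() as introduced by patch 1/2; memcpy stands in for both paths */
static int copy_to(void *target, void *src, size_t size, int userbuf)
{
	if (userbuf)
		memcpy(target, src, size);	/* kernel: copy_to_user() */
	else
		memcpy(target, src, size);
	return 0;
}

/*
 * Shape of copy_oldmem_page() after this patch: whatever copy_to()
 * returns, the mapping is torn down exactly once on a single exit
 * path, rather than duplicating the unmap inside the error branch.
 */
static ssize_t copy_oldmem_page_sketch(char *buf, size_t csize_in,
				       unsigned long offset, int userbuf)
{
	ssize_t csize = (ssize_t)csize_in;
	char *vaddr = fake_oldmem;	/* kernel: ioremap()/kmap of the pfn */

	if (copy_to(buf, vaddr + offset, csize, userbuf))
		csize = -EFAULT;

	fake_unmap(vaddr);		/* runs on success and failure alike */
	return csize;
}
```

The single exit path is the point of the refactor: the original arm/arm64/riscv/sh/x86_64 versions each repeated the unmap call inside the copy_to_user() failure branch.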
* [PATCH 2/2] kdump: crashdump: use copy_to() to simplify the related code
@ 2021-12-10 13:36 ` Tiezhu Yang
0 siblings, 0 replies; 36+ messages in thread
From: Tiezhu Yang @ 2021-12-10 13:36 UTC (permalink / raw)
To: Dave Young, Baoquan He, Vivek Goyal, Andrew Morton
Cc: linux-arm-kernel, linux-ia64, linux-mips, linuxppc-dev,
linux-riscv, linux-sh, x86, linux-fsdevel, kexec, linux-kernel
Use copy_to() to simplify the related code about copy_oldmem_page()
in arch/*/kernel/crash_dump*.c files.
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
arch/arm/kernel/crash_dump.c | 10 ++--------
arch/arm64/kernel/crash_dump.c | 10 ++--------
arch/ia64/kernel/crash_dump.c | 10 ++++------
arch/mips/kernel/crash_dump.c | 9 ++-------
arch/powerpc/kernel/crash_dump.c | 7 ++-----
arch/riscv/kernel/crash_dump.c | 9 ++-------
arch/sh/kernel/crash_dump.c | 9 ++-------
arch/x86/kernel/crash_dump_32.c | 9 ++-------
arch/x86/kernel/crash_dump_64.c | 9 ++-------
9 files changed, 20 insertions(+), 62 deletions(-)
diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
index 53cb924..6491f1d 100644
--- a/arch/arm/kernel/crash_dump.c
+++ b/arch/arm/kernel/crash_dump.c
@@ -40,14 +40,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user(buf, vaddr + offset, csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
iounmap(vaddr);
return csize;
diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
index 58303a9..496e6a5 100644
--- a/arch/arm64/kernel/crash_dump.c
+++ b/arch/arm64/kernel/crash_dump.c
@@ -38,14 +38,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
memunmap(vaddr);
diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
index 0ed3c3d..20f4c4e 100644
--- a/arch/ia64/kernel/crash_dump.c
+++ b/arch/ia64/kernel/crash_dump.c
@@ -39,13 +39,11 @@ copy_oldmem_page(unsigned long pfn, char *buf,
if (!csize)
return 0;
+
vaddr = __va(pfn<<PAGE_SHIFT);
- if (userbuf) {
- if (copy_to_user(buf, (vaddr + offset), csize)) {
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ return -EFAULT;
+
return csize;
}
diff --git a/arch/mips/kernel/crash_dump.c b/arch/mips/kernel/crash_dump.c
index 2e50f551..80704dc 100644
--- a/arch/mips/kernel/crash_dump.c
+++ b/arch/mips/kernel/crash_dump.c
@@ -24,13 +24,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
return 0;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
kunmap_local(vaddr);
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index 5693e1c67..43b2658 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -71,11 +71,8 @@ void __init setup_kdump_trampoline(void)
static size_t copy_oldmem_vaddr(void *vaddr, char *buf, size_t csize,
unsigned long offset, int userbuf)
{
- if (userbuf) {
- if (copy_to_user((char __user *)buf, (vaddr + offset), csize))
- return -EFAULT;
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ return -EFAULT;
return csize;
}
diff --git a/arch/riscv/kernel/crash_dump.c b/arch/riscv/kernel/crash_dump.c
index 86cc0ad..707fbc1 100644
--- a/arch/riscv/kernel/crash_dump.c
+++ b/arch/riscv/kernel/crash_dump.c
@@ -33,13 +33,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
memunmap(vaddr);
return csize;
diff --git a/arch/sh/kernel/crash_dump.c b/arch/sh/kernel/crash_dump.c
index 5b41b59..2af9286 100644
--- a/arch/sh/kernel/crash_dump.c
+++ b/arch/sh/kernel/crash_dump.c
@@ -33,13 +33,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
- if (userbuf) {
- if (copy_to_user((void __user *)buf, (vaddr + offset), csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
iounmap(vaddr);
return csize;
diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
index 5fcac46..731658b 100644
--- a/arch/x86/kernel/crash_dump_32.c
+++ b/arch/x86/kernel/crash_dump_32.c
@@ -54,13 +54,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
return -EFAULT;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
kunmap_local(vaddr);
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index a7f617a..8e7c192 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -29,13 +29,8 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
- iounmap((void __iomem *)vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ if (copy_to(buf, vaddr + offset, csize, userbuf))
+ csize = -EFAULT;
set_iounmap_nonlazy();
iounmap((void __iomem *)vaddr);
--
2.1.0
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 36+ messages in thread
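The per-architecture conversion above is semantics-preserving but subtle: the old open-coded versions returned -EFAULT *after* unmapping, while the new code assigns `csize = -EFAULT` and falls through to the single unmap call. The small userspace mock below illustrates that control flow. It is only a sketch: `copy_to()` here stubs the kernel helper with `memcpy()` plus a fault flag, and `mock_unmap()` stands in for `memunmap()`/`iounmap()`; none of these are the real kernel APIs.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <string.h>

static bool simulate_fault;

/* Mock of the copy_to() helper: in the kernel the userbuf branch calls
 * copy_to_user(); here a flag simulates that call faulting. */
static int copy_to(void *target, const void *src, size_t size, int userbuf)
{
	if (userbuf && simulate_fault)
		return -EFAULT;		/* simulated failed user copy */
	memcpy(target, src, size);	/* kernel copy, or user copy succeeded */
	return 0;
}

static int unmap_calls;
static void mock_unmap(void) { unmap_calls++; }	/* stands in for memunmap()/iounmap() */

/* Control flow of the refactored copy_oldmem_page(): on a fault the return
 * value becomes -EFAULT, but execution still falls through to the unmap,
 * matching the behaviour of the deleted open-coded versions. */
static long copy_oldmem_page_mock(char *buf, const char *vaddr, size_t csize,
				  unsigned long offset, int userbuf)
{
	long ret = (long)csize;

	if (copy_to(buf, vaddr + offset, csize, userbuf))
		ret = -EFAULT;
	mock_unmap();			/* runs on both success and fault paths */
	return ret;
}
```

In both the success and the fault path the mapping is released exactly once, which is the invariant each arch hunk above must preserve.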
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
2021-12-10 13:36 ` Tiezhu Yang
@ 2021-12-10 16:59 ` Andrew Morton
0 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2021-12-10 16:59 UTC (permalink / raw)
To: Tiezhu Yang
Cc: Dave Young, Baoquan He, Vivek Goyal, linux-arm-kernel,
linux-ia64, linux-mips, linuxppc-dev, linux-riscv, linux-sh, x86,
linux-fsdevel, kexec, linux-kernel
On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
> In arch/*/kernel/crash_dump*.c, there exist similar code about
> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
> and then we can use copy_to() to simplify the related code.
>
> ...
>
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> }
>
> -/*
> - * Copy to either kernel or user space
> - */
> -static int copy_to(void *target, void *src, size_t size, int userbuf)
> -{
> - if (userbuf) {
> - if (copy_to_user((char __user *) target, src, size))
> - return -EFAULT;
> - } else {
> - memcpy(target, src, size);
> - }
> - return 0;
> -}
> -
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
> {
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac03940..4a6c3e4 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
> return n;
> }
>
> +/*
> + * Copy to either kernel or user space
> + */
> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
> +{
> + if (userbuf) {
> + if (copy_to_user((char __user *) target, src, size))
> + return -EFAULT;
> + } else {
> + memcpy(target, src, size);
> + }
> + return 0;
> +}
> +
Ordinarily I'd say "this is too large to be inlined". But the function
has only a single callsite per architecture so inlining it won't cause
bloat at present.
But hopefully copy_to() will get additional callers in the future, in
which case it shouldn't be inlined. So I'm thinking it would be best
to start out with this as a regular non-inlined function, in
lib/usercopy.c.
Also, copy_to() is a very poor name for a globally-visible helper
function. Better would be copy_to_user_or_kernel(), although that's
perhaps a bit long.
And the `userbuf' arg should have type bool, yes?
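Folding Andrew's three suggestions together — an out-of-line function (he proposes lib/usercopy.c), the more descriptive name `copy_to_user_or_kernel()`, and a `bool` flag — the helper might end up looking roughly like the sketch below. This is a reading of the review mail, not merged kernel code, and `stub_copy_to_user()` is a userspace stand-in for the real `copy_to_user()` (which returns the number of bytes *not* copied, 0 on success) so the sketch compiles outside the kernel.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <string.h>

/* Userspace stub for copy_to_user(): always "copies everything",
 * i.e. returns 0 as the real API does on success. */
static unsigned long stub_copy_to_user(void *to, const void *from, size_t n)
{
	memcpy(to, from, n);
	return 0;
}

/* Sketch of the revised helper per the review feedback: out of line
 * (imagine this body in lib/usercopy.c), descriptive name, bool flag.
 * The name comes from the review mail, not from a merged release. */
int copy_to_user_or_kernel(void *target, const void *src, size_t size,
			   bool userbuf)
{
	if (userbuf) {
		if (stub_copy_to_user(target, src, size))  /* copy_to_user() in the kernel */
			return -EFAULT;
	} else {
		memcpy(target, src, size);
	}
	return 0;
}
```

Every `copy_oldmem_page()` in the series would then call this one exported function instead of carrying an inline copy.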
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
@ 2021-12-10 16:59 ` Andrew Morton
0 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2021-12-10 16:59 UTC (permalink / raw)
To: Tiezhu Yang
Cc: Dave Young, Baoquan He, Vivek Goyal, linux-arm-kernel,
linux-ia64, linux-mips, linuxppc-dev, linux-riscv, linux-sh, x86,
linux-fsdevel, kexec, linux-kernel
On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
> In arch/*/kernel/crash_dump*.c, there exist similar code about
> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
> and then we can use copy_to() to simplify the related code.
>
> ...
>
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> }
>
> -/*
> - * Copy to either kernel or user space
> - */
> -static int copy_to(void *target, void *src, size_t size, int userbuf)
> -{
> - if (userbuf) {
> - if (copy_to_user((char __user *) target, src, size))
> - return -EFAULT;
> - } else {
> - memcpy(target, src, size);
> - }
> - return 0;
> -}
> -
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
> {
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac03940..4a6c3e4 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
> return n;
> }
>
> +/*
> + * Copy to either kernel or user space
> + */
> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
> +{
> + if (userbuf) {
> + if (copy_to_user((char __user *) target, src, size))
> + return -EFAULT;
> + } else {
> + memcpy(target, src, size);
> + }
> + return 0;
> +}
> +
Ordinarily I'd say "this is too large to be inlined". But the function
has only a single callsite per architecture so inlining it won't cause
bloat at present.
But hopefully copy_to() will get additional callers in the future, in
which case it shouldn't be inlined. So I'm thinking it would be best
to start out with this as a regular non-inlined function, in
lib/usercopy.c.
Also, copy_to() is a very poor name for a globally-visible helper
function. Better would be copy_to_user_or_kernel(), although that's
perhaps a bit long.
And the `userbuf' arg should have type bool, yes?
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
@ 2021-12-10 16:59 ` Andrew Morton
0 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2021-12-10 16:59 UTC (permalink / raw)
To: Tiezhu Yang
Cc: Dave Young, Baoquan He, Vivek Goyal, linux-arm-kernel,
linux-ia64, linux-mips, linuxppc-dev, linux-riscv, linux-sh, x86,
linux-fsdevel, kexec, linux-kernel
On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
> In arch/*/kernel/crash_dump*.c, there exist similar code about
> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
> and then we can use copy_to() to simplify the related code.
>
> ...
>
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> }
>
> -/*
> - * Copy to either kernel or user space
> - */
> -static int copy_to(void *target, void *src, size_t size, int userbuf)
> -{
> - if (userbuf) {
> - if (copy_to_user((char __user *) target, src, size))
> - return -EFAULT;
> - } else {
> - memcpy(target, src, size);
> - }
> - return 0;
> -}
> -
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
> {
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac03940..4a6c3e4 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
> return n;
> }
>
> +/*
> + * Copy to either kernel or user space
> + */
> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
> +{
> + if (userbuf) {
> + if (copy_to_user((char __user *) target, src, size))
> + return -EFAULT;
> + } else {
> + memcpy(target, src, size);
> + }
> + return 0;
> +}
> +
Ordinarily I'd say "this is too large to be inlined". But the function
has only a single callsite per architecture so inlining it won't cause
bloat at present.
But hopefully copy_to() will get additional callers in the future, in
which case it shouldn't be inlined. So I'm thinking it would be best
to start out with this as a regular non-inlined function, in
lib/usercopy.c.
Also, copy_to() is a very poor name for a globally-visible helper
function. Better would be copy_to_user_or_kernel(), although that's
perhaps a bit long.
And the `userbuf' arg should have type bool, yes?
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
@ 2021-12-10 16:59 ` Andrew Morton
0 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2021-12-10 16:59 UTC (permalink / raw)
To: Tiezhu Yang
Cc: Dave Young, Baoquan He, Vivek Goyal, linux-arm-kernel,
linux-ia64, linux-mips, linuxppc-dev, linux-riscv, linux-sh, x86,
linux-fsdevel, kexec, linux-kernel
On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
> In arch/*/kernel/crash_dump*.c, there exist similar code about
> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
> and then we can use copy_to() to simplify the related code.
>
> ...
>
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
> }
>
> -/*
> - * Copy to either kernel or user space
> - */
> -static int copy_to(void *target, void *src, size_t size, int userbuf)
> -{
> - if (userbuf) {
> - if (copy_to_user((char __user *) target, src, size))
> - return -EFAULT;
> - } else {
> - memcpy(target, src, size);
> - }
> - return 0;
> -}
> -
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
> {
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index ac03940..4a6c3e4 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
> return n;
> }
>
> +/*
> + * Copy to either kernel or user space
> + */
> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
> +{
> + if (userbuf) {
> + if (copy_to_user((char __user *) target, src, size))
> + return -EFAULT;
> + } else {
> + memcpy(target, src, size);
> + }
> + return 0;
> +}
> +
Ordinarily I'd say "this is too large to be inlined". But the function
has only a single callsite per architecture so inlining it won't cause
bloat at present.
But hopefully copy_to() will get additional callers in the future, in
which case it shouldn't be inlined. So I'm thinking it would be best
to start out with this as a regular non-inlined function, in
lib/usercopy.c.
Also, copy_to() is a very poor name for a globally-visible helper
function. Better would be copy_to_user_or_kernel(), although that's
perhaps a bit long.
And the `userbuf' arg should have type bool, yes?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
2021-12-10 16:59 ` Andrew Morton
@ 2021-12-10 23:50 ` Tiezhu Yang
-1 siblings, 0 replies; 36+ messages in thread
From: Tiezhu Yang @ 2021-12-10 23:50 UTC (permalink / raw)
To: Andrew Morton
Cc: Dave Young, Baoquan He, Vivek Goyal, linux-arm-kernel,
linux-ia64, linux-mips, linuxppc-dev, linux-riscv, linux-sh, x86,
linux-fsdevel, kexec, linux-kernel, Xuefeng Li
On 12/11/2021 12:59 AM, Andrew Morton wrote:
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exists similar code around
>> copy_oldmem_page(). Move copy_to() from vmcore.c to uaccess.h so
>> that it can be used to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
Hi Andrew,
Thank you very much for your reply and suggestions. I agree with you
and will send v2 later.
Thanks,
Tiezhu
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
2021-12-10 16:59 ` Andrew Morton
@ 2021-12-11 7:37 ` Christophe Leroy
-1 siblings, 0 replies; 36+ messages in thread
From: Christophe Leroy @ 2021-12-11 7:37 UTC (permalink / raw)
To: Andrew Morton, Tiezhu Yang
Cc: linux-ia64, Baoquan He, linux-sh, linuxppc-dev, x86, kexec,
linux-mips, linux-kernel, Vivek Goyal, linux-fsdevel,
linux-riscv, Dave Young, linux-arm-kernel
On 10/12/2021 at 17:59, Andrew Morton wrote:
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exists similar code around
>> copy_oldmem_page(). Move copy_to() from vmcore.c to uaccess.h so
>> that it can be used to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
I think keeping it inlined is better: copy_oldmem_page() is bigger with
v2 (out-of-line helper) than with v1 (inlined). See both disassemblies
below:
v1:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 40 82 00 14 bne 20 <copy_oldmem_page+0x20>
10: 83 e1 00 1c lwz r31,28(r1)
14: 38 60 00 00 li r3,0
18: 38 21 00 20 addi r1,r1,32
1c: 4e 80 00 20 blr
20: 28 1f 10 00 cmplwi r31,4096
24: 93 61 00 0c stw r27,12(r1)
28: 7c 08 02 a6 mflr r0
2c: 93 81 00 10 stw r28,16(r1)
30: 93 a1 00 14 stw r29,20(r1)
34: 7c 9b 23 78 mr r27,r4
38: 90 01 00 24 stw r0,36(r1)
3c: 7c dd 33 78 mr r29,r6
40: 93 c1 00 18 stw r30,24(r1)
44: 7c fc 3b 78 mr r28,r7
48: 40 81 00 08 ble 50 <copy_oldmem_page+0x50>
4c: 3b e0 10 00 li r31,4096
50: 54 7e 60 26 rlwinm r30,r3,12,0,19
54: 7f c3 f3 78 mr r3,r30
58: 7f e4 fb 78 mr r4,r31
5c: 48 00 00 01 bl 5c <copy_oldmem_page+0x5c>
5c: R_PPC_REL24 memblock_is_region_memory
60: 2c 03 00 00 cmpwi r3,0
64: 41 82 00 30 beq 94 <copy_oldmem_page+0x94>
68: 2c 1c 00 00 cmpwi r28,0
6c: 3f de c0 00 addis r30,r30,-16384
70: 7f 63 db 78 mr r3,r27
74: 7f e5 fb 78 mr r5,r31
78: 7c 9e ea 14 add r4,r30,r29
7c: 41 82 00 7c beq f8 <copy_oldmem_page+0xf8>
80: 48 00 00 01 bl 80 <copy_oldmem_page+0x80>
80: R_PPC_REL24 _copy_to_user
84: 2c 03 00 00 cmpwi r3,0
88: 41 a2 00 48 beq d0 <copy_oldmem_page+0xd0>
8c: 3b e0 ff f2 li r31,-14
90: 48 00 00 40 b d0 <copy_oldmem_page+0xd0>
94: 7f c3 f3 78 mr r3,r30
98: 38 a0 05 91 li r5,1425
9c: 38 80 10 00 li r4,4096
a0: 48 00 00 01 bl a0 <copy_oldmem_page+0xa0>
a0: R_PPC_REL24 ioremap_prot
a4: 2c 1c 00 00 cmpwi r28,0
a8: 7c 7e 1b 78 mr r30,r3
ac: 7c 83 ea 14 add r4,r3,r29
b0: 7f e5 fb 78 mr r5,r31
b4: 7f 63 db 78 mr r3,r27
b8: 41 82 00 48 beq 100 <copy_oldmem_page+0x100>
bc: 48 00 00 01 bl bc <copy_oldmem_page+0xbc>
bc: R_PPC_REL24 _copy_to_user
c0: 2c 03 00 00 cmpwi r3,0
c4: 40 82 00 44 bne 108 <copy_oldmem_page+0x108>
c8: 7f c3 f3 78 mr r3,r30
cc: 48 00 00 01 bl cc <copy_oldmem_page+0xcc>
cc: R_PPC_REL24 iounmap
d0: 80 01 00 24 lwz r0,36(r1)
d4: 7f e3 fb 78 mr r3,r31
d8: 83 61 00 0c lwz r27,12(r1)
dc: 83 81 00 10 lwz r28,16(r1)
e0: 7c 08 03 a6 mtlr r0
e4: 83 a1 00 14 lwz r29,20(r1)
e8: 83 c1 00 18 lwz r30,24(r1)
ec: 83 e1 00 1c lwz r31,28(r1)
f0: 38 21 00 20 addi r1,r1,32
f4: 4e 80 00 20 blr
f8: 48 00 00 01 bl f8 <copy_oldmem_page+0xf8>
f8: R_PPC_REL24 memcpy
fc: 4b ff ff d4 b d0 <copy_oldmem_page+0xd0>
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 memcpy
104: 4b ff ff c4 b c8 <copy_oldmem_page+0xc8>
108: 3b e0 ff f2 li r31,-14
10c: 4b ff ff bc b c8 <copy_oldmem_page+0xc8>
v2:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 93 c1 00 18 stw r30,24(r1)
10: 3b c0 00 00 li r30,0
14: 40 82 00 18 bne 2c <copy_oldmem_page+0x2c>
18: 7f c3 f3 78 mr r3,r30
1c: 83 e1 00 1c lwz r31,28(r1)
20: 83 c1 00 18 lwz r30,24(r1)
24: 38 21 00 20 addi r1,r1,32
28: 4e 80 00 20 blr
2c: 28 1f 10 00 cmplwi r31,4096
30: 93 61 00 0c stw r27,12(r1)
34: 7c 08 02 a6 mflr r0
38: 93 81 00 10 stw r28,16(r1)
3c: 93 a1 00 14 stw r29,20(r1)
40: 7c db 33 78 mr r27,r6
44: 90 01 00 24 stw r0,36(r1)
48: 7c 9d 23 78 mr r29,r4
4c: 7c fc 3b 78 mr r28,r7
50: 40 81 00 08 ble 58 <copy_oldmem_page+0x58>
54: 3b e0 10 00 li r31,4096
58: 54 7e 60 26 rlwinm r30,r3,12,0,19
5c: 7f c3 f3 78 mr r3,r30
60: 7f e4 fb 78 mr r4,r31
64: 48 00 00 01 bl 64 <copy_oldmem_page+0x64>
64: R_PPC_REL24 memblock_is_region_memory
68: 2c 03 00 00 cmpwi r3,0
6c: 41 82 00 54 beq c0 <copy_oldmem_page+0xc0>
70: 3f de c0 00 addis r30,r30,-16384
74: 7c 9e da 14 add r4,r30,r27
78: 7f 86 e3 78 mr r6,r28
7c: 7f a3 eb 78 mr r3,r29
80: 7f e5 fb 78 mr r5,r31
84: 48 00 00 01 bl 84 <copy_oldmem_page+0x84>
84: R_PPC_REL24 copy_to_user_or_kernel
88: 3b c0 ff f2 li r30,-14
8c: 2c 03 00 00 cmpwi r3,0
90: 40 82 00 08 bne 98 <copy_oldmem_page+0x98>
94: 7f fe fb 78 mr r30,r31
98: 80 01 00 24 lwz r0,36(r1)
9c: 83 61 00 0c lwz r27,12(r1)
a0: 83 81 00 10 lwz r28,16(r1)
a4: 7c 08 03 a6 mtlr r0
a8: 83 a1 00 14 lwz r29,20(r1)
ac: 7f c3 f3 78 mr r3,r30
b0: 83 e1 00 1c lwz r31,28(r1)
b4: 83 c1 00 18 lwz r30,24(r1)
b8: 38 21 00 20 addi r1,r1,32
bc: 4e 80 00 20 blr
c0: 7f c3 f3 78 mr r3,r30
c4: 93 41 00 08 stw r26,8(r1)
c8: 38 a0 05 91 li r5,1425
cc: 38 80 10 00 li r4,4096
d0: 48 00 00 01 bl d0 <copy_oldmem_page+0xd0>
d0: R_PPC_REL24 ioremap_prot
d4: 7f 86 e3 78 mr r6,r28
d8: 7c 83 da 14 add r4,r3,r27
dc: 7c 7a 1b 78 mr r26,r3
e0: 7f e5 fb 78 mr r5,r31
e4: 7f a3 eb 78 mr r3,r29
e8: 48 00 00 01 bl e8 <copy_oldmem_page+0xe8>
e8: R_PPC_REL24 copy_to_user_or_kernel
ec: 3b c0 ff f2 li r30,-14
f0: 2c 03 00 00 cmpwi r3,0
f4: 40 82 00 08 bne fc <copy_oldmem_page+0xfc>
f8: 7f fe fb 78 mr r30,r31
fc: 7f 43 d3 78 mr r3,r26
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 iounmap
104: 80 01 00 24 lwz r0,36(r1)
108: 83 41 00 08 lwz r26,8(r1)
10c: 83 61 00 0c lwz r27,12(r1)
110: 7c 08 03 a6 mtlr r0
114: 83 81 00 10 lwz r28,16(r1)
118: 83 a1 00 14 lwz r29,20(r1)
11c: 4b ff ff 90 b ac <copy_oldmem_page+0xac>
Christophe
end of thread, other threads:[~2021-12-11 7:39 UTC | newest]
Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-10 13:35 [PATCH 0/2] kdump: simplify code Tiezhu Yang
2021-12-10 13:36 ` [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h Tiezhu Yang
2021-12-10 16:59 ` Andrew Morton
2021-12-10 23:50 ` Tiezhu Yang
2021-12-11 7:37 ` Christophe Leroy
2021-12-10 13:36 ` [PATCH 2/2] kdump: crashdump: use copy_to() to simplify the related code Tiezhu Yang