From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton <akpm@linux-foundation.org>,
Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: "linux-ia64@vger.kernel.org" <linux-ia64@vger.kernel.org>,
Baoquan He <bhe@redhat.com>,
"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
"x86@kernel.org" <x86@kernel.org>,
"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Vivek Goyal <vgoyal@redhat.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-riscv@lists.infradead.org"
<linux-riscv@lists.infradead.org>, Dave Young <dyoung@redhat.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
Date: Sat, 11 Dec 2021 07:37:21 +0000 [thread overview]
Message-ID: <d72f6169-4779-0901-ae1d-743a93a196c4@csgroup.eu> (raw)
In-Reply-To: <20211210085903.e7820815e738d7dc6da06050@linux-foundation.org>
Le 10/12/2021 à 17:59, Andrew Morton a écrit :
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exist similar code about
>> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
>> and then we can use copy_to() to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
I think keeping it inlined is better: copy_oldmem_page() is bigger with 
v2 (out-of-line helper) than with v1 (inlined helper), as the ppc32 
disassembly of both versions below shows:
v1:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 40 82 00 14 bne 20 <copy_oldmem_page+0x20>
10: 83 e1 00 1c lwz r31,28(r1)
14: 38 60 00 00 li r3,0
18: 38 21 00 20 addi r1,r1,32
1c: 4e 80 00 20 blr
20: 28 1f 10 00 cmplwi r31,4096
24: 93 61 00 0c stw r27,12(r1)
28: 7c 08 02 a6 mflr r0
2c: 93 81 00 10 stw r28,16(r1)
30: 93 a1 00 14 stw r29,20(r1)
34: 7c 9b 23 78 mr r27,r4
38: 90 01 00 24 stw r0,36(r1)
3c: 7c dd 33 78 mr r29,r6
40: 93 c1 00 18 stw r30,24(r1)
44: 7c fc 3b 78 mr r28,r7
48: 40 81 00 08 ble 50 <copy_oldmem_page+0x50>
4c: 3b e0 10 00 li r31,4096
50: 54 7e 60 26 rlwinm r30,r3,12,0,19
54: 7f c3 f3 78 mr r3,r30
58: 7f e4 fb 78 mr r4,r31
5c: 48 00 00 01 bl 5c <copy_oldmem_page+0x5c>
5c: R_PPC_REL24 memblock_is_region_memory
60: 2c 03 00 00 cmpwi r3,0
64: 41 82 00 30 beq 94 <copy_oldmem_page+0x94>
68: 2c 1c 00 00 cmpwi r28,0
6c: 3f de c0 00 addis r30,r30,-16384
70: 7f 63 db 78 mr r3,r27
74: 7f e5 fb 78 mr r5,r31
78: 7c 9e ea 14 add r4,r30,r29
7c: 41 82 00 7c beq f8 <copy_oldmem_page+0xf8>
80: 48 00 00 01 bl 80 <copy_oldmem_page+0x80>
80: R_PPC_REL24 _copy_to_user
84: 2c 03 00 00 cmpwi r3,0
88: 41 a2 00 48 beq d0 <copy_oldmem_page+0xd0>
8c: 3b e0 ff f2 li r31,-14
90: 48 00 00 40 b d0 <copy_oldmem_page+0xd0>
94: 7f c3 f3 78 mr r3,r30
98: 38 a0 05 91 li r5,1425
9c: 38 80 10 00 li r4,4096
a0: 48 00 00 01 bl a0 <copy_oldmem_page+0xa0>
a0: R_PPC_REL24 ioremap_prot
a4: 2c 1c 00 00 cmpwi r28,0
a8: 7c 7e 1b 78 mr r30,r3
ac: 7c 83 ea 14 add r4,r3,r29
b0: 7f e5 fb 78 mr r5,r31
b4: 7f 63 db 78 mr r3,r27
b8: 41 82 00 48 beq 100 <copy_oldmem_page+0x100>
bc: 48 00 00 01 bl bc <copy_oldmem_page+0xbc>
bc: R_PPC_REL24 _copy_to_user
c0: 2c 03 00 00 cmpwi r3,0
c4: 40 82 00 44 bne 108 <copy_oldmem_page+0x108>
c8: 7f c3 f3 78 mr r3,r30
cc: 48 00 00 01 bl cc <copy_oldmem_page+0xcc>
cc: R_PPC_REL24 iounmap
d0: 80 01 00 24 lwz r0,36(r1)
d4: 7f e3 fb 78 mr r3,r31
d8: 83 61 00 0c lwz r27,12(r1)
dc: 83 81 00 10 lwz r28,16(r1)
e0: 7c 08 03 a6 mtlr r0
e4: 83 a1 00 14 lwz r29,20(r1)
e8: 83 c1 00 18 lwz r30,24(r1)
ec: 83 e1 00 1c lwz r31,28(r1)
f0: 38 21 00 20 addi r1,r1,32
f4: 4e 80 00 20 blr
f8: 48 00 00 01 bl f8 <copy_oldmem_page+0xf8>
f8: R_PPC_REL24 memcpy
fc: 4b ff ff d4 b d0 <copy_oldmem_page+0xd0>
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 memcpy
104: 4b ff ff c4 b c8 <copy_oldmem_page+0xc8>
108: 3b e0 ff f2 li r31,-14
10c: 4b ff ff bc b c8 <copy_oldmem_page+0xc8>
v2:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 93 c1 00 18 stw r30,24(r1)
10: 3b c0 00 00 li r30,0
14: 40 82 00 18 bne 2c <copy_oldmem_page+0x2c>
18: 7f c3 f3 78 mr r3,r30
1c: 83 e1 00 1c lwz r31,28(r1)
20: 83 c1 00 18 lwz r30,24(r1)
24: 38 21 00 20 addi r1,r1,32
28: 4e 80 00 20 blr
2c: 28 1f 10 00 cmplwi r31,4096
30: 93 61 00 0c stw r27,12(r1)
34: 7c 08 02 a6 mflr r0
38: 93 81 00 10 stw r28,16(r1)
3c: 93 a1 00 14 stw r29,20(r1)
40: 7c db 33 78 mr r27,r6
44: 90 01 00 24 stw r0,36(r1)
48: 7c 9d 23 78 mr r29,r4
4c: 7c fc 3b 78 mr r28,r7
50: 40 81 00 08 ble 58 <copy_oldmem_page+0x58>
54: 3b e0 10 00 li r31,4096
58: 54 7e 60 26 rlwinm r30,r3,12,0,19
5c: 7f c3 f3 78 mr r3,r30
60: 7f e4 fb 78 mr r4,r31
64: 48 00 00 01 bl 64 <copy_oldmem_page+0x64>
64: R_PPC_REL24 memblock_is_region_memory
68: 2c 03 00 00 cmpwi r3,0
6c: 41 82 00 54 beq c0 <copy_oldmem_page+0xc0>
70: 3f de c0 00 addis r30,r30,-16384
74: 7c 9e da 14 add r4,r30,r27
78: 7f 86 e3 78 mr r6,r28
7c: 7f a3 eb 78 mr r3,r29
80: 7f e5 fb 78 mr r5,r31
84: 48 00 00 01 bl 84 <copy_oldmem_page+0x84>
84: R_PPC_REL24 copy_to_user_or_kernel
88: 3b c0 ff f2 li r30,-14
8c: 2c 03 00 00 cmpwi r3,0
90: 40 82 00 08 bne 98 <copy_oldmem_page+0x98>
94: 7f fe fb 78 mr r30,r31
98: 80 01 00 24 lwz r0,36(r1)
9c: 83 61 00 0c lwz r27,12(r1)
a0: 83 81 00 10 lwz r28,16(r1)
a4: 7c 08 03 a6 mtlr r0
a8: 83 a1 00 14 lwz r29,20(r1)
ac: 7f c3 f3 78 mr r3,r30
b0: 83 e1 00 1c lwz r31,28(r1)
b4: 83 c1 00 18 lwz r30,24(r1)
b8: 38 21 00 20 addi r1,r1,32
bc: 4e 80 00 20 blr
c0: 7f c3 f3 78 mr r3,r30
c4: 93 41 00 08 stw r26,8(r1)
c8: 38 a0 05 91 li r5,1425
cc: 38 80 10 00 li r4,4096
d0: 48 00 00 01 bl d0 <copy_oldmem_page+0xd0>
d0: R_PPC_REL24 ioremap_prot
d4: 7f 86 e3 78 mr r6,r28
d8: 7c 83 da 14 add r4,r3,r27
dc: 7c 7a 1b 78 mr r26,r3
e0: 7f e5 fb 78 mr r5,r31
e4: 7f a3 eb 78 mr r3,r29
e8: 48 00 00 01 bl e8 <copy_oldmem_page+0xe8>
e8: R_PPC_REL24 copy_to_user_or_kernel
ec: 3b c0 ff f2 li r30,-14
f0: 2c 03 00 00 cmpwi r3,0
f4: 40 82 00 08 bne fc <copy_oldmem_page+0xfc>
f8: 7f fe fb 78 mr r30,r31
fc: 7f 43 d3 78 mr r3,r26
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 iounmap
104: 80 01 00 24 lwz r0,36(r1)
108: 83 41 00 08 lwz r26,8(r1)
10c: 83 61 00 0c lwz r27,12(r1)
110: 7c 08 03 a6 mtlr r0
114: 83 81 00 10 lwz r28,16(r1)
118: 83 a1 00 14 lwz r29,20(r1)
11c: 4b ff ff 90 b ac <copy_oldmem_page+0xac>
Christophe
WARNING: multiple messages have this Message-ID (diff)
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton <akpm@linux-foundation.org>,
Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: "linux-ia64@vger.kernel.org" <linux-ia64@vger.kernel.org>,
Baoquan He <bhe@redhat.com>,
"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
"x86@kernel.org" <x86@kernel.org>,
"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Vivek Goyal <vgoyal@redhat.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-riscv@lists.infradead.org"
<linux-riscv@lists.infradead.org>, Dave Young <dyoung@redhat.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
Date: Sat, 11 Dec 2021 07:37:21 +0000 [thread overview]
Message-ID: <d72f6169-4779-0901-ae1d-743a93a196c4@csgroup.eu> (raw)
In-Reply-To: <20211210085903.e7820815e738d7dc6da06050@linux-foundation.org>
Le 10/12/2021 à 17:59, Andrew Morton a écrit :
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exist similar code about
>> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
>> and then we can use copy_to() to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
I think keeping it inlined is better.
copy_oldmem_page() is bigger with v2 (outlined) than with v1 (inlined),
see both below:
v1:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 40 82 00 14 bne 20 <copy_oldmem_page+0x20>
10: 83 e1 00 1c lwz r31,28(r1)
14: 38 60 00 00 li r3,0
18: 38 21 00 20 addi r1,r1,32
1c: 4e 80 00 20 blr
20: 28 1f 10 00 cmplwi r31,4096
24: 93 61 00 0c stw r27,12(r1)
28: 7c 08 02 a6 mflr r0
2c: 93 81 00 10 stw r28,16(r1)
30: 93 a1 00 14 stw r29,20(r1)
34: 7c 9b 23 78 mr r27,r4
38: 90 01 00 24 stw r0,36(r1)
3c: 7c dd 33 78 mr r29,r6
40: 93 c1 00 18 stw r30,24(r1)
44: 7c fc 3b 78 mr r28,r7
48: 40 81 00 08 ble 50 <copy_oldmem_page+0x50>
4c: 3b e0 10 00 li r31,4096
50: 54 7e 60 26 rlwinm r30,r3,12,0,19
54: 7f c3 f3 78 mr r3,r30
58: 7f e4 fb 78 mr r4,r31
5c: 48 00 00 01 bl 5c <copy_oldmem_page+0x5c>
5c: R_PPC_REL24 memblock_is_region_memory
60: 2c 03 00 00 cmpwi r3,0
64: 41 82 00 30 beq 94 <copy_oldmem_page+0x94>
68: 2c 1c 00 00 cmpwi r28,0
6c: 3f de c0 00 addis r30,r30,-16384
70: 7f 63 db 78 mr r3,r27
74: 7f e5 fb 78 mr r5,r31
78: 7c 9e ea 14 add r4,r30,r29
7c: 41 82 00 7c beq f8 <copy_oldmem_page+0xf8>
80: 48 00 00 01 bl 80 <copy_oldmem_page+0x80>
80: R_PPC_REL24 _copy_to_user
84: 2c 03 00 00 cmpwi r3,0
88: 41 a2 00 48 beq d0 <copy_oldmem_page+0xd0>
8c: 3b e0 ff f2 li r31,-14
90: 48 00 00 40 b d0 <copy_oldmem_page+0xd0>
94: 7f c3 f3 78 mr r3,r30
98: 38 a0 05 91 li r5,1425
9c: 38 80 10 00 li r4,4096
a0: 48 00 00 01 bl a0 <copy_oldmem_page+0xa0>
a0: R_PPC_REL24 ioremap_prot
a4: 2c 1c 00 00 cmpwi r28,0
a8: 7c 7e 1b 78 mr r30,r3
ac: 7c 83 ea 14 add r4,r3,r29
b0: 7f e5 fb 78 mr r5,r31
b4: 7f 63 db 78 mr r3,r27
b8: 41 82 00 48 beq 100 <copy_oldmem_page+0x100>
bc: 48 00 00 01 bl bc <copy_oldmem_page+0xbc>
bc: R_PPC_REL24 _copy_to_user
c0: 2c 03 00 00 cmpwi r3,0
c4: 40 82 00 44 bne 108 <copy_oldmem_page+0x108>
c8: 7f c3 f3 78 mr r3,r30
cc: 48 00 00 01 bl cc <copy_oldmem_page+0xcc>
cc: R_PPC_REL24 iounmap
d0: 80 01 00 24 lwz r0,36(r1)
d4: 7f e3 fb 78 mr r3,r31
d8: 83 61 00 0c lwz r27,12(r1)
dc: 83 81 00 10 lwz r28,16(r1)
e0: 7c 08 03 a6 mtlr r0
e4: 83 a1 00 14 lwz r29,20(r1)
e8: 83 c1 00 18 lwz r30,24(r1)
ec: 83 e1 00 1c lwz r31,28(r1)
f0: 38 21 00 20 addi r1,r1,32
f4: 4e 80 00 20 blr
f8: 48 00 00 01 bl f8 <copy_oldmem_page+0xf8>
f8: R_PPC_REL24 memcpy
fc: 4b ff ff d4 b d0 <copy_oldmem_page+0xd0>
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 memcpy
104: 4b ff ff c4 b c8 <copy_oldmem_page+0xc8>
108: 3b e0 ff f2 li r31,-14
10c: 4b ff ff bc b c8 <copy_oldmem_page+0xc8>
v2:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 93 c1 00 18 stw r30,24(r1)
10: 3b c0 00 00 li r30,0
14: 40 82 00 18 bne 2c <copy_oldmem_page+0x2c>
18: 7f c3 f3 78 mr r3,r30
1c: 83 e1 00 1c lwz r31,28(r1)
20: 83 c1 00 18 lwz r30,24(r1)
24: 38 21 00 20 addi r1,r1,32
28: 4e 80 00 20 blr
2c: 28 1f 10 00 cmplwi r31,4096
30: 93 61 00 0c stw r27,12(r1)
34: 7c 08 02 a6 mflr r0
38: 93 81 00 10 stw r28,16(r1)
3c: 93 a1 00 14 stw r29,20(r1)
40: 7c db 33 78 mr r27,r6
44: 90 01 00 24 stw r0,36(r1)
48: 7c 9d 23 78 mr r29,r4
4c: 7c fc 3b 78 mr r28,r7
50: 40 81 00 08 ble 58 <copy_oldmem_page+0x58>
54: 3b e0 10 00 li r31,4096
58: 54 7e 60 26 rlwinm r30,r3,12,0,19
5c: 7f c3 f3 78 mr r3,r30
60: 7f e4 fb 78 mr r4,r31
64: 48 00 00 01 bl 64 <copy_oldmem_page+0x64>
64: R_PPC_REL24 memblock_is_region_memory
68: 2c 03 00 00 cmpwi r3,0
6c: 41 82 00 54 beq c0 <copy_oldmem_page+0xc0>
70: 3f de c0 00 addis r30,r30,-16384
74: 7c 9e da 14 add r4,r30,r27
78: 7f 86 e3 78 mr r6,r28
7c: 7f a3 eb 78 mr r3,r29
80: 7f e5 fb 78 mr r5,r31
84: 48 00 00 01 bl 84 <copy_oldmem_page+0x84>
84: R_PPC_REL24 copy_to_user_or_kernel
88: 3b c0 ff f2 li r30,-14
8c: 2c 03 00 00 cmpwi r3,0
90: 40 82 00 08 bne 98 <copy_oldmem_page+0x98>
94: 7f fe fb 78 mr r30,r31
98: 80 01 00 24 lwz r0,36(r1)
9c: 83 61 00 0c lwz r27,12(r1)
a0: 83 81 00 10 lwz r28,16(r1)
a4: 7c 08 03 a6 mtlr r0
a8: 83 a1 00 14 lwz r29,20(r1)
ac: 7f c3 f3 78 mr r3,r30
b0: 83 e1 00 1c lwz r31,28(r1)
b4: 83 c1 00 18 lwz r30,24(r1)
b8: 38 21 00 20 addi r1,r1,32
bc: 4e 80 00 20 blr
c0: 7f c3 f3 78 mr r3,r30
c4: 93 41 00 08 stw r26,8(r1)
c8: 38 a0 05 91 li r5,1425
cc: 38 80 10 00 li r4,4096
d0: 48 00 00 01 bl d0 <copy_oldmem_page+0xd0>
d0: R_PPC_REL24 ioremap_prot
d4: 7f 86 e3 78 mr r6,r28
d8: 7c 83 da 14 add r4,r3,r27
dc: 7c 7a 1b 78 mr r26,r3
e0: 7f e5 fb 78 mr r5,r31
e4: 7f a3 eb 78 mr r3,r29
e8: 48 00 00 01 bl e8 <copy_oldmem_page+0xe8>
e8: R_PPC_REL24 copy_to_user_or_kernel
ec: 3b c0 ff f2 li r30,-14
f0: 2c 03 00 00 cmpwi r3,0
f4: 40 82 00 08 bne fc <copy_oldmem_page+0xfc>
f8: 7f fe fb 78 mr r30,r31
fc: 7f 43 d3 78 mr r3,r26
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 iounmap
104: 80 01 00 24 lwz r0,36(r1)
108: 83 41 00 08 lwz r26,8(r1)
10c: 83 61 00 0c lwz r27,12(r1)
110: 7c 08 03 a6 mtlr r0
114: 83 81 00 10 lwz r28,16(r1)
118: 83 a1 00 14 lwz r29,20(r1)
11c: 4b ff ff 90 b ac <copy_oldmem_page+0xac>
Christophe
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
WARNING: multiple messages have this Message-ID (diff)
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton <akpm@linux-foundation.org>,
Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: "linux-ia64@vger.kernel.org" <linux-ia64@vger.kernel.org>,
Baoquan He <bhe@redhat.com>,
"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
Dave Young <dyoung@redhat.com>, "x86@kernel.org" <x86@kernel.org>,
"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-riscv@lists.infradead.org"
<linux-riscv@lists.infradead.org>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
Vivek Goyal <vgoyal@redhat.com>
Subject: Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
Date: Sat, 11 Dec 2021 07:37:21 +0000 [thread overview]
Message-ID: <d72f6169-4779-0901-ae1d-743a93a196c4@csgroup.eu> (raw)
In-Reply-To: <20211210085903.e7820815e738d7dc6da06050@linux-foundation.org>
Le 10/12/2021 à 17:59, Andrew Morton a écrit :
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exist similar code about
>> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
>> and then we can use copy_to() to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
I think keeping it inlined is better.
copy_oldmem_page() is bigger with v2 (outlined) than with v1 (inlined),
see both below:
v1:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 40 82 00 14 bne 20 <copy_oldmem_page+0x20>
10: 83 e1 00 1c lwz r31,28(r1)
14: 38 60 00 00 li r3,0
18: 38 21 00 20 addi r1,r1,32
1c: 4e 80 00 20 blr
20: 28 1f 10 00 cmplwi r31,4096
24: 93 61 00 0c stw r27,12(r1)
28: 7c 08 02 a6 mflr r0
2c: 93 81 00 10 stw r28,16(r1)
30: 93 a1 00 14 stw r29,20(r1)
34: 7c 9b 23 78 mr r27,r4
38: 90 01 00 24 stw r0,36(r1)
3c: 7c dd 33 78 mr r29,r6
40: 93 c1 00 18 stw r30,24(r1)
44: 7c fc 3b 78 mr r28,r7
48: 40 81 00 08 ble 50 <copy_oldmem_page+0x50>
4c: 3b e0 10 00 li r31,4096
50: 54 7e 60 26 rlwinm r30,r3,12,0,19
54: 7f c3 f3 78 mr r3,r30
58: 7f e4 fb 78 mr r4,r31
5c: 48 00 00 01 bl 5c <copy_oldmem_page+0x5c>
5c: R_PPC_REL24 memblock_is_region_memory
60: 2c 03 00 00 cmpwi r3,0
64: 41 82 00 30 beq 94 <copy_oldmem_page+0x94>
68: 2c 1c 00 00 cmpwi r28,0
6c: 3f de c0 00 addis r30,r30,-16384
70: 7f 63 db 78 mr r3,r27
74: 7f e5 fb 78 mr r5,r31
78: 7c 9e ea 14 add r4,r30,r29
7c: 41 82 00 7c beq f8 <copy_oldmem_page+0xf8>
80: 48 00 00 01 bl 80 <copy_oldmem_page+0x80>
80: R_PPC_REL24 _copy_to_user
84: 2c 03 00 00 cmpwi r3,0
88: 41 a2 00 48 beq d0 <copy_oldmem_page+0xd0>
8c: 3b e0 ff f2 li r31,-14
90: 48 00 00 40 b d0 <copy_oldmem_page+0xd0>
94: 7f c3 f3 78 mr r3,r30
98: 38 a0 05 91 li r5,1425
9c: 38 80 10 00 li r4,4096
a0: 48 00 00 01 bl a0 <copy_oldmem_page+0xa0>
a0: R_PPC_REL24 ioremap_prot
a4: 2c 1c 00 00 cmpwi r28,0
a8: 7c 7e 1b 78 mr r30,r3
ac: 7c 83 ea 14 add r4,r3,r29
b0: 7f e5 fb 78 mr r5,r31
b4: 7f 63 db 78 mr r3,r27
b8: 41 82 00 48 beq 100 <copy_oldmem_page+0x100>
bc: 48 00 00 01 bl bc <copy_oldmem_page+0xbc>
bc: R_PPC_REL24 _copy_to_user
c0: 2c 03 00 00 cmpwi r3,0
c4: 40 82 00 44 bne 108 <copy_oldmem_page+0x108>
c8: 7f c3 f3 78 mr r3,r30
cc: 48 00 00 01 bl cc <copy_oldmem_page+0xcc>
cc: R_PPC_REL24 iounmap
d0: 80 01 00 24 lwz r0,36(r1)
d4: 7f e3 fb 78 mr r3,r31
d8: 83 61 00 0c lwz r27,12(r1)
dc: 83 81 00 10 lwz r28,16(r1)
e0: 7c 08 03 a6 mtlr r0
e4: 83 a1 00 14 lwz r29,20(r1)
e8: 83 c1 00 18 lwz r30,24(r1)
ec: 83 e1 00 1c lwz r31,28(r1)
f0: 38 21 00 20 addi r1,r1,32
f4: 4e 80 00 20 blr
f8: 48 00 00 01 bl f8 <copy_oldmem_page+0xf8>
f8: R_PPC_REL24 memcpy
fc: 4b ff ff d4 b d0 <copy_oldmem_page+0xd0>
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 memcpy
104: 4b ff ff c4 b c8 <copy_oldmem_page+0xc8>
108: 3b e0 ff f2 li r31,-14
10c: 4b ff ff bc b c8 <copy_oldmem_page+0xc8>
v2:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 93 c1 00 18 stw r30,24(r1)
10: 3b c0 00 00 li r30,0
14: 40 82 00 18 bne 2c <copy_oldmem_page+0x2c>
18: 7f c3 f3 78 mr r3,r30
1c: 83 e1 00 1c lwz r31,28(r1)
20: 83 c1 00 18 lwz r30,24(r1)
24: 38 21 00 20 addi r1,r1,32
28: 4e 80 00 20 blr
2c: 28 1f 10 00 cmplwi r31,4096
30: 93 61 00 0c stw r27,12(r1)
34: 7c 08 02 a6 mflr r0
38: 93 81 00 10 stw r28,16(r1)
3c: 93 a1 00 14 stw r29,20(r1)
40: 7c db 33 78 mr r27,r6
44: 90 01 00 24 stw r0,36(r1)
48: 7c 9d 23 78 mr r29,r4
4c: 7c fc 3b 78 mr r28,r7
50: 40 81 00 08 ble 58 <copy_oldmem_page+0x58>
54: 3b e0 10 00 li r31,4096
58: 54 7e 60 26 rlwinm r30,r3,12,0,19
5c: 7f c3 f3 78 mr r3,r30
60: 7f e4 fb 78 mr r4,r31
64: 48 00 00 01 bl 64 <copy_oldmem_page+0x64>
64: R_PPC_REL24 memblock_is_region_memory
68: 2c 03 00 00 cmpwi r3,0
6c: 41 82 00 54 beq c0 <copy_oldmem_page+0xc0>
70: 3f de c0 00 addis r30,r30,-16384
74: 7c 9e da 14 add r4,r30,r27
78: 7f 86 e3 78 mr r6,r28
7c: 7f a3 eb 78 mr r3,r29
80: 7f e5 fb 78 mr r5,r31
84: 48 00 00 01 bl 84 <copy_oldmem_page+0x84>
84: R_PPC_REL24 copy_to_user_or_kernel
88: 3b c0 ff f2 li r30,-14
8c: 2c 03 00 00 cmpwi r3,0
90: 40 82 00 08 bne 98 <copy_oldmem_page+0x98>
94: 7f fe fb 78 mr r30,r31
98: 80 01 00 24 lwz r0,36(r1)
9c: 83 61 00 0c lwz r27,12(r1)
a0: 83 81 00 10 lwz r28,16(r1)
a4: 7c 08 03 a6 mtlr r0
a8: 83 a1 00 14 lwz r29,20(r1)
ac: 7f c3 f3 78 mr r3,r30
b0: 83 e1 00 1c lwz r31,28(r1)
b4: 83 c1 00 18 lwz r30,24(r1)
b8: 38 21 00 20 addi r1,r1,32
bc: 4e 80 00 20 blr
c0: 7f c3 f3 78 mr r3,r30
c4: 93 41 00 08 stw r26,8(r1)
c8: 38 a0 05 91 li r5,1425
cc: 38 80 10 00 li r4,4096
d0: 48 00 00 01 bl d0 <copy_oldmem_page+0xd0>
d0: R_PPC_REL24 ioremap_prot
d4: 7f 86 e3 78 mr r6,r28
d8: 7c 83 da 14 add r4,r3,r27
dc: 7c 7a 1b 78 mr r26,r3
e0: 7f e5 fb 78 mr r5,r31
e4: 7f a3 eb 78 mr r3,r29
e8: 48 00 00 01 bl e8 <copy_oldmem_page+0xe8>
e8: R_PPC_REL24 copy_to_user_or_kernel
ec: 3b c0 ff f2 li r30,-14
f0: 2c 03 00 00 cmpwi r3,0
f4: 40 82 00 08 bne fc <copy_oldmem_page+0xfc>
f8: 7f fe fb 78 mr r30,r31
fc: 7f 43 d3 78 mr r3,r26
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 iounmap
104: 80 01 00 24 lwz r0,36(r1)
108: 83 41 00 08 lwz r26,8(r1)
10c: 83 61 00 0c lwz r27,12(r1)
110: 7c 08 03 a6 mtlr r0
114: 83 81 00 10 lwz r28,16(r1)
118: 83 a1 00 14 lwz r29,20(r1)
11c: 4b ff ff 90 b ac <copy_oldmem_page+0xac>
Christophe
WARNING: multiple messages have this Message-ID (diff)
From: Christophe Leroy <christophe.leroy@csgroup.eu>
To: Andrew Morton <akpm@linux-foundation.org>,
Tiezhu Yang <yangtiezhu@loongson.cn>
Cc: "linux-ia64@vger.kernel.org" <linux-ia64@vger.kernel.org>,
Baoquan He <bhe@redhat.com>,
"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
"x86@kernel.org" <x86@kernel.org>,
"kexec@lists.infradead.org" <kexec@lists.infradead.org>,
"linux-mips@vger.kernel.org" <linux-mips@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Vivek Goyal <vgoyal@redhat.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-riscv@lists.infradead.org"
<linux-riscv@lists.infradead.org>, Dave Young <dyoung@redhat.com>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h
Date: Sat, 11 Dec 2021 07:37:21 +0000 [thread overview]
Message-ID: <d72f6169-4779-0901-ae1d-743a93a196c4@csgroup.eu> (raw)
In-Reply-To: <20211210085903.e7820815e738d7dc6da06050@linux-foundation.org>
Le 10/12/2021 à 17:59, Andrew Morton a écrit :
> On Fri, 10 Dec 2021 21:36:00 +0800 Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
>> In arch/*/kernel/crash_dump*.c, there exist similar code about
>> copy_oldmem_page(), move copy_to() from vmcore.c to uaccess.h,
>> and then we can use copy_to() to simplify the related code.
>>
>> ...
>>
>> --- a/fs/proc/vmcore.c
>> +++ b/fs/proc/vmcore.c
>> @@ -238,20 +238,6 @@ copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
>> return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
>> }
>>
>> -/*
>> - * Copy to either kernel or user space
>> - */
>> -static int copy_to(void *target, void *src, size_t size, int userbuf)
>> -{
>> - if (userbuf) {
>> - if (copy_to_user((char __user *) target, src, size))
>> - return -EFAULT;
>> - } else {
>> - memcpy(target, src, size);
>> - }
>> - return 0;
>> -}
>> -
>> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
>> static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
>> {
>> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
>> index ac03940..4a6c3e4 100644
>> --- a/include/linux/uaccess.h
>> +++ b/include/linux/uaccess.h
>> @@ -201,6 +201,20 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
>> return n;
>> }
>>
>> +/*
>> + * Copy to either kernel or user space
>> + */
>> +static inline int copy_to(void *target, void *src, size_t size, int userbuf)
>> +{
>> + if (userbuf) {
>> + if (copy_to_user((char __user *) target, src, size))
>> + return -EFAULT;
>> + } else {
>> + memcpy(target, src, size);
>> + }
>> + return 0;
>> +}
>> +
>
> Ordinarily I'd say "this is too large to be inlined". But the function
> has only a single callsite per architecture so inlining it won't cause
> bloat at present.
>
> But hopefully copy_to() will get additional callers in the future, in
> which case it shouldn't be inlined. So I'm thinking it would be best
> to start out with this as a regular non-inlined function, in
> lib/usercopy.c.
>
> Also, copy_to() is a very poor name for a globally-visible helper
> function. Better would be copy_to_user_or_kernel(), although that's
> perhaps a bit long.
>
> And the `userbuf' arg should have type bool, yes?
>
I think keeping it inlined is better.
copy_oldmem_page() is bigger with v2 (outlined) than with v1 (inlined),
see both below:
v1:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 40 82 00 14 bne 20 <copy_oldmem_page+0x20>
10: 83 e1 00 1c lwz r31,28(r1)
14: 38 60 00 00 li r3,0
18: 38 21 00 20 addi r1,r1,32
1c: 4e 80 00 20 blr
20: 28 1f 10 00 cmplwi r31,4096
24: 93 61 00 0c stw r27,12(r1)
28: 7c 08 02 a6 mflr r0
2c: 93 81 00 10 stw r28,16(r1)
30: 93 a1 00 14 stw r29,20(r1)
34: 7c 9b 23 78 mr r27,r4
38: 90 01 00 24 stw r0,36(r1)
3c: 7c dd 33 78 mr r29,r6
40: 93 c1 00 18 stw r30,24(r1)
44: 7c fc 3b 78 mr r28,r7
48: 40 81 00 08 ble 50 <copy_oldmem_page+0x50>
4c: 3b e0 10 00 li r31,4096
50: 54 7e 60 26 rlwinm r30,r3,12,0,19
54: 7f c3 f3 78 mr r3,r30
58: 7f e4 fb 78 mr r4,r31
5c: 48 00 00 01 bl 5c <copy_oldmem_page+0x5c>
5c: R_PPC_REL24 memblock_is_region_memory
60: 2c 03 00 00 cmpwi r3,0
64: 41 82 00 30 beq 94 <copy_oldmem_page+0x94>
68: 2c 1c 00 00 cmpwi r28,0
6c: 3f de c0 00 addis r30,r30,-16384
70: 7f 63 db 78 mr r3,r27
74: 7f e5 fb 78 mr r5,r31
78: 7c 9e ea 14 add r4,r30,r29
7c: 41 82 00 7c beq f8 <copy_oldmem_page+0xf8>
80: 48 00 00 01 bl 80 <copy_oldmem_page+0x80>
80: R_PPC_REL24 _copy_to_user
84: 2c 03 00 00 cmpwi r3,0
88: 41 a2 00 48 beq d0 <copy_oldmem_page+0xd0>
8c: 3b e0 ff f2 li r31,-14
90: 48 00 00 40 b d0 <copy_oldmem_page+0xd0>
94: 7f c3 f3 78 mr r3,r30
98: 38 a0 05 91 li r5,1425
9c: 38 80 10 00 li r4,4096
a0: 48 00 00 01 bl a0 <copy_oldmem_page+0xa0>
a0: R_PPC_REL24 ioremap_prot
a4: 2c 1c 00 00 cmpwi r28,0
a8: 7c 7e 1b 78 mr r30,r3
ac: 7c 83 ea 14 add r4,r3,r29
b0: 7f e5 fb 78 mr r5,r31
b4: 7f 63 db 78 mr r3,r27
b8: 41 82 00 48 beq 100 <copy_oldmem_page+0x100>
bc: 48 00 00 01 bl bc <copy_oldmem_page+0xbc>
bc: R_PPC_REL24 _copy_to_user
c0: 2c 03 00 00 cmpwi r3,0
c4: 40 82 00 44 bne 108 <copy_oldmem_page+0x108>
c8: 7f c3 f3 78 mr r3,r30
cc: 48 00 00 01 bl cc <copy_oldmem_page+0xcc>
cc: R_PPC_REL24 iounmap
d0: 80 01 00 24 lwz r0,36(r1)
d4: 7f e3 fb 78 mr r3,r31
d8: 83 61 00 0c lwz r27,12(r1)
dc: 83 81 00 10 lwz r28,16(r1)
e0: 7c 08 03 a6 mtlr r0
e4: 83 a1 00 14 lwz r29,20(r1)
e8: 83 c1 00 18 lwz r30,24(r1)
ec: 83 e1 00 1c lwz r31,28(r1)
f0: 38 21 00 20 addi r1,r1,32
f4: 4e 80 00 20 blr
f8: 48 00 00 01 bl f8 <copy_oldmem_page+0xf8>
f8: R_PPC_REL24 memcpy
fc: 4b ff ff d4 b d0 <copy_oldmem_page+0xd0>
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 memcpy
104: 4b ff ff c4 b c8 <copy_oldmem_page+0xc8>
108: 3b e0 ff f2 li r31,-14
10c: 4b ff ff bc b c8 <copy_oldmem_page+0xc8>
v2:
00000000 <copy_oldmem_page>:
0: 94 21 ff e0 stwu r1,-32(r1)
4: 93 e1 00 1c stw r31,28(r1)
8: 7c bf 2b 79 mr. r31,r5
c: 93 c1 00 18 stw r30,24(r1)
10: 3b c0 00 00 li r30,0
14: 40 82 00 18 bne 2c <copy_oldmem_page+0x2c>
18: 7f c3 f3 78 mr r3,r30
1c: 83 e1 00 1c lwz r31,28(r1)
20: 83 c1 00 18 lwz r30,24(r1)
24: 38 21 00 20 addi r1,r1,32
28: 4e 80 00 20 blr
2c: 28 1f 10 00 cmplwi r31,4096
30: 93 61 00 0c stw r27,12(r1)
34: 7c 08 02 a6 mflr r0
38: 93 81 00 10 stw r28,16(r1)
3c: 93 a1 00 14 stw r29,20(r1)
40: 7c db 33 78 mr r27,r6
44: 90 01 00 24 stw r0,36(r1)
48: 7c 9d 23 78 mr r29,r4
4c: 7c fc 3b 78 mr r28,r7
50: 40 81 00 08 ble 58 <copy_oldmem_page+0x58>
54: 3b e0 10 00 li r31,4096
58: 54 7e 60 26 rlwinm r30,r3,12,0,19
5c: 7f c3 f3 78 mr r3,r30
60: 7f e4 fb 78 mr r4,r31
64: 48 00 00 01 bl 64 <copy_oldmem_page+0x64>
64: R_PPC_REL24 memblock_is_region_memory
68: 2c 03 00 00 cmpwi r3,0
6c: 41 82 00 54 beq c0 <copy_oldmem_page+0xc0>
70: 3f de c0 00 addis r30,r30,-16384
74: 7c 9e da 14 add r4,r30,r27
78: 7f 86 e3 78 mr r6,r28
7c: 7f a3 eb 78 mr r3,r29
80: 7f e5 fb 78 mr r5,r31
84: 48 00 00 01 bl 84 <copy_oldmem_page+0x84>
84: R_PPC_REL24 copy_to_user_or_kernel
88: 3b c0 ff f2 li r30,-14
8c: 2c 03 00 00 cmpwi r3,0
90: 40 82 00 08 bne 98 <copy_oldmem_page+0x98>
94: 7f fe fb 78 mr r30,r31
98: 80 01 00 24 lwz r0,36(r1)
9c: 83 61 00 0c lwz r27,12(r1)
a0: 83 81 00 10 lwz r28,16(r1)
a4: 7c 08 03 a6 mtlr r0
a8: 83 a1 00 14 lwz r29,20(r1)
ac: 7f c3 f3 78 mr r3,r30
b0: 83 e1 00 1c lwz r31,28(r1)
b4: 83 c1 00 18 lwz r30,24(r1)
b8: 38 21 00 20 addi r1,r1,32
bc: 4e 80 00 20 blr
c0: 7f c3 f3 78 mr r3,r30
c4: 93 41 00 08 stw r26,8(r1)
c8: 38 a0 05 91 li r5,1425
cc: 38 80 10 00 li r4,4096
d0: 48 00 00 01 bl d0 <copy_oldmem_page+0xd0>
d0: R_PPC_REL24 ioremap_prot
d4: 7f 86 e3 78 mr r6,r28
d8: 7c 83 da 14 add r4,r3,r27
dc: 7c 7a 1b 78 mr r26,r3
e0: 7f e5 fb 78 mr r5,r31
e4: 7f a3 eb 78 mr r3,r29
e8: 48 00 00 01 bl e8 <copy_oldmem_page+0xe8>
e8: R_PPC_REL24 copy_to_user_or_kernel
ec: 3b c0 ff f2 li r30,-14
f0: 2c 03 00 00 cmpwi r3,0
f4: 40 82 00 08 bne fc <copy_oldmem_page+0xfc>
f8: 7f fe fb 78 mr r30,r31
fc: 7f 43 d3 78 mr r3,r26
100: 48 00 00 01 bl 100 <copy_oldmem_page+0x100>
100: R_PPC_REL24 iounmap
104: 80 01 00 24 lwz r0,36(r1)
108: 83 41 00 08 lwz r26,8(r1)
10c: 83 61 00 0c lwz r27,12(r1)
110: 7c 08 03 a6 mtlr r0
114: 83 81 00 10 lwz r28,16(r1)
118: 83 a1 00 14 lwz r29,20(r1)
11c: 4b ff ff 90 b ac <copy_oldmem_page+0xac>
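(Editor's note, for readers without the patch at hand: the `li r31,-14` / `li r30,-14` instructions in both listings load -EFAULT, since EFAULT is 14 on Linux, i.e. the error the helper returns when the user-space copy fails. The helper both versions compile down to is essentially the following. This is a userspace sketch only: `copy_to_user()` is stubbed as a plain `memcpy()` that always succeeds, because the real routine depends on kernel uaccess machinery.)

```c
#include <assert.h>  /* for the demo assertions below */
#include <errno.h>   /* EFAULT (14 on Linux) */
#include <stddef.h>  /* size_t */
#include <string.h>  /* memcpy, memcmp */

/*
 * Userspace stand-in for the kernel's copy_to_user(); the real one
 * returns the number of bytes that could NOT be copied (0 on success).
 */
static unsigned long copy_to_user(void *to, const void *from, unsigned long n)
{
	memcpy(to, from, n);
	return 0;
}

/*
 * Copy to either kernel or user space; returns 0 on success,
 * -EFAULT if the user-space copy faults.
 */
static int copy_to_user_or_kernel(void *target, void *src, size_t size,
				  int userbuf)
{
	if (userbuf) {
		if (copy_to_user(target, src, size))
			return -EFAULT;
	} else {
		memcpy(target, src, size);
	}
	return 0;
}
```

Whether this body is emitted inline at each call site (v1, two `_copy_to_user`/`memcpy` branch pairs) or as one out-of-line copy in lib/usercopy.c (v2, two `bl copy_to_user_or_kernel` calls) is exactly what the two disassemblies above compare.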
Christophe
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Thread overview: 36+ messages [~2021-12-11 7:37 UTC|newest]
2021-12-10 13:35 [PATCH 0/2] kdump: simplify code Tiezhu Yang
2021-12-10 13:36 ` [PATCH 1/2] kdump: vmcore: move copy_to() from vmcore.c to uaccess.h Tiezhu Yang
2021-12-10 16:59   ` Andrew Morton
2021-12-10 23:50     ` Tiezhu Yang
2021-12-11  7:37     ` Christophe Leroy [this message]
2021-12-10 13:36 ` [PATCH 2/2] kdump: crashdump: use copy_to() to simplify the related code Tiezhu Yang