* [PATCH v4] powerpc: Avoid code patching freed init sections
@ 2018-09-14 1:14 Michael Neuling
2018-09-14 4:22 ` Nicholas Piggin
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Michael Neuling @ 2018-09-14 1:14 UTC (permalink / raw)
To: mpe
Cc: linuxppc-dev, Nicholas Piggin, paulus, Haren Myneni, mikey,
Michal Suchánek, Christophe LEROY
This stops us from doing code patching in init sections after they've
been freed.
In this chain:
kvm_guest_init() ->
kvm_use_magic_page() ->
fault_in_pages_readable() ->
__get_user() ->
__get_user_nocheck() ->
barrier_nospec();
We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (hence kvm_guest_init()), this code
goes away and hence should no longer be patched.
We've seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.
cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>
---
For stable I've marked this as v4.13+ since that's when we refactored
code-patching.c but it could go back even further than that. In
reality though, I think we can only hit this since the first
spectre/meltdown changes.
v4:
Feedback from Christophe Leroy:
- init_mem_free -> init_mem_is_free
- prlog %lx -> %px
v3:
Add init_mem_free flag to avoid potential race.
Feedback from Christophe Leroy:
- use init_section_contains()
- change order of init test for performance
- use pr_debug()
- remove blank line
v2:
Print when we skip an address
---
 arch/powerpc/include/asm/setup.h | 1 +
 arch/powerpc/lib/code-patching.c | 6 ++++++
 arch/powerpc/mm/mem.c            | 2 ++
 3 files changed, 9 insertions(+)
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 1a951b0046..1fffbba8d6 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 850f3b8f4d..6ae2777c22 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+		return 0;
+	}
+
 	__put_user_size(instr, patch_addr, 4, err);
 	if (err)
 		return err;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 5c8530d0c6..04ccb274a6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -63,6 +63,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -396,6 +397,7 @@ void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
 	mark_initmem_nx();
+	init_mem_is_free = true;
 	free_initmem_default(POISON_FREE_INITMEM);
 }
--
2.17.1
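The guard the patch adds can be modeled outside the kernel as a small stand-alone sketch. All names, bounds, and addresses below are illustrative stand-ins (e.g. for __init_begin/__init_end), not the kernel's actual implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy bounds standing in for the kernel's init section. */
static const uintptr_t init_begin = 0x1000, init_end = 0x2000;
static bool init_mem_is_free;

/* Mirrors the spirit of init_section_contains():
 * is [addr, addr + len) inside the init section? */
static bool init_section_contains(uintptr_t addr, size_t len)
{
	return addr >= init_begin && addr + len <= init_end;
}

/* Returns 0 when the write is silently skipped (freed init text),
 * 1 when patching would proceed -- the shape of the early return
 * added to __patch_instruction() in the patch above. */
static int would_patch(uintptr_t exec_addr)
{
	if (init_mem_is_free && init_section_contains(exec_addr, 4))
		return 0;
	return 1;
}
```

Before the flag is set (i.e. before free_initmem() runs), init addresses are still patchable; afterwards they are skipped while ordinary kernel text is unaffected.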
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-09-14 1:14 [PATCH v4] powerpc: Avoid code patching freed init sections Michael Neuling
@ 2018-09-14 4:22 ` Nicholas Piggin
2018-09-18 8:52 ` Christophe LEROY
2018-09-14 5:32 ` Christophe LEROY
2018-09-21 11:59 ` [v4] " Michael Ellerman
2 siblings, 1 reply; 12+ messages in thread
From: Nicholas Piggin @ 2018-09-14 4:22 UTC (permalink / raw)
To: Michael Neuling
Cc: mpe, linuxppc-dev, paulus, Haren Myneni, Michal Suchánek,
Christophe LEROY
On Fri, 14 Sep 2018 11:14:11 +1000
Michael Neuling <mikey@neuling.org> wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
>
> In this chain:
> kvm_guest_init() ->
> kvm_use_magic_page() ->
> fault_in_pages_readable() ->
> __get_user() ->
> __get_user_nocheck() ->
> barrier_nospec();
>
> We have a code patching location at barrier_nospec() and
> kvm_guest_init() is an init function. This whole chain gets inlined,
> so when we free the init section (hence kvm_guest_init()), this code
> goes away and hence should no longer be patched.
>
> We've seen this as userspace memory corruption when using a memory
> checker while doing partition migration testing on PowerVM (this
> starts the code patching post migration via
> /sys/kernel/mobility/migration). In theory, it could also happen when
> using /sys/kernel/debug/powerpc/barrier_nospec.
>
> cc: stable@vger.kernel.org # 4.13+
> Signed-off-by: Michael Neuling <mikey@neuling.org>
>
> ---
> For stable I've marked this as v4.13+ since that's when we refactored
> code-patching.c but it could go back even further than that. In
> reality though, I think we can only hit this since the first
> spectre/meltdown changes.
>
> v4:
> Feedback from Christophe Leroy:
> - init_mem_free -> init_mem_is_free
> - prlog %lx -> %px
>
> v3:
> Add init_mem_free flag to avoid potential race.
> Feedback from Christophe Leroy:
> - use init_section_contains()
> - change order of init test for performance
> - use pr_debug()
> - remove blank line
>
> v2:
> Print when we skip an address
> ---
> arch/powerpc/include/asm/setup.h | 1 +
> arch/powerpc/lib/code-patching.c | 6 ++++++
> arch/powerpc/mm/mem.c | 2 ++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> index 1a951b0046..1fffbba8d6 100644
> --- a/arch/powerpc/include/asm/setup.h
> +++ b/arch/powerpc/include/asm/setup.h
> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>
> extern unsigned int rtas_data;
> extern unsigned long long memory_limit;
> +extern bool init_mem_is_free;
> extern unsigned long klimit;
> extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 850f3b8f4d..6ae2777c22 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
> {
> int err;
>
> + /* Make sure we aren't patching a freed init section */
> + if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> + pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> + return 0;
> + }
What we should do is a whitelist, make sure it's only patching the
sections we want it to.
That's a bigger job when you consider modules and things too though,
so this looks good for now. Thanks,
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
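The whitelist approach suggested here could look roughly like the following deny-by-default sketch. The helper names and fixed-size table are hypothetical, purely to illustrate the idea; the kernel has no API in this form:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct text_range {
	uintptr_t start, end;	/* half-open: [start, end) */
};

/* Ranges patching is explicitly allowed to touch, e.g. core kernel
 * text and each loaded module's text. */
static struct text_range patchable[8];
static size_t n_patchable;

static void allow_patching(uintptr_t start, uintptr_t end)
{
	patchable[n_patchable++] = (struct text_range){ start, end };
}

/* Whitelist check: deny by default, unlike the blacklist in the
 * patch, which denies only freed init sections. */
static bool may_patch(uintptr_t addr)
{
	for (size_t i = 0; i < n_patchable; i++)
		if (addr >= patchable[i].start && addr < patchable[i].end)
			return true;
	return false;
}
```

The cost Nick alludes to is maintaining the table: every legitimate patch site (modules, trampolines, etc.) must be registered, which is why the simpler blacklist is acceptable for now.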
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-09-14 1:14 [PATCH v4] powerpc: Avoid code patching freed init sections Michael Neuling
2018-09-14 4:22 ` Nicholas Piggin
@ 2018-09-14 5:32 ` Christophe LEROY
2018-09-21 11:59 ` [v4] " Michael Ellerman
2 siblings, 0 replies; 12+ messages in thread
From: Christophe LEROY @ 2018-09-14 5:32 UTC (permalink / raw)
To: Michael Neuling, mpe
Cc: linuxppc-dev, Nicholas Piggin, paulus, Haren Myneni,
Michal Suchánek
On 14/09/2018 at 03:14, Michael Neuling wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
>
> In this chain:
> kvm_guest_init() ->
> kvm_use_magic_page() ->
> fault_in_pages_readable() ->
> __get_user() ->
> __get_user_nocheck() ->
> barrier_nospec();
>
> We have a code patching location at barrier_nospec() and
> kvm_guest_init() is an init function. This whole chain gets inlined,
> so when we free the init section (hence kvm_guest_init()), this code
> goes away and hence should no longer be patched.
>
> We've seen this as userspace memory corruption when using a memory
> checker while doing partition migration testing on PowerVM (this
> starts the code patching post migration via
> /sys/kernel/mobility/migration). In theory, it could also happen when
> using /sys/kernel/debug/powerpc/barrier_nospec.
>
> cc: stable@vger.kernel.org # 4.13+
> Signed-off-by: Michael Neuling <mikey@neuling.org>
>
> ---
> For stable I've marked this as v4.13+ since that's when we refactored
> code-patching.c but it could go back even further than that. In
> reality though, I think we can only hit this since the first
> spectre/meltdown changes.
>
> v4:
> Feedback from Christophe Leroy:
> - init_mem_free -> init_mem_is_free
> - prlog %lx -> %px
>
> v3:
> Add init_mem_free flag to avoid potential race.
> Feedback from Christophe Leroy:
> - use init_section_contains()
> - change order of init test for performance
> - use pr_debug()
> - remove blank line
>
> v2:
> Print when we skip an address
> ---
> arch/powerpc/include/asm/setup.h | 1 +
> arch/powerpc/lib/code-patching.c | 6 ++++++
> arch/powerpc/mm/mem.c | 2 ++
> 3 files changed, 9 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> index 1a951b0046..1fffbba8d6 100644
> --- a/arch/powerpc/include/asm/setup.h
> +++ b/arch/powerpc/include/asm/setup.h
> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>
> extern unsigned int rtas_data;
> extern unsigned long long memory_limit;
> +extern bool init_mem_is_free;
> extern unsigned long klimit;
> extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 850f3b8f4d..6ae2777c22 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
> {
> int err;
>
> + /* Make sure we aren't patching a freed init section */
> + if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> + pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> + return 0;
> + }
> +
> __put_user_size(instr, patch_addr, 4, err);
> if (err)
> return err;
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 5c8530d0c6..04ccb274a6 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -63,6 +63,7 @@
> #endif
>
> unsigned long long memory_limit;
> +bool init_mem_is_free;
>
> #ifdef CONFIG_HIGHMEM
> pte_t *kmap_pte;
> @@ -396,6 +397,7 @@ void free_initmem(void)
> {
> ppc_md.progress = ppc_printk_progress;
> mark_initmem_nx();
> + init_mem_is_free = true;
> free_initmem_default(POISON_FREE_INITMEM);
> }
>
>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-09-14 4:22 ` Nicholas Piggin
@ 2018-09-18 8:52 ` Christophe LEROY
2018-09-18 11:35 ` Michal Suchánek
0 siblings, 1 reply; 12+ messages in thread
From: Christophe LEROY @ 2018-09-18 8:52 UTC (permalink / raw)
To: Nicholas Piggin, Michael Neuling
Cc: mpe, linuxppc-dev, paulus, Haren Myneni, Michal Suchánek
On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> On Fri, 14 Sep 2018 11:14:11 +1000
> Michael Neuling <mikey@neuling.org> wrote:
>
>> This stops us from doing code patching in init sections after they've
>> been freed.
>>
>> In this chain:
>> kvm_guest_init() ->
>> kvm_use_magic_page() ->
>> fault_in_pages_readable() ->
>> __get_user() ->
>> __get_user_nocheck() ->
>> barrier_nospec();
>>
>> We have a code patching location at barrier_nospec() and
>> kvm_guest_init() is an init function. This whole chain gets inlined,
>> so when we free the init section (hence kvm_guest_init()), this code
>> goes away and hence should no longer be patched.
>>
>> We've seen this as userspace memory corruption when using a memory
>> checker while doing partition migration testing on PowerVM (this
>> starts the code patching post migration via
>> /sys/kernel/mobility/migration). In theory, it could also happen when
>> using /sys/kernel/debug/powerpc/barrier_nospec.
>>
>> cc: stable@vger.kernel.org # 4.13+
>> Signed-off-by: Michael Neuling <mikey@neuling.org>
>>
>> ---
>> For stable I've marked this as v4.13+ since that's when we refactored
>> code-patching.c but it could go back even further than that. In
>> reality though, I think we can only hit this since the first
>> spectre/meltdown changes.
>>
>> v4:
>> Feedback from Christophe Leroy:
>> - init_mem_free -> init_mem_is_free
>> - prlog %lx -> %px
>>
>> v3:
>> Add init_mem_free flag to avoid potential race.
>> Feedback from Christophe Leroy:
>> - use init_section_contains()
>> - change order of init test for performance
>> - use pr_debug()
>> - remove blank line
>>
>> v2:
>> Print when we skip an address
>> ---
>> arch/powerpc/include/asm/setup.h | 1 +
>> arch/powerpc/lib/code-patching.c | 6 ++++++
>> arch/powerpc/mm/mem.c | 2 ++
>> 3 files changed, 9 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
>> index 1a951b0046..1fffbba8d6 100644
>> --- a/arch/powerpc/include/asm/setup.h
>> +++ b/arch/powerpc/include/asm/setup.h
>> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
>>
>> extern unsigned int rtas_data;
>> extern unsigned long long memory_limit;
>> +extern bool init_mem_is_free;
>> extern unsigned long klimit;
>> extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
>>
>> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
>> index 850f3b8f4d..6ae2777c22 100644
>> --- a/arch/powerpc/lib/code-patching.c
>> +++ b/arch/powerpc/lib/code-patching.c
>> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>> {
>> int err;
>>
>> + /* Make sure we aren't patching a freed init section */
>> + if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
>> + pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
>> + return 0;
>> + }
>
> What we should do is a whitelist, make sure it's only patching the
> sections we want it to.
>
> That's a bigger job when you consider modules and things too though,
> so this looks good for now. Thanks,
What about using kernel_text_address() for it then? It also handles
modules; is it more complicated than that?
Christophe
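kernel_text_address() is essentially a disjunction of range checks: core kernel text, plus the text of every loaded module (the real function also covers a few extra cases such as ftrace trampolines). A simplified model of that shape, with illustrative names and addresses only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for _stext/_etext. */
static uintptr_t stext = 0x1000, etext = 0x5000;

/* Minimal stand-in for the kernel's module list. */
struct module_model {
	uintptr_t text_start, text_end;
	struct module_model *next;
};
static struct module_model *modules;

static bool core_text_address(uintptr_t addr)
{
	return addr >= stext && addr < etext;
}

static bool module_text_address(uintptr_t addr)
{
	for (struct module_model *m = modules; m; m = m->next)
		if (addr >= m->text_start && addr < m->text_end)
			return true;
	return false;
}

/* The overall shape of kernel_text_address(): core text OR module
 * text. Freed init sections fall outside both, so such a predicate
 * would reject them automatically. */
static bool text_address(uintptr_t addr)
{
	return core_text_address(addr) || module_text_address(addr);
}
```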
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-09-18 8:52 ` Christophe LEROY
@ 2018-09-18 11:35 ` Michal Suchánek
0 siblings, 0 replies; 12+ messages in thread
From: Michal Suchánek @ 2018-09-18 11:35 UTC (permalink / raw)
To: Christophe LEROY
Cc: Nicholas Piggin, Michael Neuling, linuxppc-dev, Haren Myneni
On Tue, 18 Sep 2018 10:52:09 +0200
Christophe LEROY <christophe.leroy@c-s.fr> wrote:
> 
> 
> On 14/09/2018 at 06:22, Nicholas Piggin wrote:
> > On Fri, 14 Sep 2018 11:14:11 +1000
> > Michael Neuling <mikey@neuling.org> wrote:
> >
> >> This stops us from doing code patching in init sections after
> >> they've been freed.
> >>
> >> In this chain:
> >> kvm_guest_init() ->
> >> kvm_use_magic_page() ->
> >> fault_in_pages_readable() ->
> >> __get_user() ->
> >> __get_user_nocheck() ->
> >> barrier_nospec();
> >>
> >> We have a code patching location at barrier_nospec() and
> >> kvm_guest_init() is an init function. This whole chain gets
> >> inlined, so when we free the init section (hence
> >> kvm_guest_init()), this code goes away and hence should no longer
> >> be patched.
> >>
> >> We've seen this as userspace memory corruption when using a memory
> >> checker while doing partition migration testing on PowerVM (this
> >> starts the code patching post migration via
> >> /sys/kernel/mobility/migration). In theory, it could also happen
> >> when using /sys/kernel/debug/powerpc/barrier_nospec.
> >>
> >> cc: stable@vger.kernel.org # 4.13+
> >> Signed-off-by: Michael Neuling <mikey@neuling.org>
> >>
> >> ---
> >> For stable I've marked this as v4.13+ since that's when we
> >> refactored code-patching.c but it could go back even further than
> >> that. In reality though, I think we can only hit this since the
> >> first spectre/meltdown changes.
> >>
> >> v4:
> >> Feedback from Christophe Leroy:
> >> - init_mem_free -> init_mem_is_free
> >> - prlog %lx -> %px
> >>
> >> v3:
> >> Add init_mem_free flag to avoid potential race.
> >> Feedback from Christophe Leroy:
> >> - use init_section_contains()
> >> - change order of init test for performance
> >> - use pr_debug()
> >> - remove blank line
> >>
> >> v2:
> >> Print when we skip an address
> >> ---
> >> arch/powerpc/include/asm/setup.h | 1 +
> >> arch/powerpc/lib/code-patching.c | 6 ++++++
> >> arch/powerpc/mm/mem.c | 2 ++
> >> 3 files changed, 9 insertions(+)
> >>
> >> diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
> >> index 1a951b0046..1fffbba8d6 100644
> >> --- a/arch/powerpc/include/asm/setup.h
> >> +++ b/arch/powerpc/include/asm/setup.h
> >> @@ -9,6 +9,7 @@ extern void ppc_printk_progress(char *s, unsigned short hex);
> >>
> >> extern unsigned int rtas_data;
> >> extern unsigned long long memory_limit;
> >> +extern bool init_mem_is_free;
> >> extern unsigned long klimit;
> >> extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
> >>
> >> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> >> index 850f3b8f4d..6ae2777c22 100644
> >> --- a/arch/powerpc/lib/code-patching.c
> >> +++ b/arch/powerpc/lib/code-patching.c
> >> @@ -28,6 +28,12 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
> >> {
> >> 	int err;
> >>
> >> +	/* Make sure we aren't patching a freed init section */
> >> +	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> >> +		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
> >> +		return 0;
> >> +	}
> >
> > What we should do is a whitelist, make sure it's only patching the
> > sections we want it to.
> >
> > That's a bigger job when you consider modules and things too though,
> > so this looks good for now. Thanks,
> 
> What about using kernel_text_address() for it then? It also handles
> modules; is it more complicated than that?
Modules are patched separately so should not need to be excluded here.
There is a different problem with modules: when the mitigation type
changes the modules are not re-patched with the new settings.
Thanks
Michal
* Re: [v4] powerpc: Avoid code patching freed init sections
2018-09-14 1:14 [PATCH v4] powerpc: Avoid code patching freed init sections Michael Neuling
2018-09-14 4:22 ` Nicholas Piggin
2018-09-14 5:32 ` Christophe LEROY
@ 2018-09-21 11:59 ` Michael Ellerman
2018-10-01 11:25 ` Christophe LEROY
2 siblings, 1 reply; 12+ messages in thread
From: Michael Ellerman @ 2018-09-21 11:59 UTC (permalink / raw)
To: Michael Neuling
Cc: mikey, Nicholas Piggin, Michal Suchánek, linuxppc-dev, Haren Myneni
On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
>
> In this chain:
> kvm_guest_init() ->
> kvm_use_magic_page() ->
> fault_in_pages_readable() ->
> __get_user() ->
> __get_user_nocheck() ->
> barrier_nospec();
>
> We have a code patching location at barrier_nospec() and
> kvm_guest_init() is an init function. This whole chain gets inlined,
> so when we free the init section (hence kvm_guest_init()), this code
> goes away and hence should no longer be patched.
>
> We've seen this as userspace memory corruption when using a memory
> checker while doing partition migration testing on PowerVM (this
> starts the code patching post migration via
> /sys/kernel/mobility/migration). In theory, it could also happen when
> using /sys/kernel/debug/powerpc/barrier_nospec.
>
> cc: stable@vger.kernel.org # 4.13+
> Signed-off-by: Michael Neuling <mikey@neuling.org>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Applied to powerpc fixes, thanks.
https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e
cheers
* Re: [v4] powerpc: Avoid code patching freed init sections
2018-09-21 11:59 ` [v4] " Michael Ellerman
@ 2018-10-01 11:25 ` Christophe LEROY
2018-10-01 22:57 ` Michael Neuling
0 siblings, 1 reply; 12+ messages in thread
From: Christophe LEROY @ 2018-10-01 11:25 UTC (permalink / raw)
To: Michael Ellerman, Michael Neuling
Cc: Michal Suchánek, linuxppc-dev, Nicholas Piggin, Haren Myneni
On 21/09/2018 at 13:59, Michael Ellerman wrote:
> On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
>> This stops us from doing code patching in init sections after they've
>> been freed.
>>
>> In this chain:
>> kvm_guest_init() ->
>> kvm_use_magic_page() ->
>> fault_in_pages_readable() ->
>> __get_user() ->
>> __get_user_nocheck() ->
>> barrier_nospec();
>>
>> We have a code patching location at barrier_nospec() and
>> kvm_guest_init() is an init function. This whole chain gets inlined,
>> so when we free the init section (hence kvm_guest_init()), this code
>> goes away and hence should no longer be patched.
>>
>> We've seen this as userspace memory corruption when using a memory
>> checker while doing partition migration testing on PowerVM (this
>> starts the code patching post migration via
>> /sys/kernel/mobility/migration). In theory, it could also happen when
>> using /sys/kernel/debug/powerpc/barrier_nospec.
>>
>> cc: stable@vger.kernel.org # 4.13+
>> Signed-off-by: Michael Neuling <mikey@neuling.org>
>> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>
> Applied to powerpc fixes, thanks.
>
> https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e
>
This patch breaks booting on my MPC83xx board (book3s/32) very early
(before the console is active), provoking a restart.
U-Boot reports a checkstop reset at that restart.
Reverting this commit fixes the issue.
The following patch fixes the issue as well, but I think it is not the
best solution. I still think the test should be in patch_instruction()
instead of in __patch_instruction(); see my comment on v2.
Christophe
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 6ae2777..6192fda 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -29,7 +29,7 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
 	int err;
 
 	/* Make sure we aren't patching a freed init section */
-	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+	if (*PTRRELOC(&init_mem_is_free) && init_section_contains(exec_addr, 4)) {
 		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
 		return 0;
 	}
Christophe
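The idea behind PTRRELOC() can be demonstrated in isolation: before relocation, a static variable is not yet at its link-time address, so it must be read through an adjusted pointer. A toy model in plain C follows; the buffer, the offset value, and the sign convention are all invented for the demo and do not match the powerpc implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Pretend memory: the variable was linked at offset 16 but, early in
 * "boot", actually lives at offset 0. */
static unsigned char mem[32];
static long reloc_offset = 16;	/* link-time minus run-time address */

/* Same spirit as powerpc's PTRRELOC(): rebase a link-time pointer
 * onto the address the object currently occupies. */
#define PTRRELOC(p) ((unsigned char *)((uintptr_t)(p) - reloc_offset))

static bool read_flag_early(void)
{
	unsigned char *linked_addr = mem + 16;	/* what the compiler emits */
	return *PTRRELOC(linked_addr) != 0;	/* relocated access */
}
```

Reading through the raw link-time pointer would hit stale bytes, which is exactly why dereferencing init_mem_is_free directly crashed the early-boot patching path.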
* Re: [v4] powerpc: Avoid code patching freed init sections
2018-10-01 11:25 ` Christophe LEROY
@ 2018-10-01 22:57 ` Michael Neuling
0 siblings, 0 replies; 12+ messages in thread
From: Michael Neuling @ 2018-10-01 22:57 UTC (permalink / raw)
To: Christophe LEROY, Michael Ellerman
Cc: Michal Suchánek, linuxppc-dev, Nicholas Piggin, Haren Myneni
On Mon, 2018-10-01 at 13:25 +0200, Christophe LEROY wrote:
>
> On 21/09/2018 at 13:59, Michael Ellerman wrote:
> > On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> > > This stops us from doing code patching in init sections after they've
> > > been freed.
> > >
> > > In this chain:
> > > kvm_guest_init() ->
> > > kvm_use_magic_page() ->
> > > fault_in_pages_readable() ->
> > > __get_user() ->
> > > __get_user_nocheck() ->
> > > barrier_nospec();
> > >
> > > We have a code patching location at barrier_nospec() and
> > > kvm_guest_init() is an init function. This whole chain gets inlined,
> > > so when we free the init section (hence kvm_guest_init()), this code
> > > goes away and hence should no longer be patched.
> > >
> > > We've seen this as userspace memory corruption when using a memory
> > > checker while doing partition migration testing on PowerVM (this
> > > starts the code patching post migration via
> > > /sys/kernel/mobility/migration). In theory, it could also happen when
> > > using /sys/kernel/debug/powerpc/barrier_nospec.
> > >
> > > cc: stable@vger.kernel.org # 4.13+
> > > Signed-off-by: Michael Neuling <mikey@neuling.org>
> > > Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> > > Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> >
> > Applied to powerpc fixes, thanks.
> >
> > https://git.kernel.org/powerpc/c/51c3c62b58b357e8d35e4cc32f7b4e
> >
>
> This patch breaks booting on my MPC83xx board (book3s/32) very early
> (before console is active), provoking restart.
> u-boot reports a checkstop reset at restart.
>
> Reverting this commit fixes the issue.
>
> The following patch fixes the issue as well, but I think it is not the
> best solution. I still think the test should be in patch_instruction()
> instead of being in __patch_instruction(), see my comment on v2
Arrh, sorry.
Can you write this up as a real patch with a Signed-off-by so mpe can take it?
Mikey
>
> Christophe
>
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 6ae2777..6192fda 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -29,7 +29,7 @@ static int __patch_instruction(unsigned int *exec_addr, unsigned int instr,
>  	int err;
>  
>  	/* Make sure we aren't patching a freed init section */
> -	if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
> +	if (*PTRRELOC(&init_mem_is_free) && init_section_contains(exec_addr, 4)) {
>  		pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
>  		return 0;
>  	}
>
>
> Christophe
>
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-10-03 3:20 ` Michael Ellerman
@ 2018-10-03 5:42 ` Christophe LEROY
0 siblings, 0 replies; 12+ messages in thread
From: Christophe LEROY @ 2018-10-03 5:42 UTC (permalink / raw)
To: Michael Ellerman, Andreas Schwab, Michael Neuling
Cc: Michal Suchánek, linuxppc-dev, Nicholas Piggin, Haren Myneni
On 03/10/2018 at 05:20, Michael Ellerman wrote:
> Andreas Schwab <schwab@linux-m68k.org> writes:
>
>> On Sep 14 2018, Michael Neuling <mikey@neuling.org> wrote:
>>
>>> This stops us from doing code patching in init sections after they've
>>> been freed.
>>
>> This breaks booting on PowerBook6,7, crashing very early.
>
> Crud, sorry.
>
> My CI setup tests with the mac99 qemu model, but that boots happily, not
> sure why.
Maybe mac99 doesn't use relocation?
The issue is that during early boot, 'init_mem_is_free' is not yet at
its definitive address.
I sent a fix for it: https://patchwork.ozlabs.org/patch/977195/
Christophe
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-10-02 21:35 ` [PATCH v4] " Andreas Schwab
2018-10-03 1:57 ` Michael Neuling
@ 2018-10-03 3:20 ` Michael Ellerman
2018-10-03 5:42 ` Christophe LEROY
1 sibling, 1 reply; 12+ messages in thread
From: Michael Ellerman @ 2018-10-03 3:20 UTC (permalink / raw)
To: Andreas Schwab, Michael Neuling
Cc: Michal Suchánek, linuxppc-dev, Haren Myneni, Nicholas Piggin
Andreas Schwab <schwab@linux-m68k.org> writes:
> On Sep 14 2018, Michael Neuling <mikey@neuling.org> wrote:
>
>> This stops us from doing code patching in init sections after they've
>> been freed.
>
> This breaks booting on PowerBook6,7, crashing very early.
Crud, sorry.
My CI setup tests with the mac99 qemu model, but that boots happily, not
sure why.
cheers
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
2018-10-02 21:35 ` [PATCH v4] " Andreas Schwab
@ 2018-10-03 1:57 ` Michael Neuling
2018-10-03 3:20 ` Michael Ellerman
1 sibling, 0 replies; 12+ messages in thread
From: Michael Neuling @ 2018-10-03 1:57 UTC (permalink / raw)
To: Andreas Schwab, Christophe Leroy
Cc: Michal Suchánek, linuxppc-dev, Haren Myneni, Nicholas Piggin
On Tue, 2018-10-02 at 23:35 +0200, Andreas Schwab wrote:
> On Sep 14 2018, Michael Neuling <mikey@neuling.org> wrote:
>
> > This stops us from doing code patching in init sections after they've
> > been freed.
>
> This breaks booting on PowerBook6,7, crashing very early.
Sorry, can you try:
http://patchwork.ozlabs.org/patch/977195/
Mikey
* Re: [PATCH v4] powerpc: Avoid code patching freed init sections
[not found] <20180914011411.3184-1-mikey__14553.8904158913$1536887645$gmane$org@neuling.org>
@ 2018-10-02 21:35 ` Andreas Schwab
2018-10-03 1:57 ` Michael Neuling
2018-10-03 3:20 ` Michael Ellerman
0 siblings, 2 replies; 12+ messages in thread
From: Andreas Schwab @ 2018-10-02 21:35 UTC (permalink / raw)
To: Michael Neuling
Cc: Michal Suchánek, linuxppc-dev, Haren Myneni, Nicholas Piggin
On Sep 14 2018, Michael Neuling <mikey@neuling.org> wrote:
> This stops us from doing code patching in init sections after they've
> been freed.
This breaks booting on PowerBook6,7, crashing very early.
Andreas.
--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1
"And now for something completely different."
Thread overview: 12+ messages
2018-09-14 1:14 [PATCH v4] powerpc: Avoid code patching freed init sections Michael Neuling
2018-09-14 4:22 ` Nicholas Piggin
2018-09-18 8:52 ` Christophe LEROY
2018-09-18 11:35 ` Michal Suchánek
2018-09-14 5:32 ` Christophe LEROY
2018-09-21 11:59 ` [v4] " Michael Ellerman
2018-10-01 11:25 ` Christophe LEROY
2018-10-01 22:57 ` Michael Neuling
[not found] <20180914011411.3184-1-mikey__14553.8904158913$1536887645$gmane$org@neuling.org>
2018-10-02 21:35 ` [PATCH v4] " Andreas Schwab
2018-10-03 1:57 ` Michael Neuling
2018-10-03 3:20 ` Michael Ellerman
2018-10-03 5:42 ` Christophe LEROY