* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
@ 2015-03-19 15:43 Will Deacon
  2015-03-19 16:01 ` Ard Biesheuvel
  2015-03-23 13:22 ` Jon Medhurst (Tixy)
  0 siblings, 2 replies; 7+ messages in thread
From: Will Deacon @ 2015-03-19 15:43 UTC
  To: linux-arm-kernel

init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
contains kernel mappings) and is used as the active_mm for the idle
thread.

When restoring the pgd after an EFI call, we write current->active_mm
into TTBR0. If the current task is actually the idle thread (e.g. when
initialising the EFI RTC before entering userspace), then the TLB can
erroneously populate itself with junk global entries as a result of
speculative table walks.

When we do eventually return to userspace, the task can end up hitting
these junk mappings leading to lockups, corruption or crashes.

This patch fixes the problem in the same way as the CPU suspend code by
ensuring that we never switch to the init_mm in efi_set_pgd and instead
point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
BUG if we get passed swapper_pg_dir.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f3cdfd239da5 ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
Signed-off-by: Will Deacon <will.deacon@arm.com>
---

This patch gets armhf Debian booting again on my Juno (I guess 64-bit
userspace tends to use virtual addresses that are high enough to avoid
hitting the junk TLB entries!).
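
For reference, cpu_set_reserved_ttbr0(), which the suspend code already
uses and which efi_set_pgd now uses for init_mm, just points TTBR0_EL1
at the empty zero page, whose entries are all invalid, so speculative
walks through TTBR0 can't allocate anything. Roughly (paraphrased
sketch, not part of this diff):

	/* Paraphrased from arch/arm64/include/asm/mmu_context.h */
	static inline void cpu_set_reserved_ttbr0(void)
	{
		unsigned long ttbr = page_to_phys(empty_zero_page);

		/* empty_zero_page has no valid entries, so speculative
		 * walks through TTBR0 can only come back invalid */
		asm(
		"	msr	ttbr0_el1, %0			// set TTBR0\n"
		"	isb"
		:
		: "r" (ttbr));
	}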

 arch/arm64/include/asm/proc-fns.h | 6 +++++-
 arch/arm64/kernel/efi.c           | 6 +++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
index 9a8fd84f8fb2..941c375616e2 100644
--- a/arch/arm64/include/asm/proc-fns.h
+++ b/arch/arm64/include/asm/proc-fns.h
@@ -39,7 +39,11 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
 
 #include <asm/memory.h>
 
-#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
+#define cpu_switch_mm(pgd,mm)				\
+do {							\
+	BUG_ON(pgd == swapper_pg_dir);			\
+	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
+} while (0)
 
 #define cpu_get_pgd()					\
 ({							\
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 2b8d70164428..ab21e0d58278 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -337,7 +337,11 @@ core_initcall(arm64_dmi_init);
 
 static void efi_set_pgd(struct mm_struct *mm)
 {
-	cpu_switch_mm(mm->pgd, mm);
+	if (mm == &init_mm)
+		cpu_set_reserved_ttbr0();
+	else
+		cpu_switch_mm(mm->pgd, mm);
+
 	flush_tlb_all();
 	if (icache_is_aivivt())
 		__flush_icache_all();
-- 
2.1.4


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-19 15:43 [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm Will Deacon
@ 2015-03-19 16:01 ` Ard Biesheuvel
  2015-03-23 13:22 ` Jon Medhurst (Tixy)
  1 sibling, 0 replies; 7+ messages in thread
From: Ard Biesheuvel @ 2015-03-19 16:01 UTC
  To: linux-arm-kernel

On 19 March 2015 at 16:43, Will Deacon <will.deacon@arm.com> wrote:
> init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
> contains kernel mappings) and is used as the active_mm for the idle
> thread.
>
> When restoring the pgd after an EFI call, we write current->active_mm
> into TTBR0. If the current task is actually the idle thread (e.g. when
> initialising the EFI RTC before entering userspace), then the TLB can
> erroneously populate itself with junk global entries as a result of
> speculative table walks.
>
> When we do eventually return to userspace, the task can end up hitting
> these junk mappings leading to lockups, corruption or crashes.
>
> This patch fixes the problem in the same way as the CPU suspend code by
> ensuring that we never switch to the init_mm in efi_set_pgd and instead
> point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
> BUG if we get passed swapper_pg_dir.
>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Fixes: f3cdfd239da5 ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---
>
> This patch gets armhf Debian booting again on my Juno (I guess 64-bit
> userspace tends to use virtual addresses that are high enough to avoid
> hitting the junk TLB entries!).
>

I guess that also explains the lack of bug reports: quite a few people
have been running with these patches over the past months, but nobody
noticed, AFAIK.

>  arch/arm64/include/asm/proc-fns.h | 6 +++++-
>  arch/arm64/kernel/efi.c           | 6 +++++-
>  2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> index 9a8fd84f8fb2..941c375616e2 100644
> --- a/arch/arm64/include/asm/proc-fns.h
> +++ b/arch/arm64/include/asm/proc-fns.h
> @@ -39,7 +39,11 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
>
>  #include <asm/memory.h>
>
> -#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
> +#define cpu_switch_mm(pgd,mm)                          \
> +do {                                                   \
> +       BUG_ON(pgd == swapper_pg_dir);                  \
> +       cpu_do_switch_mm(virt_to_phys(pgd),mm);         \
> +} while (0)
>
>  #define cpu_get_pgd()                                  \
>  ({                                                     \
> diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
> index 2b8d70164428..ab21e0d58278 100644
> --- a/arch/arm64/kernel/efi.c
> +++ b/arch/arm64/kernel/efi.c
> @@ -337,7 +337,11 @@ core_initcall(arm64_dmi_init);
>
>  static void efi_set_pgd(struct mm_struct *mm)
>  {
> -       cpu_switch_mm(mm->pgd, mm);
> +       if (mm == &init_mm)
> +               cpu_set_reserved_ttbr0();
> +       else
> +               cpu_switch_mm(mm->pgd, mm);
> +
>         flush_tlb_all();
>         if (icache_is_aivivt())
>                 __flush_icache_all();
> --
> 2.1.4
>


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-19 15:43 [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm Will Deacon
  2015-03-19 16:01 ` Ard Biesheuvel
@ 2015-03-23 13:22 ` Jon Medhurst (Tixy)
  2015-03-23 15:44   ` Catalin Marinas
  1 sibling, 1 reply; 7+ messages in thread
From: Jon Medhurst (Tixy) @ 2015-03-23 13:22 UTC
  To: linux-arm-kernel

On Thu, 2015-03-19 at 15:43 +0000, Will Deacon wrote:
> init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
> contains kernel mappings) and is used as the active_mm for the idle
> thread.
> 
> When restoring the pgd after an EFI call, we write current->active_mm
> into TTBR0. If the current task is actually the idle thread (e.g. when
> initialising the EFI RTC before entering userspace), then the TLB can
> erroneously populate itself with junk global entries as a result of
> speculative table walks.
> 
> When we do eventually return to userspace, the task can end up hitting
> these junk mappings leading to lockups, corruption or crashes.
> 
> This patch fixes the problem in the same way as the CPU suspend code by
> ensuring that we never switch to the init_mm in efi_set_pgd and instead
> point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
> BUG if we get passed swapper_pg_dir.

Which seems to happen in idle_task_exit() when you offline a cpu. This
patch is now in 4.0-rc5 and I get ...

# echo 0 > cpu1/online
[   51.750107] BUG: failure at ./arch/arm64/include/asm/mmu_context.h:74/switch_new_context()!
[   51.750111] Kernel panic - not syncing: BUG!
[   51.750116] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.0.0-rc5+ #3
[   51.750118] Hardware name: ARM Juno development board (r0) (DT)
[   51.750120] Call trace:
[   51.750131] [<ffffffc00008a4cc>] dump_backtrace+0x0/0x138
[   51.750136] [<ffffffc00008a620>] show_stack+0x1c/0x28
[   51.750143] [<ffffffc0006f8ed4>] dump_stack+0x80/0xc4
[   51.750146] [<ffffffc0006f58b8>] panic+0xe8/0x220
[   51.750151] [<ffffffc0000c70bc>] idle_task_exit+0x220/0x274
[   51.750155] [<ffffffc0000919bc>] cpu_die+0x20/0x7c
[   51.750159] [<ffffffc0000869fc>] arch_cpu_idle_dead+0x10/0x1c
[   51.750163] [<ffffffc0000d4de4>] cpu_startup_entry+0x2c4/0x36c
[   51.750167] [<ffffffc00009185c>] secondary_start_kernel+0x12c/0x13c
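
The path in question is roughly this (a sketch paraphrasing
kernel/sched/core.c of this era from memory, trimmed):

	void idle_task_exit(void)
	{
		struct mm_struct *mm = current->active_mm;

		BUG_ON(cpu_online(smp_processor_id()));

		/* The dying CPU's idle task drops its borrowed mm; on
		 * arm64 this switch to init_mm ends up handing
		 * swapper_pg_dir to cpu_switch_mm() and trips the new
		 * BUG_ON(). */
		if (mm != &init_mm)
			switch_mm(mm, &init_mm, current);
		mmdrop(mm);
	}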

There seems to be quite a number of uses of cpu_switch_mm in the kernel.
I don't know if they have bugs which need fixing, or if it makes more
sense to replace the
	BUG_ON(pgd == swapper_pg_dir);
with 
	if (pgd != swapper_pg_dir)
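
i.e. something like this (untested, just spelling out the idea):

	#define cpu_switch_mm(pgd,mm)				\
	do {							\
		if (pgd != swapper_pg_dir)			\
			cpu_do_switch_mm(virt_to_phys(pgd),mm);	\
	} while (0)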


> 
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Fixes: f3cdfd239da5 ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
> 
> This patch gets armhf Debian booting again on my Juno (I guess 64-bit
> userspace tends to use virtual addresses that are high enough to avoid
> hitting the junk TLB entries!).
> 
>  arch/arm64/include/asm/proc-fns.h | 6 +++++-
>  arch/arm64/kernel/efi.c           | 6 +++++-
>  2 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
> index 9a8fd84f8fb2..941c375616e2 100644
> --- a/arch/arm64/include/asm/proc-fns.h
> +++ b/arch/arm64/include/asm/proc-fns.h
> @@ -39,7 +39,11 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
>  
>  #include <asm/memory.h>
>  
> -#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
> +#define cpu_switch_mm(pgd,mm)				\
> +do {							\
> +	BUG_ON(pgd == swapper_pg_dir);			\
> +	cpu_do_switch_mm(virt_to_phys(pgd),mm);		\
> +} while (0)
>  
>  #define cpu_get_pgd()					\
>  ({							\
> diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
> index 2b8d70164428..ab21e0d58278 100644
> --- a/arch/arm64/kernel/efi.c
> +++ b/arch/arm64/kernel/efi.c
> @@ -337,7 +337,11 @@ core_initcall(arm64_dmi_init);
>  
>  static void efi_set_pgd(struct mm_struct *mm)
>  {
> -	cpu_switch_mm(mm->pgd, mm);
> +	if (mm == &init_mm)
> +		cpu_set_reserved_ttbr0();
> +	else
> +		cpu_switch_mm(mm->pgd, mm);
> +
>  	flush_tlb_all();
>  	if (icache_is_aivivt())
>  		__flush_icache_all();


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-23 13:22 ` Jon Medhurst (Tixy)
@ 2015-03-23 15:44   ` Catalin Marinas
  2015-03-23 17:22     ` Jon Medhurst (Tixy)
  0 siblings, 1 reply; 7+ messages in thread
From: Catalin Marinas @ 2015-03-23 15:44 UTC
  To: linux-arm-kernel

On Mon, Mar 23, 2015 at 01:22:25PM +0000, Jon Medhurst (Tixy) wrote:
> On Thu, 2015-03-19 at 15:43 +0000, Will Deacon wrote:
> > init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
> > contains kernel mappings) and is used as the active_mm for the idle
> > thread.
> > 
> > When restoring the pgd after an EFI call, we write current->active_mm
> > into TTBR0. If the current task is actually the idle thread (e.g. when
> > initialising the EFI RTC before entering userspace), then the TLB can
> > erroneously populate itself with junk global entries as a result of
> > speculative table walks.
> > 
> > When we do eventually return to userspace, the task can end up hitting
> > these junk mappings leading to lockups, corruption or crashes.
> > 
> > This patch fixes the problem in the same way as the CPU suspend code by
> > ensuring that we never switch to the init_mm in efi_set_pgd and instead
> > point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
> > BUG if we get passed swapper_pg_dir.
> 
> Which seems to happen in idle_task_exit() when you offline a cpu. This
> patch is now in 4.0-rc5 and I get ...
> 
> # echo 0 > cpu1/online
> [   51.750107] BUG: failure at ./arch/arm64/include/asm/mmu_context.h:74/switch_new_context()!
> [   51.750111] Kernel panic - not syncing: BUG!
> [   51.750116] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.0.0-rc5+ #3
> [   51.750118] Hardware name: ARM Juno development board (r0) (DT)
> [   51.750120] Call trace:
> [   51.750131] [<ffffffc00008a4cc>] dump_backtrace+0x0/0x138
> [   51.750136] [<ffffffc00008a620>] show_stack+0x1c/0x28
> [   51.750143] [<ffffffc0006f8ed4>] dump_stack+0x80/0xc4
> [   51.750146] [<ffffffc0006f58b8>] panic+0xe8/0x220
> [   51.750151] [<ffffffc0000c70bc>] idle_task_exit+0x220/0x274
> [   51.750155] [<ffffffc0000919bc>] cpu_die+0x20/0x7c
> [   51.750159] [<ffffffc0000869fc>] arch_cpu_idle_dead+0x10/0x1c
> [   51.750163] [<ffffffc0000d4de4>] cpu_startup_entry+0x2c4/0x36c
> [   51.750167] [<ffffffc00009185c>] secondary_start_kernel+0x12c/0x13c

It's good that we now trap on this, otherwise we wouldn't have noticed
any issue for a long time.

> There seems to be quite a number of uses of cpu_switch_mm in the kernel.
> I don't know if they have bugs which need fixing, or if it makes more
> sense to replace the
> 	BUG_ON(pgd == swapper_pg_dir);
> with 
> 	if (pgd != swapper_pg_dir)

That's not always correct, since the previous pgd may be freed, so we
need to make sure we move away from it. I think for stable we can make
do with the patch below. We could clean up the cpu_switch_mm() uses
throughout arch/arm64 and set the reserved ttbr0, but we only have two
at the moment (__cpu_suspend and efi_set_pgd).

-----------------8<-------------------------

From 5d9e3540b6480558528612dd3672543fa8ab3528 Mon Sep 17 00:00:00 2001
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Mon, 23 Mar 2015 15:06:50 +0000
Subject: [PATCH] arm64: Use the reserved TTBR0 if context switching to the
 init_mm

The idle_task_exit() function may call switch_mm() with next ==
&init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
this patch simply sets the reserved TTBR0.

Cc: <stable@vger.kernel.org>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/mmu_context.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a9eee33dfa62..101a42bde728 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -151,6 +151,15 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 {
 	unsigned int cpu = smp_processor_id();
 
+	/*
+	 * init_mm.pgd does not contain any user mappings and it is always
+	 * active for kernel addresses in TTBR1. Just set the reserved TTBR0.
+	 */
+	if (next == &init_mm) {
+		cpu_set_reserved_ttbr0();
+		return;
+	}
+
 	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
 		check_and_switch_context(next, tsk);
 }


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-23 15:44   ` Catalin Marinas
@ 2015-03-23 17:22     ` Jon Medhurst (Tixy)
  2015-03-23 17:50       ` Catalin Marinas
  0 siblings, 1 reply; 7+ messages in thread
From: Jon Medhurst (Tixy) @ 2015-03-23 17:22 UTC
  To: linux-arm-kernel

On Mon, 2015-03-23 at 15:44 +0000, Catalin Marinas wrote:
[...]
> I think for stable, we can do
> with the patch below. We could clean up the cpu_switch_mm() uses through
> the arch/arm64 and set the reserved ttbr0 but we only have two at the
> moment (__cpu_suspend and efi_set_pgd).
> 
> -----------------8<-------------------------
> 
> From 5d9e3540b6480558528612dd3672543fa8ab3528 Mon Sep 17 00:00:00 2001
> From: Catalin Marinas <catalin.marinas@arm.com>
> Date: Mon, 23 Mar 2015 15:06:50 +0000
> Subject: [PATCH] arm64: Use the reserved TTBR0 if context switching to the
>  init_mm
> 
> The idle_task_exit() function may call switch_mm() with next ==
> &init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
> this patch simply sets the reserved TTBR0.
> 
> Cc: <stable@vger.kernel.org>
> Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

That unsurprisingly fixes the BUG_ON I was seeing on Juno, so...
Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org>

One question, is bypassing setting the mm_cpumask and context.id for
init_mm OK? I'm not familiar with the code but had a quick look, and it
looks like they are just used for ASID management, in which case I
assume everything is OK - ASIDs only being relevant for user mappings in
ttbr0?
 
> ---
>  arch/arm64/include/asm/mmu_context.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index a9eee33dfa62..101a42bde728 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -151,6 +151,15 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  {
>  	unsigned int cpu = smp_processor_id();
>  
> +	/*
> +	 * init_mm.pgd does not contain any user mappings and it is always
> +	 * active for kernel addresses in TTBR1. Just set the reserved TTBR0.
> +	 */
> +	if (next == &init_mm) {
> +		cpu_set_reserved_ttbr0();
> +		return;
> +	}
> +
>  	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
>  		check_and_switch_context(next, tsk);
>  }


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-23 17:22     ` Jon Medhurst (Tixy)
@ 2015-03-23 17:50       ` Catalin Marinas
  2015-03-23 18:00         ` Will Deacon
  0 siblings, 1 reply; 7+ messages in thread
From: Catalin Marinas @ 2015-03-23 17:50 UTC
  To: linux-arm-kernel

On Mon, Mar 23, 2015 at 05:22:57PM +0000, Jon Medhurst (Tixy) wrote:
> On Mon, 2015-03-23 at 15:44 +0000, Catalin Marinas wrote:
> > From 5d9e3540b6480558528612dd3672543fa8ab3528 Mon Sep 17 00:00:00 2001
> > From: Catalin Marinas <catalin.marinas@arm.com>
> > Date: Mon, 23 Mar 2015 15:06:50 +0000
> > Subject: [PATCH] arm64: Use the reserved TTBR0 if context switching to the
> >  init_mm
> > 
> > The idle_task_exit() function may call switch_mm() with next ==
> > &init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
> > this patch simply sets the reserved TTBR0.
> > 
> > Cc: <stable@vger.kernel.org>
> > Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> That unsurprisingly fixes the BUG_ON I was seeing on Juno, so...
> Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org>

Thanks.

> One question, is bypassing setting the mm_cpumask and context.id for
> init_mm OK? I'm not familiar with the code but had a quick look, and it
> looks like they are just used for ASID management, in which case I
> assume everything is OK - ASIDs only being relevant for user mappings in
> ttbr0?

That's my thinking as well. Will asked me the same question, so I'll let
him confirm if he's seeing anything wrong.

-- 
Catalin


* [PATCH] arm64: efi: don't restore TTBR0 if active_mm points at init_mm
  2015-03-23 17:50       ` Catalin Marinas
@ 2015-03-23 18:00         ` Will Deacon
  0 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2015-03-23 18:00 UTC
  To: linux-arm-kernel

On Mon, Mar 23, 2015 at 05:50:55PM +0000, Catalin Marinas wrote:
> On Mon, Mar 23, 2015 at 05:22:57PM +0000, Jon Medhurst (Tixy) wrote:
> > On Mon, 2015-03-23 at 15:44 +0000, Catalin Marinas wrote:
> > > From 5d9e3540b6480558528612dd3672543fa8ab3528 Mon Sep 17 00:00:00 2001
> > > From: Catalin Marinas <catalin.marinas@arm.com>
> > > Date: Mon, 23 Mar 2015 15:06:50 +0000
> > > Subject: [PATCH] arm64: Use the reserved TTBR0 if context switching to the
> > >  init_mm
> > > 
> > > The idle_task_exit() function may call switch_mm() with next ==
> > > &init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
> > > this patch simply sets the reserved TTBR0.
> > > 
> > > Cc: <stable@vger.kernel.org>
> > > Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
> > > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > 
> > That unsurprisingly fixes the BUG_ON I was seeing on Juno, so...
> > Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org>
> 
> Thanks.
> 
> > One question, is bypassing setting the mm_cpumask and context.id for
> > init_mm OK? I'm not familiar with the code but had a quick look, and it
> > looks like they are just used for ASID management, in which case I
> > assume everything is OK - ASIDs only being relevant for user mappings in
> > ttbr0?
> 
> That's my thinking as well. Will asked me the same question, so I'll let
> him confirm if he's seeing anything wrong.

I can't seem to break it. The ASID state should all be up to date with the
previous mm, so this should be harmless...

Will

