* [PATCH 0/3] arm64: Assorted MMU-on fixes
@ 2021-02-24 9:37 Marc Zyngier
2021-02-24 9:37 ` [PATCH 1/3] arm64: VHE: Enable EL2 MMU from the idmap Marc Zyngier
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Marc Zyngier @ 2021-02-24 9:37 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Mark Rutland, Will Deacon, Guillaume Tucker, Catalin Marinas,
kernel-team, Ard Biesheuvel
Guillaume (and his mate the KernelCI bot) recently reported that the
new VHE enablement code lamentably fails when 64k pages are enabled.
And fair enough, the code was broken to the point that it really
should never have booted in the first place...
The first patch fixes what amounts to a total brain fart.
The next two patches address a potential issue noted by Will, where we
may be missing some ISBs when invalidating TLBs, right before enabling
the MMU (I've decided to have separate patches for the two instances
so that the first one can be easily backported if required).
I've tested this on my local VHE-capable machine as well as on the FVP
model (with both 4k and 64k pages).
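[Editorial note: the barrier sequence these patches are about can be sketched as follows. This is an illustrative arm64 assembly fragment under the maintainers' stated assumptions, not the verbatim kernel code; `x19` standing in for the saved SCTLR value is hypothetical.]

```
	// Invalidate stale EL1 TLB entries before turning the MMU on.
	tlbi	vmalle1			// issue the invalidation
	dsb	nsh			// wait for it to complete on this PE
	isb				// synchronise the instruction stream
					// (this is the barrier patches 2/3 add)
	msr	sctlr_el1, x19		// now safe to enable the MMU
	isb				// fetches use the new translation regime
```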
Marc Zyngier (3):
arm64: VHE: Enable EL2 MMU from the idmap
arm64: Add missing ISB after invalidating TLB in __primary_switch
arm64: Add missing ISB after invalidating TLB in enter_vhe
arch/arm64/kernel/head.S | 1 +
arch/arm64/kernel/hyp-stub.S | 40 ++++++++++++++++++++++++------------
2 files changed, 28 insertions(+), 13 deletions(-)
--
2.29.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH 1/3] arm64: VHE: Enable EL2 MMU from the idmap
2021-02-24 9:37 [PATCH 0/3] arm64: Assorted MMU-on fixes Marc Zyngier
@ 2021-02-24 9:37 ` Marc Zyngier
2021-02-24 9:37 ` [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch Marc Zyngier
` (2 subsequent siblings)
3 siblings, 0 replies; 10+ messages in thread
From: Marc Zyngier @ 2021-02-24 9:37 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Mark Rutland, kernelci.org bot, Will Deacon, Guillaume Tucker,
Catalin Marinas, kernel-team, Ard Biesheuvel
Enabling the MMU requires the write to SCTLR_ELx (and the ISB
that follows) to live in some identity-mapped memory. Otherwise,
the translation will result in something totally unexpected
(either fetching the wrong instruction stream, or taking a
fault of some sort).
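[Editorial note: to make the failure mode concrete, here is an illustrative sketch; the addresses are hypothetical, purely for illustration.]

```
	// PC = 0x40081000, a physical address: with the MMU off,
	// instruction fetches bypass translation.
	msr	sctlr_el2, x0		// set SCTLR_EL2.M (MMU on)
	isb				// the next fetch, from PC = 0x40081008,
					// now goes through the page tables;
					// unless that address maps to itself
					// (an identity map), we fetch the wrong
					// instruction stream or take a fault
```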
This is exactly what happens in mutate_to_vhe(), as this code
lives in the .hyp.text section, which isn't identity-mapped.
With the right configuration, this explodes badly.
Extract the MMU-enabling part of mutate_to_vhe(), and move
it to its own function that lives in the idmap. This ensures
nothing bad happens.
Fixes: f359182291c7 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/hyp-stub.S | 39 ++++++++++++++++++++++++------------
1 file changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 678cd2c618ee..ae56787ea7c1 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -75,9 +75,6 @@ SYM_CODE_END(el1_sync)
// nVHE? No way! Give me the real thing!
SYM_CODE_START_LOCAL(mutate_to_vhe)
- // Be prepared to fail
- mov_q x0, HVC_STUB_ERR
-
// Sanity check: MMU *must* be off
mrs x1, sctlr_el2
tbnz x1, #0, 1f
@@ -96,8 +93,11 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
cmp x1, xzr
and x2, x2, x1
csinv x2, x2, xzr, ne
- cbz x2, 1f
+ cbnz x2, 2f
+1: mov_q x0, HVC_STUB_ERR
+ eret
+2:
// Engage the VHE magic!
mov_q x0, HCR_HOST_VHE_FLAGS
msr hcr_el2, x0
@@ -131,6 +131,24 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
msr mair_el1, x0
isb
+ // Hack the exception return to stay at EL2
+ mrs x0, spsr_el1
+ and x0, x0, #~PSR_MODE_MASK
+ mov x1, #PSR_MODE_EL2h
+ orr x0, x0, x1
+ msr spsr_el1, x0
+
+ b enter_vhe
+SYM_CODE_END(mutate_to_vhe)
+
+ // At the point where we reach enter_vhe(), we run with
+ // the MMU off (which is enforced by mutate_to_vhe()).
+ // We thus need to be in the idmap, or everything will
+ // explode when enabling the MMU.
+
+ .pushsection .idmap.text, "ax"
+
+SYM_CODE_START_LOCAL(enter_vhe)
// Invalidate TLBs before enabling the MMU
tlbi vmalle1
dsb nsh
@@ -143,17 +161,12 @@ SYM_CODE_START_LOCAL(mutate_to_vhe)
mov_q x0, INIT_SCTLR_EL1_MMU_OFF
msr_s SYS_SCTLR_EL12, x0
- // Hack the exception return to stay at EL2
- mrs x0, spsr_el1
- and x0, x0, #~PSR_MODE_MASK
- mov x1, #PSR_MODE_EL2h
- orr x0, x0, x1
- msr spsr_el1, x0
-
mov x0, xzr
-1: eret
-SYM_CODE_END(mutate_to_vhe)
+ eret
+SYM_CODE_END(enter_vhe)
+
+ .popsection
.macro invalid_vector label
SYM_CODE_START_LOCAL(\label)
--
2.29.2
* [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
2021-02-24 9:37 [PATCH 0/3] arm64: Assorted MMU-on fixes Marc Zyngier
2021-02-24 9:37 ` [PATCH 1/3] arm64: VHE: Enable EL2 MMU from the idmap Marc Zyngier
@ 2021-02-24 9:37 ` Marc Zyngier
2021-02-24 11:06 ` Mark Rutland
2021-02-24 9:37 ` [PATCH 3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe Marc Zyngier
2021-02-24 12:36 ` [PATCH 0/3] arm64: Assorted MMU-on fixes Will Deacon
3 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2021-02-24 9:37 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Mark Rutland, Will Deacon, Guillaume Tucker, Catalin Marinas,
kernel-team, Ard Biesheuvel
Although there has been a bit of back and forth on the subject,
it appears that invalidating TLBs requires an ISB instruction
after the TLBI/DSB sequence, as documented in d0b7a302d58a
("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
Add the missing ISB in __primary_switch, just in case.
Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/head.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 1e30b5550d2a..66b0e0b66e31 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -837,6 +837,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
tlbi vmalle1 // Remove any stale TLB entries
dsb nsh
+ isb
set_sctlr_el1 x19 // re-enable the MMU
--
2.29.2
* [PATCH 3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe
2021-02-24 9:37 [PATCH 0/3] arm64: Assorted MMU-on fixes Marc Zyngier
2021-02-24 9:37 ` [PATCH 1/3] arm64: VHE: Enable EL2 MMU from the idmap Marc Zyngier
2021-02-24 9:37 ` [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch Marc Zyngier
@ 2021-02-24 9:37 ` Marc Zyngier
2021-02-24 11:12 ` Mark Rutland
2021-02-24 12:36 ` [PATCH 0/3] arm64: Assorted MMU-on fixes Will Deacon
3 siblings, 1 reply; 10+ messages in thread
From: Marc Zyngier @ 2021-02-24 9:37 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Mark Rutland, Will Deacon, Guillaume Tucker, Catalin Marinas,
kernel-team, Ard Biesheuvel
Although there has been a bit of back and forth on the subject,
it appears that invalidating TLBs requires an ISB instruction
after the TLBI/DSB sequence, as documented in d0b7a302d58a
("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
Add the missing ISB in enter_vhe(), just in case.
Fixes: f359182291c7 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/hyp-stub.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index ae56787ea7c1..5eccbd62fec8 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -152,6 +152,7 @@ SYM_CODE_START_LOCAL(enter_vhe)
// Invalidate TLBs before enabling the MMU
tlbi vmalle1
dsb nsh
+ isb
// Enable the EL2 S1 MMU, as set up from EL1
mrs_s x0, SYS_SCTLR_EL12
--
2.29.2
* Re: [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
2021-02-24 9:37 ` [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch Marc Zyngier
@ 2021-02-24 11:06 ` Mark Rutland
2021-02-24 11:16 ` Will Deacon
0 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2021-02-24 11:06 UTC (permalink / raw)
To: Marc Zyngier
Cc: Will Deacon, Guillaume Tucker, Catalin Marinas, kernel-team,
Ard Biesheuvel, linux-arm-kernel
Hi Marc,
On Wed, Feb 24, 2021 at 09:37:37AM +0000, Marc Zyngier wrote:
> Although there has been a bit of back and forth on the subject,
> it appears that invalidating TLBs requires an ISB instruction
> after the TLBI/DSB sequence, as documented in d0b7a302d58a
> ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
That commit describes a different scenario (going faulting->valid
without TLB maintenance), and I don't think that implies anything about
the behaviour in the presence of a TLBI, which is quite different.
However, I do see that commits:
7f0b1bf045113489 ("arm64: Fix barriers used for page table modifications")
51696d346c49c6cf ("arm64: tlb: Ensure we execute an ISB following walk cache invalidation")
... assume that we need an ISB after a TLBI+DSB, so I think it would be
better to refer to those, to avoid conflating the distinct cases.
> Add the missing ISB in __primary_switch, just in case.
>
> Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
For consistency with the other kernel TLBI paths, I'm fine with this
(assuming we update the commit message accordingly):
Acked-by: Mark Rutland <mark.rutland@arm.com>
My understanding is that we don't need an ISB after invalidation, and if
we align on that understanding we can follow up and update all of the
TLBI paths in one go.
Thanks,
Mark.
> ---
> arch/arm64/kernel/head.S | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 1e30b5550d2a..66b0e0b66e31 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -837,6 +837,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>
> tlbi vmalle1 // Remove any stale TLB entries
> dsb nsh
> + isb
* Re: [PATCH 3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe
2021-02-24 9:37 ` [PATCH 3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe Marc Zyngier
@ 2021-02-24 11:12 ` Mark Rutland
0 siblings, 0 replies; 10+ messages in thread
From: Mark Rutland @ 2021-02-24 11:12 UTC (permalink / raw)
To: Marc Zyngier
Cc: Will Deacon, Guillaume Tucker, Catalin Marinas, kernel-team,
Ard Biesheuvel, linux-arm-kernel
On Wed, Feb 24, 2021 at 09:37:38AM +0000, Marc Zyngier wrote:
> Although there has been a bit of back and forth on the subject,
> it appears that invalidating TLBs requires an ISB instruction
> after the TLBI/DSB sequence, as documented in d0b7a302d58a
> ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
>
> Add the missing ISB in enter_vhe(), just in case.
>
> Fixes: f359182291c7 ("arm64: Provide an 'upgrade to VHE' stub hypercall")
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Same comment as for patch 2; with the commit message similarly updated:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Thanks,
Mark.
> ---
> arch/arm64/kernel/hyp-stub.S | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
> index ae56787ea7c1..5eccbd62fec8 100644
> --- a/arch/arm64/kernel/hyp-stub.S
> +++ b/arch/arm64/kernel/hyp-stub.S
> @@ -152,6 +152,7 @@ SYM_CODE_START_LOCAL(enter_vhe)
> // Invalidate TLBs before enabling the MMU
> tlbi vmalle1
> dsb nsh
> + isb
>
> // Enable the EL2 S1 MMU, as set up from EL1
> mrs_s x0, SYS_SCTLR_EL12
> --
> 2.29.2
>
* Re: [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
2021-02-24 11:06 ` Mark Rutland
@ 2021-02-24 11:16 ` Will Deacon
2021-02-24 11:45 ` Mark Rutland
0 siblings, 1 reply; 10+ messages in thread
From: Will Deacon @ 2021-02-24 11:16 UTC (permalink / raw)
To: Mark Rutland
Cc: Marc Zyngier, Guillaume Tucker, Catalin Marinas, kernel-team,
Ard Biesheuvel, linux-arm-kernel
On Wed, Feb 24, 2021 at 11:06:43AM +0000, Mark Rutland wrote:
> On Wed, Feb 24, 2021 at 09:37:37AM +0000, Marc Zyngier wrote:
> > Although there has been a bit of back and forth on the subject,
> > it appears that invalidating TLBs requires an ISB instruction
> > after the TLBI/DSB sequence, as documented in d0b7a302d58a
> > ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
>
> That commit describes a different scenario (going faulting->valid
> without TLB maintenance), and I don't think that implies anything about
> the behaviour in the presence of a TLBI, which is quite different.
Sorry, that was my fault. I remember this all happened around the same
time.
> However, I do see that commits:
>
> 7f0b1bf045113489 ("arm64: Fix barriers used for page table modifications")
> 51696d346c49c6cf ("arm64: tlb: Ensure we execute an ISB following walk cache invalidation")
>
> ... assume that we need an ISB after a TLBI+DSB, so I think it would be
> better to refer to those, to avoid conflating the distinct cases.
>
> > Add the missing ISB in __primary_switch, just in case.
> >
> > Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
> > Suggested-by: Will Deacon <will@kernel.org>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
>
> For consistency with the other kernel TLBI paths, I'm fine with this
> (assuming we update the commit message accordingly):
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> My understanding is that we don't need an ISB after invalidation, and if
> we align on that understanding we can follow up and update all of the
> TLBI paths in one go.
Without FEAT_ETS, I think the ISB is required to complete TLBI:
| In an implementation that does not implement FEAT_ETS, a TLB maintenance
| instruction executed by a PE, PEx, can complete at any time after it is
| issued, but is only guaranteed to be finished for a PE, PEx, after the
| execution of DSB by the PEx followed by a Context synchronization event.
Will
* Re: [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
2021-02-24 11:16 ` Will Deacon
@ 2021-02-24 11:45 ` Mark Rutland
2021-02-24 12:06 ` Will Deacon
0 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2021-02-24 11:45 UTC (permalink / raw)
To: Will Deacon
Cc: Marc Zyngier, Guillaume Tucker, Catalin Marinas, kernel-team,
Ard Biesheuvel, linux-arm-kernel
On Wed, Feb 24, 2021 at 11:16:50AM +0000, Will Deacon wrote:
> On Wed, Feb 24, 2021 at 11:06:43AM +0000, Mark Rutland wrote:
> > On Wed, Feb 24, 2021 at 09:37:37AM +0000, Marc Zyngier wrote:
> > > Although there has been a bit of back and forth on the subject,
> > > it appears that invalidating TLBs requires an ISB instruction
> > > after the TLBI/DSB sequence, as documented in d0b7a302d58a
> > > ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
> >
> > That commit describes a different scenario (going faulting->valid
> > without TLB maintenance), and I don't think that implies anything about
> > the behaviour in the presence of a TLBI, which is quite different.
>
> Sorry, that was my fault. I remember this all happened around the same
> time.
>
> > However, I do see that commits:
> >
> > 7f0b1bf045113489 ("arm64: Fix barriers used for page table modifications")
> > 51696d346c49c6cf ("arm64: tlb: Ensure we execute an ISB following walk cache invalidation")
> >
> > ... assume that we need an ISB after a TLBI+DSB, so I think it would be
> > better to refer to those, to avoid conflating the distinct cases.
> >
> > > Add the missing ISB in __primary_switch, just in case.
> > >
> > > Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
> > > Suggested-by: Will Deacon <will@kernel.org>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> >
> > For consistency with the other kernel TLBI paths, I'm fine with this
> > (assuming we update the commit message accordingly):
> >
> > Acked-by: Mark Rutland <mark.rutland@arm.com>
> >
> > My understanding is that we don't need an ISB after invalidation, and if
> > we align on that understanding we can follow up and update all of the
> > TLBI paths in one go.
>
> Without FEAT_ETS, I think the ISB is required to complete TLBI:
>
> | In an implementation that does not implement FEAT_ETS, a TLB maintenance
> | instruction executed by a PE, PEx, can complete at any time after it is
> | issued, but is only guaranteed to be finished for a PE, PEx, after the
> | execution of DSB by the PEx followed by a Context synchronization event.
Ah; given that quote, I agree.
Can we put that in the commit message? I think that's clearer than
referring to any commit, and with that my ack definitely stands.
I /hope/ that quote means the same PEx in both cases (i.e. only the
issuing PE needs the context synchronization event), or we have much
greater problems!
Thanks,
Mark.
* Re: [PATCH 2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
2021-02-24 11:45 ` Mark Rutland
@ 2021-02-24 12:06 ` Will Deacon
0 siblings, 0 replies; 10+ messages in thread
From: Will Deacon @ 2021-02-24 12:06 UTC (permalink / raw)
To: Mark Rutland
Cc: Marc Zyngier, Guillaume Tucker, Catalin Marinas, kernel-team,
Ard Biesheuvel, linux-arm-kernel
On Wed, Feb 24, 2021 at 11:45:40AM +0000, Mark Rutland wrote:
> On Wed, Feb 24, 2021 at 11:16:50AM +0000, Will Deacon wrote:
> > On Wed, Feb 24, 2021 at 11:06:43AM +0000, Mark Rutland wrote:
> > > On Wed, Feb 24, 2021 at 09:37:37AM +0000, Marc Zyngier wrote:
> > > > Although there has been a bit of back and forth on the subject,
> > > > it appears that invalidating TLBs requires an ISB instruction
> > > > after the TLBI/DSB sequence, as documented in d0b7a302d58a
> > > > ("Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}"").
> > >
> > > That commit describes a different scenario (going faulting->valid
> > > without TLB maintenance), and I don't think that implies anything about
> > > the behaviour in the presence of a TLBI, which is quite different.
> >
> > Sorry, that was my fault. I remember this all happened around the same
> > time.
> >
> > > However, I do see that commits:
> > >
> > > 7f0b1bf045113489 ("arm64: Fix barriers used for page table modifications")
> > > 51696d346c49c6cf ("arm64: tlb: Ensure we execute an ISB following walk cache invalidation")
> > >
> > > ... assume that we need an ISB after a TLBI+DSB, so I think it would be
> > > better to refer to those, to avoid conflating the distinct cases.
> > >
> > > > Add the missing ISB in __primary_switch, just in case.
> > > >
> > > > Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()")
> > > > Suggested-by: Will Deacon <will@kernel.org>
> > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > >
> > > For consistency with the other kernel TLBI paths, I'm fine with this
> > > (assuming we update the commit message accordingly):
> > >
> > > Acked-by: Mark Rutland <mark.rutland@arm.com>
> > >
> > > My understanding is that we don't need an ISB after invalidation, and if
> > > we align on that understanding we can follow up and update all of the
> > > TLBI paths in one go.
> >
> > Without FEAT_ETS, I think the ISB is required to complete TLBI:
> >
> > | In an implementation that does not implement FEAT_ETS, a TLB maintenance
> > | instruction executed by a PE, PEx, can complete at any time after it is
> > | issued, but is only guaranteed to be finished for a PE, PEx, after the
> > | execution of DSB by the PEx followed by a Context synchronization event.
>
> Ah; given that quote, I agree.
>
> Can we put that in the commit message? I think that's clearer than
> referring to any commit, and with that my ack definitely stands.
Done!
> I /hope/ that quote means the same PEx in both cases (i.e. only the
> issuing PE needs the context synchronization event), or we have much
> greater problems!
Yeah, that's definitely what it's supposed to mean (I remember working on
this), but the wording ain't great! Other PEs are covered in a separate
statement:
| A TLB maintenance instruction executed by a PE, PEx, can complete at any time
| after it is issued, but is only guaranteed to be finished for a PE other than
| PEx after the execution of DSB by the PEx.
which would make for an interesting tutorial exercise for somebody studying
temporal logic.
Will
* Re: [PATCH 0/3] arm64: Assorted MMU-on fixes
2021-02-24 9:37 [PATCH 0/3] arm64: Assorted MMU-on fixes Marc Zyngier
` (2 preceding siblings ...)
2021-02-24 9:37 ` [PATCH 3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe Marc Zyngier
@ 2021-02-24 12:36 ` Will Deacon
3 siblings, 0 replies; 10+ messages in thread
From: Will Deacon @ 2021-02-24 12:36 UTC (permalink / raw)
To: Marc Zyngier, linux-arm-kernel
Cc: Mark Rutland, Will Deacon, catalin.marinas, Guillaume Tucker,
kernel-team, Ard Biesheuvel
On Wed, 24 Feb 2021 09:37:35 +0000, Marc Zyngier wrote:
> Guillaume (and his mate the KernelCI bot) recently reported that the
> new VHE enablement code lamentably fails when 64k pages are enabled.
> And fair enough, the code was broken to the point that it really
> should never have booted in the first place...
>
> The first patch fixes what amounts to a total brain fart.
>
> [...]
Applied to arm64 (for-next/fixes), thanks!
[1/3] arm64: VHE: Enable EL2 MMU from the idmap
https://git.kernel.org/arm64/c/f1b6cff7c98b
[2/3] arm64: Add missing ISB after invalidating TLB in __primary_switch
https://git.kernel.org/arm64/c/9d41053e8dc1
[3/3] arm64: Add missing ISB after invalidating TLB in enter_vhe
https://git.kernel.org/arm64/c/430251cc864b
Cheers,
--
Will
https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev