* [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
@ 2021-07-14 1:36 Peter Collingbourne
2021-07-14 14:04 ` Mark Rutland
2021-07-28 17:46 ` Catalin Marinas
0 siblings, 2 replies; 4+ messages in thread
From: Peter Collingbourne @ 2021-07-14 1:36 UTC (permalink / raw)
To: Catalin Marinas, Vincenzo Frascino, Will Deacon, Andrey Konovalov
Cc: Peter Collingbourne, Evgenii Stepanov, Szabolcs Nagy,
Tejas Belagod, linux-arm-kernel
Accessing GCR_EL1 and issuing an ISB can be expensive on some
microarchitectures. Although we must write to GCR_EL1, we can
restructure the code to avoid reading from it because the new value
can be derived entirely from the exclusion mask, which is already in
a GPR. Do so.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75
---
v4:
- split in two
v3:
- go back to modifying on entry/exit; optimize that path instead
v2:
- rebase onto v9 of the tag checking mode preference series
arch/arm64/kernel/entry.S | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ce59280355c5..2d6dc62d929a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -175,15 +175,11 @@ alternative_else_nop_endif
#endif
.endm
- .macro mte_set_gcr, tmp, tmp2
+ .macro mte_set_gcr, mte_ctrl, tmp
#ifdef CONFIG_ARM64_MTE
- /*
- * Calculate and set the exclude mask preserving
- * the RRND (bit[16]) setting.
- */
- mrs_s \tmp2, SYS_GCR_EL1
- bfxil \tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
- msr_s SYS_GCR_EL1, \tmp2
+ ubfx \tmp, \mte_ctrl, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
+ orr \tmp, \tmp, #SYS_GCR_EL1_RRND
+ msr_s SYS_GCR_EL1, \tmp
#endif
.endm
--
2.32.0.93.g670b81a890-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
2021-07-14 1:36 [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit Peter Collingbourne
@ 2021-07-14 14:04 ` Mark Rutland
2021-07-28 17:18 ` Catalin Marinas
2021-07-28 17:46 ` Catalin Marinas
1 sibling, 1 reply; 4+ messages in thread
From: Mark Rutland @ 2021-07-14 14:04 UTC (permalink / raw)
To: Peter Collingbourne
Cc: Catalin Marinas, Vincenzo Frascino, Will Deacon,
Andrey Konovalov, Evgenii Stepanov, Szabolcs Nagy, Tejas Belagod,
linux-arm-kernel
Hi Peter,
On Tue, Jul 13, 2021 at 06:36:38PM -0700, Peter Collingbourne wrote:
> Accessing GCR_EL1 and issuing an ISB can be expensive on some
> microarchitectures. Although we must write to GCR_EL1, we can
> restructure the code to avoid reading from it because the new value
> can be derived entirely from the exclusion mask, which is already in
> a GPR. Do so.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75
> ---
> v4:
> - split in two
>
> v3:
> - go back to modifying on entry/exit; optimize that path instead
>
> v2:
> - rebase onto v9 of the tag checking mode preference series
>
> arch/arm64/kernel/entry.S | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ce59280355c5..2d6dc62d929a 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -175,15 +175,11 @@ alternative_else_nop_endif
> #endif
> .endm
>
> - .macro mte_set_gcr, tmp, tmp2
> + .macro mte_set_gcr, mte_ctrl, tmp
> #ifdef CONFIG_ARM64_MTE
> - /*
> - * Calculate and set the exclude mask preserving
> - * the RRND (bit[16]) setting.
> - */
> - mrs_s \tmp2, SYS_GCR_EL1
> - bfxil \tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
> - msr_s SYS_GCR_EL1, \tmp2
> + ubfx \tmp, \mte_ctrl, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
> + orr \tmp, \tmp, #SYS_GCR_EL1_RRND
> + msr_s SYS_GCR_EL1, \tmp
> #endif
> .endm
Since the mte_ctrl value only has the Exclude bits set, we can make this
even simpler:
orr \tmp, \mte_ctrl, #SYS_GCR_EL1_RRND
msr_s SYS_GCR_EL1, \tmp
Otherwise, looks good to me!
Thanks,
Mark.
* Re: [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
2021-07-14 14:04 ` Mark Rutland
@ 2021-07-28 17:18 ` Catalin Marinas
0 siblings, 0 replies; 4+ messages in thread
From: Catalin Marinas @ 2021-07-28 17:18 UTC (permalink / raw)
To: Mark Rutland
Cc: Peter Collingbourne, Vincenzo Frascino, Will Deacon,
Andrey Konovalov, Evgenii Stepanov, Szabolcs Nagy, Tejas Belagod,
linux-arm-kernel
On Wed, Jul 14, 2021 at 03:04:42PM +0100, Mark Rutland wrote:
> On Tue, Jul 13, 2021 at 06:36:38PM -0700, Peter Collingbourne wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ce59280355c5..2d6dc62d929a 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -175,15 +175,11 @@ alternative_else_nop_endif
> > #endif
> > .endm
> >
> > - .macro mte_set_gcr, tmp, tmp2
> > + .macro mte_set_gcr, mte_ctrl, tmp
> > #ifdef CONFIG_ARM64_MTE
> > - /*
> > - * Calculate and set the exclude mask preserving
> > - * the RRND (bit[16]) setting.
> > - */
> > - mrs_s \tmp2, SYS_GCR_EL1
> > - bfxil \tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
> > - msr_s SYS_GCR_EL1, \tmp2
> > + ubfx \tmp, \mte_ctrl, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
> > + orr \tmp, \tmp, #SYS_GCR_EL1_RRND
> > + msr_s SYS_GCR_EL1, \tmp
> > #endif
> > .endm
>
> Since the mte_ctrl value only has the Exclude bits set, we can make this
> even simpler:
>
> orr \tmp, \mte_ctrl, #SYS_GCR_EL1_RRND
> msr_s SYS_GCR_EL1, \tmp
I don't think we can guarantee it following this patch (some other bits
added to mte_ctrl):
https://lore.kernel.org/r/20210727205300.2554659-3-pcc@google.com
--
Catalin
* Re: [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
2021-07-14 1:36 [PATCH v4] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit Peter Collingbourne
2021-07-14 14:04 ` Mark Rutland
@ 2021-07-28 17:46 ` Catalin Marinas
1 sibling, 0 replies; 4+ messages in thread
From: Catalin Marinas @ 2021-07-28 17:46 UTC (permalink / raw)
To: Vincenzo Frascino, Peter Collingbourne, Will Deacon, Andrey Konovalov
Cc: Greg Kroah-Hartman, Szabolcs Nagy, linux-arm-kernel,
Tejas Belagod, Evgenii Stepanov
On Tue, 13 Jul 2021 18:36:38 -0700, Peter Collingbourne wrote:
> Accessing GCR_EL1 and issuing an ISB can be expensive on some
> microarchitectures. Although we must write to GCR_EL1, we can
> restructure the code to avoid reading from it because the new value
> can be derived entirely from the exclusion mask, which is already in
> a GPR. Do so.
Applied to arm64 (for-next/mte), thanks!
[1/1] arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
https://git.kernel.org/arm64/c/afdfd93a53ae
--
Catalin