* [PATCH v2 0/2] arm64: E0PD support
@ 2019-08-14 18:31 Mark Brown
  2019-08-14 18:31 ` [PATCH v2 1/2] arm64: Add initial support for E0PD Mark Brown
  2019-08-14 18:31 ` [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD Mark Brown
  0 siblings, 2 replies; 18+ messages in thread
From: Mark Brown @ 2019-08-14 18:31 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel, Suzuki K Poulose

This series adds support for E0PD. We enable E0PD unconditionally where
present, and on systems where all the CPUs support E0PD we will not
automatically enable KPTI for KASLR. The integration with KPTI is safe
but not optimal for big.LITTLE systems where only some CPUs support
E0PD, since they will still use KPTI even on the CPUs that have E0PD;
I've not yet come up with something I like for integrating support for
them with the command line overrides.

Mark Brown (2):
      arm64: Add initial support for E0PD
      arm64: Don't use KPTI where we have E0PD

 arch/arm64/Kconfig                     | 15 +++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/mmu.h           | 13 ++++++++++++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 29 ++++++++++++++++++++++++++++-
 6 files changed, 60 insertions(+), 3 deletions(-)




* [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-08-14 18:31 [PATCH v2 0/2] arm64: E0PD support Mark Brown
@ 2019-08-14 18:31 ` Mark Brown
  2019-10-10 16:13   ` Mark Rutland
  2019-08-14 18:31 ` [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD Mark Brown
  1 sibling, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-08-14 18:31 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Brown, linux-arm-kernel, Suzuki K Poulose

Kernel Page Table Isolation (KPTI) is used to mitigate some
speculation-based security issues by ensuring that the kernel is not
mapped when userspace is running, but this approach is expensive and is
incompatible with SPE.  E0PD, introduced in the ARMv8.5 extensions,
provides an alternative: it ensures that accesses from userspace to the
kernel's half of the memory map always fault in constant time,
preventing timing attacks without requiring constant unmapping and
remapping or blocking legitimate accesses.

This initial patch does not yet integrate with KPTI; that will be dealt
with in followup patches.  Ideally we would ensure that by default we
don't use KPTI on CPUs where E0PD is present.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---

Tweak the Kconfig text as suggested by Will.

 arch/arm64/Kconfig                     | 15 ++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
 5 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 12de5c6075ec..7bf403405c99 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1392,6 +1392,21 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARM64_E0PD
+	bool "Enable support for E0PD"
+	default y
+	help
+	   E0PD (part of the ARMv8.5 extensions) allows us to ensure
+	   that EL0 accesses made via TTBR1 always fault in constant time,
+	   providing benefits to KPTI with lower overhead and without
+	   disrupting legitimate access to kernel memory such as SPE.
+
+	   This option enables E0PD for TTBR1 where available.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b9acc4..f25388981075 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_E0PD				45
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				46
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index db92950bb1a0..1a2708ebcae8 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -292,6 +292,8 @@
 #define TCR_HD			(UL(1) << 40)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
+#define TCR_E0PD0		(UL(1) << 55)
+#define TCR_E0PD1		(UL(1) << 56)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 1df45c7ffcf7..37a0926536d3 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -652,6 +652,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9323bcc40a58..62b01fc35ef6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -219,6 +219,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1244,6 +1245,19 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#ifdef CONFIG_ARM64_E0PD
+static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
+{
+	/*
+	 * The cpu_enable() callback gets called even on CPUs that
+	 * don't detect the feature so we need to verify if we can
+	 * enable.
+	 */
+	if (this_cpu_has_cap(ARM64_HAS_E0PD))
+		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
+}
+#endif /* CONFIG_ARM64_E0PD */
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool enable_pseudo_nmi;
 
@@ -1559,6 +1573,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARM64_E0PD
+	{
+		.desc = "E0PD",
+		.capability = ARM64_HAS_E0PD,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_e0pd,
+	},
 #endif
 	{},
 };
-- 
2.20.1



* [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-14 18:31 [PATCH v2 0/2] arm64: E0PD support Mark Brown
  2019-08-14 18:31 ` [PATCH v2 1/2] arm64: Add initial support for E0PD Mark Brown
@ 2019-08-14 18:31 ` Mark Brown
  2019-08-15 16:35   ` Will Deacon
  1 sibling, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-08-14 18:31 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Brown, linux-arm-kernel, Suzuki K Poulose

Since E0PD is intended to fulfil the same role as KPTI, we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled to check for
E0PD before doing so; CPUs with E0PD are not expected to be affected by
Meltdown, so they should not need to enable KPTI for other reasons.

Since we repeat the KPTI check for all CPUs, we will still enable KPTI
if any of the CPUs in the system lacks E0PD. Since KPTI itself is not
changed by this patch, once we enable KPTI we do so for all CPUs. This
is safe but not optimally performant for such systems.

In order to ensure that we don't install any non-global mappings in
cases where we use E0PD for the system instead, we add a check for E0PD
to the early checks in arm64_kernel_use_ng_mappings() and do not install
nG mappings if the current CPU has E0PD. This will incur an overhead on
systems where the boot CPU has E0PD but some other CPUs do not; however,
it is expected that the systems with very large memories which benefit
most from this optimization will be symmetric.

KPTI can still be forced on from the command line if required.

Signed-off-by: Mark Brown <broonie@kernel.org>
---

Added a check in arm64_kernel_use_ng_mappings() to suppress non-global
mappings when E0PD is present and KPTI isn't forced on.

 arch/arm64/include/asm/mmu.h   | 13 ++++++++++++-
 arch/arm64/kernel/cpufeature.c |  2 +-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index fd6161336653..85552f6fceda 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,6 +38,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 static inline bool arm64_kernel_use_ng_mappings(void)
 {
 	bool tx1_bug;
+	u64 ftr;
 
 	/* What's a kpti? Use global mappings if we don't know. */
 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
@@ -59,7 +60,7 @@ static inline bool arm64_kernel_use_ng_mappings(void)
 	 * KASLR is enabled so we're going to be enabling kpti on non-broken
 	 * CPUs regardless of their susceptibility to Meltdown. Rather
 	 * than force everybody to go through the G -> nG dance later on,
-	 * just put down non-global mappings from the beginning.
+	 * just put down non-global mappings from the beginning...
 	 */
 	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
 		tx1_bug = false;
@@ -74,6 +75,16 @@ static inline bool arm64_kernel_use_ng_mappings(void)
 		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
 	}
 
+	/*
+	 * ...unless we have E0PD in which case we may use that in
+	 * preference to unmapping the kernel.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
 	return !tx1_bug && kaslr_offset() > 0;
 }
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 62b01fc35ef6..6bed144867ad 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1003,7 +1003,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 
 	/* Useful for KASLR robustness */
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
-		if (!__kpti_forced) {
+		if (!__kpti_forced && !this_cpu_has_cap(ARM64_HAS_E0PD)) {
 			str = "KASLR";
 			__kpti_forced = 1;
 		}
-- 
2.20.1



* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-14 18:31 ` [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD Mark Brown
@ 2019-08-15 16:35   ` Will Deacon
  2019-08-15 18:00     ` Mark Brown
  2019-08-16 10:24     ` Catalin Marinas
  0 siblings, 2 replies; 18+ messages in thread
From: Will Deacon @ 2019-08-15 16:35 UTC (permalink / raw)
  To: Mark Brown; +Cc: Catalin Marinas, linux-arm-kernel, Suzuki K Poulose

Hi Mark,

Thanks for respinning. Comments below...

On Wed, Aug 14, 2019 at 07:31:03PM +0100, Mark Brown wrote:
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index fd6161336653..85552f6fceda 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -38,6 +38,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>  static inline bool arm64_kernel_use_ng_mappings(void)
>  {
>  	bool tx1_bug;
> +	u64 ftr;
>  
>  	/* What's a kpti? Use global mappings if we don't know. */
>  	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> @@ -59,7 +60,7 @@ static inline bool arm64_kernel_use_ng_mappings(void)
>  	 * KASLR is enabled so we're going to be enabling kpti on non-broken
>  	 * CPUs regardless of their susceptibility to Meltdown. Rather
>  	 * than force everybody to go through the G -> nG dance later on,
> -	 * just put down non-global mappings from the beginning.
> +	 * just put down non-global mappings from the beginning...
>  	 */
>  	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
>  		tx1_bug = false;
> @@ -74,6 +75,16 @@ static inline bool arm64_kernel_use_ng_mappings(void)
>  		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
>  	}
>  
> +	/*
> +	 * ...unless we have E0PD in which case we may use that in
> +	 * preference to unmapping the kernel.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> +			return false;
> +	}
> +
>  	return !tx1_bug && kaslr_offset() > 0;

I'm still unsure as to how this works with the kaslr check in
kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
requires kpti.

In this case, I think we'll:

	1. Start off with global mappings installed by the boot CPU
	2. Detect KPTI as being required on the secondary CPU
	3. Avoid rewriting the page tables because kaslr_offset > 0

At this point, we've got exposed global mappings on the secondary CPU.

Thinking about this further, I think we can simply move all of the
'kaslr_offset() > 0' checks used by the kpti code (i.e. in
arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
unmap_kernel_at_el0()) into a helper function which does the check for
E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

I think that should simplify your patch as well. What do you think?
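
A minimal sketch of the kind of helper being suggested here; the name
and exact checks are assumptions at this point in the discussion, not
the eventual implementation:

static bool kaslr_requires_kpti(void)
{
	/* KASLR only forces kpti when an offset was actually applied. */
	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) || kaslr_offset() == 0)
		return false;

	/* E0PD gives equivalent protection without unmapping the kernel. */
	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
		u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

		if ((mmfr2 >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
			return false;
	}

	return true;
}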

Will


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-15 16:35   ` Will Deacon
@ 2019-08-15 18:00     ` Mark Brown
  2019-08-16 11:31       ` Mark Brown
  2019-08-16 10:24     ` Catalin Marinas
  1 sibling, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-08-15 18:00 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, Suzuki K Poulose



On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> I'm still unsure as to how this works with the kaslr check in
> kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
> kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
> requires kpti.

Yes, in fact that is my default big.LITTLE test case.

> In this case, I think we'll:

> 	1. Start off with global mappings installed by the boot CPU
> 	2. Detect KPTI as being required on the secondary CPU
> 	3. Avoid rewriting the page tables because kaslr_offset > 0

> At this point, we've got exposed global mappings on the secondary CPU.

Right, yes.  It'd be enormously helpful if KASLR were a bit more visible
in the boot logs or something since I yet again managed to do that bit
of my testing without KASLR actually taking effect :/

> Thinking about this further, I think we can simply move all of the
> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> unmap_kernel_at_el0()) into a helper function which does the check for
> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> I think that should simplify your patch as well. What do you think?

Dunno about simplifying the patch particularly, looks very similar but
in any case it does appear to solve the problem - thanks.


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-15 16:35   ` Will Deacon
  2019-08-15 18:00     ` Mark Brown
@ 2019-08-16 10:24     ` Catalin Marinas
  2019-08-16 12:10       ` Mark Brown
  1 sibling, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2019-08-16 10:24 UTC (permalink / raw)
  To: Will Deacon; +Cc: Mark Brown, linux-arm-kernel, Suzuki K Poulose

On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 07:31:03PM +0100, Mark Brown wrote:
> > diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> > index fd6161336653..85552f6fceda 100644
> > --- a/arch/arm64/include/asm/mmu.h
> > +++ b/arch/arm64/include/asm/mmu.h
> > @@ -38,6 +38,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
> >  static inline bool arm64_kernel_use_ng_mappings(void)
> >  {
> >  	bool tx1_bug;
> > +	u64 ftr;
> >  
> >  	/* What's a kpti? Use global mappings if we don't know. */
> >  	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> > @@ -59,7 +60,7 @@ static inline bool arm64_kernel_use_ng_mappings(void)
> >  	 * KASLR is enabled so we're going to be enabling kpti on non-broken
> >  	 * CPUs regardless of their susceptibility to Meltdown. Rather
> >  	 * than force everybody to go through the G -> nG dance later on,
> > -	 * just put down non-global mappings from the beginning.
> > +	 * just put down non-global mappings from the beginning...
> >  	 */
> >  	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> >  		tx1_bug = false;
> > @@ -74,6 +75,16 @@ static inline bool arm64_kernel_use_ng_mappings(void)
> >  		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
> >  	}
> >  
> > +	/*
> > +	 * ...unless we have E0PD in which case we may use that in
> > +	 * preference to unmapping the kernel.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> > +			return false;
> > +	}

What I don't particularly like here is that on big.LITTLE this hunk may
have a different behaviour depending on which CPU you run it on. In
general, such CPUID access should only be done in a non-preemptible
context.

We probably get away with this during early boot (before CPU caps have
been set up) when arm64_kernel_unmapped_at_el0() is false since we only
have a single CPU running. Later on at run-time, we either have
arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
CPUs having E0PD. But I find it hard to reason about.

Could we move the above hunk in this block:

	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
		...
	}

and reshuffle the rest of the function to only rely on
arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?
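
For illustration, a rough sketch of how that hunk might look inside that
branch (assumptions only; the existing raw MIDR check for the TX1
erratum is elided):

	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
		/* Raw CPUID checks, only meaningful this early on the boot CPU. */
		if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
			u64 ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

			if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
				return false;
		}
		/* ... existing MIDR-based TX1 workaround check ... */
	}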

> > +
> >  	return !tx1_bug && kaslr_offset() > 0;
> 
> I'm still unsure as to how this works with the kaslr check in
> kpti_install_ng_mappings(). Imagine you have a big.LITTLE system using
> kaslr where the boot CPU has E0PD but the secondary CPU doesn't, and
> requires kpti.
> 
> In this case, I think we'll:
> 
> 	1. Start off with global mappings installed by the boot CPU
> 	2. Detect KPTI as being required on the secondary CPU
> 	3. Avoid rewriting the page tables because kaslr_offset > 0
> 
> At this point, we've got exposed global mappings on the secondary CPU.
> 
> Thinking about this further, I think we can simply move all of the
> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> unmap_kernel_at_el0()) into a helper function which does the check for
> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

I agree, this needs some refactoring as we have this decision in three
separate places.

Trying to put my thoughts together. At run-time, with capabilities fully
enabled, we want:

  arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()

  KPTI is equivalent to arm64_kernel_unmapped_at_el0()

At boot time, it's a best effort but we can only move from G to nG
mappings. We start with nG if the primary CPU requires it to avoid
unnecessary page table rewriting. For your above scenario,
kpti_install_ng_mappings() needs to know the boot CPU G/nG state and
skip the rewriting if already nG. If we have a kaslr_requires_kpti()
that only checks the current CPU, it wouldn't know whether kpti was
already applied at boot.

I think kaslr_requires_kpti() should access the raw CPUID registers (for
E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
The boot CPU should store kaslr_requires_kpti() value somewhere and
kpti_install_ng_mappings() should check this variable before deciding to
skip the page table rewrite.
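
A rough sketch of that shape, where the variable and init hook names are
purely illustrative assumptions:

/* Decided once on the boot CPU, before any secondaries are brought up. */
static bool boot_cpu_requires_kpti __ro_after_init;

void __init record_kaslr_kpti_decision(void)
{
	/* kaslr_requires_kpti() does the raw CPUID checks (E0PD, TX1 bug). */
	boot_cpu_requires_kpti = kaslr_requires_kpti();
}

kpti_install_ng_mappings() would then consult boot_cpu_requires_kpti
rather than kaslr_offset() > 0 when deciding whether the rewrite can be
skipped.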

-- 
Catalin


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-15 18:00     ` Mark Brown
@ 2019-08-16 11:31       ` Mark Brown
  0 siblings, 0 replies; 18+ messages in thread
From: Mark Brown @ 2019-08-16 11:31 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, Suzuki K Poulose



On Thu, Aug 15, 2019 at 07:00:30PM +0100, Mark Brown wrote:
> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> > Thinking about this further, I think we can simply move all of the
> > 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> > arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> > unmap_kernel_at_el0()) into a helper function which does the check for
> > E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> > I think that should simplify your patch as well. What do you think?

> Dunno about simplifying the patch particularly, looks very similar but
> in any case it does appear to solve the problem - thanks.

Actually no, it's not quite that simple.  They're not all looking for
quite the same thing even if they're all currently doing the same check.
For example, kpti_install_ng_mappings() should run on all CPUs unless
none of them have installed global mappings, and in particular it
currently needs to run on the boot CPU, but that's not what we want in,
for example, unmap_kernel_at_el0().  I'll poke at it some more.


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-16 10:24     ` Catalin Marinas
@ 2019-08-16 12:10       ` Mark Brown
  2019-09-24  9:13         ` Suzuki K Poulose
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-08-16 12:10 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Will Deacon, linux-arm-kernel, Suzuki K Poulose



On Fri, Aug 16, 2019 at 11:24:24AM +0100, Catalin Marinas wrote:
> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:

> > > +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > > +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> > > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> > > +			return false;
> > > +	}

> What I don't particularly like here is that on big.LITTLE this hunk may
> have a different behaviour depending on which CPU you run it on. In
> general, such CPUID access should only be done in a non-preemptible
> context.

> We probably get away with this during early boot (before CPU caps have
> been set up) when arm64_kernel_unmapped_at_el0() is false since we only
> have a single CPU running. Later on at run-time, we either have
> arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
> E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
> CPUs having E0PD. But I find it hard to reason about.

Yes, all this stuff is unfortunately hard to reason about since there
are several environment changes during boot which have a material effect
and also multiple different things that might trigger KPTI.  IIRC my
thinking here was that if we turn on KPTI we're turning it on for all
CPUs, so by the time we could be preempted we'd be returning true from
the earlier check for arm64_kernel_unmapped_at_el0(), but it's possible
I missed some case there.  I was trying to avoid disturbing the existing
code too much unless I had a strong reason to, on the basis that I might
be missing something about the way it was done.

> Could we move the above hunk in this block:

> 	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
> 		...
> 	}

> and reshuffle the rest of the function to only rely on
> arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?

I've added the check, will look at the reshuffle.

> > Thinking about this further, I think we can simply move all of the
> > 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
> > arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
> > unmap_kernel_at_el0()) into a helper function which does the check for
> > E0PD as well. Perhaps 'kaslr_requires_kpti()' ?

> I agree, this needs some refactoring as we have this decision in three
> separate places.

> Trying to put my thoughts together. At run-time, with capabilities fully
> enabled, we want:

>   arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()

>   KPTI is equivalent to arm64_kernel_unmapped_at_el0()

Yes, this bit is simple - once we're up and running everything is clear.

> I think kaslr_requires_kpti() should access the raw CPUID registers (for
> E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
> arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
> The boot CPU should store kaslr_requires_kpti() value somewhere and
> kpti_install_ng_mappings() should check this variable before deciding to
> skip the page table rewrite.

We definitely need some variable I think, and I think you're right that
making the decision on the boot CPU would simplify things a lot.  The
systems with very large memories that are most affected by the cost of
moving from global to non-global mappings are most likely symmetric
anyway so only looking at the boot CPU should be fine for that.


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-08-16 12:10       ` Mark Brown
@ 2019-09-24  9:13         ` Suzuki K Poulose
  2019-10-09 17:52           ` Mark Brown
  0 siblings, 1 reply; 18+ messages in thread
From: Suzuki K Poulose @ 2019-09-24  9:13 UTC (permalink / raw)
  To: Mark Brown, Catalin Marinas; +Cc: Will Deacon, linux-arm-kernel



On 16/08/2019 13:10, Mark Brown wrote:
> On Fri, Aug 16, 2019 at 11:24:24AM +0100, Catalin Marinas wrote:
>> On Thu, Aug 15, 2019 at 05:35:42PM +0100, Will Deacon wrote:
> 
>>>> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
>>>> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
>>>> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
>>>> +			return false;
>>>> +	}
> 
>> What I don't particularly like here is that on big.LITTLE this hunk may
>> have a different behaviour depending on which CPU you run it on. In
>> general, such CPUID access should only be done in a non-preemptible
>> context.
> 
>> We probably get away with this during early boot (before CPU caps have
>> been set up) when arm64_kernel_unmapped_at_el0() is false since we only
>> have a single CPU running. Later on at run-time, we either have
>> arm64_kernel_unmapped_at_el0() true, meaning that some CPU is missing
>> E0PD with kaslr_offset() > 0, or the kernel is mapped at EL0 with all
>> CPUs having E0PD. But I find it hard to reason about.
> 
> Yes, all this stuff is unfortunately hard to reason about since there's
> several environment changes during boot which have a material effect and
> also multiple different things that might trigger KPTI.  IIRC my thinking
> here was that if we turned on KPTI we're turning it on for all CPUs so
> by the time we could be prempted we'd be returning true from the earlier
> check for arm64_kernel_unmapped_at_el0() but it's possible I missed some
> case there.  I was trying to avoid disturbing the existing code too much
> unless I had a strong reason to on the basis that I might be missing
> something about the way it was done.
> 
>> Could we move the above hunk in this block:
> 
>> 	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
>> 		...
>> 	}
> 
>> and reshuffle the rest of the function to only rely on
>> arm64_kernel_unmapped_at_el0() when the caps are ready (at run-time)?
> 
> I've added the check, will look at the reshuffle.
> 
>>> Thinking about this further, I think we can simply move all of the
>>> 'kaslr_offset() > 0' checks used by the kpti code (i.e. in
>>> arm64_kernel_unmapped_at_el0(), kpti_install_ng_mappings() and
>>> unmap_kernel_at_el0()) into a helper function which does the check for
>>> E0PD as well. Perhaps 'kaslr_requires_kpti()' ?
> 
>> I agree, this needs some refactoring as we have this decision in three
>> separate places.
> 
>> Trying to put my thoughts together. At run-time, with capabilities fully
>> enabled, we want:
> 
>>    arm64_kernel_use_ng_mappings() == arm64_kernel_unmapped_at_el0()
> 
>>    KPTI is equivalent to arm64_kernel_unmapped_at_el0()
> 
> Yes, this bit is simple - once we're up and running everything is clear.
> 
>> I think kaslr_requires_kpti() should access the raw CPUID registers (for
>> E0PD, TX1 bug) and be called only by unmap_kernel_at_el0() and
>> arm64_kernel_use_ng_mappings(), the latter if !arm64_const_caps_ready.
>> The boot CPU should store kaslr_requires_kpti() value somewhere and
>> kpti_install_ng_mappings() should check this variable before deciding to
>> skip the page table rewrite.
> 
> We definitely need some variable I think, and I think you're right that
> making the decision on the boot CPU would simplify things a lot.  The
> systems with very large memories that are most affected by the cost of
> moving from global to non-global mappings are most likely symmetric
> anyway so only looking at the boot CPU should be fine for that.
> 

With KASLR, we already rewrite the page tables from __primary_switch()
after relocating the kernel. So we may be able to perform the "raw cpuid
check" on the boot CPU with the MMU turned on, before we rewrite the
pagetables for the KASLR displacement and nG if that is needed (maybe by
updating SWAPPER_MM_MMUFLAGS) for the boot CPU, and store this
information somewhere. Thus we may be able to avoid another rewrite of
the pagetables after we have booted the secondaries.

We could continue to do the per-CPU check to see if we need nG mappings
and perform the transition later if needed, like we do now.

Discussing this with Catalin, he suggests using a variable for the
status of the "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the
helper function all the time. By using the per-CPU check we can make
sure the flag is up to date.
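
A very rough sketch of what that could look like (the variable name is
an assumption; PTE_NG and PMD_SECT_NG are the existing hardware bits):

/* Updated from the per-CPU check so PTE/PMD_MAYBE_NG stay cheap. */
extern bool arm64_use_ng_mappings;

#define PTE_MAYBE_NG	(arm64_use_ng_mappings ? PTE_NG : 0)
#define PMD_MAYBE_NG	(arm64_use_ng_mappings ? PMD_SECT_NG : 0)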

Also, we can continue to fail the hotplugged CPUs if we detect that the 
pagetables are Global and the new CPU requires nG (for heterogeneous systems).

Suzuki



* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-09-24  9:13         ` Suzuki K Poulose
@ 2019-10-09 17:52           ` Mark Brown
  2019-10-10 10:24             ` Suzuki K Poulose
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-10-09 17:52 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:
> On 16/08/2019 13:10, Mark Brown wrote:

> > We definitely need some variable I think, and I think you're right that
> > making the decision on the boot CPU would simplify things a lot.  The

> relocating the kernel. So, we may be able to perform "raw cpuid check" on
> the boot CPU with MMU turned on, before we re-write the pagetables for KASLR
> displacement and nG if that is needed (by maybe updating SWWAPPER_MMU_FLAGS) for
> the boot CPU and store this information somewhere. Thus we may be able to
> avoid another re-write of the pagetables after we have booted the secondaries.

The boot CPU is straightforward; there is only an issue on the
secondaries where, IIRC, the rewrite code needs some updates as we
get left with non-global mappings lying around.

> Discussing this with Catalin, he suggests to use a variable for the status
> of "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the helper function
> all the time. By using the per-CPU check we can make sure the flag is uptodate.

That was the discussion about the variable above.  We need one
for non-optimization reasons anyway since we can't rely on
checking the state on the current CPU.

> Also, we can continue to fail the hotplugged CPUs if we detect that the
> pagetables are Global and the new CPU requires nG (for heterogeneous
> systems).

There's no continuing to reject those CPUs unfortunately; we
don't reject anything currently.  Any such systems would
experience a regression when moving to a kernel where E0PD is
enabled, which doesn't seem ideal.


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-10-09 17:52           ` Mark Brown
@ 2019-10-10 10:24             ` Suzuki K Poulose
  2019-10-10 16:04               ` Mark Brown
  0 siblings, 1 reply; 18+ messages in thread
From: Suzuki K Poulose @ 2019-10-10 10:24 UTC (permalink / raw)
  To: Mark Brown; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel

Hi Mark

On 09/10/2019 18:52, Mark Brown wrote:
> On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:
>> On 16/08/2019 13:10, Mark Brown wrote:
> 
>>> We definitely need some variable I think, and I think you're right that
>>> making the decision on the boot CPU would simplify things a lot.  The
> 
>> relocating the kernel. So, we may be able to perform "raw cpuid check" on
>> the boot CPU with MMU turned on, before we re-write the pagetables for KASLR
>> displacement and nG if that is needed (by maybe updating SWWAPPER_MMU_FLAGS) for
>> the boot CPU and store this information somewhere. Thus we may be able to
>> avoid another re-write of the pagetables after we have booted the secondaries.
> 
> The boot CPU is straightforward, there is only an issue on the
> secondaries where IIRC the rewrite code needs some updates as we
> get left with non-global mappings lying around.
> 
>> Discussing this with Catalin, he suggests to use a variable for the status
>> of "nG" flag for PTE/PMD_MAYBE_NG, to avoid calling the helper function
>> all the time. By using the per-CPU check we can make sure the flag is uptodate.
> 
> That was the discussion about the variable above.  We need one
> for non-optimization reasons anyway since we can't rely on
> checking the state on the current CPU.
> 
>> Also, we can continue to fail the hotplugged CPUs if we detect that the
>> pagetables are Global and the new CPU requires nG (for heterogeneous
>> systems).
> 
> There's no continuing to reject those CPUs unfortunately, we
> don't reject anything currently.  Any such systems would

In fact we do reject the hotplugged CPUs, after we have finalised
the capabilities for KPTI. So, I don't see how the behavior is different.

Cheers
Suzuki


> experience a regression when moving to a kernel where E0PD is
> enabled which doesn't seem ideal.


* Re: [PATCH v2 2/2] arm64: Don't use KPTI where we have E0PD
  2019-10-10 10:24             ` Suzuki K Poulose
@ 2019-10-10 16:04               ` Mark Brown
  0 siblings, 0 replies; 18+ messages in thread
From: Mark Brown @ 2019-10-10 16:04 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Thu, Oct 10, 2019 at 11:24:55AM +0100, Suzuki K Poulose wrote:
> On 09/10/2019 18:52, Mark Brown wrote:
> > On Tue, Sep 24, 2019 at 10:13:18AM +0100, Suzuki K Poulose wrote:

> > > Also, we can continue to fail the hotplugged CPUs if we detect that the
> > > pagetables are Global and the new CPU requires nG (for heterogeneous
> > > systems).

> > There's no continuing to reject those CPUs unfortunately, we
> > don't reject anything currently.  Any such systems would

> In fact we do reject the hotplugged CPUs, after we have finalised
> the capabilities for KPTI. So, I don't see how the behavior is different.

If we don't have E0PD we will always enable KPTI when KASLR has
been enabled at runtime, so a system with late CPUs without E0PD
will boot those CPUs as KPTI will have been enabled from the boot
CPU onwards.  When we add E0PD/KASLR integration and change to
only enabling KPTI when we encounter a CPU without E0PD, we could
potentially encounter a system where we no longer enable KPTI
during initial boot, due to having E0PD on all the CPUs we see at
that time, and then end up rejecting late CPUs which don't have
E0PD.  To be honest I'm not sure how realistic this is, and users
could work around it by explicitly forcing KPTI, but the potential
is there.


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-08-14 18:31 ` [PATCH v2 1/2] arm64: Add initial support for E0PD Mark Brown
@ 2019-10-10 16:13   ` Mark Rutland
  2019-10-11 11:17     ` Mark Brown
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Rutland @ 2019-10-10 16:13 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, Suzuki K Poulose

Hi Mark,

On Wed, Aug 14, 2019 at 07:31:02PM +0100, Mark Brown wrote:
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9323bcc40a58..62b01fc35ef6 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -219,6 +219,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>  };
>  
>  static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),

The presence of this entry means the system-wide value will be exposed
to a KVM guest, but since it's NONSTRICT we won't warn about a
late-onlined CPU that has a mismatch.

So if all the boot-time CPUs have E0PD, we can spawn a VM that starts
using E0PD, but we might (silently) later migrate it to a CPU without
E0PD, breaking the security guarantee.

I think we want this to be STRICT, so that we at least warn in such a
case.
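
Concretely, the suggestion would make the ftr_id_aa64mmfr2[] entry look
something like this (shown only as a sketch of the change being asked
for):

	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),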

More generally than this patch, I suspect we probably want to abort the
hotplug if we online a CPU that doesn't provide the same guarantees as
the sys_val for the field.

[...]

> @@ -1559,6 +1573,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
>  		.sign = FTR_UNSIGNED,
>  		.min_field_value = 1,
>  	},
> +#endif
> +#ifdef CONFIG_ARM64_E0PD
> +	{
> +		.desc = "E0PD",
> +		.capability = ARM64_HAS_E0PD,
> +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,

I suspect it would be better to treat this as a system-wide capability,
as with KPTI, which will make it much easier to reason about.

That would rule out having E0PD on a subset of CPUs, with or without
KPTI. With KPTI it's not really necessary, and without KPTI we don't
have a consistent guarantee, so that sounds reasonable to me.
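
As a sketch of that alternative, reusing the fields from the patch with
only the capability type changed (whether ARM64_CPUCAP_SYSTEM_FEATURE is
the exact type wanted here is an assumption of this illustration):

	{
		.desc = "E0PD",
		.capability = ARM64_HAS_E0PD,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.sys_reg = SYS_ID_AA64MMFR2_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
		.matches = has_cpuid_feature,
		.min_field_value = 1,
		.cpu_enable = cpu_enable_e0pd,
	},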

Thanks,
Mark.

> +		.sys_reg = SYS_ID_AA64MMFR2_EL1,
> +		.sign = FTR_UNSIGNED,
> +		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
> +		.matches = has_cpuid_feature,
> +		.min_field_value = 1,
> +		.cpu_enable = cpu_enable_e0pd,
> +	},
>  #endif
>  	{},
>  };
> -- 
> 2.20.1


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-10-10 16:13   ` Mark Rutland
@ 2019-10-11 11:17     ` Mark Brown
  2019-10-11 11:40       ` Will Deacon
  0 siblings, 1 reply; 18+ messages in thread
From: Mark Brown @ 2019-10-11 11:17 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, Suzuki K Poulose



On Thu, Oct 10, 2019 at 05:13:17PM +0100, Mark Rutland wrote:

> So if all the boot-time CPUs have E0PD, we can spawn a VM that starts
> using E0PD, but we might (silently) later migrate it to a CPU without
> E0PD, breaking the security guarantee.

> I think we want this to be STRICT, so that we at least warn in such a
> case.

> More generally than this patch, I suspect we probably want to abort the
> hotplug if we online a CPU that doesn't provide the same gaurantees as
> the sys_val for the field.

Right, if we make it STRICT we at least avoid that issue with KVM.

> > +#ifdef CONFIG_ARM64_E0PD
> > +	{
> > +		.desc = "E0PD",
> > +		.capability = ARM64_HAS_E0PD,
> > +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,

> I suspect it would be better to treat this as a system-wide capability,
> as with KPTI, which will make it much easier to reason about.

> That would rule out having E0PD on a subset of CPUs, with or without
> KPTI. With KPTI it's not really necessary, and without KPTI we don't
> have a consistent guarantee, so that sounds reasonable to me.

It does - the main motivation for doing it as a local feature was
to avoid the regression on systems with late CPUs that lack the
capability, which Will was concerned about, but I'm not sure how
realistic such systems actually are.


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-10-11 11:17     ` Mark Brown
@ 2019-10-11 11:40       ` Will Deacon
  2019-10-11 12:57         ` Mark Rutland
                           ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Will Deacon @ 2019-10-11 11:40 UTC (permalink / raw)
  To: Mark Brown
  Cc: Mark Rutland, Catalin Marinas, linux-arm-kernel, Suzuki K Poulose

On Fri, Oct 11, 2019 at 12:17:15PM +0100, Mark Brown wrote:
> On Thu, Oct 10, 2019 at 05:13:17PM +0100, Mark Rutland wrote:
> 
> > So if all the boot-time CPUs have E0PD, we can spawn a VM that starts
> > using E0PD, but we might (silently) later migrate it to a CPU without
> > E0PD, breaking the security guarantee.
> 
> > I think we want this to be STRICT, so that we at least warn in such a
> > case.
> 
> > More generally than this patch, I suspect we probably want to abort the
> > hotplug if we online a CPU that doesn't provide the same gaurantees as
> > the sys_val for the field.
> 
> Right, if we make it STRICT we at least avoid that issue with KVM.
> 
> > > +#ifdef CONFIG_ARM64_E0PD
> > > +	{
> > > +		.desc = "E0PD",
> > > +		.capability = ARM64_HAS_E0PD,
> > > +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
> 
> > I suspect it would be better to treat this as a system-wide capability,
> > as with KPTI, which will make it much easier to reason about.
> 
> > That would rule out having E0PD on a subset of CPUs, with or without
> > KPTI. With KPTI it's not really necessary, and without KPTI we don't
> > have a consistent guarantee, so that sounds reasonable to me.
> 
> It does - the main motivation for doing it as a local feature was
> to avoid the regression with systems with late CPUs that lack the
> capability which Will was concerned about but I'm not sure how
> realistic such systems actually are.

I think we need to handle the case where not all CPUs support E0PD :(

It's not like E0PD is something critical like support for a particular page
size, so I think it's a tough sell for SoC integrators to ensure that it's
matched up.

Will


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-10-11 11:40       ` Will Deacon
@ 2019-10-11 12:57         ` Mark Rutland
  2019-10-11 12:58         ` Catalin Marinas
  2019-10-11 13:46         ` Mark Brown
  2 siblings, 0 replies; 18+ messages in thread
From: Mark Rutland @ 2019-10-11 12:57 UTC (permalink / raw)
  To: Will Deacon
  Cc: Catalin Marinas, Mark Brown, linux-arm-kernel, Suzuki K Poulose

On Fri, Oct 11, 2019 at 12:40:13PM +0100, Will Deacon wrote:
> On Fri, Oct 11, 2019 at 12:17:15PM +0100, Mark Brown wrote:
> > On Thu, Oct 10, 2019 at 05:13:17PM +0100, Mark Rutland wrote:
> > 
> > > So if all the boot-time CPUs have E0PD, we can spawn a VM that starts
> > > using E0PD, but we might (silently) later migrate it to a CPU without
> > > E0PD, breaking the security guarantee.
> > 
> > > I think we want this to be STRICT, so that we at least warn in such a
> > > case.
> > 
> > > More generally than this patch, I suspect we probably want to abort the
> > > hotplug if we online a CPU that doesn't provide the same gaurantees as
> > > the sys_val for the field.
> > 
> > Right, if we make it STRICT we at least avoid that issue with KVM.
> > 
> > > > +#ifdef CONFIG_ARM64_E0PD
> > > > +	{
> > > > +		.desc = "E0PD",
> > > > +		.capability = ARM64_HAS_E0PD,
> > > > +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
> > 
> > > I suspect it would be better to treat this as a system-wide capability,
> > > as with KPTI, which will make it much easier to reason about.
> > 
> > > That would rule out having E0PD on a subset of CPUs, with or without
> > > KPTI. With KPTI it's not really necessary, and without KPTI we don't
> > > have a consistent guarantee, so that sounds reasonable to me.
> > 
> > It does - the main motivation for doing it as a local feature was
> > to avoid the regression with systems with late CPUs that lack the
> > capability which Will was concerned about but I'm not sure how
> > realistic such systems actually are.
> 
> I think we need to handle the case where not all CPUs support ExPD :(

I think we'll have to handle systems with mixed support, but:

* We should only make use of the E0PD feature when all the boot CPUs
  have it.

* We should reject late-onlining of a CPU without E0PD if we detected it
  at boot time and either used it or exposed it to guests.

If people are late-onlining mismatched secondaries they get to pick up
the pieces, as with other features.

Thanks,
Mark.


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-10-11 11:40       ` Will Deacon
  2019-10-11 12:57         ` Mark Rutland
@ 2019-10-11 12:58         ` Catalin Marinas
  2019-10-11 13:46         ` Mark Brown
  2 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2019-10-11 12:58 UTC (permalink / raw)
  To: Will Deacon; +Cc: Mark Rutland, Mark Brown, linux-arm-kernel, Suzuki K Poulose

On Fri, Oct 11, 2019 at 12:40:13PM +0100, Will Deacon wrote:
> On Fri, Oct 11, 2019 at 12:17:15PM +0100, Mark Brown wrote:
> > On Thu, Oct 10, 2019 at 05:13:17PM +0100, Mark Rutland wrote:
> > 
> > > So if all the boot-time CPUs have E0PD, we can spawn a VM that starts
> > > using E0PD, but we might (silently) later migrate it to a CPU without
> > > E0PD, breaking the security guarantee.
> > 
> > > I think we want this to be STRICT, so that we at least warn in such a
> > > case.
> > 
> > > More generally than this patch, I suspect we probably want to abort the
> > > hotplug if we online a CPU that doesn't provide the same gaurantees as
> > > the sys_val for the field.
> > 
> > Right, if we make it STRICT we at least avoid that issue with KVM.
> > 
> > > > +#ifdef CONFIG_ARM64_E0PD
> > > > +	{
> > > > +		.desc = "E0PD",
> > > > +		.capability = ARM64_HAS_E0PD,
> > > > +		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
> > 
> > > I suspect it would be better to treat this as a system-wide capability,
> > > as with KPTI, which will make it much easier to reason about.
> > 
> > > That would rule out having E0PD on a subset of CPUs, with or without
> > > KPTI. With KPTI it's not really necessary, and without KPTI we don't
> > > have a consistent guarantee, so that sounds reasonable to me.
> > 
> > It does - the main motivation for doing it as a local feature was
> > to avoid the regression with systems with late CPUs that lack the
> > capability which Will was concerned about but I'm not sure how
> > realistic such systems actually are.
> 
> I think we need to handle the case where not all CPUs support ExPD :(

That's fine but if such CPUs are brought up late and the kernel has
already made the decision to switch to global mappings, we should just
park them.

-- 
Catalin


* Re: [PATCH v2 1/2] arm64: Add initial support for E0PD
  2019-10-11 11:40       ` Will Deacon
  2019-10-11 12:57         ` Mark Rutland
  2019-10-11 12:58         ` Catalin Marinas
@ 2019-10-11 13:46         ` Mark Brown
  2 siblings, 0 replies; 18+ messages in thread
From: Mark Brown @ 2019-10-11 13:46 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Catalin Marinas, linux-arm-kernel, Suzuki K Poulose



On Fri, Oct 11, 2019 at 12:40:13PM +0100, Will Deacon wrote:
> On Fri, Oct 11, 2019 at 12:17:15PM +0100, Mark Brown wrote:

> > It does - the main motivation for doing it as a local feature was
> > to avoid the regression with systems with late CPUs that lack the
> > capability which Will was concerned about but I'm not sure how
> > realistic such systems actually are.

> I think we need to handle the case where not all CPUs support ExPD :(

> It's not like E0PD is something critical like support for a particular page
> size, so I think it's a tough sell for SoC integrators to ensure that it's
> matched up.

That's fine; we handle them by falling back to KPTI if we detect
any non-E0PD CPUs.  The only difference in the end result between
doing this by making it a system-wide capability and doing it as
a weak local capability is what happens if the first CPU that
appears without E0PD is a late CPU: instead of accepting the
CPU and enabling KPTI we will reject the CPU.

