linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v3 0/3] arm64: E0PD support
@ 2019-10-16 15:14 Mark Brown
  2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-16 15:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel

This series adds support for E0PD. We enable E0PD unconditionally on
systems where all the CPUs support it, and no longer enable KPTI by
default on systems where we have enabled E0PD.

v3: Make E0PD a system-wide feature.

Mark Brown (3):
      arm64: Factor out checks for KASLR in KPTI code into separate function
      arm64: Add initial support for E0PD
      arm64: Don't use KPTI where we have E0PD

 arch/arm64/Kconfig                     | 15 ++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 +-
 arch/arm64/include/asm/mmu.h           | 64 +++++++++++++++++++++++++---------
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 29 ++++++++++++++-
 6 files changed, 95 insertions(+), 19 deletions(-)

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function
  2019-10-16 15:14 [PATCH v3 0/3] arm64: E0PD support Mark Brown
@ 2019-10-16 15:14 ` Mark Brown
  2019-10-16 15:19   ` Mark Brown
  2019-10-18 12:01   ` Catalin Marinas
  2019-10-16 15:14 ` [PATCH v3 2/3] arm64: Add initial support for E0PD Mark Brown
  2019-10-16 15:14 ` [PATCH v3 3/3] arm64: Don't use KPTI where we have E0PD Mark Brown
  2 siblings, 2 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-16 15:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

In preparation for E0PD support, factor out the checks for the
interaction between KASLR and KPTI done in boot context into a new
function kaslr_requires_kpti(), in the process clarifying the
distinction between what we do in boot context and what we do at
runtime.
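
Concretely (illustration only, not part of the change itself; the real
modifications are in the diff below), the split after this patch looks
roughly like:

	/* Boot context (cpufeature.c): decide whether KPTI must be forced on */
	if (kaslr_requires_kpti())
		__kpti_forced = 1;

	/*
	 * Runtime / early page table attributes: PTE_MAYBE_NG keeps using
	 * arm64_kernel_use_ng_mappings(), which now delegates to the new
	 * helper until the CPU capabilities are finalised.
	 */
	pteval_t maybe_ng = arm64_kernel_use_ng_mappings() ? PTE_NG : 0;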

Signed-off-by: Mark Brown <broonie@kernel.org>
---

New patch.

 arch/arm64/include/asm/mmu.h   | 53 +++++++++++++++++++++++-----------
 arch/arm64/kernel/cpufeature.c |  2 +-
 2 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index f217e3292919..55e285fff262 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -35,10 +35,37 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
-static inline bool arm64_kernel_use_ng_mappings(void)
+static inline bool kaslr_requires_kpti(void)
 {
 	bool tx1_bug;
 
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return false;
+
+	/*
+	 * Systems affected by Cavium erratum 27456 are incompatible
+	 * with KPTI.
+	 */
+	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
+		tx1_bug = false;
+#ifndef MODULE
+	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
+		extern const struct midr_range cavium_erratum_27456_cpus[];
+
+		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
+						cavium_erratum_27456_cpus);
+#endif
+	} else {
+		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
+	}
+	if (tx1_bug)
+		return false;
+
+	return kaslr_offset() > 0;
+}
+
+static inline bool arm64_kernel_use_ng_mappings(void)
+{
 	/* What's a kpti? Use global mappings if we don't know. */
 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
 		return false;
@@ -52,29 +79,21 @@ static inline bool arm64_kernel_use_ng_mappings(void)
 	if (arm64_kernel_unmapped_at_el0())
 		return true;
 
-	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+	/*
+	 * Once we are far enough into boot for capabilities to be
+	 * ready we will have confirmed if we are using non-global
+	 * mappings so don't need to consider anything else here.
+	 */
+	if (static_branch_likely(&arm64_const_caps_ready))
 		return false;
 
 	/*
 	 * KASLR is enabled so we're going to be enabling kpti on non-broken
 	 * CPUs regardless of their susceptibility to Meltdown. Rather
 	 * than force everybody to go through the G -> nG dance later on,
-	 * just put down non-global mappings from the beginning.
+	 * just put down non-global mappings from the beginning
 	 */
-	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
-		tx1_bug = false;
-#ifndef MODULE
-	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
-		extern const struct midr_range cavium_erratum_27456_cpus[];
-
-		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
-						cavium_erratum_27456_cpus);
-#endif
-	} else {
-		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
-	}
-
-	return !tx1_bug && kaslr_offset() > 0;
+	return kaslr_requires_kpti();
 }
 
 typedef void (*bp_hardening_cb_t)(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index cabebf1a7976..db311a34a21e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1002,7 +1002,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	}
 
 	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+	if (kaslr_requires_kpti()) {
 		if (!__kpti_forced) {
 			str = "KASLR";
 			__kpti_forced = 1;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 2/3] arm64: Add initial support for E0PD
  2019-10-16 15:14 [PATCH v3 0/3] arm64: E0PD support Mark Brown
  2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
@ 2019-10-16 15:14 ` Mark Brown
  2019-10-16 15:14 ` [PATCH v3 3/3] arm64: Don't use KPTI where we have E0PD Mark Brown
  2 siblings, 0 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-16 15:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

Kernel Page Table Isolation (KPTI) is used to mitigate some
speculation-based security issues by ensuring that the kernel is not
mapped when userspace is running, but this approach is expensive and is
incompatible with SPE.  E0PD, introduced in the ARMv8.5 extensions,
provides an alternative which ensures that accesses from userspace to
the kernel's half of the memory map always fault in constant time,
preventing timing attacks without requiring constant unmapping and
remapping or blocking legitimate accesses.
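
As a rough illustration (not part of the patch; the capability
detection itself goes through the cpufeature framework in the diff
below), the effect on a CPU implementing E0PD boils down to:

	/*
	 * Illustrative sketch only: check the local CPU's ID register
	 * field and, if E0PD is implemented, disable EL0 walks of the
	 * kernel (TTBR1) half of the address space.
	 */
	u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

	if ((mmfr2 >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
		/* EL0 accesses via TTBR1 now fault in constant time */
		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);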

Currently this feature will only be enabled if all CPUs in the system
support E0PD; if some CPUs do not support the feature at boot time then
it will not be enabled, and in the unlikely event that a late CPU is
the first CPU to lack the feature then that CPU will be rejected.

This initial patch does not yet integrate with KPTI; that will be dealt
with in followup patches.  Ideally we could ensure that by default we
don't use KPTI on CPUs where E0PD is present.

Signed-off-by: Mark Brown <broonie@kernel.org>
---

v3: Make E0PD a system feature.

 arch/arm64/Kconfig                     | 15 ++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
 5 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea62c87d9ced..cd5be8fe1cda 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1426,6 +1426,21 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARM64_E0PD
+	bool "Enable support for E0PD"
+	default y
+	help
+	   E0PD (part of the ARMv8.5 extensions) allows us to ensure
+	   that EL0 accesses made via TTBR1 always fault in constant time,
+	   providing similar benefits to KPTI with lower overhead and without
+	   disrupting legitimate access to kernel memory such as SPE.
+
+	   This option enables E0PD for TTBR1 where available.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b9acc4..f25388981075 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_E0PD				45
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				46
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 3df60f97da1f..685842e52c3d 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -292,6 +292,8 @@
 #define TCR_HD			(UL(1) << 40)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
+#define TCR_E0PD0		(UL(1) << 55)
+#define TCR_E0PD1		(UL(1) << 56)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 972d196c7714..36227a5a22ba 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -655,6 +655,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index db311a34a21e..0d551af06421 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -220,6 +220,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1245,6 +1246,19 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#ifdef CONFIG_ARM64_E0PD
+static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
+{
+	/*
+	 * The cpu_enable() callback gets called even on CPUs that
+	 * don't detect the feature so we need to verify if we can
+	 * enable.
+	 */
+	if (this_cpu_has_cap(ARM64_HAS_E0PD))
+		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
+}
+#endif /* CONFIG_ARM64_E0PD */
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool enable_pseudo_nmi;
 
@@ -1560,6 +1574,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARM64_E0PD
+	{
+		.desc = "E0PD",
+		.capability = ARM64_HAS_E0PD,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_e0pd,
+	},
 #endif
 	{},
 };
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 3/3] arm64: Don't use KPTI where we have E0PD
  2019-10-16 15:14 [PATCH v3 0/3] arm64: E0PD support Mark Brown
  2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
  2019-10-16 15:14 ` [PATCH v3 2/3] arm64: Add initial support for E0PD Mark Brown
@ 2019-10-16 15:14 ` Mark Brown
  2 siblings, 0 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-16 15:14 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

Since E0PD is intended to fulfil the same role as KPTI we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled to check for
E0PD before doing so; CPUs with E0PD are not expected to be affected by
Meltdown so should not need to enable KPTI for other reasons.
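
Expressed for clarity with the generic
cpuid_feature_extract_unsigned_field() helper (illustration only; the
mmu.h diff below open codes the shift and mask), the new early check
amounts to:

	u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

	/* The local CPU implements E0PD, no need to force KPTI for KASLR */
	if (cpuid_feature_extract_unsigned_field(mmfr2,
						 ID_AA64MMFR2_E0PD_SHIFT))
		return false;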

Since E0PD is a system capability we will still enable KPTI if any of
the CPUs in the system lacks E0PD; this will rewrite any global
mappings that were established on systems where some but not all CPUs
support E0PD.  We may transiently have a mix of global and non-global
mappings while booting, since we use the local CPU when deciding if
KPTI will be required prior to completing CPU enumeration, but any
global mappings will be converted to non-global ones when KPTI is
applied.

KPTI can still be forced on from the command line if required.
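
For example (assuming the existing kpti= early parameter), booting with

	kpti=1

on the kernel command line still forces KPTI on even where E0PD is
present, and kpti=0 still forces it off.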

Signed-off-by: Mark Brown <broonie@kernel.org>
---

v3: Greatly simplified as a result of E0PD becoming a system feature.

 arch/arm64/include/asm/mmu.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 55e285fff262..d61908bf4c9c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,10 +38,21 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 static inline bool kaslr_requires_kpti(void)
 {
 	bool tx1_bug;
+	u64 ftr;
 
 	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return false;
 
+	/*
+	 * E0PD does a similar job to KPTI so can be used instead
+	 * where available.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
 	/*
 	 * Systems affected by Cavium erratum 27456 are incompatible
 	 * with KPTI.
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function
  2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
@ 2019-10-16 15:19   ` Mark Brown
  2019-10-18 12:01   ` Catalin Marinas
  1 sibling, 0 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-16 15:19 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel



On Wed, Oct 16, 2019 at 04:14:19PM +0100, Mark Brown wrote:
> In preparation for E0PD support, factor out the checks for the
> interaction between KASLR and KPTI done in boot context into a new
> function kaslr_requires_kpti(), in the process clarifying the
> distinction between what we do in boot context and what we do at
> runtime.

Sorry, whatever I did with git send-email appears to have eaten my cover
letter on this series - the text was:

This series adds support for E0PD. We enable E0PD unconditionally on
systems where all the CPUs support it, and no longer enable KPTI by
default on systems where we have enabled E0PD.

v3: Make E0PD a system-wide feature.


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function
  2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
  2019-10-16 15:19   ` Mark Brown
@ 2019-10-18 12:01   ` Catalin Marinas
  2019-10-18 12:22     ` Mark Brown
  1 sibling, 1 reply; 7+ messages in thread
From: Catalin Marinas @ 2019-10-18 12:01 UTC (permalink / raw)
  To: Mark Brown; +Cc: Will Deacon, linux-arm-kernel

On Wed, Oct 16, 2019 at 04:14:19PM +0100, Mark Brown wrote:
> +static inline bool arm64_kernel_use_ng_mappings(void)
> +{
>  	/* What's a kpti? Use global mappings if we don't know. */
>  	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
>  		return false;
> @@ -52,29 +79,21 @@ static inline bool arm64_kernel_use_ng_mappings(void)
>  	if (arm64_kernel_unmapped_at_el0())
>  		return true;
>  
> -	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> +	/*
> +	 * Once we are far enough into boot for capabilities to be
> +	 * ready we will have confirmed if we are using non-global
> +	 * mappings so don't need to consider anything else here.
> +	 */
> +	if (static_branch_likely(&arm64_const_caps_ready))
>  		return false;
>  
>  	/*
>  	 * KASLR is enabled so we're going to be enabling kpti on non-broken
>  	 * CPUs regardless of their susceptibility to Meltdown. Rather
>  	 * than force everybody to go through the G -> nG dance later on,
> -	 * just put down non-global mappings from the beginning.
> +	 * just put down non-global mappings from the beginning
>  	 */
> -	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> -		tx1_bug = false;
> -#ifndef MODULE
> -	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
> -		extern const struct midr_range cavium_erratum_27456_cpus[];
> -
> -		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
> -						cavium_erratum_27456_cpus);
> -#endif
> -	} else {
> -		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
> -	}
> -
> -	return !tx1_bug && kaslr_offset() > 0;
> +	return kaslr_requires_kpti();
>  }

While that's a step in the right direction, I'd like to see
{PTE,PMD}_MAYBE_NG move away from the current use of
arm64_kernel_use_ng_mappings(). These macros are used early during
boot and we seem to rely on cpu_hwcaps not being populated yet
(arm64_kernel_unmapped_at_el0() checking it via cpus_have_const_cap()).

Could we have a global variable (boot or a pgtable attr type) which we
populate during early boot and subsequently use in the PTE_MAYBE_NG
macro?
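
Something along these lines, say (rough sketch only, names invented
for illustration; the variable would be populated once during early
boot before the kernel page tables are created):

	extern bool arm64_use_ng_mappings;	/* hypothetical early boot flag */

	#define PTE_MAYBE_NG	(arm64_use_ng_mappings ? PTE_NG : 0)
	#define PMD_MAYBE_NG	(arm64_use_ng_mappings ? PMD_SECT_NG : 0)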

Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function
  2019-10-18 12:01   ` Catalin Marinas
@ 2019-10-18 12:22     ` Mark Brown
  0 siblings, 0 replies; 7+ messages in thread
From: Mark Brown @ 2019-10-18 12:22 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Will Deacon, linux-arm-kernel



On Fri, Oct 18, 2019 at 01:01:02PM +0100, Catalin Marinas wrote:
> On Wed, Oct 16, 2019 at 04:14:19PM +0100, Mark Brown wrote:

> >  	if (arm64_kernel_unmapped_at_el0())
> >  		return true;

> > -	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> > +	/*
> > +	 * Once we are far enough into boot for capabilities to be
> > +	 * ready we will have confirmed if we are using non-global
> > +	 * mappings so don't need to consider anything else here.
> > +	 */
> > +	if (static_branch_likely(&arm64_const_caps_ready))
> >  		return false;

> While that's a step in the right direction, I'd like to see
> {PTE,PMD}_MAYBE_NG move away from the current use of
> arm64_kernel_use_ng_mappings(). These macros are used during early
> during boot and we seem to rely on cpu_hwcaps not being populated yet
> (arm64_kernel_unmapped_at_el0() checking it via cpus_have_const_cap()).

> Could we have a global variable (boot or a pgtable attr type) which we
> populate during early boot and subsequently use in the PTE_MAYBE_NG
> macro?

I did look at that, but figured that now that E0PD is a system
capability the existing variables used by the cpu capabilities code
end up being set at pretty much the same time, so we could just reuse
them and avoid having to worry about anything getting out of sync or
about whether the flip to non-global mappings is done in a
concurrency-safe manner.


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2019-10-18 12:22 UTC | newest]

Thread overview: 7+ messages
2019-10-16 15:14 [PATCH v3 0/3] arm64: E0PD support Mark Brown
2019-10-16 15:14 ` [PATCH v3 1/3] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
2019-10-16 15:19   ` Mark Brown
2019-10-18 12:01   ` Catalin Marinas
2019-10-18 12:22     ` Mark Brown
2019-10-16 15:14 ` [PATCH v3 2/3] arm64: Add initial support for E0PD Mark Brown
2019-10-16 15:14 ` [PATCH v3 3/3] arm64: Don't use KPTI where we have E0PD Mark Brown
