* [PATCH v7 0/4] E0PD support
@ 2019-11-06 13:00 Mark Brown
  2019-11-06 13:00 ` [PATCH v7 1/4] arm64: Add initial support for E0PD Mark Brown
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Mark Brown @ 2019-11-06 13:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

This series adds support for E0PD. We enable E0PD on systems where all
the CPUs support it, and change to not enabling KPTI by default on
systems where we have enabled E0PD. It also converts the runtime checks
for use of non-global mappings into a variable.

v7: Move early initialization to the start of setup_arch()
v6: Move early initialization earlier.
v5: Rebase on v5.4-rc3 (there will be conflicts in cpucaps.h) and add
    code in patch 4 to start making non-global mappings from boot if we
    know then that we will need KPTI, preserving an existing performance
    improvement.
v4: Use a variable to store our decision about using non-global mappings
    rather than rechecking constantly at runtime. I've added this as a
    separate patch mainly for bisection.
v3: Make E0PD a system wide feature.

Mark Brown (4):
  arm64: Add initial support for E0PD
  arm64: Factor out checks for KASLR in KPTI code into separate function
  arm64: Don't use KPTI where we have E0PD
  arm64: Use a variable to store non-global mappings decision

 arch/arm64/Kconfig                     | 15 ++++++
 arch/arm64/include/asm/cpucaps.h       |  3 +-
 arch/arm64/include/asm/mmu.h           | 48 ++----------------
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +
 arch/arm64/include/asm/pgtable-prot.h  |  4 +-
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 70 ++++++++++++++++++++++++--
 arch/arm64/kernel/setup.c              |  7 +++
 8 files changed, 99 insertions(+), 51 deletions(-)

-- 
2.20.1



* [PATCH v7 1/4] arm64: Add initial support for E0PD
  2019-11-06 13:00 [PATCH v7 0/4] E0PD support Mark Brown
@ 2019-11-06 13:00 ` Mark Brown
  2019-11-07 10:12   ` Suzuki K Poulose
  2019-11-06 13:00 ` [PATCH v7 2/4] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Mark Brown @ 2019-11-06 13:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

Kernel Page Table Isolation (KPTI) is used to mitigate some
speculation-based security issues by ensuring that the kernel is not
mapped when userspace is running, but this approach is expensive and is
incompatible with SPE.  E0PD, introduced in the ARMv8.5 extensions,
provides an alternative which ensures that accesses from userspace to
the kernel's half of the memory map always fault in constant time,
preventing timing attacks without requiring constant unmapping and
remapping or preventing legitimate accesses.

Currently this feature will only be enabled if all CPUs in the system
support E0PD.  If some CPUs do not support the feature at boot time then
the feature will not be enabled, and in the unlikely event that a late
CPU is the first CPU to lack the feature then we will reject that CPU.

This initial patch does not yet integrate with KPTI; this will be dealt
with in followup patches.  Ideally we could ensure that by default we
don't use KPTI on CPUs where E0PD is present.
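
Condensed from the patch below, the mechanism amounts to setting a
single TCR bit on each CPU (a sketch, not additional code):

	#define TCR_E0PD1	(UL(1) << 56)	/* EL0 accesses via TTBR1 fault */

	/* in the capability's cpu_enable() callback: */
	sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);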

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/Kconfig                     | 15 ++++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/include/asm/sysreg.h        |  1 +
 arch/arm64/kernel/cpufeature.c         | 27 ++++++++++++++++++++++++++
 5 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 950a56b71ff0..9f881acb7acf 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1421,6 +1421,21 @@ config ARM64_PTR_AUTH
 
 endmenu
 
+menu "ARMv8.5 architectural features"
+
+config ARM64_E0PD
+	bool "Enable support for E0PD"
+	default y
+	help
+	   E0PD (part of the ARMv8.5 extensions) allows us to ensure
+	   that EL0 accesses made via TTBR1 always fault in constant time,
+	   providing similar benefits to KPTI with lower overhead and without
+	   disrupting legitimate access to kernel memory such as SPE.
+
+	   This option enables E0PD for TTBR1 where available.
+
+endmenu
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f19fe4b9acc4..f25388981075 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,7 +52,8 @@
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
 #define ARM64_WORKAROUND_1463225		44
+#define ARM64_HAS_E0PD				45
 
-#define ARM64_NCAPS				45
+#define ARM64_NCAPS				46
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 3df60f97da1f..685842e52c3d 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -292,6 +292,8 @@
 #define TCR_HD			(UL(1) << 40)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
+#define TCR_E0PD0		(UL(1) << 55)
+#define TCR_E0PD1		(UL(1) << 56)
 
 /*
  * TTBR.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 972d196c7714..36227a5a22ba 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -655,6 +655,7 @@
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
 /* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index cabebf1a7976..2cf2b129ebb4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -220,6 +220,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1245,6 +1246,19 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
+#ifdef CONFIG_ARM64_E0PD
+static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
+{
+	/*
+	 * The cpu_enable() callback gets called even on CPUs that
+	 * don't detect the feature so we need to verify if we can
+	 * enable.
+	 */
+	if (this_cpu_has_cap(ARM64_HAS_E0PD))
+		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
+}
+#endif /* CONFIG_ARM64_E0PD */
+
 #ifdef CONFIG_ARM64_PSEUDO_NMI
 static bool enable_pseudo_nmi;
 
@@ -1560,6 +1574,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
 	},
+#endif
+#ifdef CONFIG_ARM64_E0PD
+	{
+		.desc = "E0PD",
+		.capability = ARM64_HAS_E0PD,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_e0pd,
+	},
 #endif
 	{},
 };
-- 
2.20.1



* [PATCH v7 2/4] arm64: Factor out checks for KASLR in KPTI code into separate function
  2019-11-06 13:00 [PATCH v7 0/4] E0PD support Mark Brown
  2019-11-06 13:00 ` [PATCH v7 1/4] arm64: Add initial support for E0PD Mark Brown
@ 2019-11-06 13:00 ` Mark Brown
  2019-11-06 13:00 ` [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD Mark Brown
  2019-11-06 13:00 ` [PATCH v7 4/4] arm64: Use a variable to store non-global mappings decision Mark Brown
  3 siblings, 0 replies; 13+ messages in thread
From: Mark Brown @ 2019-11-06 13:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

In preparation for integrating E0PD support with KASLR, factor out the
checks for the interaction between KASLR and KPTI done in boot context
into a new function, kaslr_requires_kpti(), clarifying in the process
the distinction between what we do in boot context and what we do at
runtime.
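
Condensed from the diff below, the helper ends up with callers in both
contexts (a sketch only):

	/* boot context: deciding whether to create non-global mappings */
	static inline bool arm64_kernel_use_ng_mappings(void)
	{
		...
		return kaslr_requires_kpti();
	}

	/* capability detection: deciding whether to force KPTI on */
	if (kaslr_requires_kpti()) {
		str = "KASLR";
		__kpti_forced = 1;
	}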

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/mmu.h   | 53 +++++++++++++++++++++++-----------
 arch/arm64/kernel/cpufeature.c |  2 +-
 2 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index f217e3292919..55e285fff262 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -35,10 +35,37 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
-static inline bool arm64_kernel_use_ng_mappings(void)
+static inline bool kaslr_requires_kpti(void)
 {
 	bool tx1_bug;
 
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return false;
+
+	/*
+	 * Systems affected by Cavium erratum 27456 are incompatible
+	 * with KPTI.
+	 */
+	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
+		tx1_bug = false;
+#ifndef MODULE
+	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
+		extern const struct midr_range cavium_erratum_27456_cpus[];
+
+		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
+						cavium_erratum_27456_cpus);
+#endif
+	} else {
+		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
+	}
+	if (tx1_bug)
+		return false;
+
+	return kaslr_offset() > 0;
+}
+
+static inline bool arm64_kernel_use_ng_mappings(void)
+{
 	/* What's a kpti? Use global mappings if we don't know. */
 	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
 		return false;
@@ -52,29 +79,21 @@ static inline bool arm64_kernel_use_ng_mappings(void)
 	if (arm64_kernel_unmapped_at_el0())
 		return true;
 
-	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+	/*
+	 * Once we are far enough into boot for capabilities to be
+	 * ready we will have confirmed if we are using non-global
+	 * mappings so don't need to consider anything else here.
+	 */
+	if (static_branch_likely(&arm64_const_caps_ready))
 		return false;
 
 	/*
 	 * KASLR is enabled so we're going to be enabling kpti on non-broken
 	 * CPUs regardless of their susceptibility to Meltdown. Rather
 	 * than force everybody to go through the G -> nG dance later on,
-	 * just put down non-global mappings from the beginning.
+	 * just put down non-global mappings from the beginning.
 	 */
-	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
-		tx1_bug = false;
-#ifndef MODULE
-	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
-		extern const struct midr_range cavium_erratum_27456_cpus[];
-
-		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
-						cavium_erratum_27456_cpus);
-#endif
-	} else {
-		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
-	}
-
-	return !tx1_bug && kaslr_offset() > 0;
+	return kaslr_requires_kpti();
 }
 
 typedef void (*bp_hardening_cb_t)(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2cf2b129ebb4..0d551af06421 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1003,7 +1003,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	}
 
 	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+	if (kaslr_requires_kpti()) {
 		if (!__kpti_forced) {
 			str = "KASLR";
 			__kpti_forced = 1;
-- 
2.20.1



* [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-06 13:00 [PATCH v7 0/4] E0PD support Mark Brown
  2019-11-06 13:00 ` [PATCH v7 1/4] arm64: Add initial support for E0PD Mark Brown
  2019-11-06 13:00 ` [PATCH v7 2/4] arm64: Factor out checks for KASLR in KPTI code into separate function Mark Brown
@ 2019-11-06 13:00 ` Mark Brown
  2019-11-07 12:01   ` Suzuki K Poulose
  2019-11-06 13:00 ` [PATCH v7 4/4] arm64: Use a variable to store non-global mappings decision Mark Brown
  3 siblings, 1 reply; 13+ messages in thread
From: Mark Brown @ 2019-11-06 13:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

Since E0PD is intended to fulfil the same role as KPTI we don't need to
use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
Change the check that forces KPTI on when KASLR is enabled to check for
E0PD before doing so: CPUs with E0PD are not expected to be affected by
Meltdown, so they should not need to enable KPTI for other reasons.

Since E0PD is a system capability we will still enable KPTI if any of
the CPUs in the system lacks E0PD; this will rewrite any global mappings
that were established in systems where some but not all CPUs support
E0PD.  We may transiently have a mix of global and non-global mappings
while booting, since we use the local CPU when deciding if KPTI will be
required prior to completing CPU enumeration, but any global mappings
will be converted to non-global ones when KPTI is applied.

KPTI can still be forced on from the command line if required.
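
For example, using the existing arm64 "kpti=" early parameter (which
this series does not change):

	kpti=1		# kernel command line: force KPTI on despite E0PD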

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/mmu.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 55e285fff262..d61908bf4c9c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -38,10 +38,21 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 static inline bool kaslr_requires_kpti(void)
 {
 	bool tx1_bug;
+	u64 ftr;
 
 	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
 		return false;
 
+	/*
+	 * E0PD does a similar job to KPTI so can be used instead
+	 * where available.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
 	/*
 	 * Systems affected by Cavium erratum 27456 are incompatible
 	 * with KPTI.
-- 
2.20.1



* [PATCH v7 4/4] arm64: Use a variable to store non-global mappings decision
  2019-11-06 13:00 [PATCH v7 0/4] E0PD support Mark Brown
                   ` (2 preceding siblings ...)
  2019-11-06 13:00 ` [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD Mark Brown
@ 2019-11-06 13:00 ` Mark Brown
  2019-11-07 11:11   ` Suzuki K Poulose
  3 siblings, 1 reply; 13+ messages in thread
From: Mark Brown @ 2019-11-06 13:00 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: Mark Brown, linux-arm-kernel

Refactor the code which checks to see if we need to use non-global
mappings to use a variable instead of checking with the CPU capabilities
each time, doing the initial check for KPTI early in boot, before we
start allocating memory, so we still avoid transitioning to non-global
mappings in common cases.

Since this variable always matches our decision about non-global
mappings this means we can also combine arm64_kernel_use_ng_mappings()
and arm64_kernel_unmapped_at_el0() into a single function; the variable
simply stores the result and the decision code is elsewhere.  We could
just have the users check the variable directly but having a function
makes it clear that these uses are read-only.

The result is that we simplify the code a bit and reduce the amount of
code executed at runtime.
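
Condensed from the diff below, the end result has this shape (a sketch
only):

	extern bool arm64_use_ng_mappings;	/* set once, early in boot */

	static inline bool arm64_kernel_unmapped_at_el0(void)
	{
		return arm64_use_ng_mappings;	/* read-only accessor */
	}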

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/mmu.h          | 78 ++-------------------------
 arch/arm64/include/asm/pgtable-prot.h |  4 +-
 arch/arm64/kernel/cpufeature.c        | 41 ++++++++++++--
 arch/arm64/kernel/setup.c             |  7 +++
 4 files changed, 51 insertions(+), 79 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index d61908bf4c9c..e4d862420bb4 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -29,82 +29,11 @@ typedef struct {
  */
 #define ASID(mm)	((mm)->context.id.counter & 0xffff)
 
-static inline bool arm64_kernel_unmapped_at_el0(void)
-{
-	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
-	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
-}
+extern bool arm64_use_ng_mappings;
 
-static inline bool kaslr_requires_kpti(void)
-{
-	bool tx1_bug;
-	u64 ftr;
-
-	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
-		return false;
-
-	/*
-	 * E0PD does a similar job to KPTI so can be used instead
-	 * where available.
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
-		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
-		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
-			return false;
-	}
-
-	/*
-	 * Systems affected by Cavium erratum 27456 are incompatible
-	 * with KPTI.
-	 */
-	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
-		tx1_bug = false;
-#ifndef MODULE
-	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
-		extern const struct midr_range cavium_erratum_27456_cpus[];
-
-		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
-						cavium_erratum_27456_cpus);
-#endif
-	} else {
-		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
-	}
-	if (tx1_bug)
-		return false;
-
-	return kaslr_offset() > 0;
-}
-
-static inline bool arm64_kernel_use_ng_mappings(void)
+static inline bool arm64_kernel_unmapped_at_el0(void)
 {
-	/* What's a kpti? Use global mappings if we don't know. */
-	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		return false;
-
-	/*
-	 * Note: this function is called before the CPU capabilities have
-	 * been configured, so our early mappings will be global. If we
-	 * later determine that kpti is required, then
-	 * kpti_install_ng_mappings() will make them non-global.
-	 */
-	if (arm64_kernel_unmapped_at_el0())
-		return true;
-
-	/*
-	 * Once we are far enough into boot for capabilities to be
-	 * ready we will have confirmed if we are using non-global
-	 * mappings so don't need to consider anything else here.
-	 */
-	if (static_branch_likely(&arm64_const_caps_ready))
-		return false;
-
-	/*
-	 * KASLR is enabled so we're going to be enabling kpti on non-broken
-	 * CPUs regardless of their susceptibility to Meltdown. Rather
-	 * than force everybody to go through the G -> nG dance later on,
-	 * just put down non-global mappings from the beginning.
-	 */
-	return kaslr_requires_kpti();
+	return arm64_use_ng_mappings;
 }
 
 typedef void (*bp_hardening_cb_t)(void);
@@ -158,6 +87,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
+extern bool kaslr_requires_kpti(void);
 
 #define INIT_MM_CONTEXT(name)	\
 	.pgd = init_pg_dir,
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9a21b84536f2..eb1c6f83343d 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -26,8 +26,8 @@
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#define PTE_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PTE_NG : 0)
-#define PMD_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PMD_SECT_NG : 0)
+#define PTE_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
+#define PMD_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
 
 #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
 #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0d551af06421..7ee7cd8b32a0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -47,6 +47,9 @@ static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM6
 /* Need also bit for ARM64_CB_PATCH */
 DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 
+bool arm64_use_ng_mappings = false;
+EXPORT_SYMBOL(arm64_use_ng_mappings);
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -961,6 +964,39 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
+bool kaslr_requires_kpti(void)
+{
+	bool tx1_bug;
+	u64 ftr;
+
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return false;
+
+	/*
+	 * E0PD does a similar job to KPTI so can be used instead
+	 * where available.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
+		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
+			return false;
+	}
+
+	/*
+	 * Systems affected by Cavium erratum 27456 are incompatible
+	 * with KPTI.
+	 */
+	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
+		tx1_bug = false;
+	} else {
+		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
+	}
+	if (tx1_bug)
+		return false;
+
+	return kaslr_offset() > 0;
+}
+
 static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
@@ -1038,7 +1074,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
 	kpti_remap_fn *remap_fn;
 
-	static bool kpti_applied = false;
 	int cpu = smp_processor_id();
 
 	/*
@@ -1046,7 +1081,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	 * it already or we have KASLR enabled and therefore have not
 	 * created any global mappings at all.
 	 */
-	if (kpti_applied || kaslr_offset() > 0)
+	if (arm64_use_ng_mappings)
 		return;
 
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
@@ -1056,7 +1091,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	cpu_uninstall_idmap();
 
 	if (!cpu)
-		kpti_applied = true;
+		arm64_use_ng_mappings = true;
 
 	return;
 }
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 56f664561754..f28308c9882c 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -285,6 +285,13 @@ void __init setup_arch(char **cmdline_p)
 
 	*cmdline_p = boot_command_line;
 
+        /*
+         * If we know now that we are going to need KPTI then use non-global
+         * mappings from the start, avoiding the cost of rewriting
+         * everything later.
+         */
+        arm64_use_ng_mappings = kaslr_requires_kpti();
+
 	early_fixmap_init();
 	early_ioremap_init();
 
-- 
2.20.1



* Re: [PATCH v7 1/4] arm64: Add initial support for E0PD
  2019-11-06 13:00 ` [PATCH v7 1/4] arm64: Add initial support for E0PD Mark Brown
@ 2019-11-07 10:12   ` Suzuki K Poulose
  2019-11-07 11:55     ` Mark Brown
  0 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2019-11-07 10:12 UTC (permalink / raw)
  To: Mark Brown, Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel



On 06/11/2019 13:00, Mark Brown wrote:
> Kernel Page Table Isolation (KPTI) is used to mitigate some
> speculation-based security issues by ensuring that the kernel is not
> mapped when userspace is running, but this approach is expensive and is
> incompatible with SPE.  E0PD, introduced in the ARMv8.5 extensions,
> provides an alternative which ensures that accesses from userspace to
> the kernel's half of the memory map always fault in constant time,
> preventing timing attacks without requiring constant unmapping and
> remapping or preventing legitimate accesses.
> 
> Currently this feature will only be enabled if all CPUs in the system
> support E0PD.  If some CPUs do not support the feature at boot time then
> the feature will not be enabled, and in the unlikely event that a late
> CPU is the first CPU to lack the feature then we will reject that CPU.
> 
> This initial patch does not yet integrate with KPTI; this will be dealt
> with in followup patches.  Ideally we could ensure that by default we
> don't use KPTI on CPUs where E0PD is present.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>


>   /* id_aa64mmfr2 */
> +#define ID_AA64MMFR2_E0PD_SHIFT		60
>   #define ID_AA64MMFR2_FWB_SHIFT		40
>   #define ID_AA64MMFR2_AT_SHIFT		32
>   #define ID_AA64MMFR2_LVA_SHIFT		16
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index cabebf1a7976..2cf2b129ebb4 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -220,6 +220,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>   };
>   
>   static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
>   	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> @@ -1245,6 +1246,19 @@ static void cpu_enable_address_auth(struct arm64_cpu_capabilities const *cap)
>   }
>   #endif /* CONFIG_ARM64_PTR_AUTH */
>   
> +#ifdef CONFIG_ARM64_E0PD
> +static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
> +{
> +	/*
> +	 * The cpu_enable() callback gets called even on CPUs that
> +	 * don't detect the feature so we need to verify if we can
> +	 * enable.
> +	 */
> +	if (this_cpu_has_cap(ARM64_HAS_E0PD))
> +		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
> +}

Given that this is a SYSTEM_FEATURE now, we don't need the extra check.
All CPUs are guaranteed to have the feature; otherwise they would be
rejected early.
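
In other words the callback could reduce to something like this (a
sketch only):

	static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
	{
		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
	}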

Rest looks fine to me.

Suzuki


* Re: [PATCH v7 4/4] arm64: Use a variable to store non-global mappings decision
  2019-11-06 13:00 ` [PATCH v7 4/4] arm64: Use a variable to store non-global mappings decision Mark Brown
@ 2019-11-07 11:11   ` Suzuki K Poulose
  0 siblings, 0 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2019-11-07 11:11 UTC (permalink / raw)
  To: Mark Brown, Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel

Hi Mark,

On 06/11/2019 13:00, Mark Brown wrote:
> Refactor the code which checks to see if we need to use non-global
> mappings to use a variable instead of checking with the CPU capabilities
> each time, doing the initial check for KPTI early in boot, before we
> start allocating memory, so we still avoid transitioning to non-global
> mappings in common cases.
> 
> Since this variable always matches our decision about non-global
> mappings this means we can also combine arm64_kernel_use_ng_mappings()
> and arm64_kernel_unmapped_at_el0() into a single function; the variable
> simply stores the result and the decision code is elsewhere.  We could
> just have the users check the variable directly but having a function
> makes it clear that these uses are read-only.
> 
> The result is that we simplify the code a bit and reduce the amount of
> code executed at runtime.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>   arch/arm64/include/asm/mmu.h          | 78 ++-------------------------
>   arch/arm64/include/asm/pgtable-prot.h |  4 +-
>   arch/arm64/kernel/cpufeature.c        | 41 ++++++++++++--
>   arch/arm64/kernel/setup.c             |  7 +++
>   4 files changed, 51 insertions(+), 79 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index d61908bf4c9c..e4d862420bb4 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -29,82 +29,11 @@ typedef struct {
>    */
>   #define ASID(mm)	((mm)->context.id.counter & 0xffff)
>   
> -static inline bool arm64_kernel_unmapped_at_el0(void)
> -{
> -	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
> -	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
> -}
> +extern bool arm64_use_ng_mappings;
>   
> -static inline bool kaslr_requires_kpti(void)
> -{
> -	bool tx1_bug;
> -	u64 ftr;
> -
> -	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> -		return false;
> -
> -	/*
> -	 * E0PD does a similar job to KPTI so can be used instead
> -	 * where available.
> -	 */
> -	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> -		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> -		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> -			return false;
> -	}
> -
> -	/*
> -	 * Systems affected by Cavium erratum 27456 are incompatible
> -	 * with KPTI.
> -	 */
> -	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> -		tx1_bug = false;
> -#ifndef MODULE
> -	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
> -		extern const struct midr_range cavium_erratum_27456_cpus[];
> -
> -		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
> -						cavium_erratum_27456_cpus);
> -#endif
> -	} else {
> -		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
> -	}
> -	if (tx1_bug)
> -		return false;
> -
> -	return kaslr_offset() > 0;
> -}
> -
> -static inline bool arm64_kernel_use_ng_mappings(void)
> +static inline bool arm64_kernel_unmapped_at_el0(void)
>   {
> -	/* What's a kpti? Use global mappings if we don't know. */
> -	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> -		return false;
> -
> -	/*
> -	 * Note: this function is called before the CPU capabilities have
> -	 * been configured, so our early mappings will be global. If we
> -	 * later determine that kpti is required, then
> -	 * kpti_install_ng_mappings() will make them non-global.
> -	 */
> -	if (arm64_kernel_unmapped_at_el0())
> -		return true;
> -
> -	/*
> -	 * Once we are far enough into boot for capabilities to be
> -	 * ready we will have confirmed if we are using non-global
> -	 * mappings so don't need to consider anything else here.
> -	 */
> -	if (static_branch_likely(&arm64_const_caps_ready))
> -		return false;
> -
> -	/*
> -	 * KASLR is enabled so we're going to be enabling kpti on non-broken
> -	 * CPUs regardless of their susceptibility to Meltdown. Rather
> -	 * than force everybody to go through the G -> nG dance later on,
> -	 * just put down non-global mappings from the beginning.
> -	 */
> -	return kaslr_requires_kpti();
> +	return arm64_use_ng_mappings;
>   }
>   
>   typedef void (*bp_hardening_cb_t)(void);
> @@ -158,6 +87,7 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>   			       pgprot_t prot, bool page_mappings_only);
>   extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
>   extern void mark_linear_text_alias_ro(void);
> +extern bool kaslr_requires_kpti(void);
>   
>   #define INIT_MM_CONTEXT(name)	\
>   	.pgd = init_pg_dir,
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index 9a21b84536f2..eb1c6f83343d 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -26,8 +26,8 @@
>   #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>   #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
>   
> -#define PTE_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PTE_NG : 0)
> -#define PMD_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PMD_SECT_NG : 0)
> +#define PTE_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
> +#define PMD_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
>   
>   #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
>   #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 0d551af06421..7ee7cd8b32a0 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -47,6 +47,9 @@ static struct arm64_cpu_capabilities const __ro_after_init *cpu_hwcaps_ptrs[ARM6
>   /* Need also bit for ARM64_CB_PATCH */
>   DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
>   
> +bool arm64_use_ng_mappings = false;
> +EXPORT_SYMBOL(arm64_use_ng_mappings);
> +
>   /*
>    * Flag to indicate if we have computed the system wide
>    * capabilities based on the boot time active CPUs. This
> @@ -961,6 +964,39 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>   	return has_cpuid_feature(entry, scope);
>   }
>   
> +bool kaslr_requires_kpti(void)
> +{
> +	bool tx1_bug;
> +	u64 ftr;
> +
> +	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> +		return false;
> +
> +	/*
> +	 * E0PD does a similar job to KPTI so can be used instead
> +	 * where available.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)
> +			return false;
> +	}
> +
> +	/*
> +	 * Systems affected by Cavium erratum 27456 are incompatible
> +	 * with KPTI.
> +	 */
> +	if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> +		tx1_bug = false;
> +	} else {
> +		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);

This is problematic. When this is called from setup_arch(), we haven't run
the capability check on the boot CPU (which happens via
smp_prepare_boot_cpu() -> cpuinfo_store_boot_cpu() -> init_cpu_features()).

So we could get "false" here, since the state is not updated, and go ahead
with nG mappings on TX1 with KASLR, which is not good. So you need to
retain the midr_list check when the caps are not ready.
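
Something along the lines of the inline version from patch 2 would do
(a sketch only):

	} else if (!static_branch_likely(&arm64_const_caps_ready)) {
		extern const struct midr_range cavium_erratum_27456_cpus[];

		tx1_bug = is_midr_in_range_list(read_cpuid_id(),
						cavium_erratum_27456_cpus);
	} else {
		tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456);
	}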

> +	}
> +	if (tx1_bug)
> +		return false;
> +
> +	return kaslr_offset() > 0;
> +}
> +
>   static bool __meltdown_safe = true;
>   static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
>   
> @@ -1038,7 +1074,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>   	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
>   	kpti_remap_fn *remap_fn;
>   
> -	static bool kpti_applied = false;
>   	int cpu = smp_processor_id();
>   
>   	/*
> @@ -1046,7 +1081,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>   	 * it already or we have KASLR enabled and therefore have not
>   	 * created any global mappings at all.
>   	 */
> -	if (kpti_applied || kaslr_offset() > 0)
> +	if (arm64_use_ng_mappings)
>   		return;
>   
>   	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
> @@ -1056,7 +1091,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>   	cpu_uninstall_idmap();
>   
>   	if (!cpu)
> -		kpti_applied = true;
> +		arm64_use_ng_mappings = true;
>   
>   	return;
>   }
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 56f664561754..f28308c9882c 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -285,6 +285,13 @@ void __init setup_arch(char **cmdline_p)
>   
>   	*cmdline_p = boot_command_line;
>   
> +        /*
> +         * If we know now that we are going to need KPTI then use non-global
> +         * mappings from the start, avoiding the cost of rewriting
> +         * everything later.
> +         */
> +        arm64_use_ng_mappings = kaslr_requires_kpti();

nit: ^^^ white spaces instead of tabs.

Suzuki


* Re: [PATCH v7 1/4] arm64: Add initial support for E0PD
  2019-11-07 10:12   ` Suzuki K Poulose
@ 2019-11-07 11:55     ` Mark Brown
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Brown @ 2019-11-07 11:55 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Thu, Nov 07, 2019 at 10:12:30AM +0000, Suzuki K Poulose wrote:

> > +static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
> > +{
> > +	/*
> > +	 * The cpu_enable() callback gets called even on CPUs that
> > +	 * don't detect the feature so we need to verify if we can
> > +	 * enable.
> > +	 */
> > +	if (this_cpu_has_cap(ARM64_HAS_E0PD))
> > +		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
> > +}

> Given that this is a SYSTEM_FEATURE now, we don't need the extra check.
> All CPUs are guaranteed to have the feature; otherwise they would be
> rejected early.

It's not strictly required but it does no harm, it helps people who are
doing CPU-local stuff see that this might be an issue if they just look
at other users rather than the header, and it means the code stays
correct if someone changes things later (I've actually been sitting on a
patch for that but I want to postpone any discussion about that until
after at least the initial three patches are merged).


* Re: [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-06 13:00 ` [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD Mark Brown
@ 2019-11-07 12:01   ` Suzuki K Poulose
  2019-11-07 14:37     ` Mark Brown
  2019-11-07 14:48     ` Mark Brown
  0 siblings, 2 replies; 13+ messages in thread
From: Suzuki K Poulose @ 2019-11-07 12:01 UTC (permalink / raw)
  To: Mark Brown, Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel

Hi Mark,

On 06/11/2019 13:00, Mark Brown wrote:
> Since E0PD is intended to fulfil the same role as KPTI we don't need to
> use KPTI on CPUs where E0PD is available; we can rely on E0PD instead.
> Change the check that forces KPTI on when KASLR is enabled to check for
> E0PD before doing so: CPUs with E0PD are not expected to be affected by
> Meltdown, so they should not need to enable KPTI for other reasons.
> 
> Since E0PD is a system capability we will still enable KPTI if any of
> the CPUs in the system lacks E0PD; this will rewrite any global mappings
> that were established in systems where some but not all CPUs support
> E0PD.  We may transiently have a mix of global and non-global mappings
> while booting, since we use the local CPU when deciding if KPTI will be
> required prior to completing CPU enumeration, but any global mappings
> will be converted to non-global ones when KPTI is applied.
> 
> KPTI can still be forced on from the command line if required.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>   arch/arm64/include/asm/mmu.h | 11 +++++++++++
>   1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 55e285fff262..d61908bf4c9c 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -38,10 +38,21 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>   static inline bool kaslr_requires_kpti(void)
>   {
>   	bool tx1_bug;
> +	u64 ftr;
>   
>   	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
>   		return false;
>   
> +	/*
> +	 * E0PD does a similar job to KPTI so can be used instead
> +	 * where available.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> +		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

I am trying to write down the rationale for checking this per-CPU.

Given that this gets run on all individual CPUs via unmap_kernel_at_el0(),
and the decision to choose KPTI is affected by the lack of the E0PD feature
where it would help, having a CPU-local check is fine. Also this gives us
the advantage of choosing an nG mapping when the boot CPU indicates the need.

It may be helpful to have this added to the description/comment above.
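
Something like this, perhaps (suggested wording only):

	/*
	 * This runs before CPU capabilities have been finalised, using a
	 * raw read of the local CPU's ID register, so the boot CPU can
	 * decide about non-global mappings before secondaries come up.
	 */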

> +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

nit: You may use the existing helper:
	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>


* Re: [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-07 12:01   ` Suzuki K Poulose
@ 2019-11-07 14:37     ` Mark Brown
  2019-11-07 15:03       ` Suzuki K Poulose
  2019-11-07 14:48     ` Mark Brown
  1 sibling, 1 reply; 13+ messages in thread
From: Mark Brown @ 2019-11-07 14:37 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:
> On 06/11/2019 13:00, Mark Brown wrote:

> > +     /*
> > +      * E0PD does a similar job to KPTI so can be used instead
> > +      * where available.
> > +      */
> > +     if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
> > +             ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

> I am trying to write down the rationale for checking this per-CPU.

> Given that this gets run on all individual CPUs via unmap_kernel_at_el0(),
> and the decision to choose KPTI is affected by the lack of the E0PD feature
> where it would help, having a CPU-local check is fine. Also this gives us
> the advantage of choosing an nG mapping when the boot CPU indicates the need.

Well, it's mainly the fact that this runs really early on in boot, before
the cpufeature code has fully initialized, so as with the existing code
immediately below for identifying TX1 we can't rely on the cpufeature
code being done.
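
Roughly, the ordering in question (a sketch based on the call chain
Suzuki spelled out for patch 4):

	setup_arch()
		kaslr_requires_kpti()		/* raw ID register read */
		...
	smp_prepare_boot_cpu()
		cpuinfo_store_boot_cpu()
			init_cpu_features()	/* cpufeature state from here */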

> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

> nit: You may use the existing helper:
> 	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

It's probably worth you going over the existing code and cleaning up
the existing users; I copied this idiom from somewhere else.

I will note that we're on version 7 and nothing here has changed for
quite some time.


* Re: [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-07 12:01   ` Suzuki K Poulose
  2019-11-07 14:37     ` Mark Brown
@ 2019-11-07 14:48     ` Mark Brown
  1 sibling, 0 replies; 13+ messages in thread
From: Mark Brown @ 2019-11-07 14:48 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:

> > +		if ((ftr >> ID_AA64MMFR2_E0PD_SHIFT) & 0xf)

> nit: You may use the existing helper:
> 	cpuid_feature_extract_unsigned_field(ftr, ID_AA64MMFR2_E0PD_SHIFT)

Actually, the name of the helper is so verbose that it makes formatting
things into 80 columns hard; you end up with something like:

		ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
		if (cpuid_feature_extract_unsigned_field(ftr,
					ID_AA64MMFR2_E0PD_SHIFT))

which is awkward.


* Re: [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-07 14:37     ` Mark Brown
@ 2019-11-07 15:03       ` Suzuki K Poulose
  2019-11-08 14:10         ` Mark Brown
  0 siblings, 1 reply; 13+ messages in thread
From: Suzuki K Poulose @ 2019-11-07 15:03 UTC (permalink / raw)
  To: Mark Brown; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On 07/11/2019 14:37, Mark Brown wrote:
> On Thu, Nov 07, 2019 at 12:01:10PM +0000, Suzuki K Poulose wrote:
>> On 06/11/2019 13:00, Mark Brown wrote:
> 
>>> +     /*
>>> +      * E0PD does a similar job to KPTI so can be used instead
>>> +      * where available.
>>> +      */
>>> +     if (IS_ENABLED(CONFIG_ARM64_E0PD)) {
>>> +             ftr = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
> 
>> I am trying to write down the rationale for checking this per-CPU.
> 
>> Given that this gets run on all individual CPUs via unmap_kernel_at_el0(),
>> and the decision to choose KPTI is affected by the lack of the E0PD feature
>> where it would help, having a CPU-local check is fine. Also this gives us
>> the advantage of choosing an nG mapping when the boot CPU indicates the need.
> 
> Well, it's mainly the fact that this runs really early on in boot, before
> the cpufeature code has fully initialized, so as with the existing code
> immediately below for identifying TX1 we can't rely on the cpufeature
> code being done.

Yes, I acknowledge that. I was writing it down to make clear why this is
fine and why it has its own advantage. This may not be obvious to
someone who reads it later, so having this in a comment helps avoid
puzzling over it.

Suzuki


* Re: [PATCH v7 3/4] arm64: Don't use KPTI where we have E0PD
  2019-11-07 15:03       ` Suzuki K Poulose
@ 2019-11-08 14:10         ` Mark Brown
  0 siblings, 0 replies; 13+ messages in thread
From: Mark Brown @ 2019-11-08 14:10 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: Catalin Marinas, Will Deacon, linux-arm-kernel



On Thu, Nov 07, 2019 at 03:03:49PM +0000, Suzuki K Poulose wrote:
> On 07/11/2019 14:37, Mark Brown wrote:

> > > Given that this gets run on all individual CPUs via unmap_kernel_at_el0(),
> > > and the decision to choose KPTI is affected by the lack of the E0PD feature
> > > where it would help, having a CPU-local check is fine. Also this gives us
> > > the advantage of choosing an nG mapping when the boot CPU indicates the need.

> > Well, it's mainly the fact that this runs really early on in boot, before
> > the cpufeature code has fully initialized, so as with the existing code
> > immediately below for identifying TX1 we can't rely on the cpufeature
> > code being done.

> Yes, I acknowledge that. I was writing it down to make clear why this is
> fine and why it has its own advantage. This may not be obvious to
> someone who reads it later, so having this in a comment helps avoid
> puzzling over it.

Yes...  my point was more about idiom and urgency, especially given that
this code is moved later in the patch series.

