* [RFC PATCH 0/4] arm64: kpti: use nG mappings unless KPTI is force disabled
From: Ard Biesheuvel @ 2018-12-13 17:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Robin Murphy, John Garry, Will Deacon, Suzuki K Poulose, Ard Biesheuvel

John reports [0] that recent changes to the way the linear region is
mapped are causing ~30 second stalls on boot, and this turns out to be
due to the way we remap the linear region with nG (non-global) mappings
when enabling KPTI, which occurs with the MMU and caches off. Recent
kernels map the linear region down to pages by default, so that
restricted permissions can be applied at page granularity (making
module .text and .rodata no longer writable via the linear map), and so
the number of page table entries that require updating from global to
non-global is much higher, resulting in the observed delays.
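
To put a number on it (rough arithmetic, not a measurement): with 4 KB
pages, each GB of linear region is ~256k PTEs, so remapping 64 GB of
RAM means cleaning and rewriting on the order of 16 million entries
with the MMU and caches disabled.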

This series refactors the logic that tests whether KPTI should be enabled
so that the conditions that apply regardless of CPU identification are
evaluated early, allowing us to create the mappings of the kernel and
the linear region with nG attributes from the outset.

Patches #1 to #3 implement this so that we only create these nG mappings
if KPTI is force enabled.

As a followup, #4 may be applied, which inverts the logic so that nG mappings
are always created, unless KPTI is forced off. This allows us to get rid
of the slow and complex asm routines that make this change later on, which
is where the boot delays occur.
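
In code terms, the end state after #4 amounts to making the nG decision
early in paging_init() (a rough sketch of the logic only; see patch #4
for the actual hunk):

	bool kpti_enabled;

	/* create nG mappings unless KPTI is forced off */
	if (!kpti_is_forced(&kpti_enabled) || kpti_enabled)
		cpus_set_cap(ARM64_UNMAP_KERNEL_AT_EL0);

so that PTE_MAYBE_NG/PMD_MAYBE_NG evaluate to nG for all kernel
mappings created afterwards.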

[0] https://marc.info/?l=linux-arm-kernel&m=154463433605723

Cc: Will Deacon <will.deacon@arm.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>

Ard Biesheuvel (4):
  arm64: kpti: enable KPTI only when KASLR is truly enabled
  arm64: kpti: add helper to decide whether nG mappings should be used
    early
  arm64: kpti: use nG mappings from the outset if kpti is force enabled
  arm64: kpti: use non-global mappings unless KPTI is forced off

 arch/arm64/include/asm/cpufeature.h |   7 +
 arch/arm64/kernel/cpufeature.c      |  63 +++----
 arch/arm64/mm/mmu.c                 |  14 ++
 arch/arm64/mm/proc.S                | 189 --------------------
 4 files changed, 48 insertions(+), 225 deletions(-)

-- 
2.19.2



* [RFC PATCH 1/4] arm64: kpti: enable KPTI only when KASLR is truly enabled
From: Ard Biesheuvel @ 2018-12-13 17:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Robin Murphy, John Garry, Will Deacon, Suzuki K Poulose, Ard Biesheuvel

Kernels built with CONFIG_RANDOMIZE_BASE=y may still run with KASLR
disabled, either because the firmware provided no RNG or because it was
turned off explicitly by putting 'nokaslr' on the command line.

In that case, there is no point in enabling KPTI on cores that do not
otherwise need it, so take kaslr_offset() into account as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index aec5ecb85737..ef8118274ca9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -937,7 +937,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	}
 
 	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
 		return true;
 
 	/* Don't force KPTI for CPUs that are not vulnerable */
-- 
2.19.2



* [RFC PATCH 2/4] arm64: kpti: add helper to decide whether nG mappings should be used early
From: Ard Biesheuvel @ 2018-12-13 17:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Robin Murphy, John Garry, Will Deacon, Suzuki K Poulose, Ard Biesheuvel

idmap_kpti_install_ng_mappings() traverses all kernel page tables with
the caches off to replace global attributes with non-global ones, so
that KPTI can be enabled safely. This is costly, and can be avoided in
cases where we know we will be enabling KPTI regardless of whether any
cores susceptible to Meltdown are present.

So add a helper that tells us whether KPTI was forced on or off, which
will help us decide whether to use nG mappings when creating the
mappings of the kernel address space.
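
A caller that only cares about the force-enabled case can then do
something along these lines (this is the shape of the check that
patch #3 adds to paging_init()):

	bool enabled;

	if (kpti_is_forced(&enabled) && enabled)
		/* KPTI will be enabled no matter what the CPUs report */
		cpus_set_cap(ARM64_UNMAP_KERNEL_AT_EL0);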

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/cpufeature.h |  7 ++++
 arch/arm64/kernel/cpufeature.c      | 37 ++++++++++++++------
 2 files changed, 34 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7e2ec64aa414..91bcab94a725 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -558,6 +558,13 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	default: return CONFIG_ARM64_PA_BITS;
 	}
 }
+
+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+extern bool kpti_is_forced(bool *enabled);
+#else
+static inline bool kpti_is_forced(bool *enabled) { return false; }
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ef8118274ca9..ecd8c65dd2d7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -908,13 +908,11 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
-static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
-				int scope)
+bool kpti_is_forced(bool *enabled)
 {
-	/* List of CPUs that are not vulnerable and don't need KPTI */
-	static const struct midr_range kpti_safe_list[] = {
-		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+	static const struct midr_range kpti_blacklist[] = {
+		MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1),
+		MIDR_RANGE(MIDR_THUNDERX_81XX, 0, 0, 0, 0),
 		{ /* sentinel */ }
 	};
 	char const *str = "command line option";
@@ -924,8 +922,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	 * ThunderX leads to apparent I-cache corruption of kernel text, which
 	 * ends as well as you might imagine. Don't even try.
 	 */
-	if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
-		str = "ARM64_WORKAROUND_CAVIUM_27456";
+	if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456) &&
+	    is_midr_in_range_list(read_cpuid_id(), kpti_blacklist)) {
 		__kpti_forced = -1;
 	}
 
@@ -933,12 +931,31 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	if (__kpti_forced) {
 		pr_info_once("kernel page table isolation forced %s by %s\n",
 			     __kpti_forced > 0 ? "ON" : "OFF", str);
-		return __kpti_forced > 0;
+		*enabled = __kpti_forced > 0;
+		return true;
 	}
 
 	/* Useful for KASLR robustness */
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+		*enabled = true;
 		return true;
+	}
+	return false;
+}
+
+static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
+				int scope)
+{
+	/* List of CPUs that are not vulnerable and don't need KPTI */
+	static const struct midr_range kpti_safe_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+		{ /* sentinel */ }
+	};
+	bool enabled;
+
+	if (kpti_is_forced(&enabled))
+		return enabled;
 
 	/* Don't force KPTI for CPUs that are not vulnerable */
 	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
-- 
2.19.2



* [RFC PATCH 3/4] arm64: kpti: use nG mappings from the outset if kpti is force enabled
From: Ard Biesheuvel @ 2018-12-13 17:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Robin Murphy, John Garry, Will Deacon, Suzuki K Poulose, Ard Biesheuvel

Instead of relying on a slow asm routine executing from the idmap to
change all global mappings into non-global ones, just apply non-global
mappings from the outset if KPTI is going to be enabled regardless of
CPU capabilities (i.e., when running with KASLR enabled).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpufeature.c | 3 ++-
 arch/arm64/mm/mmu.c            | 9 +++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ecd8c65dd2d7..11ef6aadeb0c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -965,6 +965,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	return !has_cpuid_feature(entry, scope);
 }
 
+bool kpti_applied = false;
+
 static void
 kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
@@ -972,7 +974,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
 	kpti_remap_fn *remap_fn;
 
-	static bool kpti_applied = false;
 	int cpu = smp_processor_id();
 
 	if (kpti_applied)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d1d6601b385d..ab70834b45b8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -648,6 +648,15 @@ static void __init map_kernel(pgd_t *pgdp)
 void __init paging_init(void)
 {
 	pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
+	bool kpti_enabled;
+
+	/* create nG mappings if KPTI is enabled regardless of CPU features */
+	if (kpti_is_forced(&kpti_enabled) && kpti_enabled) {
+		extern bool kpti_applied;
+
+		cpus_set_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+		kpti_applied = true;
+	}
 
 	map_kernel(pgdp);
 	map_mem(pgdp);
-- 
2.19.2



* [RFC PATCH 4/4] arm64: kpti: use non-global mappings unless KPTI is forced off
From: Ard Biesheuvel @ 2018-12-13 17:20 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Robin Murphy, John Garry, Will Deacon, Suzuki K Poulose, Ard Biesheuvel

KPTI requires non-global mappings, but the converse is not true: we
can usually tolerate non-global mappings even when KPTI is disabled
(with the exception of some ThunderX cores), although the increased
TLB footprint of kernel mappings may adversely affect performance in
some cases.

So let's invert the early mapping logic to always create non-global
mappings unless KPTI was forced off, allowing us to get rid of the
costly and fragile remapping code that changes kernel mappings from
global to non-global at CPU feature detection time.

In cases where the increased TLB footprint does in fact cause performance
issues and Meltdown mitigations or KASLR are not required or desired,
kpti=off may be passed on the kernel command line to switch back
to global kernel mappings unconditionally.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/cpufeature.c |  27 ---
 arch/arm64/mm/mmu.c            |  15 +-
 arch/arm64/mm/proc.S           | 189 --------------------
 3 files changed, 10 insertions(+), 221 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 11ef6aadeb0c..649937753587 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -965,32 +965,6 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 	return !has_cpuid_feature(entry, scope);
 }
 
-bool kpti_applied = false;
-
-static void
-kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
-{
-	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
-	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
-	kpti_remap_fn *remap_fn;
-
-	int cpu = smp_processor_id();
-
-	if (kpti_applied)
-		return;
-
-	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
-
-	cpu_install_idmap();
-	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
-	cpu_uninstall_idmap();
-
-	if (!cpu)
-		kpti_applied = true;
-
-	return;
-}
-
 static int __init parse_kpti(char *str)
 {
 	bool enabled;
@@ -1260,7 +1234,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_CSV3_SHIFT,
 		.min_field_value = 1,
 		.matches = unmap_kernel_at_el0,
-		.cpu_enable = kpti_install_ng_mappings,
 	},
 #endif
 	{
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ab70834b45b8..74e27f4ae6ea 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -650,12 +650,17 @@ void __init paging_init(void)
 	pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));
 	bool kpti_enabled;
 
-	/* create nG mappings if KPTI is enabled regardless of CPU features */
-	if (kpti_is_forced(&kpti_enabled) && kpti_enabled) {
-		extern bool kpti_applied;
-
+	/* create nG mappings unless KPTI is forced off */
+	if (!kpti_is_forced(&kpti_enabled) || kpti_enabled) {
+		/*
+		 * Set the capability so that PTE_MAYBE_NG will evaluate to
+		 * nG enabled. This capability will be cleared again in case
+		 * we decide not to enable KPTI after all at CPU feature
+		 * detection time, in which case we will end up running with
+		 * a mix of non-global and global kernel mappings but this
+		 * shouldn't hurt in practice.
+		 */
 		cpus_set_cap(ARM64_UNMAP_KERNEL_AT_EL0);
-		kpti_applied = true;
 	}
 
 	map_kernel(pgdp);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 2c75b0b903ae..b80d4220f7d0 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -209,195 +209,6 @@ ENTRY(idmap_cpu_replace_ttbr1)
 ENDPROC(idmap_cpu_replace_ttbr1)
 	.popsection
 
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-	.pushsection ".idmap.text", "awx"
-
-	.macro	__idmap_kpti_get_pgtable_ent, type
-	dc	cvac, cur_\()\type\()p		// Ensure any existing dirty
-	dmb	sy				// lines are written back before
-	ldr	\type, [cur_\()\type\()p]	// loading the entry
-	tbz	\type, #0, skip_\()\type	// Skip invalid and
-	tbnz	\type, #11, skip_\()\type	// non-global entries
-	.endm
-
-	.macro __idmap_kpti_put_pgtable_ent_ng, type
-	orr	\type, \type, #PTE_NG		// Same bit for blocks and pages
-	str	\type, [cur_\()\type\()p]	// Update the entry and ensure
-	dmb	sy				// that it is visible to all
-	dc	civac, cur_\()\type\()p		// CPUs.
-	.endm
-
-/*
- * void __kpti_install_ng_mappings(int cpu, int num_cpus, phys_addr_t swapper)
- *
- * Called exactly once from stop_machine context by each CPU found during boot.
- */
-__idmap_kpti_flag:
-	.long	1
-ENTRY(idmap_kpti_install_ng_mappings)
-	cpu		.req	w0
-	num_cpus	.req	w1
-	swapper_pa	.req	x2
-	swapper_ttb	.req	x3
-	flag_ptr	.req	x4
-	cur_pgdp	.req	x5
-	end_pgdp	.req	x6
-	pgd		.req	x7
-	cur_pudp	.req	x8
-	end_pudp	.req	x9
-	pud		.req	x10
-	cur_pmdp	.req	x11
-	end_pmdp	.req	x12
-	pmd		.req	x13
-	cur_ptep	.req	x14
-	end_ptep	.req	x15
-	pte		.req	x16
-
-	mrs	swapper_ttb, ttbr1_el1
-	adr	flag_ptr, __idmap_kpti_flag
-
-	cbnz	cpu, __idmap_kpti_secondary
-
-	/* We're the boot CPU. Wait for the others to catch up */
-	sevl
-1:	wfe
-	ldaxr	w18, [flag_ptr]
-	eor	w18, w18, num_cpus
-	cbnz	w18, 1b
-
-	/* We need to walk swapper, so turn off the MMU. */
-	pre_disable_mmu_workaround
-	mrs	x18, sctlr_el1
-	bic	x18, x18, #SCTLR_ELx_M
-	msr	sctlr_el1, x18
-	isb
-
-	/* Everybody is enjoying the idmap, so we can rewrite swapper. */
-	/* PGD */
-	mov	cur_pgdp, swapper_pa
-	add	end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
-do_pgd:	__idmap_kpti_get_pgtable_ent	pgd
-	tbnz	pgd, #1, walk_puds
-next_pgd:
-	__idmap_kpti_put_pgtable_ent_ng	pgd
-skip_pgd:
-	add	cur_pgdp, cur_pgdp, #8
-	cmp	cur_pgdp, end_pgdp
-	b.ne	do_pgd
-
-	/* Publish the updated tables and nuke all the TLBs */
-	dsb	sy
-	tlbi	vmalle1is
-	dsb	ish
-	isb
-
-	/* We're done: fire up the MMU again */
-	mrs	x18, sctlr_el1
-	orr	x18, x18, #SCTLR_ELx_M
-	msr	sctlr_el1, x18
-	isb
-
-	/* Set the flag to zero to indicate that we're all done */
-	str	wzr, [flag_ptr]
-	ret
-
-	/* PUD */
-walk_puds:
-	.if CONFIG_PGTABLE_LEVELS > 3
-	pte_to_phys	cur_pudp, pgd
-	add	end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
-do_pud:	__idmap_kpti_get_pgtable_ent	pud
-	tbnz	pud, #1, walk_pmds
-next_pud:
-	__idmap_kpti_put_pgtable_ent_ng	pud
-skip_pud:
-	add	cur_pudp, cur_pudp, 8
-	cmp	cur_pudp, end_pudp
-	b.ne	do_pud
-	b	next_pgd
-	.else /* CONFIG_PGTABLE_LEVELS <= 3 */
-	mov	pud, pgd
-	b	walk_pmds
-next_pud:
-	b	next_pgd
-	.endif
-
-	/* PMD */
-walk_pmds:
-	.if CONFIG_PGTABLE_LEVELS > 2
-	pte_to_phys	cur_pmdp, pud
-	add	end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
-do_pmd:	__idmap_kpti_get_pgtable_ent	pmd
-	tbnz	pmd, #1, walk_ptes
-next_pmd:
-	__idmap_kpti_put_pgtable_ent_ng	pmd
-skip_pmd:
-	add	cur_pmdp, cur_pmdp, #8
-	cmp	cur_pmdp, end_pmdp
-	b.ne	do_pmd
-	b	next_pud
-	.else /* CONFIG_PGTABLE_LEVELS <= 2 */
-	mov	pmd, pud
-	b	walk_ptes
-next_pmd:
-	b	next_pud
-	.endif
-
-	/* PTE */
-walk_ptes:
-	pte_to_phys	cur_ptep, pmd
-	add	end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
-do_pte:	__idmap_kpti_get_pgtable_ent	pte
-	__idmap_kpti_put_pgtable_ent_ng	pte
-skip_pte:
-	add	cur_ptep, cur_ptep, #8
-	cmp	cur_ptep, end_ptep
-	b.ne	do_pte
-	b	next_pmd
-
-	/* Secondary CPUs end up here */
-__idmap_kpti_secondary:
-	/* Uninstall swapper before surgery begins */
-	__idmap_cpu_set_reserved_ttbr1 x18, x17
-
-	/* Increment the flag to let the boot CPU we're ready */
-1:	ldxr	w18, [flag_ptr]
-	add	w18, w18, #1
-	stxr	w17, w18, [flag_ptr]
-	cbnz	w17, 1b
-
-	/* Wait for the boot CPU to finish messing around with swapper */
-	sevl
-1:	wfe
-	ldxr	w18, [flag_ptr]
-	cbnz	w18, 1b
-
-	/* All done, act like nothing happened */
-	msr	ttbr1_el1, swapper_ttb
-	isb
-	ret
-
-	.unreq	cpu
-	.unreq	num_cpus
-	.unreq	swapper_pa
-	.unreq	swapper_ttb
-	.unreq	flag_ptr
-	.unreq	cur_pgdp
-	.unreq	end_pgdp
-	.unreq	pgd
-	.unreq	cur_pudp
-	.unreq	end_pudp
-	.unreq	pud
-	.unreq	cur_pmdp
-	.unreq	end_pmdp
-	.unreq	pmd
-	.unreq	cur_ptep
-	.unreq	end_ptep
-	.unreq	pte
-ENDPROC(idmap_kpti_install_ng_mappings)
-	.popsection
-#endif
-
 /*
  *	__cpu_setup
  *
-- 
2.19.2



* Re: [RFC PATCH 0/4] arm64: kpti: use nG mappings unless KPTI is force disabled
From: Will Deacon @ 2018-12-13 18:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: John Garry, Robin Murphy, linux-arm-kernel, Suzuki K Poulose

Hi Ard,

On Thu, Dec 13, 2018 at 06:20:31PM +0100, Ard Biesheuvel wrote:
> [...]
>
> As a followup, #4 may be applied, which inverts the logic so that nG mappings
> are always created, unless KPTI is forced off. This allows us to get rid
> of the slow and complex asm routines that make this change later on, which
> is where the boot delays occur.

I think I'd like to keep those for now, since it would be nice to use global
mappings for the vast majority of CPUs that are not affected by kpti.

I had a crack at the problem and ended up with a slightly simpler diff
below.

Will

--->8

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 7689c7aa1d77..feb05f7b46b1 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -16,6 +16,8 @@
 #ifndef __ASM_MMU_H
 #define __ASM_MMU_H
 
+#include <asm/cputype.h>
+
 #define MMCF_AARCH32	0x1	/* mm context flag for AArch32 executables */
 #define USER_ASID_BIT	48
 #define USER_ASID_FLAG	(UL(1) << USER_ASID_BIT)
@@ -44,6 +46,28 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
 	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
 }
 
+static inline bool arm64_kernel_use_ng_mappings(void)
+{
+	bool tx1_bug = false;
+
+	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+		return false;
+
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		return arm64_kernel_unmapped_at_el0();
+
+	if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
+		extern const struct midr_range cavium_erratum_27456_cpus[];
+
+		tx1_bug = static_branch_likely(&arm64_const_caps_ready) ?
+			  cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456) :
+			  is_midr_in_range_list(read_cpuid_id(),
+						cavium_erratum_27456_cpus);
+	}
+
+	return !tx1_bug;
+}
+
 typedef void (*bp_hardening_cb_t)(void);
 
 struct bp_hardening_data {
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 78b942c1bea4..986e41c4c32b 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -37,8 +37,8 @@
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
 
-#define PTE_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
-#define PMD_MAYBE_NG		(arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
+#define PTE_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PTE_NG : 0)
+#define PMD_MAYBE_NG		(arm64_kernel_use_ng_mappings() ? PMD_SECT_NG : 0)
 
 #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
 #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index ff2fda3a98e1..2c42f57cf529 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -539,7 +539,7 @@ static const struct midr_range arm64_harden_el2_vectors[] = {
 #endif
 
 #ifdef CONFIG_CAVIUM_ERRATUM_27456
-static const struct midr_range cavium_erratum_27456_cpus[] = {
+const struct midr_range cavium_erratum_27456_cpus[] = {
 	/* Cavium ThunderX, T88 pass 1.x - 2.1 */
 	MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1),
 	/* Cavium ThunderX, T81 pass 1.0 */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7ea0b2f20262..c68f6ad6722f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1006,6 +1006,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 	if (kpti_applied)
 		return;
 
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && __kpti_forced <= 0)
+		return;
+
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
 	cpu_install_idmap();


* Re: [RFC PATCH 0/4] arm64: kpti: use nG mappings unless KPTI is force disabled
From: Ard Biesheuvel @ 2018-12-13 18:45 UTC (permalink / raw)
  To: Will Deacon; +Cc: John Garry, Robin Murphy, linux-arm-kernel, Suzuki K. Poulose

On Thu, 13 Dec 2018 at 19:05, Will Deacon <will.deacon@arm.com> wrote:
>
> Hi Ard,
>
> On Thu, Dec 13, 2018 at 06:20:31PM +0100, Ard Biesheuvel wrote:
> > [...]
>
> I think I'd like to keep those for now, since it would be nice to use global
> mappings for the vast majority of CPUs that are not affected by kpti.
>
> I had a crack at the problem and ended up with a slightly simpler diff
> below.
>
> Will
>
> --->8
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 7689c7aa1d77..feb05f7b46b1 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -16,6 +16,8 @@
>  #ifndef __ASM_MMU_H
>  #define __ASM_MMU_H
>
> +#include <asm/cputype.h>
> +
>  #define MMCF_AARCH32   0x1     /* mm context flag for AArch32 executables */
>  #define USER_ASID_BIT  48
>  #define USER_ASID_FLAG (UL(1) << USER_ASID_BIT)
> @@ -44,6 +46,28 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>                cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
>  }
>
> +static inline bool arm64_kernel_use_ng_mappings(void)
> +{
> +       bool tx1_bug = false;
> +
> +       if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> +               return false;
> +
> +       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> +               return arm64_kernel_unmapped_at_el0();
> +

This is all pretty subtle, but it looks correct to me. We are relying on the

> +       if (IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) {
> +               extern const struct midr_range cavium_erratum_27456_cpus[];
> +
> +               tx1_bug = static_branch_likely(&arm64_const_caps_ready) ?
> +                         cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456) :
> +                         is_midr_in_range_list(read_cpuid_id(),
> +                                               cavium_erratum_27456_cpus);
> +       }
> +
> +       return !tx1_bug;

Should we make this !tx1_bug && kaslr_offset() > 0 ? That way, we take
'nokaslr' on the command line into account as well (see my patch #1)
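
i.e. something like this (untested):

	return !tx1_bug && kaslr_offset() > 0;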

> +}
> +
> [...]
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 7ea0b2f20262..c68f6ad6722f 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1006,6 +1006,9 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
>         if (kpti_applied)
>                 return;
>
> +       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && __kpti_forced <= 0)
> +               return;
> +

This is only called if the capability is detected, right? So how could
__kpti_forced be negative in this case?

In any case, I don't think __kpti_forced is relevant here: if we enter
this function, we can exit early if arm64_kernel_use_ng_mappings() is
guaranteed never to have returned 'false' at any point, and since we
know we won't enter this function on TX1, testing for KASLR should be
sufficient (but please include kaslr_offset() > 0)
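
e.g. (untested):

	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
		return; /* nG mappings were used from the outset */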


>         remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
>
>         cpu_install_idmap();


* Re: [RFC PATCH 0/4] arm64: kpti: use nG mappings unless KPTI is force disabled
From: Ard Biesheuvel @ 2018-12-13 20:10 UTC (permalink / raw)
  To: Will Deacon; +Cc: John Garry, Robin Murphy, linux-arm-kernel, Suzuki K. Poulose

On Thu, 13 Dec 2018 at 19:45, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
> On Thu, 13 Dec 2018 at 19:05, Will Deacon <will.deacon@arm.com> wrote:
> > [...]
> >
> > +static inline bool arm64_kernel_use_ng_mappings(void)
> > +{
> > +       bool tx1_bug = false;
> > +
> > +       if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> > +               return false;
> > +
> > +       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> > +               return arm64_kernel_unmapped_at_el0();
> > +
>
> This is all pretty subtle, but it looks correct to me. We are relying on the
>

.... fact that arm64_kernel_unmapped_at_el0() will not return true
before the CPU feature check has completed.
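
(For reference, arm64_kernel_unmapped_at_el0() is gated on the const
cap, i.e.

	return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
	       cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);

so it cannot evaluate to true before the capabilities have been
finalized.)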

> [...]

