All of lore.kernel.org
* [RFC PATCH 0/6] replace static mapping for pgdir region
@ 2018-03-19 11:19 Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment Ard Biesheuvel
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

On systems that are affected by Spectre/Meltdown variant 3a, KPTI is not
sufficient to hide the physical placement of the kernel, since the
trampoline page is statically allocated inside the kernel image, and the
contents of TTBR1_EL1 are visible to attackers.

So instead, use a separate physical allocation, and map it at the same
virtual offset at which swapper_pg_dir and the trampoline were mapped
before.

Ard Biesheuvel (6):
  arm64/mm: add explicit physical address argument to map_kernel_segment
  arm64/mm: create dedicated segment for pgdir mappings
  arm64/mm: use physical address as cpu_replace_ttbr1() argument
  arm64/mm: stop using __pa_symbol() for swapper_pg_dir
  arm64/mm: factor out clear_page() for unmapped memory
  arm64/mm: use independent physical allocation for pgdir segment

 arch/arm64/include/asm/mmu_context.h |  4 +-
 arch/arm64/include/asm/pgtable.h     |  2 +
 arch/arm64/include/asm/sections.h    |  1 +
 arch/arm64/kernel/cpufeature.c       |  2 +-
 arch/arm64/kernel/hibernate.c        |  2 +-
 arch/arm64/kernel/vmlinux.lds.S      |  3 +
 arch/arm64/mm/kasan_init.c           |  4 +-
 arch/arm64/mm/mmu.c                  | 84 ++++++++++----------
 8 files changed, 55 insertions(+), 47 deletions(-)

-- 
2.11.0

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-20  3:33   ` Mark Rutland
  2018-03-19 11:19 ` [RFC PATCH 2/6] arm64/mm: create dedicated segment for pgdir mappings Ard Biesheuvel
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

In preparation for mapping a physically non-adjacent memory region
as backing for the vmlinux segment covering the trampoline and primary
level swapper_pg_dir regions, make the physical address an explicit
argument of map_kernel_segment().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 22 ++++++++++++--------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8c704f1e53c2..007b2e32ca71 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -507,10 +507,10 @@ void mark_rodata_ro(void)
 }
 
 static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
-				      pgprot_t prot, struct vm_struct *vma,
-				      int flags, unsigned long vm_flags)
+				      phys_addr_t pa_start, pgprot_t prot,
+				      struct vm_struct *vma, int flags,
+				      unsigned long vm_flags)
 {
-	phys_addr_t pa_start = __pa_symbol(va_start);
 	unsigned long size = va_end - va_start;
 
 	BUG_ON(!PAGE_ALIGNED(pa_start));
@@ -585,15 +585,19 @@ static void __init map_kernel(pgd_t *pgdp)
 	 * Only rodata will be remapped with different permissions later on,
 	 * all other segments are allowed to use contiguous mappings.
 	 */
-	map_kernel_segment(pgdp, _text, _etext, text_prot, &vmlinux_text, 0,
-			   VM_NO_GUARD);
-	map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL,
+	map_kernel_segment(pgdp, _text, _etext, __pa_symbol(_text), text_prot,
+			   &vmlinux_text, 0, VM_NO_GUARD);
+	map_kernel_segment(pgdp, __start_rodata, __inittext_begin,
+			   __pa_symbol(__start_rodata), PAGE_KERNEL,
 			   &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD);
-	map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot,
+	map_kernel_segment(pgdp, __inittext_begin, __inittext_end,
+			   __pa_symbol(__inittext_begin), text_prot,
 			   &vmlinux_inittext, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL,
+	map_kernel_segment(pgdp, __initdata_begin, __initdata_end,
+			   __pa_symbol(__initdata_begin), PAGE_KERNEL,
 			   &vmlinux_initdata, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
+	map_kernel_segment(pgdp, _data, _end, __pa_symbol(_data), PAGE_KERNEL,
+			   &vmlinux_data, 0, 0);
 
 	if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
 		/*
-- 
2.11.0


* [RFC PATCH 2/6] arm64/mm: create dedicated segment for pgdir mappings
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 3/6] arm64/mm: use physical address as cpu_replace_ttbr1() argument Ard Biesheuvel
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

In order to allow the pgdir mapping to be backed by a non-adjacent
physical region in a future patch, split it off from the data segment
and create a dedicated pgdir segment for it.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/sections.h |  1 +
 arch/arm64/kernel/vmlinux.lds.S   |  3 +++
 arch/arm64/mm/mmu.c               | 10 +++++++---
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index caab039d6305..f6b70a3fb332 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -29,5 +29,6 @@ extern char __inittext_begin[], __inittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __pgdir_segment_start[], __pgdir_segment_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 0221aca6493d..b0fa2277e8d0 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -219,6 +219,8 @@ SECTIONS
 	idmap_pg_dir = .;
 	. += IDMAP_DIR_SIZE;
 
+	. = ALIGN(SEGMENT_ALIGN);
+	__pgdir_segment_start = .;
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	tramp_pg_dir = .;
 	. += PAGE_SIZE;
@@ -229,6 +231,7 @@ SECTIONS
 	. += RESERVED_TTBR0_SIZE;
 #endif
 	swapper_pg_dir = .;
+	__pgdir_segment_end = . + PAGE_SIZE;
 	. += SWAPPER_DIR_SIZE;
 	swapper_pg_end = .;
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 007b2e32ca71..0a9c08c0948c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -572,7 +572,7 @@ core_initcall(map_entry_trampoline);
 static void __init map_kernel(pgd_t *pgdp)
 {
 	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
-				vmlinux_initdata, vmlinux_data;
+				vmlinux_initdata, vmlinux_data, vmlinux_pgdir;
 
 	/*
 	 * External debuggers may need to write directly to the text
@@ -596,8 +596,12 @@ static void __init map_kernel(pgd_t *pgdp)
 	map_kernel_segment(pgdp, __initdata_begin, __initdata_end,
 			   __pa_symbol(__initdata_begin), PAGE_KERNEL,
 			   &vmlinux_initdata, 0, VM_NO_GUARD);
-	map_kernel_segment(pgdp, _data, _end, __pa_symbol(_data), PAGE_KERNEL,
-			   &vmlinux_data, 0, 0);
+	map_kernel_segment(pgdp, _data, __pgdir_segment_start,
+			   __pa_symbol(_data), PAGE_KERNEL,
+			   &vmlinux_data, 0, VM_NO_GUARD);
+	map_kernel_segment(pgdp, __pgdir_segment_start, __pgdir_segment_end,
+			   __pa_symbol(__pgdir_segment_start), PAGE_KERNEL,
+			   &vmlinux_pgdir, 0, 0);
 
 	if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
 		/*
-- 
2.11.0


* [RFC PATCH 3/6] arm64/mm: use physical address as cpu_replace_ttbr1() argument
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 2/6] arm64/mm: create dedicated segment for pgdir mappings Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 4/6] arm64/mm: stop using __pa_symbol() for swapper_pg_dir Ard Biesheuvel
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

Every caller of cpu_replace_ttbr1() currently translates the
pgd* argument from physical to virtual, only to translate it
back in the implementation of the function. Drop this redundancy.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu_context.h | 4 +---
 arch/arm64/mm/kasan_init.c           | 4 ++--
 arch/arm64/mm/mmu.c                  | 4 ++--
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8a689e..3eddb871f251 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -141,14 +141,12 @@ static inline void cpu_install_idmap(void)
  * Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
  * avoiding the possibility of conflicting TLB entries being allocated.
  */
-static inline void cpu_replace_ttbr1(pgd_t *pgdp)
+static inline void cpu_replace_ttbr1(phys_addr_t pgd_phys)
 {
 	typedef void (ttbr_replace_func)(phys_addr_t);
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
 	ttbr_replace_func *replace_phys;
 
-	phys_addr_t pgd_phys = virt_to_phys(pgdp);
-
 	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index dabfc1ecda3d..1f17642811b3 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -199,7 +199,7 @@ void __init kasan_init(void)
 	 */
 	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
 	dsb(ishst);
-	cpu_replace_ttbr1(lm_alias(tmp_pg_dir));
+	cpu_replace_ttbr1(__pa_symbol(tmp_pg_dir));
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
@@ -236,7 +236,7 @@ void __init kasan_init(void)
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+	cpu_replace_ttbr1(__pa_symbol(swapper_pg_dir));
 
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0a9c08c0948c..365fedf22fcd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -650,9 +650,9 @@ void __init paging_init(void)
 	 *
 	 * To do this we need to go via a temporary pgd.
 	 */
-	cpu_replace_ttbr1(__va(pgd_phys));
+	cpu_replace_ttbr1(pgd_phys);
 	memcpy(swapper_pg_dir, pgdp, PGD_SIZE);
-	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+	cpu_replace_ttbr1(__pa_symbol(swapper_pg_dir));
 
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
-- 
2.11.0


* [RFC PATCH 4/6] arm64/mm: stop using __pa_symbol() for swapper_pg_dir
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2018-03-19 11:19 ` [RFC PATCH 3/6] arm64/mm: use physical address as cpu_replace_ttbr1() argument Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 5/6] arm64/mm: factor out clear_page() for unmapped memory Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment Ard Biesheuvel
  5 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

We want to decouple the physical backing of swapper_pg_dir from the
kernel itself, so that leaking the former (which is unavoidable on
cores susceptible to Spectre/Meltdown variant 3a) does not leak the
latter.

This means __pa_symbol() will no longer work, so prepare for dropping
it by keeping the physical address of swapper_pg_dir in a variable.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/pgtable.h | 2 ++
 arch/arm64/kernel/cpufeature.c   | 2 +-
 arch/arm64/kernel/hibernate.c    | 2 +-
 arch/arm64/mm/kasan_init.c       | 2 +-
 arch/arm64/mm/mmu.c              | 6 +++++-
 5 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7e2c27e63cd8..ce5e51554468 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -723,6 +723,8 @@ extern pgd_t swapper_pg_end[];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
 extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
+extern phys_addr_t __pa_swapper_pg_dir;
+
 /*
  * Encode and decode a swap entry:
  *	bits 0-1:	present (must be zero)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2985a067fc13..4792cd8bad07 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -909,7 +909,7 @@ static int kpti_install_ng_mappings(void *__unused)
 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
 
 	cpu_install_idmap();
-	remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
+	remap_fn(cpu, num_online_cpus(), __pa_swapper_pg_dir);
 	cpu_uninstall_idmap();
 
 	if (!cpu)
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 1ec5f28c39fc..5797c9b141dc 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -125,7 +125,7 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 		return -EOVERFLOW;
 
 	arch_hdr_invariants(&hdr->invariants);
-	hdr->ttbr1_el1		= __pa_symbol(swapper_pg_dir);
+	hdr->ttbr1_el1		= __pa_swapper_pg_dir;
 	hdr->reenter_kernel	= _cpu_resume;
 
 	/* We can't use __hyp_get_vectors() because kvm may still be loaded */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 1f17642811b3..b01c7bb133a6 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -236,7 +236,7 @@ void __init kasan_init(void)
 			pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
 	memset(kasan_zero_page, 0, PAGE_SIZE);
-	cpu_replace_ttbr1(__pa_symbol(swapper_pg_dir));
+	cpu_replace_ttbr1(__pa_swapper_pg_dir);
 
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 365fedf22fcd..af6fe001df0c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -55,6 +55,8 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
+phys_addr_t __pa_swapper_pg_dir;
+
 /*
  * Empty_zero_page is a special page that is used for zero-initialized data
  * and COW.
@@ -639,6 +641,8 @@ void __init paging_init(void)
 	phys_addr_t pgd_phys = early_pgtable_alloc();
 	pgd_t *pgdp = pgd_set_fixmap(pgd_phys);
 
+	__pa_swapper_pg_dir = __pa_symbol(swapper_pg_dir);
+
 	map_kernel(pgdp);
 	map_mem(pgdp);
 
@@ -652,7 +656,7 @@ void __init paging_init(void)
 	 */
 	cpu_replace_ttbr1(pgd_phys);
 	memcpy(swapper_pg_dir, pgdp, PGD_SIZE);
-	cpu_replace_ttbr1(__pa_symbol(swapper_pg_dir));
+	cpu_replace_ttbr1(__pa_swapper_pg_dir);
 
 	pgd_clear_fixmap();
 	memblock_free(pgd_phys, PAGE_SIZE);
-- 
2.11.0


* [RFC PATCH 5/6] arm64/mm: factor out clear_page() for unmapped memory
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2018-03-19 11:19 ` [RFC PATCH 4/6] arm64/mm: stop using __pa_symbol() for swapper_pg_dir Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-19 11:19 ` [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment Ard Biesheuvel
  5 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

Factor out the code that clears newly memblock_alloc()'ed pages so
we can reuse it to clear the pgdir allocation later.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index af6fe001df0c..55c84d63244d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -79,19 +79,14 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 }
 EXPORT_SYMBOL(phys_mem_access_prot);
 
-static phys_addr_t __init early_pgtable_alloc(void)
+static void __init clear_page_phys(phys_addr_t phys)
 {
-	phys_addr_t phys;
-	void *ptr;
-
-	phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
-
 	/*
 	 * The FIX_{PGD,PUD,PMD} slots may be in active use, but the FIX_PTE
 	 * slot will be free, so we can (ab)use the FIX_PTE slot to initialise
 	 * any level of table.
 	 */
-	ptr = pte_set_fixmap(phys);
+	void *ptr = pte_set_fixmap(phys);
 
 	memset(ptr, 0, PAGE_SIZE);
 
@@ -100,7 +95,13 @@ static phys_addr_t __init early_pgtable_alloc(void)
 	 * table walker
 	 */
 	pte_clear_fixmap();
+}
+
+static phys_addr_t __init early_pgtable_alloc(void)
+{
+	phys_addr_t phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
+	clear_page_phys(phys);
 	return phys;
 }
 
-- 
2.11.0


* [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment
  2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2018-03-19 11:19 ` [RFC PATCH 5/6] arm64/mm: factor out clear_page() for unmapped memory Ard Biesheuvel
@ 2018-03-19 11:19 ` Ard Biesheuvel
  2018-03-19 16:17   ` Ard Biesheuvel
  5 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 11:19 UTC (permalink / raw)
  To: linux-arm-kernel

In order to avoid leaking the physical placement of the kernel via
the value of TTBR1_EL1 on platforms that are affected by variant 3a,
replace the statically allocated page table region with a dynamically
allocated buffer whose placement in the physical address space does
not correlate with the placement of the kernel itself.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 41 ++++++++------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 55c84d63244d..6c16e71c26e2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -572,7 +572,7 @@ core_initcall(map_entry_trampoline);
 /*
  * Create fine-grained mappings for the kernel.
  */
-static void __init map_kernel(pgd_t *pgdp)
+static void __init map_kernel(pgd_t *pgdp, phys_addr_t pgdir_phys)
 {
 	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
 				vmlinux_initdata, vmlinux_data, vmlinux_pgdir;
@@ -603,8 +603,7 @@ static void __init map_kernel(pgd_t *pgdp)
 			   __pa_symbol(_data), PAGE_KERNEL,
 			   &vmlinux_data, 0, VM_NO_GUARD);
 	map_kernel_segment(pgdp, __pgdir_segment_start, __pgdir_segment_end,
-			   __pa_symbol(__pgdir_segment_start), PAGE_KERNEL,
-			   &vmlinux_pgdir, 0, 0);
+			   pgdir_phys, PAGE_KERNEL, &vmlinux_pgdir, 0, 0);
 
 	if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
 		/*
@@ -639,36 +638,28 @@ static void __init map_kernel(pgd_t *pgdp)
  */
 void __init paging_init(void)
 {
-	phys_addr_t pgd_phys = early_pgtable_alloc();
-	pgd_t *pgdp = pgd_set_fixmap(pgd_phys);
+	int pgdir_segment_size = __pgdir_segment_end - __pgdir_segment_start;
+	phys_addr_t pgdir_phys = memblock_alloc(pgdir_segment_size, PAGE_SIZE);
+	phys_addr_t p;
+	pgd_t *pgdp;
+
+	for (p = 0; p < pgdir_segment_size; p += PAGE_SIZE)
+		clear_page_phys(pgdir_phys + p);
 
-	__pa_swapper_pg_dir = __pa_symbol(swapper_pg_dir);
+	__pa_swapper_pg_dir = pgdir_phys + (u64)swapper_pg_dir -
+			      (u64)__pgdir_segment_start;
 
-	map_kernel(pgdp);
+	pgdp = pgd_set_fixmap(__pa_swapper_pg_dir);
+
+	map_kernel(pgdp, pgdir_phys);
 	map_mem(pgdp);
 
-	/*
-	 * We want to reuse the original swapper_pg_dir so we don't have to
-	 * communicate the new address to non-coherent secondaries in
-	 * secondary_entry, and so cpu_switch_mm can generate the address with
-	 * adrp+add rather than a load from some global variable.
-	 *
-	 * To do this we need to go via a temporary pgd.
-	 */
-	cpu_replace_ttbr1(pgd_phys);
-	memcpy(swapper_pg_dir, pgdp, PGD_SIZE);
 	cpu_replace_ttbr1(__pa_swapper_pg_dir);
 
 	pgd_clear_fixmap();
-	memblock_free(pgd_phys, PAGE_SIZE);
 
-	/*
-	 * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
-	 * allocated with it.
-	 */
-	memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
-		      __pa_symbol(swapper_pg_end) - __pa_symbol(swapper_pg_dir)
-		      - PAGE_SIZE);
+	/* the statically allocated pgdir is no longer used after this point */
+	memblock_free(__pa_symbol(__pgdir_segment_start), pgdir_segment_size);
 }
 
 /*
-- 
2.11.0


* [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment
  2018-03-19 11:19 ` [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment Ard Biesheuvel
@ 2018-03-19 16:17   ` Ard Biesheuvel
  0 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-19 16:17 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 March 2018 at 19:19, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> In order to avoid leaking the physical placement of the kernel via
> the value of TTBR1_EL1 on platforms that are affected by variant 3a,
> replace the statically allocated page table region with a dynamically
> allocated buffer whose placement in the physical address space does
> not correlate with the placement of the kernel itself.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/mm/mmu.c | 41 ++++++++------------
>  1 file changed, 16 insertions(+), 25 deletions(-)
>

Note: this patch needs some work to use the correct
__pa_swapper_pg_dir value when booting secondaries, which is not
complicated, just missing.


> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 55c84d63244d..6c16e71c26e2 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -572,7 +572,7 @@ core_initcall(map_entry_trampoline);
>  /*
>   * Create fine-grained mappings for the kernel.
>   */
> -static void __init map_kernel(pgd_t *pgdp)
> +static void __init map_kernel(pgd_t *pgdp, phys_addr_t pgdir_phys)
>  {
>         static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
>                                 vmlinux_initdata, vmlinux_data, vmlinux_pgdir;
> @@ -603,8 +603,7 @@ static void __init map_kernel(pgd_t *pgdp)
>                            __pa_symbol(_data), PAGE_KERNEL,
>                            &vmlinux_data, 0, VM_NO_GUARD);
>         map_kernel_segment(pgdp, __pgdir_segment_start, __pgdir_segment_end,
> -                          __pa_symbol(__pgdir_segment_start), PAGE_KERNEL,
> -                          &vmlinux_pgdir, 0, 0);
> +                          pgdir_phys, PAGE_KERNEL, &vmlinux_pgdir, 0, 0);
>
>         if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
>                 /*
> @@ -639,36 +638,28 @@ static void __init map_kernel(pgd_t *pgdp)
>   */
>  void __init paging_init(void)
>  {
> -       phys_addr_t pgd_phys = early_pgtable_alloc();
> -       pgd_t *pgdp = pgd_set_fixmap(pgd_phys);
> +       int pgdir_segment_size = __pgdir_segment_end - __pgdir_segment_start;
> +       phys_addr_t pgdir_phys = memblock_alloc(pgdir_segment_size, PAGE_SIZE);
> +       phys_addr_t p;
> +       pgd_t *pgdp;
> +
> +       for (p = 0; p < pgdir_segment_size; p += PAGE_SIZE)
> +               clear_page_phys(pgdir_phys + p);
>
> -       __pa_swapper_pg_dir = __pa_symbol(swapper_pg_dir);
> +       __pa_swapper_pg_dir = pgdir_phys + (u64)swapper_pg_dir -
> +                             (u64)__pgdir_segment_start;
>
> -       map_kernel(pgdp);
> +       pgdp = pgd_set_fixmap(__pa_swapper_pg_dir);
> +
> +       map_kernel(pgdp, pgdir_phys);
>         map_mem(pgdp);
>
> -       /*
> -        * We want to reuse the original swapper_pg_dir so we don't have to
> -        * communicate the new address to non-coherent secondaries in
> -        * secondary_entry, and so cpu_switch_mm can generate the address with
> -        * adrp+add rather than a load from some global variable.
> -        *
> -        * To do this we need to go via a temporary pgd.
> -        */
> -       cpu_replace_ttbr1(pgd_phys);
> -       memcpy(swapper_pg_dir, pgdp, PGD_SIZE);
>         cpu_replace_ttbr1(__pa_swapper_pg_dir);
>
>         pgd_clear_fixmap();
> -       memblock_free(pgd_phys, PAGE_SIZE);
>
> -       /*
> -        * We only reuse the PGD from the swapper_pg_dir, not the pud + pmd
> -        * allocated with it.
> -        */
> -       memblock_free(__pa_symbol(swapper_pg_dir) + PAGE_SIZE,
> -                     __pa_symbol(swapper_pg_end) - __pa_symbol(swapper_pg_dir)
> -                     - PAGE_SIZE);
> +       /* the statically allocated pgdir is no longer used after this point */
> +       memblock_free(__pa_symbol(__pgdir_segment_start), pgdir_segment_size);
>  }
>
>  /*
> --
> 2.11.0
>


* [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment
  2018-03-19 11:19 ` [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment Ard Biesheuvel
@ 2018-03-20  3:33   ` Mark Rutland
  2018-03-20  4:09     ` Ard Biesheuvel
  0 siblings, 1 reply; 10+ messages in thread
From: Mark Rutland @ 2018-03-20  3:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Mar 19, 2018 at 07:19:53PM +0800, Ard Biesheuvel wrote:
> In preparation for mapping a physically non-adjacent memory region
> as backing for the vmlinux segment covering the trampoline and primary
> level swapper_pg_dir regions, make the physical address an explicit
> argument of map_kernel_segment().
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/mm/mmu.c | 22 ++++++++++++--------
>  1 file changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 8c704f1e53c2..007b2e32ca71 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -507,10 +507,10 @@ void mark_rodata_ro(void)
>  }
>  
>  static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
> -				      pgprot_t prot, struct vm_struct *vma,
> -				      int flags, unsigned long vm_flags)
> +				      phys_addr_t pa_start, pgprot_t prot,
> +				      struct vm_struct *vma, int flags,
> +				      unsigned long vm_flags)
>  {
> -	phys_addr_t pa_start = __pa_symbol(va_start);
>  	unsigned long size = va_end - va_start;
>  
>  	BUG_ON(!PAGE_ALIGNED(pa_start));

How about we rename this to __map_kernel_segment(), and have a
map_kernel_segment wrapper that does the __pa_symbol() stuff?

That would avoid some redundancy in map_kernel(), and then we can use
__map_kernel_segment() for the trampoline bits.

Thanks,
Mark.

> @@ -585,15 +585,19 @@ static void __init map_kernel(pgd_t *pgdp)
>  	 * Only rodata will be remapped with different permissions later on,
>  	 * all other segments are allowed to use contiguous mappings.
>  	 */
> -	map_kernel_segment(pgdp, _text, _etext, text_prot, &vmlinux_text, 0,
> -			   VM_NO_GUARD);
> -	map_kernel_segment(pgdp, __start_rodata, __inittext_begin, PAGE_KERNEL,
> +	map_kernel_segment(pgdp, _text, _etext, __pa_symbol(_text), text_prot,
> +			   &vmlinux_text, 0, VM_NO_GUARD);
> +	map_kernel_segment(pgdp, __start_rodata, __inittext_begin,
> +			   __pa_symbol(__start_rodata), PAGE_KERNEL,
>  			   &vmlinux_rodata, NO_CONT_MAPPINGS, VM_NO_GUARD);
> -	map_kernel_segment(pgdp, __inittext_begin, __inittext_end, text_prot,
> +	map_kernel_segment(pgdp, __inittext_begin, __inittext_end,
> +			   __pa_symbol(__inittext_begin), text_prot,
>  			   &vmlinux_inittext, 0, VM_NO_GUARD);
> -	map_kernel_segment(pgdp, __initdata_begin, __initdata_end, PAGE_KERNEL,
> +	map_kernel_segment(pgdp, __initdata_begin, __initdata_end,
> +			   __pa_symbol(__initdata_begin), PAGE_KERNEL,
>  			   &vmlinux_initdata, 0, VM_NO_GUARD);
> -	map_kernel_segment(pgdp, _data, _end, PAGE_KERNEL, &vmlinux_data, 0, 0);
> +	map_kernel_segment(pgdp, _data, _end, __pa_symbol(_data), PAGE_KERNEL,
> +			   &vmlinux_data, 0, 0);
>  
>  	if (!READ_ONCE(pgd_val(*pgd_offset_raw(pgdp, FIXADDR_START)))) {
>  		/*
> -- 
> 2.11.0
> 


* [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment
  2018-03-20  3:33   ` Mark Rutland
@ 2018-03-20  4:09     ` Ard Biesheuvel
  0 siblings, 0 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2018-03-20  4:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 20 March 2018 at 11:33, Mark Rutland <mark.rutland@arm.com> wrote:
> On Mon, Mar 19, 2018 at 07:19:53PM +0800, Ard Biesheuvel wrote:
>> In preparation for mapping a physically non-adjacent memory region
>> as backing for the vmlinux segment covering the trampoline and primary
>> level swapper_pg_dir regions, make the physical address an explicit
>> argument of map_kernel_segment().
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/mm/mmu.c | 22 ++++++++++++--------
>>  1 file changed, 13 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 8c704f1e53c2..007b2e32ca71 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -507,10 +507,10 @@ void mark_rodata_ro(void)
>>  }
>>
>>  static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
>> -                                   pgprot_t prot, struct vm_struct *vma,
>> -                                   int flags, unsigned long vm_flags)
>> +                                   phys_addr_t pa_start, pgprot_t prot,
>> +                                   struct vm_struct *vma, int flags,
>> +                                   unsigned long vm_flags)
>>  {
>> -     phys_addr_t pa_start = __pa_symbol(va_start);
>>       unsigned long size = va_end - va_start;
>>
>>       BUG_ON(!PAGE_ALIGNED(pa_start));
>
> How about we rename this to __map_kernel_segment(), and have a
> map_kernel_segment wrapper that does the __pa_symbol() stuff?
>
> That would avoid some redundancy in map_kernel(), and then we can use
> __map_kernel_segment() for the trampoline bits.
>

Fair enough.


end of thread (newest message: 2018-03-20  4:09 UTC)

Thread overview: 10+ messages
2018-03-19 11:19 [RFC PATCH 0/6] replace static mapping for pgdir region Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 1/6] arm64/mm: add explicit physical address argument to map_kernel_segment Ard Biesheuvel
2018-03-20  3:33   ` Mark Rutland
2018-03-20  4:09     ` Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 2/6] arm64/mm: create dedicated segment for pgdir mappings Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 3/6] arm64/mm: use physical address as cpu_replace_ttbr1() argument Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 4/6] arm64/mm: stop using __pa_symbol() for swapper_pg_dir Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 5/6] arm64/mm: factor out clear_page() for unmapped memory Ard Biesheuvel
2018-03-19 11:19 ` [RFC PATCH 6/6] arm64/mm: use independent physical allocation for pgdir segment Ard Biesheuvel
2018-03-19 16:17   ` Ard Biesheuvel
