* [PATCH 0/6] arm64: efi: boot with MMU and caches on if possible
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

This small series is what remains now that most of the prerequisite
changes shared with other work have landed. It is a follow-up to [0],
and implements the changes needed to let the EFI stub enter the kernel
proper with the MMU and caches enabled.

This is possible because the EFI spec mandates that all of memory is
mapped, so we can rely on that mapping to set up the initial ID map,
instead of first taking down the MMU and then populating the page
tables using non-cacheable accesses, which require special care in
terms of cache coherency. It also means that there is no need to clean
the executable image to the point of coherency (with the exception of
the contents of .idmap.text, which contains the code that switches
from the firmware's ID map to the kernel's).
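
To illustrate the difference, here is a rough C sketch (not the actual
kernel code; write_early_pte() is a hypothetical helper):

    #include <asm/cacheflush.h>
    #include <linux/types.h>

    /*
     * With the MMU off, page table stores bypass the caches, so any
     * stale cached copy of the entry must be invalidated to the PoC.
     * With the MMU on, plain cacheable stores suffice.
     */
    static void write_early_pte(u64 *ptep, u64 pte, bool mmu_on)
    {
            *ptep = pte;
            if (!mmu_on)
                    dcache_inval_poc((unsigned long)ptep,
                                     (unsigned long)ptep + sizeof(pte));
    }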

Given that the image will typically be mapped by the firmware according
to the section descriptors in the PE/COFF header (R-X for .text and
.rodata, RW- for .data and .bss), this also has a slight robustness
advantage.

Note that this does not update the documented boot protocol [yet],
although any loader could already take advantage of it: load the
kernel image at any 64k-aligned physical offset, and enter it with the
MMU still enabled, at either EL2 or EL1.
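
A loader-side policy could look like this (a sketch only;
enter_kernel_mmu_on() and enter_kernel_mmu_off() are made-up names,
not a real API):

    if (IS_ALIGNED(image_base, SZ_64K))
            enter_kernel_mmu_on(image_base + entry_offset, fdt);
    else
            enter_kernel_mmu_off(image_base + entry_offset, fdt);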

[0] https://lore.kernel.org/linux-arm-kernel/20220330154205.2483167-1-ardb@kernel.org/

Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Pierre-Clément Tosi <ptosi@google.com>
Cc: Quentin Perret <qperret@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>

Ard Biesheuvel (6):
  arm64: lds: reduce effective minimum image alignment to 64k
  arm64: kernel: move ID map out of .text mapping
  arm64: head: record the MMU state at primary entry
  arm64: head: avoid cache invalidation when entering with the MMU on
  arm64: head: clean the ID map page to the PoC
  arm64: efi/libstub: enter with the MMU on if executing in place

 arch/arm64/include/asm/efi.h              |  7 ---
 arch/arm64/kernel/efi-entry.S             |  4 ++
 arch/arm64/kernel/head.S                  | 45 ++++++++++++++++++--
 arch/arm64/kernel/vmlinux.lds.S           | 13 +++++-
 arch/arm64/mm/proc.S                      |  2 -
 drivers/firmware/efi/libstub/arm64-stub.c |  2 +-
 include/linux/efi.h                       |  6 +--
 7 files changed, 58 insertions(+), 21 deletions(-)

-- 
2.35.1


* [PATCH 1/6] arm64: lds: reduce effective minimum image alignment to 64k
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

Our segment alignment is 64k for all configurations, and
coincidentally, this is the largest alignment supported by the PE/COFF
executable format used by EFI. This means that, in general, there is
no need to move the image around in memory after it has been loaded by
the firmware, which is advantageous because it permits us to rely on
the memory attributes set by the firmware (R-X for [_text,
__inittext_end] and RW- for [__initdata_begin, _end]), and to jump
right from the EFI stub into the image with the MMU and caches
enabled.

However, the minimum alignment of the image is actually 128k on
64k-page configurations with CONFIG_VMAP_STACK=y, due to the presence
of a single 128k-aligned object in the image: the stack of the init
task.

Work around this by adding some padding before the init stack
allocation, so that the stack pointer can be rounded down to a
suitably aligned value at runtime if the image is not aligned to 128k
in memory.

Note that this does not affect the boot protocol, which still requires 2
MiB alignment for bare metal boot.
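
In C terms, the fixup below amounts to rounding the stack base down
before deriving the initial stack pointer (illustrative only):

    /*
     * With 64k pages and VMAP_STACK, THREAD_ALIGN (128k) exceeds
     * SEGMENT_ALIGN (64k); the padding added to the linker script
     * guarantees that rounding down stays inside the allocation.
     */
    stack &= ~(THREAD_ALIGN - 1);   /* bic tmp1, tmp1, #THREAD_ALIGN - 1 */
    sp = stack + THREAD_SIZE - PT_REGS_SIZE;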

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/efi.h              |  7 -------
 arch/arm64/kernel/head.S                  |  3 +++
 arch/arm64/kernel/vmlinux.lds.S           | 11 ++++++++++-
 drivers/firmware/efi/libstub/arm64-stub.c |  2 +-
 include/linux/efi.h                       |  6 +-----
 5 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index ad55079abe47..3be3efee8fac 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -57,13 +57,6 @@ efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
 
 /* arch specific definitions used by the stub code */
 
-/*
- * In some configurations (e.g. VMAP_STACK && 64K pages), stacks built into the
- * kernel need greater alignment than we require the segments to be padded to.
- */
-#define EFI_KIMG_ALIGN	\
-	(SEGMENT_ALIGN > THREAD_ALIGN ? SEGMENT_ALIGN : THREAD_ALIGN)
-
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
  * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5089660788fd..09b0cddf2161 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -394,6 +394,9 @@ SYM_FUNC_END(create_kernel_mapping)
 	msr	sp_el0, \tsk
 
 	ldr	\tmp1, [\tsk, #TSK_STACK]
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	bic	\tmp1, \tmp1, #THREAD_ALIGN - 1
+#endif
 	add	sp, \tmp1, #THREAD_SIZE
 	sub	sp, sp, #PT_REGS_SIZE
 
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 45131e354e27..0efccdf52be2 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -274,7 +274,16 @@ SECTIONS
 
 	_data = .;
 	_sdata = .;
-	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+#if THREAD_ALIGN > SEGMENT_ALIGN
+	/*
+	 * Add some padding for the init stack so we can fix up any potential
+	 * misalignment at runtime. In practice, this can only occur on 64k
+	 * pages configurations with CONFIG_VMAP_STACK=y.
+	 */
+	. += THREAD_ALIGN - SEGMENT_ALIGN;
+	ASSERT(. == init_stack, "init_stack not at start of RW_DATA as expected")
+#endif
+	RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, SEGMENT_ALIGN)
 
 	/*
 	 * Data written with the MMU off but read with the MMU on requires
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index 577173ee1f83..ad7392e6c200 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -98,7 +98,7 @@ efi_status_t handle_kernel_image(unsigned long *image_addr,
 	 * 2M alignment if KASLR was explicitly disabled, even if it was not
 	 * going to be activated to begin with.
 	 */
-	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : EFI_KIMG_ALIGN;
+	u64 min_kimg_align = efi_nokaslr ? MIN_KIMG_ALIGN : SEGMENT_ALIGN;
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		efi_guid_t li_fixed_proto = LINUX_EFI_LOADED_IMAGE_FIXED_GUID;
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 7d9b0bb47eb3..492497054a5a 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -416,11 +416,7 @@ void efi_native_runtime_setup(void);
 /*
  * This GUID may be installed onto the kernel image's handle as a NULL protocol
  * to signal to the stub that the placement of the image should be respected,
- * and moving the image in physical memory is undesirable. To ensure
- * compatibility with 64k pages kernels with virtually mapped stacks, and to
- * avoid defeating physical randomization, this protocol should only be
- * installed if the image was placed at a randomized 128k aligned address in
- * memory.
+ * and moving the image in physical memory is undesirable.
  */
 #define LINUX_EFI_LOADED_IMAGE_FIXED_GUID	EFI_GUID(0xf5a37b6d, 0x3344, 0x42a5,  0xb6, 0xbb, 0x97, 0x86, 0x48, 0xc1, 0x89, 0x0a)
 
-- 
2.35.1


* [PATCH 2/6] arm64: kernel: move ID map out of .text mapping
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

Reorganize the ID map slightly so that it retains only code that is
executed via the 1:1 mapping. This allows us to move the ID map out of
the .text segment, as it will no longer need executable permissions
via the kernel mapping.
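
The pattern is to tag such code explicitly, following the conventions
already used in head.S (the function names below are illustrative):

            .section ".idmap.text", "awx"   // executed via the 1:1 mapping
    SYM_FUNC_START(runs_via_idmap)
            ret
    SYM_FUNC_END(runs_via_idmap)

            .text                           // executed via the kernel mapping
    SYM_FUNC_START(runs_via_kernel_map)
            ret
    SYM_FUNC_END(runs_via_kernel_map)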

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S        | 5 ++++-
 arch/arm64/kernel/vmlinux.lds.S | 2 +-
 arch/arm64/mm/proc.S            | 2 --
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 09b0cddf2161..2210bbd13cf9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -464,7 +464,7 @@ SYM_FUNC_END(__primary_switched)
  * end early head section, begin head code that is also used for
  * hotplug and needs to have the same protections as the text region
  */
-	.section ".idmap.text","awx"
+	.text
 
 /*
  * Starting from EL2 or EL1, configure the CPU to execute at the highest
@@ -556,6 +556,7 @@ SYM_FUNC_START_LOCAL(set_cpu_boot_mode_flag)
 	ret
 SYM_FUNC_END(set_cpu_boot_mode_flag)
 
+	.section ".idmap.text","awx"
 	/*
 	 * This provides a "holding pen" for platforms to hold all secondary
 	 * cores until we're ready for them to initialise.
@@ -600,6 +601,7 @@ SYM_FUNC_START_LOCAL(secondary_startup)
 	br	x8
 SYM_FUNC_END(secondary_startup)
 
+	.text
 SYM_FUNC_START_LOCAL(__secondary_switched)
 	mov	x0, x20
 	bl	set_cpu_boot_mode_flag
@@ -659,6 +661,7 @@ SYM_FUNC_END(__secondary_too_slow)
  * Checks if the selected granule size is supported by the CPU.
  * If it isn't, park the CPU
  */
+	.section ".idmap.text","awx"
 SYM_FUNC_START(__enable_mmu)
 	mrs	x3, ID_AA64MMFR0_EL1
 	ubfx	x3, x3, #ID_AA64MMFR0_TGRAN_SHIFT, 4
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 0efccdf52be2..5002d869fa7f 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -168,7 +168,6 @@ SECTIONS
 			LOCK_TEXT
 			KPROBES_TEXT
 			HYPERVISOR_TEXT
-			IDMAP_TEXT
 			*(.gnu.warning)
 		. = ALIGN(16);
 		*(.got)			/* Global offset table		*/
@@ -195,6 +194,7 @@ SECTIONS
 		TRAMP_TEXT
 		HIBERNATE_TEXT
 		KEXEC_TEXT
+		IDMAP_TEXT
 		. = ALIGN(PAGE_SIZE);
 	}
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 7837a69524c5..113a4fedf5b8 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -107,7 +107,6 @@ SYM_FUNC_END(cpu_do_suspend)
  *
  * x0: Address of context pointer
  */
-	.pushsection ".idmap.text", "awx"
 SYM_FUNC_START(cpu_do_resume)
 	ldp	x2, x3, [x0]
 	ldp	x4, x5, [x0, #16]
@@ -163,7 +162,6 @@ alternative_else_nop_endif
 	isb
 	ret
 SYM_FUNC_END(cpu_do_resume)
-	.popsection
 #endif
 
 	.pushsection ".idmap.text", "awx"
-- 
2.35.1


* [PATCH 3/6] arm64: head: record the MMU state at primary entry
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

Prepare for dealing with primary entry with the MMU and caches
enabled, by recording in register x19 whether or not we entered with
the MMU on.

While at it, add pre_disable_mmu_workaround macro invocations to
init_kernel_el, as its manipulation of SCTLR_ELx may amount to
disabling the MMU once subsequent patches are applied.
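
In C terms, the new record_mmu_state() boils down to the following (an
illustration only; the assembly below is authoritative):

    u64 sctlr = read_sysreg(CurrentEL) == CurrentEL_EL2 ?
                read_sysreg(sctlr_el2) : read_sysreg(sctlr_el1);
    bool mmu_on = sctlr & SCTLR_ELx_M;      /* kept in x19/w19 */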

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2210bbd13cf9..a79c842395ee 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -77,6 +77,7 @@
 	 * primary lowlevel boot path:
 	 *
 	 *  Register   Scope                      Purpose
+	 *  x19        primary_entry() .. start_kernel()        whether we entered with the MMU on
 	 *  x20        primary_entry() .. __primary_switch()    CPU boot mode
 	 *  x21        primary_entry() .. start_kernel()        FDT pointer passed at boot in x0
 	 *  x22        create_idmap() .. start_kernel()         ID map VA of the DT blob
@@ -85,6 +86,7 @@
 	 *  x28        create_idmap()                           callee preserved temp register
 	 */
 SYM_CODE_START(primary_entry)
+	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
@@ -107,6 +109,17 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+SYM_CODE_START_LOCAL(record_mmu_state)
+	mrs	x19, CurrentEL
+	cmp	x19, #CurrentEL_EL2
+	mrs	x19, sctlr_el1
+	b.ne	0f
+	mrs	x19, sctlr_el2
+0:	tst	x19, #SCTLR_ELx_M
+	cset	w19, ne
+	ret
+SYM_CODE_END(record_mmu_state)
+
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
  */
@@ -484,6 +497,7 @@ SYM_FUNC_START(init_kernel_el)
 
 SYM_INNER_LABEL(init_el1, SYM_L_LOCAL)
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 	isb
 	mov_q	x0, INIT_PSTATE_EL1
@@ -515,6 +529,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 
 	/* Switching to VHE requires a sane SCTLR_EL1 as a start */
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr_s	SYS_SCTLR_EL12, x0
 
 	/*
@@ -530,6 +545,7 @@ SYM_INNER_LABEL(init_el2, SYM_L_LOCAL)
 
 1:
 	mov_q	x0, INIT_SCTLR_EL1_MMU_OFF
+	pre_disable_mmu_workaround
 	msr	sctlr_el1, x0
 
 	msr	elr_el2, lr
-- 
2.35.1


* [PATCH 4/6] arm64: head: avoid cache invalidation when entering with the MMU on
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

If we enter with the MMU on, there is no need for explicit cache
invalidation of stores to memory, as they will be coherent with the
caches.

Take advantage of this by creating the ID map with the MMU still
enabled if that is how we entered, and by avoiding the cache
invalidation calls in that case.
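
The resulting early boot flow, sketched as C-like pseudocode (the
assembly below is the real code):

    record_mmu_state();     /* x19 = entered with the MMU on?           */
    preserve_boot_args();   /* skips dcache_inval_poc() if x19 != 0     */
    create_idmap();         /* ditto; now runs before the MMU goes off  */
    init_kernel_el();       /* may disable the MMU from here on         */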

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a79c842395ee..42fc7e980b35 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -88,9 +88,9 @@
 SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
+	bl	create_idmap
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
-	bl	create_idmap
 
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -130,11 +130,13 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	stp	x21, x1, [x0]			// x0 .. x3 at kernel entry
 	stp	x2, x3, [x0, #16]
 
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy				// needed before dc ivac with
 						// MMU off
 
 	add	x1, x0, #0x20			// 4 x 8 bytes
 	b	dcache_inval_poc		// tail call
+0:	ret
 SYM_CODE_END(preserve_boot_args)
 
 SYM_FUNC_START_LOCAL(clear_page_tables)
@@ -371,12 +373,13 @@ SYM_FUNC_START_LOCAL(create_idmap)
 	 * accesses (MMU disabled), invalidate those tables again to
 	 * remove any speculatively loaded cache lines.
 	 */
+	cbnz	x19, 0f				// skip cache invalidation if MMU is on
 	dmb	sy
 
 	adrp	x0, init_idmap_pg_dir
 	adrp	x1, init_idmap_pg_end
 	bl	dcache_inval_poc
-	ret	x28
+0:	ret	x28
 SYM_FUNC_END(create_idmap)
 
 SYM_FUNC_START_LOCAL(create_kernel_mapping)
-- 
2.35.1


* [PATCH 5/6] arm64: head: clean the ID map page to the PoC
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

If we enter with the MMU and caches enabled, the caller may not have
performed any cache maintenance, so clean the ID-mapped page to the
PoC to ensure that instruction and data accesses with the MMU off see
the correct data.

Note that this means primary_entry() itself needs to be moved into the
ID map as well, as we will return from init_kernel_el() with the MMU and
caches off.
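
In C terms, the added step is simply (a sketch; see the assembly
below):

    if (mmu_on)     /* i.e., x19 != 0 */
            dcache_clean_poc((unsigned long)__idmap_text_start,
                             (unsigned long)__idmap_text_end);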

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/head.S | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 42fc7e980b35..4ca4d66b418f 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -70,7 +70,7 @@
 
 	__EFI_PE_HEADER
 
-	__INIT
+	.section ".idmap.text","awx"
 
 	/*
 	 * The following callee saved general purpose registers are used on the
@@ -89,6 +89,17 @@ SYM_CODE_START(primary_entry)
 	bl	record_mmu_state
 	bl	preserve_boot_args
 	bl	create_idmap
+
+	/*
+	 * If we entered with the MMU and caches on, clean the ID mapped part
+	 * of the primary boot code to the PoC so we can safely execute it with
+	 * the MMU off.
+	 */
+	cbz	x19, 0f
+	adrp	x0, __idmap_text_start
+	adr_l	x1, __idmap_text_end
+	bl	dcache_clean_poc
+0:
 	bl	init_kernel_el			// w0=cpu_boot_mode
 	mov	x20, x0
 
@@ -109,6 +120,7 @@ SYM_CODE_START(primary_entry)
 	b	__primary_switch
 SYM_CODE_END(primary_entry)
 
+	__INIT
 SYM_CODE_START_LOCAL(record_mmu_state)
 	mrs	x19, CurrentEL
 	cmp	x19, #CurrentEL_EL2
-- 
2.35.1


* [PATCH 6/6] arm64: efi/libstub: enter with the MMU on if executing in place
From: Ard Biesheuvel @ 2022-06-30 14:42 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, Will Deacon, Marc Zyngier,
	Pierre-Clément Tosi, Quentin Perret, Mark Rutland

If the kernel image has not been moved from the place where it was
loaded by the firmware, just call the kernel entrypoint directly and
keep the MMU and caches enabled. This removes the need for any cache
invalidation in the entry path.
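
The control flow of the change, as a C sketch (enter_kernel() and
clean_image_and_enter() are illustrative names; the assembly below is
the real code):

    if (image_base == (u64)_text)   /* image still executing in place? */
            enter_kernel(entrypoint, fdt);          /* MMU stays on    */
    else
            clean_image_and_enter(entrypoint, fdt); /* existing path   */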

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kernel/efi-entry.S | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index 61a87fa1c305..0da0b373cf32 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -23,6 +23,10 @@ SYM_CODE_START(efi_enter_kernel)
 	add	x19, x0, x2		// relocated Image entrypoint
 	mov	x20, x1			// DTB address
 
+	adrp	x3, _text		// just call the entrypoint
+	cmp	x0, x3			// directly if the image was
+	b.eq	2f			// not moved around in memory
+
 	/*
 	 * Clean the copied Image to the PoC, and ensure it is not shadowed by
 	 * stale icache entries from before relocation.
-- 
2.35.1

