* [PATCH 0/6] x86: head_64.S spring cleaning
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

After doing some cleanup work on the EFI code in head_64.S, the mixed
mode code in particular, I noticed that the memory encryption pieces
could use some attention as well, so I cleaned those up too.

I have been sitting on these patches since November, waiting for the
right time to post them, i.e., after the SEV/SNP review had finished.
This has yet to happen, so I'm posting them now instead. Please feel
free to disregard for the time being, and propose a suitable timeframe
to repost them if this is likely to conflict with ongoing work.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michael Roth <michael.roth@amd.com>

Ard Biesheuvel (6):
  x86/head_64: clean up mixed mode 32-bit entry code
  efi/x86: simplify IDT/GDT preserve/restore
  x86/compressed: move startup32_load_idt() out of startup code
  x86/compressed: move startup32_check_sev_cbit out of startup code
  x86/compressed: adhere to calling convention in
    get_sev_encryption_bit()
  x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y

 arch/x86/boot/compressed/Makefile       |   8 +-
 arch/x86/boot/compressed/efi_mixed.S    | 352 ++++++++++++++++++++
 arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
 arch/x86/boot/compressed/head_32.S      |   4 -
 arch/x86/boot/compressed/head_64.S      | 309 +----------------
 arch/x86/boot/compressed/mem_encrypt.S  | 146 +++++++-
 drivers/firmware/efi/libstub/x86-stub.c |   3 +-
 7 files changed, 506 insertions(+), 511 deletions(-)
 create mode 100644 arch/x86/boot/compressed/efi_mixed.S
 delete mode 100644 arch/x86/boot/compressed/efi_thunk_64.S

-- 
2.35.1


* [PATCH 1/6] x86/head_64: clean up mixed mode 32-bit entry code
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

The x86_64 32-bit entry code is a jumble of EFI and SEV routines, which
is not good for maintainability. Let's isolate the EFI mixed mode code
and combine it with the boot service thunk that lives in another .S
file, so that we can remove it from head_64.S.
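
As an aside, the status conversion performed at the end of
__efi64_thunk() is worth spelling out: EFI keeps its error flag in the
top bit of the status word, so widening a 32-bit status means moving
bit 31 to bit 63. In C terms, the roll/rorq pair amounts to the
following (illustration only, using kernel integer types):

    static inline u64 efi64_status(u32 status)
    {
            return (status & 0x7fffffffU) |             /* low bits unchanged */
                   ((u64)(status & 0x80000000U) << 32); /* error bit -> bit 63 */
    }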

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/Makefile       |   6 +-
 arch/x86/boot/compressed/efi_mixed.S    | 358 ++++++++++++++++++++
 arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
 arch/x86/boot/compressed/head_32.S      |   4 -
 arch/x86/boot/compressed/head_64.S      | 149 +-------
 drivers/firmware/efi/libstub/x86-stub.c |   3 +-
 6 files changed, 370 insertions(+), 345 deletions(-)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 35ce1a64068b..d6dbb46696a2 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,11 +107,11 @@ endif
 vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_INTEL_TDX_GUEST) += $(obj)/tdx.o $(obj)/tdcall.o
 
-vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
-efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
+vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
+vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
-$(obj)/vmlinux: $(vmlinux-objs-y) $(efi-obj-y) FORCE
+$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 	$(call if_changed,ld)
 
 OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
new file mode 100644
index 000000000000..6fd7ac517c53
--- /dev/null
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -0,0 +1,358 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2014, 2015 Intel Corporation; author Matt Fleming
+ *
+ * Early support for invoking 32-bit EFI services from a 64-bit kernel.
+ *
+ * Because this thunking occurs before ExitBootServices() we have to
+ * restore the firmware's 32-bit GDT and IDT before we make EFI service
+ * calls.
+ *
+ * On the plus side, we don't have to worry about mangling 64-bit
+ * addresses into 32-bits because we're executing with an identity
+ * mapped pagetable and haven't transitioned to 64-bit virtual addresses
+ * yet.
+ */
+
+#include <linux/linkage.h>
+#include <asm/msr.h>
+#include <asm/page_types.h>
+#include <asm/processor-flags.h>
+#include <asm/segment.h>
+
+	.text
+	.code64
+/*
+ * This is the first thing that runs after switching to long mode. Depending on
+ * whether we are using the EFI handover protocol or the compat entry point, we
+ * will either branch to the 64-bit EFI handover entrypoint at offset 0x390 in
+ * the image, or to the 64-bit EFI PE/COFF entrypoint efi_pe_entry(). In the
+ * former case, the bootloader must provide a struct bootparams pointer as the
+ * third argument, so we use that to disambiguate.
+ *
+ *                                                           +--------------+
+ *  +----------------+     +------------+              +---->| efi_pe_entry |
+ *  | efi32_pe_entry |---->|            |              |     +-----------+--+
+ *  +----------------+     |            |     +--------+-------------+   |
+ *                         | startup_32 |---->| startup_64_mixedmode |   |
+ *  +----------------+     |            |     +--------+-------------+   V
+ *  | efi32_handover |---->|            |              |     +-----------+------+
+ *  +----------------+     +------------+              +---->| efi64_stub_entry |
+ *                                                           +-----------+------+
+ *                         +------------+     +----------+               |
+ *                         | startup_64 |<----| efi_main |<--------------+
+ *                         +------------+     +----------+
+ */
+SYM_FUNC_START(startup_64_mixedmode)
+	lea	efi32_boot_args(%rip), %rdx
+	mov	0(%rdx), %edi
+	mov	4(%rdx), %esi
+	mov	8(%rdx), %edx		// saved bootparams pointer
+	test	%edx, %edx
+	jnz	efi64_stub_entry
+	/*
+	 * efi_pe_entry uses MS calling convention, which requires 32 bytes of
+	 * shadow space on the stack even if all arguments are passed in
+	 * registers. We also need an additional 8 bytes for the space that
+	 * would be occupied by the return address, and this also results in
+	 * the correct stack alignment for entry.
+	 */
+	sub	$40, %rsp
+	mov	%rdi, %rcx		// MS calling convention
+	mov	%rsi, %rdx
+	jmp	efi_pe_entry
+SYM_FUNC_END(startup_64_mixedmode)
+
+SYM_FUNC_START(__efi64_thunk)
+	push	%rbp
+	push	%rbx
+
+	movl	%ds, %eax
+	push	%rax
+	movl	%es, %eax
+	push	%rax
+	movl	%ss, %eax
+	push	%rax
+
+	/* Copy args passed on stack */
+	movq	0x30(%rsp), %rbp
+	movq	0x38(%rsp), %rbx
+	movq	0x40(%rsp), %rax
+
+	/*
+	 * Convert x86-64 ABI params to i386 ABI
+	 */
+	subq	$64, %rsp
+	movl	%esi, 0x0(%rsp)
+	movl	%edx, 0x4(%rsp)
+	movl	%ecx, 0x8(%rsp)
+	movl	%r8d, 0xc(%rsp)
+	movl	%r9d, 0x10(%rsp)
+	movl	%ebp, 0x14(%rsp)
+	movl	%ebx, 0x18(%rsp)
+	movl	%eax, 0x1c(%rsp)
+
+	leaq	0x20(%rsp), %rbx
+	sgdt	(%rbx)
+
+	addq	$16, %rbx
+	sidt	(%rbx)
+
+	leaq	1f(%rip), %rbp
+
+	/*
+	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
+	 * and IDT that was installed when the kernel started executing. The
+	 * pointers were saved at the efi32_stub_handover entry point below.
+	 *
+	 * Pass the saved DS selector to the 32-bit code, and use far return to
+	 * restore the saved CS selector.
+	 */
+	leaq	efi32_boot_idt(%rip), %rax
+	lidt	(%rax)
+	leaq	efi32_boot_gdt(%rip), %rax
+	lgdt	(%rax)
+
+	movzwl	efi32_boot_ds(%rip), %edx
+	movzwq	efi32_boot_cs(%rip), %rax
+	pushq	%rax
+	leaq	efi_enter32(%rip), %rax
+	pushq	%rax
+	lretq
+
+1:	addq	$64, %rsp
+	movq	%rdi, %rax
+
+	pop	%rbx
+	movl	%ebx, %ss
+	pop	%rbx
+	movl	%ebx, %es
+	pop	%rbx
+	movl	%ebx, %ds
+	/* Clear out 32-bit selector from FS and GS */
+	xorl	%ebx, %ebx
+	movl	%ebx, %fs
+	movl	%ebx, %gs
+
+	/*
+	 * Convert 32-bit status code into 64-bit.
+	 */
+	roll	$1, %eax
+	rorq	$1, %rax
+
+	pop	%rbx
+	pop	%rbp
+	RET
+SYM_FUNC_END(__efi64_thunk)
+
+	.code32
+/*
+ * EFI service pointer must be in %edi.
+ *
+ * The stack should represent the 32-bit calling convention.
+ */
+SYM_FUNC_START_LOCAL(efi_enter32)
+	/* Load firmware selector into data and stack segment registers */
+	movl	%edx, %ds
+	movl	%edx, %es
+	movl	%edx, %fs
+	movl	%edx, %gs
+	movl	%edx, %ss
+
+	/* Reload pgtables */
+	movl	%cr3, %eax
+	movl	%eax, %cr3
+
+	/* Disable paging */
+	movl	%cr0, %eax
+	btrl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+
+	/* Disable long mode via EFER */
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btrl	$_EFER_LME, %eax
+	wrmsr
+
+	call	*%edi
+
+	/* We must preserve return value */
+	movl	%eax, %edi
+
+	/*
+	 * Some firmware will return with interrupts enabled. Be sure to
+	 * disable them before we switch GDTs and IDTs.
+	 */
+	cli
+
+	lidtl	(%ebx)
+	subl	$16, %ebx
+
+	lgdtl	(%ebx)
+
+	movl	%cr4, %eax
+	btsl	$(X86_CR4_PAE_BIT), %eax
+	movl	%eax, %cr4
+
+	movl	%cr3, %eax
+	movl	%eax, %cr3
+
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btsl	$_EFER_LME, %eax
+	wrmsr
+
+	xorl	%eax, %eax
+	lldt	%ax
+
+	pushl	$__KERNEL_CS
+	pushl	%ebp
+
+	/* Enable paging */
+	movl	%cr0, %eax
+	btsl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+	lret
+SYM_FUNC_END(efi_enter32)
+
+#define ST32_boottime		60 // offsetof(efi_system_table_32_t, boottime)
+#define BS32_handle_protocol	88 // offsetof(efi_boot_services_32_t, handle_protocol)
+#define LI32_image_base		32 // offsetof(efi_loaded_image_32_t, image_base)
+
+/*
+ * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
+ *			       efi_system_table_32_t *sys_table)
+ *
+ * This is the EFI compat entrypoint that will be invoked by the 32-bit EFI
+ * firmware directly (provided that it implements support for compat
+ * entrypoints).
+ */
+SYM_FUNC_START(efi32_pe_entry)
+	pushl	%ebp
+	movl	%esp, %ebp
+	pushl	%eax				// dummy push to allocate loaded_image
+
+	pushl	%ebx				// save callee-save registers
+	pushl	%edi
+
+	call	verify_cpu			// check for long mode support
+	testl	%eax, %eax
+	movl	$0x80000003, %eax		// EFI_UNSUPPORTED
+	jnz	2f
+
+	call	1f
+1:	pop	%ebx
+
+	/* Get the loaded image protocol pointer from the image handle */
+	leal	-4(%ebp), %eax
+	pushl	%eax				// &loaded_image
+	leal	(loaded_image_proto - 1b)(%ebx), %eax
+	pushl	%eax				// pass the GUID address
+	pushl	8(%ebp)				// pass the image handle
+
+	/*
+	 * Note the alignment of the stack frame.
+	 *   sys_table
+	 *   handle             <-- 16-byte aligned on entry by ABI
+	 *   return address
+	 *   frame pointer
+	 *   loaded_image       <-- local variable
+	 *   saved %ebx		<-- 16-byte aligned here
+	 *   saved %edi
+	 *   &loaded_image
+	 *   &loaded_image_proto
+	 *   handle             <-- 16-byte aligned for call to handle_protocol
+	 */
+
+	movl	12(%ebp), %eax			// sys_table
+	movl	ST32_boottime(%eax), %eax	// sys_table->boottime
+	call	*BS32_handle_protocol(%eax)	// sys_table->boottime->handle_protocol
+	addl	$12, %esp			// restore argument space
+	testl	%eax, %eax
+	jnz	2f
+
+	movl	8(%ebp), %ecx			// image_handle
+	movl	12(%ebp), %edx			// sys_table
+	movl	-4(%ebp), %esi			// loaded_image
+	movl	LI32_image_base(%esi), %esi	// loaded_image->image_base
+	leal	(startup_32 - 1b)(%ebx), %ebp	// runtime address of startup_32
+	/*
+	 * We need to set the image_offset variable here since startup_32() will
+	 * use it before we get to the 64-bit efi_pe_entry() in C code.
+	 */
+	subl	%esi, %ebp
+	movl	%ebp, (image_offset - 1b)(%ebx)	// save image_offset
+	xorl	%esi, %esi
+	jmp	efi32_entry
+
+2:	popl	%edi				// restore callee-save registers
+	popl	%ebx
+	leave
+	RET
+SYM_FUNC_END(efi32_pe_entry)
+
+/*
+ * This is the EFI handover protocol entrypoint, which is invoked by jumping to
+ * an offset of 0x190 bytes into the kernel image after it has been loaded into
+ * memory by the EFI firmware or the bootloader.
+ */
+SYM_FUNC_START(efi32_handover)
+	add	$0x4, %esp			// Discard return address
+	popl	%ecx
+	popl	%edx
+	popl	%esi
+	jmp	efi32_entry
+SYM_FUNC_END(efi32_handover)
+
+SYM_FUNC_START_LOCAL(efi32_entry)
+	call	3f
+3:	pop	%ebx
+
+	/* Save firmware GDTR and code/data selectors */
+	sgdtl	(efi32_boot_gdt - 3b)(%ebx)
+	movw	%cs, (efi32_boot_cs - 3b)(%ebx)
+	movw	%ds, (efi32_boot_ds - 3b)(%ebx)
+
+	/* Store firmware IDT descriptor */
+	sidtl	(efi32_boot_idt - 3b)(%ebx)
+
+	leal	(efi32_boot_args - 3b)(%ebx), %ebx
+	movl	%ecx, 0(%ebx)
+	movl	%edx, 4(%ebx)
+	movl	%esi, 8(%ebx)
+	movb	$0x0, 12(%ebx)		// efi_is64
+
+	/* Disable paging */
+	movl	%cr0, %eax
+	btrl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+
+	jmp	startup_32
+SYM_FUNC_END(efi32_entry)
+
+	.section ".rodata"
+	/* EFI loaded image protocol GUID */
+	.balign 4
+SYM_DATA_START_LOCAL(loaded_image_proto)
+	.long	0x5b1b31a1
+	.word	0x9562, 0x11d2
+	.byte	0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
+SYM_DATA_END(loaded_image_proto)
+
+	.data
+	.balign	8
+SYM_DATA_START_LOCAL(efi32_boot_gdt)
+	.word	0
+	.quad	0
+SYM_DATA_END(efi32_boot_gdt)
+
+SYM_DATA_START_LOCAL(efi32_boot_idt)
+	.word	0
+	.quad	0
+SYM_DATA_END(efi32_boot_idt)
+
+SYM_DATA_LOCAL(efi32_boot_cs,	.word	0)
+SYM_DATA_LOCAL(efi32_boot_ds,	.word	0)
+SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
+SYM_DATA(efi_is64, .byte 1)
+
diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
deleted file mode 100644
index 67e7edcdfea8..000000000000
--- a/arch/x86/boot/compressed/efi_thunk_64.S
+++ /dev/null
@@ -1,195 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright (C) 2014, 2015 Intel Corporation; author Matt Fleming
- *
- * Early support for invoking 32-bit EFI services from a 64-bit kernel.
- *
- * Because this thunking occurs before ExitBootServices() we have to
- * restore the firmware's 32-bit GDT and IDT before we make EFI service
- * calls.
- *
- * On the plus side, we don't have to worry about mangling 64-bit
- * addresses into 32-bits because we're executing with an identity
- * mapped pagetable and haven't transitioned to 64-bit virtual addresses
- * yet.
- */
-
-#include <linux/linkage.h>
-#include <asm/msr.h>
-#include <asm/page_types.h>
-#include <asm/processor-flags.h>
-#include <asm/segment.h>
-
-	.code64
-	.text
-SYM_FUNC_START(__efi64_thunk)
-	push	%rbp
-	push	%rbx
-
-	movl	%ds, %eax
-	push	%rax
-	movl	%es, %eax
-	push	%rax
-	movl	%ss, %eax
-	push	%rax
-
-	/* Copy args passed on stack */
-	movq	0x30(%rsp), %rbp
-	movq	0x38(%rsp), %rbx
-	movq	0x40(%rsp), %rax
-
-	/*
-	 * Convert x86-64 ABI params to i386 ABI
-	 */
-	subq	$64, %rsp
-	movl	%esi, 0x0(%rsp)
-	movl	%edx, 0x4(%rsp)
-	movl	%ecx, 0x8(%rsp)
-	movl	%r8d, 0xc(%rsp)
-	movl	%r9d, 0x10(%rsp)
-	movl	%ebp, 0x14(%rsp)
-	movl	%ebx, 0x18(%rsp)
-	movl	%eax, 0x1c(%rsp)
-
-	leaq	0x20(%rsp), %rbx
-	sgdt	(%rbx)
-
-	addq	$16, %rbx
-	sidt	(%rbx)
-
-	leaq	1f(%rip), %rbp
-
-	/*
-	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
-	 * and IDT that was installed when the kernel started executing. The
-	 * pointers were saved at the EFI stub entry point in head_64.S.
-	 *
-	 * Pass the saved DS selector to the 32-bit code, and use far return to
-	 * restore the saved CS selector.
-	 */
-	leaq	efi32_boot_idt(%rip), %rax
-	lidt	(%rax)
-	leaq	efi32_boot_gdt(%rip), %rax
-	lgdt	(%rax)
-
-	movzwl	efi32_boot_ds(%rip), %edx
-	movzwq	efi32_boot_cs(%rip), %rax
-	pushq	%rax
-	leaq	efi_enter32(%rip), %rax
-	pushq	%rax
-	lretq
-
-1:	addq	$64, %rsp
-	movq	%rdi, %rax
-
-	pop	%rbx
-	movl	%ebx, %ss
-	pop	%rbx
-	movl	%ebx, %es
-	pop	%rbx
-	movl	%ebx, %ds
-	/* Clear out 32-bit selector from FS and GS */
-	xorl	%ebx, %ebx
-	movl	%ebx, %fs
-	movl	%ebx, %gs
-
-	/*
-	 * Convert 32-bit status code into 64-bit.
-	 */
-	roll	$1, %eax
-	rorq	$1, %rax
-
-	pop	%rbx
-	pop	%rbp
-	RET
-SYM_FUNC_END(__efi64_thunk)
-
-	.code32
-/*
- * EFI service pointer must be in %edi.
- *
- * The stack should represent the 32-bit calling convention.
- */
-SYM_FUNC_START_LOCAL(efi_enter32)
-	/* Load firmware selector into data and stack segment registers */
-	movl	%edx, %ds
-	movl	%edx, %es
-	movl	%edx, %fs
-	movl	%edx, %gs
-	movl	%edx, %ss
-
-	/* Reload pgtables */
-	movl	%cr3, %eax
-	movl	%eax, %cr3
-
-	/* Disable paging */
-	movl	%cr0, %eax
-	btrl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-
-	/* Disable long mode via EFER */
-	movl	$MSR_EFER, %ecx
-	rdmsr
-	btrl	$_EFER_LME, %eax
-	wrmsr
-
-	call	*%edi
-
-	/* We must preserve return value */
-	movl	%eax, %edi
-
-	/*
-	 * Some firmware will return with interrupts enabled. Be sure to
-	 * disable them before we switch GDTs and IDTs.
-	 */
-	cli
-
-	lidtl	(%ebx)
-	subl	$16, %ebx
-
-	lgdtl	(%ebx)
-
-	movl	%cr4, %eax
-	btsl	$(X86_CR4_PAE_BIT), %eax
-	movl	%eax, %cr4
-
-	movl	%cr3, %eax
-	movl	%eax, %cr3
-
-	movl	$MSR_EFER, %ecx
-	rdmsr
-	btsl	$_EFER_LME, %eax
-	wrmsr
-
-	xorl	%eax, %eax
-	lldt	%ax
-
-	pushl	$__KERNEL_CS
-	pushl	%ebp
-
-	/* Enable paging */
-	movl	%cr0, %eax
-	btsl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-	lret
-SYM_FUNC_END(efi_enter32)
-
-	.data
-	.balign	8
-SYM_DATA_START(efi32_boot_gdt)
-	.word	0
-	.quad	0
-SYM_DATA_END(efi32_boot_gdt)
-
-SYM_DATA_START(efi32_boot_idt)
-	.word	0
-	.quad	0
-SYM_DATA_END(efi32_boot_idt)
-
-SYM_DATA_START(efi32_boot_cs)
-	.word	0
-SYM_DATA_END(efi32_boot_cs)
-
-SYM_DATA_START(efi32_boot_ds)
-	.word	0
-SYM_DATA_END(efi32_boot_ds)
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 3b354eb9516d..6589ddd4cfaf 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -208,10 +208,6 @@ SYM_DATA_START_LOCAL(gdt)
 	.quad	0x00cf92000000ffff	/* __KERNEL_DS */
 SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)
 
-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-
 /*
  * Stack and heap for uncompression
  */
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index d33f060900d2..c2cdbd8a3375 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -259,32 +259,16 @@ SYM_FUNC_START(startup_32)
 	 * We place all of the values on our mini stack so lret can
 	 * used to perform that far jump.
 	 */
-	leal	rva(startup_64)(%ebp), %eax
 #ifdef CONFIG_EFI_MIXED
-	movl	rva(efi32_boot_args)(%ebp), %edi
-	testl	%edi, %edi
-	jz	1f
-	leal	rva(efi64_stub_entry)(%ebp), %eax
-	movl	rva(efi32_boot_args+4)(%ebp), %esi
-	movl	rva(efi32_boot_args+8)(%ebp), %edx	// saved bootparams pointer
-	testl	%edx, %edx
-	jnz	1f
-	/*
-	 * efi_pe_entry uses MS calling convention, which requires 32 bytes of
-	 * shadow space on the stack even if all arguments are passed in
-	 * registers. We also need an additional 8 bytes for the space that
-	 * would be occupied by the return address, and this also results in
-	 * the correct stack alignment for entry.
-	 */
-	subl	$40, %esp
-	leal	rva(efi_pe_entry)(%ebp), %eax
-	movl	%edi, %ecx			// MS calling convention
-	movl	%esi, %edx
-1:
+	cmpb	$1, rva(efi_is64)(%ebp)
+	leal	rva(startup_64_mixedmode)(%ebp), %eax
+	jne	1f
 #endif
 	/* Check if the C-bit position is correct when SEV is active */
 	call	startup32_check_sev_cbit
 
+	leal	rva(startup_64)(%ebp), %eax
+1:
 	pushl	$__KERNEL_CS
 	pushl	%eax
 
@@ -299,35 +283,7 @@ SYM_FUNC_END(startup_32)
 #ifdef CONFIG_EFI_MIXED
 	.org 0x190
 SYM_FUNC_START(efi32_stub_entry)
-	add	$0x4, %esp		/* Discard return address */
-	popl	%ecx
-	popl	%edx
-	popl	%esi
-
-	call	1f
-1:	pop	%ebp
-	subl	$ rva(1b), %ebp
-
-	movl	%esi, rva(efi32_boot_args+8)(%ebp)
-SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
-	movl	%ecx, rva(efi32_boot_args)(%ebp)
-	movl	%edx, rva(efi32_boot_args+4)(%ebp)
-	movb	$0, rva(efi_is64)(%ebp)
-
-	/* Save firmware GDTR and code/data selectors */
-	sgdtl	rva(efi32_boot_gdt)(%ebp)
-	movw	%cs, rva(efi32_boot_cs)(%ebp)
-	movw	%ds, rva(efi32_boot_ds)(%ebp)
-
-	/* Store firmware IDT descriptor */
-	sidtl	rva(efi32_boot_idt)(%ebp)
-
-	/* Disable paging */
-	movl	%cr0, %eax
-	btrl	$X86_CR0_PG_BIT, %eax
-	movl	%eax, %cr0
-
-	jmp	startup_32
+	jmp	efi32_handover
 SYM_FUNC_END(efi32_stub_entry)
 #endif
 
@@ -713,6 +669,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmode)
 	jmp     1b
 SYM_FUNC_END(.Lno_longmode)
 
+	.globl	verify_cpu
 #include "../../kernel/verify_cpu.S"
 
 	.data
@@ -757,98 +714,6 @@ SYM_DATA_START(boot32_idt)
 SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
 #endif
 
-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-#ifdef CONFIG_EFI_MIXED
-SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
-SYM_DATA(efi_is64, .byte 1)
-
-#define ST32_boottime		60 // offsetof(efi_system_table_32_t, boottime)
-#define BS32_handle_protocol	88 // offsetof(efi_boot_services_32_t, handle_protocol)
-#define LI32_image_base		32 // offsetof(efi_loaded_image_32_t, image_base)
-
-	__HEAD
-	.code32
-SYM_FUNC_START(efi32_pe_entry)
-/*
- * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
- *			       efi_system_table_32_t *sys_table)
- */
-
-	pushl	%ebp
-	movl	%esp, %ebp
-	pushl	%eax				// dummy push to allocate loaded_image
-
-	pushl	%ebx				// save callee-save registers
-	pushl	%edi
-
-	call	verify_cpu			// check for long mode support
-	testl	%eax, %eax
-	movl	$0x80000003, %eax		// EFI_UNSUPPORTED
-	jnz	2f
-
-	call	1f
-1:	pop	%ebx
-	subl	$ rva(1b), %ebx
-
-	/* Get the loaded image protocol pointer from the image handle */
-	leal	-4(%ebp), %eax
-	pushl	%eax				// &loaded_image
-	leal	rva(loaded_image_proto)(%ebx), %eax
-	pushl	%eax				// pass the GUID address
-	pushl	8(%ebp)				// pass the image handle
-
-	/*
-	 * Note the alignment of the stack frame.
-	 *   sys_table
-	 *   handle             <-- 16-byte aligned on entry by ABI
-	 *   return address
-	 *   frame pointer
-	 *   loaded_image       <-- local variable
-	 *   saved %ebx		<-- 16-byte aligned here
-	 *   saved %edi
-	 *   &loaded_image
-	 *   &loaded_image_proto
-	 *   handle             <-- 16-byte aligned for call to handle_protocol
-	 */
-
-	movl	12(%ebp), %eax			// sys_table
-	movl	ST32_boottime(%eax), %eax	// sys_table->boottime
-	call	*BS32_handle_protocol(%eax)	// sys_table->boottime->handle_protocol
-	addl	$12, %esp			// restore argument space
-	testl	%eax, %eax
-	jnz	2f
-
-	movl	8(%ebp), %ecx			// image_handle
-	movl	12(%ebp), %edx			// sys_table
-	movl	-4(%ebp), %esi			// loaded_image
-	movl	LI32_image_base(%esi), %esi	// loaded_image->image_base
-	movl	%ebx, %ebp			// startup_32 for efi32_pe_stub_entry
-	/*
-	 * We need to set the image_offset variable here since startup_32() will
-	 * use it before we get to the 64-bit efi_pe_entry() in C code.
-	 */
-	subl	%esi, %ebx
-	movl	%ebx, rva(image_offset)(%ebp)	// save image_offset
-	jmp	efi32_pe_stub_entry
-
-2:	popl	%edi				// restore callee-save registers
-	popl	%ebx
-	leave
-	RET
-SYM_FUNC_END(efi32_pe_entry)
-
-	.section ".rodata"
-	/* EFI loaded image protocol GUID */
-	.balign 4
-SYM_DATA_START_LOCAL(loaded_image_proto)
-	.long	0x5b1b31a1
-	.word	0x9562, 0x11d2
-	.byte	0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
-SYM_DATA_END(loaded_image_proto)
-#endif
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	__HEAD
 	.code32
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 05ae8bcc9d67..d7b53dec01be 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -23,7 +23,8 @@
 
 const efi_system_table_t *efi_system_table;
 const efi_dxe_services_table_t *efi_dxe_table;
-extern u32 image_offset;
+
+u32 image_offset;
 static efi_loaded_image_t *image = NULL;
 
 static efi_status_t
-- 
2.35.1


* [PATCH 2/6] efi/x86: simplify IDT/GDT preserve/restore
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

Tweak the asm and remove some redundant instructions. While at it,
fix the associated comment for style and correctness.
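
For reference, the new sgdt/sidt displacements rely on the usual
10-byte pseudo-descriptor layout, with the two slots stored 16 bytes
apart in the on-stack scratch buffer; a C sketch (illustration only):

    struct desc_ptr {
            u16 size;            /* table limit */
            u64 address;         /* table base  */
    } __packed;                  /* 10 bytes; slots padded to 16 */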

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/efi_mixed.S | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 6fd7ac517c53..a0f217c8ab38 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -94,24 +94,20 @@ SYM_FUNC_START(__efi64_thunk)
 
 	leaq	0x20(%rsp), %rbx
 	sgdt	(%rbx)
-
-	addq	$16, %rbx
-	sidt	(%rbx)
+	sidt	16(%rbx)
 
 	leaq	1f(%rip), %rbp
 
 	/*
-	 * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
-	 * and IDT that was installed when the kernel started executing. The
-	 * pointers were saved at the efi32_stub_handover entry point below.
+	 * Switch to IDT and GDT with 32-bit segments. These are the firmware
+	 * GDT and IDT that were installed when the kernel started executing.
+	 * The pointers were saved at the efi32_entry entry point below.
 	 *
 	 * Pass the saved DS selector to the 32-bit code, and use far return to
 	 * restore the saved CS selector.
 	 */
-	leaq	efi32_boot_idt(%rip), %rax
-	lidt	(%rax)
-	leaq	efi32_boot_gdt(%rip), %rax
-	lgdt	(%rax)
+	lidt	efi32_boot_idt(%rip)
+	lgdt	efi32_boot_gdt(%rip)
 
 	movzwl	efi32_boot_ds(%rip), %edx
 	movzwq	efi32_boot_cs(%rip), %rax
@@ -185,9 +181,7 @@ SYM_FUNC_START_LOCAL(efi_enter32)
 	 */
 	cli
 
-	lidtl	(%ebx)
-	subl	$16, %ebx
-
+	lidtl	16(%ebx)
 	lgdtl	(%ebx)
 
 	movl	%cr4, %eax
-- 
2.35.1


* [PATCH 3/6] x86/compressed: move startup32_load_idt() out of startup code
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

Move the function startup32_load_idt() into mem_encrypt.S, and turn it
into an ordinary function, instead of relying on hidden register
arguments and non-standard calling conventions.

While at it, simplify the arithmetic involved in populating an IDT entry.
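
For reference, the entry being populated is a 32-bit interrupt gate; a
C sketch of the layout (illustration only, hypothetical struct name):

    struct idt_gate32 {
            u16 offset_lo;       /* handler address [15:0]  */
            u16 selector;        /* __KERNEL32_CS           */
            u16 flags;           /* 0x8e00: present, 32-bit interrupt gate */
            u16 offset_hi;       /* handler address [31:16] */
    };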

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/head_64.S     | 71 +------------------
 arch/x86/boot/compressed/mem_encrypt.S | 72 ++++++++++++++++++--
 2 files changed, 69 insertions(+), 74 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index c2cdbd8a3375..1ca2ed52f93c 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -118,7 +118,9 @@ SYM_FUNC_START(startup_32)
 1:
 
 	/* Setup Exception handling for SEV-ES */
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 	call	startup32_load_idt
+#endif
 
 	/* Make sure cpu supports long mode. */
 	call	verify_cpu
@@ -701,80 +703,11 @@ SYM_DATA_START(boot_idt)
 	.endr
 SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-SYM_DATA_START(boot32_idt_desc)
-	.word   boot32_idt_end - boot32_idt - 1
-	.long   0
-SYM_DATA_END(boot32_idt_desc)
-	.balign 8
-SYM_DATA_START(boot32_idt)
-	.rept 32
-	.quad 0
-	.endr
-SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
-#endif
-
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	__HEAD
 	.code32
-/*
- * Write an IDT entry into boot32_idt
- *
- * Parameters:
- *
- * %eax:	Handler address
- * %edx:	Vector number
- *
- * Physical offset is expected in %ebp
- */
-SYM_FUNC_START(startup32_set_idt_entry)
-	push    %ebx
-	push    %ecx
-
-	/* IDT entry address to %ebx */
-	leal    rva(boot32_idt)(%ebp), %ebx
-	shl	$3, %edx
-	addl    %edx, %ebx
-
-	/* Build IDT entry, lower 4 bytes */
-	movl    %eax, %edx
-	andl    $0x0000ffff, %edx	# Target code segment offset [15:0]
-	movl    $__KERNEL32_CS, %ecx	# Target code segment selector
-	shl     $16, %ecx
-	orl     %ecx, %edx
-
-	/* Store lower 4 bytes to IDT */
-	movl    %edx, (%ebx)
-
-	/* Build IDT entry, upper 4 bytes */
-	movl    %eax, %edx
-	andl    $0xffff0000, %edx	# Target code segment offset [31:16]
-	orl     $0x00008e00, %edx	# Present, Type 32-bit Interrupt Gate
-
-	/* Store upper 4 bytes to IDT */
-	movl    %edx, 4(%ebx)
-
-	pop     %ecx
-	pop     %ebx
-	RET
-SYM_FUNC_END(startup32_set_idt_entry)
 #endif
 
-SYM_FUNC_START(startup32_load_idt)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	/* #VC handler */
-	leal    rva(startup32_vc_handler)(%ebp), %eax
-	movl    $X86_TRAP_VC, %edx
-	call    startup32_set_idt_entry
-
-	/* Load IDT */
-	leal	rva(boot32_idt)(%ebp), %eax
-	movl	%eax, rva(boot32_idt_desc+2)(%ebp)
-	lidt    rva(boot32_idt_desc)(%ebp)
-#endif
-	RET
-SYM_FUNC_END(startup32_load_idt)
-
 /*
  * Check for the correct C-bit position when the startup_32 boot-path is used.
  *
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index a73e4d783cae..889450d073ea 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -11,9 +11,14 @@
 
 #include <asm/processor-flags.h>
 #include <asm/msr.h>
+#include <asm/segment.h>
+#include <asm/trapnr.h>
 #include <asm/asm-offsets.h>
 
 	.text
+	.code64
+#include "../../kernel/sev_verify_cbit.S"
+
 	.code32
 SYM_FUNC_START(get_sev_encryption_bit)
 	xor	%eax, %eax
@@ -61,6 +66,8 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 /**
  * sev_es_req_cpuid - Request a CPUID value from the Hypervisor using
  *		      the GHCB MSR protocol
@@ -98,7 +105,7 @@ SYM_CODE_START_LOCAL(sev_es_req_cpuid)
 	jmp	1b
 SYM_CODE_END(sev_es_req_cpuid)
 
-SYM_CODE_START(startup32_vc_handler)
+SYM_CODE_START_LOCAL(startup32_vc_handler)
 	pushl	%eax
 	pushl	%ebx
 	pushl	%ecx
@@ -184,15 +191,70 @@ SYM_CODE_START(startup32_vc_handler)
 	jmp .Lfail
 SYM_CODE_END(startup32_vc_handler)
 
-	.code64
+/*
+ * Write an IDT entry
+ *
+ * Parameters:
+ *
+ * %eax:	Handler address
+ * %edx:	Vector number
+ * %ecx:	IDT address
+ */
+SYM_FUNC_START_LOCAL(startup32_set_idt_entry)
+	/* IDT entry address to %ecx */
+	leal	(%ecx, %edx, 8), %ecx
+
+	/* Build IDT entry */
+	movl    %eax, %edx
+	andl    $0x0000ffff, %edx		# Target code segment offset [15:0]
+	andl    $0xffff0000, %eax		# Target code segment offset [31:16]
+	orl     $(__KERNEL32_CS << 16), %edx	# Target code segment selector
+	orl     $0x00008e00, %eax		# Present, Type 32-bit Interrupt Gate
+
+	/* Store entry to IDT */
+	movl    %edx, (%ecx)
+	movl    %eax, 4(%ecx)
+	RET
+SYM_FUNC_END(startup32_set_idt_entry)
 
-#include "../../kernel/sev_verify_cbit.S"
+SYM_FUNC_START(startup32_load_idt)
+	push	%ebp
+	push	%ebx
 
-	.data
+	call	1f
+1:	pop	%ebp
+	leal    (boot32_idt - 1b)(%ebp), %ebx
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* #VC handler */
+	leal    (startup32_vc_handler - 1b)(%ebp), %eax
+	movl    $X86_TRAP_VC, %edx
+	movl	%ebx, %ecx
+	call    startup32_set_idt_entry
+
+	/* Load IDT */
+	leal	(boot32_idt_desc - 1b)(%ebp), %ecx
+	movl	%ebx, 2(%ecx)
+	lidt    (%ecx)
+
+	pop	%ebx
+	pop	%ebp
+	RET
+SYM_FUNC_END(startup32_load_idt)
+
+	.data
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
 SYM_DATA(sev_check_data,	.quad 0)
+
+SYM_DATA_START_LOCAL(boot32_idt)
+	.rept 32
+	.quad 0
+	.endr
+SYM_DATA_END(boot32_idt)
+
+SYM_DATA_START_LOCAL(boot32_idt_desc)
+	.word   . - boot32_idt - 1
+	.long   0
+SYM_DATA_END(boot32_idt_desc)
 #endif
-- 
2.35.1


* [PATCH 4/6] x86/compressed: move startup32_check_sev_cbit out of startup code
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

Move startup32_check_sev_cbit() out of head_64.S and turn it into an
ordinary function using the ordinary calling conventions, rather than
preserving and restoring the registers that are known to be live at the
call site.

Also reorder the call with respect to the EFI mixed mode check, so
that the call to startup32_check_sev_cbit() is no longer omitted
implicitly on mixed mode systems, even if the set of machines that
support both features is empty.
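
In pseudo-C, the check being moved reads roughly as follows (a sketch
only; rdrand64(), enable_paging(), disable_paging() and hlt() stand in
for the actual instruction sequences):

    void startup32_check_sev_cbit(void)
    {
            u64 rnd;

            if (!sev_status)
                    return;
            rnd = rdrand64();           /* retried until it succeeds       */
            sev_check_data = rnd;       /* stored encrypted: paging is off */
            enable_paging();            /* reads now honor the C-bit       */
            if (sev_check_data != rnd)  /* wrong C-bit reads ciphertext    */
                    for (;;)
                            hlt();
            disable_paging();           /* restore the previous %cr0       */
    }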

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/head_64.S     | 83 ++------------------
 arch/x86/boot/compressed/mem_encrypt.S | 65 +++++++++++++++
 2 files changed, 70 insertions(+), 78 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1ca2ed52f93c..382ed3d8b26a 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -251,6 +251,11 @@ SYM_FUNC_START(startup_32)
 	movl    $__BOOT_TSS, %eax
 	ltr	%ax
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Check if the C-bit position is correct when SEV is active */
+	call	startup32_check_sev_cbit
+#endif
+
 	/*
 	 * Setup for the jump to 64bit mode
 	 *
@@ -266,9 +271,6 @@ SYM_FUNC_START(startup_32)
 	leal	rva(startup_64_mixedmode)(%ebp), %eax
 	jne	1f
 #endif
-	/* Check if the C-bit position is correct when SEV is active */
-	call	startup32_check_sev_cbit
-
 	leal	rva(startup_64)(%ebp), %eax
 1:
 	pushl	$__KERNEL_CS
@@ -703,81 +705,6 @@ SYM_DATA_START(boot_idt)
 	.endr
 SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	__HEAD
-	.code32
-#endif
-
-/*
- * Check for the correct C-bit position when the startup_32 boot-path is used.
- *
- * The check makes use of the fact that all memory is encrypted when paging is
- * disabled. The function creates 64 bits of random data using the RDRAND
- * instruction. RDRAND is mandatory for SEV guests, so always available. If the
- * hypervisor violates that the kernel will crash right here.
- *
- * The 64 bits of random data are stored to a memory location and at the same
- * time kept in the %eax and %ebx registers. Since encryption is always active
- * when paging is off the random data will be stored encrypted in main memory.
- *
- * Then paging is enabled. When the C-bit position is correct all memory is
- * still mapped encrypted and comparing the register values with memory will
- * succeed. An incorrect C-bit position will map all memory unencrypted, so that
- * the compare will use the encrypted random data and fail.
- */
-SYM_FUNC_START(startup32_check_sev_cbit)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-	pushl	%eax
-	pushl	%ebx
-	pushl	%ecx
-	pushl	%edx
-
-	/* Check for non-zero sev_status */
-	movl	rva(sev_status)(%ebp), %eax
-	testl	%eax, %eax
-	jz	4f
-
-	/*
-	 * Get two 32-bit random values - Don't bail out if RDRAND fails
-	 * because it is better to prevent forward progress if no random value
-	 * can be gathered.
-	 */
-1:	rdrand	%eax
-	jnc	1b
-2:	rdrand	%ebx
-	jnc	2b
-
-	/* Store to memory and keep it in the registers */
-	movl	%eax, rva(sev_check_data)(%ebp)
-	movl	%ebx, rva(sev_check_data+4)(%ebp)
-
-	/* Enable paging to see if encryption is active */
-	movl	%cr0, %edx			 /* Backup %cr0 in %edx */
-	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
-	movl	%ecx, %cr0
-
-	cmpl	%eax, rva(sev_check_data)(%ebp)
-	jne	3f
-	cmpl	%ebx, rva(sev_check_data+4)(%ebp)
-	jne	3f
-
-	movl	%edx, %cr0	/* Restore previous %cr0 */
-
-	jmp	4f
-
-3:	/* Check failed - hlt the machine */
-	hlt
-	jmp	3b
-
-4:
-	popl	%edx
-	popl	%ecx
-	popl	%ebx
-	popl	%eax
-#endif
-	RET
-SYM_FUNC_END(startup32_check_sev_cbit)
-
 /*
  * Stack and heap for uncompression
  */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 889450d073ea..3cd3db0da49d 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -241,6 +241,71 @@ SYM_FUNC_START(startup32_load_idt)
 	RET
 SYM_FUNC_END(startup32_load_idt)
 
+/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
+ * hypervisor violates that the kernel will crash right here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
+ * the compare will use the encrypted random data and fail.
+ */
+SYM_FUNC_START(startup32_check_sev_cbit)
+	push	%ebp
+	push	%ebx
+
+	call	0f
+0:	pop	%ebp
+
+	/* Check for non-zero sev_status */
+	movl	(sev_status - 0b)(%ebp), %eax
+	testl	%eax, %eax
+	jz	3f
+
+	/*
+	 * Get two 32-bit random values - Don't bail out if RDRAND fails
+	 * because it is better to prevent forward progress if no random value
+	 * can be gathered.
+	 */
+1:	rdrand	%eax
+	jnc	1b
+2:	rdrand	%ebx
+	jnc	2b
+
+	/* Store to memory and keep it in the registers */
+	leal	(sev_check_data - 0b)(%ebp), %ebp
+	movl	%eax, 0(%ebp)
+	movl	%ebx, 4(%ebp)
+
+	/* Enable paging to see if encryption is active */
+	movl	%cr0, %edx			 /* Backup %cr0 in %edx */
+	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+	movl	%ecx, %cr0
+
+	cmpl	%eax, 0(%ebp)
+	jne	4f
+	cmpl	%ebx, 4(%ebp)
+	jne	4f
+
+	movl	%edx, %cr0	/* Restore previous %cr0 */
+
+3:	pop	%ebx
+	pop	%ebp
+	RET
+
+4:	/* Check failed - hlt the machine */
+	hlt
+	jmp	4b
+SYM_FUNC_END(startup32_check_sev_cbit)
+
 	.data
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
-- 
2.35.1


* [PATCH 5/6] x86/compressed: adhere to calling convention in get_sev_encryption_bit()
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

Make get_sev_encryption_bit() follow the ordinary x86 calling
convention, and only call it if CONFIG_AMD_MEM_ENCRYPT is actually
enabled. This clarifies the calling code and makes it more
maintainable.
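
The convention in question is the i386 SysV one, under which %eax,
%ecx and %edx are caller-saved and only %ebx (clobbered by CPUID)
needs preserving. At the C level, the call site now behaves roughly
like this (sketch only):

    unsigned int bitpos = get_sev_encryption_bit();      /* 0 if SEV is off */
    unsigned int mask_hi = bitpos ? 1U << (bitpos - 32)  /* C-bit is always */
                                  : 0;                   /* above bit 31    */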

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/head_64.S     |  6 ++++--
 arch/x86/boot/compressed/mem_encrypt.S | 10 ----------
 2 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 382ed3d8b26a..4539e7c6d4c3 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -180,12 +180,12 @@ SYM_FUNC_START(startup_32)
   */
 	/*
 	 * If SEV is active then set the encryption mask in the page tables.
-	 * This will insure that when the kernel is copied and decompressed
+	 * This will ensure that when the kernel is copied and decompressed
 	 * it will be done so encrypted.
 	 */
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 	call	get_sev_encryption_bit
 	xorl	%edx, %edx
-#ifdef	CONFIG_AMD_MEM_ENCRYPT
 	testl	%eax, %eax
 	jz	1f
 	subl	$32, %eax	/* Encryption bit is always above bit 31 */
@@ -199,6 +199,8 @@ SYM_FUNC_START(startup_32)
 	 */
 	movl	$1, rva(sev_status)(%ebp)
 1:
+#else
+	xorl	%edx, %edx
 #endif
 
 	/* Initialize Page tables to 0 */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 3cd3db0da49d..b4a116283bd9 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -21,12 +21,7 @@
 
 	.code32
 SYM_FUNC_START(get_sev_encryption_bit)
-	xor	%eax, %eax
-
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%ebx
-	push	%ecx
-	push	%edx
 
 	movl	$0x80000000, %eax	/* CPUID to check the highest leaf */
 	cpuid
@@ -57,12 +52,7 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	xor	%eax, %eax
 
 .Lsev_exit:
-	pop	%edx
-	pop	%ecx
 	pop	%ebx
-
-#endif	/* CONFIG_AMD_MEM_ENCRYPT */
-
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)
 
-- 
2.35.1


* [PATCH 6/6] x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y
From: Ard Biesheuvel @ 2022-08-15 13:42 UTC
  To: linux-kernel
  Cc: linux-efi, Ard Biesheuvel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Michael Roth

Avoid building the mem_encrypt.o object if memory encryption support is
not enabled to begin with.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/x86/boot/compressed/Makefile      | 2 +-
 arch/x86/boot/compressed/mem_encrypt.S | 3 ---
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d6dbb46696a2..9aad9ddcf3b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -99,7 +99,7 @@ vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
 	vmlinux-objs-y += $(obj)/ident_map_64.o
 	vmlinux-objs-y += $(obj)/idt_64.o $(obj)/idt_handlers_64.o
-	vmlinux-objs-y += $(obj)/mem_encrypt.o
+	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/mem_encrypt.o
 	vmlinux-objs-y += $(obj)/pgtable_64.o
 	vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o
 endif
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index b4a116283bd9..fad0978d6799 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -56,8 +56,6 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	RET
 SYM_FUNC_END(get_sev_encryption_bit)
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-
 /**
  * sev_es_req_cpuid - Request a CPUID value from the Hypervisor using
  *		      the GHCB MSR protocol
@@ -312,4 +310,3 @@ SYM_DATA_START_LOCAL(boot32_idt_desc)
 	.word   . - boot32_idt - 1
 	.long   0
 SYM_DATA_END(boot32_idt_desc)
-#endif
-- 
2.35.1


* Re: [PATCH 0/6] x86: head_64.S spring cleaning
From: Ard Biesheuvel @ 2022-09-20 14:55 UTC
  To: linux-kernel
  Cc: linux-efi, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Michael Roth

On Mon, 15 Aug 2022 at 15:42, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> After doing some cleanup work on the EFI code in head_64.S, the mixed
> mode code in particular, I noticed that the memory encryption pieces
> could use some attention as well, so I cleaned those up too.
>
> I have been sitting on these patches since November, waiting for the
> right time to post them, i.e., after the SEV/SNP review had finished.
> This has yet to happen, so I'm posting them now instead. Please feel
> free to disregard for the time being, and propose a suitable timeframe
> to repost them if this is likely to conflict with ongoing work.
>

Ping?

> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Michael Roth <michael.roth@amd.com>
>
> Ard Biesheuvel (6):
>   x86/head_64: clean up mixed mode 32-bit entry code
>   efi/x86: simplify IDT/GDT preserve/restore
>   x86/compressed: move startup32_load_idt() out of startup code
>   x86/compressed: move startup32_check_sev_cbit out of startup code
>   x86/compressed: adhere to calling convention in
>     get_sev_encryption_bit()
>   x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y
>
>  arch/x86/boot/compressed/Makefile       |   8 +-
>  arch/x86/boot/compressed/efi_mixed.S    | 352 ++++++++++++++++++++
>  arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
>  arch/x86/boot/compressed/head_32.S      |   4 -
>  arch/x86/boot/compressed/head_64.S      | 309 +----------------
>  arch/x86/boot/compressed/mem_encrypt.S  | 146 +++++++-
>  drivers/firmware/efi/libstub/x86-stub.c |   3 +-
>  7 files changed, 506 insertions(+), 511 deletions(-)
>  create mode 100644 arch/x86/boot/compressed/efi_mixed.S
>  delete mode 100644 arch/x86/boot/compressed/efi_thunk_64.S
>
> --
> 2.35.1
>

* Re: [PATCH 1/6] x86/head_64: clean up mixed mode 32-bit entry code
From: Borislav Petkov @ 2022-09-20 19:19 UTC
  To: Ard Biesheuvel
  Cc: linux-kernel, linux-efi, Thomas Gleixner, Ingo Molnar,
	Dave Hansen, Michael Roth

On Mon, Aug 15, 2022 at 03:42:18PM +0200, Ard Biesheuvel wrote:
> The x86_64 32-bit entry code is a jumble of EFI and SEV routines, which
> is not good for maintainability. Let's isolate the EFI mixed mode code
> and combine it with the boot service thunk that lives in another .S
> file, so that we can remove it from head_64.S.

Who is "we"?

Please use passive voice in all text.

> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/x86/boot/compressed/Makefile       |   6 +-
>  arch/x86/boot/compressed/efi_mixed.S    | 358 ++++++++++++++++++++
>  arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
>  arch/x86/boot/compressed/head_32.S      |   4 -
>  arch/x86/boot/compressed/head_64.S      | 149 +-------
>  drivers/firmware/efi/libstub/x86-stub.c |   3 +-
>  6 files changed, 370 insertions(+), 345 deletions(-)

So I'm really nervous about patches touching early asm code where
multiple things happen all at once instead of each logical change being
split into a single patch: here I see code movement but then other
functionality is being added too.

So I'd really appreciate it if you split this one into smaller, obvious,
even boring patches - this will simplify review considerably. For
example, do only a mechanical code movement in one patch and then add
the new startup_64_mixedmode thing in another. And so on.

That would be greatly appreciated.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
