* [PATCH v2 0/7] x86/seves: Support 32-bit boot path and other updates
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

Hi,

these patches add support for the 32-bit boot path in the decompressor
code. This is needed to boot an SEV-ES guest on some firmware and GRUB
versions. The patches also add the necessary CPUID sanity checks and a
32-bit version of the C-bit check.

Other updates included here:

        1. Add code to shut down exception handling in the
           decompressor code before jumping to the real kernel.
           Once in the real kernel it is not safe anymore to jump
           back to the decompressor code via exceptions.

        2. Replace open-coded hlt loops with proper calls to
           sev_es_terminate().

Please review.

Thanks,

	Joerg

Changes to v1:

	- Addressed Boris' review comments.
	- Fixed a bug which caused the C-bit check to never be
	  executed, even in an SEV guest.

Joerg Roedel (7):
  x86/boot/compressed/64: Cleanup exception handling before booting
    kernel
  x86/boot/compressed/64: Reload CS in startup_32
  x86/boot/compressed/64: Setup IDT in startup_32 boot path
  x86/boot/compressed/64: Add 32-bit boot #VC handler
  x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
  x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path
  x86/sev-es: Replace open-coded hlt-loops with sev_es_terminate()

 arch/x86/boot/compressed/head_64.S     | 170 ++++++++++++++++++++++++-
 arch/x86/boot/compressed/idt_64.c      |  14 ++
 arch/x86/boot/compressed/mem_encrypt.S | 132 ++++++++++++++++++-
 arch/x86/boot/compressed/misc.c        |   7 +-
 arch/x86/boot/compressed/misc.h        |   6 +
 arch/x86/boot/compressed/sev-es.c      |  12 +-
 arch/x86/kernel/sev-es-shared.c        |  10 +-
 7 files changed, 328 insertions(+), 23 deletions(-)

-- 
2.30.1


* [PATCH v2 1/7] x86/boot/compressed/64: Cleanup exception handling before booting kernel
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

Disable the exception handling before booting the kernel to make sure
any exceptions that happen during early kernel boot are not directed to
the pre-decompression code.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/idt_64.c | 14 ++++++++++++++
 arch/x86/boot/compressed/misc.c   |  7 ++-----
 arch/x86/boot/compressed/misc.h   |  6 ++++++
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c
index 804a502ee0d2..9b93567d663a 100644
--- a/arch/x86/boot/compressed/idt_64.c
+++ b/arch/x86/boot/compressed/idt_64.c
@@ -52,3 +52,17 @@ void load_stage2_idt(void)
 
 	load_boot_idt(&boot_idt_desc);
 }
+
+void cleanup_exception_handling(void)
+{
+	/*
+	 * Flush GHCB from cache and map it encrypted again when running as
+	 * SEV-ES guest.
+	 */
+	sev_es_shutdown_ghcb();
+
+	/* Set a null-idt, disabling #PF and #VC handling */
+	boot_idt_desc.size    = 0;
+	boot_idt_desc.address = 0;
+	load_boot_idt(&boot_idt_desc);
+}
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 267e7f93050e..cc9fd0e8766a 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -443,11 +443,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	handle_relocations(output, output_len, virt_addr);
 	debug_putstr("done.\nBooting the kernel.\n");
 
-	/*
-	 * Flush GHCB from cache and map it encrypted again when running as
-	 * SEV-ES guest.
-	 */
-	sev_es_shutdown_ghcb();
+	/* Disable exception handling before booting the kernel */
+	cleanup_exception_handling();
 
 	return output;
 }
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 901ea5ebec22..e5612f035498 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -155,6 +155,12 @@ extern pteval_t __default_kernel_pte_mask;
 extern gate_desc boot_idt[BOOT_IDT_ENTRIES];
 extern struct desc_ptr boot_idt_desc;
 
+#ifdef CONFIG_X86_64
+void cleanup_exception_handling(void);
+#else
+static inline void cleanup_exception_handling(void) { }
+#endif
+
 /* IDT Entry Points */
 void boot_page_fault(void);
 void boot_stage1_vc(void);
-- 
2.30.1


* [PATCH v2 2/7] x86/boot/compressed/64: Reload CS in startup_32
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

Exception handling in the startup_32 boot path requires the CS
selector to be correctly set up. Reload it from the current GDT.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/head_64.S | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index e94874f4bbc1..c59c80ca546d 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -107,9 +107,16 @@ SYM_FUNC_START(startup_32)
 	movl	%eax, %gs
 	movl	%eax, %ss
 
-/* setup a stack and make sure cpu supports long mode. */
+	/* Setup a stack and load CS from current GDT */
 	leal	rva(boot_stack_end)(%ebp), %esp
 
+	pushl	$__KERNEL32_CS
+	leal	rva(1f)(%ebp), %eax
+	pushl	%eax
+	lretl
+1:
+
+	/* Make sure cpu supports long mode. */
 	call	verify_cpu
 	testl	%eax, %eax
 	jnz	.Lno_longmode
-- 
2.30.1


* [PATCH v2 3/7] x86/boot/compressed/64: Setup IDT in startup_32 boot path
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

This boot path needs exception handling when it is used with SEV-ES.
Set up an IDT and provide a helper function to write IDT entries for
use in 32-bit protected mode.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/head_64.S | 72 ++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index c59c80ca546d..2001c3bf0748 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -116,6 +116,9 @@ SYM_FUNC_START(startup_32)
 	lretl
 1:
 
+	/* Setup Exception handling for SEV-ES */
+	call	startup32_load_idt
+
 	/* Make sure cpu supports long mode. */
 	call	verify_cpu
 	testl	%eax, %eax
@@ -701,6 +704,19 @@ SYM_DATA_START(boot_idt)
 	.endr
 SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+SYM_DATA_START(boot32_idt_desc)
+	.word   boot32_idt_end - boot32_idt - 1
+	.long   0
+SYM_DATA_END(boot32_idt_desc)
+	.balign 8
+SYM_DATA_START(boot32_idt)
+	.rept 32
+	.quad 0
+	.endr
+SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
+#endif
+
 #ifdef CONFIG_EFI_STUB
 SYM_DATA(image_offset, .long 0)
 #endif
@@ -793,6 +809,62 @@ SYM_DATA_START_LOCAL(loaded_image_proto)
 SYM_DATA_END(loaded_image_proto)
 #endif
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	__HEAD
+	.code32
+/*
+ * Write an IDT entry into boot32_idt
+ *
+ * Parameters:
+ *
+ * %eax:	Handler address
+ * %edx:	Vector number
+ *
+ * Physical offset is expected in %ebp
+ */
+SYM_FUNC_START(startup32_set_idt_entry)
+	push    %ebx
+	push    %ecx
+
+	/* IDT entry address to %ebx */
+	leal    rva(boot32_idt)(%ebp), %ebx
+	shl	$3, %edx
+	addl    %edx, %ebx
+
+	/* Build IDT entry, lower 4 bytes */
+	movl    %eax, %edx
+	andl    $0x0000ffff, %edx	# Target code segment offset [15:0]
+	movl    $__KERNEL32_CS, %ecx	# Target code segment selector
+	shl     $16, %ecx
+	orl     %ecx, %edx
+
+	/* Store lower 4 bytes to IDT */
+	movl    %edx, (%ebx)
+
+	/* Build IDT entry, upper 4 bytes */
+	movl    %eax, %edx
+	andl    $0xffff0000, %edx	# Target code segment offset [31:16]
+	orl     $0x00008e00, %edx	# Present, Type 32-bit Interrupt Gate
+
+	/* Store upper 4 bytes to IDT */
+	movl    %edx, 4(%ebx)
+
+	pop     %ecx
+	pop     %ebx
+	ret
+SYM_FUNC_END(startup32_set_idt_entry)
+#endif
+
+SYM_FUNC_START(startup32_load_idt)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* Load IDT */
+	leal	rva(boot32_idt)(%ebp), %eax
+	movl	%eax, rva(boot32_idt_desc+2)(%ebp)
+	lidt    rva(boot32_idt_desc)(%ebp)
+#endif
+	ret
+SYM_FUNC_END(startup32_load_idt)
+
 /*
  * Stack and heap for uncompression
  */
-- 
2.30.1
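
The two 32-bit stores in startup32_set_idt_entry() above assemble an
8-byte protected-mode interrupt gate. The same layout, restated as a C
sketch (illustrative only; the kernel uses its own gate_desc type, and
the selector is whatever __KERNEL32_CS resolves to in the boot GDT):

/* Sketch of the 32-bit interrupt gate built by startup32_set_idt_entry */
struct idt_gate32 {
	unsigned short offset_low;	/* handler address bits [15:0] */
	unsigned short selector;	/* code segment selector */
	unsigned char  reserved;	/* always zero */
	unsigned char  type_attr;	/* 0x8e: present, DPL=0, 32-bit interrupt gate */
	unsigned short offset_high;	/* handler address bits [31:16] */
};

static void set_gate32(struct idt_gate32 *gate, unsigned int handler,
		       unsigned short selector)
{
	gate->offset_low  = handler & 0xffff;	/* andl $0x0000ffff */
	gate->selector    = selector;		/* __KERNEL32_CS */
	gate->reserved    = 0;
	gate->type_attr   = 0x8e;		/* orl  $0x00008e00 */
	gate->offset_high = handler >> 16;	/* andl $0xffff0000 */
}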


* [PATCH v2 4/7] x86/boot/compressed/64: Add 32-bit boot #VC handler
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

Add a #VC exception handler which is used when the kernel still executes
in protected mode. This boot-path already uses CPUID, which will cause #VC
exceptions in an SEV-ES guest.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/head_64.S     |  6 ++
 arch/x86/boot/compressed/mem_encrypt.S | 96 +++++++++++++++++++++++++-
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 2001c3bf0748..ee448aedb8b0 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -34,6 +34,7 @@
 #include <asm/asm-offsets.h>
 #include <asm/bootparam.h>
 #include <asm/desc_defs.h>
+#include <asm/trapnr.h>
 #include "pgtable.h"
 
 /*
@@ -857,6 +858,11 @@ SYM_FUNC_END(startup32_set_idt_entry)
 
 SYM_FUNC_START(startup32_load_idt)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* #VC handler */
+	leal    rva(startup32_vc_handler)(%ebp), %eax
+	movl    $X86_TRAP_VC, %edx
+	call    startup32_set_idt_entry
+
 	/* Load IDT */
 	leal	rva(boot32_idt)(%ebp), %eax
 	movl	%eax, rva(boot32_idt_desc+2)(%ebp)
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index aa561795efd1..2ca056a3707c 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -67,10 +67,104 @@ SYM_FUNC_START(get_sev_encryption_bit)
 	ret
 SYM_FUNC_END(get_sev_encryption_bit)
 
+/**
+ * sev_es_req_cpuid - Request a CPUID value from the Hypervisor using
+ *		      the GHCB MSR protocol
+ *
+ * @%eax:	Register to request (0=EAX, 1=EBX, 2=ECX, 3=EDX)
+ * @%edx:	CPUID Function
+ *
+ * Returns 0 in %eax on success, non-zero on failure
+ * %edx returns CPUID value on success
+ */
+SYM_CODE_START_LOCAL(sev_es_req_cpuid)
+	shll	$30, %eax
+	orl     $0x00000004, %eax
+	movl    $MSR_AMD64_SEV_ES_GHCB, %ecx
+	wrmsr
+	rep; vmmcall		# VMGEXIT
+	rdmsr
+
+	/* Check response */
+	movl	%eax, %ecx
+	andl	$0x3ffff000, %ecx	# Bits [12-29] MBZ
+	jnz	2f
+
+	/* Check return code */
+	andl    $0xfff, %eax
+	cmpl    $5, %eax
+	jne	2f
+
+	/* All good - return success */
+	xorl	%eax, %eax
+1:
+	ret
+2:
+	movl	$-1, %eax
+	jmp	1b
+SYM_CODE_END(sev_es_req_cpuid)
+
+SYM_CODE_START(startup32_vc_handler)
+	pushl	%eax
+	pushl	%ebx
+	pushl	%ecx
+	pushl	%edx
+
+	/* Keep CPUID function in %ebx */
+	movl	%eax, %ebx
+
+	/* Check if error-code == SVM_EXIT_CPUID */
+	cmpl	$0x72, 16(%esp)
+	jne	.Lfail
+
+	movl	$0, %eax		# Request CPUID[fn].EAX
+	movl	%ebx, %edx		# CPUID fn
+	call	sev_es_req_cpuid	# Call helper
+	testl	%eax, %eax		# Check return code
+	jnz	.Lfail
+	movl	%edx, 12(%esp)		# Store result
+
+	movl	$1, %eax		# Request CPUID[fn].EBX
+	movl	%ebx, %edx		# CPUID fn
+	call	sev_es_req_cpuid	# Call helper
+	testl	%eax, %eax		# Check return code
+	jnz	.Lfail
+	movl	%edx, 8(%esp)		# Store result
+
+	movl	$2, %eax		# Request CPUID[fn].ECX
+	movl	%ebx, %edx		# CPUID fn
+	call	sev_es_req_cpuid	# Call helper
+	testl	%eax, %eax		# Check return code
+	jnz	.Lfail
+	movl	%edx, 4(%esp)		# Store result
+
+	movl	$3, %eax		# Request CPUID[fn].EDX
+	movl	%ebx, %edx		# CPUID fn
+	call	sev_es_req_cpuid	# Call helper
+	testl	%eax, %eax		# Check return code
+	jnz	.Lfail
+	movl	%edx, 0(%esp)		# Store result
+
+	popl	%edx
+	popl	%ecx
+	popl	%ebx
+	popl	%eax
+
+	/* Remove error code */
+	addl	$4, %esp
+
+	/* Jump over CPUID instruction */
+	addl	$2, (%esp)
+
+	iret
+.Lfail:
+	hlt
+	jmp .Lfail
+SYM_CODE_END(startup32_vc_handler)
+
 	.code64
 
 #include "../../kernel/sev_verify_cbit.S"
-
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
-- 
2.30.1
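
The MSR-protocol round trip performed by sev_es_req_cpuid() above can be
restated in C roughly as follows (a sketch: wrmsr64(), rdmsr64() and
vmgexit() are placeholders for the raw WRMSR/RDMSR/VMGEXIT instructions,
and the request/response codes are the GHCB MSR-protocol values used in
the assembly):

#define MSR_AMD64_SEV_ES_GHCB	0xc0010130
#define GHCB_CPUID_REQ		0x004ULL	/* request code, bits [11:0] */
#define GHCB_CPUID_RESP		0x005ULL	/* expected response code */

/* Placeholders for the raw instructions */
extern void wrmsr64(unsigned int msr, unsigned long long val);
extern unsigned long long rdmsr64(unsigned int msr);
extern void vmgexit(void);			/* rep; vmmcall */

/* reg: 0=EAX, 1=EBX, 2=ECX, 3=EDX; returns 0 on success */
static int ghcb_msr_cpuid(unsigned int fn, unsigned int reg, unsigned int *value)
{
	unsigned long long msr;

	/* Bits [63:32] = CPUID function, [31:30] = register, [11:0] = request */
	msr = ((unsigned long long)fn << 32) |
	      ((unsigned long long)reg << 30) | GHCB_CPUID_REQ;

	wrmsr64(MSR_AMD64_SEV_ES_GHCB, msr);
	vmgexit();
	msr = rdmsr64(MSR_AMD64_SEV_ES_GHCB);

	if (msr & 0x3ffff000ULL)		/* bits [29:12] must be zero */
		return -1;
	if ((msr & 0xfffULL) != GHCB_CPUID_RESP)
		return -1;

	*value = (unsigned int)(msr >> 32);	/* result arrives in the high half */
	return 0;
}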


* [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

The 32-bit #VC handler has no GHCB and can only handle CPUID exit codes.
It is needed by the early boot code to handle #VC exceptions raised in
verify_cpu() and to get the position of the C bit.

But the CPUID information comes from the hypervisor, which is untrusted
and might return results which trick the guest into the no-SEV boot path
with no C bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the 32-bit boot #VC handler to make sure the
hypervisor does not pretend that SEV is not enabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/mem_encrypt.S | 36 ++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 2ca056a3707c..8941c3a8ff8a 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -145,6 +145,34 @@ SYM_CODE_START(startup32_vc_handler)
 	jnz	.Lfail
 	movl	%edx, 0(%esp)		# Store result
 
+	/*
+	 * Sanity check CPUID results from the Hypervisor. See comment in
+	 * do_vc_no_ghcb() for more details on why this is necessary.
+	 */
+
+	/* Fail if Hypervisor bit not set in CPUID[1].ECX[31] */
+	cmpl    $1, %ebx
+	jne     .Lcheck_leaf
+	btl     $31, 4(%esp)
+	jnc     .Lfail
+	jmp     .Ldone
+
+.Lcheck_leaf:
+	/* Fail if SEV leaf not available in CPUID[0x80000000].EAX */
+	cmpl    $0x80000000, %ebx
+	jne     .Lcheck_sev
+	cmpl    $0x8000001f, 12(%esp)
+	jb      .Lfail
+	jmp     .Ldone
+
+.Lcheck_sev:
+	/* Fail if SEV bit not set in CPUID[0x8000001f].EAX[1] */
+	cmpl    $0x8000001f, %ebx
+	jne     .Ldone
+	btl     $1, 12(%esp)
+	jnc     .Lfail
+
+.Ldone:
 	popl	%edx
 	popl	%ecx
 	popl	%ebx
@@ -158,6 +186,14 @@ SYM_CODE_START(startup32_vc_handler)
 
 	iret
 .Lfail:
+	/* Send terminate request to Hypervisor */
+	movl    $0x100, %eax
+	xorl    %edx, %edx
+	movl    $MSR_AMD64_SEV_ES_GHCB, %ecx
+	wrmsr
+	rep; vmmcall
+
+	/* If request fails, go to hlt loop */
 	hlt
 	jmp .Lfail
 SYM_CODE_END(startup32_vc_handler)
-- 
2.30.1
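
Restated in C, the three checks the patch adds reduce to the following
(a sketch; fn is the CPUID function kept in %ebx, eax and ecx are the
values the handler stored on its stack):

/* Returns non-zero if the hypervisor-provided CPUID results look sane */
static int cpuid_results_sane(unsigned int fn, unsigned int eax, unsigned int ecx)
{
	if (fn == 0x00000001)		/* Hypervisor bit must be set */
		return ecx & (1u << 31);

	if (fn == 0x80000000)		/* SEV leaf must be reachable */
		return eax >= 0x8000001f;

	if (fn == 0x8000001f)		/* SEV bit must be set */
		return eax & (1u << 1);

	return 1;			/* no check for other leaves */
}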


* [PATCH v2 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. Using a wrong C-bit position could be used to leak
sensitive data from the guest to the hypervisor.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/head_64.S | 83 ++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index ee448aedb8b0..7c5c2698a96e 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -183,11 +183,21 @@ SYM_FUNC_START(startup_32)
 	 */
 	call	get_sev_encryption_bit
 	xorl	%edx, %edx
+#ifdef	CONFIG_AMD_MEM_ENCRYPT
 	testl	%eax, %eax
 	jz	1f
 	subl	$32, %eax	/* Encryption bit is always above bit 31 */
 	bts	%eax, %edx	/* Set encryption mask for page tables */
+	/*
+	 * Mark SEV as active in sev_status so that startup32_check_sev_cbit()
+	 * will do a check. The sev_status memory will be fully initialized
+	 * with the contents of MSR_AMD64_SEV later in
+	 * set_sev_encryption_mask(). For now it is sufficient to know that SEV
+	 * is active.
+	 */
+	movl	$1, rva(sev_status)(%ebp)
 1:
+#endif
 
 	/* Initialize Page tables to 0 */
 	leal	rva(pgtable)(%ebx), %edi
@@ -272,6 +282,9 @@ SYM_FUNC_START(startup_32)
 	movl	%esi, %edx
 1:
 #endif
+	/* Check if the C-bit position is correct when SEV is active */
+	call	startup32_check_sev_cbit
+
 	pushl	$__KERNEL_CS
 	pushl	%eax
 
@@ -871,6 +884,76 @@ SYM_FUNC_START(startup32_load_idt)
 	ret
 SYM_FUNC_END(startup32_load_idt)
 
+/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
+ * hypervisor violates that, the kernel will crash right here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
+ * the compare will use the encrypted random data and fail.
+ */
+SYM_FUNC_START(startup32_check_sev_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	pushl	%eax
+	pushl	%ebx
+	pushl	%ecx
+	pushl	%edx
+
+	/* Check for non-zero sev_status */
+	movl	rva(sev_status)(%ebp), %eax
+	testl	%eax, %eax
+	jz	4f
+
+	/*
+	 * Get two 32-bit random values - Don't bail out if RDRAND fails
+	 * because it is better to prevent forward progress if no random value
+	 * can be gathered.
+	 */
+1:	rdrand	%eax
+	jnc	1b
+2:	rdrand	%ebx
+	jnc	2b
+
+	/* Store to memory and keep it in the registers */
+	movl	%eax, rva(sev_check_data)(%ebp)
+	movl	%ebx, rva(sev_check_data+4)(%ebp)
+
+	/* Enable paging to see if encryption is active */
+	movl	%cr0, %edx	/* Backup %cr0 in %edx */
+	movl	$(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+	movl	%ecx, %cr0
+
+	cmpl	%eax, rva(sev_check_data)(%ebp)
+	jne	3f
+	cmpl	%ebx, rva(sev_check_data+4)(%ebp)
+	jne	3f
+
+	movl	%edx, %cr0	/* Restore previous %cr0 */
+
+	jmp	4f
+
+3:	/* Check failed - hlt the machine */
+	hlt
+	jmp	3b
+
+4:
+	popl	%edx
+	popl	%ecx
+	popl	%ebx
+	popl	%eax
+#endif
+	ret
+SYM_FUNC_END(startup32_check_sev_cbit)
+
 /*
  * Stack and heap for uncompression
  */
-- 
2.30.1
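
The store/compare trick in startup32_check_sev_cbit() can be summarized
in C (a sketch: rdrand32(), read_cr0(), write_cr0() and halt() are
placeholders for the raw instructions, and sev_check_data stands in for
the scratch memory used by the assembly):

extern unsigned int rdrand32(void);	/* retries until RDRAND succeeds */
extern unsigned long read_cr0(void);
extern void write_cr0(unsigned long val);
extern void halt(void);

static unsigned int sev_check_data[2];

static void check_sev_cbit(void)
{
	unsigned int a = rdrand32();
	unsigned int b = rdrand32();
	unsigned long old_cr0 = read_cr0();

	/* Paging is off, so these stores are always encrypted under SEV */
	sev_check_data[0] = a;
	sev_check_data[1] = b;

	/* Enable paging with the page tables set up earlier */
	write_cr0(0x80000001UL);		/* X86_CR0_PG | X86_CR0_PE */

	/*
	 * Correct C-bit: the mapping decrypts the data and the reads match.
	 * Wrong C-bit: the reads return ciphertext and the compare fails.
	 */
	if (sev_check_data[0] != a || sev_check_data[1] != b)
		for (;;)
			halt();

	write_cr0(old_cr0);			/* restore previous CR0 */
}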


* [PATCH v2 7/7] x86/sev-es: Replace open-coded hlt-loops with sev_es_terminate()
From: Joerg Roedel @ 2021-03-10  8:43 UTC
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Sean Christopherson,
	Martin Radev, Arvind Sankar, linux-kernel, kvm, virtualization

From: Joerg Roedel <jroedel@suse.de>

There are a few places left in the SEV-ES C code where hlt loops and/or
terminate requests are implemented. Replace them all with calls to
sev_es_terminate().

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/sev-es.c | 12 +++---------
 arch/x86/kernel/sev-es-shared.c   | 10 +++-------
 2 files changed, 6 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/compressed/sev-es.c b/arch/x86/boot/compressed/sev-es.c
index 27826c265aab..d904bd56b3e3 100644
--- a/arch/x86/boot/compressed/sev-es.c
+++ b/arch/x86/boot/compressed/sev-es.c
@@ -200,14 +200,8 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	}
 
 finish:
-	if (result == ES_OK) {
+	if (result == ES_OK)
 		vc_finish_insn(&ctxt);
-	} else if (result != ES_RETRY) {
-		/*
-		 * For now, just halt the machine. That makes debugging easier,
-		 * later we just call sev_es_terminate() here.
-		 */
-		while (true)
-			asm volatile("hlt\n");
-	}
+	else if (result != ES_RETRY)
+		sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 }
diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index cdc04d091242..7c34be61258e 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -24,7 +24,7 @@ static bool __init sev_es_check_cpu_features(void)
 	return true;
 }
 
-static void sev_es_terminate(unsigned int reason)
+static void __noreturn sev_es_terminate(unsigned int reason)
 {
 	u64 val = GHCB_SEV_TERMINATE;
 
@@ -210,12 +210,8 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 	return;
 
 fail:
-	sev_es_wr_ghcb_msr(GHCB_SEV_TERMINATE);
-	VMGEXIT();
-
-	/* Shouldn't get here - if we do halt the machine */
-	while (true)
-		asm volatile("hlt\n");
+	/* Terminate the guest */
+	sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
 }
 
 static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
-- 
2.30.1
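
For context, the helper everything is funneled through looks roughly
like this at the time of the series (a sketch of sev_es_terminate() from
sev-es-shared.c; details may differ):

static void __noreturn sev_es_terminate(unsigned int reason)
{
	u64 val = GHCB_SEV_TERMINATE;

	/* Tell the hypervisor why the guest terminates (reason-set 0) */
	val |= GHCB_SEV_TERMINATE_REASON(0, reason);

	/* Request guest termination via the GHCB MSR protocol */
	sev_es_wr_ghcb_msr(val);
	VMGEXIT();

	/* The request should not return - halt if it does */
	while (true)
		asm volatile("hlt\n" : : : "memory");
}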


* Re: [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
From: Sean Christopherson @ 2021-03-10 16:08 UTC
  To: Joerg Roedel
  Cc: x86, Joerg Roedel, hpa, Andy Lutomirski, Dave Hansen,
	Peter Zijlstra, Jiri Slaby, Dan Williams, Tom Lendacky,
	Juergen Gross, Kees Cook, David Rientjes, Cfir Cohen,
	Erdem Aktas, Masami Hiramatsu, Mike Stunes, Martin Radev,
	Arvind Sankar, linux-kernel, kvm, virtualization

On Wed, Mar 10, 2021, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> The 32-bit #VC handler has no GHCB and can only handle CPUID exit codes.
> It is needed by the early boot code to handle #VC exceptions raised in
> verify_cpu() and to get the position of the C bit.
> 
> But the CPUID information comes from the hypervisor, which is untrusted
> and might return results which trick the guest into the no-SEV boot path
> with no C bit set in the page-tables. All data written to memory would
> then be unencrypted and could leak sensitive data to the hypervisor.
> 
> Add sanity checks to the 32-bit boot #VC handler to make sure the
> hypervisor does not pretend that SEV is not enabled.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/boot/compressed/mem_encrypt.S | 36 ++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index 2ca056a3707c..8941c3a8ff8a 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -145,6 +145,34 @@ SYM_CODE_START(startup32_vc_handler)
>  	jnz	.Lfail
>  	movl	%edx, 0(%esp)		# Store result
>  
> +	/*
> +	 * Sanity check CPUID results from the Hypervisor. See comment in
> +	 * do_vc_no_ghcb() for more details on why this is necessary.
> +	 */
> +
> +	/* Fail if Hypervisor bit not set in CPUID[1].ECX[31] */

This check is flawed, as is the existing check in 64-bit boot.  Or I guess more
accurately, the check in get_sev_encryption_bit() is flawed.  AIUI, SEV-ES
doesn't require the hypervisor to intercept CPUID.  A malicious hypervisor can
temporarily pass-through CPUID to bypass the CPUID[1].ECX[31] check.  The
hypervisor likely has access to the guest firmware source, so it wouldn't be
difficult for the hypervisor to disable CPUID interception once it detects that
firmware is handing over control to the kernel.
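
For reference, the gate being bypassed reduces to roughly this flow (a
sketch of the get_sev_encryption_bit() logic, not its exact code):

static unsigned int get_sev_encryption_bit_flow(void)
{
	if (!(cpuid_ecx(1) & (1u << 31)))	/* Hypervisor bit clear? */
		return 0;			/* -> no-SEV boot path */
	if (cpuid_eax(0x80000000) < 0x8000001f)	/* SEV leaf reachable? */
		return 0;
	if (!(cpuid_eax(0x8000001f) & (1u << 1)))	/* SEV supported? */
		return 0;
	return cpuid_ebx(0x8000001f) & 0x3f;	/* C-bit position */
}

A hypervisor that passes CPUID through makes the first check read the
hypervisor bit as clear, steering the guest onto the no-SEV path.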

> +	cmpl    $1, %ebx
> +	jne     .Lcheck_leaf
> +	btl     $31, 4(%esp)
> +	jnc     .Lfail
> +	jmp     .Ldone
> +
> +.Lcheck_leaf:
> +	/* Fail if SEV leaf not available in CPUID[0x80000000].EAX */
> +	cmpl    $0x80000000, %ebx
> +	jne     .Lcheck_sev
> +	cmpl    $0x8000001f, 12(%esp)
> +	jb      .Lfail
> +	jmp     .Ldone
> +
> +.Lcheck_sev:
> +	/* Fail if SEV bit not set in CPUID[0x8000001f].EAX[1] */
> +	cmpl    $0x8000001f, %ebx
> +	jne     .Ldone
> +	btl     $1, 12(%esp)
> +	jnc     .Lfail
> +
> +.Ldone:
>  	popl	%edx
>  	popl	%ecx
>  	popl	%ebx
> @@ -158,6 +186,14 @@ SYM_CODE_START(startup32_vc_handler)
>  
>  	iret
>  .Lfail:
> +	/* Send terminate request to Hypervisor */
> +	movl    $0x100, %eax
> +	xorl    %edx, %edx
> +	movl    $MSR_AMD64_SEV_ES_GHCB, %ecx
> +	wrmsr
> +	rep; vmmcall
> +
> +	/* If request fails, go to hlt loop */
>  	hlt
>  	jmp .Lfail
>  SYM_CODE_END(startup32_vc_handler)
> -- 
> 2.30.1
> 


* Re: [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
From: Martin Radev @ 2021-03-10 17:26 UTC
  To: Sean Christopherson
  Cc: Joerg Roedel, x86, Joerg Roedel, hpa, Andy Lutomirski,
	Dave Hansen, Peter Zijlstra, Jiri Slaby, Dan Williams,
	Tom Lendacky, Juergen Gross, Kees Cook, David Rientjes,
	Cfir Cohen, Erdem Aktas, Masami Hiramatsu, Mike Stunes,
	Arvind Sankar, linux-kernel, kvm, virtualization

On Wed, Mar 10, 2021 at 08:08:37AM -0800, Sean Christopherson wrote:
> On Wed, Mar 10, 2021, Joerg Roedel wrote:
> > From: Joerg Roedel <jroedel@suse.de>
> > 
> > The 32-bit #VC handler has no GHCB and can only handle CPUID exit codes.
> > It is needed by the early boot code to handle #VC exceptions raised in
> > verify_cpu() and to get the position of the C bit.
> > 
> > But the CPUID information comes from the hypervisor, which is untrusted
> > and might return results which trick the guest into the no-SEV boot path
> > with no C bit set in the page-tables. All data written to memory would
> > then be unencrypted and could leak sensitive data to the hypervisor.
> > 
> > Add sanity checks to the 32-bit boot #VC handler to make sure the
> > hypervisor does not pretend that SEV is not enabled.
> > 
> > Signed-off-by: Joerg Roedel <jroedel@suse.de>
> > ---
> >  arch/x86/boot/compressed/mem_encrypt.S | 36 ++++++++++++++++++++++++++
> >  1 file changed, 36 insertions(+)
> > 
> > diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> > index 2ca056a3707c..8941c3a8ff8a 100644
> > --- a/arch/x86/boot/compressed/mem_encrypt.S
> > +++ b/arch/x86/boot/compressed/mem_encrypt.S
> > @@ -145,6 +145,34 @@ SYM_CODE_START(startup32_vc_handler)
> >  	jnz	.Lfail
> >  	movl	%edx, 0(%esp)		# Store result
> >  
> > +	/*
> > +	 * Sanity check CPUID results from the Hypervisor. See comment in
> > +	 * do_vc_no_ghcb() for more details on why this is necessary.
> > +	 */
> > +
> > +	/* Fail if Hypervisor bit not set in CPUID[1].ECX[31] */
> 
> This check is flawed, as is the existing check in 64-bit boot.  Or I guess more
> accurately, the check in get_sev_encryption_bit() is flawed.  AIUI, SEV-ES
> doesn't require the hypervisor to intercept CPUID.  A malicious hypervisor can
> temporarily pass-through CPUID to bypass the CPUID[1].ECX[31] check.

If erroneous information is provided, either through interception or without, there's
this check, which is performed every time a new page table is set in the early Linux boot stages:
https://elixir.bootlin.com/linux/v5.12-rc2/source/arch/x86/kernel/sev_verify_cbit.S#L22

This should lead to a halt if corruption is detected, unless I'm overlooking something.
Please share more info.


> The
> hypervisor likely has access to the guest firmware source, so it wouldn't be
> difficult for the hypervisor to disable CPUID interception once it detects that
> firmware is handing over control to the kernel.
> 

You probably don't even need to know the firmware for that. There's the option to set CR* changes to cause
#AE, which probably gives away enough information.

> > +	cmpl    $1, %ebx
> > +	jne     .Lcheck_leaf
> > +	btl     $31, 4(%esp)
> > +	jnc     .Lfail
> > +	jmp     .Ldone
> > +
> > +.Lcheck_leaf:
> > +	/* Fail if SEV leaf not available in CPUID[0x80000000].EAX */
> > +	cmpl    $0x80000000, %ebx
> > +	jne     .Lcheck_sev
> > +	cmpl    $0x8000001f, 12(%esp)
> > +	jb      .Lfail
> > +	jmp     .Ldone
> > +
> > +.Lcheck_sev:
> > +	/* Fail if SEV bit not set in CPUID[0x8000001f].EAX[1] */
> > +	cmpl    $0x8000001f, %ebx
> > +	jne     .Ldone
> > +	btl     $1, 12(%esp)
> > +	jnc     .Lfail
> > +
> > +.Ldone:
> >  	popl	%edx
> >  	popl	%ecx
> >  	popl	%ebx
> > @@ -158,6 +186,14 @@ SYM_CODE_START(startup32_vc_handler)
> >  
> >  	iret
> >  .Lfail:
> > +	/* Send terminate request to Hypervisor */
> > +	movl    $0x100, %eax
> > +	xorl    %edx, %edx
> > +	movl    $MSR_AMD64_SEV_ES_GHCB, %ecx
> > +	wrmsr
> > +	rep; vmmcall
> > +
> > +	/* If request fails, go to hlt loop */
> >  	hlt
> >  	jmp .Lfail
> >  SYM_CODE_END(startup32_vc_handler)
> > -- 
> > 2.30.1
> > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
  2021-03-10 17:26     ` Martin Radev
@ 2021-03-10 17:51       ` Sean Christopherson
  2021-03-10 18:10         ` Martin Radev
  0 siblings, 1 reply; 20+ messages in thread
From: Sean Christopherson @ 2021-03-10 17:51 UTC (permalink / raw)
  To: Martin Radev
  Cc: Joerg Roedel, x86, Joerg Roedel, hpa, Andy Lutomirski,
	Dave Hansen, Peter Zijlstra, Jiri Slaby, Dan Williams,
	Tom Lendacky, Juergen Gross, Kees Cook, David Rientjes,
	Cfir Cohen, Erdem Aktas, Masami Hiramatsu, Mike Stunes,
	Arvind Sankar, linux-kernel, kvm, virtualization

On Wed, Mar 10, 2021, Martin Radev wrote:
> On Wed, Mar 10, 2021 at 08:08:37AM -0800, Sean Christopherson wrote:
> > On Wed, Mar 10, 2021, Joerg Roedel wrote:
> > > +	/*
> > > +	 * Sanity check CPUID results from the Hypervisor. See comment in
> > > +	 * do_vc_no_ghcb() for more details on why this is necessary.
> > > +	 */
> > > +
> > > +	/* Fail if Hypervisor bit not set in CPUID[1].ECX[31] */
> > 
> > This check is flawed, as is the existing check in 64-bit boot.  Or I guess more
> > accurately, the check in get_sev_encryption_bit() is flawed.  AIUI, SEV-ES
> > doesn't require the hypervisor to intercept CPUID.  A malicious hypervisor can
> > temporarily pass-through CPUID to bypass the CPUID[1].ECX[31] check.
> 
> If erroneous information is provided, whether through interception or not, there is
> this check, which is performed every time a new page table is set in the early Linux boot stages:
> https://elixir.bootlin.com/linux/v5.12-rc2/source/arch/x86/kernel/sev_verify_cbit.S#L22
> 
> This should lead to a halt if corruption is detected, unless I'm overlooking something.
> Please share more info.

That check is predicated on sme_me_mask != 0, sme_me_mask is set based on the
result of get_sev_encryption_bit(), and that returns '0' if CPUID[1].ECX[31] is
'0'.
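
To spell out the chain in pseudo-C:

	/* get_sev_encryption_bit(), simplified */
	if (!(cpuid(1).ecx & BIT(31)))	/* result is supplied by the HV */
		return 0;		/* -> sme_me_mask ends up 0     */

	/* sev_verify_cbit(), simplified */
	if (!sme_me_mask)
		return new_cr3;		/* C-bit check silently skipped */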

sme_enable() also appears to have the same issue, as CPUID[1].ECX[31]=0 would
cause it to check for SME instead of SEV, and the hypervisor can simply return
0 for a VMGEXIT to get MSR_K8_SYSCFG.

I've no idea if the guest would actually survive with a bogus sme_me_mask, but
relying on CPUID[1] to cause a #VC is flawed.

Since MSR_AMD64_SEV is non-interceptable, that seems like it should be the
canonical way to detect SEV/SEV-ES.  The only complication seems to be handling
#GP faults on the RDMSR in early boot.
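
Something along these lines, assuming a rdmsr_safe()-style helper (or
equivalent #GP handling) is available that early; the function name is
illustrative:

	#define MSR_AMD64_SEV			0xc0010131
	#define MSR_AMD64_SEV_ENABLED		BIT(0)
	#define MSR_AMD64_SEV_ES_ENABLED	BIT(1)

	static unsigned int early_sev_status(void)
	{
		unsigned int low, high;

		/*
		 * MSR_AMD64_SEV is not interceptable.  On non-SEV
		 * hardware the RDMSR raises #GP, which has to be
		 * caught and treated as "no SEV".
		 */
		if (rdmsr_safe(MSR_AMD64_SEV, &low, &high))
			return 0;

		return low;	/* bit 0: SEV, bit 1: SEV-ES */
	}

The boot path could then key off those bits instead of CPUID[1].ECX[31].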

> > The hypervisor likely has access to the guest firmware source, so it
> > wouldn't be difficult for the hypervisor to disable CPUID interception once
> > it detects that firmware is handing over control to the kernel.
> > 
> 
> You probably don't even need to know the firmware for that. There is the
> option to make CR* changes cause an AE (automatic exit), which probably
> gives away enough information.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path
  2021-03-10 17:51       ` Sean Christopherson
@ 2021-03-10 18:10         ` Martin Radev
  0 siblings, 0 replies; 20+ messages in thread
From: Martin Radev @ 2021-03-10 18:10 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Joerg Roedel, x86, Joerg Roedel, hpa, Andy Lutomirski,
	Dave Hansen, Peter Zijlstra, Jiri Slaby, Dan Williams,
	Tom Lendacky, Juergen Gross, Kees Cook, David Rientjes,
	Cfir Cohen, Erdem Aktas, Masami Hiramatsu, Mike Stunes,
	Arvind Sankar, linux-kernel, kvm, virtualization

On Wed, Mar 10, 2021 at 09:51:48AM -0800, Sean Christopherson wrote:
> On Wed, Mar 10, 2021, Martin Radev wrote:
> > On Wed, Mar 10, 2021 at 08:08:37AM -0800, Sean Christopherson wrote:
> > > On Wed, Mar 10, 2021, Joerg Roedel wrote:
> > > > +	/*
> > > > +	 * Sanity check CPUID results from the Hypervisor. See comment in
> > > > +	 * do_vc_no_ghcb() for more details on why this is necessary.
> > > > +	 */
> > > > +
> > > > +	/* Fail if Hypervisor bit not set in CPUID[1].ECX[31] */
> > > 
> > > This check is flawed, as is the existing check in 64-bit boot.  Or I guess more
> > > accurately, the check in get_sev_encryption_bit() is flawed.  AIUI, SEV-ES
> > > doesn't require the hypervisor to intercept CPUID.  A malicious hypervisor can
> > > temporarily pass-through CPUID to bypass the CPUID[1].ECX[31] check.
> > 
> > If erroneous information is provided, either through interception or without, there's
> > this check which is performed every time a new page table is set in the early linux stages:
> > https://elixir.bootlin.com/linux/v5.12-rc2/source/arch/x86/kernel/sev_verify_cbit.S#L22
> > 
> > This should lead to a halt if corruption is detected, unless I'm overlooking something.
> > Please share more info.
> 
> That check is predicated on sme_me_mask != 0, sme_me_mask is set based on the
> result of get_sev_encryption_bit(), and that returns '0' if CPUID[1].ECX[31] is
> '0'.
> 
> sme_enable() also appears to have the same issue, as CPUID[1].ECX[31]=0 would
> cause it to check for SME instead of SEV, and the hypervisor can simply return
> 0 for a VMGEXIT to get MSR_K8_SYSCFG.
> 
> I've no idea if the guest would actually survive with a bogus sme_me_mask, but
> relying on CPUID[1] to cause a #VC is flawed.
> 
> Since MSR_AMD64_SEV is non-interceptable, that seems like it should be the
> canonical way to detect SEV/SEV-ES.  The only complication seems to be handling
> #GP faults on the RDMSR in early boot.
> 
> > > The hypervisor likely has access to the guest firmware source, so it
> > > wouldn't be difficult for the hypervisor to disable CPUID interception once
> > > it detects that firmware is handing over control to the kernel.
> > > 
> > 
> > You probably don't even need to know the firmware for that. There is the
> > option to make CR* changes cause an AE (automatic exit), which probably
> > gives away enough information.

I see what you mean, but I never tried out disabling interception for CPUID.
There was the idea of checking for bogus information in the #VC handler, but
what you suggested would bypass it, I guess.

If the C-bit is not set and memory gets interpreted as unencrypted, then the HV
can easily gain code execution by means of ROP and then switch to the OVMF page
tables to inject a proper payload.

If interested, see the talk at https://fosdem.org/2021/schedule/event/tee_sev_es/
around minute 15.


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-03-10 18:11 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-10  8:43 [PATCH v2 0/7] x86/seves: Support 32-bit boot path and other updates Joerg Roedel
2021-03-10  8:43 ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 1/7] x86/boot/compressed/64: Cleanup exception handling before booting kernel Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 2/7] x86/boot/compressed/64: Reload CS in startup_32 Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 3/7] x86/boot/compressed/64: Setup IDT in startup_32 boot path Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 4/7] x86/boot/compressed/64: Add 32-bit boot #VC handler Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 5/7] x86/boot/compressed/64: Add CPUID sanity check to 32-bit boot-path Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10 16:08   ` Sean Christopherson
2021-03-10 17:26     ` Martin Radev
2021-03-10 17:51       ` Sean Christopherson
2021-03-10 18:10         ` Martin Radev
2021-03-10  8:43 ` [PATCH v2 6/7] x86/boot/compressed/64: Check SEV encryption in " Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
2021-03-10  8:43 ` [PATCH v2 7/7] x86/sev-es: Replace open-coded hlt-loops with sev_es_terminate() Joerg Roedel
2021-03-10  8:43   ` Joerg Roedel
