* [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors
@ 2020-10-20 12:18 Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Hi,

here are some enhancements to the SEV(-ES) code in the Linux kernel to
protect it against some newly discovered hypervisor attacks. Three
attacks are addressed here:

	1) The hypervisor does not present the SEV-enabled bit via CPUID

	2) The hypervisor presents the wrong C-bit position via CPUID

	3) An encrypted RAM page is mapped as MMIO in the nested
	   page-table, causing #VC exceptions and possibly leaking the
	   data to the hypervisor or allowing data/code injection from
	   the hypervisor.

The attacks are described in more detail in this paper:

	https://arxiv.org/abs/2010.07094
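
For orientation: the C-bit position that attack 2) targets is reported
by CPUID and is turned by the guest into the mask it sets in every
page-table entry. A minimal sketch of that derivation, assuming AMD's
documented CPUID layout (the function name is hypothetical):

	/*
	 * Illustrative sketch, not from this series: CPUID leaf 0x8000001f
	 * EBX[5:0] holds the C-bit location per AMD's documentation. The
	 * hypervisor controls this value, which is what attack 2) exploits.
	 */
	static u64 sme_get_me_mask_sketch(void)
	{
		unsigned int cbit = cpuid_ebx(0x8000001f) & 0x3f;

		return 1ULL << cbit;
	}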

Please review.

Thanks,

	Joerg

Changes to v1:

	- Disable CR4.PGE during C-bit test

	- Do not save/restore caller-saved registers in
	  set_sev_encryption_mask()

Joerg Roedel (5):
  x86/boot/compressed/64: Introduce sev_status
  x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  x86/head/64: Check SEV encryption before switching to kernel
    page-table
  x86/sev-es: Do not support MMIO to/from encrypted memory

 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  | 14 +++-
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/head_64.S               | 14 +++-
 arch/x86/kernel/sev-es-shared.c         | 26 +++++++
 arch/x86/kernel/sev-es.c                | 20 ++++--
 arch/x86/kernel/sev_verify_cbit.S       | 91 +++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c               |  1 +
 8 files changed, 160 insertions(+), 9 deletions(-)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

-- 
2.28.0



* [PATCH v2 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-20 12:18 [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
@ 2020-10-20 12:18 ` Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Introduce sev_status and initialize it together with sme_me_mask to have
an indicator of which SEV features are enabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/mem_encrypt.S | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index dd07e7b41b11..2192b3bd78d8 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -81,6 +81,13 @@ SYM_FUNC_START(set_sev_encryption_mask)
 
 	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
 
+	/* Read sev_status */
+	movl	$MSR_AMD64_SEV, %ecx
+	rdmsr
+	shlq	$32, %rdx
+	orq	%rdx, %rax
+	movq	%rax, sev_status(%rip)
+
 .Lno_sev_mask:
 	movq	%rbp, %rsp		/* Restore original stack pointer */
 
@@ -96,5 +103,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.balign	8
-SYM_DATA(sme_me_mask, .quad 0)
+SYM_DATA(sme_me_mask,		.quad 0)
+SYM_DATA(sev_status,		.quad 0)
 #endif
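
As a side note: the rdmsr sequence above assembles the 64-bit MSR value
from EDX:EAX. A minimal C sketch of how sev_status is typically consumed
later, assuming the bit layout AMD documents for MSR 0xC0010131
(MSR_AMD64_SEV); the helper name is hypothetical:

	/* Sketch only: bit 0 of MSR_AMD64_SEV signals SEV, bit 1 SEV-ES. */
	#define MSR_AMD64_SEV_ENABLED		BIT_ULL(0)
	#define MSR_AMD64_SEV_ES_ENABLED	BIT_ULL(1)

	static bool sev_es_active_sketch(void)
	{
		return sev_status & MSR_AMD64_SEV_ES_ENABLED;
	}
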
-- 
2.28.0



* [PATCH v2 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  2020-10-20 12:18 [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
@ 2020-10-20 12:18 ` Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

The early #VC handler, which does not have a GHCB yet, can only handle
CPUID exit codes. It is needed by the early boot code to handle #VC
exceptions raised in verify_cpu() and to get the position of the C-bit.

But the CPUID information comes from the hypervisor, which is untrusted
and might return results which trick the guest into the no-SEV boot path
with no C-bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handlers to make sure the hypervisor
cannot pretend that SEV is disabled.
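
The three checks mirror the CPUID facts a native SEV guest relies on.
As an illustration (bit positions per AMD's published CPUID layout; the
function name is hypothetical), the same discovery outside a #VC handler
would look roughly like this:

	/*
	 * Sketch only: the CPUID facts the sanity checks rely on.
	 *  - leaf 0x1, ECX[31]: running under a hypervisor
	 *  - leaf 0x80000000, EAX: highest extended leaf; must be at
	 *    least 0x8000001f for the SEV leaf to exist
	 *  - leaf 0x8000001f, EAX[1]: SEV supported
	 */
	static bool cpuid_reports_sev(void)
	{
		unsigned int eax, ebx, ecx, edx;

		cpuid(0x80000000, &eax, &ebx, &ecx, &edx);
		if (eax < 0x8000001f)
			return false;

		cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
		return eax & BIT(1);
	}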

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index 5f83ccaab877..48bb14563dcd 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 		goto fail;
 	regs->dx = val >> 32;
 
+	/*
+	 * This is a #VC handler and the #VC exception is only raised when
+	 * SEV-ES is active, which means SEV must be active too. Do sanity
+	 * checks on the CPUID results to make sure the hypervisor does not
+	 * trick the kernel into the no-SEV path. This could map sensitive
+	 * data unencrypted and make it accessible to the hypervisor.
+	 *
+	 * In particular, check for:
+	 *	- Hypervisor CPUID bit
+	 *	- Availability of CPUID leaf 0x8000001f
+	 *	- SEV CPUID bit.
+	 *
+	 * The hypervisor might still report the wrong C-bit position, but this
+	 * can't be checked here.
+	 */
+
+	if (fn == 1 && !(regs->cx & BIT(31)))
+		/* Hypervisor Bit */
+		goto fail;
+	else if (fn == 0x80000000 && regs->ax < 0x8000001f)
+		/* SEV Leaf check */
+		goto fail;
+	else if (fn == 0x8000001f && !(regs->ax & BIT(1)))
+		/* SEV Bit */
+		goto fail;
+
 	/* Skip over the CPUID two-byte opcode */
 	regs->ip += 2;
 
-- 
2.28.0



* [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 12:18 [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
@ 2020-10-20 12:18 ` Joerg Roedel
  2020-10-20 14:12   ` Arvind Sankar
  2020-10-20 12:18 ` [PATCH v2 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak sensitive
data from the guest to the hypervisor.

The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
can be re-used in the running kernel image.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/sev_verify_cbit.S       | 91 +++++++++++++++++++++++++
 4 files changed, 98 insertions(+)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index 063a60edcf99..73abba3312a7 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -153,6 +153,7 @@ void initialize_identity_maps(void)
 	 * into cr3.
 	 */
 	add_identity_map((unsigned long)_head, (unsigned long)_end);
+	sev_verify_cbit(top_level_pgt);
 	write_cr3(top_level_pgt);
 }
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 2192b3bd78d8..7409f2343d38 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
 SYM_FUNC_END(get_sev_encryption_bit)
 
 	.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
@@ -105,4 +108,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
+SYM_DATA(sev_check_data,	.quad 0)
 #endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b4c4d1..53f4848ad392 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
+void sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 000000000000..3f7153607956
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *	sev_verify_cbit.S - Code for verification of the C-bit position reported
+ *			    by the Hypervisor when running with SEV enabled.
+ *
+ *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
+ *
+ * Implements sev_verify_cbit() which is called before switching to a new
+ * long-mode page-table at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* First check if a C-bit was detected */
+	movq	sme_me_mask(%rip), %r10
+	testq	%r10, %r10
+	jz	3f
+
+	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+	movq	sev_status(%rip), %r10
+	testq	%r10, %r10
+	jz	3f
+
+	/* Save CR4 in %r12 */
+	pushq	%r12
+	movq	%cr4, %r12
+
+	/* Disable Global Pages */
+	pushq	%r12
+	andq	$(~X86_CR4_PGE), %r12
+	movq	%r12, %cr4
+	popq	%r12
+
+	/*
+	 * Now that SEV is verified to be active, get a random value using
+	 * RDRAND. This instruction is mandatory when running as an SEV guest.
+	 *
+	 * Don't bail out of the loop if RDRAND returns errors. It is better to
+	 * prevent forward progress than to work with a non-random value here.
+	 */
+1:	rdrand	%r10
+	jnc	1b
+
+	/* Store value to memory and keep it in %r10 */
+	movq	%r10, sev_check_data(%rip)
+
+	/* Backup current %cr3 value to restore it later */
+	movq	%cr3, %r11
+
+	/* Switch to new %cr3 - This might unmap the stack */
+	movq	%rdi, %cr3
+
+	/*
+	 * Compare value in %r10 with memory location - If C-Bit is incorrect
+	 * this would read the encrypted data and make the check fail.
+	 */
+	cmpq	%r10, sev_check_data(%rip)
+
+	/* Restore old %cr3 */
+	movq	%r11, %cr3
+
+	/* Restore previous CR4 and %r12 */
+	movq	%r12, %cr4
+	popq	%r12
+
+	/* Check CMPQ result */
+	je	3f
+
+	/*
+	 * The check failed - block any forward progress to prevent ROP
+	 * attacks, invalidate the stack and go into a hlt loop.
+	 */
+	xorq	%rsp, %rsp
+	subq	$0x1000, %rsp
+2:	hlt
+	jmp 2b
+3:
+#endif
+	ret
+SYM_FUNC_END(sev_verify_cbit)
+
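
For readers who prefer C, the function above boils down to roughly the
following. This is a sketch only - the real code must work on two
different page-tables with no stack guarantees, which is why it is
assembly - and the helpers named here are stand-ins, except for
arch_get_random_long(), which models the raw rdrand retry loop:

	/* C sketch of sev_verify_cbit(); not compilable kernel code. */
	static unsigned long sev_verify_cbit_sketch(unsigned long new_cr3)
	{
		unsigned long old_cr3, val;
		bool ok;

		if (!sme_me_mask || !sev_status)
			return new_cr3;		/* SME only, or no C-bit */

		clear_cr4_pge();	/* no global TLB entry may survive */

		while (!arch_get_random_long(&val))
			;		/* retry RDRAND until it succeeds */

		sev_check_data = val;	/* write via the current mapping */

		old_cr3 = read_cr3();
		write_cr3(new_cr3);	/* read back via the new mapping */
		ok = (sev_check_data == val);
		write_cr3(old_cr3);

		restore_cr4();

		if (!ok)
			hlt_forever();	/* wrong C-bit: stop all progress */

		return new_cr3;
	}
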
-- 
2.28.0



* [PATCH v2 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table
  2020-10-20 12:18 [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (2 preceding siblings ...)
  2020-10-20 12:18 ` [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-20 12:18 ` Joerg Roedel
  2020-10-20 12:18 ` [PATCH v2 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

When SEV is enabled, the kernel requests the C-bit position again from
the hypervisor to build its own page-tables. Since the hypervisor is an
untrusted source, the C-bit position needs to be verified before the
kernel page-table is used.

Call the sev_verify_cbit() function before writing to CR3.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/head_64.S | 14 +++++++++++++-
 arch/x86/mm/mem_encrypt.c |  1 +
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 7eb2a1c87969..c6f4562359a5 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -161,7 +161,18 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
-	movq	%rax, %cr3
+
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and
+	 * waiting for the next RET instruction.
+	 */
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+
+	/* Switch to new page-table */
+	movq	%rdi, %cr3
 
 	/* Ensure I am executing from virtual addresses */
 	movq	$1f, %rax
@@ -279,6 +290,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+#include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ebb7edc8bc0a..bd9b62af2e3d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -39,6 +39,7 @@
  */
 u64 sme_me_mask __section(.data) = 0;
 u64 sev_status __section(.data) = 0;
+u64 sev_check_data __section(.data) = 0;
 EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);
-- 
2.28.0



* [PATCH v2 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory
  2020-10-20 12:18 [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (3 preceding siblings ...)
  2020-10-20 12:18 ` [PATCH v2 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
@ 2020-10-20 12:18 ` Joerg Roedel
  4 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 12:18 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

MMIO memory is usually not mapped encrypted, so there is no reason to
support emulated MMIO when it is mapped encrypted.

This prevents a possible hypervisor attack where it maps a RAM page as
an MMIO page in the nested page-table, so that any guest access to it
will trigger a #VC exception and either leak the data on that page to
the hypervisor or allow the hypervisor to inject data into the guest.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 4a96726fbaf8..421fe0203c68 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -374,8 +374,8 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	return ES_EXCEPTION;
 }
 
-static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned long vaddr, phys_addr_t *paddr)
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
 {
 	unsigned long va = (unsigned long)vaddr;
 	unsigned int level;
@@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 		if (user_mode(ctxt->regs))
 			ctxt->fi.error_code |= X86_PF_USER;
 
-		return false;
+		return ES_EXCEPTION;
 	}
 
+	if (unlikely(WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC)))
+		/* Emulated MMIO to/from encrypted memory not supported */
+		return ES_UNSUPPORTED;
+
 	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	pa |= va & ~page_level_mask(level);
 
 	*paddr = pa;
 
-	return true;
+	return ES_OK;
 }
 
 /* Include code shared with pre-decompression boot stage */
@@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 {
 	u64 exit_code, exit_info_1, exit_info_2;
 	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
 	phys_addr_t paddr;
 	void __user *ref;
 
@@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 
 	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
 
-	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
-		if (!read)
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
 			ctxt->fi.error_code |= X86_PF_WRITE;
 
-		return ES_EXCEPTION;
+		return res;
 	}
 
 	exit_info_1 = paddr;
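
One note on why _PAGE_ENC is a reliable tell: in an SEV guest, device
MMIO ranges are mapped with the C-bit cleared (for example by
ioremap()), while RAM is mapped encrypted. A minimal sketch of the
invariant the new check enforces (illustrative only, not part of the
patch):

	/*
	 * Sketch: an emulated-MMIO #VC through an encrypted PTE means the
	 * hypervisor remapped guest RAM as MMIO - refuse to emulate it.
	 */
	static bool pte_plausible_for_mmio(pte_t pte)
	{
		return !(pte_val(pte) & _PAGE_ENC);
	}
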
-- 
2.28.0



* Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 12:18 ` [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-20 14:12   ` Arvind Sankar
  2020-10-20 15:48     ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Arvind Sankar @ 2020-10-20 14:12 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, Tom Lendacky,
	linux-kernel

On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. A wrong C-bit position could be used to leak sensitive
> data from the guest to the hypervisor.
> 
> The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> can be re-used in the running kernel image.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/boot/compressed/ident_map_64.c |  1 +
>  arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
>  arch/x86/boot/compressed/misc.h         |  2 +
>  arch/x86/kernel/sev_verify_cbit.S       | 91 +++++++++++++++++++++++++
>  4 files changed, 98 insertions(+)
>  create mode 100644 arch/x86/kernel/sev_verify_cbit.S
> 
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index 063a60edcf99..73abba3312a7 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -153,6 +153,7 @@ void initialize_identity_maps(void)
>  	 * into cr3.
>  	 */
>  	add_identity_map((unsigned long)_head, (unsigned long)_end);
> +	sev_verify_cbit(top_level_pgt);
>  	write_cr3(top_level_pgt);
>  }
>  
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index 2192b3bd78d8..7409f2343d38 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
>  SYM_FUNC_END(get_sev_encryption_bit)
>  
>  	.code64
> +
> +#include "../../kernel/sev_verify_cbit.S"
> +
>  SYM_FUNC_START(set_sev_encryption_mask)
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	push	%rbp
> @@ -105,4 +108,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
>  	.balign	8
>  SYM_DATA(sme_me_mask,		.quad 0)
>  SYM_DATA(sev_status,		.quad 0)
> +SYM_DATA(sev_check_data,	.quad 0)
>  #endif
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 6d31f1b4c4d1..53f4848ad392 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -159,4 +159,6 @@ void boot_page_fault(void);
>  void boot_stage1_vc(void);
>  void boot_stage2_vc(void);
>  
> +void sev_verify_cbit(unsigned long cr3);
> +
>  #endif /* BOOT_COMPRESSED_MISC_H */
> diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> new file mode 100644
> index 000000000000..3f7153607956
> --- /dev/null
> +++ b/arch/x86/kernel/sev_verify_cbit.S
> @@ -0,0 +1,91 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + *	sev_verify_cbit.S - Code for verification of the C-bit position reported
> + *			    by the Hypervisor when running with SEV enabled.
> + *
> + *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
> + *
> + * Implements sev_verify_cbit() which is called before switching to a new
> + * long-mode page-table at boot.
> + *
> + * It verifies that the C-bit position is correct by writing a random value to
> + * an encrypted memory location while on the current page-table. Then it
> + * switches to the new page-table to verify the memory content is still the
> + * same. After that it switches back to the current page-table and when the
> + * check succeeded it returns. If the check failed the code invalidates the
> + * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
> + * make sure no interrupt or exception can get the CPU out of the hlt loop.
> + *
> + * New page-table pointer is expected in %rdi (first parameter)
> + *
> + */
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +	/* First check if a C-bit was detected */
> +	movq	sme_me_mask(%rip), %r10
> +	testq	%r10, %r10
> +	jz	3f
> +
> +	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
> +	movq	sev_status(%rip), %r10
> +	testq	%r10, %r10
> +	jz	3f
> +
> +	/* Save CR4 in %r12 */
> +	pushq	%r12
> +	movq	%cr4, %r12
> +
> +	/* Disable Global Pages */
> +	pushq	%r12
> +	andq	$(~X86_CR4_PGE), %r12
> +	movq	%r12, %cr4
> +	popq	%r12
> +
> +	/*
> +	 * Now that SEV is verified to be active, get a random value using
> +	 * RDRAND. This instruction is mandatory when running as an SEV guest.
> +	 *
> +	 * Don't bail out of the loop if RDRAND returns errors. It is better to
> +	 * prevent forward progress than to work with a non-random value here.
> +	 */
> +1:	rdrand	%r10
> +	jnc	1b
> +
> +	/* Store value to memory and keep it in %r10 */
> +	movq	%r10, sev_check_data(%rip)
> +
> +	/* Backup current %cr3 value to restore it later */
> +	movq	%cr3, %r11
> +
> +	/* Switch to new %cr3 - This might unmap the stack */
> +	movq	%rdi, %cr3
> +
> +	/*
> +	 * Compare value in %r10 with memory location - If C-Bit is incorrect
> +	 * this would read the encrypted data and make the check fail.
> +	 */
> +	cmpq	%r10, sev_check_data(%rip)
> +
> +	/* Restore old %cr3 */
> +	movq	%r11, %cr3
> +
> +	/* Restore previous CR4 and %r12 */
> +	movq	%r12, %cr4
> +	popq	%r12
> +
> +	/* Check CMPQ result */
> +	je	3f
> +
> +	/*
> +	 * The check failed - block any forward progress to prevent ROP
> +	 * attacks, invalidate the stack and go into a hlt loop.
> +	 */
> +	xorq	%rsp, %rsp
> +	subq	$0x1000, %rsp
> +2:	hlt
> +	jmp 2b
> +3:
> +#endif
> +	ret
> +SYM_FUNC_END(sev_verify_cbit)
> +
> -- 
> 2.28.0
> 

Why use r10-r12 rather than the caller-save registers? Even for the head
code where you need to preserve the cr3 value you can just return it in
rax?


* Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 14:12   ` Arvind Sankar
@ 2020-10-20 15:48     ` Joerg Roedel
  2020-10-20 16:04       ` Arvind Sankar
  0 siblings, 1 reply; 10+ messages in thread
From: Joerg Roedel @ 2020-10-20 15:48 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Tue, Oct 20, 2020 at 10:12:59AM -0400, Arvind Sankar wrote:
> On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> Why use r10-r12 rather than the caller-save registers? Even for the head
> > code where you need to preserve the cr3 value you can just return it in
> rax?

It can surely be optimized, but that would make the code less robust.
This function is only called from assembly so the standard x86-64
calling conventions might not be followed strictly. I think it's better
to make as few assumptions as possible about the calling code to avoid
regressions. Changes to the head code are not necessarily tested with
SEV/SEV-ES guests by developers.

Regards,

	Joerg


* Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 15:48     ` Joerg Roedel
@ 2020-10-20 16:04       ` Arvind Sankar
  2020-10-21 12:49         ` Joerg Roedel
  0 siblings, 1 reply; 10+ messages in thread
From: Arvind Sankar @ 2020-10-20 16:04 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Arvind Sankar, Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Martin Radev, Tom Lendacky,
	linux-kernel

On Tue, Oct 20, 2020 at 05:48:12PM +0200, Joerg Roedel wrote:
> On Tue, Oct 20, 2020 at 10:12:59AM -0400, Arvind Sankar wrote:
> > On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> > Why use r10-r12 rather than the caller-save registers? Even for the head
> > code where you need to preserve the cr3 value you can just return it in
> > rax?
> 
> It can surely be optimized, but that would make the code less robust.
> This function is only called from assembly so the standard x86-64
> calling conventions might not be followed strictly. I think it's better
> to make as few assumptions as possible about the calling code to avoid
> regressions. Changes to the head code are not necessarily tested with
> SEV/SEV-ES guests by developers.
> 
> Regards,
> 
> 	Joerg

This is called from both assembly and C, but anyway, you're already
assuming r10 and r11 can be clobbered safely, and you just took out the
save/restores in set_sev_encryption_mask, which is actually called only
from assembly.


* Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 16:04       ` Arvind Sankar
@ 2020-10-21 12:49         ` Joerg Roedel
  0 siblings, 0 replies; 10+ messages in thread
From: Joerg Roedel @ 2020-10-21 12:49 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Tue, Oct 20, 2020 at 12:04:28PM -0400, Arvind Sankar wrote:
> This is called from both assembly and C, but anyway, you're already
> assuming r10 and r11 can be clobbered safely, and you just took out the
> save/restores in set_sev_encryption_mask, which is actually called only
> from assembly.

Alright, maybe I was a bit too cautious. I changed the CR4 handling to
use %r8 and %r9 instead, which are also clobbered.
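
For reference, the register split under the x86-64 SysV ABI that this
exchange turns on (standard convention, noted here for completeness):

	/*
	 * x86-64 SysV ABI:
	 *   caller-saved (a callee may clobber): rax rcx rdx rsi rdi r8-r11
	 *   callee-saved (a callee must preserve): rbx rbp r12-r15
	 *
	 * v2 kept the CR4 value in r12 (callee-saved, hence the push/pop);
	 * using r8/r9 avoids the stack traffic while staying within
	 * registers the function may clobber anyway.
	 */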

Regards,

	Joerg

