* [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors
@ 2020-10-19 15:11 Joerg Roedel
  2020-10-19 15:11 ` [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Hi,

here are some enhancements to the SEV(-ES) code in the Linux kernel to
protect it against some newly discovered hypervisor attacks. Three
attacks are addressed here:

	1) The hypervisor does not present the SEV-enabled bit via CPUID

	2) The hypervisor presents the wrong C-bit position via CPUID

	3) An encrypted RAM page is mapped as MMIO in the nested
	   page-table, causing #VC exceptions and a possible leak of the
	   data to the hypervisor, or data/code injection from the
	   hypervisor.

The attacks are described in more detail in this paper:

	https://arxiv.org/abs/2010.07094

Please review.

Thanks,

	Joerg

Joerg Roedel (5):
  x86/boot/compressed/64: Introduce sev_status
  x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  x86/head/64: Check SEV encryption before switching to kernel
    page-table
  x86/sev-es: Do not support MMIO to/from encrypted memory

 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  | 18 +++++-
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/head_64.S               | 14 ++++-
 arch/x86/kernel/sev-es-shared.c         | 26 +++++++++
 arch/x86/kernel/sev-es.c                | 20 ++++---
 arch/x86/kernel/sev_verify_cbit.S       | 77 +++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c               |  1 +
 8 files changed, 150 insertions(+), 9 deletions(-)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

-- 
2.28.0


* [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-19 15:11 [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
@ 2020-10-19 15:11 ` Joerg Roedel
  2020-10-20  0:59   ` Sean Christopherson
  2020-10-19 15:11 ` [PATCH 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Introduce sev_status and initialize it together with sme_me_mask to have
an indicator of which SEV features are enabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/mem_encrypt.S | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index dd07e7b41b11..0effd58f0095 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -71,6 +71,8 @@ SYM_FUNC_END(get_sev_encryption_bit)
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
+	push	%rax
+	push	%rcx
 	push	%rdx
 
 	movq	%rsp, %rbp		/* Save current stack pointer */
@@ -81,10 +83,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
 
 	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
 
+	/* Read sev_status */
+	movl	$MSR_AMD64_SEV, %ecx
+	rdmsr
+	shlq	$32, %rdx
+	orq	%rdx, %rax
+	movq	%rax, sev_status(%rip)
+
 .Lno_sev_mask:
 	movq	%rbp, %rsp		/* Restore original stack pointer */
 
 	pop	%rdx
+	pop	%rcx
+	pop	%rax
 	pop	%rbp
 #endif
 
@@ -96,5 +107,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.balign	8
-SYM_DATA(sme_me_mask, .quad 0)
+SYM_DATA(sme_me_mask,		.quad 0)
+SYM_DATA(sev_status,		.quad 0)
 #endif
-- 
2.28.0


* [PATCH 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  2020-10-19 15:11 [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-19 15:11 ` [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
@ 2020-10-19 15:11 ` Joerg Roedel
  2020-10-19 15:11 ` [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

The early #VC handler, which does not have a GHCB yet, can only handle
CPUID exit codes. It is needed by the early boot code to handle #VC
exceptions raised in verify_cpu() and to get the position of the C-bit.

But the CPUID information comes from the hypervisor, which is untrusted
and might return results that trick the guest into the no-SEV boot path
with no C-bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handler to make sure the hypervisor
cannot pretend that SEV is disabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index 5f83ccaab877..48bb14563dcd 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 		goto fail;
 	regs->dx = val >> 32;
 
+	/*
+	 * This is a VC handler and it is only raised when SEV-ES is active,
+	 * which means SEV must be active too. Do sanity checks on the CPUID
+	 * results to make sure the hypervisor does not trick the kernel into
+	 * the no-sev path. This could map sensitive data unencrypted and make
+	 * it accessible to the hypervisor.
+	 *
+	 * In particular, check for:
+	 *	- Hypervisor CPUID bit
+	 *	- Availability of CPUID leaf 0x8000001f
+	 *	- SEV CPUID bit.
+	 *
+	 * The hypervisor might still report the wrong C-bit position, but this
+	 * can't be checked here.
+	 */
+
+	if ((fn == 1 && !(regs->cx & BIT(31))))
+		/* Hypervisor Bit */
+		goto fail;
+	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+		/* SEV Leaf check */
+		goto fail;
+	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
+		/* SEV Bit */
+		goto fail;
+
 	/* Skip over the CPUID two-byte opcode */
 	regs->ip += 2;
 
-- 
2.28.0


* [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 15:11 [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-19 15:11 ` [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
  2020-10-19 15:11 ` [PATCH 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
@ 2020-10-19 15:11 ` Joerg Roedel
  2020-10-19 17:00   ` Arvind Sankar
  2020-10-19 15:11 ` [PATCH 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
  2020-10-19 15:11 ` [PATCH 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 1 reply; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak
sensitive data from the guest to the hypervisor.

The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
can be re-used in the running kernel image.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/sev_verify_cbit.S       | 77 +++++++++++++++++++++++++
 4 files changed, 84 insertions(+)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index 063a60edcf99..73abba3312a7 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -153,6 +153,7 @@ void initialize_identity_maps(void)
 	 * into cr3.
 	 */
 	add_identity_map((unsigned long)_head, (unsigned long)_end);
+	sev_verify_cbit(top_level_pgt);
 	write_cr3(top_level_pgt);
 }
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 0effd58f0095..1786d5f02825 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
 SYM_FUNC_END(get_sev_encryption_bit)
 
 	.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
@@ -109,4 +112,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
+SYM_DATA(sev_check_data,	.quad 0)
 #endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b4c4d1..53f4848ad392 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
+void sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 000000000000..5b6f61465437
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *	sev_verify_cbit.S - Code for verification of the C-bit position reported
+ *			    by the Hypervisor when running with SEV enabled.
+ *
+ *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
+ *
+ * Implements sev_verify_cbit() which is called before switching to a new
+ * long-mode page-table at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* First check if a C-bit was detected */
+	movq	sme_me_mask(%rip), %r10
+	testq	%r10, %r10
+	jz	3f
+
+	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+	movq	sev_status(%rip), %r10
+	testq	%r10, %r10
+	jz	3f
+
+	/*
+	 * Verified that running under SEV - now get a random value using
+	 * RDRAND. This instruction is mandatory when running as an SEV guest.
+	 *
+	 * Don't bail out of the loop if RDRAND returns errors. It is better to
+	 * prevent forward progress than to work with a non-random value here.
+	 */
+1:	rdrand	%r10
+	jnc	1b
+
+	/* Store value to memory and keep it in %r10 */
+	movq	%r10, sev_check_data(%rip)
+
+	/* Backup current %cr3 value to restore it later */
+	movq	%cr3, %r11
+
+	/* Switch to new %cr3 - This might unmap the stack */
+	movq	%rdi, %cr3
+
+	/*
+	 * Compare value in %r10 with memory location - If C-Bit is incorrect
+	 * this would read the encrypted data and make the check fail.
+	 */
+	cmpq	%r10, sev_check_data(%rip)
+
+	/* Restore old %cr3 */
+	movq	%r11, %cr3
+
+	/* Check CMPQ result */
+	je	3f
+
+	/*
+	 * The check failed - Prevent any forward progress to prevent ROP
+	 * attacks, invalidate the stack and go into a hlt loop.
+	 */
+	xorq	%rsp, %rsp
+	subq	$0x1000, %rsp
+2:	hlt
+	jmp 2b
+3:
+#endif
+	ret
+SYM_FUNC_END(sev_verify_cbit)
+
-- 
2.28.0


* [PATCH 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table
  2020-10-19 15:11 [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (2 preceding siblings ...)
  2020-10-19 15:11 ` [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-19 15:11 ` Joerg Roedel
  2020-10-19 15:11 ` [PATCH 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

When SEV is enabled the kernel requests the C-bit position again from
the hypervisor to build its own page-table. Since the hypervisor is an
untrusted source, the C-bit position needs to be verified before the
kernel page-table is used.

Call the sev_verify_cbit() function before writing to CR3.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/head_64.S | 14 +++++++++++++-
 arch/x86/mm/mem_encrypt.c |  1 +
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 7eb2a1c87969..c6f4562359a5 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -161,7 +161,18 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
-	movq	%rax, %cr3
+
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and wait for
+	 * the next RET instruction.
+	 */
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+
+	/* Switch to new page-table */
+	movq	%rdi, %cr3
 
 	/* Ensure I am executing from virtual addresses */
 	movq	$1f, %rax
@@ -279,6 +290,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+#include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ebb7edc8bc0a..bd9b62af2e3d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -39,6 +39,7 @@
  */
 u64 sme_me_mask __section(.data) = 0;
 u64 sev_status __section(.data) = 0;
+u64 sev_check_data __section(.data) = 0;
 EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);
-- 
2.28.0


* [PATCH 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory
  2020-10-19 15:11 [PATCH 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (3 preceding siblings ...)
  2020-10-19 15:11 ` [PATCH 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
@ 2020-10-19 15:11 ` Joerg Roedel
  4 siblings, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 15:11 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

MMIO memory is usually not mapped encrypted, so there is no reason to
support emulated MMIO when it is mapped encrypted.

This prevents a possible hypervisor attack in which it maps a RAM page
as an MMIO page in the nested page-table, so that any guest access to
it triggers a #VC exception. This either leaks the data on that page to
the hypervisor or allows the hypervisor to inject data into the guest.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 4a96726fbaf8..421fe0203c68 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -374,8 +374,8 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	return ES_EXCEPTION;
 }
 
-static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned long vaddr, phys_addr_t *paddr)
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
 {
 	unsigned long va = (unsigned long)vaddr;
 	unsigned int level;
@@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 		if (user_mode(ctxt->regs))
 			ctxt->fi.error_code |= X86_PF_USER;
 
-		return false;
+		return ES_EXCEPTION;
 	}
 
+	if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
+		/* Emulated MMIO to/from encrypted memory not supported */
+		return ES_UNSUPPORTED;
+
 	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	pa |= va & ~page_level_mask(level);
 
 	*paddr = pa;
 
-	return true;
+	return ES_OK;
 }
 
 /* Include code shared with pre-decompression boot stage */
@@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 {
 	u64 exit_code, exit_info_1, exit_info_2;
 	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
 	phys_addr_t paddr;
 	void __user *ref;
 
@@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 
 	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
 
-	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
-		if (!read)
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
 			ctxt->fi.error_code |= X86_PF_WRITE;
 
-		return ES_EXCEPTION;
+		return res;
 	}
 
 	exit_info_1 = paddr;
-- 
2.28.0


* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 15:11 ` [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-19 17:00   ` Arvind Sankar
  2020-10-19 17:54     ` Arvind Sankar
  2020-10-19 20:33     ` Joerg Roedel
  0 siblings, 2 replies; 19+ messages in thread
From: Arvind Sankar @ 2020-10-19 17:00 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 05:11:19PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. A wrong C-bit position could be used to leak
> sensitive data from the guest to the hypervisor.
> 
> The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> can be re-used in the running kernel image.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---

> +
> +	/* Store value to memory and keep it in %r10 */
> +	movq	%r10, sev_check_data(%rip)
> +

Does there need to be a cache flush/invalidation between this and the
read below to avoid just reading back from cache, or will the hardware
take care of that?

> +	/* Backup current %cr3 value to restore it later */
> +	movq	%cr3, %r11
> +
> +	/* Switch to new %cr3 - This might unmap the stack */
> +	movq	%rdi, %cr3

Does there need to be a TLB flush after this? When executed from the
main kernel's head code, CR4.PGE is enabled, and if the original page
mapping had the global bit set (the decompressor stub sets that in the
identity mapping), won't the read below still use the original encrypted
mapping if we don't explicitly flush it?

> +
> +	/*
> +	 * Compare value in %r10 with memory location - If C-Bit is incorrect
> +	 * this would read the encrypted data and make the check fail.
> +	 */
> +	cmpq	%r10, sev_check_data(%rip)
> +
> +	/* Restore old %cr3 */
> +	movq	%r11, %cr3
> +
> +	/* Check CMPQ result */
> +	je	3f
> +
> +	/*
> +	 * The check failed - Prevent any forward progress to prevent ROP
> +	 * attacks, invalidate the stack and go into a hlt loop.
> +	 */
> +	xorq	%rsp, %rsp
> +	subq	$0x1000, %rsp
> +2:	hlt
> +	jmp 2b
> +3:
> +#endif
> +	ret
> +SYM_FUNC_END(sev_verify_cbit)
> +
> -- 
> 2.28.0
> 

* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 17:00   ` Arvind Sankar
@ 2020-10-19 17:54     ` Arvind Sankar
  2020-10-19 20:39       ` Joerg Roedel
  2020-10-19 20:33     ` Joerg Roedel
  1 sibling, 1 reply; 19+ messages in thread
From: Arvind Sankar @ 2020-10-19 17:54 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 01:00:08PM -0400, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 05:11:19PM +0200, Joerg Roedel wrote:
> > From: Joerg Roedel <jroedel@suse.de>
> > 
> > Check whether the hypervisor reported the correct C-bit when running as
> > an SEV guest. A wrong C-bit position could be used to leak
> > sensitive data from the guest to the hypervisor.
> > 
> > The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> > can be re-used in the running kernel image.
> > 
> > Signed-off-by: Joerg Roedel <jroedel@suse.de>
> > ---
> 
> > +
> > +	/* Store value to memory and keep it in %r10 */
> > +	movq	%r10, sev_check_data(%rip)
> > +
> 
> Does there need to be a cache flush/invalidation between this and the
> read below to avoid just reading back from cache, or will the hardware
> take care of that?

Also, isn't it possible that the initial page tables we're running on
have already been messed with and have the C-bit in the wrong location,
so that this write happens decrypted?

> 
> > +	/* Backup current %cr3 value to restore it later */
> > +	movq	%cr3, %r11
> > +
> > +	/* Switch to new %cr3 - This might unmap the stack */
> > +	movq	%rdi, %cr3
> 
> Does there need to be a TLB flush after this? When executed from the
> main kernel's head code, CR4.PGE is enabled, and if the original page
> mapping had the global bit set (the decompressor stub sets that in the
> identity mapping), won't the read below still use the original encrypted
> mapping if we don't explicitly flush it?
> 
> > +
> > +	/*
> > +	 * Compare value in %r10 with memory location - If C-Bit is incorrect
> > +	 * this would read the encrypted data and make the check fail.
> > +	 */
> > +	cmpq	%r10, sev_check_data(%rip)
> > +
> > +	/* Restore old %cr3 */
> > +	movq	%r11, %cr3
> > +
> > +	/* Check CMPQ result */
> > +	je	3f
> > +
> > +	/*
> > +	 * The check failed - Prevent any forward progress to prevent ROP
> > +	 * attacks, invalidate the stack and go into a hlt loop.
> > +	 */
> > +	xorq	%rsp, %rsp
> > +	subq	$0x1000, %rsp
> > +2:	hlt
> > +	jmp 2b
> > +3:
> > +#endif
> > +	ret
> > +SYM_FUNC_END(sev_verify_cbit)
> > +
> > -- 
> > 2.28.0
> > 

* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 17:00   ` Arvind Sankar
  2020-10-19 17:54     ` Arvind Sankar
@ 2020-10-19 20:33     ` Joerg Roedel
  2020-10-19 21:22       ` Arvind Sankar
  1 sibling, 1 reply; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 20:33 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

Hi Arvind,

On Mon, Oct 19, 2020 at 01:00:08PM -0400, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 05:11:19PM +0200, Joerg Roedel wrote:
> > +
> > +	/* Store value to memory and keep it in %r10 */
> > +	movq	%r10, sev_check_data(%rip)
> > +
> 
> Does there need to be a cache flush/invalidation between this and the
> read below to avoid just reading back from cache, or will the hardware
> take care of that?

No, a cache flush is not needed. When the C bit position is correct,
then the data will be mapped encrypted with the old and the new
page-table. If the C bit position is wrong, the access goes to a
different physical address.
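
As a made-up numeric example (the bit positions below are chosen purely
for illustration and are not from the patches), the same page-table
entry resolves to two different physical addresses depending on where
the hardware really interprets the C-bit:

	/* Illustration only: real C-bit at bit 51, hypervisor claimed 47 */
	u64 paddr    = 0x100000;
	u64 pte_good = paddr | (1ULL << 51);	/* hw strips bit 51:
						   paddr 0x100000, encrypted */
	u64 pte_bad  = paddr | (1ULL << 47);	/* bit 47 acts as a plain
						   address bit: paddr
						   0x800000100000, and the
						   access is unencrypted */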

> > +	/* Backup current %cr3 value to restore it later */
> > +	movq	%cr3, %r11
> > +
> > +	/* Switch to new %cr3 - This might unmap the stack */
> > +	movq	%rdi, %cr3
> 
> Does there need to be a TLB flush after this? When executed from the
> main kernel's head code, CR4.PGE is enabled, and if the original page
> mapping had the global bit set (the decompressor stub sets that in the
> identity mapping), won't the read below still use the original encrypted
> mapping if we don't explicitly flush it?

The check only really matters for the boot CPU, not for the secondary
CPUs. IIRC at this point in boot CR4.PGE is still off.

Regards,

	Joerg


* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 17:54     ` Arvind Sankar
@ 2020-10-19 20:39       ` Joerg Roedel
  2020-10-19 21:31         ` Arvind Sankar
  0 siblings, 1 reply; 19+ messages in thread
From: Joerg Roedel @ 2020-10-19 20:39 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Mon, Oct 19, 2020 at 01:54:47PM -0400, Arvind Sankar wrote:
> Also, isn't it possible that the initial page tables we're running on
> have already been messed with and have the C-bit in the wrong location,
> so that this write happens decrypted?

The code assumes that the page-table it is running on has the correct C
bit position set and that the code which set it up verified that it is
correct. For the kernel itself this is true, at least, but when booting
via UEFI the check also needs to happen in the firmware.

Note that the possibilities are limited when the hypervisor reports the
wrong C bit position because code fetches always assume encryption, even
when the C bit is cleared in the page-table. So a wrong C bit position
in the decompression stub would write the kernel image to memory
unencrypted and executing it would not be possible.

Regards,

	Joerg


* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 20:33     ` Joerg Roedel
@ 2020-10-19 21:22       ` Arvind Sankar
  2020-10-20  9:41         ` Joerg Roedel
  0 siblings, 1 reply; 19+ messages in thread
From: Arvind Sankar @ 2020-10-19 21:22 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Arvind Sankar, x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 10:33:45PM +0200, Joerg Roedel wrote:
> Hi Arvind,
> 
> On Mon, Oct 19, 2020 at 01:00:08PM -0400, Arvind Sankar wrote:
> > On Mon, Oct 19, 2020 at 05:11:19PM +0200, Joerg Roedel wrote:
> > > +
> > > +	/* Store value to memory and keep it in %r10 */
> > > +	movq	%r10, sev_check_data(%rip)
> > > +
> > 
> > Does there need to be a cache flush/invalidation between this and the
> > read below to avoid just reading back from cache, or will the hardware
> > take care of that?
> 
> No, a cache flush is not needed. When the C bit position is correct,
> then the data will be mapped encrypted with the old and the new
> page-table. If the C bit position is wrong, the access goes to a
> different physical address.

Ok.

> 
> > > +	/* Backup current %cr3 value to restore it later */
> > > +	movq	%cr3, %r11
> > > +
> > > +	/* Switch to new %cr3 - This might unmap the stack */
> > > +	movq	%rdi, %cr3
> > 
> > Does there need to be a TLB flush after this? When executed from the
> > main kernel's head code, CR4.PGE is enabled, and if the original page
> > mapping had the global bit set (the decompressor stub sets that in the
> > identity mapping), won't the read below still use the original encrypted
> > mapping if we don't explicitly flush it?
> 
> The check only really matters for the boot CPU, not for the secondary
> CPUs. IIRC at this point in boot CR4.PGE is still off.
> 
> Regards,
> 
> 	Joerg
> 

The boot CPU also enables CR4.PGE -- that code is shared between the
boot and secondary CPUs. The boot CPU jumps to the first "1" label
below, just before the call to sev_verify_cbit you're adding.

	/* Form the CR3 value being sure to include the CR3 modifier */
	addq	$(init_top_pgt - __START_KERNEL_map), %rax
1:

	/* Enable PAE mode, PGE and LA57 */
	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
#ifdef CONFIG_X86_5LEVEL
	testl	$1, __pgtable_l5_enabled(%rip)
	jz	1f
	orl	$X86_CR4_LA57, %ecx
1:
#endif
	movq	%rcx, %cr4

	/* Setup early boot stage 4-/5-level pagetables. */
	addq	phys_base(%rip), %rax
	movq	%rax, %cr3


* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 20:39       ` Joerg Roedel
@ 2020-10-19 21:31         ` Arvind Sankar
  2020-10-20  8:59           ` Joerg Roedel
  0 siblings, 1 reply; 19+ messages in thread
From: Arvind Sankar @ 2020-10-19 21:31 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Arvind Sankar, x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 10:39:35PM +0200, Joerg Roedel wrote:
> On Mon, Oct 19, 2020 at 01:54:47PM -0400, Arvind Sankar wrote:
> > Also, isn't it possible that the initial page tables we're running on
> > have already been messed with and have the C-bit in the wrong location,
> > so that this write happens decrypted?
> 
> The code assumes that the page-table it is running on has the correct C
> bit position set and that the code which set it up verified that it is
> correct. For the kernel itself this is true, at least, but when booting
> via UEFI the check also needs to happen in the firmware.
> 
> Note that the possibilities are limited when the hypervisor reports the
> wrong C bit position because code fetches always assume encryption, even
> when the C bit is cleared in the page-table. So a wrong C bit position
> in the decompression stub would write the kernel image to memory
> unencrypted and executing it would not be possible.

Is it possible to take advantage of this to make the check independent
of the original page tables? i.e. switch to the new pagetables, then
write into .data or .bss the opcodes for a function that does
	movabs	$imm64, %rax
	jmp	*%rdi	// avoid using stack for the return
filling in the imm64 with the RDRAND value, and then try to execute it.
If the C-bit value is wrong, this will probably crash, and at any rate
shouldn't return with the correct value in %rax.
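
A rough C sketch of that idea (everything here is hypothetical: the
buffer and the helpers are invented, the buffer would need a writable
and executable mapping, and a plain ret is used instead of the
stack-avoiding jmp *%rdi for brevity):

	/* Build a tiny function in memory: movabs $rand, %rax; ret */
	static u8 probe[11];			/* assumed mapped W+X */
	u64 rand, got;

	while (!rdrand_u64(&rand))		/* assumed RDRAND wrapper */
		;

	probe[0]  = 0x48;			/* REX.W prefix */
	probe[1]  = 0xb8;			/* movabs $imm64, %rax */
	memcpy(&probe[2], &rand, sizeof(rand));	/* imm64 = random value */
	probe[10] = 0xc3;			/* ret */

	got = ((u64 (*)(void))probe)();
	if (got != rand)
		sev_es_hlt_loop();		/* assumed helper: C-bit wrong */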

> 
> Regards,
> 
> 	Joerg
> 

* Re: [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-19 15:11 ` [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
@ 2020-10-20  0:59   ` Sean Christopherson
  2020-10-20  1:08     ` Sean Christopherson
  2020-10-20  9:55     ` Joerg Roedel
  0 siblings, 2 replies; 19+ messages in thread
From: Sean Christopherson @ 2020-10-20  0:59 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 05:11:17PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Introduce sev_status and initialize it together with sme_me_mask to have
> an indicator of which SEV features are enabled.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/boot/compressed/mem_encrypt.S | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index dd07e7b41b11..0effd58f0095 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -71,6 +71,8 @@ SYM_FUNC_END(get_sev_encryption_bit)
>  SYM_FUNC_START(set_sev_encryption_mask)
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	push	%rbp
> +	push	%rax
> +	push	%rcx

There's no need to save/restore RAX and RCX, they are callee save.  This
function is only called from C, so I doubt it's using a custom ABI.

>  	push	%rdx
>  
>  	movq	%rsp, %rbp		/* Save current stack pointer */
> @@ -81,10 +83,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
>  
>  	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
>  
> +	/* Read sev_status */
> +	movl	$MSR_AMD64_SEV, %ecx
> +	rdmsr
> +	shlq	$32, %rdx
> +	orq	%rdx, %rax
> +	movq	%rax, sev_status(%rip)
> +
>  .Lno_sev_mask:
>  	movq	%rbp, %rsp		/* Restore original stack pointer */
>  
>  	pop	%rdx
> +	pop	%rcx
> +	pop	%rax
>  	pop	%rbp
>  #endif
>  
> @@ -96,5 +107,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
>  
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	.balign	8
> -SYM_DATA(sme_me_mask, .quad 0)
> +SYM_DATA(sme_me_mask,		.quad 0)
> +SYM_DATA(sev_status,		.quad 0)
>  #endif
> -- 
> 2.28.0
> 

* Re: [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-20  0:59   ` Sean Christopherson
@ 2020-10-20  1:08     ` Sean Christopherson
  2020-10-20  9:55     ` Joerg Roedel
  1 sibling, 0 replies; 19+ messages in thread
From: Sean Christopherson @ 2020-10-20  1:08 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: x86, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 05:59:25PM -0700, Sean Christopherson wrote:
> On Mon, Oct 19, 2020 at 05:11:17PM +0200, Joerg Roedel wrote:
> > From: Joerg Roedel <jroedel@suse.de>
> > 
> > Introduce sev_status and initialize it together with sme_me_mask to have
> > an indicator of which SEV features are enabled.
> > 
> > Signed-off-by: Joerg Roedel <jroedel@suse.de>
> > ---
> >  arch/x86/boot/compressed/mem_encrypt.S | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> > index dd07e7b41b11..0effd58f0095 100644
> > --- a/arch/x86/boot/compressed/mem_encrypt.S
> > +++ b/arch/x86/boot/compressed/mem_encrypt.S
> > @@ -71,6 +71,8 @@ SYM_FUNC_END(get_sev_encryption_bit)
> >  SYM_FUNC_START(set_sev_encryption_mask)
> >  #ifdef CONFIG_AMD_MEM_ENCRYPT
> >  	push	%rbp
> > +	push	%rax
> > +	push	%rcx
> 
> There's no need to save/restore RAX and RCX, they are callee save.  This
> function is only called from C, so I doubt it's using a custom ABI.

Gah, *caller* save.  Feedback stands, my English notwithstanding.

* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 21:31         ` Arvind Sankar
@ 2020-10-20  8:59           ` Joerg Roedel
  2020-10-20 14:33             ` Arvind Sankar
  0 siblings, 1 reply; 19+ messages in thread
From: Joerg Roedel @ 2020-10-20  8:59 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Mon, Oct 19, 2020 at 05:31:06PM -0400, Arvind Sankar wrote:
> Is it possible to take advantage of this to make the check independent
> of the original page tables? i.e. switch to the new pagetables, then
> write into .data or .bss the opcodes for a function that does
> 	movabs	$imm64, %rax
> 	jmp	*%rdi	// avoid using stack for the return
> filling in the imm64 with the RDRAND value, and then try to execute it.
> If the C-bit value is wrong, this will probably crash, and at any rate
> shouldn't return with the correct value in %rax.

That could work, but is not reliable. When the C bit is wrong the CPU
would essentially execute random data, which could also be a valid
instruction stream. A crash is not guaranteed.

Regards,

	Joerg

* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-19 21:22       ` Arvind Sankar
@ 2020-10-20  9:41         ` Joerg Roedel
  0 siblings, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-20  9:41 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Mon, Oct 19, 2020 at 05:22:47PM -0400, Arvind Sankar wrote:
> The boot cpu also enables CR4.PGE -- that code is shared between boot
> and secondary cpus. The boot cpu jumps to the first "1" label below,
> just before the call to sev_verify_cbit you're adding.

You are right; in the real kernel image PGE gets enabled early. I added
code to save and restore CR4 in sev_verify_cbit() and to disable PGE
during the test.
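
In C-like form the added handling amounts to something like this (a
sketch only, using existing kernel helpers; the real change is assembly
in sev_verify_cbit.S):

	unsigned long cr4 = native_read_cr4();

	/* Toggling CR4.PGE flushes all TLB entries, including global ones */
	native_write_cr4(cr4 & ~X86_CR4_PGE);

	/* ... write random value, switch %cr3, compare, switch back ... */

	native_write_cr4(cr4);		/* restore PGE if it was enabled */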

Thanks,

	Joerg


* Re: [PATCH 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-20  0:59   ` Sean Christopherson
  2020-10-20  1:08     ` Sean Christopherson
@ 2020-10-20  9:55     ` Joerg Roedel
  1 sibling, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-20  9:55 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, Tom Lendacky,
	linux-kernel

On Mon, Oct 19, 2020 at 05:59:25PM -0700, Sean Christopherson wrote:
> On Mon, Oct 19, 2020 at 05:11:17PM +0200, Joerg Roedel wrote:
> > +	push	%rax
> > +	push	%rcx
> 
> There's no need to save/restore RAX and RCX, they are callee save.  This
> function is only called from C, so I doubt it's using a custom ABI.

Right, removed the save/restore of these registers from the function.

Thanks,

	Joerg


* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20  8:59           ` Joerg Roedel
@ 2020-10-20 14:33             ` Arvind Sankar
  2020-10-20 15:44               ` Joerg Roedel
  0 siblings, 1 reply; 19+ messages in thread
From: Arvind Sankar @ 2020-10-20 14:33 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Arvind Sankar, Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Martin Radev, Tom Lendacky,
	linux-kernel

On Tue, Oct 20, 2020 at 10:59:57AM +0200, Joerg Roedel wrote:
> On Mon, Oct 19, 2020 at 05:31:06PM -0400, Arvind Sankar wrote:
> > Is it possible to take advantage of this to make the check independent
> > of the original page tables? i.e. switch to the new pagetables, then
> > write into .data or .bss the opcodes for a function that does
> > 	movabs	$imm64, %rax
> > 	jmp	*%rdi	// avoid using stack for the return
> > filling in the imm64 with the RDRAND value, and then try to execute it.
> > If the C-bit value is wrong, this will probably crash, and at any rate
> > shouldn't return with the correct value in %rax.
> 
> That could work, but is not reliable. When the C bit is wrong the CPU
> would essentially execute random data, which could also be a valid
> instruction stream. A crash is not guaranteed.
> 

That doesn't feel like a big loss: if a malicious hypervisor wanted to
induce completely random code execution, it could do that anyway by just
messing with the guest-to-host translation, no?

We would need to avoid calling this in the secondary CPU startup, I guess.

I was hoping to be able to clean up the identity mapping in
__startup_64(), which currently maps the entire kernel using wraparound
entries, to just map the head page of the kernel, since AFAICT nothing
else is actually used from the identity mapping after switching to the
new page tables. But we'd need to keep it to support this check.

* Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-20 14:33             ` Arvind Sankar
@ 2020-10-20 15:44               ` Joerg Roedel
  0 siblings, 0 replies; 19+ messages in thread
From: Joerg Roedel @ 2020-10-20 15:44 UTC (permalink / raw)
  To: Arvind Sankar
  Cc: Joerg Roedel, x86, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Martin Radev, Tom Lendacky, linux-kernel

On Tue, Oct 20, 2020 at 10:33:12AM -0400, Arvind Sankar wrote:
> That doesn't feel like a big loss: if a malicious hypervisor wanted to
> induce completely random code execution, it can do that anyway by just
> messing with the guest-to-host translation, no?

Yes, but relying on defined behavior is still better. Undefined behavior
could also mean it jumps to some other code which then leaks data.

Regards,

	Joerg

