linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors
@ 2020-10-28 16:46 Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Hi,

here are some enhancements to the SEV(-ES) code in the Linux kernel
that let it protect itself against some newly discovered hypervisor
attacks. Three attacks are addressed here:

	1) Hypervisor does not present the SEV-enabled bit via CPUID

	2) The Hypervisor presents the wrong C-bit position via CPUID

	3) An encrypted RAM page is mapped as MMIO in the nested
	   page-table, causing #VC exceptions and a possible leak of
	   the data to the hypervisor, or data/code injection from
	   the hypervisor.

The attacks are described in more detail in this paper:

	https://arxiv.org/abs/2010.07094
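
The key observation behind mitigating the first two vectors is stated in
patch 2: a #VC exception is only ever raised while SEV-ES is active, so
inside the #VC handler the guest knows SEV is on no matter what CPUID
claims. A minimal C sketch of that reasoning (illustrative helper, not
code from the series):

```c
#include <stdbool.h>

/*
 * A #VC exception fires only when SEV-ES is active. Therefore, while
 * handling a #VC, any CPUID answer implying "no SEV" is necessarily a
 * hypervisor lie and must not steer the guest into the no-SEV path.
 */
static bool cpuid_answer_is_lie(bool in_vc_handler, bool cpuid_claims_sev)
{
	return in_vc_handler && !cpuid_claims_sev;
}
```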

Please review.

Thanks,

        Joerg

Changes to v3:

	- Addressed Boris' review comments

Changes to v2:

	- Use %r8/%r9 to modify %cr4 in sev_verify_cbit()
	  and return the new page-table pointer in that function.

Changes to v1:

	- Disable CR4.PGE during C-bit test

	- Do not save/restore caller-saved registers in
	  set_sev_encryption_mask()

Joerg Roedel (5):
  x86/boot/compressed/64: Introduce sev_status
  x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  x86/head/64: Check SEV encryption before switching to kernel
    page-table
  x86/sev-es: Do not support MMIO to/from encrypted memory

 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  | 20 +++++-
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/head_64.S               | 16 +++++
 arch/x86/kernel/sev-es-shared.c         | 26 +++++++
 arch/x86/kernel/sev-es.c                | 20 ++++--
 arch/x86/kernel/sev_verify_cbit.S       | 90 +++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c               |  1 +
 8 files changed, 168 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

-- 
2.28.0


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
@ 2020-10-28 16:46 ` Joerg Roedel
  2020-10-28 17:14   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Introduce sev_status and initialize it together with sme_me_mask to have
an indicator of which SEV features are enabled.
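
For reference, the two low architectural bits of the MSR stored into
sev_status can be decoded as below (bit positions per the AMD
architecture manual; a standalone sketch, not code from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Architectural bits of MSR_AMD64_SEV (MSR 0xc0010131). */
#define MSR_AMD64_SEV_ENABLED    (1ULL << 0)
#define MSR_AMD64_SEV_ES_ENABLED (1ULL << 1)

/* True when running as a plain SEV guest (or better). */
static bool sev_active(uint64_t sev_status)
{
	return sev_status & MSR_AMD64_SEV_ENABLED;
}

/* True when SEV-ES (encrypted register state, #VC handling) is on. */
static bool sev_es_active(uint64_t sev_status)
{
	return sev_status & MSR_AMD64_SEV_ES_ENABLED;
}
```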

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/mem_encrypt.S | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index dd07e7b41b11..0bae1ca658d9 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -81,6 +81,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
 
 	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
 
+	/*
+	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
+	 * get_sev_encryption_bit() because this function is 32 bit code and
+	 * shared between 64 bit and 32 bit boot path.
+	 */
+	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
+	rdmsr
+
+	/* Store MSR value in sev_status */
+	shlq	$32, %rdx
+	orq	%rdx, %rax
+	movq	%rax, sev_status(%rip)
+
 .Lno_sev_mask:
 	movq	%rbp, %rsp		/* Restore original stack pointer */
 
@@ -96,5 +109,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.balign	8
-SYM_DATA(sme_me_mask, .quad 0)
+SYM_DATA(sme_me_mask,		.quad 0)
+SYM_DATA(sev_status,		.quad 0)
 #endif
-- 
2.28.0



* [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
@ 2020-10-28 16:46 ` Joerg Roedel
  2020-10-28 17:15   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] x86/boot/compressed/64: Sanity-check CPUID results in the " tip-bot2 for Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

The early #VC handler which doesn't have a GHCB can only handle CPUID
exit codes. It is needed by the early boot code to handle #VC
exceptions raised in verify_cpu() and to get the position of the C
bit.

But the CPUID information comes from the hypervisor, which is untrusted
and might return results which trick the guest into the no-SEV boot path
with no C bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handlers to make sure the hypervisor
can not pretend that SEV is disabled.
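
The three checks added by the hunk below amount to the following
predicate (a plain-C restatement for illustration; the function name is
hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Returns true when the CPUID answer for function @fn is plausible for
 * an SEV(-ES) guest. The #VC exception for CPUID already proves SEV-ES
 * is active, so the hypervisor must not be allowed to claim otherwise.
 */
static bool sev_cpuid_result_plausible(uint32_t fn, uint32_t eax, uint32_t ecx)
{
	if (fn == 1)
		return ecx & (1U << 31);	/* hypervisor bit must be set */
	if (fn == 0x80000000u)
		return eax >= 0x8000001fu;	/* SEV leaf must be reported */
	if (fn == 0x8000001fu)
		return eax & (1U << 1);		/* SEV bit must be set */
	return true;				/* other leaves: no check */
}
```

A wrong C-bit position can still be reported here; that case is handled
separately in patch 3.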

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index 5f83ccaab877..56d16c405b03 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 		goto fail;
 	regs->dx = val >> 32;
 
+	/*
+	 * This is a VC handler and the #VC is only raised when SEV-ES is
+	 * active, which means SEV must be active too. Do sanity checks on the
+	 * CPUID results to make sure the hypervisor does not trick the kernel
+	 * into the no-sev path. This could map sensitive data unencrypted and
+	 * make it accessible to the hypervisor.
+	 *
+	 * In particular, check for:
+	 *	- Hypervisor CPUID bit
+	 *	- Availability of CPUID leaf 0x8000001f
+	 *	- SEV CPUID bit.
+	 *
+	 * The hypervisor might still report the wrong C-bit position, but this
+	 * can't be checked here.
+	 */
+
+	if ((fn == 1 && !(regs->cx & BIT(31))))
+		/* Hypervisor bit */
+		goto fail;
+	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+		/* SEV Leaf check */
+		goto fail;
+	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
+		/* SEV Bit */
+		goto fail;
+
 	/* Skip over the CPUID two-byte opcode */
 	regs->ip += 2;
 
-- 
2.28.0



* [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
@ 2020-10-28 16:46 ` Joerg Roedel
  2020-10-28 17:25   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 2 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak sensitive
data from the guest to the hypervisor.

The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
can be re-used in the running kernel image.
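
The write-then-reread idea implemented in assembly below can be modeled
in user-space C as follows. This is a toy model only, with hypothetical
names: "encryption" is an XOR, whereas the real check must be assembly
because the stack may be unmapped while the new page-table is loaded.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model: hardware encrypts the page using the *real* C-bit; the
 * guest reads the value back through the C-bit position the hypervisor
 * reported. Only when the positions agree does the read see plaintext.
 */
static uint64_t read_through_cbit(uint64_t stored_plaintext, int real_cbit,
				  int reported_cbit, uint64_t key)
{
	/* A wrong C-bit means the read bypasses decryption: ciphertext. */
	if (real_cbit != reported_cbit)
		return stored_plaintext ^ key;
	return stored_plaintext;
}

static bool cbit_position_ok(int real_cbit, int reported_cbit)
{
	/* The real code gets this value from RDRAND. */
	uint64_t random_val = 0x1234abcd5678ef00ULL;

	return read_through_cbit(random_val, real_cbit, reported_cbit,
				 0xdeadbeefULL) == random_val;
}
```

On failure the real function does not return an error; it invalidates
the stack pointer and halts, so no ROP gadget can resume execution.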

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/ident_map_64.c |  1 +
 arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
 arch/x86/boot/compressed/misc.h         |  2 +
 arch/x86/kernel/sev_verify_cbit.S       | 90 +++++++++++++++++++++++++
 4 files changed, 97 insertions(+)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index a5e5db6ada3c..39b2eded7bc2 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -164,6 +164,7 @@ void initialize_identity_maps(void *rmode)
 	add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
 
 	/* Load the new page-table. */
+	sev_verify_cbit(top_level_pgt);
 	write_cr3(top_level_pgt);
 }
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 0bae1ca658d9..3275dbab085d 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
 SYM_FUNC_END(get_sev_encryption_bit)
 
 	.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
@@ -111,4 +114,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
+SYM_DATA(sev_check_data,	.quad 0)
 #endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b4c4d1..d9a631c5973c 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
+unsigned long sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 000000000000..b96f0573f8af
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *	sev_verify_cbit.S - Code for verification of the C-bit position reported
+ *			    by the Hypervisor when running with SEV enabled.
+ *
+ *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
+ *
+ * Implements sev_verify_cbit() which is called before switching to a new
+ * long-mode page-table at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* First check if a C-bit was detected */
+	movq	sme_me_mask(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+	movq	sev_status(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* Save CR4 in %rsi */
+	movq	%cr4, %rsi
+
+	/* Disable Global Pages */
+	movq	%rsi, %rdx
+	andq	$(~X86_CR4_PGE), %rdx
+	movq	%rdx, %cr4
+
+	/*
+	 * Verified that running under SEV - now get a random value using
+	 * RDRAND. This instruction is mandatory when running as an SEV guest.
+	 *
+	 * Don't bail out of the loop if RDRAND returns errors. It is better to
+	 * prevent forward progress than to work with a non-random value here.
+	 */
+1:	rdrand	%rdx
+	jnc	1b
+
+	/* Store value to memory and keep it in %r10 */
+	movq	%rdx, sev_check_data(%rip)
+
+	/* Backup current %cr3 value to restore it later */
+	movq	%cr3, %rcx
+
+	/* Switch to new %cr3 - This might unmap the stack */
+	movq	%rdi, %cr3
+
+	/*
+	 * Compare value in %rdx with memory location - If C-Bit is incorrect
+	 * this would read the encrypted data and make the check fail.
+	 */
+	cmpq	%rdx, sev_check_data(%rip)
+
+	/* Restore old %cr3 */
+	movq	%rcx, %cr3
+
+	/* Restore previous CR4 */
+	movq	%rsi, %cr4
+
+	/* Check CMPQ result */
+	je	3f
+
+	/*
+	 * The check failed - Prevent any forward progress to prevent ROP
+	 * attacks, invalidate the stack and go into a hlt loop.
+	 */
+	xorq	%rsp, %rsp
+	subq	$0x1000, %rsp
+2:	hlt
+	jmp 2b
+3:
+#endif
+	/* Return page-table pointer */
+	movq	%rdi, %rax
+	ret
+SYM_FUNC_END(sev_verify_cbit)
+
-- 
2.28.0



* [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table
  2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (2 preceding siblings ...)
  2020-10-28 16:46 ` [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-28 16:46 ` Joerg Roedel
  2020-10-28 17:29   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  2020-10-28 16:46 ` [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  4 siblings, 2 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

When SEV is enabled the kernel requests the C-Bit position again from
the hypervisor to built its own page-table. Since the hypervisor is an
untrusted source the C-bit position needs to be verified before the
kernel page-table is used.

Call the sev_verify_cbit() function before writing the CR3.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/head_64.S | 16 ++++++++++++++++
 arch/x86/mm/mem_encrypt.c |  1 +
 2 files changed, 17 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 7eb2a1c87969..3c417734790f 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -161,6 +161,21 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
+
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and wait for
+	 * the next RET instruction.
+	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
+	 * and restore it.
+	 */
+	pushq	%rsi
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+	popq	%rsi
+
+	/* Switch to new page-table */
 	movq	%rax, %cr3
 
 	/* Ensure I am executing from virtual addresses */
@@ -279,6 +294,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+#include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index efbb3de472df..bc0833713be9 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -39,6 +39,7 @@
  */
 u64 sme_me_mask __section(".data") = 0;
 u64 sev_status __section(".data") = 0;
+u64 sev_check_data __section(".data") = 0;
 EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);
-- 
2.28.0



* [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory
  2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
                   ` (3 preceding siblings ...)
  2020-10-28 16:46 ` [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
@ 2020-10-28 16:46 ` Joerg Roedel
  2020-10-28 17:31   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  4 siblings, 2 replies; 16+ messages in thread
From: Joerg Roedel @ 2020-10-28 16:46 UTC (permalink / raw)
  To: x86
  Cc: Joerg Roedel, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Kees Cook, Arvind Sankar, Martin Radev,
	Tom Lendacky, linux-kernel

From: Joerg Roedel <jroedel@suse.de>

MMIO memory is usually not mapped encrypted, so there is no reason to
support emulated MMIO when it is mapped encrypted.

Prevent a possible hypervisor attack where a RAM page is mapped as
an MMIO page in the nested page-table, so that any guest access to it
will trigger a #VC exception and leak the data on that page to the
hypervisor via the GHCB (like with valid MMIO). On the read side this
attack would allow the HV to inject data into the guest.
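
The translation checks introduced by the diff below can be summarized
as a result mapping (the enum is abridged to the three values used
here, and the C-bit position is illustrative):

```c
#include <stdint.h>

/* Abridged; the kernel's enum es_result has additional states. */
enum es_result { ES_OK, ES_UNSUPPORTED, ES_EXCEPTION };

#define PTE_PRESENT	(1ULL << 0)
#define PAGE_ENC	(1ULL << 51)	/* example C-bit position */

/*
 * Mirror of the checks in vc_slow_virt_to_phys(): an unmapped address
 * is a page fault (ES_EXCEPTION); an encrypted page must never be
 * emulated as MMIO (ES_UNSUPPORTED); otherwise emulation may proceed.
 */
static enum es_result mmio_translate_check(uint64_t pte)
{
	if (!(pte & PTE_PRESENT))
		return ES_EXCEPTION;
	if (pte & PAGE_ENC)
		return ES_UNSUPPORTED;
	return ES_OK;
}
```

Note the caller in vc_do_mmio() only adds X86_PF_WRITE to the error
code for the ES_EXCEPTION case, since ES_UNSUPPORTED is not a fault.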

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-es.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 4a96726fbaf8..0bd1a0fc587e 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -374,8 +374,8 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	return ES_EXCEPTION;
 }
 
-static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned long vaddr, phys_addr_t *paddr)
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
 {
 	unsigned long va = (unsigned long)vaddr;
 	unsigned int level;
@@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 		if (user_mode(ctxt->regs))
 			ctxt->fi.error_code |= X86_PF_USER;
 
-		return false;
+		return ES_EXCEPTION;
 	}
 
+	if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
+		/* Emulated MMIO to/from encrypted memory not supported */
+		return ES_UNSUPPORTED;
+
 	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	pa |= va & ~page_level_mask(level);
 
 	*paddr = pa;
 
-	return true;
+	return ES_OK;
 }
 
 /* Include code shared with pre-decompression boot stage */
@@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 {
 	u64 exit_code, exit_info_1, exit_info_2;
 	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
 	phys_addr_t paddr;
 	void __user *ref;
 
@@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 
 	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
 
-	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
-		if (!read)
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
 			ctxt->fi.error_code |= X86_PF_WRITE;
 
-		return ES_EXCEPTION;
+		return res;
 	}
 
 	exit_info_1 = paddr;
-- 
2.28.0



* Re: [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status
  2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
@ 2020-10-28 17:14   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2020-10-28 17:14 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, linux-kernel

On 10/28/20 11:46 AM, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Introduce sev_status and initialize it together with sme_me_mask to have
> an indicator which SEV features are enabled.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/boot/compressed/mem_encrypt.S | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index dd07e7b41b11..0bae1ca658d9 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -81,6 +81,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
>  
>  	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
>  
> +	/*
> +	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
> +	 * get_sev_encryption_bit() because this function is 32 bit code and
> +	 * shared between 64 bit and 32 bit boot path.
> +	 */
> +	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
> +	rdmsr
> +
> +	/* Store MSR value in sev_status */
> +	shlq	$32, %rdx
> +	orq	%rdx, %rax
> +	movq	%rax, sev_status(%rip)
> +
>  .Lno_sev_mask:
>  	movq	%rbp, %rsp		/* Restore original stack pointer */
>  
> @@ -96,5 +109,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
>  
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	.balign	8
> -SYM_DATA(sme_me_mask, .quad 0)
> +SYM_DATA(sme_me_mask,		.quad 0)
> +SYM_DATA(sev_status,		.quad 0)
>  #endif
> 


* Re: [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
  2020-10-28 16:46 ` [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
@ 2020-10-28 17:15   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] x86/boot/compressed/64: Sanity-check CPUID results in the " tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2020-10-28 17:15 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, linux-kernel

On 10/28/20 11:46 AM, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> The early #VC handler which doesn't have a GHCB can only handle CPUID
> exit codes. It is needed by the early boot code to handle #VC
> exceptions raised in verify_cpu() and to get the position of the C
> bit.
> 
> But the CPUID information comes from the hypervisor, which is untrusted
> and might return results which trick the guest into the no-SEV boot path
> with no C bit set in the page-tables. All data written to memory would
> then be unencrypted and could leak sensitive data to the hypervisor.
> 
> Add sanity checks to the early #VC handlers to make sure the hypervisor
> can not pretend that SEV is disabled.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
> index 5f83ccaab877..56d16c405b03 100644
> --- a/arch/x86/kernel/sev-es-shared.c
> +++ b/arch/x86/kernel/sev-es-shared.c
> @@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
>  		goto fail;
>  	regs->dx = val >> 32;
>  
> +	/*
> +	 * This is a VC handler and the #VC is only raised when SEV-ES is
> +	 * active, which means SEV must be active too. Do sanity checks on the
> +	 * CPUID results to make sure the hypervisor does not trick the kernel
> +	 * into the no-sev path. This could map sensitive data unencrypted and
> +	 * make it accessible to the hypervisor.
> +	 *
> +	 * In particular, check for:
> +	 *	- Hypervisor CPUID bit
> +	 *	- Availability of CPUID leaf 0x8000001f
> +	 *	- SEV CPUID bit.
> +	 *
> +	 * The hypervisor might still report the wrong C-bit position, but this
> +	 * can't be checked here.
> +	 */
> +
> +	if ((fn == 1 && !(regs->cx & BIT(31))))
> +		/* Hypervisor bit */
> +		goto fail;
> +	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
> +		/* SEV Leaf check */
> +		goto fail;
> +	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
> +		/* SEV Bit */
> +		goto fail;
> +
>  	/* Skip over the CPUID two-byte opcode */
>  	regs->ip += 2;
>  
> 


* Re: [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-28 16:46 ` [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
@ 2020-10-28 17:25   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2020-10-28 17:25 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, linux-kernel

On 10/28/20 11:46 AM, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. Using a wrong C-bit position could be used to leak
> sensitive data from the guest to the hypervisor.
> 
> The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> can be re-used in the running kernel image.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>

Just one minor comment below, otherwise:

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/boot/compressed/ident_map_64.c |  1 +
>  arch/x86/boot/compressed/mem_encrypt.S  |  4 ++
>  arch/x86/boot/compressed/misc.h         |  2 +
>  arch/x86/kernel/sev_verify_cbit.S       | 90 +++++++++++++++++++++++++
>  4 files changed, 97 insertions(+)
>  create mode 100644 arch/x86/kernel/sev_verify_cbit.S
> 
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index a5e5db6ada3c..39b2eded7bc2 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -164,6 +164,7 @@ void initialize_identity_maps(void *rmode)
>  	add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
>  
>  	/* Load the new page-table. */
> +	sev_verify_cbit(top_level_pgt);
>  	write_cr3(top_level_pgt);
>  }
>  
> diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
> index 0bae1ca658d9..3275dbab085d 100644
> --- a/arch/x86/boot/compressed/mem_encrypt.S
> +++ b/arch/x86/boot/compressed/mem_encrypt.S
> @@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
>  SYM_FUNC_END(get_sev_encryption_bit)
>  
>  	.code64
> +
> +#include "../../kernel/sev_verify_cbit.S"
> +
>  SYM_FUNC_START(set_sev_encryption_mask)
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	push	%rbp
> @@ -111,4 +114,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
>  	.balign	8
>  SYM_DATA(sme_me_mask,		.quad 0)
>  SYM_DATA(sev_status,		.quad 0)
> +SYM_DATA(sev_check_data,	.quad 0)
>  #endif
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 6d31f1b4c4d1..d9a631c5973c 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -159,4 +159,6 @@ void boot_page_fault(void);
>  void boot_stage1_vc(void);
>  void boot_stage2_vc(void);
>  
> +unsigned long sev_verify_cbit(unsigned long cr3);
> +
>  #endif /* BOOT_COMPRESSED_MISC_H */
> diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
> new file mode 100644
> index 000000000000..b96f0573f8af
> --- /dev/null
> +++ b/arch/x86/kernel/sev_verify_cbit.S
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + *	sev_verify_cbit.S - Code for verification of the C-bit position reported
> + *			    by the Hypervisor when running with SEV enabled.
> + *
> + *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
> + *
> + * Implements sev_verify_cbit() which is called before switching to a new
> + * long-mode page-table at boot.
> + *
> + * It verifies that the C-bit position is correct by writing a random value to
> + * an encrypted memory location while on the current page-table. Then it
> + * switches to the new page-table to verify the memory content is still the
> + * same. After that it switches back to the current page-table and when the
> + * check succeeded it returns. If the check failed the code invalidates the
> + * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
> + * make sure no interrupt or exception can get the CPU out of the hlt loop.
> + *
> + * New page-table pointer is expected in %rdi (first parameter)
> + *
> + */
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +	/* First check if a C-bit was detected */
> +	movq	sme_me_mask(%rip), %rsi
> +	testq	%rsi, %rsi
> +	jz	3f
> +
> +	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
> +	movq	sev_status(%rip), %rsi
> +	testq	%rsi, %rsi
> +	jz	3f
> +
> +	/* Save CR4 in %rsi */
> +	movq	%cr4, %rsi
> +
> +	/* Disable Global Pages */
> +	movq	%rsi, %rdx
> +	andq	$(~X86_CR4_PGE), %rdx
> +	movq	%rdx, %cr4
> +
> +	/*
> +	 * Verified that running under SEV - now get a random value using
> +	 * RDRAND. This instruction is mandatory when running as an SEV guest.
> +	 *
> +	 * Don't bail out of the loop if RDRAND returns errors. It is better to
> +	 * prevent forward progress than to work with a non-random value here.
> +	 */
> +1:	rdrand	%rdx
> +	jnc	1b
> +
> +	/* Store value to memory and keep it in %r10 */

This should say "keep it in %rdx"

> +	movq	%rdx, sev_check_data(%rip)
> +
> +	/* Backup current %cr3 value to restore it later */
> +	movq	%cr3, %rcx
> +
> +	/* Switch to new %cr3 - This might unmap the stack */
> +	movq	%rdi, %cr3
> +
> +	/*
> +	 * Compare value in %rdx with memory location - If C-Bit is incorrect
> +	 * this would read the encrypted data and make the check fail.
> +	 */
> +	cmpq	%rdx, sev_check_data(%rip)
> +
> +	/* Restore old %cr3 */
> +	movq	%rcx, %cr3
> +
> +	/* Restore previous CR4 */
> +	movq	%rsi, %cr4
> +
> +	/* Check CMPQ result */
> +	je	3f
> +
> +	/*
> +	 * The check failed - Prevent any forward progress to prevent ROP
> +	 * attacks, invalidate the stack and go into a hlt loop.
> +	 */
> +	xorq	%rsp, %rsp
> +	subq	$0x1000, %rsp
> +2:	hlt
> +	jmp 2b
> +3:
> +#endif
> +	/* Return page-table pointer */
> +	movq	%rdi, %rax
> +	ret
> +SYM_FUNC_END(sev_verify_cbit)
> +
> 


* Re: [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table
  2020-10-28 16:46 ` [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
@ 2020-10-28 17:29   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2020-10-28 17:29 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, linux-kernel

On 10/28/20 11:46 AM, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> When SEV is enabled the kernel requests the C-Bit position again from
> the hypervisor to built its own page-table. Since the hypervisor is an

s/built/build/

> untrusted source the C-bit position needs to be verified before the
> kernel page-table is used.
> 
> Call the sev_verify_cbit() function before writing the CR3.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/kernel/head_64.S | 16 ++++++++++++++++
>  arch/x86/mm/mem_encrypt.c |  1 +
>  2 files changed, 17 insertions(+)
> 
> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 7eb2a1c87969..3c417734790f 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -161,6 +161,21 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
>  
>  	/* Setup early boot stage 4-/5-level pagetables. */
>  	addq	phys_base(%rip), %rax
> +
> +	/*
> +	 * For SEV guests: Verify that the C-bit is correct. A malicious
> +	 * hypervisor could lie about the C-bit position to perform a ROP
> +	 * attack on the guest by writing to the unencrypted stack and waiting for
> +	 * the next RET instruction.
> +	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
> +	 * and restore it.
> +	 */
> +	pushq	%rsi
> +	movq	%rax, %rdi
> +	call	sev_verify_cbit
> +	popq	%rsi
> +
> +	/* Switch to new page-table */
>  	movq	%rax, %cr3
>  
>  	/* Ensure I am executing from virtual addresses */
> @@ -279,6 +294,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
>  SYM_CODE_END(secondary_startup_64)
>  
>  #include "verify_cpu.S"
> +#include "sev_verify_cbit.S"
>  
>  #ifdef CONFIG_HOTPLUG_CPU
>  /*
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index efbb3de472df..bc0833713be9 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -39,6 +39,7 @@
>   */
>  u64 sme_me_mask __section(".data") = 0;
>  u64 sev_status __section(".data") = 0;
> +u64 sev_check_data __section(".data") = 0;
>  EXPORT_SYMBOL(sme_me_mask);
>  DEFINE_STATIC_KEY_FALSE(sev_enable_key);
>  EXPORT_SYMBOL_GPL(sev_enable_key);
> 


* Re: [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory
  2020-10-28 16:46 ` [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
@ 2020-10-28 17:31   ` Tom Lendacky
  2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: Tom Lendacky @ 2020-10-28 17:31 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Kees Cook, Arvind Sankar, Martin Radev, linux-kernel

On 10/28/20 11:46 AM, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> MMIO memory is usually not mapped encrypted, so there is no reason to
> support emulated MMIO when it is mapped encrypted.
> 
> Prevent a possible hypervisor attack where a RAM page is mapped as
> an MMIO page in the nested page-table, so that any guest access to it
> will trigger a #VC exception and leak the data on that page to the
> hypervisor via the GHCB (like with valid MMIO). On the read side this
> attack would allow the HV to inject data into the guest.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>

> ---
>  arch/x86/kernel/sev-es.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
> index 4a96726fbaf8..0bd1a0fc587e 100644
> --- a/arch/x86/kernel/sev-es.c
> +++ b/arch/x86/kernel/sev-es.c
> @@ -374,8 +374,8 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
>  	return ES_EXCEPTION;
>  }
>  
> -static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
> -				 unsigned long vaddr, phys_addr_t *paddr)
> +static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
> +					   unsigned long vaddr, phys_addr_t *paddr)
>  {
>  	unsigned long va = (unsigned long)vaddr;
>  	unsigned int level;
> @@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
>  		if (user_mode(ctxt->regs))
>  			ctxt->fi.error_code |= X86_PF_USER;
>  
> -		return false;
> +		return ES_EXCEPTION;
>  	}
>  
> +	if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
> +		/* Emulated MMIO to/from encrypted memory not supported */
> +		return ES_UNSUPPORTED;
> +
>  	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
>  	pa |= va & ~page_level_mask(level);
>  
>  	*paddr = pa;
>  
> -	return true;
> +	return ES_OK;
>  }
>  
>  /* Include code shared with pre-decompression boot stage */
> @@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
>  {
>  	u64 exit_code, exit_info_1, exit_info_2;
>  	unsigned long ghcb_pa = __pa(ghcb);
> +	enum es_result res;
>  	phys_addr_t paddr;
>  	void __user *ref;
>  
> @@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
>  
>  	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
>  
> -	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
> -		if (!read)
> +	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
> +	if (res != ES_OK) {
> +		if (res == ES_EXCEPTION && !read)
>  			ctxt->fi.error_code |= X86_PF_WRITE;
>  
> -		return ES_EXCEPTION;
> +		return res;
>  	}
>  
>  	exit_info_1 = paddr;
> 


* [tip: x86/seves] x86/sev-es: Do not support MMIO to/from encrypted memory
  2020-10-28 16:46 ` [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
  2020-10-28 17:31   ` Tom Lendacky
@ 2020-10-29 19:17   ` tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-10-29 19:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Joerg Roedel, Borislav Petkov, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/seves branch of tip:

Commit-ID:     2411cd82112397bfb9d8f0f19cd46c3d71e0ce67
Gitweb:        https://git.kernel.org/tip/2411cd82112397bfb9d8f0f19cd46c3d71e0ce67
Author:        Joerg Roedel <jroedel@suse.de>
AuthorDate:    Wed, 28 Oct 2020 17:46:59 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Thu, 29 Oct 2020 19:27:42 +01:00

x86/sev-es: Do not support MMIO to/from encrypted memory

MMIO memory is usually not mapped encrypted, so there is no reason to
support emulated MMIO when it is mapped encrypted.

Prevent a possible hypervisor attack where a RAM page is mapped as
an MMIO page in the nested page-table, so that any guest access to it
will trigger a #VC exception and leak the data on that page to the
hypervisor via the GHCB (like with valid MMIO). On the read side this
attack would allow the HV to inject data into the guest.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-6-joro@8bytes.org
---
 arch/x86/kernel/sev-es.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 4a96726..0bd1a0f 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -374,8 +374,8 @@ fault:
 	return ES_EXCEPTION;
 }
 
-static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned long vaddr, phys_addr_t *paddr)
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
 {
 	unsigned long va = (unsigned long)vaddr;
 	unsigned int level;
@@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 		if (user_mode(ctxt->regs))
 			ctxt->fi.error_code |= X86_PF_USER;
 
-		return false;
+		return ES_EXCEPTION;
 	}
 
+	if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
+		/* Emulated MMIO to/from encrypted memory not supported */
+		return ES_UNSUPPORTED;
+
 	pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
 	pa |= va & ~page_level_mask(level);
 
 	*paddr = pa;
 
-	return true;
+	return ES_OK;
 }
 
 /* Include code shared with pre-decompression boot stage */
@@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 {
 	u64 exit_code, exit_info_1, exit_info_2;
 	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
 	phys_addr_t paddr;
 	void __user *ref;
 
@@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
 
 	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
 
-	if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
-		if (!read)
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
 			ctxt->fi.error_code |= X86_PF_WRITE;
 
-		return ES_EXCEPTION;
+		return res;
 	}
 
 	exit_info_1 = paddr;


* [tip: x86/seves] x86/head/64: Check SEV encryption before switching to kernel page-table
  2020-10-28 16:46 ` [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
  2020-10-28 17:29   ` Tom Lendacky
@ 2020-10-29 19:17   ` tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-10-29 19:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Joerg Roedel, Borislav Petkov, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/seves branch of tip:

Commit-ID:     c9f09539e16e281f92a27760fdfae71e8af036f6
Gitweb:        https://git.kernel.org/tip/c9f09539e16e281f92a27760fdfae71e8af036f6
Author:        Joerg Roedel <jroedel@suse.de>
AuthorDate:    Wed, 28 Oct 2020 17:46:58 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Thu, 29 Oct 2020 18:09:59 +01:00

x86/head/64: Check SEV encryption before switching to kernel page-table

When SEV is enabled, the kernel requests the C-bit position again from
the hypervisor to build its own page-table. Since the hypervisor is an
untrusted source, the C-bit position needs to be verified before the
kernel page-table is used.

Call sev_verify_cbit() before writing the CR3.

 [ bp: Massage. ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-5-joro@8bytes.org
---
 arch/x86/kernel/head_64.S | 16 ++++++++++++++++
 arch/x86/mm/mem_encrypt.c |  1 +
 2 files changed, 17 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 7eb2a1c..3c41773 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -161,6 +161,21 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 
 	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
+
+	/*
+	 * For SEV guests: Verify that the C-bit is correct. A malicious
+	 * hypervisor could lie about the C-bit position to perform a ROP
+	 * attack on the guest by writing to the unencrypted stack and waiting for
+	 * the next RET instruction.
+	 * %rsi carries pointer to realmode data and is callee-clobbered. Save
+	 * and restore it.
+	 */
+	pushq	%rsi
+	movq	%rax, %rdi
+	call	sev_verify_cbit
+	popq	%rsi
+
+	/* Switch to new page-table */
 	movq	%rax, %cr3
 
 	/* Ensure I am executing from virtual addresses */
@@ -279,6 +294,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
 SYM_CODE_END(secondary_startup_64)
 
 #include "verify_cpu.S"
+#include "sev_verify_cbit.S"
 
 #ifdef CONFIG_HOTPLUG_CPU
 /*
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index efbb3de..bc08337 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -39,6 +39,7 @@
  */
 u64 sme_me_mask __section(".data") = 0;
 u64 sev_status __section(".data") = 0;
+u64 sev_check_data __section(".data") = 0;
 EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);


* [tip: x86/seves] x86/boot/compressed/64: Sanity-check CPUID results in the early #VC handler
  2020-10-28 16:46 ` [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
  2020-10-28 17:15   ` Tom Lendacky
@ 2020-10-29 19:17   ` tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-10-29 19:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Joerg Roedel, Borislav Petkov, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/seves branch of tip:

Commit-ID:     ed7b895f3efb5df184722f5a30f8164fcaffceb1
Gitweb:        https://git.kernel.org/tip/ed7b895f3efb5df184722f5a30f8164fcaffceb1
Author:        Joerg Roedel <jroedel@suse.de>
AuthorDate:    Wed, 28 Oct 2020 17:46:56 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Thu, 29 Oct 2020 13:48:49 +01:00

x86/boot/compressed/64: Sanity-check CPUID results in the early #VC handler

The early #VC handler which doesn't have a GHCB can only handle CPUID
exit codes. It is needed by the early boot code to handle #VC exceptions
raised in verify_cpu() and to get the position of the C-bit.

But the CPUID information comes from the hypervisor which is untrusted
and might return results which trick the guest into the no-SEV boot path
with no C-bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handler to make sure the hypervisor
can not pretend that SEV is disabled.

 [ bp: Massage a bit. ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-3-joro@8bytes.org
---
 arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index 5f83cca..7d04b35 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 		goto fail;
 	regs->dx = val >> 32;
 
+	/*
+	 * This is a VC handler and the #VC is only raised when SEV-ES is
+	 * active, which means SEV must be active too. Do sanity checks on the
+	 * CPUID results to make sure the hypervisor does not trick the kernel
+	 * into the no-sev path. This could map sensitive data unencrypted and
+	 * make it accessible to the hypervisor.
+	 *
+	 * In particular, check for:
+	 *	- Hypervisor CPUID bit
+	 *	- Availability of CPUID leaf 0x8000001f
+	 *	- SEV CPUID bit.
+	 *
+	 * The hypervisor might still report the wrong C-bit position, but this
+	 * can't be checked here.
+	 */
+
+	if ((fn == 1 && !(regs->cx & BIT(31))))
+		/* Hypervisor bit */
+		goto fail;
+	else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
+		/* SEV leaf check */
+		goto fail;
+	else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
+		/* SEV bit */
+		goto fail;
+
 	/* Skip over the CPUID two-byte opcode */
 	regs->ip += 2;
 


* [tip: x86/seves] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
  2020-10-28 16:46 ` [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
  2020-10-28 17:25   ` Tom Lendacky
@ 2020-10-29 19:17   ` tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-10-29 19:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Joerg Roedel, Borislav Petkov, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/seves branch of tip:

Commit-ID:     86ce43f7dde81562f58b24b426cef068bd9f7595
Gitweb:        https://git.kernel.org/tip/86ce43f7dde81562f58b24b426cef068bd9f7595
Author:        Joerg Roedel <jroedel@suse.de>
AuthorDate:    Wed, 28 Oct 2020 17:46:57 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Thu, 29 Oct 2020 18:06:52 +01:00

x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. Using a wrong C-bit position could be used to leak
sensitive data from the guest to the hypervisor.

The check function is in a separate file:

  arch/x86/kernel/sev_verify_cbit.S

so that it can be re-used in the running kernel image.

 [ bp: Massage. ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-4-joro@8bytes.org
---
 arch/x86/boot/compressed/ident_map_64.c |  1 +-
 arch/x86/boot/compressed/mem_encrypt.S  |  4 +-
 arch/x86/boot/compressed/misc.h         |  2 +-
 arch/x86/kernel/sev_verify_cbit.S       | 89 ++++++++++++++++++++++++-
 4 files changed, 96 insertions(+)
 create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index a5e5db6..39b2ede 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -164,6 +164,7 @@ void initialize_identity_maps(void *rmode)
 	add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
 
 	/* Load the new page-table. */
+	sev_verify_cbit(top_level_pgt);
 	write_cr3(top_level_pgt);
 }
 
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 3092ae1..aa56179 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
 SYM_FUNC_END(get_sev_encryption_bit)
 
 	.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
 SYM_FUNC_START(set_sev_encryption_mask)
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	push	%rbp
@@ -111,4 +114,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
 	.balign	8
 SYM_DATA(sme_me_mask,		.quad 0)
 SYM_DATA(sev_status,		.quad 0)
+SYM_DATA(sev_check_data,	.quad 0)
 #endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b..d9a631c 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
+unsigned long sev_verify_cbit(unsigned long cr3);
+
 #endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 0000000..ee04941
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *	sev_verify_cbit.S - Code for verification of the C-bit position reported
+ *			    by the Hypervisor when running with SEV enabled.
+ *
+ *	Copyright (c) 2020  Joerg Roedel (jroedel@suse.de)
+ *
+ * sev_verify_cbit() is called before switching to a new long-mode page-table
+ * at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value
+ * to an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify that the memory content is still
+ * the same. After that it switches back to the current page-table and, if
+ * the check succeeded, returns. If the check failed, the code invalidates
+ * the stack pointer and goes into a hlt loop. The stack pointer is
+ * invalidated to make sure no interrupt or exception can get the CPU out of
+ * the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+	/* First check if a C-bit was detected */
+	movq	sme_me_mask(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+	movq	sev_status(%rip), %rsi
+	testq	%rsi, %rsi
+	jz	3f
+
+	/* Save CR4 in %rsi */
+	movq	%cr4, %rsi
+
+	/* Disable Global Pages */
+	movq	%rsi, %rdx
+	andq	$(~X86_CR4_PGE), %rdx
+	movq	%rdx, %cr4
+
+	/*
+	 * Verified that running under SEV - now get a random value using
+	 * RDRAND. This instruction is mandatory when running as an SEV guest.
+	 *
+	 * Don't bail out of the loop if RDRAND returns errors. It is better to
+	 * prevent forward progress than to work with a non-random value here.
+	 */
+1:	rdrand	%rdx
+	jnc	1b
+
+	/* Store value to memory and keep it in %rdx */
+	movq	%rdx, sev_check_data(%rip)
+
+	/* Backup current %cr3 value to restore it later */
+	movq	%cr3, %rcx
+
+	/* Switch to new %cr3 - This might unmap the stack */
+	movq	%rdi, %cr3
+
+	/*
+	 * Compare value in %rdx with memory location. If C-bit is incorrect
+	 * this would read the encrypted data and make the check fail.
+	 */
+	cmpq	%rdx, sev_check_data(%rip)
+
+	/* Restore old %cr3 */
+	movq	%rcx, %cr3
+
+	/* Restore previous CR4 */
+	movq	%rsi, %cr4
+
+	/* Check CMPQ result */
+	je	3f
+
+	/*
+	 * The check failed - block any forward progress to prevent ROP
+	 * attacks: invalidate the stack and go into a hlt loop.
+	 */
+	xorq	%rsp, %rsp
+	subq	$0x1000, %rsp
+2:	hlt
+	jmp 2b
+3:
+#endif
+	/* Return page-table pointer */
+	movq	%rdi, %rax
+	ret
+SYM_FUNC_END(sev_verify_cbit)


* [tip: x86/seves] x86/boot/compressed/64: Introduce sev_status
  2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
  2020-10-28 17:14   ` Tom Lendacky
@ 2020-10-29 19:17   ` tip-bot2 for Joerg Roedel
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-10-29 19:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Joerg Roedel, Borislav Petkov, Tom Lendacky, x86, LKML

The following commit has been merged into the x86/seves branch of tip:

Commit-ID:     3ad84246a4097010f3ae3d6944120c0be00e9e7a
Gitweb:        https://git.kernel.org/tip/3ad84246a4097010f3ae3d6944120c0be00e9e7a
Author:        Joerg Roedel <jroedel@suse.de>
AuthorDate:    Wed, 28 Oct 2020 17:46:55 +01:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Thu, 29 Oct 2020 10:54:36 +01:00

x86/boot/compressed/64: Introduce sev_status

Introduce sev_status and initialize it together with sme_me_mask to
provide an indicator of which SEV features are enabled.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lkml.kernel.org/r/20201028164659.27002-2-joro@8bytes.org
---
 arch/x86/boot/compressed/mem_encrypt.S | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index dd07e7b..3092ae1 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -81,6 +81,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
 
 	bts	%rax, sme_me_mask(%rip)	/* Create the encryption mask */
 
+	/*
+	 * Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
+	 * get_sev_encryption_bit() because this function is 32-bit code and
+	 * shared between 64-bit and 32-bit boot path.
+	 */
+	movl	$MSR_AMD64_SEV, %ecx	/* Read the SEV MSR */
+	rdmsr
+
+	/* Store MSR value in sev_status */
+	shlq	$32, %rdx
+	orq	%rdx, %rax
+	movq	%rax, sev_status(%rip)
+
 .Lno_sev_mask:
 	movq	%rbp, %rsp		/* Restore original stack pointer */
 
@@ -96,5 +109,6 @@ SYM_FUNC_END(set_sev_encryption_mask)
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.balign	8
-SYM_DATA(sme_me_mask, .quad 0)
+SYM_DATA(sme_me_mask,		.quad 0)
+SYM_DATA(sev_status,		.quad 0)
 #endif


end of thread, other threads:[~2020-10-29 19:18 UTC | newest]

Thread overview: 16+ messages
2020-10-28 16:46 [PATCH v4 0/5] x86/sev-es: Mitigate some HV attack vectors Joerg Roedel
2020-10-28 16:46 ` [PATCH v4 1/5] x86/boot/compressed/64: Introduce sev_status Joerg Roedel
2020-10-28 17:14   ` Tom Lendacky
2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
2020-10-28 16:46 ` [PATCH v4 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler Joerg Roedel
2020-10-28 17:15   ` Tom Lendacky
2020-10-29 19:17   ` [tip: x86/seves] x86/boot/compressed/64: Sanity-check CPUID results in the " tip-bot2 for Joerg Roedel
2020-10-28 16:46 ` [PATCH v4 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path Joerg Roedel
2020-10-28 17:25   ` Tom Lendacky
2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
2020-10-28 16:46 ` [PATCH v4 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table Joerg Roedel
2020-10-28 17:29   ` Tom Lendacky
2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
2020-10-28 16:46 ` [PATCH v4 5/5] x86/sev-es: Do not support MMIO to/from encrypted memory Joerg Roedel
2020-10-28 17:31   ` Tom Lendacky
2020-10-29 19:17   ` [tip: x86/seves] " tip-bot2 for Joerg Roedel
