virtualization.lists.linux-foundation.org archive mirror
* [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests
@ 2021-09-13 15:55 Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime Joerg Roedel
                   ` (12 more replies)
  0 siblings, 13 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Hi,

here are the changes to enable kexec/kdump in SEV-ES guests. The
biggest problem for supporting kexec/kdump under SEV-ES is to find a
way to hand over the non-boot CPUs (APs) from one kernel to another.

Without SEV-ES the first kernel parks the CPUs in a HLT loop until
they get reset by the kexec'ed kernel via an INIT-SIPI-SIPI sequence.
For virtual machines the CPU reset is emulated by the hypervisor,
which sets the vCPU registers back to reset state.

This does not work under SEV-ES, because the hypervisor has no access
to the vCPU registers and can't make modifications to them. So an
SEV-ES guest needs to reset the vCPU itself and park it using the
AP-reset-hold protocol. Upon wakeup the guest needs to jump back to
real-mode and to the reset vector configured in the AP Jump Table.

The code to do this is the main part of this patch-set. It works by
placing code on the AP Jump Table page itself to park the vCPU and to
jump to the reset vector upon wakeup. The code on the AP Jump Table
runs in 16-bit protected mode with the segment base set to the
beginning of the page. The AP Jump Table is usually not within the
first 1MB of memory, so the code can't run in real-mode.

The AP Jump Table is the best place to put the parking code, because
the memory is owned by the firmware, which only reads it, while it is
writeable by the OS. Only the first 4 bytes are used for the reset
vector, leaving the rest of the page for code/data/stack to park a
vCPU. The code can't be in kernel memory, because by the time the vCPU
wakes up that memory will be owned by the new kernel, which might
already have overwritten it.
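
For illustration, the layout of the 4k AP Jump Table page as used by
this series looks roughly like below. This is only a rough sketch of
what the patches set up; see patch 7 for the real header and
entry-point definitions, and take the struct name and exact offsets in
the comments as illustrative:

	/* Sketch of the AP Jump Table page layout (illustrative only) */
	struct ap_jump_table_page {
		unsigned short reset_ip;	/* 0x00: reset vector IP, set by the OS */
		unsigned short reset_cs;	/* 0x02: reset vector CS, set by the OS */
		unsigned short gdt_offset;	/* 0x04: offset of the parking code's GDT */
		/* 0x10: 16-bit parking code entry point, followed by its GDT */
		/* up to 0x1000: the rest of the page doubles as the parking stack */
	};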

The other patches add initial GHCB Version 2 protocol support, because
kexec/kdump need the MSR-based (without a GHCB) AP-reset-hold VMGEXIT,
which is a GHCB protocol version 2 feature.
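
For reference, the MSR-based AP-reset-hold exchange boils down to the
sketch below. The MSR number and the request/response values match the
parking code added in patch 7; the macro and helper names are only
illustrative and not the ones used in the kernel sources:

	#define GHCB_MSR		0xc0010130	/* SEV-ES GHCB MSR */
	#define AP_RESET_HOLD_REQ	0x006		/* request code, bits [11:0]  */
	#define AP_RESET_HOLD_RESP	0x007		/* response code, bits [11:0] */

	static void ap_reset_hold(void)
	{
		unsigned int lo, hi;

		do {
			/* Request AP-reset-hold via the GHCB MSR protocol */
			asm volatile("wrmsr" : : "c" (GHCB_MSR),
				     "a" (AP_RESET_HOLD_REQ), "d" (0));
			asm volatile("rep; vmmcall");	/* VMGEXIT */

			/* Read the response from the GHCB MSR */
			asm volatile("rdmsr" : "=a" (lo), "=d" (hi)
				     : "c" (GHCB_MSR));

			/*
			 * Loop until the low 12 bits signal an AP-reset-hold
			 * response and the remaining bits are non-zero,
			 * i.e. the AP was actually woken up.
			 */
		} while ((lo & 0xfff) != AP_RESET_HOLD_RESP ||
			 !((lo >> 12) | hi));
	}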

The kexec'ed kernel is also entered via the decompressor and needs
MMIO support there, so this patch-set adds MMIO #VC support to the
decompressor, along with support for handling CLFLUSH instructions.

Finally there is also code to disable kexec/kdump support at runtime
when the environment does not support it (e.g. no GHCB protocol
version 2 support or AP Jump Table over 4GB).

The diffstat looks big, but most of it is moving code for MMIO #VC
support around to make it available to the decompressor.

These patches need a fix I sent out earlier today to work reliably:

	https://lore.kernel.org/lkml/20210913095236.24937-1-joro@8bytes.org/

Please review.

Thanks,

	Joerg

Changes v1->v2:

	- Rebased to v5.15-rc1

	- Fixed occasional triple-faults when parking APs, see
	  separate fix

Joerg Roedel (12):
  kexec: Allow architecture code to opt-out at runtime
  x86/kexec/64: Forbid kexec when running as an SEV-ES guest
  x86/sev: Save and print negotiated GHCB protocol version
  x86/sev: Do not hardcode GHCB protocol version
  x86/sev: Use GHCB protocol version 2 if supported
  x86/sev: Cache AP Jump Table Address
  x86/sev: Setup code to park APs in the AP Jump Table
  x86/sev: Park APs on AP Jump Table with GHCB protocol version 2
  x86/sev: Use AP Jump Table blob to stop CPU
  x86/sev: Add MMIO handling support to boot/compressed/ code
  x86/sev: Handle CLFLUSH MMIO events
  x86/sev: Support kexec under SEV-ES with AP Jump Table blob

 arch/x86/boot/compressed/sev.c          |  56 +-
 arch/x86/include/asm/realmode.h         |   5 +
 arch/x86/include/asm/sev-ap-jumptable.h |  25 +
 arch/x86/include/asm/sev.h              |  13 +-
 arch/x86/kernel/machine_kexec_64.c      |  12 +
 arch/x86/kernel/process.c               |   8 +
 arch/x86/kernel/sev-shared.c            | 333 +++++++++-
 arch/x86/kernel/sev.c                   | 494 ++++++---------
 arch/x86/lib/insn-eval-shared.c         | 805 ++++++++++++++++++++++++
 arch/x86/lib/insn-eval.c                | 802 +----------------------
 arch/x86/realmode/Makefile              |   9 +-
 arch/x86/realmode/rm/Makefile           |  11 +-
 arch/x86/realmode/rm/header.S           |   3 +
 arch/x86/realmode/rm/sev_ap_park.S      |  89 +++
 arch/x86/realmode/rmpiggy.S             |   6 +
 arch/x86/realmode/sev/Makefile          |  41 ++
 arch/x86/realmode/sev/ap_jump_table.S   | 130 ++++
 arch/x86/realmode/sev/ap_jump_table.lds |  24 +
 include/linux/kexec.h                   |   1 +
 kernel/kexec.c                          |  14 +
 kernel/kexec_file.c                     |   9 +
 21 files changed, 1764 insertions(+), 1126 deletions(-)
 create mode 100644 arch/x86/include/asm/sev-ap-jumptable.h
 create mode 100644 arch/x86/lib/insn-eval-shared.c
 create mode 100644 arch/x86/realmode/rm/sev_ap_park.S
 create mode 100644 arch/x86/realmode/sev/Makefile
 create mode 100644 arch/x86/realmode/sev/ap_jump_table.S
 create mode 100644 arch/x86/realmode/sev/ap_jump_table.lds


base-commit: 6880fa6c56601bb8ed59df6c30fd390cc5f6dd8f
-- 
2.33.0

* [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-01 16:10   ` Borislav Petkov
  2021-09-13 15:55 ` [PATCH v2 02/12] x86/kexec/64: Forbid kexec when running as an SEV-ES guest Joerg Roedel
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel, stable,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Allow architecture code to opt out of kexec support at runtime, in case
the kernel is running in an environment where kexec is not properly
supported yet.

This will be used on x86 when the kernel is running as an SEV-ES
guest. SEV-ES guests need special handling for kexec to hand over all
CPUs to the new kernel. This requires special hypervisor support and
handling code in the guest which is not yet implemented.

Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 include/linux/kexec.h |  1 +
 kernel/kexec.c        | 14 ++++++++++++++
 kernel/kexec_file.c   |  9 +++++++++
 3 files changed, 24 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 0c994ae37729..85c30dcd0bdc 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -201,6 +201,7 @@ int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
 				 unsigned long buf_len);
 #endif
 int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
+bool arch_kexec_supported(void);
 
 extern int kexec_add_buffer(struct kexec_buf *kbuf);
 int kexec_locate_mem_hole(struct kexec_buf *kbuf);
diff --git a/kernel/kexec.c b/kernel/kexec.c
index b5e40f069768..275cda429380 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -190,11 +190,25 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
  * that to happen you need to do that yourself.
  */
 
+bool __weak arch_kexec_supported(void)
+{
+	return true;
+}
+
 static inline int kexec_load_check(unsigned long nr_segments,
 				   unsigned long flags)
 {
 	int result;
 
+	/*
+	 * The architecture may support kexec in general, but the kernel could
+	 * run in an environment where it is not (yet) possible to execute a new
+	 * kernel. Allow the architecture code to opt-out of kexec support when
+	 * it is running in such an environment.
+	 */
+	if (!arch_kexec_supported())
+		return -ENOSYS;
+
 	/* We only trust the superuser with rebooting the system. */
 	if (!capable(CAP_SYS_BOOT) || kexec_load_disabled)
 		return -EPERM;
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 33400ff051a8..96d08a512e9c 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -358,6 +358,15 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
 	int ret = 0, i;
 	struct kimage **dest_image, *image;
 
+	/*
+	 * The architecture may support kexec in general, but the kernel could
+	 * run in an environment where it is not (yet) possible to execute a new
+	 * kernel. Allow the architecture code to opt-out of kexec support when
+	 * it is running in such an environment.
+	 */
+	if (!arch_kexec_supported())
+		return -ENOSYS;
+
 	/* We only trust the superuser with rebooting the system. */
 	if (!capable(CAP_SYS_BOOT) || kexec_load_disabled)
 		return -EPERM;
-- 
2.33.0

* [PATCH v2 02/12] x86/kexec/64: Forbid kexec when running as an SEV-ES guest
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version Joerg Roedel
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel, stable,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

For now, kexec is not supported when running as an SEV-ES guest.
Supporting it requires additional hypervisor support and special code
to hand over the CPUs to the new kernel in a safe way.

Until this is implemented, do not support kexec in SEV-ES guests.

Cc: stable@vger.kernel.org # v5.10+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/machine_kexec_64.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..a8e16a411b40 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -591,3 +591,11 @@ void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
 	 */
 	set_memory_encrypted((unsigned long)vaddr, pages);
 }
+
+/*
+ * Kexec is not supported in SEV-ES guests yet
+ */
+bool arch_kexec_supported(void)
+{
+	return !sev_es_active();
+}
-- 
2.33.0

* [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 02/12] x86/kexec/64: Forbid kexec when running as an SEV-ES guest Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-03 14:27   ` Borislav Petkov
  2021-09-13 15:55 ` [PATCH v2 04/12] x86/sev: Do not hardcode " Joerg Roedel
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Save the results of the GHCB protocol negotiation in a data structure
and print the supported and negotiated versions to the kernel log.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/sev.c |  2 +-
 arch/x86/kernel/sev-shared.c   | 22 +++++++++++++++++++++-
 arch/x86/kernel/sev.c          | 13 ++++++++++++-
 3 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 670e998fe930..1a2e49730f8b 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -121,7 +121,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 
 static bool early_setup_sev_es(void)
 {
-	if (!sev_es_negotiate_protocol())
+	if (!sev_es_negotiate_protocol(NULL))
 		sev_es_terminate(GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
 
 	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 9f90f460a28c..73eeb5897d16 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -14,6 +14,20 @@
 #define has_cpuflag(f)	boot_cpu_has(f)
 #endif
 
+/*
+ * struct sev_ghcb_protocol_info - Used to return GHCB protocol
+ *				   negotiation details.
+ *
+ * @hv_proto_min:	Minimum GHCB protocol version supported by Hypervisor
+ * @hv_proto_max:	Maximum GHCB protocol version supported by Hypervisor
+ * @vm_proto:		Protocol version the VM (this kernel) will use
+ */
+struct sev_ghcb_protocol_info {
+	unsigned int hv_proto_min;
+	unsigned int hv_proto_max;
+	unsigned int vm_proto;
+};
+
 static bool __init sev_es_check_cpu_features(void)
 {
 	if (!has_cpuflag(X86_FEATURE_RDRAND)) {
@@ -42,7 +56,7 @@ static void __noreturn sev_es_terminate(unsigned int reason)
 		asm volatile("hlt\n" : : : "memory");
 }
 
-static bool sev_es_negotiate_protocol(void)
+static bool sev_es_negotiate_protocol(struct sev_ghcb_protocol_info *info)
 {
 	u64 val;
 
@@ -58,6 +72,12 @@ static bool sev_es_negotiate_protocol(void)
 	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR)
 		return false;
 
+	if (info) {
+		info->hv_proto_min = GHCB_MSR_PROTO_MIN(val);
+		info->hv_proto_max = GHCB_MSR_PROTO_MAX(val);
+		info->vm_proto	   = GHCB_PROTO_OUR;
+	}
+
 	return true;
 }
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..8084bfd7cce1 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -495,6 +495,9 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
 /* Include code shared with pre-decompression boot stage */
 #include "sev-shared.c"
 
+/* Negotiated GHCB protocol version */
+static struct sev_ghcb_protocol_info ghcb_protocol_info __ro_after_init;
+
 static noinstr void __sev_put_ghcb(struct ghcb_state *state)
 {
 	struct sev_es_runtime_data *data;
@@ -665,7 +668,7 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 static bool __init sev_es_setup_ghcb(void)
 {
 	/* First make sure the hypervisor talks a supported protocol. */
-	if (!sev_es_negotiate_protocol())
+	if (!sev_es_negotiate_protocol(&ghcb_protocol_info))
 		return false;
 
 	/*
@@ -794,6 +797,14 @@ void __init sev_es_init_vc_handling(void)
 
 	/* Secondary CPUs use the runtime #VC handler */
 	initial_vc_handler = (unsigned long)kernel_exc_vmm_communication;
+
+	/*
+	 * Print information about supported and negotiated GHCB protocol
+	 * versions.
+	 */
+	pr_info("Hypervisor GHCB protocol version support: min=%u max=%u\n",
+		ghcb_protocol_info.hv_proto_min, ghcb_protocol_info.hv_proto_max);
+	pr_info("Using GHCB protocol version %u\n", ghcb_protocol_info.vm_proto);
 }
 
 static void __init vc_early_forward_exception(struct es_em_ctxt *ctxt)
-- 
2.33.0

* [PATCH v2 04/12] x86/sev: Do not hardcode GHCB protocol version
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (2 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-09-13 15:55 ` [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported Joerg Roedel
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Introduce sev_get_ghcb_proto_ver(), which returns the negotiated GHCB
protocol version, and use it to set the version field in the GHCB.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/sev.c | 5 +++++
 arch/x86/kernel/sev-shared.c   | 5 ++++-
 arch/x86/kernel/sev.c          | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 1a2e49730f8b..101e08c67296 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -119,6 +119,11 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 /* Include code for early handlers */
 #include "../../kernel/sev-shared.c"
 
+static u64 sev_get_ghcb_proto_ver(void)
+{
+	return GHCB_PROTOCOL_MAX;
+}
+
 static bool early_setup_sev_es(void)
 {
 	if (!sev_es_negotiate_protocol(NULL))
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 73eeb5897d16..36eaac2773ed 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -28,6 +28,9 @@ struct sev_ghcb_protocol_info {
 	unsigned int vm_proto;
 };
 
+/* Returns the negotiated GHCB Protocol version */
+static u64 sev_get_ghcb_proto_ver(void);
+
 static bool __init sev_es_check_cpu_features(void)
 {
 	if (!has_cpuflag(X86_FEATURE_RDRAND)) {
@@ -122,7 +125,7 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
 	enum es_result ret;
 
 	/* Fill in protocol and format specifiers */
-	ghcb->protocol_version = GHCB_PROTOCOL_MAX;
+	ghcb->protocol_version = sev_get_ghcb_proto_ver();
 	ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
 
 	ghcb_set_sw_exit_code(ghcb, exit_code);
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 8084bfd7cce1..5d3422e8b25e 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -498,6 +498,11 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
 /* Negotiated GHCB protocol version */
 static struct sev_ghcb_protocol_info ghcb_protocol_info __ro_after_init;
 
+static u64 sev_get_ghcb_proto_ver(void)
+{
+	return ghcb_protocol_info.vm_proto;
+}
+
 static noinstr void __sev_put_ghcb(struct ghcb_state *state)
 {
 	struct sev_es_runtime_data *data;
-- 
2.33.0

* [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (3 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 04/12] x86/sev: Do not hardcode " Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-03 16:05   ` Borislav Petkov
  2021-09-13 15:55 ` [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address Joerg Roedel
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Check whether the hypervisor supports GHCB version 2 and use it if
available.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/boot/compressed/sev.c | 10 ++++++++--
 arch/x86/include/asm/sev.h     |  4 ++--
 arch/x86/kernel/sev-shared.c   | 17 ++++++++++++++---
 3 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 101e08c67296..7f8416f76be7 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -119,16 +119,22 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 /* Include code for early handlers */
 #include "../../kernel/sev-shared.c"
 
+static unsigned int ghcb_protocol;
+
 static u64 sev_get_ghcb_proto_ver(void)
 {
-	return GHCB_PROTOCOL_MAX;
+	return ghcb_protocol;
 }
 
 static bool early_setup_sev_es(void)
 {
-	if (!sev_es_negotiate_protocol(NULL))
+	struct sev_ghcb_protocol_info info;
+
+	if (!sev_es_negotiate_protocol(&info))
 		sev_es_terminate(GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
 
+	ghcb_protocol = info.vm_proto;
+
 	if (set_page_decrypted((unsigned long)&boot_ghcb_page))
 		return false;
 
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index fa5cd05d3b5b..134a7c9d91b6 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -12,8 +12,8 @@
 #include <asm/insn.h>
 #include <asm/sev-common.h>
 
-#define GHCB_PROTO_OUR		0x0001UL
-#define GHCB_PROTOCOL_MAX	1ULL
+#define GHCB_PROTOCOL_MIN	1ULL
+#define GHCB_PROTOCOL_MAX	2ULL
 #define GHCB_DEFAULT_USAGE	0ULL
 
 #define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 36eaac2773ed..40a1ca81bdb8 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -61,6 +61,7 @@ static void __noreturn sev_es_terminate(unsigned int reason)
 
 static bool sev_es_negotiate_protocol(struct sev_ghcb_protocol_info *info)
 {
+	unsigned int protocol;
 	u64 val;
 
 	/* Do the GHCB protocol version negotiation */
@@ -71,14 +72,24 @@ static bool sev_es_negotiate_protocol(struct sev_ghcb_protocol_info *info)
 	if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
 		return false;
 
-	if (GHCB_MSR_PROTO_MAX(val) < GHCB_PROTO_OUR ||
-	    GHCB_MSR_PROTO_MIN(val) > GHCB_PROTO_OUR)
+	/* Sanity check untrusted input */
+	if (GHCB_MSR_PROTO_MIN(val) > GHCB_MSR_PROTO_MAX(val))
+		return false;
+
+	/* Use maximum supported protocol version */
+	protocol = min_t(unsigned int, GHCB_MSR_PROTO_MAX(val), GHCB_PROTOCOL_MAX);
+
+	/*
+	 * Hypervisor does not support any protocol version required for this
+	 * kernel.
+	 */
+	if (protocol < GHCB_MSR_PROTO_MIN(val))
 		return false;
 
 	if (info) {
 		info->hv_proto_min = GHCB_MSR_PROTO_MIN(val);
 		info->hv_proto_max = GHCB_MSR_PROTO_MAX(val);
-		info->vm_proto	   = GHCB_PROTO_OUR;
+		info->vm_proto	   = protocol;
 	}
 
 	return true;
-- 
2.33.0

* [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (4 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-08 18:14   ` Borislav Petkov
  2021-09-13 15:55 ` [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table Joerg Roedel
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Store the physical address of the AP Jump Table in kernel memory so
that it does not need to be fetched from the Hypervisor again.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 5d3422e8b25e..eedba56b6bac 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -42,6 +42,9 @@ static struct ghcb boot_ghcb_page __bss_decrypted __aligned(PAGE_SIZE);
  */
 static struct ghcb __initdata *boot_ghcb;
 
+/* Cached AP Jump Table Address */
+static phys_addr_t sev_es_jump_table_pa;
+
 /* #VC handler runtime per-CPU data */
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
@@ -546,12 +549,14 @@ void noinstr __sev_es_nmi_complete(void)
 	__sev_put_ghcb(&state);
 }
 
-static u64 get_jump_table_addr(void)
+static phys_addr_t get_jump_table_addr(void)
 {
 	struct ghcb_state state;
 	unsigned long flags;
 	struct ghcb *ghcb;
-	u64 ret = 0;
+
+	if (sev_es_jump_table_pa)
+		return sev_es_jump_table_pa;
 
 	local_irq_save(flags);
 
@@ -567,39 +572,36 @@ static u64 get_jump_table_addr(void)
 
 	if (ghcb_sw_exit_info_1_is_valid(ghcb) &&
 	    ghcb_sw_exit_info_2_is_valid(ghcb))
-		ret = ghcb->save.sw_exit_info_2;
+		sev_es_jump_table_pa = (phys_addr_t)ghcb->save.sw_exit_info_2;
 
 	__sev_put_ghcb(&state);
 
 	local_irq_restore(flags);
 
-	return ret;
+	return sev_es_jump_table_pa;
 }
 
 int sev_es_setup_ap_jump_table(struct real_mode_header *rmh)
 {
 	u16 startup_cs, startup_ip;
-	phys_addr_t jump_table_pa;
-	u64 jump_table_addr;
 	u16 __iomem *jump_table;
+	phys_addr_t pa;
 
-	jump_table_addr = get_jump_table_addr();
+	pa = get_jump_table_addr();
 
 	/* On UP guests there is no jump table so this is not a failure */
-	if (!jump_table_addr)
+	if (!pa)
 		return 0;
 
 	/* Check if AP Jump Table is page-aligned */
-	if (jump_table_addr & ~PAGE_MASK)
+	if (pa & ~PAGE_MASK)
 		return -EINVAL;
 
-	jump_table_pa = jump_table_addr & PAGE_MASK;
-
 	startup_cs = (u16)(rmh->trampoline_start >> 4);
 	startup_ip = (u16)(rmh->sev_es_trampoline_start -
 			   rmh->trampoline_start);
 
-	jump_table = ioremap_encrypted(jump_table_pa, PAGE_SIZE);
+	jump_table = ioremap_encrypted(pa, PAGE_SIZE);
 	if (!jump_table)
 		return -EIO;
 
-- 
2.33.0

* [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (5 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-10 16:37   ` Borislav Petkov
  2021-09-13 15:55 ` [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2 Joerg Roedel
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

The AP Jump Table under SEV-ES contains the reset vector where non-boot
CPUs start executing when coming out of reset. This means that a CPU
coming out of the AP-reset-hold VMGEXIT also needs to start executing at
the reset vector stored in the AP Jump Table.

The problem is to find a safe place to put the real-mode code which
executes the VMGEXIT and jumps to the reset vector. The code cannot be
in kernel memory, because after kexec that memory is owned by the new
kernel and the code might have been overwritten.

Fortunately the AP Jump Table itself is a safe place, because the
memory is not owned by the OS and will not be overwritten by a new
kernel started through kexec. The table is 4k in size and only the
first 4 bytes are used for the reset vector. This leaves enough space
for some 16-bit code to do the job and even a small stack.

Install 16-bit code into the AP Jump Table under SEV-ES after the APs
have been brought up. The code will do an AP-reset-hold VMGEXIT and jump
to the reset vector after being woken up.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/asm/realmode.h         |   2 +
 arch/x86/include/asm/sev-ap-jumptable.h |  25 +++++
 arch/x86/kernel/sev.c                   | 105 +++++++++++++++++++
 arch/x86/realmode/Makefile              |   9 +-
 arch/x86/realmode/rmpiggy.S             |   6 ++
 arch/x86/realmode/sev/Makefile          |  41 ++++++++
 arch/x86/realmode/sev/ap_jump_table.S   | 130 ++++++++++++++++++++++++
 arch/x86/realmode/sev/ap_jump_table.lds |  24 +++++
 8 files changed, 341 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/asm/sev-ap-jumptable.h
 create mode 100644 arch/x86/realmode/sev/Makefile
 create mode 100644 arch/x86/realmode/sev/ap_jump_table.S
 create mode 100644 arch/x86/realmode/sev/ap_jump_table.lds

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 5db5d083c873..29590a4ddf24 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -62,6 +62,8 @@ extern unsigned long initial_gs;
 extern unsigned long initial_stack;
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 extern unsigned long initial_vc_handler;
+extern unsigned char rm_ap_jump_table_blob[];
+extern unsigned char rm_ap_jump_table_blob_end[];
 #endif
 
 extern unsigned char real_mode_blob[];
diff --git a/arch/x86/include/asm/sev-ap-jumptable.h b/arch/x86/include/asm/sev-ap-jumptable.h
new file mode 100644
index 000000000000..1c8b2ce779e2
--- /dev/null
+++ b/arch/x86/include/asm/sev-ap-jumptable.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * AMD Encrypted Register State Support
+ *
+ * Author: Joerg Roedel <jroedel@suse.de>
+ */
+#ifndef __ASM_SEV_AP_JUMPTABLE_H
+#define __ASM_SEV_AP_JUMPTABLE_H
+
+#define	SEV_APJT_CS16	0x8
+#define	SEV_APJT_DS16	0x10
+
+#define SEV_APJT_ENTRY	0x10
+
+#ifndef __ASSEMBLY__
+
+struct sev_ap_jump_table_header {
+	u16	reset_ip;
+	u16	reset_cs;
+	u16	gdt_offset;
+};
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_SEV_AP_JUMPTABLE_H */
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index eedba56b6bac..a98eab926682 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -19,6 +19,7 @@
 #include <linux/kernel.h>
 #include <linux/mm.h>
 
+#include <asm/sev-ap-jumptable.h>
 #include <asm/cpu_entry_area.h>
 #include <asm/stacktrace.h>
 #include <asm/sev.h>
@@ -45,6 +46,9 @@ static struct ghcb __initdata *boot_ghcb;
 /* Cached AP Jump Table Address */
 static phys_addr_t sev_es_jump_table_pa;
 
+/* Whether the AP Jump Table blob was successfully installed */
+static bool sev_ap_jumptable_blob_installed __ro_after_init;
+
 /* #VC handler runtime per-CPU data */
 struct sev_es_runtime_data {
 	struct ghcb ghcb_page;
@@ -749,6 +753,107 @@ static void __init sev_es_setup_play_dead(void)
 static inline void sev_es_setup_play_dead(void) { }
 #endif
 
+/*
+ * This function makes the necessary runtime changes to the AP Jump Table blob.
+ * For now this only sets up the GDT used while the code executes. The GDT needs
+ * to contain 16-bit code and data segments with a base that points to the AP
+ * Jump Table page.
+ */
+void __init sev_es_setup_ap_jump_table_data(void *base, u32 pa)
+{
+	struct sev_ap_jump_table_header *header;
+	struct desc_ptr *gdt_descr;
+	u64 *ap_jumptable_gdt;
+
+	header = base;
+
+	/*
+	 * Setup 16-bit protected mode code and data segments for AP Jumptable.
+	 * Set the segment limits to 0xffff to already be compatible with
+	 * real-mode.
+	 */
+	ap_jumptable_gdt = (u64 *)(base + header->gdt_offset);
+	ap_jumptable_gdt[SEV_APJT_CS16 / 8] = GDT_ENTRY(0x9b, pa, 0xffff);
+	ap_jumptable_gdt[SEV_APJT_DS16 / 8] = GDT_ENTRY(0x93, pa, 0xffff);
+
+	/* Write correct GDT base address into GDT descriptor */
+	gdt_descr = (struct desc_ptr *)(base + header->gdt_offset);
+	gdt_descr->address += pa;
+}
+
+/*
+ * This function sets up the AP Jump Table blob, which contains code that runs
+ * in 16-bit protected mode to park an AP. After the AP is woken up again, the
+ * code will disable protected mode and jump to the reset vector, which is also
+ * stored in the AP Jump Table.
+ *
+ * The Jump Table is a safe place to park an AP, because it is owned by the
+ * BIOS and writable by the OS. Putting the code in kernel memory would break
+ * with kexec, because by the time the APs wake up the memory is owned by
+ * the new kernel, and possibly already overwritten.
+ *
+ * Kexec is also the reason this function is called as an init-call after SMP
+ * bringup. Only after all CPUs are up is there a guarantee that no AP is still
+ * parked in the AP jump-table code.
+ */
+static int __init sev_es_setup_ap_jump_table_blob(void)
+{
+	size_t blob_size = rm_ap_jump_table_blob_end - rm_ap_jump_table_blob;
+	u16 startup_cs, startup_ip;
+	u16 __iomem *jump_table;
+	phys_addr_t pa;
+
+	if (!sev_es_active())
+		return 0;
+
+	if (sev_get_ghcb_proto_ver() < 2) {
+		pr_info("AP Jump Table parking requires at least GHCB protocol version 2\n");
+		return 0;
+	}
+
+	pa = get_jump_table_addr();
+
+	/* Overflow and size checks for untrusted Jump Table address */
+	if (pa + PAGE_SIZE < pa || pa + PAGE_SIZE > SZ_4G) {
+		pr_info("AP Jump Table is above 4GB - not enabling AP Jump Table parking\n");
+		return 0;
+	}
+
+	/* On UP guests there is no jump table so this is not a failure */
+	if (!pa)
+		return 0;
+
+	jump_table = ioremap_encrypted(pa, PAGE_SIZE);
+	if (WARN_ON(!jump_table))
+		return -EINVAL;
+
+	/*
+	 * Save the reset vector to restore it later because the blob will
+	 * overwrite it.
+	 */
+	startup_ip = jump_table[0];
+	startup_cs = jump_table[1];
+
+	/* Install AP Jump Table Blob with real mode AP parking code */
+	memcpy_toio(jump_table, rm_ap_jump_table_blob, blob_size);
+
+	/* Setup AP Jumptable GDT */
+	sev_es_setup_ap_jump_table_data(jump_table, (u32)pa);
+
+	writew(startup_ip, &jump_table[0]);
+	writew(startup_cs, &jump_table[1]);
+
+	iounmap(jump_table);
+
+	pr_info("AP Jump Table Blob successfully set up\n");
+
+	/* Mark AP Jump Table blob as available */
+	sev_ap_jumptable_blob_installed = true;
+
+	return 0;
+}
+core_initcall(sev_es_setup_ap_jump_table_blob);
+
 static void __init alloc_runtime_data(int cpu)
 {
 	struct sev_es_runtime_data *data;
diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
index a0b491ae2de8..00f3cceb9580 100644
--- a/arch/x86/realmode/Makefile
+++ b/arch/x86/realmode/Makefile
@@ -11,12 +11,19 @@
 KASAN_SANITIZE			:= n
 KCSAN_SANITIZE			:= n
 
+RMPIGGY-y				 = $(obj)/rm/realmode.bin
+RMPIGGY-$(CONFIG_AMD_MEM_ENCRYPT)	+= $(obj)/sev/ap_jump_table.bin
+
 subdir- := rm
+subdir- += sev
 
 obj-y += init.o
 obj-y += rmpiggy.o
 
-$(obj)/rmpiggy.o: $(obj)/rm/realmode.bin
+$(obj)/rmpiggy.o: $(RMPIGGY-y)
 
 $(obj)/rm/realmode.bin: FORCE
 	$(Q)$(MAKE) $(build)=$(obj)/rm $@
+
+$(obj)/sev/ap_jump_table.bin: FORCE
+	$(Q)$(MAKE) $(build)=$(obj)/sev $@
diff --git a/arch/x86/realmode/rmpiggy.S b/arch/x86/realmode/rmpiggy.S
index c8fef76743f6..a659f98617ff 100644
--- a/arch/x86/realmode/rmpiggy.S
+++ b/arch/x86/realmode/rmpiggy.S
@@ -17,3 +17,9 @@ SYM_DATA_END_LABEL(real_mode_blob, SYM_L_GLOBAL, real_mode_blob_end)
 SYM_DATA_START(real_mode_relocs)
 	.incbin	"arch/x86/realmode/rm/realmode.relocs"
 SYM_DATA_END(real_mode_relocs)
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+SYM_DATA_START(rm_ap_jump_table_blob)
+	.incbin "arch/x86/realmode/sev/ap_jump_table.bin"
+SYM_DATA_END_LABEL(rm_ap_jump_table_blob, SYM_L_GLOBAL, rm_ap_jump_table_blob_end)
+#endif
diff --git a/arch/x86/realmode/sev/Makefile b/arch/x86/realmode/sev/Makefile
new file mode 100644
index 000000000000..5a96a518ccb3
--- /dev/null
+++ b/arch/x86/realmode/sev/Makefile
@@ -0,0 +1,41 @@
+#
+# arch/x86/realmode/sev/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License.  See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+
+# Sanitizer runtimes are unavailable and cannot be linked here.
+KASAN_SANITIZE			:= n
+KCSAN_SANITIZE			:= n
+OBJECT_FILES_NON_STANDARD	:= y
+
+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT		:= n
+
+always-y := ap_jump_table.bin
+
+ap_jump_table-y				+= ap_jump_table.o
+
+targets	+= $(ap_jump_table-y)
+
+APJUMPTABLE_OBJS = $(addprefix $(obj)/,$(ap_jump_table-y))
+
+LDFLAGS_ap_jump_table.elf := -m elf_i386 -T
+
+targets += ap_jump_table.elf
+$(obj)/ap_jump_table.elf: $(obj)/ap_jump_table.lds $(APJUMPTABLE_OBJS) FORCE
+	$(call if_changed,ld)
+
+OBJCOPYFLAGS_ap_jump_table.bin := -O binary
+
+targets += ap_jump_table.bin
+$(obj)/ap_jump_table.bin: $(obj)/ap_jump_table.elf FORCE
+	$(call if_changed,objcopy)
+
+# ---------------------------------------------------------------------------
+
+KBUILD_AFLAGS	:= $(REALMODE_CFLAGS) -D__ASSEMBLY__
+GCOV_PROFILE := n
+UBSAN_SANITIZE := n
diff --git a/arch/x86/realmode/sev/ap_jump_table.S b/arch/x86/realmode/sev/ap_jump_table.S
new file mode 100644
index 000000000000..547cb363bb94
--- /dev/null
+++ b/arch/x86/realmode/sev/ap_jump_table.S
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/sev-ap-jumptable.h>
+
+/*
+ * This file contains the source code for the binary blob which gets copied to
+ * the SEV-ES AP Jumptable to park APs while offlining CPUs or booting a new
+ * kernel via KEXEC.
+ *
+ * The AP Jumptable is the only safe place to put this code, as any memory the
+ * kernel allocates will be owned (and possibly overwritten) by the new kernel
+ * once the APs are woken up.
+ *
+ * This code runs in 16-bit protected mode; the CS, DS, and SS segment bases are
+ * set to the beginning of the AP Jumptable page.
+ *
+ * Since the GDT will also be gone when the AP wakes up, this blob contains its
+ * own GDT, which is set up by the AP Jumptable setup code with the correct
+ * offsets.
+ *
+ * Author: Joerg Roedel <jroedel@suse.de>
+ */
+
+	.text
+	.org 0x0
+	.code16
+SYM_DATA_START(ap_jumptable_header)
+	.word	0			/* reset IP */
+	.word	0			/* reset CS */
+	.word	ap_jumptable_gdt	/* GDT Offset   */
+SYM_DATA_END(ap_jumptable_header)
+
+	.org	SEV_APJT_ENTRY
+SYM_CODE_START(ap_park_asm)
+
+	/* Switch to AP Jumptable GDT first */
+	lgdtl	ap_jumptable_gdt
+
+	/* Reload CS */
+	ljmpw	$SEV_APJT_CS16, $1f
+1:
+
+	/* Reload DS and SS */
+	movl	$SEV_APJT_DS16, %ecx
+	movl	%ecx, %ds
+	movl	%ecx, %ss
+
+	/*
+	 * Setup a stack pointing to the end of the AP Jumptable page.
+	 * The stack is needed to reset EFLAGS after wakeup.
+	 */
+	movl	$0x1000, %esp
+
+	/* Execute AP reset hold VMGEXIT */
+2:	xorl	%edx, %edx
+	movl	$0x6, %eax
+	movl	$0xc0010130, %ecx
+	wrmsr
+	rep; vmmcall
+	rdmsr
+	movl	%eax, %ecx
+	andl	$0xfff, %ecx
+	cmpl	$0x7, %ecx
+	jne	2b
+	shrl	$12, %eax
+	jnz	3f
+	testl	%edx, %edx
+	jnz	3f
+	jmp	2b
+3:
+	/*
+	 * Successfully woken up - Patch the correct target into the far jump at
+	 * the end. An indirect far jump does not work here, because at the time
+	 * the jump is executed DS is already loaded with real-mode values.
+	 */
+
+	/* Jump target is at address 0x0 - copy it to the far jump instruction */
+	movl	$0, %ecx
+	movl	(%ecx), %eax
+	movl	%eax, jump_target
+
+	/* Reset EFLAGS */
+	pushl	$2
+	popfl
+
+	/* Setup DS and SS for real-mode */
+	movl	$0x18, %ecx
+	movl	%ecx, %ds
+	movl	%ecx, %ss
+
+	/* Reset remaining registers */
+	movl	$0, %esp
+	movl	$0, %eax
+	movl	$0, %ebx
+	movl	$0, %edx
+
+	/* Reset CR0 to get out of protected mode */
+	movl	$0x60000010, %ecx
+	movl	%ecx, %cr0
+
+	/*
+	 * The below sums up to a far-jump instruction which jumps to the reset
+	 * vector configured in the AP Jumptable and to real-mode. An indirect
+	 * jump would be cleaner, but requires a working DS base/limit. DS is
+ * already loaded with real-mode values, therefore a direct far jump with
+ * the correct target patched in is used.
+	 */
+	.byte	0xea
+SYM_DATA_LOCAL(jump_target, .long 0)
+
+SYM_CODE_END(ap_park_asm)
+	/* Here comes the GDT */
+	.balign	16
+SYM_DATA_START_LOCAL(ap_jumptable_gdt)
+	/* Offset zero used for GDT descriptor */
+	.word	ap_jumptable_gdt_end - ap_jumptable_gdt - 1
+	.long	ap_jumptable_gdt
+	.word	0
+
+	/* 16 bit code segment - setup at boot */
+	.quad 0
+
+	/* 16 bit data segment - setup at boot */
+	.quad 0
+
+	/* Offset 0x18 - real-mode data segment */
+	.long	0xffff0180
+	.long	0x00009300
+SYM_DATA_END_LABEL(ap_jumptable_gdt, SYM_L_LOCAL, ap_jumptable_gdt_end)
diff --git a/arch/x86/realmode/sev/ap_jump_table.lds b/arch/x86/realmode/sev/ap_jump_table.lds
new file mode 100644
index 000000000000..e3a1220f36db
--- /dev/null
+++ b/arch/x86/realmode/sev/ap_jump_table.lds
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * ap_jump_table.lds
+ *
+ * Linker script for the SEV-ES AP Jump Table Code
+ */
+
+OUTPUT_FORMAT("elf32-i386")
+OUTPUT_ARCH(i386)
+ENTRY(ap_park_asm)
+
+SECTIONS
+{
+	. = 0;
+	.text : {
+		*(.text)
+		*(.text.*)
+	}
+
+	/DISCARD/ : {
+		*(.note*)
+		*(.debug*)
+	}
+}
-- 
2.33.0

* [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (6 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table Joerg Roedel
@ 2021-09-13 15:55 ` Joerg Roedel
  2021-11-12 16:33   ` Borislav Petkov
  2021-09-13 15:56 ` [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU Joerg Roedel
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:55 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

GHCB protocol version 2 adds the MSR-based AP-reset-hold VMGEXIT which
does not need a GHCB. Use that to park APs in 16-bit protected mode on
the AP Jump Table.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/asm/realmode.h    |  3 +
 arch/x86/kernel/sev.c              | 48 ++++++++++++++--
 arch/x86/realmode/rm/Makefile      | 11 ++--
 arch/x86/realmode/rm/header.S      |  3 +
 arch/x86/realmode/rm/sev_ap_park.S | 89 ++++++++++++++++++++++++++++++
 5 files changed, 144 insertions(+), 10 deletions(-)
 create mode 100644 arch/x86/realmode/rm/sev_ap_park.S

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 29590a4ddf24..668de0a8b1ae 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -23,6 +23,9 @@ struct real_mode_header {
 	u32	trampoline_header;
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	u32	sev_es_trampoline_start;
+	u32	sev_real_ap_park_asm;
+	u32	sev_real_ap_park_seg;
+	u32	sev_ap_park_gdt;
 #endif
 #ifdef CONFIG_X86_64
 	u32	trampoline_pgd;
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a98eab926682..20b439986d86 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -27,6 +27,7 @@
 #include <asm/fpu/internal.h>
 #include <asm/processor.h>
 #include <asm/realmode.h>
+#include <asm/tlbflush.h>
 #include <asm/traps.h>
 #include <asm/svm.h>
 #include <asm/smp.h>
@@ -695,6 +696,35 @@ static bool __init sev_es_setup_ghcb(void)
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
+void __noreturn sev_jumptable_ap_park(void)
+{
+	local_irq_disable();
+
+	write_cr3(real_mode_header->trampoline_pgd);
+
+	/* Exiting long mode will fail if CR4.PCIDE is set. */
+	if (boot_cpu_has(X86_FEATURE_PCID))
+		cr4_clear_bits(X86_CR4_PCIDE);
+
+	asm volatile("xorq	%%r15, %%r15\n"
+		     "xorq	%%r14, %%r14\n"
+		     "xorq	%%r13, %%r13\n"
+		     "xorq	%%r12, %%r12\n"
+		     "xorq	%%r11, %%r11\n"
+		     "xorq	%%r10, %%r10\n"
+		     "xorq	%%r9,  %%r9\n"
+		     "xorq	%%r8,  %%r8\n"
+		     "xorq	%%rsi, %%rsi\n"
+		     "xorq	%%rdi, %%rdi\n"
+		     "xorq	%%rsp, %%rsp\n"
+		     "xorq	%%rbp, %%rbp\n"
+		     "ljmpl	*%0" : :
+		     "m" (real_mode_header->sev_real_ap_park_asm),
+		     "b" (sev_es_jump_table_pa >> 4));
+	unreachable();
+}
+STACK_FRAME_NON_STANDARD(sev_jumptable_ap_park);
+
 static void sev_es_ap_hlt_loop(void)
 {
 	struct ghcb_state state;
@@ -731,8 +761,10 @@ static void sev_es_play_dead(void)
 	play_dead_common();
 
 	/* IRQs now disabled */
-
-	sev_es_ap_hlt_loop();
+	if (sev_ap_jumptable_blob_installed)
+		sev_jumptable_ap_park();
+	else
+		sev_es_ap_hlt_loop();
 
 	/*
 	 * If we get here, the VCPU was woken up again. Jump to CPU
@@ -762,8 +794,9 @@ static inline void sev_es_setup_play_dead(void) { }
 void __init sev_es_setup_ap_jump_table_data(void *base, u32 pa)
 {
 	struct sev_ap_jump_table_header *header;
+	u64 *ap_jumptable_gdt, *sev_ap_park_gdt;
 	struct desc_ptr *gdt_descr;
-	u64 *ap_jumptable_gdt;
+	int idx;
 
 	header = base;
 
@@ -773,8 +806,13 @@ void __init sev_es_setup_ap_jump_table_data(void *base, u32 pa)
 	 * real-mode.
 	 */
 	ap_jumptable_gdt = (u64 *)(base + header->gdt_offset);
-	ap_jumptable_gdt[SEV_APJT_CS16 / 8] = GDT_ENTRY(0x9b, pa, 0xffff);
-	ap_jumptable_gdt[SEV_APJT_DS16 / 8] = GDT_ENTRY(0x93, pa, 0xffff);
+	sev_ap_park_gdt  = __va(real_mode_header->sev_ap_park_gdt);
+
+	idx = SEV_APJT_CS16 / 8;
+	ap_jumptable_gdt[idx] = sev_ap_park_gdt[idx] = GDT_ENTRY(0x9b, pa, 0xffff);
+
+	idx = SEV_APJT_DS16 / 8;
+	ap_jumptable_gdt[idx] = sev_ap_park_gdt[idx] = GDT_ENTRY(0x93, pa, 0xffff);
 
 	/* Write correct GDT base address into GDT descriptor */
 	gdt_descr = (struct desc_ptr *)(base + header->gdt_offset);
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 83f1b6a56449..a7f84d43a0a3 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -27,11 +27,12 @@ wakeup-objs	+= video-vga.o
 wakeup-objs	+= video-vesa.o
 wakeup-objs	+= video-bios.o
 
-realmode-y			+= header.o
-realmode-y			+= trampoline_$(BITS).o
-realmode-y			+= stack.o
-realmode-y			+= reboot.o
-realmode-$(CONFIG_ACPI_SLEEP)	+= $(wakeup-objs)
+realmode-y				+= header.o
+realmode-y				+= trampoline_$(BITS).o
+realmode-y				+= stack.o
+realmode-y				+= reboot.o
+realmode-$(CONFIG_ACPI_SLEEP)		+= $(wakeup-objs)
+realmode-$(CONFIG_AMD_MEM_ENCRYPT)	+= sev_ap_park.o
 
 targets	+= $(realmode-y)
 
diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 8c1db5bf5d78..6c17f8fd1eb4 100644
--- a/arch/x86/realmode/rm/header.S
+++ b/arch/x86/realmode/rm/header.S
@@ -22,6 +22,9 @@ SYM_DATA_START(real_mode_header)
 	.long	pa_trampoline_header
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 	.long	pa_sev_es_trampoline_start
+	.long	pa_sev_ap_park_asm
+	.long	__KERNEL32_CS
+	.long	pa_sev_ap_park_gdt;
 #endif
 #ifdef CONFIG_X86_64
 	.long	pa_trampoline_pgd;
diff --git a/arch/x86/realmode/rm/sev_ap_park.S b/arch/x86/realmode/rm/sev_ap_park.S
new file mode 100644
index 000000000000..0b63d0569d4d
--- /dev/null
+++ b/arch/x86/realmode/rm/sev_ap_park.S
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/segment.h>
+#include <asm/page_types.h>
+#include <asm/processor-flags.h>
+#include <asm/msr-index.h>
+#include <asm/sev-ap-jumptable.h>
+#include "realmode.h"
+
+	.section ".text32", "ax"
+	.code32
+/*
+ * The following code switches to 16-bit protected mode and sets up the
+ * execution environment for the AP Jump Table blob. Then it jumps to the AP
+ * Jump Table to park the AP.
+ *
+ * The code was copied from reboot.S and modified to fit the SEV-ES requirements
+ * for AP parking. When this code is entered, all registers except %EAX-%EDX are
+ * in reset state.
+ *
+ * The AP Jump Table physical base address is in %EBX upon entry.
+ *
+ * %EAX, %ECX, %EDX and EFLAGS are undefined. Only use registers %EAX-%EDX and
+ * %ESP in this code.
+ */
+SYM_CODE_START(sev_ap_park_asm)
+
+	/* Switch to trampoline GDT as it is guaranteed < 4 GiB */
+	movl	$__KERNEL_DS, %eax
+	movl	%eax, %ds
+	lgdt	pa_tr_gdt
+
+	/* Disable paging to drop us out of long mode */
+	movl	%cr0, %eax
+	btcl	$X86_CR0_PG_BIT, %eax
+	movl	%eax, %cr0
+
+	ljmpl	$__KERNEL32_CS, $pa_sev_ap_park_paging_off
+
+SYM_INNER_LABEL(sev_ap_park_paging_off, SYM_L_GLOBAL)
+	/* Clear EFER */
+	movl	$0, %eax
+	movl	$0, %edx
+	movl	$MSR_EFER, %ecx
+	wrmsr
+
+	/* Clear CR3 */
+	movl	$0, %ecx
+	movl	%ecx, %cr3
+
+	/* Set up the IDT for real mode. */
+	lidtl	pa_machine_real_restart_idt
+
+	/*
+	 * Load the GDT with the 16-bit segments for the AP Jump Table
+	 */
+	lgdtl	pa_sev_ap_park_gdt
+
+	/* Setup Code and Data segments for AP Jump Table */
+	movw	$SEV_APJT_DS16, %ax
+	movw	%ax, %ds
+	movw	%ax, %ss
+
+	/* Jump to the AP Jump Table in 16-bit protected mode */
+	ljmpw	$SEV_APJT_CS16, $SEV_APJT_ENTRY
+SYM_CODE_END(sev_ap_park_asm)
+
+	.data
+	.balign	16
+SYM_DATA_START(sev_ap_park_gdt)
+	/* Self-pointer */
+	.word	sev_ap_park_gdt_end - sev_ap_park_gdt - 1
+	.long	pa_sev_ap_park_gdt
+	.word	0
+
+	/*
+	 * Offset 0x8
+	 * 16-bit code segment descriptor pointing to the AP Jump Table base
+	 * Setup at runtime in sev_es_setup_ap_jump_table_data().
+	 */
+	.quad	0
+
+	/*
+	 * Offset 0x10
+	 * 16-bit data segment descriptor pointing to the AP Jump Table base
+	 * Setup at runtime in sev_es_setup_ap_jump_table_data().
+	 */
+	.quad	0
+SYM_DATA_END_LABEL(sev_ap_park_gdt, SYM_L_GLOBAL, sev_ap_park_gdt_end)
-- 
2.33.0

* [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (7 preceding siblings ...)
  2021-09-13 15:55 ` [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2 Joerg Roedel
@ 2021-09-13 15:56 ` Joerg Roedel
  2021-11-15 18:44   ` Borislav Petkov
  2021-09-13 15:56 ` [PATCH v2 10/12] x86/sev: Add MMIO handling support to boot/compressed/ code Joerg Roedel
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:56 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

To support kexec under SEV-ES, the APs can't be parked with HLT. Upon
wakeup the AP needs to find its way to start executing in real-mode at
the reset vector set by the new kernel.

This is what the AP Jump Table blob provides, so stop the APs the
SEV-ES way by calling the AP-reset-hold VMGEXIT from the AP Jump
Table.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/asm/sev.h |  7 +++++++
 arch/x86/kernel/process.c  |  8 ++++++++
 arch/x86/kernel/sev.c      | 11 ++++++++++-
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 134a7c9d91b6..cd14b6e10f12 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -81,12 +81,19 @@ static __always_inline void sev_es_nmi_complete(void)
 		__sev_es_nmi_complete();
 }
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
+void __sev_es_stop_this_cpu(void);
+static __always_inline void sev_es_stop_this_cpu(void)
+{
+	if (static_branch_unlikely(&sev_es_enable_key))
+		__sev_es_stop_this_cpu();
+}
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
 static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
+static inline void sev_es_stop_this_cpu(void) { }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 1d9463e3096b..8d9b03923baa 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,6 +43,7 @@
 #include <asm/io_bitmap.h>
 #include <asm/proto.h>
 #include <asm/frame.h>
+#include <asm/sev.h>
 
 #include "process.h"
 
@@ -752,6 +753,13 @@ void stop_this_cpu(void *dummy)
 	if (boot_cpu_has(X86_FEATURE_SME))
 		native_wbinvd();
 	for (;;) {
+		/*
+		 * SEV-ES guests need a special stop routine to support
+		 * kexec. Try this first; if it fails, the function will
+		 * return and native_halt() is used.
+		 */
+		sev_es_stop_this_cpu();
+
 		/*
 		 * Use native_halt() so that memory contents don't change
 		 * (stack usage and variables) after possibly issuing the
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 20b439986d86..bac9bb4fa54e 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -695,7 +695,6 @@ static bool __init sev_es_setup_ghcb(void)
 	return true;
 }
 
-#ifdef CONFIG_HOTPLUG_CPU
 void __noreturn sev_jumptable_ap_park(void)
 {
 	local_irq_disable();
@@ -725,6 +724,16 @@ void __noreturn sev_jumptable_ap_park(void)
 }
 STACK_FRAME_NON_STANDARD(sev_jumptable_ap_park);
 
+void __sev_es_stop_this_cpu(void)
+{
+	/* Only park in the AP Jump Table when the code has been installed */
+	if (!sev_ap_jumptable_blob_installed)
+		return;
+
+	sev_jumptable_ap_park();
+}
+
+#ifdef CONFIG_HOTPLUG_CPU
 static void sev_es_ap_hlt_loop(void)
 {
 	struct ghcb_state state;
-- 
2.33.0

* [PATCH v2 10/12] x86/sev: Add MMIO handling support to boot/compressed/ code
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (8 preceding siblings ...)
  2021-09-13 15:56 ` [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU Joerg Roedel
@ 2021-09-13 15:56 ` Joerg Roedel
  2021-09-13 15:56 ` [PATCH v2 11/12] x86/sev: Handle CLFLUSH MMIO events Joerg Roedel
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:56 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Move the code for MMIO handling in the #VC handler to sev-shared.c so
that it can be used in the decompressor code. The decompressor needs
to handle MMIO events for writing to the VGA framebuffer.

When the kernel is booted via UEFI the VGA console is not enabled that
early, but a kexec boot enables it, so the decompressor needs MMIO
support to write to the framebuffer.

This also requires sharing some code from lib/insn-eval.c. Since
insn-eval.c can't be included in the decompressor code directly, move
the relevant parts into lib/insn-eval-shared.c and include that file
instead.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
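The sharing follows the same pattern already used for sev-shared.c: the
shared file has no linkage of its own and is compiled by #include'ing
it into each consumer, after the consumer has provided its
environment-specific helpers. A minimal standalone illustration of the
pattern, with made-up file and function names:

/* shared.c - helpers with no linkage of their own, compiled via #include */
static int shared_add(int a, int b)
{
	return a + b;
}

/* consumer.c - provide whatever the shared code expects, then include it */
#include <stdio.h>

#include "shared.c"

int main(void)
{
	printf("%d\n", shared_add(2, 3));
	return 0;
}

In this patch the decompressor is such a consumer: it keeps its dummy
insn_get_seg_base(), adds flat-segment stubs for get_seg_base_limit()
and vc_slow_virt_to_phys(), and then includes insn-eval-shared.c and
sev-shared.c, while insn-eval.c forward-declares get_seg_base_limit()
and includes the same shared file near its top.
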
 arch/x86/boot/compressed/sev.c  |  43 +-
 arch/x86/kernel/sev-shared.c    | 282 +++++++++++
 arch/x86/kernel/sev.c           | 282 -----------
 arch/x86/lib/insn-eval-shared.c | 805 ++++++++++++++++++++++++++++++++
 arch/x86/lib/insn-eval.c        | 802 +------------------------------
 5 files changed, 1116 insertions(+), 1098 deletions(-)
 create mode 100644 arch/x86/lib/insn-eval-shared.c

diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index 7f8416f76be7..3ffb3d873989 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -26,24 +26,8 @@
 struct ghcb boot_ghcb_page __aligned(PAGE_SIZE);
 struct ghcb *boot_ghcb;
 
-/*
- * Copy a version of this function here - insn-eval.c can't be used in
- * pre-decompression code.
- */
-static bool insn_has_rep_prefix(struct insn *insn)
-{
-	insn_byte_t p;
-	int i;
-
-	insn_get_prefixes(insn);
-
-	for_each_insn_prefix(insn, i, p) {
-		if (p == 0xf2 || p == 0xf3)
-			return true;
-	}
-
-	return false;
-}
+#undef WARN_ONCE
+#define WARN_ONCE(condition, format...)
 
 /*
  * Only a dummy for insn_get_seg_base() - Early boot-code is 64bit only and
@@ -54,6 +38,17 @@ static unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx)
 	return 0UL;
 }
 
+/* The decompressor only uses flat segments */
+static int get_seg_base_limit(struct insn *insn, struct pt_regs *regs,
+			      int regoff, unsigned long *base,
+			      unsigned long *limit)
+{
+	if (base)
+		*base  = 0L;
+	if (limit)
+		*limit = ~0L;
+}
+
 static inline u64 sev_es_rd_ghcb_msr(void)
 {
 	unsigned long low, high;
@@ -105,6 +100,14 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	return ES_OK;
 }
 
+static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+					   unsigned long vaddr, phys_addr_t *paddr)
+{
+	*paddr = (phys_addr_t)vaddr;
+
+	return ES_OK;
+}
+
 #undef __init
 #undef __pa
 #define __init
@@ -115,6 +118,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 /* Basic instruction decoding support needed */
 #include "../../lib/inat.c"
 #include "../../lib/insn.c"
+#include "../../lib/insn-eval-shared.c"
 
 /* Include code for early handlers */
 #include "../../kernel/sev-shared.c"
@@ -204,6 +208,9 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
 	case SVM_EXIT_CPUID:
 		result = vc_handle_cpuid(boot_ghcb, &ctxt);
 		break;
+	case SVM_EXIT_NPF:
+		result = vc_handle_mmio(boot_ghcb, &ctxt);
+		break;
 	default:
 		result = ES_UNSUPPORTED;
 		break;
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 40a1ca81bdb8..a7a0793c4f98 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -558,3 +558,285 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb,
 
 	return ES_OK;
 }
+
+static long *vc_insn_get_reg(struct es_em_ctxt *ctxt)
+{
+	long *reg_array;
+	int offset;
+
+	reg_array = (long *)ctxt->regs;
+	offset    = insn_get_modrm_reg_off(&ctxt->insn, ctxt->regs);
+
+	if (offset < 0)
+		return NULL;
+
+	offset /= sizeof(long);
+
+	return reg_array + offset;
+}
+
+static long *vc_insn_get_rm(struct es_em_ctxt *ctxt)
+{
+	long *reg_array;
+	int offset;
+
+	reg_array = (long *)ctxt->regs;
+	offset    = insn_get_modrm_rm_off(&ctxt->insn, ctxt->regs);
+
+	if (offset < 0)
+		return NULL;
+
+	offset /= sizeof(long);
+
+	return reg_array + offset;
+}
+static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
+				 unsigned int bytes, bool read)
+{
+	u64 exit_code, exit_info_1, exit_info_2;
+	unsigned long ghcb_pa = __pa(ghcb);
+	enum es_result res;
+	phys_addr_t paddr;
+	void __user *ref;
+
+	ref = insn_get_addr_ref(&ctxt->insn, ctxt->regs);
+	if (ref == (void __user *)-1L)
+		return ES_UNSUPPORTED;
+
+	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
+
+	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
+	if (res != ES_OK) {
+		if (res == ES_EXCEPTION && !read)
+			ctxt->fi.error_code |= X86_PF_WRITE;
+
+		return res;
+	}
+
+	exit_info_1 = paddr;
+	/* Can never be greater than 8 */
+	exit_info_2 = bytes;
+
+	ghcb_set_sw_scratch(ghcb, ghcb_pa + offsetof(struct ghcb, shared_buffer));
+
+	return sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, exit_info_1, exit_info_2);
+}
+
+static enum es_result vc_handle_mmio_twobyte_ops(struct ghcb *ghcb,
+						 struct es_em_ctxt *ctxt)
+{
+	struct insn *insn = &ctxt->insn;
+	unsigned int bytes = 0;
+	enum es_result ret;
+	int sign_byte;
+	long *reg_data;
+
+	switch (insn->opcode.bytes[1]) {
+		/* MMIO Read w/ zero-extension */
+	case 0xb6:
+		bytes = 1;
+		fallthrough;
+	case 0xb7:
+		if (!bytes)
+			bytes = 2;
+
+		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
+		if (ret)
+			break;
+
+		/* Zero extend based on operand size */
+		reg_data = vc_insn_get_reg(ctxt);
+		if (!reg_data)
+			return ES_DECODE_FAILED;
+
+		memset(reg_data, 0, insn->opnd_bytes);
+
+		memcpy(reg_data, ghcb->shared_buffer, bytes);
+		break;
+
+		/* MMIO Read w/ sign-extension */
+	case 0xbe:
+		bytes = 1;
+		fallthrough;
+	case 0xbf:
+		if (!bytes)
+			bytes = 2;
+
+		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
+		if (ret)
+			break;
+
+		/* Sign extend based on operand size */
+		reg_data = vc_insn_get_reg(ctxt);
+		if (!reg_data)
+			return ES_DECODE_FAILED;
+
+		if (bytes == 1) {
+			u8 *val = (u8 *)ghcb->shared_buffer;
+
+			sign_byte = (*val & 0x80) ? 0xff : 0x00;
+		} else {
+			u16 *val = (u16 *)ghcb->shared_buffer;
+
+			sign_byte = (*val & 0x8000) ? 0xff : 0x00;
+		}
+		memset(reg_data, sign_byte, insn->opnd_bytes);
+
+		memcpy(reg_data, ghcb->shared_buffer, bytes);
+		break;
+
+	default:
+		ret = ES_UNSUPPORTED;
+	}
+
+	return ret;
+}
+
+/*
+ * The MOVS instruction has two memory operands, which raises the
+ * problem that it is not known whether the access to the source or the
+ * destination caused the #VC exception (and hence whether an MMIO read
+ * or write operation needs to be emulated).
+ *
+ * Instead of playing games with walking page-tables and trying to guess
+ * whether the source or destination is an MMIO range, split the move
+ * into two operations, a read and a write with only one memory operand.
+ * This will cause a nested #VC exception on the MMIO address which can
+ * then be handled.
+ *
+ * This implementation has the benefit that it also supports MOVS where
+ * source _and_ destination are MMIO regions.
+ *
+ * It will slow MOVS on MMIO down a lot, but in SEV-ES guests it is a
+ * rare operation. If it turns out to be a performance problem the split
+ * operations can be moved to memcpy_fromio() and memcpy_toio().
+ */
+static enum es_result vc_handle_mmio_movs(struct es_em_ctxt *ctxt,
+					  unsigned int bytes)
+{
+	unsigned long ds_base, es_base;
+	unsigned char *src, *dst;
+	unsigned char buffer[8];
+	enum es_result ret;
+	bool rep;
+	int off;
+
+	ds_base = insn_get_seg_base(ctxt->regs, INAT_SEG_REG_DS);
+	es_base = insn_get_seg_base(ctxt->regs, INAT_SEG_REG_ES);
+
+	if (ds_base == -1L || es_base == -1L) {
+		ctxt->fi.vector = X86_TRAP_GP;
+		ctxt->fi.error_code = 0;
+		return ES_EXCEPTION;
+	}
+
+	src = ds_base + (unsigned char *)ctxt->regs->si;
+	dst = es_base + (unsigned char *)ctxt->regs->di;
+
+	ret = vc_read_mem(ctxt, src, buffer, bytes);
+	if (ret != ES_OK)
+		return ret;
+
+	ret = vc_write_mem(ctxt, dst, buffer, bytes);
+	if (ret != ES_OK)
+		return ret;
+
+	if (ctxt->regs->flags & X86_EFLAGS_DF)
+		off = -bytes;
+	else
+		off =  bytes;
+
+	ctxt->regs->si += off;
+	ctxt->regs->di += off;
+
+	rep = insn_has_rep_prefix(&ctxt->insn);
+	if (rep)
+		ctxt->regs->cx -= 1;
+
+	if (!rep || ctxt->regs->cx == 0)
+		return ES_OK;
+	else
+		return ES_RETRY;
+}
+
+static enum es_result vc_handle_mmio(struct ghcb *ghcb,
+				     struct es_em_ctxt *ctxt)
+{
+	struct insn *insn = &ctxt->insn;
+	unsigned int bytes = 0;
+	enum es_result ret;
+	long *reg_data;
+
+	switch (insn->opcode.bytes[0]) {
+	/* MMIO Write */
+	case 0x88:
+		bytes = 1;
+		fallthrough;
+	case 0x89:
+		if (!bytes)
+			bytes = insn->opnd_bytes;
+
+		reg_data = vc_insn_get_reg(ctxt);
+		if (!reg_data)
+			return ES_DECODE_FAILED;
+
+		memcpy(ghcb->shared_buffer, reg_data, bytes);
+
+		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
+		break;
+
+	case 0xc6:
+		bytes = 1;
+		fallthrough;
+	case 0xc7:
+		if (!bytes)
+			bytes = insn->opnd_bytes;
+
+		memcpy(ghcb->shared_buffer, insn->immediate1.bytes, bytes);
+
+		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
+		break;
+
+		/* MMIO Read */
+	case 0x8a:
+		bytes = 1;
+		fallthrough;
+	case 0x8b:
+		if (!bytes)
+			bytes = insn->opnd_bytes;
+
+		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
+		if (ret)
+			break;
+
+		reg_data = vc_insn_get_reg(ctxt);
+		if (!reg_data)
+			return ES_DECODE_FAILED;
+
+		/* Zero-extend for 32-bit operation */
+		if (bytes == 4)
+			*reg_data = 0;
+
+		memcpy(reg_data, ghcb->shared_buffer, bytes);
+		break;
+
+		/* MOVS instruction */
+	case 0xa4:
+		bytes = 1;
+		fallthrough;
+	case 0xa5:
+		if (!bytes)
+			bytes = insn->opnd_bytes;
+
+		ret = vc_handle_mmio_movs(ctxt, bytes);
+		break;
+		/* Two-Byte Opcodes */
+	case 0x0f:
+		ret = vc_handle_mmio_twobyte_ops(ghcb, ctxt);
+		break;
+	default:
+		ret = ES_UNSUPPORTED;
+	}
+
+	return ret;
+}
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index bac9bb4fa54e..5d4b1d317317 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -977,288 +977,6 @@ static void __init vc_early_forward_exception(struct es_em_ctxt *ctxt)
 	do_early_exception(ctxt->regs, trapnr);
 }
 
-static long *vc_insn_get_reg(struct es_em_ctxt *ctxt)
-{
-	long *reg_array;
-	int offset;
-
-	reg_array = (long *)ctxt->regs;
-	offset    = insn_get_modrm_reg_off(&ctxt->insn, ctxt->regs);
-
-	if (offset < 0)
-		return NULL;
-
-	offset /= sizeof(long);
-
-	return reg_array + offset;
-}
-
-static long *vc_insn_get_rm(struct es_em_ctxt *ctxt)
-{
-	long *reg_array;
-	int offset;
-
-	reg_array = (long *)ctxt->regs;
-	offset    = insn_get_modrm_rm_off(&ctxt->insn, ctxt->regs);
-
-	if (offset < 0)
-		return NULL;
-
-	offset /= sizeof(long);
-
-	return reg_array + offset;
-}
-static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
-				 unsigned int bytes, bool read)
-{
-	u64 exit_code, exit_info_1, exit_info_2;
-	unsigned long ghcb_pa = __pa(ghcb);
-	enum es_result res;
-	phys_addr_t paddr;
-	void __user *ref;
-
-	ref = insn_get_addr_ref(&ctxt->insn, ctxt->regs);
-	if (ref == (void __user *)-1L)
-		return ES_UNSUPPORTED;
-
-	exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
-
-	res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
-	if (res != ES_OK) {
-		if (res == ES_EXCEPTION && !read)
-			ctxt->fi.error_code |= X86_PF_WRITE;
-
-		return res;
-	}
-
-	exit_info_1 = paddr;
-	/* Can never be greater than 8 */
-	exit_info_2 = bytes;
-
-	ghcb_set_sw_scratch(ghcb, ghcb_pa + offsetof(struct ghcb, shared_buffer));
-
-	return sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, exit_info_1, exit_info_2);
-}
-
-static enum es_result vc_handle_mmio_twobyte_ops(struct ghcb *ghcb,
-						 struct es_em_ctxt *ctxt)
-{
-	struct insn *insn = &ctxt->insn;
-	unsigned int bytes = 0;
-	enum es_result ret;
-	int sign_byte;
-	long *reg_data;
-
-	switch (insn->opcode.bytes[1]) {
-		/* MMIO Read w/ zero-extension */
-	case 0xb6:
-		bytes = 1;
-		fallthrough;
-	case 0xb7:
-		if (!bytes)
-			bytes = 2;
-
-		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
-		if (ret)
-			break;
-
-		/* Zero extend based on operand size */
-		reg_data = vc_insn_get_reg(ctxt);
-		if (!reg_data)
-			return ES_DECODE_FAILED;
-
-		memset(reg_data, 0, insn->opnd_bytes);
-
-		memcpy(reg_data, ghcb->shared_buffer, bytes);
-		break;
-
-		/* MMIO Read w/ sign-extension */
-	case 0xbe:
-		bytes = 1;
-		fallthrough;
-	case 0xbf:
-		if (!bytes)
-			bytes = 2;
-
-		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
-		if (ret)
-			break;
-
-		/* Sign extend based on operand size */
-		reg_data = vc_insn_get_reg(ctxt);
-		if (!reg_data)
-			return ES_DECODE_FAILED;
-
-		if (bytes == 1) {
-			u8 *val = (u8 *)ghcb->shared_buffer;
-
-			sign_byte = (*val & 0x80) ? 0xff : 0x00;
-		} else {
-			u16 *val = (u16 *)ghcb->shared_buffer;
-
-			sign_byte = (*val & 0x8000) ? 0xff : 0x00;
-		}
-		memset(reg_data, sign_byte, insn->opnd_bytes);
-
-		memcpy(reg_data, ghcb->shared_buffer, bytes);
-		break;
-
-	default:
-		ret = ES_UNSUPPORTED;
-	}
-
-	return ret;
-}
-
-/*
- * The MOVS instruction has two memory operands, which raises the
- * problem that it is not known whether the access to the source or the
- * destination caused the #VC exception (and hence whether an MMIO read
- * or write operation needs to be emulated).
- *
- * Instead of playing games with walking page-tables and trying to guess
- * whether the source or destination is an MMIO range, split the move
- * into two operations, a read and a write with only one memory operand.
- * This will cause a nested #VC exception on the MMIO address which can
- * then be handled.
- *
- * This implementation has the benefit that it also supports MOVS where
- * source _and_ destination are MMIO regions.
- *
- * It will slow MOVS on MMIO down a lot, but in SEV-ES guests it is a
- * rare operation. If it turns out to be a performance problem the split
- * operations can be moved to memcpy_fromio() and memcpy_toio().
- */
-static enum es_result vc_handle_mmio_movs(struct es_em_ctxt *ctxt,
-					  unsigned int bytes)
-{
-	unsigned long ds_base, es_base;
-	unsigned char *src, *dst;
-	unsigned char buffer[8];
-	enum es_result ret;
-	bool rep;
-	int off;
-
-	ds_base = insn_get_seg_base(ctxt->regs, INAT_SEG_REG_DS);
-	es_base = insn_get_seg_base(ctxt->regs, INAT_SEG_REG_ES);
-
-	if (ds_base == -1L || es_base == -1L) {
-		ctxt->fi.vector = X86_TRAP_GP;
-		ctxt->fi.error_code = 0;
-		return ES_EXCEPTION;
-	}
-
-	src = ds_base + (unsigned char *)ctxt->regs->si;
-	dst = es_base + (unsigned char *)ctxt->regs->di;
-
-	ret = vc_read_mem(ctxt, src, buffer, bytes);
-	if (ret != ES_OK)
-		return ret;
-
-	ret = vc_write_mem(ctxt, dst, buffer, bytes);
-	if (ret != ES_OK)
-		return ret;
-
-	if (ctxt->regs->flags & X86_EFLAGS_DF)
-		off = -bytes;
-	else
-		off =  bytes;
-
-	ctxt->regs->si += off;
-	ctxt->regs->di += off;
-
-	rep = insn_has_rep_prefix(&ctxt->insn);
-	if (rep)
-		ctxt->regs->cx -= 1;
-
-	if (!rep || ctxt->regs->cx == 0)
-		return ES_OK;
-	else
-		return ES_RETRY;
-}
-
-static enum es_result vc_handle_mmio(struct ghcb *ghcb,
-				     struct es_em_ctxt *ctxt)
-{
-	struct insn *insn = &ctxt->insn;
-	unsigned int bytes = 0;
-	enum es_result ret;
-	long *reg_data;
-
-	switch (insn->opcode.bytes[0]) {
-	/* MMIO Write */
-	case 0x88:
-		bytes = 1;
-		fallthrough;
-	case 0x89:
-		if (!bytes)
-			bytes = insn->opnd_bytes;
-
-		reg_data = vc_insn_get_reg(ctxt);
-		if (!reg_data)
-			return ES_DECODE_FAILED;
-
-		memcpy(ghcb->shared_buffer, reg_data, bytes);
-
-		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
-		break;
-
-	case 0xc6:
-		bytes = 1;
-		fallthrough;
-	case 0xc7:
-		if (!bytes)
-			bytes = insn->opnd_bytes;
-
-		memcpy(ghcb->shared_buffer, insn->immediate1.bytes, bytes);
-
-		ret = vc_do_mmio(ghcb, ctxt, bytes, false);
-		break;
-
-		/* MMIO Read */
-	case 0x8a:
-		bytes = 1;
-		fallthrough;
-	case 0x8b:
-		if (!bytes)
-			bytes = insn->opnd_bytes;
-
-		ret = vc_do_mmio(ghcb, ctxt, bytes, true);
-		if (ret)
-			break;
-
-		reg_data = vc_insn_get_reg(ctxt);
-		if (!reg_data)
-			return ES_DECODE_FAILED;
-
-		/* Zero-extend for 32-bit operation */
-		if (bytes == 4)
-			*reg_data = 0;
-
-		memcpy(reg_data, ghcb->shared_buffer, bytes);
-		break;
-
-		/* MOVS instruction */
-	case 0xa4:
-		bytes = 1;
-		fallthrough;
-	case 0xa5:
-		if (!bytes)
-			bytes = insn->opnd_bytes;
-
-		ret = vc_handle_mmio_movs(ctxt, bytes);
-		break;
-		/* Two-Byte Opcodes */
-	case 0x0f:
-		ret = vc_handle_mmio_twobyte_ops(ghcb, ctxt);
-		break;
-	default:
-		ret = ES_UNSUPPORTED;
-	}
-
-	return ret;
-}
-
 static enum es_result vc_handle_dr7_write(struct ghcb *ghcb,
 					  struct es_em_ctxt *ctxt)
 {
diff --git a/arch/x86/lib/insn-eval-shared.c b/arch/x86/lib/insn-eval-shared.c
new file mode 100644
index 000000000000..0eb37e3a218b
--- /dev/null
+++ b/arch/x86/lib/insn-eval-shared.c
@@ -0,0 +1,805 @@
+/*
+ * Utility functions for x86 operand and address decoding
+ *
+ * Copyright (C) Intel Corporation 2017
+ */
+
+enum reg_type {
+	REG_TYPE_RM = 0,
+	REG_TYPE_REG,
+	REG_TYPE_INDEX,
+	REG_TYPE_BASE,
+};
+
+/**
+ * is_string_insn() - Determine if instruction is a string instruction
+ * @insn:	Instruction containing the opcode to inspect
+ *
+ * Returns:
+ *
+ * true if the instruction, determined by the opcode, is any of the
+ * string instructions as defined in the Intel Software Development manual.
+ * False otherwise.
+ */
+static bool is_string_insn(struct insn *insn)
+{
+	insn_get_opcode(insn);
+
+	/* All string instructions have a 1-byte opcode. */
+	if (insn->opcode.nbytes != 1)
+		return false;
+
+	switch (insn->opcode.bytes[0]) {
+	case 0x6c ... 0x6f:	/* INS, OUTS */
+	case 0xa4 ... 0xa7:	/* MOVS, CMPS */
+	case 0xaa ... 0xaf:	/* STOS, LODS, SCAS */
+		return true;
+	default:
+		return false;
+	}
+}
+
+/**
+ * insn_has_rep_prefix() - Determine if instruction has a REP prefix
+ * @insn:	Instruction containing the prefix to inspect
+ *
+ * Returns:
+ *
+ * true if the instruction has a REP prefix, false if not.
+ */
+bool insn_has_rep_prefix(struct insn *insn)
+{
+	insn_byte_t p;
+	int i;
+
+	insn_get_prefixes(insn);
+
+	for_each_insn_prefix(insn, i, p) {
+		if (p == 0xf2 || p == 0xf3)
+			return true;
+	}
+
+	return false;
+}
+
+static int get_reg_offset(struct insn *insn, struct pt_regs *regs,
+			  enum reg_type type)
+{
+	int regno = 0;
+
+	static const int regoff[] = {
+		offsetof(struct pt_regs, ax),
+		offsetof(struct pt_regs, cx),
+		offsetof(struct pt_regs, dx),
+		offsetof(struct pt_regs, bx),
+		offsetof(struct pt_regs, sp),
+		offsetof(struct pt_regs, bp),
+		offsetof(struct pt_regs, si),
+		offsetof(struct pt_regs, di),
+#ifdef CONFIG_X86_64
+		offsetof(struct pt_regs, r8),
+		offsetof(struct pt_regs, r9),
+		offsetof(struct pt_regs, r10),
+		offsetof(struct pt_regs, r11),
+		offsetof(struct pt_regs, r12),
+		offsetof(struct pt_regs, r13),
+		offsetof(struct pt_regs, r14),
+		offsetof(struct pt_regs, r15),
+#endif
+	};
+	int nr_registers = ARRAY_SIZE(regoff);
+	/*
+	 * Don't possibly decode a 32-bit instructions as
+	 * reading a 64-bit-only register.
+	 */
+	if (IS_ENABLED(CONFIG_X86_64) && !insn->x86_64)
+		nr_registers -= 8;
+
+	switch (type) {
+	case REG_TYPE_RM:
+		regno = X86_MODRM_RM(insn->modrm.value);
+
+		/*
+		 * ModRM.mod == 0 and ModRM.rm == 5 means a 32-bit displacement
+		 * follows the ModRM byte.
+		 */
+		if (!X86_MODRM_MOD(insn->modrm.value) && regno == 5)
+			return -EDOM;
+
+		if (X86_REX_B(insn->rex_prefix.value))
+			regno += 8;
+		break;
+
+	case REG_TYPE_REG:
+		regno = X86_MODRM_REG(insn->modrm.value);
+
+		if (X86_REX_R(insn->rex_prefix.value))
+			regno += 8;
+		break;
+
+	case REG_TYPE_INDEX:
+		regno = X86_SIB_INDEX(insn->sib.value);
+		if (X86_REX_X(insn->rex_prefix.value))
+			regno += 8;
+
+		/*
+		 * If ModRM.mod != 3 and SIB.index = 4 the scale*index
+		 * portion of the address computation is null. This is
+		 * true only if REX.X is 0. In such a case, the SIB index
+		 * is used in the address computation.
+		 */
+		if (X86_MODRM_MOD(insn->modrm.value) != 3 && regno == 4)
+			return -EDOM;
+		break;
+
+	case REG_TYPE_BASE:
+		regno = X86_SIB_BASE(insn->sib.value);
+		/*
+		 * If ModRM.mod is 0 and SIB.base == 5, the base of the
+		 * register-indirect addressing is 0. In this case, a
+		 * 32-bit displacement follows the SIB byte.
+		 */
+		if (!X86_MODRM_MOD(insn->modrm.value) && regno == 5)
+			return -EDOM;
+
+		if (X86_REX_B(insn->rex_prefix.value))
+			regno += 8;
+		break;
+
+	default:
+		pr_err_ratelimited("invalid register type: %d\n", type);
+		return -EINVAL;
+	}
+
+	if (regno >= nr_registers) {
+		WARN_ONCE(1, "decoded an instruction with an invalid register");
+		return -EINVAL;
+	}
+	return regoff[regno];
+}
+
+/**
+ * insn_get_modrm_rm_off() - Obtain register in r/m part of the ModRM byte
+ * @insn:	Instruction containing the ModRM byte
+ * @regs:	Register values as seen when entering kernel mode
+ *
+ * Returns:
+ *
+ * The register indicated by the r/m part of the ModRM byte. The
+ * register is obtained as an offset from the base of pt_regs. In specific
+ * cases, the returned value can be -EDOM to indicate that the particular value
+ * of ModRM does not refer to a register and shall be ignored.
+ */
+int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs)
+{
+	return get_reg_offset(insn, regs, REG_TYPE_RM);
+}
+
+/**
+ * get_reg_offset_16() - Obtain offset of register indicated by instruction
+ * @insn:	Instruction containing ModRM byte
+ * @regs:	Register values as seen when entering kernel mode
+ * @offs1:	Offset of the first operand register
+ * @offs2:	Offset of the second operand register, if applicable
+ *
+ * Obtain the offset, in pt_regs, of the registers indicated by the ModRM byte
+ * in @insn. This function is to be used with 16-bit address encodings. The
+ * @offs1 and @offs2 will be written with the offset of the two registers
+ * indicated by the instruction. In cases where any of the registers is not
+ * referenced by the instruction, the value will be set to -EDOM.
+ *
+ * Returns:
+ *
+ * 0 on success, -EINVAL on error.
+ */
+static int get_reg_offset_16(struct insn *insn, struct pt_regs *regs,
+			     int *offs1, int *offs2)
+{
+	/*
+	 * 16-bit addressing can use one or two registers. Specifics of
+	 * encodings are given in Table 2-1. "16-Bit Addressing Forms with the
+	 * ModR/M Byte" of the Intel Software Development Manual.
+	 */
+	static const int regoff1[] = {
+		offsetof(struct pt_regs, bx),
+		offsetof(struct pt_regs, bx),
+		offsetof(struct pt_regs, bp),
+		offsetof(struct pt_regs, bp),
+		offsetof(struct pt_regs, si),
+		offsetof(struct pt_regs, di),
+		offsetof(struct pt_regs, bp),
+		offsetof(struct pt_regs, bx),
+	};
+
+	static const int regoff2[] = {
+		offsetof(struct pt_regs, si),
+		offsetof(struct pt_regs, di),
+		offsetof(struct pt_regs, si),
+		offsetof(struct pt_regs, di),
+		-EDOM,
+		-EDOM,
+		-EDOM,
+		-EDOM,
+	};
+
+	if (!offs1 || !offs2)
+		return -EINVAL;
+
+	/* Operand is a register, use the generic function. */
+	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
+		*offs1 = insn_get_modrm_rm_off(insn, regs);
+		*offs2 = -EDOM;
+		return 0;
+	}
+
+	*offs1 = regoff1[X86_MODRM_RM(insn->modrm.value)];
+	*offs2 = regoff2[X86_MODRM_RM(insn->modrm.value)];
+
+	/*
+	 * If ModRM.mod is 0 and ModRM.rm is 110b, then we use displacement-
+	 * only addressing. This means that no registers are involved in
+	 * computing the effective address. Thus, ensure that the first
+	 * register offset is invalid. The second register offset is already
+	 * invalid under the aforementioned conditions.
+	 */
+	if ((X86_MODRM_MOD(insn->modrm.value) == 0) &&
+	    (X86_MODRM_RM(insn->modrm.value) == 6))
+		*offs1 = -EDOM;
+
+	return 0;
+}
+
+/**
+ * insn_get_modrm_reg_off() - Obtain register in reg part of the ModRM byte
+ * @insn:	Instruction containing the ModRM byte
+ * @regs:	Register values as seen when entering kernel mode
+ *
+ * Returns:
+ *
+ * The register indicated by the reg part of the ModRM byte. The
+ * register is obtained as an offset from the base of pt_regs.
+ */
+int insn_get_modrm_reg_off(struct insn *insn, struct pt_regs *regs)
+{
+	return get_reg_offset(insn, regs, REG_TYPE_REG);
+}
+
+/**
+ * get_eff_addr_reg() - Obtain effective address from register operand
+ * @insn:	Instruction. Must be valid.
+ * @regs:	Register values as seen when entering kernel mode
+ * @regoff:	Obtained operand offset, in pt_regs, with the effective address
+ * @eff_addr:	Obtained effective address
+ *
+ * Obtain the effective address stored in the register operand as indicated by
+ * the ModRM byte. This function is to be used only with register addressing
+ * (i.e.,  ModRM.mod is 3). The effective address is saved in @eff_addr. The
+ * register operand, as an offset from the base of pt_regs, is saved in @regoff;
+ * such offset can then be used to resolve the segment associated with the
+ * operand. This function can be used with any of the supported address sizes
+ * in x86.
+ *
+ * Returns:
+ *
+ * 0 on success. @eff_addr will have the effective address stored in the
+ * operand indicated by ModRM. @regoff will have such operand as an offset from
+ * the base of pt_regs.
+ *
+ * -EINVAL on error.
+ */
+static int get_eff_addr_reg(struct insn *insn, struct pt_regs *regs,
+			    int *regoff, long *eff_addr)
+{
+	int ret;
+
+	ret = insn_get_modrm(insn);
+	if (ret)
+		return ret;
+
+	if (X86_MODRM_MOD(insn->modrm.value) != 3)
+		return -EINVAL;
+
+	*regoff = get_reg_offset(insn, regs, REG_TYPE_RM);
+	if (*regoff < 0)
+		return -EINVAL;
+
+	/* Ignore bytes that are outside the address size. */
+	if (insn->addr_bytes == 2)
+		*eff_addr = regs_get_register(regs, *regoff) & 0xffff;
+	else if (insn->addr_bytes == 4)
+		*eff_addr = regs_get_register(regs, *regoff) & 0xffffffff;
+	else /* 64-bit address */
+		*eff_addr = regs_get_register(regs, *regoff);
+
+	return 0;
+}
+
+/**
+ * get_eff_addr_modrm() - Obtain referenced effective address via ModRM
+ * @insn:	Instruction. Must be valid.
+ * @regs:	Register values as seen when entering kernel mode
+ * @regoff:	Obtained operand offset, in pt_regs, associated with segment
+ * @eff_addr:	Obtained effective address
+ *
+ * Obtain the effective address referenced by the ModRM byte of @insn. After
+ * identifying the registers involved in the register-indirect memory reference,
+ * its value is obtained from the operands in @regs. The computed address is
+ * stored @eff_addr. Also, the register operand that indicates the associated
+ * segment is stored in @regoff, this parameter can later be used to determine
+ * such segment.
+ *
+ * Returns:
+ *
+ * 0 on success. @eff_addr will have the referenced effective address. @regoff
+ * will have a register, as an offset from the base of pt_regs, that can be used
+ * to resolve the associated segment.
+ *
+ * -EINVAL on error.
+ */
+static int get_eff_addr_modrm(struct insn *insn, struct pt_regs *regs,
+			      int *regoff, long *eff_addr)
+{
+	long tmp;
+	int ret;
+
+	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
+		return -EINVAL;
+
+	ret = insn_get_modrm(insn);
+	if (ret)
+		return ret;
+
+	if (X86_MODRM_MOD(insn->modrm.value) > 2)
+		return -EINVAL;
+
+	*regoff = get_reg_offset(insn, regs, REG_TYPE_RM);
+
+	/*
+	 * -EDOM means that we must ignore the address_offset. In such a case,
+	 * in 64-bit mode the effective address relative to the rIP of the
+	 * following instruction.
+	 */
+	if (*regoff == -EDOM) {
+		if (any_64bit_mode(regs))
+			tmp = regs->ip + insn->length;
+		else
+			tmp = 0;
+	} else if (*regoff < 0) {
+		return -EINVAL;
+	} else {
+		tmp = regs_get_register(regs, *regoff);
+	}
+
+	if (insn->addr_bytes == 4) {
+		int addr32 = (int)(tmp & 0xffffffff) + insn->displacement.value;
+
+		*eff_addr = addr32 & 0xffffffff;
+	} else {
+		*eff_addr = tmp + insn->displacement.value;
+	}
+
+	return 0;
+}
+
+/**
+ * get_eff_addr_modrm_16() - Obtain referenced effective address via ModRM
+ * @insn:	Instruction. Must be valid.
+ * @regs:	Register values as seen when entering kernel mode
+ * @regoff:	Obtained operand offset, in pt_regs, associated with segment
+ * @eff_addr:	Obtained effective address
+ *
+ * Obtain the 16-bit effective address referenced by the ModRM byte of @insn.
+ * After identifying the registers involved in the register-indirect memory
+ * reference, its value is obtained from the operands in @regs. The computed
+ * address is stored @eff_addr. Also, the register operand that indicates
+ * the associated segment is stored in @regoff, this parameter can later be used
+ * to determine such segment.
+ *
+ * Returns:
+ *
+ * 0 on success. @eff_addr will have the referenced effective address. @regoff
+ * will have a register, as an offset from the base of pt_regs, that can be used
+ * to resolve the associated segment.
+ *
+ * -EINVAL on error.
+ */
+static int get_eff_addr_modrm_16(struct insn *insn, struct pt_regs *regs,
+				 int *regoff, short *eff_addr)
+{
+	int addr_offset1, addr_offset2, ret;
+	short addr1 = 0, addr2 = 0, displacement;
+
+	if (insn->addr_bytes != 2)
+		return -EINVAL;
+
+	insn_get_modrm(insn);
+
+	if (!insn->modrm.nbytes)
+		return -EINVAL;
+
+	if (X86_MODRM_MOD(insn->modrm.value) > 2)
+		return -EINVAL;
+
+	ret = get_reg_offset_16(insn, regs, &addr_offset1, &addr_offset2);
+	if (ret < 0)
+		return -EINVAL;
+
+	/*
+	 * Don't fail on invalid offset values. They might be invalid because
+	 * they cannot be used for this particular value of ModRM. Instead, use
+	 * them in the computation only if they contain a valid value.
+	 */
+	if (addr_offset1 != -EDOM)
+		addr1 = regs_get_register(regs, addr_offset1) & 0xffff;
+
+	if (addr_offset2 != -EDOM)
+		addr2 = regs_get_register(regs, addr_offset2) & 0xffff;
+
+	displacement = insn->displacement.value & 0xffff;
+	*eff_addr = addr1 + addr2 + displacement;
+
+	/*
+	 * The first operand register could indicate to use of either SS or DS
+	 * registers to obtain the segment selector.  The second operand
+	 * register can only indicate the use of DS. Thus, the first operand
+	 * will be used to obtain the segment selector.
+	 */
+	*regoff = addr_offset1;
+
+	return 0;
+}
+
+/**
+ * get_eff_addr_sib() - Obtain referenced effective address via SIB
+ * @insn:	Instruction. Must be valid.
+ * @regs:	Register values as seen when entering kernel mode
+ * @regoff:	Obtained operand offset, in pt_regs, associated with segment
+ * @eff_addr:	Obtained effective address
+ *
+ * Obtain the effective address referenced by the SIB byte of @insn. After
+ * identifying the registers involved in the indexed, register-indirect memory
+ * reference, its value is obtained from the operands in @regs. The computed
+ * address is stored @eff_addr. Also, the register operand that indicates the
+ * associated segment is stored in @regoff, this parameter can later be used to
+ * determine such segment.
+ *
+ * Returns:
+ *
+ * 0 on success. @eff_addr will have the referenced effective address.
+ * @base_offset will have a register, as an offset from the base of pt_regs,
+ * that can be used to resolve the associated segment.
+ *
+ * Negative value on error.
+ */
+static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs,
+			    int *base_offset, long *eff_addr)
+{
+	long base, indx;
+	int indx_offset;
+	int ret;
+
+	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
+		return -EINVAL;
+
+	ret = insn_get_modrm(insn);
+	if (ret)
+		return ret;
+
+	if (!insn->modrm.nbytes)
+		return -EINVAL;
+
+	if (X86_MODRM_MOD(insn->modrm.value) > 2)
+		return -EINVAL;
+
+	ret = insn_get_sib(insn);
+	if (ret)
+		return ret;
+
+	if (!insn->sib.nbytes)
+		return -EINVAL;
+
+	*base_offset = get_reg_offset(insn, regs, REG_TYPE_BASE);
+	indx_offset = get_reg_offset(insn, regs, REG_TYPE_INDEX);
+
+	/*
+	 * Negative values in the base and index offset means an error when
+	 * decoding the SIB byte. Except -EDOM, which means that the registers
+	 * should not be used in the address computation.
+	 */
+	if (*base_offset == -EDOM)
+		base = 0;
+	else if (*base_offset < 0)
+		return -EINVAL;
+	else
+		base = regs_get_register(regs, *base_offset);
+
+	if (indx_offset == -EDOM)
+		indx = 0;
+	else if (indx_offset < 0)
+		return -EINVAL;
+	else
+		indx = regs_get_register(regs, indx_offset);
+
+	if (insn->addr_bytes == 4) {
+		int addr32, base32, idx32;
+
+		base32 = base & 0xffffffff;
+		idx32 = indx & 0xffffffff;
+
+		addr32 = base32 + idx32 * (1 << X86_SIB_SCALE(insn->sib.value));
+		addr32 += insn->displacement.value;
+
+		*eff_addr = addr32 & 0xffffffff;
+	} else {
+		*eff_addr = base + indx * (1 << X86_SIB_SCALE(insn->sib.value));
+		*eff_addr += insn->displacement.value;
+	}
+
+	return 0;
+}
+
+/**
+ * get_addr_ref_16() - Obtain the 16-bit address referred by instruction
+ * @insn:	Instruction containing ModRM byte and displacement
+ * @regs:	Register values as seen when entering kernel mode
+ *
+ * This function is to be used with 16-bit address encodings. Obtain the memory
+ * address referred by the instruction's ModRM and displacement bytes. Also, the
+ * segment used as base is determined by either any segment override prefixes in
+ * @insn or the default segment of the registers involved in the address
+ * computation. In protected mode, segment limits are enforced.
+ *
+ * Returns:
+ *
+ * Linear address referenced by the instruction operands on success.
+ *
+ * -1L on error.
+ */
+static void __user *get_addr_ref_16(struct insn *insn, struct pt_regs *regs)
+{
+	unsigned long linear_addr = -1L, seg_base, seg_limit;
+	int ret, regoff;
+	short eff_addr;
+	long tmp;
+
+	if (insn_get_displacement(insn))
+		goto out;
+
+	if (insn->addr_bytes != 2)
+		goto out;
+
+	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
+		ret = get_eff_addr_reg(insn, regs, &regoff, &tmp);
+		if (ret)
+			goto out;
+
+		eff_addr = tmp;
+	} else {
+		ret = get_eff_addr_modrm_16(insn, regs, &regoff, &eff_addr);
+		if (ret)
+			goto out;
+	}
+
+	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, &seg_limit);
+	if (ret)
+		goto out;
+
+	/*
+	 * Before computing the linear address, make sure the effective address
+	 * is within the limits of the segment. In virtual-8086 mode, segment
+	 * limits are not enforced. In such a case, the segment limit is -1L to
+	 * reflect this fact.
+	 */
+	if ((unsigned long)(eff_addr & 0xffff) > seg_limit)
+		goto out;
+
+	linear_addr = (unsigned long)(eff_addr & 0xffff) + seg_base;
+
+	/* Limit linear address to 20 bits */
+	if (v8086_mode(regs))
+		linear_addr &= 0xfffff;
+
+out:
+	return (void __user *)linear_addr;
+}
+
+/**
+ * get_addr_ref_32() - Obtain a 32-bit linear address
+ * @insn:	Instruction with ModRM, SIB bytes and displacement
+ * @regs:	Register values as seen when entering kernel mode
+ *
+ * This function is to be used with 32-bit address encodings to obtain the
+ * linear memory address referred by the instruction's ModRM, SIB,
+ * displacement bytes and segment base address, as applicable. If in protected
+ * mode, segment limits are enforced.
+ *
+ * Returns:
+ *
+ * Linear address referenced by instruction and registers on success.
+ *
+ * -1L on error.
+ */
+static void __user *get_addr_ref_32(struct insn *insn, struct pt_regs *regs)
+{
+	unsigned long linear_addr = -1L, seg_base, seg_limit;
+	int eff_addr, regoff;
+	long tmp;
+	int ret;
+
+	if (insn->addr_bytes != 4)
+		goto out;
+
+	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
+		ret = get_eff_addr_reg(insn, regs, &regoff, &tmp);
+		if (ret)
+			goto out;
+
+		eff_addr = tmp;
+
+	} else {
+		if (insn->sib.nbytes) {
+			ret = get_eff_addr_sib(insn, regs, &regoff, &tmp);
+			if (ret)
+				goto out;
+
+			eff_addr = tmp;
+		} else {
+			ret = get_eff_addr_modrm(insn, regs, &regoff, &tmp);
+			if (ret)
+				goto out;
+
+			eff_addr = tmp;
+		}
+	}
+
+	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, &seg_limit);
+	if (ret)
+		goto out;
+
+	/*
+	 * In protected mode, before computing the linear address, make sure
+	 * the effective address is within the limits of the segment.
+	 * 32-bit addresses can be used in long and virtual-8086 modes if an
+	 * address override prefix is used. In such cases, segment limits are
+	 * not enforced. When in virtual-8086 mode, the segment limit is -1L
+	 * to reflect this situation.
+	 *
+	 * After computed, the effective address is treated as an unsigned
+	 * quantity.
+	 */
+	if (!any_64bit_mode(regs) && ((unsigned int)eff_addr > seg_limit))
+		goto out;
+
+	/*
+	 * Even though 32-bit address encodings are allowed in virtual-8086
+	 * mode, the address range is still limited to [0x-0xffff].
+	 */
+	if (v8086_mode(regs) && (eff_addr & ~0xffff))
+		goto out;
+
+	/*
+	 * Data type long could be 64 bits in size. Ensure that our 32-bit
+	 * effective address is not sign-extended when computing the linear
+	 * address.
+	 */
+	linear_addr = (unsigned long)(eff_addr & 0xffffffff) + seg_base;
+
+	/* Limit linear address to 20 bits */
+	if (v8086_mode(regs))
+		linear_addr &= 0xfffff;
+
+out:
+	return (void __user *)linear_addr;
+}
+
+/**
+ * get_addr_ref_64() - Obtain a 64-bit linear address
+ * @insn:	Instruction struct with ModRM and SIB bytes and displacement
+ * @regs:	Structure with register values as seen when entering kernel mode
+ *
+ * This function is to be used with 64-bit address encodings to obtain the
+ * linear memory address referred by the instruction's ModRM, SIB,
+ * displacement bytes and segment base address, as applicable.
+ *
+ * Returns:
+ *
+ * Linear address referenced by instruction and registers on success.
+ *
+ * -1L on error.
+ */
+#ifndef CONFIG_X86_64
+static void __user *get_addr_ref_64(struct insn *insn, struct pt_regs *regs)
+{
+	return (void __user *)-1L;
+}
+#else
+static void __user *get_addr_ref_64(struct insn *insn, struct pt_regs *regs)
+{
+	unsigned long linear_addr = -1L, seg_base;
+	int regoff, ret;
+	long eff_addr;
+
+	if (insn->addr_bytes != 8)
+		goto out;
+
+	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
+		ret = get_eff_addr_reg(insn, regs, &regoff, &eff_addr);
+		if (ret)
+			goto out;
+
+	} else {
+		if (insn->sib.nbytes) {
+			ret = get_eff_addr_sib(insn, regs, &regoff, &eff_addr);
+			if (ret)
+				goto out;
+		} else {
+			ret = get_eff_addr_modrm(insn, regs, &regoff, &eff_addr);
+			if (ret)
+				goto out;
+		}
+
+	}
+
+	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, NULL);
+	if (ret)
+		goto out;
+
+	linear_addr = (unsigned long)eff_addr + seg_base;
+
+out:
+	return (void __user *)linear_addr;
+}
+#endif /* CONFIG_X86_64 */
+
+/**
+ * insn_get_addr_ref() - Obtain the linear address referred by instruction
+ * @insn:	Instruction structure containing ModRM byte and displacement
+ * @regs:	Structure with register values as seen when entering kernel mode
+ *
+ * Obtain the linear address referred by the instruction's ModRM, SIB and
+ * displacement bytes, and segment base, as applicable. In protected mode,
+ * segment limits are enforced.
+ *
+ * Returns:
+ *
+ * Linear address referenced by instruction and registers on success.
+ *
+ * -1L on error.
+ */
+void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
+{
+	if (!insn || !regs)
+		return (void __user *)-1L;
+
+	switch (insn->addr_bytes) {
+	case 2:
+		return get_addr_ref_16(insn, regs);
+	case 4:
+		return get_addr_ref_32(insn, regs);
+	case 8:
+		return get_addr_ref_64(insn, regs);
+	default:
+		return (void __user *)-1L;
+	}
+}
+
+static int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip)
+{
+	unsigned long seg_base = 0;
+
+	/*
+	 * If not in user-space long mode, a custom code segment could be in
+	 * use. This is true in protected mode (if the process defined a local
+	 * descriptor table), or virtual-8086 mode. In most of the cases
+	 * seg_base will be zero as in USER_CS.
+	 */
+	if (!user_64bit_mode(regs)) {
+		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
+		if (seg_base == -1L)
+			return -EINVAL;
+	}
+
+	*ip = seg_base + regs->ip;
+
+	return 0;
+}
diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
index a1d24fdc07cf..19d6dbc704f3 100644
--- a/arch/x86/lib/insn-eval.c
+++ b/arch/x86/lib/insn-eval.c
@@ -18,63 +18,11 @@
 #undef pr_fmt
 #define pr_fmt(fmt) "insn: " fmt
 
-enum reg_type {
-	REG_TYPE_RM = 0,
-	REG_TYPE_REG,
-	REG_TYPE_INDEX,
-	REG_TYPE_BASE,
-};
-
-/**
- * is_string_insn() - Determine if instruction is a string instruction
- * @insn:	Instruction containing the opcode to inspect
- *
- * Returns:
- *
- * true if the instruction, determined by the opcode, is any of the
- * string instructions as defined in the Intel Software Development manual.
- * False otherwise.
- */
-static bool is_string_insn(struct insn *insn)
-{
-	insn_get_opcode(insn);
-
-	/* All string instructions have a 1-byte opcode. */
-	if (insn->opcode.nbytes != 1)
-		return false;
-
-	switch (insn->opcode.bytes[0]) {
-	case 0x6c ... 0x6f:	/* INS, OUTS */
-	case 0xa4 ... 0xa7:	/* MOVS, CMPS */
-	case 0xaa ... 0xaf:	/* STOS, LODS, SCAS */
-		return true;
-	default:
-		return false;
-	}
-}
-
-/**
- * insn_has_rep_prefix() - Determine if instruction has a REP prefix
- * @insn:	Instruction containing the prefix to inspect
- *
- * Returns:
- *
- * true if the instruction has a REP prefix, false if not.
- */
-bool insn_has_rep_prefix(struct insn *insn)
-{
-	insn_byte_t p;
-	int i;
-
-	insn_get_prefixes(insn);
-
-	for_each_insn_prefix(insn, i, p) {
-		if (p == 0xf2 || p == 0xf3)
-			return true;
-	}
+static int get_seg_base_limit(struct insn *insn, struct pt_regs *regs,
+			      int regoff, unsigned long *base,
+			      unsigned long *limit);
 
-	return false;
-}
+#include "insn-eval-shared.c"
 
 /**
  * get_seg_reg_override_idx() - obtain segment register override index
@@ -412,176 +360,6 @@ static short get_segment_selector(struct pt_regs *regs, int seg_reg_idx)
 #endif /* CONFIG_X86_64 */
 }
 
-static int get_reg_offset(struct insn *insn, struct pt_regs *regs,
-			  enum reg_type type)
-{
-	int regno = 0;
-
-	static const int regoff[] = {
-		offsetof(struct pt_regs, ax),
-		offsetof(struct pt_regs, cx),
-		offsetof(struct pt_regs, dx),
-		offsetof(struct pt_regs, bx),
-		offsetof(struct pt_regs, sp),
-		offsetof(struct pt_regs, bp),
-		offsetof(struct pt_regs, si),
-		offsetof(struct pt_regs, di),
-#ifdef CONFIG_X86_64
-		offsetof(struct pt_regs, r8),
-		offsetof(struct pt_regs, r9),
-		offsetof(struct pt_regs, r10),
-		offsetof(struct pt_regs, r11),
-		offsetof(struct pt_regs, r12),
-		offsetof(struct pt_regs, r13),
-		offsetof(struct pt_regs, r14),
-		offsetof(struct pt_regs, r15),
-#endif
-	};
-	int nr_registers = ARRAY_SIZE(regoff);
-	/*
-	 * Don't possibly decode a 32-bit instructions as
-	 * reading a 64-bit-only register.
-	 */
-	if (IS_ENABLED(CONFIG_X86_64) && !insn->x86_64)
-		nr_registers -= 8;
-
-	switch (type) {
-	case REG_TYPE_RM:
-		regno = X86_MODRM_RM(insn->modrm.value);
-
-		/*
-		 * ModRM.mod == 0 and ModRM.rm == 5 means a 32-bit displacement
-		 * follows the ModRM byte.
-		 */
-		if (!X86_MODRM_MOD(insn->modrm.value) && regno == 5)
-			return -EDOM;
-
-		if (X86_REX_B(insn->rex_prefix.value))
-			regno += 8;
-		break;
-
-	case REG_TYPE_REG:
-		regno = X86_MODRM_REG(insn->modrm.value);
-
-		if (X86_REX_R(insn->rex_prefix.value))
-			regno += 8;
-		break;
-
-	case REG_TYPE_INDEX:
-		regno = X86_SIB_INDEX(insn->sib.value);
-		if (X86_REX_X(insn->rex_prefix.value))
-			regno += 8;
-
-		/*
-		 * If ModRM.mod != 3 and SIB.index = 4 the scale*index
-		 * portion of the address computation is null. This is
-		 * true only if REX.X is 0. In such a case, the SIB index
-		 * is used in the address computation.
-		 */
-		if (X86_MODRM_MOD(insn->modrm.value) != 3 && regno == 4)
-			return -EDOM;
-		break;
-
-	case REG_TYPE_BASE:
-		regno = X86_SIB_BASE(insn->sib.value);
-		/*
-		 * If ModRM.mod is 0 and SIB.base == 5, the base of the
-		 * register-indirect addressing is 0. In this case, a
-		 * 32-bit displacement follows the SIB byte.
-		 */
-		if (!X86_MODRM_MOD(insn->modrm.value) && regno == 5)
-			return -EDOM;
-
-		if (X86_REX_B(insn->rex_prefix.value))
-			regno += 8;
-		break;
-
-	default:
-		pr_err_ratelimited("invalid register type: %d\n", type);
-		return -EINVAL;
-	}
-
-	if (regno >= nr_registers) {
-		WARN_ONCE(1, "decoded an instruction with an invalid register");
-		return -EINVAL;
-	}
-	return regoff[regno];
-}
-
-/**
- * get_reg_offset_16() - Obtain offset of register indicated by instruction
- * @insn:	Instruction containing ModRM byte
- * @regs:	Register values as seen when entering kernel mode
- * @offs1:	Offset of the first operand register
- * @offs2:	Offset of the second operand register, if applicable
- *
- * Obtain the offset, in pt_regs, of the registers indicated by the ModRM byte
- * in @insn. This function is to be used with 16-bit address encodings. The
- * @offs1 and @offs2 will be written with the offset of the two registers
- * indicated by the instruction. In cases where any of the registers is not
- * referenced by the instruction, the value will be set to -EDOM.
- *
- * Returns:
- *
- * 0 on success, -EINVAL on error.
- */
-static int get_reg_offset_16(struct insn *insn, struct pt_regs *regs,
-			     int *offs1, int *offs2)
-{
-	/*
-	 * 16-bit addressing can use one or two registers. Specifics of
-	 * encodings are given in Table 2-1. "16-Bit Addressing Forms with the
-	 * ModR/M Byte" of the Intel Software Development Manual.
-	 */
-	static const int regoff1[] = {
-		offsetof(struct pt_regs, bx),
-		offsetof(struct pt_regs, bx),
-		offsetof(struct pt_regs, bp),
-		offsetof(struct pt_regs, bp),
-		offsetof(struct pt_regs, si),
-		offsetof(struct pt_regs, di),
-		offsetof(struct pt_regs, bp),
-		offsetof(struct pt_regs, bx),
-	};
-
-	static const int regoff2[] = {
-		offsetof(struct pt_regs, si),
-		offsetof(struct pt_regs, di),
-		offsetof(struct pt_regs, si),
-		offsetof(struct pt_regs, di),
-		-EDOM,
-		-EDOM,
-		-EDOM,
-		-EDOM,
-	};
-
-	if (!offs1 || !offs2)
-		return -EINVAL;
-
-	/* Operand is a register, use the generic function. */
-	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
-		*offs1 = insn_get_modrm_rm_off(insn, regs);
-		*offs2 = -EDOM;
-		return 0;
-	}
-
-	*offs1 = regoff1[X86_MODRM_RM(insn->modrm.value)];
-	*offs2 = regoff2[X86_MODRM_RM(insn->modrm.value)];
-
-	/*
-	 * If ModRM.mod is 0 and ModRM.rm is 110b, then we use displacement-
-	 * only addressing. This means that no registers are involved in
-	 * computing the effective address. Thus, ensure that the first
-	 * register offset is invalid. The second register offset is already
-	 * invalid under the aforementioned conditions.
-	 */
-	if ((X86_MODRM_MOD(insn->modrm.value) == 0) &&
-	    (X86_MODRM_RM(insn->modrm.value) == 6))
-		*offs1 = -EDOM;
-
-	return 0;
-}
-
 /**
  * get_desc() - Obtain contents of a segment descriptor
  * @out:	Segment descriptor contents on success
@@ -818,38 +596,6 @@ int insn_get_code_seg_params(struct pt_regs *regs)
 	}
 }
 
-/**
- * insn_get_modrm_rm_off() - Obtain register in r/m part of the ModRM byte
- * @insn:	Instruction containing the ModRM byte
- * @regs:	Register values as seen when entering kernel mode
- *
- * Returns:
- *
- * The register indicated by the r/m part of the ModRM byte. The
- * register is obtained as an offset from the base of pt_regs. In specific
- * cases, the returned value can be -EDOM to indicate that the particular value
- * of ModRM does not refer to a register and shall be ignored.
- */
-int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs)
-{
-	return get_reg_offset(insn, regs, REG_TYPE_RM);
-}
-
-/**
- * insn_get_modrm_reg_off() - Obtain register in reg part of the ModRM byte
- * @insn:	Instruction containing the ModRM byte
- * @regs:	Register values as seen when entering kernel mode
- *
- * Returns:
- *
- * The register indicated by the reg part of the ModRM byte. The
- * register is obtained as an offset from the base of pt_regs.
- */
-int insn_get_modrm_reg_off(struct insn *insn, struct pt_regs *regs)
-{
-	return get_reg_offset(insn, regs, REG_TYPE_REG);
-}
-
 /**
  * get_seg_base_limit() - obtain base address and limit of a segment
  * @insn:	Instruction. Must be valid.
@@ -898,546 +644,6 @@ static int get_seg_base_limit(struct insn *insn, struct pt_regs *regs,
 	return 0;
 }
 
-/**
- * get_eff_addr_reg() - Obtain effective address from register operand
- * @insn:	Instruction. Must be valid.
- * @regs:	Register values as seen when entering kernel mode
- * @regoff:	Obtained operand offset, in pt_regs, with the effective address
- * @eff_addr:	Obtained effective address
- *
- * Obtain the effective address stored in the register operand as indicated by
- * the ModRM byte. This function is to be used only with register addressing
- * (i.e.,  ModRM.mod is 3). The effective address is saved in @eff_addr. The
- * register operand, as an offset from the base of pt_regs, is saved in @regoff;
- * such offset can then be used to resolve the segment associated with the
- * operand. This function can be used with any of the supported address sizes
- * in x86.
- *
- * Returns:
- *
- * 0 on success. @eff_addr will have the effective address stored in the
- * operand indicated by ModRM. @regoff will have such operand as an offset from
- * the base of pt_regs.
- *
- * -EINVAL on error.
- */
-static int get_eff_addr_reg(struct insn *insn, struct pt_regs *regs,
-			    int *regoff, long *eff_addr)
-{
-	int ret;
-
-	ret = insn_get_modrm(insn);
-	if (ret)
-		return ret;
-
-	if (X86_MODRM_MOD(insn->modrm.value) != 3)
-		return -EINVAL;
-
-	*regoff = get_reg_offset(insn, regs, REG_TYPE_RM);
-	if (*regoff < 0)
-		return -EINVAL;
-
-	/* Ignore bytes that are outside the address size. */
-	if (insn->addr_bytes == 2)
-		*eff_addr = regs_get_register(regs, *regoff) & 0xffff;
-	else if (insn->addr_bytes == 4)
-		*eff_addr = regs_get_register(regs, *regoff) & 0xffffffff;
-	else /* 64-bit address */
-		*eff_addr = regs_get_register(regs, *regoff);
-
-	return 0;
-}
-
-/**
- * get_eff_addr_modrm() - Obtain referenced effective address via ModRM
- * @insn:	Instruction. Must be valid.
- * @regs:	Register values as seen when entering kernel mode
- * @regoff:	Obtained operand offset, in pt_regs, associated with segment
- * @eff_addr:	Obtained effective address
- *
- * Obtain the effective address referenced by the ModRM byte of @insn. After
- * identifying the registers involved in the register-indirect memory reference,
- * its value is obtained from the operands in @regs. The computed address is
- * stored @eff_addr. Also, the register operand that indicates the associated
- * segment is stored in @regoff, this parameter can later be used to determine
- * such segment.
- *
- * Returns:
- *
- * 0 on success. @eff_addr will have the referenced effective address. @regoff
- * will have a register, as an offset from the base of pt_regs, that can be used
- * to resolve the associated segment.
- *
- * -EINVAL on error.
- */
-static int get_eff_addr_modrm(struct insn *insn, struct pt_regs *regs,
-			      int *regoff, long *eff_addr)
-{
-	long tmp;
-	int ret;
-
-	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
-		return -EINVAL;
-
-	ret = insn_get_modrm(insn);
-	if (ret)
-		return ret;
-
-	if (X86_MODRM_MOD(insn->modrm.value) > 2)
-		return -EINVAL;
-
-	*regoff = get_reg_offset(insn, regs, REG_TYPE_RM);
-
-	/*
-	 * -EDOM means that we must ignore the address_offset. In such a case,
-	 * in 64-bit mode the effective address relative to the rIP of the
-	 * following instruction.
-	 */
-	if (*regoff == -EDOM) {
-		if (any_64bit_mode(regs))
-			tmp = regs->ip + insn->length;
-		else
-			tmp = 0;
-	} else if (*regoff < 0) {
-		return -EINVAL;
-	} else {
-		tmp = regs_get_register(regs, *regoff);
-	}
-
-	if (insn->addr_bytes == 4) {
-		int addr32 = (int)(tmp & 0xffffffff) + insn->displacement.value;
-
-		*eff_addr = addr32 & 0xffffffff;
-	} else {
-		*eff_addr = tmp + insn->displacement.value;
-	}
-
-	return 0;
-}
-
-/**
- * get_eff_addr_modrm_16() - Obtain referenced effective address via ModRM
- * @insn:	Instruction. Must be valid.
- * @regs:	Register values as seen when entering kernel mode
- * @regoff:	Obtained operand offset, in pt_regs, associated with segment
- * @eff_addr:	Obtained effective address
- *
- * Obtain the 16-bit effective address referenced by the ModRM byte of @insn.
- * After identifying the registers involved in the register-indirect memory
- * reference, its value is obtained from the operands in @regs. The computed
- * address is stored @eff_addr. Also, the register operand that indicates
- * the associated segment is stored in @regoff, this parameter can later be used
- * to determine such segment.
- *
- * Returns:
- *
- * 0 on success. @eff_addr will have the referenced effective address. @regoff
- * will have a register, as an offset from the base of pt_regs, that can be used
- * to resolve the associated segment.
- *
- * -EINVAL on error.
- */
-static int get_eff_addr_modrm_16(struct insn *insn, struct pt_regs *regs,
-				 int *regoff, short *eff_addr)
-{
-	int addr_offset1, addr_offset2, ret;
-	short addr1 = 0, addr2 = 0, displacement;
-
-	if (insn->addr_bytes != 2)
-		return -EINVAL;
-
-	insn_get_modrm(insn);
-
-	if (!insn->modrm.nbytes)
-		return -EINVAL;
-
-	if (X86_MODRM_MOD(insn->modrm.value) > 2)
-		return -EINVAL;
-
-	ret = get_reg_offset_16(insn, regs, &addr_offset1, &addr_offset2);
-	if (ret < 0)
-		return -EINVAL;
-
-	/*
-	 * Don't fail on invalid offset values. They might be invalid because
-	 * they cannot be used for this particular value of ModRM. Instead, use
-	 * them in the computation only if they contain a valid value.
-	 */
-	if (addr_offset1 != -EDOM)
-		addr1 = regs_get_register(regs, addr_offset1) & 0xffff;
-
-	if (addr_offset2 != -EDOM)
-		addr2 = regs_get_register(regs, addr_offset2) & 0xffff;
-
-	displacement = insn->displacement.value & 0xffff;
-	*eff_addr = addr1 + addr2 + displacement;
-
-	/*
-	 * The first operand register could indicate to use of either SS or DS
-	 * registers to obtain the segment selector.  The second operand
-	 * register can only indicate the use of DS. Thus, the first operand
-	 * will be used to obtain the segment selector.
-	 */
-	*regoff = addr_offset1;
-
-	return 0;
-}
-
-/**
- * get_eff_addr_sib() - Obtain referenced effective address via SIB
- * @insn:	Instruction. Must be valid.
- * @regs:	Register values as seen when entering kernel mode
- * @regoff:	Obtained operand offset, in pt_regs, associated with segment
- * @eff_addr:	Obtained effective address
- *
- * Obtain the effective address referenced by the SIB byte of @insn. After
- * identifying the registers involved in the indexed, register-indirect memory
- * reference, its value is obtained from the operands in @regs. The computed
- * address is stored @eff_addr. Also, the register operand that indicates the
- * associated segment is stored in @regoff, this parameter can later be used to
- * determine such segment.
- *
- * Returns:
- *
- * 0 on success. @eff_addr will have the referenced effective address.
- * @base_offset will have a register, as an offset from the base of pt_regs,
- * that can be used to resolve the associated segment.
- *
- * Negative value on error.
- */
-static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs,
-			    int *base_offset, long *eff_addr)
-{
-	long base, indx;
-	int indx_offset;
-	int ret;
-
-	if (insn->addr_bytes != 8 && insn->addr_bytes != 4)
-		return -EINVAL;
-
-	ret = insn_get_modrm(insn);
-	if (ret)
-		return ret;
-
-	if (!insn->modrm.nbytes)
-		return -EINVAL;
-
-	if (X86_MODRM_MOD(insn->modrm.value) > 2)
-		return -EINVAL;
-
-	ret = insn_get_sib(insn);
-	if (ret)
-		return ret;
-
-	if (!insn->sib.nbytes)
-		return -EINVAL;
-
-	*base_offset = get_reg_offset(insn, regs, REG_TYPE_BASE);
-	indx_offset = get_reg_offset(insn, regs, REG_TYPE_INDEX);
-
-	/*
-	 * Negative values in the base and index offset means an error when
-	 * decoding the SIB byte. Except -EDOM, which means that the registers
-	 * should not be used in the address computation.
-	 */
-	if (*base_offset == -EDOM)
-		base = 0;
-	else if (*base_offset < 0)
-		return -EINVAL;
-	else
-		base = regs_get_register(regs, *base_offset);
-
-	if (indx_offset == -EDOM)
-		indx = 0;
-	else if (indx_offset < 0)
-		return -EINVAL;
-	else
-		indx = regs_get_register(regs, indx_offset);
-
-	if (insn->addr_bytes == 4) {
-		int addr32, base32, idx32;
-
-		base32 = base & 0xffffffff;
-		idx32 = indx & 0xffffffff;
-
-		addr32 = base32 + idx32 * (1 << X86_SIB_SCALE(insn->sib.value));
-		addr32 += insn->displacement.value;
-
-		*eff_addr = addr32 & 0xffffffff;
-	} else {
-		*eff_addr = base + indx * (1 << X86_SIB_SCALE(insn->sib.value));
-		*eff_addr += insn->displacement.value;
-	}
-
-	return 0;
-}
-
-/**
- * get_addr_ref_16() - Obtain the 16-bit address referred by instruction
- * @insn:	Instruction containing ModRM byte and displacement
- * @regs:	Register values as seen when entering kernel mode
- *
- * This function is to be used with 16-bit address encodings. Obtain the memory
- * address referred by the instruction's ModRM and displacement bytes. Also, the
- * segment used as base is determined by either any segment override prefixes in
- * @insn or the default segment of the registers involved in the address
- * computation. In protected mode, segment limits are enforced.
- *
- * Returns:
- *
- * Linear address referenced by the instruction operands on success.
- *
- * -1L on error.
- */
-static void __user *get_addr_ref_16(struct insn *insn, struct pt_regs *regs)
-{
-	unsigned long linear_addr = -1L, seg_base, seg_limit;
-	int ret, regoff;
-	short eff_addr;
-	long tmp;
-
-	if (insn_get_displacement(insn))
-		goto out;
-
-	if (insn->addr_bytes != 2)
-		goto out;
-
-	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
-		ret = get_eff_addr_reg(insn, regs, &regoff, &tmp);
-		if (ret)
-			goto out;
-
-		eff_addr = tmp;
-	} else {
-		ret = get_eff_addr_modrm_16(insn, regs, &regoff, &eff_addr);
-		if (ret)
-			goto out;
-	}
-
-	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, &seg_limit);
-	if (ret)
-		goto out;
-
-	/*
-	 * Before computing the linear address, make sure the effective address
-	 * is within the limits of the segment. In virtual-8086 mode, segment
-	 * limits are not enforced. In such a case, the segment limit is -1L to
-	 * reflect this fact.
-	 */
-	if ((unsigned long)(eff_addr & 0xffff) > seg_limit)
-		goto out;
-
-	linear_addr = (unsigned long)(eff_addr & 0xffff) + seg_base;
-
-	/* Limit linear address to 20 bits */
-	if (v8086_mode(regs))
-		linear_addr &= 0xfffff;
-
-out:
-	return (void __user *)linear_addr;
-}
-
-/**
- * get_addr_ref_32() - Obtain a 32-bit linear address
- * @insn:	Instruction with ModRM, SIB bytes and displacement
- * @regs:	Register values as seen when entering kernel mode
- *
- * This function is to be used with 32-bit address encodings to obtain the
- * linear memory address referred by the instruction's ModRM, SIB,
- * displacement bytes and segment base address, as applicable. If in protected
- * mode, segment limits are enforced.
- *
- * Returns:
- *
- * Linear address referenced by instruction and registers on success.
- *
- * -1L on error.
- */
-static void __user *get_addr_ref_32(struct insn *insn, struct pt_regs *regs)
-{
-	unsigned long linear_addr = -1L, seg_base, seg_limit;
-	int eff_addr, regoff;
-	long tmp;
-	int ret;
-
-	if (insn->addr_bytes != 4)
-		goto out;
-
-	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
-		ret = get_eff_addr_reg(insn, regs, &regoff, &tmp);
-		if (ret)
-			goto out;
-
-		eff_addr = tmp;
-
-	} else {
-		if (insn->sib.nbytes) {
-			ret = get_eff_addr_sib(insn, regs, &regoff, &tmp);
-			if (ret)
-				goto out;
-
-			eff_addr = tmp;
-		} else {
-			ret = get_eff_addr_modrm(insn, regs, &regoff, &tmp);
-			if (ret)
-				goto out;
-
-			eff_addr = tmp;
-		}
-	}
-
-	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, &seg_limit);
-	if (ret)
-		goto out;
-
-	/*
-	 * In protected mode, before computing the linear address, make sure
-	 * the effective address is within the limits of the segment.
-	 * 32-bit addresses can be used in long and virtual-8086 modes if an
-	 * address override prefix is used. In such cases, segment limits are
-	 * not enforced. When in virtual-8086 mode, the segment limit is -1L
-	 * to reflect this situation.
-	 *
-	 * After computed, the effective address is treated as an unsigned
-	 * quantity.
-	 */
-	if (!any_64bit_mode(regs) && ((unsigned int)eff_addr > seg_limit))
-		goto out;
-
-	/*
-	 * Even though 32-bit address encodings are allowed in virtual-8086
-	 * mode, the address range is still limited to [0x-0xffff].
-	 */
-	if (v8086_mode(regs) && (eff_addr & ~0xffff))
-		goto out;
-
-	/*
-	 * Data type long could be 64 bits in size. Ensure that our 32-bit
-	 * effective address is not sign-extended when computing the linear
-	 * address.
-	 */
-	linear_addr = (unsigned long)(eff_addr & 0xffffffff) + seg_base;
-
-	/* Limit linear address to 20 bits */
-	if (v8086_mode(regs))
-		linear_addr &= 0xfffff;
-
-out:
-	return (void __user *)linear_addr;
-}
-
-/**
- * get_addr_ref_64() - Obtain a 64-bit linear address
- * @insn:	Instruction struct with ModRM and SIB bytes and displacement
- * @regs:	Structure with register values as seen when entering kernel mode
- *
- * This function is to be used with 64-bit address encodings to obtain the
- * linear memory address referred by the instruction's ModRM, SIB,
- * displacement bytes and segment base address, as applicable.
- *
- * Returns:
- *
- * Linear address referenced by instruction and registers on success.
- *
- * -1L on error.
- */
-#ifndef CONFIG_X86_64
-static void __user *get_addr_ref_64(struct insn *insn, struct pt_regs *regs)
-{
-	return (void __user *)-1L;
-}
-#else
-static void __user *get_addr_ref_64(struct insn *insn, struct pt_regs *regs)
-{
-	unsigned long linear_addr = -1L, seg_base;
-	int regoff, ret;
-	long eff_addr;
-
-	if (insn->addr_bytes != 8)
-		goto out;
-
-	if (X86_MODRM_MOD(insn->modrm.value) == 3) {
-		ret = get_eff_addr_reg(insn, regs, &regoff, &eff_addr);
-		if (ret)
-			goto out;
-
-	} else {
-		if (insn->sib.nbytes) {
-			ret = get_eff_addr_sib(insn, regs, &regoff, &eff_addr);
-			if (ret)
-				goto out;
-		} else {
-			ret = get_eff_addr_modrm(insn, regs, &regoff, &eff_addr);
-			if (ret)
-				goto out;
-		}
-
-	}
-
-	ret = get_seg_base_limit(insn, regs, regoff, &seg_base, NULL);
-	if (ret)
-		goto out;
-
-	linear_addr = (unsigned long)eff_addr + seg_base;
-
-out:
-	return (void __user *)linear_addr;
-}
-#endif /* CONFIG_X86_64 */
-
-/**
- * insn_get_addr_ref() - Obtain the linear address referred by instruction
- * @insn:	Instruction structure containing ModRM byte and displacement
- * @regs:	Structure with register values as seen when entering kernel mode
- *
- * Obtain the linear address referred by the instruction's ModRM, SIB and
- * displacement bytes, and segment base, as applicable. In protected mode,
- * segment limits are enforced.
- *
- * Returns:
- *
- * Linear address referenced by instruction and registers on success.
- *
- * -1L on error.
- */
-void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs)
-{
-	if (!insn || !regs)
-		return (void __user *)-1L;
-
-	switch (insn->addr_bytes) {
-	case 2:
-		return get_addr_ref_16(insn, regs);
-	case 4:
-		return get_addr_ref_32(insn, regs);
-	case 8:
-		return get_addr_ref_64(insn, regs);
-	default:
-		return (void __user *)-1L;
-	}
-}
-
-static int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip)
-{
-	unsigned long seg_base = 0;
-
-	/*
-	 * If not in user-space long mode, a custom code segment could be in
-	 * use. This is true in protected mode (if the process defined a local
-	 * descriptor table), or virtual-8086 mode. In most of the cases
-	 * seg_base will be zero as in USER_CS.
-	 */
-	if (!user_64bit_mode(regs)) {
-		seg_base = insn_get_seg_base(regs, INAT_SEG_REG_CS);
-		if (seg_base == -1L)
-			return -EINVAL;
-	}
-
-	*ip = seg_base + regs->ip;
-
-	return 0;
-}
-
 /**
  * insn_fetch_from_user() - Copy instruction bytes from user-space memory
  * @regs:	Structure with register values as seen when entering kernel mode
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 11/12] x86/sev: Handle CLFLUSH MMIO events
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (9 preceding siblings ...)
  2021-09-13 15:56 ` [PATCH v2 10/12] x86/sev: Add MMIO handling support to boot/compressed/ code Joerg Roedel
@ 2021-09-13 15:56 ` Joerg Roedel
  2021-09-13 15:56 ` [PATCH v2 12/12] x86/sev: Support kexec under SEV-ES with AP Jump Table blob Joerg Roedel
  2021-09-13 16:02 ` [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Dave Hansen
  12 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:56 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

Handle CLFLUSH instruction to MMIO memory in the #VC handler. The
instruction is ignored by the handler, as the Hypervisor is
responsible for cache management of emulated MMIO memory.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/kernel/sev-shared.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index a7a0793c4f98..682fa202444f 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -632,6 +632,15 @@ static enum es_result vc_handle_mmio_twobyte_ops(struct ghcb *ghcb,
 	long *reg_data;
 
 	switch (insn->opcode.bytes[1]) {
+		/* CLFLUSH */
+	case 0xae:
+		/*
+		 * Ignore CLFLUSHes - those go to emulated MMIO anyway and the
+		 * hypervisor is responsible for cache management.
+		 */
+		ret = ES_OK;
+		break;
+
 		/* MMIO Read w/ zero-extension */
 	case 0xb6:
 		bytes = 1;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 12/12] x86/sev: Support kexec under SEV-ES with AP Jump Table blob
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (10 preceding siblings ...)
  2021-09-13 15:56 ` [PATCH v2 11/12] x86/sev: Handle CLFLUSH MMIO events Joerg Roedel
@ 2021-09-13 15:56 ` Joerg Roedel
  2021-09-13 16:02 ` [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Dave Hansen
  12 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 15:56 UTC (permalink / raw)
  To: x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

From: Joerg Roedel <jroedel@suse.de>

When the AP Jump Table blob is installed the kernel can hand over the
APs from the old to the new kernel. Enable kexec when the AP Jump
Table blob has been installed.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 arch/x86/include/asm/sev.h         |  2 ++
 arch/x86/kernel/machine_kexec_64.c |  6 +++++-
 arch/x86/kernel/sev.c              | 12 ++++++++++++
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index cd14b6e10f12..61910caf2a0d 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -87,6 +87,7 @@ static __always_inline void sev_es_stop_this_cpu(void)
 	if (static_branch_unlikely(&sev_es_enable_key))
 		__sev_es_stop_this_cpu();
 }
+bool sev_kexec_supported(void);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -94,6 +95,7 @@ static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { ret
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
 static inline void sev_es_stop_this_cpu(void) { }
+static bool sev_kexec_supported(void) { return true; }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index a8e16a411b40..06ff51b2b3fb 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -26,6 +26,7 @@
 #include <asm/kexec-bzimage64.h>
 #include <asm/setup.h>
 #include <asm/set_memory.h>
+#include <asm/sev.h>
 
 #ifdef CONFIG_ACPI
 /*
@@ -597,5 +598,8 @@ void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
  */
 bool arch_kexec_supported(void)
 {
-	return !sev_es_active();
+	if (!sev_kexec_supported())
+		return false;
+
+	return true;
 }
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 5d4b1d317317..8c7f1ad69185 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -901,6 +901,18 @@ static int __init sev_es_setup_ap_jump_table_blob(void)
 }
 core_initcall(sev_es_setup_ap_jump_table_blob);
 
+bool sev_kexec_supported(void)
+{
+	/*
+	 * KEXEC with SEV-ES and more than one CPU is only supported
+	 * when the AP Jump Table is installed.
+	 */
+	if (num_possible_cpus() > 1)
+		return !sev_es_active() || sev_ap_jumptable_blob_installed;
+	else
+		return true;
+}
+
 static void __init alloc_runtime_data(int cpu)
 {
 	struct sev_es_runtime_data *data;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests
  2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
                   ` (11 preceding siblings ...)
  2021-09-13 15:56 ` [PATCH v2 12/12] x86/sev: Support kexec under SEV-ES with AP Jump Table blob Joerg Roedel
@ 2021-09-13 16:02 ` Dave Hansen
  2021-09-13 16:14   ` Joerg Roedel
  12 siblings, 1 reply; 31+ messages in thread
From: Dave Hansen @ 2021-09-13 16:02 UTC (permalink / raw)
  To: Joerg Roedel, x86
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, David Rientjes, Masami Hiramatsu, Martin Radev,
	Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, Eric Biederman,
	Erdem Aktas

On 9/13/21 8:55 AM, Joerg Roedel wrote:
> This does not work under SEV-ES, because the hypervisor has no access
> to the vCPU registers and can't make modifications to them. So an
> SEV-ES guest needs to reset the vCPU itself and park it using the
> AP-reset-hold protocol. Upon wakeup the guest needs to jump to
> real-mode and to the reset-vector configured in the AP-Jump-Table.

How does this end up looking to an end user that tries to kexec() from
an SEV-ES kernel?  Does it just hang?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests
  2021-09-13 16:02 ` [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Dave Hansen
@ 2021-09-13 16:14   ` Joerg Roedel
  2021-09-13 16:21     ` Dave Hansen
  0 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-09-13 16:14 UTC (permalink / raw)
  To: Dave Hansen
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes,
	Masami Hiramatsu, Martin Radev, Tom Lendacky, Kees Cook,
	Cfir Cohen, linux-coco, Andy Lutomirski, Dan Williams,
	Juergen Gross, Mike Stunes, Sean Christopherson, kexec,
	linux-kernel, Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 09:02:38AM -0700, Dave Hansen wrote:
> On 9/13/21 8:55 AM, Joerg Roedel wrote:
> > This does not work under SEV-ES, because the hypervisor has no access
> > to the vCPU registers and can't make modifications to them. So an
> > SEV-ES guest needs to reset the vCPU itself and park it using the
> > AP-reset-hold protocol. Upon wakeup the guest needs to jump to
> > real-mode and to the reset-vector configured in the AP-Jump-Table.
> 
> How does this end up looking to an end user that tries to kexec() from
> an SEV-ES kernel?  Does it just hang?

Yes, the kexec will just hang. This patch-set contains code to disable
the kexec syscalls in situations where it would not work for that
reason.

Actually with the changes to the decompressor in this patch-set the
kexec'ed kernel could boot, but would fail to bring up all the APs.

Regards,

	Joerg

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests
  2021-09-13 16:14   ` Joerg Roedel
@ 2021-09-13 16:21     ` Dave Hansen
  0 siblings, 0 replies; 31+ messages in thread
From: Dave Hansen @ 2021-09-13 16:21 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes,
	Masami Hiramatsu, Martin Radev, Tom Lendacky, Kees Cook,
	Cfir Cohen, linux-coco, Andy Lutomirski, Dan Williams,
	Juergen Gross, Mike Stunes, Sean Christopherson, kexec,
	linux-kernel, Eric Biederman, Erdem Aktas

On 9/13/21 9:14 AM, Joerg Roedel wrote:
> On Mon, Sep 13, 2021 at 09:02:38AM -0700, Dave Hansen wrote:
>> On 9/13/21 8:55 AM, Joerg Roedel wrote:
>>> This does not work under SEV-ES, because the hypervisor has no access
>>> to the vCPU registers and can't make modifications to them. So an
>>> SEV-ES guest needs to reset the vCPU itself and park it using the
>>> AP-reset-hold protocol. Upon wakeup the guest needs to jump to
>>> real-mode and to the reset-vector configured in the AP-Jump-Table.
>> How does this end up looking to an end user that tries to kexec() from
>> an SEV-ES kernel?  Does it just hang?
> Yes, the kexec will just hang. This patch-set contains code to disable
> the kexec syscalls in situations where it would not work for that
> reason.

Got it.  The end-user-visible symptom just wasn't obvious.  If you
revise these, it might be nice to add that so that folks who cherry-pick
stable patches or update to new stable kernels have an idea what this
might fix.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-09-13 15:55 ` [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime Joerg Roedel
@ 2021-11-01 16:10   ` Borislav Petkov
  2021-11-01 21:11     ` Eric W. Biederman
  0 siblings, 1 reply; 31+ messages in thread
From: Borislav Petkov @ 2021-11-01 16:10 UTC (permalink / raw)
  To: Eric Biederman
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:52PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Allow a runtime opt-out of kexec support for architecture code in case
> the kernel is running in an environment where kexec is not properly
> supported yet.
> 
> This will be used on x86 when the kernel is running as an SEV-ES
> guest. SEV-ES guests need special handling for kexec to hand over all
> CPUs to the new kernel. This requires special hypervisor support and
> handling code in the guest which is not yet implemented.
> 
> Cc: stable@vger.kernel.org # v5.10+
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  include/linux/kexec.h |  1 +
>  kernel/kexec.c        | 14 ++++++++++++++
>  kernel/kexec_file.c   |  9 +++++++++
>  3 files changed, 24 insertions(+)

I guess I can take this through the tip tree along with the next one.

Eric?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-11-01 16:10   ` Borislav Petkov
@ 2021-11-01 21:11     ` Eric W. Biederman
  2021-11-02 16:37       ` Joerg Roedel
                         ` (2 more replies)
  0 siblings, 3 replies; 31+ messages in thread
From: Eric W. Biederman @ 2021-11-01 21:11 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

Borislav Petkov <bp@alien8.de> writes:

> On Mon, Sep 13, 2021 at 05:55:52PM +0200, Joerg Roedel wrote:
>> From: Joerg Roedel <jroedel@suse.de>
>> 
>> Allow a runtime opt-out of kexec support for architecture code in case
>> the kernel is running in an environment where kexec is not properly
>> supported yet.
>> 
>> This will be used on x86 when the kernel is running as an SEV-ES
>> guest. SEV-ES guests need special handling for kexec to hand over all
>> CPUs to the new kernel. This requires special hypervisor support and
>> handling code in the guest which is not yet implemented.
>> 
>> Cc: stable@vger.kernel.org # v5.10+
>> Signed-off-by: Joerg Roedel <jroedel@suse.de>
>> ---
>>  include/linux/kexec.h |  1 +
>>  kernel/kexec.c        | 14 ++++++++++++++
>>  kernel/kexec_file.c   |  9 +++++++++
>>  3 files changed, 24 insertions(+)
>
> I guess I can take this through the tip tree along with the next one.

I seem to remember the consensus when this was reviewed that it was
unnecessary and there is already support for doing something like
this at a more fine grained level so we don't need a new kexec hook.

Eric


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-11-01 21:11     ` Eric W. Biederman
@ 2021-11-02 16:37       ` Joerg Roedel
  2021-11-02 17:00       ` Joerg Roedel
  2021-11-02 17:17       ` Borislav Petkov
  2 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2021-11-02 16:37 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Kees Cook, Cfir Cohen, Borislav Petkov, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

On Mon, Nov 01, 2021 at 04:11:42PM -0500, Eric W. Biederman wrote:
> I seem to remember the consensus when this was reviewed that it was
> unnecessary and there is already support for doing something like
> this at a more fine grained level so we don't need a new kexec hook.

It was a discussion, no consensus :)

I still think it is better to solve this in generic code for everybody
to re-use than with a hack in the architecture hooks.

More and more platforms which enable confidential computing features
may need this hook in the future.

Regards,

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
 
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-11-01 21:11     ` Eric W. Biederman
  2021-11-02 16:37       ` Joerg Roedel
@ 2021-11-02 17:00       ` Joerg Roedel
  2021-11-02 18:17         ` Eric W. Biederman
  2021-11-02 17:17       ` Borislav Petkov
  2 siblings, 1 reply; 31+ messages in thread
From: Joerg Roedel @ 2021-11-02 17:00 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Kees Cook, Cfir Cohen, Borislav Petkov, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

Hi again,

On Mon, Nov 01, 2021 at 04:11:42PM -0500, Eric W. Biederman wrote:
> I seem to remember the consensus when this was reviewed that it was
> unnecessary and there is already support for doing something like
> this at a more fine grained level so we don't need a new kexec hook.

Forgot to state the problem again which these patches solve:

Currently a Linux kernel running as an SEV-ES guest has no way to
successfully kexec into a new kernel. The normal SIPI sequence to reset
the non-boot VCPUs does not work in SEV-ES guests and special code is
needed in Linux to safely hand over the VCPUs from one kernel to the
next. What happens currently is that the kexec'ed kernel will just hang.

The code which implements the VCPU hand-over is also included in this
patch-set, but it requires a certain level of Hypervisor support which
is not available everywhere.

To make it clear to the user that kexec will not work in their
environment, it is best to disable the respective syscalls. This is what
the hook is needed for.
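
Roughly, the generic side boils down to something like this (a sketch, not
the exact hunk from patch 1; the checker name and the error code below are
made up for illustration):

	/* Default: architectures can override this at runtime */
	bool __weak arch_kexec_supported(void)
	{
		return true;
	}

	/* hypothetical name - called early from kexec_load()/kexec_file_load() */
	static int kexec_check_supported(void)
	{
		if (!arch_kexec_supported())
			return -ENOSYS;		/* error code assumed */

		return 0;
	}

That way the syscalls fail with a clear error instead of the user finding
out later that the new kernel hangs.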

Regards,

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
 
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-11-01 21:11     ` Eric W. Biederman
  2021-11-02 16:37       ` Joerg Roedel
  2021-11-02 17:00       ` Joerg Roedel
@ 2021-11-02 17:17       ` Borislav Petkov
  2 siblings, 0 replies; 31+ messages in thread
From: Borislav Petkov @ 2021-11-02 17:17 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

On Mon, Nov 01, 2021 at 04:11:42PM -0500, Eric W. Biederman wrote:
> I seem to remember the consensus when this was reviewed that it was
> unnecessary and there is already support for doing something like
> this at a more fine grained level so we don't need a new kexec hook.

Well, the executive summary is that you have a guest whose memory *and*
registers are encrypted so the hypervisor cannot have a poke inside and
reset the vCPU like it would normally do. So you need to do that dance
differently, i.e, the patchset.

If you try to kexec such a guest now, it'll init only the BSP, as Joerg
said. So I guess a single-threaded kdump.

And yes, one of the prominent use cases is kdumping from such a guest,
as distros love doing kdump for debugging.

I hope that explains it better.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime
  2021-11-02 17:00       ` Joerg Roedel
@ 2021-11-02 18:17         ` Eric W. Biederman
  0 siblings, 0 replies; 31+ messages in thread
From: Eric W. Biederman @ 2021-11-02 18:17 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes, Martin Radev,
	Tom Lendacky, Kees Cook, Cfir Cohen, Borislav Petkov, linux-coco,
	Andy Lutomirski, Dan Williams, Juergen Gross, Mike Stunes,
	Sean Christopherson, kexec, linux-kernel, stable,
	Masami Hiramatsu, Erdem Aktas

Joerg Roedel <jroedel@suse.de> writes:

> Hi again,
>
> On Mon, Nov 01, 2021 at 04:11:42PM -0500, Eric W. Biederman wrote:
>> I seem to remember the consensus when this was reviewed that it was
>> unnecessary and there is already support for doing something like
>> this at a more fine grained level so we don't need a new kexec hook.
>
> Forgot to state to problem again which these patches solve:
>
> Currently a Linux kernel running as an SEV-ES guest has no way to
> successfully kexec into a new kernel. The normal SIPI sequence to reset
> the non-boot VCPUs does not work in SEV-ES guests and special code is
> needed in Linux to safely hand over the VCPUs from one kernel to the
> next. What happens currently is that the kexec'ed kernel will just hang.
>
> The code which implements the VCPU hand-over is also included in this
> patch-set, but it requires a certain level of Hypervisor support which
> is not available everywhere.
>
> To make it clear to the user that kexec will not work in their
> environment, it is best to disable the respected syscalls. This is what
> the hook is needed for.

Note this is environmental.  This is the equivalent of a driver for a
device without some feature.

The kernel already has machine_kexec_prepare, which is perfectly capable
of detecting this is a problem and causing kexec_load to fail.  Which
is all that is required.
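
Something along these lines on the x86 side would do (my sketch, not code
from this series; the error code is a guess):

	int machine_kexec_prepare(struct kimage *image)
	{
		/* sev_kexec_supported() being the helper this series adds */
		if (!sev_kexec_supported())
			return -EOPNOTSUPP;

		/* ... existing transition page table setup ... */
		return 0;
	}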

We don't need a new hook and a new code path to test for one
architecture.

So when we can reliably cause the system call to fail with a specific
error code I don't think it makes sense to clutter up generic code
because of one architecture's design mistakes.


My honest preference would be to go farther and have a
firmware/hypervisor/platform independent rendezvous for the cpus so we
don't have to worry about what bugs the code underneath has implemented
for this special case.  Because frankly, when there are layers of
software, if a bug can slip through it always seems to, and causes
problems.


But definitely there is no reason to add another generic hook when the
existing hook is quite good enough.

Eric


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version
  2021-09-13 15:55 ` [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version Joerg Roedel
@ 2021-11-03 14:27   ` Borislav Petkov
  2022-01-26  9:27     ` Joerg Roedel
  0 siblings, 1 reply; 31+ messages in thread
From: Borislav Petkov @ 2021-11-03 14:27 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:54PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Save the results of the GHCB protocol negotiation into a data structure
> and print information about versions supported and used to the kernel
> log.

Which is useful for?

> +/*
> + * struct sev_ghcb_protocol_info - Used to return GHCB protocol
> + *				   negotiation details.
> + *
> + * @hv_proto_min:	Minimum GHCB protocol version supported by Hypervisor
> + * @hv_proto_max:	Maximum GHCB protocol version supported by Hypervisor
> + * @vm_proto:		Protocol version the VM (this kernel) will use
> + */
> +struct sev_ghcb_protocol_info {

Too long a name - ghcb_info is perfectly fine.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported
  2021-09-13 15:55 ` [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported Joerg Roedel
@ 2021-11-03 16:05   ` Borislav Petkov
  0 siblings, 0 replies; 31+ messages in thread
From: Borislav Petkov @ 2021-11-03 16:05 UTC (permalink / raw)
  To: Joerg Roedel, Brijesh Singh
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:56PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Check whether the hypervisor supports GHCB version 2 and use it if
> available.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/boot/compressed/sev.c | 10 ++++++++--
>  arch/x86/include/asm/sev.h     |  4 ++--
>  arch/x86/kernel/sev-shared.c   | 17 ++++++++++++++---
>  3 files changed, 24 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
> index 101e08c67296..7f8416f76be7 100644
> --- a/arch/x86/boot/compressed/sev.c
> +++ b/arch/x86/boot/compressed/sev.c
> @@ -119,16 +119,22 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
>  /* Include code for early handlers */
>  #include "../../kernel/sev-shared.c"
>  
> +static unsigned int ghcb_protocol;

I guess you need to sync up with Brijesh on what to use:

https://lore.kernel.org/r/20211008180453.462291-7-brijesh.singh@amd.com

And if ghcb_version there is __ro_after_init I think that's perfectly
fine and doesn't need an accessor...
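
I.e. something of this shape (guessing the type, not quoting Brijesh's
patch):

	/* negotiated once at boot, read-only afterwards */
	static u16 ghcb_version __ro_after_init;

and then just read it directly.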

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address
  2021-09-13 15:55 ` [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address Joerg Roedel
@ 2021-11-08 18:14   ` Borislav Petkov
  0 siblings, 0 replies; 31+ messages in thread
From: Borislav Petkov @ 2021-11-08 18:14 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:57PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> Store the physical address of the AP Jump Table in kernel memory so
> that it does not need to be fetched from the Hypervisor again.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/kernel/sev.c | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 5d3422e8b25e..eedba56b6bac 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -42,6 +42,9 @@ static struct ghcb boot_ghcb_page __bss_decrypted __aligned(PAGE_SIZE);
>   */
>  static struct ghcb __initdata *boot_ghcb;
>  
> +/* Cached AP Jump Table Address */
> +static phys_addr_t sev_es_jump_table_pa;

This is static, so "jump_table_pa" should be enough.

Also, regarding the prefixes: everything which is not SEV-ES-only should be
simply prefixed with "sev_" if externally visible.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table
  2021-09-13 15:55 ` [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table Joerg Roedel
@ 2021-11-10 16:37   ` Borislav Petkov
  2022-01-26 14:26     ` Joerg Roedel
  0 siblings, 1 reply; 31+ messages in thread
From: Borislav Petkov @ 2021-11-10 16:37 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:58PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> The AP Jump Table under SEV-ES contains the reset vector where non-boot
> CPUs start executing when coming out of reset. This means that a CPU
> coming out of the AP-reset-hold VMGEXIT also needs to start executing at
> the reset vector stored in the AP Jump Table.
> 
> The problem is to find a safe place to put the real-mode code which
> executes the VMGEXIT and jumps to the reset vector. The code can not be
> in kernel memory, because after kexec that memory is owned by the new
> kernel and the code might have been overwritten.
> 
> Fortunately the AP Jump Table itself is a safe place, because the
> memory is not owned by the OS and will not be overwritten by a new
> kernel started through kexec. The table is 4k in size and only the
> first 4 bytes are used for the reset vector. This leaves enough space
> for some 16-bit code to do the job and even a small stack.

"The AP jump table must be 4K in size, in encrypted memory and it must
be 4K (page) aligned. There can only be one AP jump table and it should
reside in memory that has been marked as reserved by UEFI."

I think we need to state in the spec that some of that space can be used
by the OS so that future changes to the spec do not cause trouble.

> Install 16-bit code into the AP Jump Table under SEV-ES after the APs
> have been brought up. The code will do an AP-reset-hold VMGEXIT and jump
> to the reset vector after being woken up.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/include/asm/realmode.h         |   2 +
>  arch/x86/include/asm/sev-ap-jumptable.h |  25 +++++
>  arch/x86/kernel/sev.c                   | 105 +++++++++++++++++++
>  arch/x86/realmode/Makefile              |   9 +-
>  arch/x86/realmode/rmpiggy.S             |   6 ++
>  arch/x86/realmode/sev/Makefile          |  41 ++++++++
>  arch/x86/realmode/sev/ap_jump_table.S   | 130 ++++++++++++++++++++++++
>  arch/x86/realmode/sev/ap_jump_table.lds |  24 +++++
>  8 files changed, 341 insertions(+), 1 deletion(-)
>  create mode 100644 arch/x86/include/asm/sev-ap-jumptable.h
>  create mode 100644 arch/x86/realmode/sev/Makefile
>  create mode 100644 arch/x86/realmode/sev/ap_jump_table.S
>  create mode 100644 arch/x86/realmode/sev/ap_jump_table.lds
> 
> diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
> index 5db5d083c873..29590a4ddf24 100644
> --- a/arch/x86/include/asm/realmode.h
> +++ b/arch/x86/include/asm/realmode.h
> @@ -62,6 +62,8 @@ extern unsigned long initial_gs;
>  extern unsigned long initial_stack;
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  extern unsigned long initial_vc_handler;
> +extern unsigned char rm_ap_jump_table_blob[];
> +extern unsigned char rm_ap_jump_table_blob_end[];
>  #endif
>  
>  extern unsigned char real_mode_blob[];
> diff --git a/arch/x86/include/asm/sev-ap-jumptable.h b/arch/x86/include/asm/sev-ap-jumptable.h
> new file mode 100644
> index 000000000000..1c8b2ce779e2
> --- /dev/null
> +++ b/arch/x86/include/asm/sev-ap-jumptable.h

Why a separate header? arch/x86/include/asm/sev.h looks small enough.

> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * AMD Encrypted Register State Support
> + *
> + * Author: Joerg Roedel <jroedel@suse.de>
> + */
> +#ifndef __ASM_SEV_AP_JUMPTABLE_H
> +#define __ASM_SEV_AP_JUMPTABLE_H
> +
> +#define	SEV_APJT_CS16	0x8
> +#define	SEV_APJT_DS16	0x10
> +
> +#define SEV_APJT_ENTRY	0x10
> +
> +#ifndef __ASSEMBLY__
> +
> +struct sev_ap_jump_table_header {
> +	u16	reset_ip;
> +	u16	reset_cs;
> +	u16	gdt_offset;

I guess you should state that the first two members are as the spec
mandates and cannot be moved around or changed or so.

Also, this gdt_offset thing looks like it wants to be ap_jumptable_gdt,
no?

> +};
> +
> +#endif /* !__ASSEMBLY__ */
> +
> +#endif /* __ASM_SEV_AP_JUMPTABLE_H */
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index eedba56b6bac..a98eab926682 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -19,6 +19,7 @@
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
>  
> +#include <asm/sev-ap-jumptable.h>
>  #include <asm/cpu_entry_area.h>
>  #include <asm/stacktrace.h>
>  #include <asm/sev.h>
> @@ -45,6 +46,9 @@ static struct ghcb __initdata *boot_ghcb;
>  /* Cached AP Jump Table Address */
>  static phys_addr_t sev_es_jump_table_pa;
>  
> +/* Whether the AP Jump Table blob was successfully installed */
> +static bool sev_ap_jumptable_blob_installed __ro_after_init;
> +
>  /* #VC handler runtime per-CPU data */
>  struct sev_es_runtime_data {
>  	struct ghcb ghcb_page;
> @@ -749,6 +753,107 @@ static void __init sev_es_setup_play_dead(void)
>  static inline void sev_es_setup_play_dead(void) { }
>  #endif
>  
> +/*
> + * This function make the necessary runtime changes to the AP Jump Table blob.

s/This function make/Make/

Ditto for the other "This function" below.

> + * For now this only sets up the GDT used while the code executes. The GDT needs
> + * to contain 16-bit code and data segments with a base that points to AP Jump
> + * Table page.
> + */
> +void __init sev_es_setup_ap_jump_table_data(void *base, u32 pa)

Why is this a separate function?

It is all part of the jump table setup.

> +	struct sev_ap_jump_table_header *header;
> +	struct desc_ptr *gdt_descr;
> +	u64 *ap_jumptable_gdt;
> +
> +	header = base;
> +
> +	/*
> +	 * Setup 16-bit protected mode code and data segments for AP Jumptable.
> +	 * Set the segment limits to 0xffff to already be compatible with
> +	 * real-mode.
> +	 */
> +	ap_jumptable_gdt = (u64 *)(base + header->gdt_offset);
> +	ap_jumptable_gdt[SEV_APJT_CS16 / 8] = GDT_ENTRY(0x9b, pa, 0xffff);
> +	ap_jumptable_gdt[SEV_APJT_DS16 / 8] = GDT_ENTRY(0x93, pa, 0xffff);
> +
> +	/* Write correct GDT base address into GDT descriptor */
> +	gdt_descr = (struct desc_ptr *)(base + header->gdt_offset);
> +	gdt_descr->address += pa;
> +}
> +
> +/*
> + * This function sets up the AP Jump Table blob which contains code which runs
> + * in 16-bit protected mode to park an AP. After the AP is woken up again the
> + * code will disable protected mode and jump to the reset vector which is also
> + * stored in the AP Jump Table.
> + *
> + * The Jump Table is a safe place to park an AP, because it is owned by the
> + * BIOS and writable by the OS. Putting the code in kernel memory would break
> + * with kexec, because by the time th APs wake up the memory is owned by

				     the

> + * the new kernel, and possibly already overwritten.
> + *
> + * Kexec is also the reason this function is called as an init-call after SMP

s/called as //

> + * bringup. Only after all CPUs are up there is a guarantee that no AP is still
> + * parked in AP jump-table code.
> + */
> +static int __init sev_es_setup_ap_jump_table_blob(void)

Everywhere: use prefix sev_ pls. IOW:

		sev_setup_ap_jump_table()

plain and simple.

> +{
> +	size_t blob_size = rm_ap_jump_table_blob_end - rm_ap_jump_table_blob;
> +	u16 startup_cs, startup_ip;
> +	u16 __iomem *jump_table;
> +	phys_addr_t pa;
> +
> +	if (!sev_es_active())

	if (!cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))

> +		return 0;
> +
> +	if (sev_get_ghcb_proto_ver() < 2) {
> +		pr_info("AP Jump Table parking requires at least GHCB protocol version 2\n");

Not pr_warn?

Also, can we drop everywhere this first-letter capitalized spelling?

		AP jump table parking...

is ok already.

> +		return 0;

Why are you returning 0 here and below?

> +	}
> +
> +	pa = get_jump_table_addr();
> +
> +	/* Overflow and size checks for untrusted Jump Table address */

	/* Check overflow and size...

> +	if (pa + PAGE_SIZE < pa || pa + PAGE_SIZE > SZ_4G) {
> +		pr_info("AP Jump Table is above 4GB - not enabling AP Jump Table parking\n");

That error message needs to mention the overflow too.

> +		return 0;
> +	}
> +
> +	/* On UP guests there is no jump table so this is not a failure */
> +	if (!pa)
> +		return 0;

So this check needs to happen right after the get_ call.

> +
> +	jump_table = ioremap_encrypted(pa, PAGE_SIZE);
> +	if (WARN_ON(!jump_table))

> +		return -EINVAL;
> +
> +	/*
> +	 * Safe reset vector to restore it later because the blob will

	   Save...

> +	 * overwrite it.
> +	 */
> +	startup_ip = jump_table[0];
> +	startup_cs = jump_table[1];
> +
> +	/* Install AP Jump Table Blob with real mode AP parking code */
> +	memcpy_toio(jump_table, rm_ap_jump_table_blob, blob_size);
> +
> +	/* Setup AP Jumptable GDT */
> +	sev_es_setup_ap_jump_table_data(jump_table, (u32)pa);
> +
> +	writew(startup_ip, &jump_table[0]);
> +	writew(startup_cs, &jump_table[1]);
> +
> +	iounmap(jump_table);
> +
> +	pr_info("AP Jump Table Blob successfully set up\n");
> +
> +	/* Mark AP Jump Table blob as available */
> +	sev_ap_jumptable_blob_installed = true;

I don't like those random boolean variables all over the place but at
least it is static.

> +
> +	return 0;
> +}
> +core_initcall(sev_es_setup_ap_jump_table_blob);
> +
>  static void __init alloc_runtime_data(int cpu)
>  {
>  	struct sev_es_runtime_data *data;
> diff --git a/arch/x86/realmode/Makefile b/arch/x86/realmode/Makefile
> index a0b491ae2de8..00f3cceb9580 100644
> --- a/arch/x86/realmode/Makefile
> +++ b/arch/x86/realmode/Makefile
> @@ -11,12 +11,19 @@
>  KASAN_SANITIZE			:= n
>  KCSAN_SANITIZE			:= n
>  
> +RMPIGGY-y				 = $(obj)/rm/realmode.bin
> +RMPIGGY-$(CONFIG_AMD_MEM_ENCRYPT)	+= $(obj)/sev/ap_jump_table.bin
> +
>  subdir- := rm
> +subdir- := sev
>  
>  obj-y += init.o
>  obj-y += rmpiggy.o
>  
> -$(obj)/rmpiggy.o: $(obj)/rm/realmode.bin
> +$(obj)/rmpiggy.o: $(RMPIGGY-y)
>  
>  $(obj)/rm/realmode.bin: FORCE
>  	$(Q)$(MAKE) $(build)=$(obj)/rm $@
> +
> +$(obj)/sev/ap_jump_table.bin: FORCE
> +	$(Q)$(MAKE) $(build)=$(obj)/sev $@
> diff --git a/arch/x86/realmode/rmpiggy.S b/arch/x86/realmode/rmpiggy.S
> index c8fef76743f6..a659f98617ff 100644
> --- a/arch/x86/realmode/rmpiggy.S
> +++ b/arch/x86/realmode/rmpiggy.S
> @@ -17,3 +17,9 @@ SYM_DATA_END_LABEL(real_mode_blob, SYM_L_GLOBAL, real_mode_blob_end)
>  SYM_DATA_START(real_mode_relocs)
>  	.incbin	"arch/x86/realmode/rm/realmode.relocs"
>  SYM_DATA_END(real_mode_relocs)
> +
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +SYM_DATA_START(rm_ap_jump_table_blob)
> +	.incbin "arch/x86/realmode/sev/ap_jump_table.bin"
> +SYM_DATA_END_LABEL(rm_ap_jump_table_blob, SYM_L_GLOBAL, rm_ap_jump_table_blob_end)
> +#endif
> diff --git a/arch/x86/realmode/sev/Makefile b/arch/x86/realmode/sev/Makefile
> new file mode 100644
> index 000000000000..5a96a518ccb3
> --- /dev/null
> +++ b/arch/x86/realmode/sev/Makefile
> @@ -0,0 +1,41 @@

<--- # SPDX-License-Identifier: GPL-2.0

We don't do that GPL text anymore.

> +#
> +# arch/x86/sev/Makefile
> +#
> +# This file is subject to the terms and conditions of the GNU General Public
> +# License.  See the file "COPYING" in the main directory of this archive
> +# for more details.
> +#
> +
> +# Sanitizer runtimes are unavailable and cannot be linked here.
> +KASAN_SANITIZE			:= n
> +KCSAN_SANITIZE			:= n
> +OBJECT_FILES_NON_STANDARD	:= y
> +
> +# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
> +KCOV_INSTRUMENT		:= n
> +
> +always-y := ap_jump_table.bin
> +
> +ap_jump_table-y				+= ap_jump_table.o

The vertical alignment of those is kinda random. Please unify.

> +
> +targets	+= $(ap_jump_table-y)
> +
> +APJUMPTABLE_OBJS = $(addprefix $(obj)/,$(ap_jump_table-y))
> +
> +LDFLAGS_ap_jump_table.elf := -m elf_i386 -T
> +
> +targets += ap_jump_table.elf
> +$(obj)/ap_jump_table.elf: $(obj)/ap_jump_table.lds $(APJUMPTABLE_OBJS) FORCE
> +	$(call if_changed,ld)
> +
> +OBJCOPYFLAGS_ap_jump_table.bin := -O binary
> +
> +targets += ap_jump_table.bin
> +$(obj)/ap_jump_table.bin: $(obj)/ap_jump_table.elf FORCE
> +	$(call if_changed,objcopy)
> +
> +# ---------------------------------------------------------------------------
> +
> +KBUILD_AFLAGS	:= $(REALMODE_CFLAGS) -D__ASSEMBLY__
> +GCOV_PROFILE := n
> +UBSAN_SANITIZE := n
> diff --git a/arch/x86/realmode/sev/ap_jump_table.S b/arch/x86/realmode/sev/ap_jump_table.S
> new file mode 100644
> index 000000000000..547cb363bb94
> --- /dev/null
> +++ b/arch/x86/realmode/sev/ap_jump_table.S
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/linkage.h>
> +#include <asm/sev-ap-jumptable.h>
> +
> +/*
> + * This file contains the source code for the binary blob which gets copied to
> + * the SEV-ES AP Jumptable to park APs while offlining CPUs or booting a new

I've seen "Jumptable", "Jump Table" and "jump table" at least. I'd say, do
the last one everywhere pls.

> + * kernel via KEXEC.
> + *
> + * The AP Jumptable is the only safe place to put this code, as any memory the
> + * kernel allocates will be owned (and possibly overwritten) by the new kernel
> + * once the APs are woken up.
> + *
> + * This code runs in 16-bit protected mode, the CS, DS, and SS segment bases are
> + * set to the beginning of the AP Jumptable page.
> + *
> + * Since the GDT will also be gone when the AP wakes up, this blob contains its
> + * own GDT, which is set up by the AP Jumptable setup code with the correct
> + * offsets.
> + *
> + * Author: Joerg Roedel <jroedel@suse.de>
> + */
> +
> +	.text
> +	.org 0x0
> +	.code16
> +SYM_DATA_START(ap_jumptable_header)
> +	.word	0			/* reset IP */
> +	.word	0			/* reset CS */
> +	.word	ap_jumptable_gdt	/* GDT Offset   */
> +SYM_DATA_END(ap_jumptable_header)
> +
> +	.org	SEV_APJT_ENTRY

So this hardcodes the fact that the first 16 bytes are header and the
rest is fair game. I think the spec needs to play along here...

> +SYM_CODE_START(ap_park_asm)

This whole file is asm. I guess simply "ap_park" is enough.

> +
> +	/* Switch to AP Jumptable GDT first */
> +	lgdtl	ap_jumptable_gdt
> +
> +	/* Reload CS */
> +	ljmpw	$SEV_APJT_CS16, $1f
> +1:
> +
> +	/* Reload DS and SS */
> +	movl	$SEV_APJT_DS16, %ecx
> +	movl	%ecx, %ds
> +	movl	%ecx, %ss
> +
> +	/*
> +	 * Setup a stack pointing to the end of the AP Jumptable page.
> +	 * The stack is needed ot reset EFLAGS after wakeup.

s/ot/to/

> +	 */
> +	movl	$0x1000, %esp
> +
> +	/* Execute AP reset hold VMGEXIT */
> +2:	xorl	%edx, %edx
> +	movl	$0x6, %eax
> +	movl	$0xc0010130, %ecx

MSR_AMD64_SEV_ES_GHCB

> +	wrmsr
> +	rep; vmmcall
> +	rdmsr
> +	movl	%eax, %ecx
> +	andl	$0xfff, %ecx
> +	cmpl	$0x7, %ecx
> +	jne	2b
> +	shrl	$12, %eax
> +	jnz	3f
> +	testl	%edx, %edx
> +	jnz	3f
> +	jmp	2b

You usually document your asm pretty nicely but those after the RDMSR
are a bit lacking...
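
For reference, my reading of what that loop does, with the symbolic names I
would expect (values per my understanding of the GHCB spec, please
double-check):

	#define MSR_AMD64_SEV_ES_GHCB		0xc0010130
	#define GHCB_MSR_AP_RESET_HOLD_REQ	0x006	/* GHCBData[11:0] */
	#define GHCB_MSR_AP_RESET_HOLD_RESP	0x007	/* GHCBData[11:0] */

i.e. write the request into the GHCB MSR, VMGEXIT, read the MSR back and
retry until the low 12 bits contain the response code and GHCBData[63:12]
is non-zero, meaning the hypervisor has actually woken the AP up.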

> +3:
> +	/*
> +	 * Successfully woken up - Patch the correct target into the far jump at

				   patch

> +	 * the end. An indirect far jump does not work here, because at the time
> +	 * the jump is executed DS is already loaded with real-mode values.
> +	 */
> +
> +	/* Jump target is at address 0x0 - copy it to the far jump instruction */
> +	movl	$0, %ecx
> +	movl	(%ecx), %eax
> +	movl	%eax, jump_target
> +
> +	/* Reset EFLAGS */
> +	pushl	$2

I'm assuming that two is bit 1 in rFLAGS which is always 1? Comment pls.

> +	popfl
> +
> +	/* Setup DS and SS for real-mode */
> +	movl	$0x18, %ecx
> +	movl	%ecx, %ds
> +	movl	%ecx, %ss
> +
> +	/* Reset remaining registers */
> +	movl	$0, %esp
> +	movl	$0, %eax
> +	movl	$0, %ebx
> +	movl	$0, %edx

All 4: use xor

> +
> +	/* Reset CR0 to get out of protected mode */
> +	movl	$0x60000010, %ecx

Another magic naked number.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2
  2021-09-13 15:55 ` [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2 Joerg Roedel
@ 2021-11-12 16:33   ` Borislav Petkov
  2022-01-27  9:01     ` Joerg Roedel
  0 siblings, 1 reply; 31+ messages in thread
From: Borislav Petkov @ 2021-11-12 16:33 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:55:59PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@suse.de>
> 
> GHCB protocol version 2 adds the MSR-based AP-reset-hold VMGEXIT which
> does not need a GHCB. Use that to park APs in 16-bit protected mode on
> the AP Jump Table.
> 
> Signed-off-by: Joerg Roedel <jroedel@suse.de>
> ---
>  arch/x86/include/asm/realmode.h    |  3 +
>  arch/x86/kernel/sev.c              | 48 ++++++++++++++--
>  arch/x86/realmode/rm/Makefile      | 11 ++--
>  arch/x86/realmode/rm/header.S      |  3 +
>  arch/x86/realmode/rm/sev_ap_park.S | 89 ++++++++++++++++++++++++++++++
>  5 files changed, 144 insertions(+), 10 deletions(-)
>  create mode 100644 arch/x86/realmode/rm/sev_ap_park.S
> 
> diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
> index 29590a4ddf24..668de0a8b1ae 100644
> --- a/arch/x86/include/asm/realmode.h
> +++ b/arch/x86/include/asm/realmode.h
> @@ -23,6 +23,9 @@ struct real_mode_header {
>  	u32	trampoline_header;
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	u32	sev_es_trampoline_start;
> +	u32	sev_real_ap_park_asm;

sev_ap_park;

> +	u32	sev_real_ap_park_seg;

sev_ap_park_seg;

> +	u32	sev_ap_park_gdt;

Yap, like this one.

>  #endif
>  #ifdef CONFIG_X86_64
>  	u32	trampoline_pgd;
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index a98eab926682..20b439986d86 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -27,6 +27,7 @@
>  #include <asm/fpu/internal.h>
>  #include <asm/processor.h>
>  #include <asm/realmode.h>
> +#include <asm/tlbflush.h>
>  #include <asm/traps.h>
>  #include <asm/svm.h>
>  #include <asm/smp.h>
> @@ -695,6 +696,35 @@ static bool __init sev_es_setup_ghcb(void)
>  }
>  
>  #ifdef CONFIG_HOTPLUG_CPU
> +void __noreturn sev_jumptable_ap_park(void)
> +{
> +	local_irq_disable();
> +
> +	write_cr3(real_mode_header->trampoline_pgd);
> +
> +	/* Exiting long mode will fail if CR4.PCIDE is set. */
> +	if (boot_cpu_has(X86_FEATURE_PCID))

cpu_feature_enabled() is what we use everywhere now.

> +		cr4_clear_bits(X86_CR4_PCIDE);
> +
> +	asm volatile("xorq	%%r15, %%r15\n"
> +		     "xorq	%%r14, %%r14\n"
> +		     "xorq	%%r13, %%r13\n"
> +		     "xorq	%%r12, %%r12\n"
> +		     "xorq	%%r11, %%r11\n"
> +		     "xorq	%%r10, %%r10\n"
> +		     "xorq	%%r9,  %%r9\n"
> +		     "xorq	%%r8,  %%r8\n"
> +		     "xorq	%%rsi, %%rsi\n"
> +		     "xorq	%%rdi, %%rdi\n"
> +		     "xorq	%%rsp, %%rsp\n"
> +		     "xorq	%%rbp, %%rbp\n"

Use xorl and the 32-bit regs is enough - zero extension.

> +		     "ljmpl	*%0" : :
> +		     "m" (real_mode_header->sev_real_ap_park_asm),
> +		     "b" (sev_es_jump_table_pa >> 4));

In any case, this asm needs comments: why those regs, why
sev_es_jump_table_pa >> 4 in rbx (I found later in the patch why) and so
on.

> diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
> index 8c1db5bf5d78..6c17f8fd1eb4 100644
> --- a/arch/x86/realmode/rm/header.S
> +++ b/arch/x86/realmode/rm/header.S
> @@ -22,6 +22,9 @@ SYM_DATA_START(real_mode_header)
>  	.long	pa_trampoline_header
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  	.long	pa_sev_es_trampoline_start
> +	.long	pa_sev_ap_park_asm
> +	.long	__KERNEL32_CS
> +	.long	pa_sev_ap_park_gdt;
>  #endif
>  #ifdef CONFIG_X86_64
>  	.long	pa_trampoline_pgd;
> diff --git a/arch/x86/realmode/rm/sev_ap_park.S b/arch/x86/realmode/rm/sev_ap_park.S

arch/x86/realmode/rm/sev.S

is perfectly fine I guess.

> new file mode 100644
> index 000000000000..0b63d0569d4d
> --- /dev/null
> +++ b/arch/x86/realmode/rm/sev_ap_park.S
> @@ -0,0 +1,89 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#include <linux/linkage.h>
> +#include <asm/segment.h>
> +#include <asm/page_types.h>
> +#include <asm/processor-flags.h>
> +#include <asm/msr-index.h>
> +#include <asm/sev-ap-jumptable.h>
> +#include "realmode.h"
> +
> +	.section ".text32", "ax"
> +	.code32
> +/*

"This is executed by ... when ... "

> + * The following code switches to 16-bit protected mode and sets up the
> + * execution environment for the AP Jump Table blob. Then it jumps to the AP
> + * Jump Table to park the AP.
> + *
> + * The code was copied from reboot.S and modified to fit the SEV-ES requirements
> + * for AP parking.

That sentence belongs at most in the commit message.

> When this code is entered, all registers except %EAX-%EDX are

%eax, etc. Lowercase pls.

> + * in reset state.
> + *
> + * The AP Jump Table physical base address is in %EBX upon entry.
> + *
> + * %EAX, %ECX, %EDX and EFLAGS are undefined. Only use registers %EAX-%EDX and
> + * %ESP in this code.
> + */
> +SYM_CODE_START(sev_ap_park_asm)

sev_ap_park

> +
> +	/* Switch to trampoline GDT as it is guaranteed < 4 GiB */
> +	movl	$__KERNEL_DS, %eax
> +	movl	%eax, %ds
> +	lgdt	pa_tr_gdt
> +
> +	/* Disable paging to drop us out of long mode */
> +	movl	%cr0, %eax
> +	btcl	$X86_CR0_PG_BIT, %eax
> +	movl	%eax, %cr0
> +

	/* Start executing from 32-bit addresses or so, I guess...

> +	ljmpl	$__KERNEL32_CS, $pa_sev_ap_park_paging_off

Please add a comment also about those pa_ things because they look like
magic but they're sed-generated into arch/x86/realmode/rm/pasyms.h by
the Makefile in that same dir.

> +SYM_INNER_LABEL(sev_ap_park_paging_off, SYM_L_GLOBAL)

Global symbol but used only in this file. .L-prefix then?

> +	/* Clear EFER */
> +	movl	$0, %eax
> +	movl	$0, %edx

both:	xorl

> +	movl	$MSR_EFER, %ecx
> +	wrmsr
> +
> +	/* Clear CR3 */
> +	movl	$0, %ecx

ditto

> +	movl	%ecx, %cr3
> +
> +	/* Set up the IDT for real mode. */
> +	lidtl	pa_machine_real_restart_idt
> +
> +	/*
> +	 * Load the GDT with the 16-bit segments for the AP Jump Table
> +	 */

	/* Load the GDT with the 16-bit segments for the AP Jump Table  */

works too.

> +	lgdtl	pa_sev_ap_park_gdt
> +
> +	/* Setup Code and Data segments for AP Jump Table */

	... code and data segments ...

you have been reading too much vendor text where they love to capitalize
everything.

> +	movw	$SEV_APJT_DS16, %ax
> +	movw	%ax, %ds
> +	movw	%ax, %ss
> +
> +	/* Jump to the AP Jump Table into 16 bit protected mode */
> +	ljmpw	$SEV_APJT_CS16, $SEV_APJT_ENTRY
> +SYM_CODE_END(sev_ap_park_asm)
> +
> +	.data
> +	.balign	16
> +SYM_DATA_START(sev_ap_park_gdt)
> +	/* Self-pointer */
> +	.word	sev_ap_park_gdt_end - sev_ap_park_gdt - 1
> +	.long	pa_sev_ap_park_gdt
> +	.word	0
> +
> +	/*
> +	 * Offset 0x8
> +	 * 32 bit code segment descriptor pointing to AP Jump table base
> +	 * Setup at runtime in sev_es_setup_ap_jump_table_data().
> +	 */
> +	.quad	0
> +
> +	/*
> +	 * Offset 0x10
> +	 * 32 bit data segment descriptor pointing to AP Jump table base
> +	 * Setup at runtime in sev_es_setup_ap_jump_table_data().
> +	 */
> +	.quad	0
> +SYM_DATA_END_LABEL(sev_ap_park_gdt, SYM_L_GLOBAL, sev_ap_park_gdt_end)
> -- 
> 2.33.0
> 

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* Re: [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU
  2021-09-13 15:56 ` [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU Joerg Roedel
@ 2021-11-15 18:44   ` Borislav Petkov
  0 siblings, 0 replies; 31+ messages in thread
From: Borislav Petkov @ 2021-11-15 18:44 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, x86, David Rientjes, Masami Hiramatsu,
	Martin Radev, Tom Lendacky, Joerg Roedel, Kees Cook, Cfir Cohen,
	linux-coco, Andy Lutomirski, Dan Williams, Juergen Gross,
	Mike Stunes, Sean Christopherson, kexec, linux-kernel,
	Eric Biederman, Erdem Aktas

On Mon, Sep 13, 2021 at 05:56:00PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 134a7c9d91b6..cd14b6e10f12 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -81,12 +81,19 @@ static __always_inline void sev_es_nmi_complete(void)
>  		__sev_es_nmi_complete();
>  }
>  extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
> +void __sev_es_stop_this_cpu(void);
> +static __always_inline void sev_es_stop_this_cpu(void)

What's that for?

IOW, the below seems to build too:

---
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 1f16fc907636..398105580862 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -87,12 +87,7 @@ extern enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
 					  struct es_em_ctxt *ctxt,
 					  u64 exit_code, u64 exit_info_1,
 					  u64 exit_info_2);
-void __sev_es_stop_this_cpu(void);
-static __always_inline void sev_es_stop_this_cpu(void)
-{
-	if (static_branch_unlikely(&sev_es_enable_key))
-		__sev_es_stop_this_cpu();
-}
+void sev_es_stop_this_cpu(void);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 39378357dc5a..7a74b3273f1a 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -694,8 +694,11 @@ void __noreturn sev_jumptable_ap_park(void)
 }
 STACK_FRAME_NON_STANDARD(sev_jumptable_ap_park);
 
-void __sev_es_stop_this_cpu(void)
+void sev_es_stop_this_cpu(void)
 {
+	if (!static_branch_unlikely(&sev_es_enable_key))
+		return;
+
 	/* Only park in the AP Jump Table when the code has been installed */
 	if (!sev_ap_jumptable_blob_installed)
 		return;

---
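The only thing the inline wrapper buys is skipping an out-of-line call on
non-SEV-ES systems, and the CPU-stop path is hardly hot. A sketch of how I'd
expect the call site to look (hook placement assumed for illustration, not
taken from the patch; names follow the kernel's stop_this_cpu() path):

/*
 * sev_es_stop_this_cpu() either parks the AP on the jump table and never
 * returns, or bails out early (not SEV-ES, blob not installed) and lets
 * the normal HLT loop below run.
 */
static void stop_this_cpu_sketch(void)
{
	sev_es_stop_this_cpu();

	for (;;)
		native_halt();
}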

And as previously mentioned s/sev_es/sev/ if those are going to be used
on SNP guests too.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

* Re: [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version
  2021-11-03 14:27   ` Borislav Petkov
@ 2022-01-26  9:27     ` Joerg Roedel
  0 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2022-01-26  9:27 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes,
	Masami Hiramatsu, Martin Radev, Tom Lendacky, Kees Cook,
	Cfir Cohen, linux-coco, Andy Lutomirski, Dan Williams,
	Juergen Gross, Mike Stunes, Sean Christopherson, kexec,
	linux-kernel, Eric Biederman, Erdem Aktas

On Wed, Nov 03, 2021 at 03:27:23PM +0100, Borislav Petkov wrote:
> On Mon, Sep 13, 2021 at 05:55:54PM +0200, Joerg Roedel wrote:
> > From: Joerg Roedel <jroedel@suse.de>
> > 
> > Save the results of the GHCB protocol negotiation into a data structure
> > and print information about versions supported and used to the kernel
> > log.
> 
> Which is useful for?

For easier debugging, I added a sentence about that to the changelog.

> > +struct sev_ghcb_protocol_info {
> 
> Too long a name - ghcb_info is perfectly fine.

Changed, thanks.

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
 
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

* Re: [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table
  2021-11-10 16:37   ` Borislav Petkov
@ 2022-01-26 14:26     ` Joerg Roedel
  0 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2022-01-26 14:26 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes,
	Masami Hiramatsu, Martin Radev, Tom Lendacky, Kees Cook,
	Cfir Cohen, linux-coco, Andy Lutomirski, Dan Williams,
	Juergen Gross, Mike Stunes, Sean Christopherson, kexec,
	linux-kernel, Eric Biederman, Erdem Aktas

On Wed, Nov 10, 2021 at 05:37:32PM +0100, Borislav Petkov wrote:
> On Mon, Sep 13, 2021 at 05:55:58PM +0200, Joerg Roedel wrote:
> >  extern unsigned char real_mode_blob[];
> > diff --git a/arch/x86/include/asm/sev-ap-jumptable.h b/arch/x86/include/asm/sev-ap-jumptable.h
> > new file mode 100644
> > index 000000000000..1c8b2ce779e2
> > --- /dev/null
> > +++ b/arch/x86/include/asm/sev-ap-jumptable.h
> 
> Why a separate header? arch/x86/include/asm/sev.h looks small enough.

The header is included in assembly, so I made a separate one.
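Roughly, the idea is (illustrative values, not the real ones):

/* sev-ap-jumptable.h sketch - usable from both C and assembly */
#ifndef _ASM_X86_SEV_AP_JUMPTABLE_H
#define _ASM_X86_SEV_AP_JUMPTABLE_H

#define SEV_APJT_CS16	0x8	/* illustrative values only */
#define SEV_APJT_DS16	0x10
#define SEV_APJT_ENTRY	0x10

#ifndef __ASSEMBLY__
/* C-only declarations (structs, prototypes) go here */
#endif

#endif /* _ASM_X86_SEV_AP_JUMPTABLE_H */

Most of what sev.h carries is C-only, so including it from a .S file would
not work without wrapping large parts of it in __ASSEMBLY__ guards.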

> > +void __init sev_es_setup_ap_jump_table_data(void *base, u32 pa)
> 
> Why is this a separate function?
> 
> It is all part of the jump table setup.

Right, but the sev_es_setup_ap_jump_table_blob() function is already
pretty big and I wanted to keep things readable.

> 
> > +		return 0;
> 
> Why are you returning 0 here and below?

This is in an initcall and it just returns 0 when the environment is not
ready to set up the AP jump table. Returning non-zero would cause a
warning message in the caller for something that is not a bug in the
kernel.
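Roughly this pattern (hypothetical helper names, just to illustrate the
point):

static int __init sev_ap_jump_table_setup(void)
{
	/* Not an SEV-ES guest: nothing to do, and not an error. */
	if (!sev_es_active())
		return 0;

	/*
	 * Environment cannot support AP parking (hypothetical check):
	 * still return 0, since a non-zero return would make the
	 * initcall core report an error for something that is not a
	 * kernel bug.
	 */
	if (!sev_ap_jump_table_usable())
		return 0;

	sev_install_ap_park_blob();	/* hypothetical helper */
	return 0;
}
core_initcall(sev_ap_jump_table_setup);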

> > + * This file contains the source code for the binary blob which gets copied to
> > + * the SEV-ES AP Jumptable to park APs while offlining CPUs or booting a new
> 
> I've seen "Jumptable", "Jump Table" and "jump table" at least. I'd say, do
> the last one everywhere pls.

Fair, sorry for my English being too German :) I changed everything to
'jump table'.

> > +	/* Reset remaining registers */
> > +	movl	$0, %esp
> > +	movl	$0, %eax
> > +	movl	$0, %ebx
> > +	movl	$0, %edx
> 
> All 4: use xor

XOR changes EFLAGS, so it can't be used here.

> > +
> > +	/* Reset CR0 to get out of protected mode */
> > +	movl	$0x60000010, %ecx
> 
> Another magic naked number.

This is the CR0 reset value. I have updated the comment to make this
more clear.
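For reference, the value decomposes like this (a sketch of the updated
comment, based only on the architectural CR0 reset state: CD, NW and ET set,
PE and PG clear):

#define CR0_ET	(1UL <<  4)	/* extension type, hardwired to 1 */
#define CR0_NW	(1UL << 29)	/* not write-through */
#define CR0_CD	(1UL << 30)	/* cache disable */

/* CR0 after reset: caches disabled, protected mode and paging off */
#define CR0_RESET_VALUE	(CR0_CD | CR0_NW | CR0_ET)	/* == 0x60000010 */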

Thanks,

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
 
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev

* Re: [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2
  2021-11-12 16:33   ` Borislav Petkov
@ 2022-01-27  9:01     ` Joerg Roedel
  0 siblings, 0 replies; 31+ messages in thread
From: Joerg Roedel @ 2022-01-27  9:01 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kvm, Peter Zijlstra, Dave Hansen, virtualization, Arvind Sankar,
	hpa, Jiri Slaby, Joerg Roedel, x86, David Rientjes,
	Masami Hiramatsu, Martin Radev, Tom Lendacky, Kees Cook,
	Cfir Cohen, linux-coco, Andy Lutomirski, Dan Williams,
	Juergen Gross, Mike Stunes, Sean Christopherson, kexec,
	linux-kernel, Eric Biederman, Erdem Aktas

On Fri, Nov 12, 2021 at 05:33:05PM +0100, Borislav Petkov wrote:
> On Mon, Sep 13, 2021 at 05:55:59PM +0200, Joerg Roedel wrote:
> > +		     "ljmpl	*%0" : :
> > +		     "m" (real_mode_header->sev_real_ap_park_asm),
> > +		     "b" (sev_es_jump_table_pa >> 4));
> 
> In any case, this asm needs comments: why those regs, why
> sev_es_jump_table_pa >> 4 in rbx (I found later in the patch why) and so
> on.

It turned out that jump_table_pa is no longer used in the asm code. It
was a left-over from a previous version of the patch and is removed now.

> > +SYM_INNER_LABEL(sev_ap_park_paging_off, SYM_L_GLOBAL)
> 
> Global symbol but used only in this file. .L-prefix then?

It needs to be a global symbol so the pa_ variant can be generated.

Regards,

-- 
Jörg Rödel
jroedel@suse.de

SUSE Software Solutions Germany GmbH
Maxfeldstr. 5
90409 Nürnberg
Germany
 
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev


end of thread (newest message: 2022-01-27  9:01 UTC)

Thread overview: 31+ messages
-- messages in this thread --
2021-09-13 15:55 [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Joerg Roedel
2021-09-13 15:55 ` [PATCH v2 01/12] kexec: Allow architecture code to opt-out at runtime Joerg Roedel
2021-11-01 16:10   ` Borislav Petkov
2021-11-01 21:11     ` Eric W. Biederman
2021-11-02 16:37       ` Joerg Roedel
2021-11-02 17:00       ` Joerg Roedel
2021-11-02 18:17         ` Eric W. Biederman
2021-11-02 17:17       ` Borislav Petkov
2021-09-13 15:55 ` [PATCH v2 02/12] x86/kexec/64: Forbid kexec when running as an SEV-ES guest Joerg Roedel
2021-09-13 15:55 ` [PATCH v2 03/12] x86/sev: Save and print negotiated GHCB protocol version Joerg Roedel
2021-11-03 14:27   ` Borislav Petkov
2022-01-26  9:27     ` Joerg Roedel
2021-09-13 15:55 ` [PATCH v2 04/12] x86/sev: Do not hardcode " Joerg Roedel
2021-09-13 15:55 ` [PATCH v2 05/12] x86/sev: Use GHCB protocol version 2 if supported Joerg Roedel
2021-11-03 16:05   ` Borislav Petkov
2021-09-13 15:55 ` [PATCH v2 06/12] x86/sev: Cache AP Jump Table Address Joerg Roedel
2021-11-08 18:14   ` Borislav Petkov
2021-09-13 15:55 ` [PATCH v2 07/12] x86/sev: Setup code to park APs in the AP Jump Table Joerg Roedel
2021-11-10 16:37   ` Borislav Petkov
2022-01-26 14:26     ` Joerg Roedel
2021-09-13 15:55 ` [PATCH v2 08/12] x86/sev: Park APs on AP Jump Table with GHCB protocol version 2 Joerg Roedel
2021-11-12 16:33   ` Borislav Petkov
2022-01-27  9:01     ` Joerg Roedel
2021-09-13 15:56 ` [PATCH v2 09/12] x86/sev: Use AP Jump Table blob to stop CPU Joerg Roedel
2021-11-15 18:44   ` Borislav Petkov
2021-09-13 15:56 ` [PATCH v2 10/12] x86/sev: Add MMIO handling support to boot/compressed/ code Joerg Roedel
2021-09-13 15:56 ` [PATCH v2 11/12] x86/sev: Handle CLFLUSH MMIO events Joerg Roedel
2021-09-13 15:56 ` [PATCH v2 12/12] x86/sev: Support kexec under SEV-ES with AP Jump Table blob Joerg Roedel
2021-09-13 16:02 ` [PATCH v2 00/12] x86/sev: KEXEC/KDUMP support for SEV-ES guests Dave Hansen
2021-09-13 16:14   ` Joerg Roedel
2021-09-13 16:21     ` Dave Hansen
