* [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES
@ 2022-02-09 16:44 Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler " Varad Gautam
                   ` (9 more replies)
  0 siblings, 10 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

For AMD SEV-ES, kvm-unit-tests currently relies on UEFI to set up a
#VC exception handler. This leads to the following problems:

1) The test's page table needs to map the firmware and the shared
   GHCB used by the firmware.
2) The firmware needs to keep its #VC handler in the current IDT
   so that kvm-unit-tests can copy the #VC entry into its own IDT.
3) The firmware #VC handler might rely on state that is no longer
   available after ExitBootServices.
4) After ExitBootServices, the firmware needs to get the GHCB address
   from the GHCB MSR if it needs to use the kvm-unit-tests GHCB. This
   requires keeping an identity mapping, and the GHCB address must be
   in the MSR at all times when a #VC could occur.

Problems 1) and 2) were temporarily mitigated via commits b114aa57ab
("x86 AMD SEV-ES: Set up GHCB page") and 706ede1833 ("x86 AMD SEV-ES:
Copy UEFI #VC IDT entry") respectively.

However, to make kvm-unit-tests robust against 3) and 4), the tests
must supply their own #VC handler [1][2].

This series adds #VC exception processing from Linux into kvm-unit-tests,
and makes it the default way of handling #VC exceptions.

If --amdsev-efi-vc is passed to ./configure, the tests will continue
using the UEFI #VC handler.

[1] https://lore.kernel.org/all/Yf0GO8EydyQSdZvu@suse.de/
[2] https://lore.kernel.org/all/YSA%2FsYhGgMU72tn+@google.com/

v2:
- Drop #VC processing code for RDTSC/RDTSCP and WBINVD (seanjc). KVM does
  not trap RDTSC/RDTSCP, and the tests do not produce a WBINVD exit to be
  handled.
- Clarify the rationale for tests needing their own #VC handler (marcorr).

Varad Gautam (10):
  x86: AMD SEV-ES: Setup #VC exception handler for AMD SEV-ES
  x86: Move svm.h to lib/x86/
  lib: x86: Import insn decoder from Linux
  x86: AMD SEV-ES: Pull related GHCB definitions and helpers from Linux
  x86: AMD SEV-ES: Prepare for #VC processing
  lib/x86: Move xsave helpers to lib/
  x86: AMD SEV-ES: Handle CPUID #VC
  x86: AMD SEV-ES: Handle MSR #VC
  x86: AMD SEV-ES: Handle IOIO #VC
  x86: AMD SEV-ES: Handle string IO for IOIO #VC

 Makefile                   |    3 +
 configure                  |   21 +
 lib/x86/amd_sev.c          |   13 +-
 lib/x86/amd_sev.h          |  107 +++
 lib/x86/amd_sev_vc.c       |  468 +++++++++++
 lib/x86/desc.c             |   15 +
 lib/x86/desc.h             |    1 +
 lib/x86/insn/inat-tables.c | 1566 ++++++++++++++++++++++++++++++++++++
 lib/x86/insn/inat.c        |   86 ++
 lib/x86/insn/inat.h        |  233 ++++++
 lib/x86/insn/inat_types.h  |   18 +
 lib/x86/insn/insn.c        |  778 ++++++++++++++++++
 lib/x86/insn/insn.h        |  280 +++++++
 lib/x86/setup.c            |    8 +
 {x86 => lib/x86}/svm.h     |   37 +
 lib/x86/xsave.c            |   37 +
 lib/x86/xsave.h            |   16 +
 x86/Makefile.common        |    4 +
 x86/Makefile.x86_64        |    1 +
 x86/svm.c                  |    2 +-
 x86/svm_tests.c            |    2 +-
 x86/xsave.c                |   43 +-
 22 files changed, 3687 insertions(+), 52 deletions(-)
 create mode 100644 lib/x86/amd_sev_vc.c
 create mode 100644 lib/x86/insn/inat-tables.c
 create mode 100644 lib/x86/insn/inat.c
 create mode 100644 lib/x86/insn/inat.h
 create mode 100644 lib/x86/insn/inat_types.h
 create mode 100644 lib/x86/insn/insn.c
 create mode 100644 lib/x86/insn/insn.h
 rename {x86 => lib/x86}/svm.h (93%)
 create mode 100644 lib/x86/xsave.c
 create mode 100644 lib/x86/xsave.h

-- 
2.32.0


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler for AMD SEV-ES
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 16:59   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 02/10] x86: Move svm.h to lib/x86/ Varad Gautam
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

AMD SEV-ES defines a new guest exception, #VC, which is raised on
certain vmexit-causing events so that the guest can control what state
gets shared with the host. kvm-unit-tests currently relies on UEFI to
provide this #VC exception handler. This leads to the following problems:

1) The test's page table needs to map the firmware and the shared
   GHCB used by the firmware.
2) The firmware needs to keep its #VC handler in the current IDT
   so that kvm-unit-tests can copy the #VC entry into its own IDT.
3) The firmware #VC handler might rely on state that is no longer
   available after ExitBootServices.
4) After ExitBootServices, the firmware needs to get the GHCB address
   from the GHCB MSR if it needs to use the kvm-unit-tests GHCB. This
   requires keeping an identity mapping, and the GHCB address must be
   in the MSR at all times when a #VC could occur.

Problems 1) and 2) were temporarily mitigated via commits b114aa57ab
("x86 AMD SEV-ES: Set up GHCB page") and 706ede1833 ("x86 AMD SEV-ES:
Copy UEFI #VC IDT entry") respectively.

However, to make kvm-unit-tests robust against 3) and 4), the tests
must supply their own #VC handler [1][2].

Switch the tests to install their own #VC handler during early boot,
just after the GHCB has been mapped. The tests will use this handler
by default. If --amdsev-efi-vc is passed to ./configure, the tests
will continue using the UEFI #VC handler.

[1] https://lore.kernel.org/all/Yf0GO8EydyQSdZvu@suse.de/
[2] https://lore.kernel.org/all/YSA%2FsYhGgMU72tn+@google.com/

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 Makefile             |  3 +++
 configure            | 21 +++++++++++++++++++++
 lib/x86/amd_sev.c    | 13 +++++--------
 lib/x86/amd_sev.h    |  1 +
 lib/x86/amd_sev_vc.c | 14 ++++++++++++++
 lib/x86/desc.c       | 15 +++++++++++++++
 lib/x86/desc.h       |  1 +
 lib/x86/setup.c      |  8 ++++++++
 x86/Makefile.common  |  1 +
 9 files changed, 69 insertions(+), 8 deletions(-)
 create mode 100644 lib/x86/amd_sev_vc.c

diff --git a/Makefile b/Makefile
index 4f4ad23..94a0162 100644
--- a/Makefile
+++ b/Makefile
@@ -46,6 +46,9 @@ else
 $(error Cannot build $(ARCH_NAME) tests as EFI apps)
 endif
 EFI_CFLAGS := -DTARGET_EFI
+ifeq ($(AMDSEV_EFI_VC),y)
+EFI_CFLAGS += -DAMDSEV_EFI_VC
+endif
 # The following CFLAGS and LDFLAGS come from:
 #   - GNU-EFI/Makefile.defaults
 #   - GNU-EFI/apps/Makefile
diff --git a/configure b/configure
index 2d9c3e0..148d051 100755
--- a/configure
+++ b/configure
@@ -30,6 +30,12 @@ gen_se_header=
 page_size=
 earlycon=
 target_efi=
+# For AMD SEV-ES, the tests build to use their own #VC exception handler
+# by default, instead of using the one installed by UEFI. This ensures
+# that the tests do not depend on UEFI state after ExitBootServices.
+# To continue using the UEFI #VC handler, ./configure can be run with
+# --amdsev-efi-vc.
+amdsev_efi_vc=
 
 usage() {
     cat <<-EOF
@@ -75,6 +81,8 @@ usage() {
 	                           Specify a PL011 compatible UART at address ADDR. Supported
 	                           register stride is 32 bit only.
 	    --target-efi           Boot and run from UEFI
+	    --amdsev-efi-vc        Use UEFI-provided #VC handler on AMD SEV-ES. Requires
+	                           --target-efi.
 EOF
     exit 1
 }
@@ -145,6 +153,9 @@ while [[ "$1" = -* ]]; do
 	--target-efi)
 	    target_efi=y
 	    ;;
+	--amdsev-efi-vc)
+	    amdsev_efi_vc=y
+	    ;;
 	--help)
 	    usage
 	    ;;
@@ -204,8 +215,17 @@ elif [ "$processor" = "arm" ]; then
     processor="cortex-a15"
 fi
 
+if [ "$amdsev_efi_vc" ] && [ "$arch" != "x86_64" ]; then
+    echo "--amdsev-efi-vc requires arch x86_64."
+    usage
+fi
+
 if [ "$arch" = "i386" ] || [ "$arch" = "x86_64" ]; then
     testdir=x86
+    if [ "$amdsev_efi_vc" ] && [ -z "$target_efi" ]; then
+        echo "--amdsev-efi-vc requires --target-efi."
+        usage
+    fi
 elif [ "$arch" = "arm" ] || [ "$arch" = "arm64" ]; then
     testdir=arm
     if [ "$target" = "qemu" ]; then
@@ -363,6 +383,7 @@ WA_DIVIDE=$wa_divide
 GENPROTIMG=${GENPROTIMG-genprotimg}
 HOST_KEY_DOCUMENT=$host_key_document
 TARGET_EFI=$target_efi
+AMDSEV_EFI_VC=$amdsev_efi_vc
 GEN_SE_HEADER=$gen_se_header
 EOF
 if [ "$arch" = "arm" ] || [ "$arch" = "arm64" ]; then
diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
index 6672214..987b59f 100644
--- a/lib/x86/amd_sev.c
+++ b/lib/x86/amd_sev.c
@@ -14,6 +14,7 @@
 #include "x86/vm.h"
 
 static unsigned short amd_sev_c_bit_pos;
+phys_addr_t ghcb_addr;
 
 bool amd_sev_enabled(void)
 {
@@ -100,14 +101,10 @@ efi_status_t setup_amd_sev_es(void)
 
 	/*
 	 * Copy UEFI's #VC IDT entry, so KVM-Unit-Tests can reuse it and does
-	 * not have to re-implement a #VC handler. Also update the #VC IDT code
-	 * segment to use KVM-Unit-Tests segments, KERNEL_CS, so that we do not
+	 * not have to re-implement a #VC handler for #VC exceptions before
+	 * GHCB is mapped. Also update the #VC IDT code segment to use
+	 * KVM-Unit-Tests segments, KERNEL_CS, so that we do not
 	 * have to copy the UEFI GDT entries into KVM-Unit-Tests GDT.
-	 *
-	 * TODO: Reusing UEFI #VC handler is a temporary workaround to simplify
-	 * the boot up process, the long-term solution is to implement a #VC
-	 * handler in kvm-unit-tests and load it, so that kvm-unit-tests does
-	 * not depend on specific UEFI #VC handler implementation.
 	 */
 	sidt(&idtr);
 	idt = (idt_entry_t *)idtr.base;
@@ -126,7 +123,7 @@ void setup_ghcb_pte(pgd_t *page_table)
 	 * function searches GHCB's L1 pte, creates corresponding L1 ptes if not
 	 * found, and unsets the c-bit of GHCB's L1 pte.
 	 */
-	phys_addr_t ghcb_addr, ghcb_base_addr;
+	phys_addr_t ghcb_base_addr;
 	pteval_t *pte;
 
 	/* Read the current GHCB page addr */
diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index 6a10f84..afbacf3 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -54,6 +54,7 @@ efi_status_t setup_amd_sev(void);
 bool amd_sev_es_enabled(void);
 efi_status_t setup_amd_sev_es(void);
 void setup_ghcb_pte(pgd_t *page_table);
+void handle_sev_es_vc(struct ex_regs *regs);
 
 unsigned long long get_amd_sev_c_bit_mask(void);
 unsigned long long get_amd_sev_addr_upperbound(void);
diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
new file mode 100644
index 0000000..8226121
--- /dev/null
+++ b/lib/x86/amd_sev_vc.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include "amd_sev.h"
+
+extern phys_addr_t ghcb_addr;
+
+void handle_sev_es_vc(struct ex_regs *regs)
+{
+	struct ghcb *ghcb = (struct ghcb *) ghcb_addr;
+	if (!ghcb) {
+		/* TODO: kill guest */
+		return;
+	}
+}
diff --git a/lib/x86/desc.c b/lib/x86/desc.c
index 16b7256..73aa866 100644
--- a/lib/x86/desc.c
+++ b/lib/x86/desc.c
@@ -3,6 +3,9 @@
 #include "processor.h"
 #include <setjmp.h>
 #include "apic-defs.h"
+#ifdef TARGET_EFI
+#include "amd_sev.h"
+#endif
 
 /* Boot-related data structures */
 
@@ -228,6 +231,9 @@ EX_E(ac, 17);
 EX(mc, 18);
 EX(xm, 19);
 EX_E(cp, 21);
+#ifdef TARGET_EFI
+EX_E(vc, 29);
+#endif
 
 asm (".pushsection .text \n\t"
      "__handle_exception: \n\t"
@@ -293,6 +299,15 @@ void setup_idt(void)
     handle_exception(13, check_exception_table);
 }
 
+void setup_amd_sev_es_vc(void)
+{
+	if (!amd_sev_es_enabled())
+		return;
+
+	set_idt_entry(29, &vc_fault, 0);
+	handle_exception(29, handle_sev_es_vc);
+}
+
 unsigned exception_vector(void)
 {
     unsigned char vector;
diff --git a/lib/x86/desc.h b/lib/x86/desc.h
index 9b81da0..6d95ab3 100644
--- a/lib/x86/desc.h
+++ b/lib/x86/desc.h
@@ -224,6 +224,7 @@ void set_intr_alt_stack(int e, void *fn);
 void print_current_tss_info(void);
 handler handle_exception(u8 v, handler fn);
 void unhandled_exception(struct ex_regs *regs, bool cpu);
+void setup_amd_sev_es_vc(void);
 
 bool test_for_exception(unsigned int ex, void (*trigger_func)(void *data),
 			void *data);
diff --git a/lib/x86/setup.c b/lib/x86/setup.c
index bbd3468..9de946b 100644
--- a/lib/x86/setup.c
+++ b/lib/x86/setup.c
@@ -327,6 +327,14 @@ efi_status_t setup_efi(efi_bootinfo_t *efi_bootinfo)
 	smp_init();
 	setup_page_table();
 
+#ifndef AMDSEV_EFI_VC
+	/*
+	 * Switch away from the UEFI-installed #VC handler.
+	 * GHCB has already been mapped at this point.
+	 */
+	setup_amd_sev_es_vc();
+#endif /* AMDSEV_EFI_VC */
+
 	return EFI_SUCCESS;
 }
 
diff --git a/x86/Makefile.common b/x86/Makefile.common
index ff02d98..ae426aa 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -24,6 +24,7 @@ cflatobjs += lib/x86/fault_test.o
 cflatobjs += lib/x86/delay.o
 ifeq ($(TARGET_EFI),y)
 cflatobjs += lib/x86/amd_sev.o
+cflatobjs += lib/x86/amd_sev_vc.o
 cflatobjs += lib/efi.o
 cflatobjs += x86/efi/reloc_x86_64.o
 endif
-- 
2.32.0



* [kvm-unit-tests PATCH v2 02/10] x86: Move svm.h to lib/x86/
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler " Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux Varad Gautam
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

to share common definitions across testcases and lib/.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
Reviewed-by: Marc Orr <marcorr@google.com>
---
 {x86 => lib/x86}/svm.h | 0
 x86/svm.c              | 2 +-
 x86/svm_tests.c        | 2 +-
 3 files changed, 2 insertions(+), 2 deletions(-)
 rename {x86 => lib/x86}/svm.h (100%)

diff --git a/x86/svm.h b/lib/x86/svm.h
similarity index 100%
rename from x86/svm.h
rename to lib/x86/svm.h
diff --git a/x86/svm.c b/x86/svm.c
index 3f94b2a..7cfef9e 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -2,7 +2,7 @@
  * Framework for testing nested virtualization
  */
 
-#include "svm.h"
+#include "x86/svm.h"
 #include "libcflat.h"
 #include "processor.h"
 #include "desc.h"
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 0707786..7756296 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1,4 +1,4 @@
-#include "svm.h"
+#include "x86/svm.h"
 #include "libcflat.h"
 #include "processor.h"
 #include "desc.h"
-- 
2.32.0



* [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler " Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 02/10] x86: Move svm.h to lib/x86/ Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 17:42   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers " Varad Gautam
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Processing #VC exceptions on AMD SEV-ES requires instruction decoding
logic to set up the right GHCB state before exiting to the host.

Pull in the instruction decoder from Linux for this purpose.

Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/insn/inat-tables.c | 1566 ++++++++++++++++++++++++++++++++++++
 lib/x86/insn/inat.c        |   86 ++
 lib/x86/insn/inat.h        |  233 ++++++
 lib/x86/insn/inat_types.h  |   18 +
 lib/x86/insn/insn.c        |  778 ++++++++++++++++++
 lib/x86/insn/insn.h        |  280 +++++++
 x86/Makefile.common        |    2 +
 7 files changed, 2963 insertions(+)
 create mode 100644 lib/x86/insn/inat-tables.c
 create mode 100644 lib/x86/insn/inat.c
 create mode 100644 lib/x86/insn/inat.h
 create mode 100644 lib/x86/insn/inat_types.h
 create mode 100644 lib/x86/insn/insn.c
 create mode 100644 lib/x86/insn/insn.h

diff --git a/lib/x86/insn/inat-tables.c b/lib/x86/insn/inat-tables.c
new file mode 100644
index 0000000..3e5fdba
--- /dev/null
+++ b/lib/x86/insn/inat-tables.c
@@ -0,0 +1,1566 @@
+/*
+ * x86 opcode map generated from x86-opcode-map.txt
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   arch/x86/lib/inat-tables.c
+ */
+
+/* Table: one byte opcode */
+const insn_attr_t inat_primary_table[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_MODRM,
+	[0x01] = INAT_MODRM,
+	[0x02] = INAT_MODRM,
+	[0x03] = INAT_MODRM,
+	[0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x05] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x08] = INAT_MODRM,
+	[0x09] = INAT_MODRM,
+	[0x0a] = INAT_MODRM,
+	[0x0b] = INAT_MODRM,
+	[0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x0d] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x0f] = INAT_MAKE_ESCAPE(1),
+	[0x10] = INAT_MODRM,
+	[0x11] = INAT_MODRM,
+	[0x12] = INAT_MODRM,
+	[0x13] = INAT_MODRM,
+	[0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x15] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x18] = INAT_MODRM,
+	[0x19] = INAT_MODRM,
+	[0x1a] = INAT_MODRM,
+	[0x1b] = INAT_MODRM,
+	[0x1c] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x1d] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x20] = INAT_MODRM,
+	[0x21] = INAT_MODRM,
+	[0x22] = INAT_MODRM,
+	[0x23] = INAT_MODRM,
+	[0x24] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x25] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x26] = INAT_MAKE_PREFIX(INAT_PFX_ES),
+	[0x28] = INAT_MODRM,
+	[0x29] = INAT_MODRM,
+	[0x2a] = INAT_MODRM,
+	[0x2b] = INAT_MODRM,
+	[0x2c] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x2d] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x2e] = INAT_MAKE_PREFIX(INAT_PFX_CS),
+	[0x30] = INAT_MODRM,
+	[0x31] = INAT_MODRM,
+	[0x32] = INAT_MODRM,
+	[0x33] = INAT_MODRM,
+	[0x34] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x35] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x36] = INAT_MAKE_PREFIX(INAT_PFX_SS),
+	[0x38] = INAT_MODRM,
+	[0x39] = INAT_MODRM,
+	[0x3a] = INAT_MODRM,
+	[0x3b] = INAT_MODRM,
+	[0x3c] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x3d] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0x3e] = INAT_MAKE_PREFIX(INAT_PFX_DS),
+	[0x40] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x41] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x42] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x43] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x44] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x45] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x46] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x47] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x48] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x49] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4a] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4b] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4c] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4d] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4e] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x4f] = INAT_MAKE_PREFIX(INAT_PFX_REX),
+	[0x50] = INAT_FORCE64,
+	[0x51] = INAT_FORCE64,
+	[0x52] = INAT_FORCE64,
+	[0x53] = INAT_FORCE64,
+	[0x54] = INAT_FORCE64,
+	[0x55] = INAT_FORCE64,
+	[0x56] = INAT_FORCE64,
+	[0x57] = INAT_FORCE64,
+	[0x58] = INAT_FORCE64,
+	[0x59] = INAT_FORCE64,
+	[0x5a] = INAT_FORCE64,
+	[0x5b] = INAT_FORCE64,
+	[0x5c] = INAT_FORCE64,
+	[0x5d] = INAT_FORCE64,
+	[0x5e] = INAT_FORCE64,
+	[0x5f] = INAT_FORCE64,
+	[0x62] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_EVEX),
+	[0x63] = INAT_MODRM | INAT_MODRM,
+	[0x64] = INAT_MAKE_PREFIX(INAT_PFX_FS),
+	[0x65] = INAT_MAKE_PREFIX(INAT_PFX_GS),
+	[0x66] = INAT_MAKE_PREFIX(INAT_PFX_OPNDSZ),
+	[0x67] = INAT_MAKE_PREFIX(INAT_PFX_ADDRSZ),
+	[0x68] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x69] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM,
+	[0x6a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0x6b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x71] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x72] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x73] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x74] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x75] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x76] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x77] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x78] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x79] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7a] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7b] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7c] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7d] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7e] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x7f] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0x80] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1),
+	[0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(1),
+	[0x82] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1),
+	[0x83] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1),
+	[0x84] = INAT_MODRM,
+	[0x85] = INAT_MODRM,
+	[0x86] = INAT_MODRM,
+	[0x87] = INAT_MODRM,
+	[0x88] = INAT_MODRM,
+	[0x89] = INAT_MODRM,
+	[0x8a] = INAT_MODRM,
+	[0x8b] = INAT_MODRM,
+	[0x8c] = INAT_MODRM,
+	[0x8d] = INAT_MODRM,
+	[0x8e] = INAT_MODRM,
+	[0x8f] = INAT_MAKE_GROUP(2) | INAT_MODRM | INAT_FORCE64,
+	[0x9a] = INAT_MAKE_IMM(INAT_IMM_PTR),
+	[0x9c] = INAT_FORCE64,
+	[0x9d] = INAT_FORCE64,
+	[0xa0] = INAT_MOFFSET,
+	[0xa1] = INAT_MOFFSET,
+	[0xa2] = INAT_MOFFSET,
+	[0xa3] = INAT_MOFFSET,
+	[0xa8] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xa9] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+	[0xb0] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb1] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb2] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb3] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb4] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb5] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb6] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb7] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xb8] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xb9] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xba] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xbb] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xbc] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xbd] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xbe] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xbf] = INAT_MAKE_IMM(INAT_IMM_VWORD),
+	[0xc0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xc1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xc2] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_FORCE64,
+	[0xc4] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX3),
+	[0xc5] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX2),
+	[0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(4),
+	[0xc7] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(5),
+	[0xc8] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_SCNDIMM,
+	[0xc9] = INAT_FORCE64,
+	[0xca] = INAT_MAKE_IMM(INAT_IMM_WORD),
+	[0xcd] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xd0] = INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xd1] = INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xd2] = INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xd3] = INAT_MODRM | INAT_MAKE_GROUP(3),
+	[0xd4] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xd5] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xd8] = INAT_MODRM,
+	[0xd9] = INAT_MODRM,
+	[0xda] = INAT_MODRM,
+	[0xdb] = INAT_MODRM,
+	[0xdc] = INAT_MODRM,
+	[0xdd] = INAT_MODRM,
+	[0xde] = INAT_MODRM,
+	[0xdf] = INAT_MODRM,
+	[0xe0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0xe1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0xe2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0xe3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0xe4] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xe5] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xe6] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xe7] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+	[0xe8] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0xe9] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0xea] = INAT_MAKE_IMM(INAT_IMM_PTR),
+	[0xeb] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64,
+	[0xf0] = INAT_MAKE_PREFIX(INAT_PFX_LOCK),
+	[0xf2] = INAT_MAKE_PREFIX(INAT_PFX_REPNE) | INAT_MAKE_PREFIX(INAT_PFX_REPNE),
+	[0xf3] = INAT_MAKE_PREFIX(INAT_PFX_REPE) | INAT_MAKE_PREFIX(INAT_PFX_REPE),
+	[0xf6] = INAT_MODRM | INAT_MAKE_GROUP(6),
+	[0xf7] = INAT_MODRM | INAT_MAKE_GROUP(7),
+	[0xfe] = INAT_MAKE_GROUP(8),
+	[0xff] = INAT_MAKE_GROUP(9),
+};
+
+/* Table: 2-byte opcode (0x0f) */
+const insn_attr_t inat_escape_table_1[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_MAKE_GROUP(10),
+	[0x01] = INAT_MAKE_GROUP(11),
+	[0x02] = INAT_MODRM,
+	[0x03] = INAT_MODRM,
+	[0x0d] = INAT_MAKE_GROUP(12),
+	[0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0x10] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x11] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x12] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x13] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x14] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x15] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x16] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x17] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x18] = INAT_MAKE_GROUP(13),
+	[0x1a] = INAT_MODRM | INAT_VARIANT,
+	[0x1b] = INAT_MODRM | INAT_VARIANT,
+	[0x1c] = INAT_MAKE_GROUP(14),
+	[0x1e] = INAT_MAKE_GROUP(15),
+	[0x1f] = INAT_MODRM,
+	[0x20] = INAT_MODRM,
+	[0x21] = INAT_MODRM,
+	[0x22] = INAT_MODRM,
+	[0x23] = INAT_MODRM,
+	[0x28] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x29] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x2a] = INAT_MODRM | INAT_VARIANT,
+	[0x2b] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x2c] = INAT_MODRM | INAT_VARIANT,
+	[0x2d] = INAT_MODRM | INAT_VARIANT,
+	[0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x38] = INAT_MAKE_ESCAPE(2),
+	[0x3a] = INAT_MAKE_ESCAPE(3),
+	[0x40] = INAT_MODRM,
+	[0x41] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x42] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x43] = INAT_MODRM,
+	[0x44] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x45] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x46] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x47] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x48] = INAT_MODRM,
+	[0x49] = INAT_MODRM,
+	[0x4a] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x4b] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x4c] = INAT_MODRM,
+	[0x4d] = INAT_MODRM,
+	[0x4e] = INAT_MODRM,
+	[0x4f] = INAT_MODRM,
+	[0x50] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x51] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x52] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x53] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x54] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x55] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x56] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x57] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x58] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x59] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5b] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5c] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5d] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x5f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x60] = INAT_MODRM | INAT_VARIANT,
+	[0x61] = INAT_MODRM | INAT_VARIANT,
+	[0x62] = INAT_MODRM | INAT_VARIANT,
+	[0x63] = INAT_MODRM | INAT_VARIANT,
+	[0x64] = INAT_MODRM | INAT_VARIANT,
+	[0x65] = INAT_MODRM | INAT_VARIANT,
+	[0x66] = INAT_MODRM | INAT_VARIANT,
+	[0x67] = INAT_MODRM | INAT_VARIANT,
+	[0x68] = INAT_MODRM | INAT_VARIANT,
+	[0x69] = INAT_MODRM | INAT_VARIANT,
+	[0x6a] = INAT_MODRM | INAT_VARIANT,
+	[0x6b] = INAT_MODRM | INAT_VARIANT,
+	[0x6c] = INAT_VARIANT,
+	[0x6d] = INAT_VARIANT,
+	[0x6e] = INAT_MODRM | INAT_VARIANT,
+	[0x6f] = INAT_MODRM | INAT_VARIANT,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x71] = INAT_MAKE_GROUP(16),
+	[0x72] = INAT_MAKE_GROUP(17),
+	[0x73] = INAT_MAKE_GROUP(18),
+	[0x74] = INAT_MODRM | INAT_VARIANT,
+	[0x75] = INAT_MODRM | INAT_VARIANT,
+	[0x76] = INAT_MODRM | INAT_VARIANT,
+	[0x77] = INAT_VEXOK | INAT_VEXOK,
+	[0x78] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x79] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x7a] = INAT_VARIANT,
+	[0x7b] = INAT_VARIANT,
+	[0x7c] = INAT_VARIANT,
+	[0x7d] = INAT_VARIANT,
+	[0x7e] = INAT_MODRM | INAT_VARIANT,
+	[0x7f] = INAT_MODRM | INAT_VARIANT,
+	[0x80] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x82] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x83] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x84] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x85] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x86] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x87] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x88] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x89] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8a] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8b] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8c] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8d] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8e] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x8f] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64,
+	[0x90] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x91] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x92] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x93] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x94] = INAT_MODRM,
+	[0x95] = INAT_MODRM,
+	[0x96] = INAT_MODRM,
+	[0x97] = INAT_MODRM,
+	[0x98] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x99] = INAT_MODRM | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x9a] = INAT_MODRM,
+	[0x9b] = INAT_MODRM,
+	[0x9c] = INAT_MODRM,
+	[0x9d] = INAT_MODRM,
+	[0x9e] = INAT_MODRM,
+	[0x9f] = INAT_MODRM,
+	[0xa0] = INAT_FORCE64,
+	[0xa1] = INAT_FORCE64,
+	[0xa3] = INAT_MODRM,
+	[0xa4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0xa5] = INAT_MODRM,
+	[0xa6] = INAT_MAKE_GROUP(19),
+	[0xa7] = INAT_MAKE_GROUP(20),
+	[0xa8] = INAT_FORCE64,
+	[0xa9] = INAT_FORCE64,
+	[0xab] = INAT_MODRM,
+	[0xac] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0xad] = INAT_MODRM,
+	[0xae] = INAT_MAKE_GROUP(21),
+	[0xaf] = INAT_MODRM,
+	[0xb0] = INAT_MODRM,
+	[0xb1] = INAT_MODRM,
+	[0xb2] = INAT_MODRM,
+	[0xb3] = INAT_MODRM,
+	[0xb4] = INAT_MODRM,
+	[0xb5] = INAT_MODRM,
+	[0xb6] = INAT_MODRM,
+	[0xb7] = INAT_MODRM,
+	[0xb8] = INAT_VARIANT,
+	[0xb9] = INAT_MAKE_GROUP(22),
+	[0xba] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(23),
+	[0xbb] = INAT_MODRM,
+	[0xbc] = INAT_MODRM | INAT_VARIANT,
+	[0xbd] = INAT_MODRM | INAT_VARIANT,
+	[0xbe] = INAT_MODRM,
+	[0xbf] = INAT_MODRM,
+	[0xc0] = INAT_MODRM,
+	[0xc1] = INAT_MODRM,
+	[0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0xc3] = INAT_MODRM,
+	[0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0xc7] = INAT_MAKE_GROUP(24),
+	[0xd0] = INAT_VARIANT,
+	[0xd1] = INAT_MODRM | INAT_VARIANT,
+	[0xd2] = INAT_MODRM | INAT_VARIANT,
+	[0xd3] = INAT_MODRM | INAT_VARIANT,
+	[0xd4] = INAT_MODRM | INAT_VARIANT,
+	[0xd5] = INAT_MODRM | INAT_VARIANT,
+	[0xd6] = INAT_VARIANT,
+	[0xd7] = INAT_MODRM | INAT_VARIANT,
+	[0xd8] = INAT_MODRM | INAT_VARIANT,
+	[0xd9] = INAT_MODRM | INAT_VARIANT,
+	[0xda] = INAT_MODRM | INAT_VARIANT,
+	[0xdb] = INAT_MODRM | INAT_VARIANT,
+	[0xdc] = INAT_MODRM | INAT_VARIANT,
+	[0xdd] = INAT_MODRM | INAT_VARIANT,
+	[0xde] = INAT_MODRM | INAT_VARIANT,
+	[0xdf] = INAT_MODRM | INAT_VARIANT,
+	[0xe0] = INAT_MODRM | INAT_VARIANT,
+	[0xe1] = INAT_MODRM | INAT_VARIANT,
+	[0xe2] = INAT_MODRM | INAT_VARIANT,
+	[0xe3] = INAT_MODRM | INAT_VARIANT,
+	[0xe4] = INAT_MODRM | INAT_VARIANT,
+	[0xe5] = INAT_MODRM | INAT_VARIANT,
+	[0xe6] = INAT_VARIANT,
+	[0xe7] = INAT_MODRM | INAT_VARIANT,
+	[0xe8] = INAT_MODRM | INAT_VARIANT,
+	[0xe9] = INAT_MODRM | INAT_VARIANT,
+	[0xea] = INAT_MODRM | INAT_VARIANT,
+	[0xeb] = INAT_MODRM | INAT_VARIANT,
+	[0xec] = INAT_MODRM | INAT_VARIANT,
+	[0xed] = INAT_MODRM | INAT_VARIANT,
+	[0xee] = INAT_MODRM | INAT_VARIANT,
+	[0xef] = INAT_MODRM | INAT_VARIANT,
+	[0xf0] = INAT_VARIANT,
+	[0xf1] = INAT_MODRM | INAT_VARIANT,
+	[0xf2] = INAT_MODRM | INAT_VARIANT,
+	[0xf3] = INAT_MODRM | INAT_VARIANT,
+	[0xf4] = INAT_MODRM | INAT_VARIANT,
+	[0xf5] = INAT_MODRM | INAT_VARIANT,
+	[0xf6] = INAT_MODRM | INAT_VARIANT,
+	[0xf7] = INAT_MODRM | INAT_VARIANT,
+	[0xf8] = INAT_MODRM | INAT_VARIANT,
+	[0xf9] = INAT_MODRM | INAT_VARIANT,
+	[0xfa] = INAT_MODRM | INAT_VARIANT,
+	[0xfb] = INAT_MODRM | INAT_VARIANT,
+	[0xfc] = INAT_MODRM | INAT_VARIANT,
+	[0xfd] = INAT_MODRM | INAT_VARIANT,
+	[0xfe] = INAT_MODRM | INAT_VARIANT,
+};
+const insn_attr_t inat_escape_table_1_1[INAT_OPCODE_TABLE_SIZE] = {
+	[0x10] = INAT_MODRM | INAT_VEXOK,
+	[0x11] = INAT_MODRM | INAT_VEXOK,
+	[0x12] = INAT_MODRM | INAT_VEXOK,
+	[0x13] = INAT_MODRM | INAT_VEXOK,
+	[0x14] = INAT_MODRM | INAT_VEXOK,
+	[0x15] = INAT_MODRM | INAT_VEXOK,
+	[0x16] = INAT_MODRM | INAT_VEXOK,
+	[0x17] = INAT_MODRM | INAT_VEXOK,
+	[0x1a] = INAT_MODRM,
+	[0x1b] = INAT_MODRM,
+	[0x28] = INAT_MODRM | INAT_VEXOK,
+	[0x29] = INAT_MODRM | INAT_VEXOK,
+	[0x2a] = INAT_MODRM,
+	[0x2b] = INAT_MODRM | INAT_VEXOK,
+	[0x2c] = INAT_MODRM,
+	[0x2d] = INAT_MODRM,
+	[0x2e] = INAT_MODRM | INAT_VEXOK,
+	[0x2f] = INAT_MODRM | INAT_VEXOK,
+	[0x41] = INAT_MODRM | INAT_VEXOK,
+	[0x42] = INAT_MODRM | INAT_VEXOK,
+	[0x44] = INAT_MODRM | INAT_VEXOK,
+	[0x45] = INAT_MODRM | INAT_VEXOK,
+	[0x46] = INAT_MODRM | INAT_VEXOK,
+	[0x47] = INAT_MODRM | INAT_VEXOK,
+	[0x4a] = INAT_MODRM | INAT_VEXOK,
+	[0x4b] = INAT_MODRM | INAT_VEXOK,
+	[0x50] = INAT_MODRM | INAT_VEXOK,
+	[0x51] = INAT_MODRM | INAT_VEXOK,
+	[0x54] = INAT_MODRM | INAT_VEXOK,
+	[0x55] = INAT_MODRM | INAT_VEXOK,
+	[0x56] = INAT_MODRM | INAT_VEXOK,
+	[0x57] = INAT_MODRM | INAT_VEXOK,
+	[0x58] = INAT_MODRM | INAT_VEXOK,
+	[0x59] = INAT_MODRM | INAT_VEXOK,
+	[0x5a] = INAT_MODRM | INAT_VEXOK,
+	[0x5b] = INAT_MODRM | INAT_VEXOK,
+	[0x5c] = INAT_MODRM | INAT_VEXOK,
+	[0x5d] = INAT_MODRM | INAT_VEXOK,
+	[0x5e] = INAT_MODRM | INAT_VEXOK,
+	[0x5f] = INAT_MODRM | INAT_VEXOK,
+	[0x60] = INAT_MODRM | INAT_VEXOK,
+	[0x61] = INAT_MODRM | INAT_VEXOK,
+	[0x62] = INAT_MODRM | INAT_VEXOK,
+	[0x63] = INAT_MODRM | INAT_VEXOK,
+	[0x64] = INAT_MODRM | INAT_VEXOK,
+	[0x65] = INAT_MODRM | INAT_VEXOK,
+	[0x66] = INAT_MODRM | INAT_VEXOK,
+	[0x67] = INAT_MODRM | INAT_VEXOK,
+	[0x68] = INAT_MODRM | INAT_VEXOK,
+	[0x69] = INAT_MODRM | INAT_VEXOK,
+	[0x6a] = INAT_MODRM | INAT_VEXOK,
+	[0x6b] = INAT_MODRM | INAT_VEXOK,
+	[0x6c] = INAT_MODRM | INAT_VEXOK,
+	[0x6d] = INAT_MODRM | INAT_VEXOK,
+	[0x6e] = INAT_MODRM | INAT_VEXOK,
+	[0x6f] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x74] = INAT_MODRM | INAT_VEXOK,
+	[0x75] = INAT_MODRM | INAT_VEXOK,
+	[0x76] = INAT_MODRM | INAT_VEXOK,
+	[0x78] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x79] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7c] = INAT_MODRM | INAT_VEXOK,
+	[0x7d] = INAT_MODRM | INAT_VEXOK,
+	[0x7e] = INAT_MODRM | INAT_VEXOK,
+	[0x7f] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x90] = INAT_MODRM | INAT_VEXOK,
+	[0x91] = INAT_MODRM | INAT_VEXOK,
+	[0x92] = INAT_MODRM | INAT_VEXOK,
+	[0x93] = INAT_MODRM | INAT_VEXOK,
+	[0x98] = INAT_MODRM | INAT_VEXOK,
+	[0x99] = INAT_MODRM | INAT_VEXOK,
+	[0xbc] = INAT_MODRM,
+	[0xbd] = INAT_MODRM,
+	[0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xd0] = INAT_MODRM | INAT_VEXOK,
+	[0xd1] = INAT_MODRM | INAT_VEXOK,
+	[0xd2] = INAT_MODRM | INAT_VEXOK,
+	[0xd3] = INAT_MODRM | INAT_VEXOK,
+	[0xd4] = INAT_MODRM | INAT_VEXOK,
+	[0xd5] = INAT_MODRM | INAT_VEXOK,
+	[0xd6] = INAT_MODRM | INAT_VEXOK,
+	[0xd7] = INAT_MODRM | INAT_VEXOK,
+	[0xd8] = INAT_MODRM | INAT_VEXOK,
+	[0xd9] = INAT_MODRM | INAT_VEXOK,
+	[0xda] = INAT_MODRM | INAT_VEXOK,
+	[0xdb] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0xdc] = INAT_MODRM | INAT_VEXOK,
+	[0xdd] = INAT_MODRM | INAT_VEXOK,
+	[0xde] = INAT_MODRM | INAT_VEXOK,
+	[0xdf] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0xe0] = INAT_MODRM | INAT_VEXOK,
+	[0xe1] = INAT_MODRM | INAT_VEXOK,
+	[0xe2] = INAT_MODRM | INAT_VEXOK,
+	[0xe3] = INAT_MODRM | INAT_VEXOK,
+	[0xe4] = INAT_MODRM | INAT_VEXOK,
+	[0xe5] = INAT_MODRM | INAT_VEXOK,
+	[0xe6] = INAT_MODRM | INAT_VEXOK,
+	[0xe7] = INAT_MODRM | INAT_VEXOK,
+	[0xe8] = INAT_MODRM | INAT_VEXOK,
+	[0xe9] = INAT_MODRM | INAT_VEXOK,
+	[0xea] = INAT_MODRM | INAT_VEXOK,
+	[0xeb] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0xec] = INAT_MODRM | INAT_VEXOK,
+	[0xed] = INAT_MODRM | INAT_VEXOK,
+	[0xee] = INAT_MODRM | INAT_VEXOK,
+	[0xef] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0xf1] = INAT_MODRM | INAT_VEXOK,
+	[0xf2] = INAT_MODRM | INAT_VEXOK,
+	[0xf3] = INAT_MODRM | INAT_VEXOK,
+	[0xf4] = INAT_MODRM | INAT_VEXOK,
+	[0xf5] = INAT_MODRM | INAT_VEXOK,
+	[0xf6] = INAT_MODRM | INAT_VEXOK,
+	[0xf7] = INAT_MODRM | INAT_VEXOK,
+	[0xf8] = INAT_MODRM | INAT_VEXOK,
+	[0xf9] = INAT_MODRM | INAT_VEXOK,
+	[0xfa] = INAT_MODRM | INAT_VEXOK,
+	[0xfb] = INAT_MODRM | INAT_VEXOK,
+	[0xfc] = INAT_MODRM | INAT_VEXOK,
+	[0xfd] = INAT_MODRM | INAT_VEXOK,
+	[0xfe] = INAT_MODRM | INAT_VEXOK,
+};
+const insn_attr_t inat_escape_table_1_2[INAT_OPCODE_TABLE_SIZE] = {
+	[0x10] = INAT_MODRM | INAT_VEXOK,
+	[0x11] = INAT_MODRM | INAT_VEXOK,
+	[0x12] = INAT_MODRM | INAT_VEXOK,
+	[0x16] = INAT_MODRM | INAT_VEXOK,
+	[0x1a] = INAT_MODRM,
+	[0x1b] = INAT_MODRM,
+	[0x2a] = INAT_MODRM | INAT_VEXOK,
+	[0x2c] = INAT_MODRM | INAT_VEXOK,
+	[0x2d] = INAT_MODRM | INAT_VEXOK,
+	[0x51] = INAT_MODRM | INAT_VEXOK,
+	[0x52] = INAT_MODRM | INAT_VEXOK,
+	[0x53] = INAT_MODRM | INAT_VEXOK,
+	[0x58] = INAT_MODRM | INAT_VEXOK,
+	[0x59] = INAT_MODRM | INAT_VEXOK,
+	[0x5a] = INAT_MODRM | INAT_VEXOK,
+	[0x5b] = INAT_MODRM | INAT_VEXOK,
+	[0x5c] = INAT_MODRM | INAT_VEXOK,
+	[0x5d] = INAT_MODRM | INAT_VEXOK,
+	[0x5e] = INAT_MODRM | INAT_VEXOK,
+	[0x5f] = INAT_MODRM | INAT_VEXOK,
+	[0x6f] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x78] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x79] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7e] = INAT_MODRM | INAT_VEXOK,
+	[0x7f] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0xb8] = INAT_MODRM,
+	[0xbc] = INAT_MODRM,
+	[0xbd] = INAT_MODRM,
+	[0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xd6] = INAT_MODRM,
+	[0xe6] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+};
+const insn_attr_t inat_escape_table_1_3[INAT_OPCODE_TABLE_SIZE] = {
+	[0x10] = INAT_MODRM | INAT_VEXOK,
+	[0x11] = INAT_MODRM | INAT_VEXOK,
+	[0x12] = INAT_MODRM | INAT_VEXOK,
+	[0x1a] = INAT_MODRM,
+	[0x1b] = INAT_MODRM,
+	[0x2a] = INAT_MODRM | INAT_VEXOK,
+	[0x2c] = INAT_MODRM | INAT_VEXOK,
+	[0x2d] = INAT_MODRM | INAT_VEXOK,
+	[0x51] = INAT_MODRM | INAT_VEXOK,
+	[0x58] = INAT_MODRM | INAT_VEXOK,
+	[0x59] = INAT_MODRM | INAT_VEXOK,
+	[0x5a] = INAT_MODRM | INAT_VEXOK,
+	[0x5c] = INAT_MODRM | INAT_VEXOK,
+	[0x5d] = INAT_MODRM | INAT_VEXOK,
+	[0x5e] = INAT_MODRM | INAT_VEXOK,
+	[0x5f] = INAT_MODRM | INAT_VEXOK,
+	[0x6f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x78] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x79] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7c] = INAT_MODRM | INAT_VEXOK,
+	[0x7d] = INAT_MODRM | INAT_VEXOK,
+	[0x7f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x92] = INAT_MODRM | INAT_VEXOK,
+	[0x93] = INAT_MODRM | INAT_VEXOK,
+	[0xbc] = INAT_MODRM,
+	[0xbd] = INAT_MODRM,
+	[0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xd0] = INAT_MODRM | INAT_VEXOK,
+	[0xd6] = INAT_MODRM,
+	[0xe6] = INAT_MODRM | INAT_VEXOK,
+	[0xf0] = INAT_MODRM | INAT_VEXOK,
+};
+
+/* Table: 3-byte opcode 1 (0x0f 0x38) */
+const insn_attr_t inat_escape_table_2[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_MODRM | INAT_VARIANT,
+	[0x01] = INAT_MODRM | INAT_VARIANT,
+	[0x02] = INAT_MODRM | INAT_VARIANT,
+	[0x03] = INAT_MODRM | INAT_VARIANT,
+	[0x04] = INAT_MODRM | INAT_VARIANT,
+	[0x05] = INAT_MODRM | INAT_VARIANT,
+	[0x06] = INAT_MODRM | INAT_VARIANT,
+	[0x07] = INAT_MODRM | INAT_VARIANT,
+	[0x08] = INAT_MODRM | INAT_VARIANT,
+	[0x09] = INAT_MODRM | INAT_VARIANT,
+	[0x0a] = INAT_MODRM | INAT_VARIANT,
+	[0x0b] = INAT_MODRM | INAT_VARIANT,
+	[0x0c] = INAT_VARIANT,
+	[0x0d] = INAT_VARIANT,
+	[0x0e] = INAT_VARIANT,
+	[0x0f] = INAT_VARIANT,
+	[0x10] = INAT_VARIANT,
+	[0x11] = INAT_VARIANT,
+	[0x12] = INAT_VARIANT,
+	[0x13] = INAT_VARIANT,
+	[0x14] = INAT_VARIANT,
+	[0x15] = INAT_VARIANT,
+	[0x16] = INAT_VARIANT,
+	[0x17] = INAT_VARIANT,
+	[0x18] = INAT_VARIANT,
+	[0x19] = INAT_VARIANT,
+	[0x1a] = INAT_VARIANT,
+	[0x1b] = INAT_VARIANT,
+	[0x1c] = INAT_MODRM | INAT_VARIANT,
+	[0x1d] = INAT_MODRM | INAT_VARIANT,
+	[0x1e] = INAT_MODRM | INAT_VARIANT,
+	[0x1f] = INAT_VARIANT,
+	[0x20] = INAT_VARIANT,
+	[0x21] = INAT_VARIANT,
+	[0x22] = INAT_VARIANT,
+	[0x23] = INAT_VARIANT,
+	[0x24] = INAT_VARIANT,
+	[0x25] = INAT_VARIANT,
+	[0x26] = INAT_VARIANT,
+	[0x27] = INAT_VARIANT,
+	[0x28] = INAT_VARIANT,
+	[0x29] = INAT_VARIANT,
+	[0x2a] = INAT_VARIANT,
+	[0x2b] = INAT_VARIANT,
+	[0x2c] = INAT_VARIANT,
+	[0x2d] = INAT_VARIANT,
+	[0x2e] = INAT_VARIANT,
+	[0x2f] = INAT_VARIANT,
+	[0x30] = INAT_VARIANT,
+	[0x31] = INAT_VARIANT,
+	[0x32] = INAT_VARIANT,
+	[0x33] = INAT_VARIANT,
+	[0x34] = INAT_VARIANT,
+	[0x35] = INAT_VARIANT,
+	[0x36] = INAT_VARIANT,
+	[0x37] = INAT_VARIANT,
+	[0x38] = INAT_VARIANT,
+	[0x39] = INAT_VARIANT,
+	[0x3a] = INAT_VARIANT,
+	[0x3b] = INAT_VARIANT,
+	[0x3c] = INAT_VARIANT,
+	[0x3d] = INAT_VARIANT,
+	[0x3e] = INAT_VARIANT,
+	[0x3f] = INAT_VARIANT,
+	[0x40] = INAT_VARIANT,
+	[0x41] = INAT_VARIANT,
+	[0x42] = INAT_VARIANT,
+	[0x43] = INAT_VARIANT,
+	[0x44] = INAT_VARIANT,
+	[0x45] = INAT_VARIANT,
+	[0x46] = INAT_VARIANT,
+	[0x47] = INAT_VARIANT,
+	[0x4c] = INAT_VARIANT,
+	[0x4d] = INAT_VARIANT,
+	[0x4e] = INAT_VARIANT,
+	[0x4f] = INAT_VARIANT,
+	[0x50] = INAT_VARIANT,
+	[0x51] = INAT_VARIANT,
+	[0x52] = INAT_VARIANT,
+	[0x53] = INAT_VARIANT,
+	[0x54] = INAT_VARIANT,
+	[0x55] = INAT_VARIANT,
+	[0x58] = INAT_VARIANT,
+	[0x59] = INAT_VARIANT,
+	[0x5a] = INAT_VARIANT,
+	[0x5b] = INAT_VARIANT,
+	[0x62] = INAT_VARIANT,
+	[0x63] = INAT_VARIANT,
+	[0x64] = INAT_VARIANT,
+	[0x65] = INAT_VARIANT,
+	[0x66] = INAT_VARIANT,
+	[0x68] = INAT_VARIANT,
+	[0x70] = INAT_VARIANT,
+	[0x71] = INAT_VARIANT,
+	[0x72] = INAT_VARIANT,
+	[0x73] = INAT_VARIANT,
+	[0x75] = INAT_VARIANT,
+	[0x76] = INAT_VARIANT,
+	[0x77] = INAT_VARIANT,
+	[0x78] = INAT_VARIANT,
+	[0x79] = INAT_VARIANT,
+	[0x7a] = INAT_VARIANT,
+	[0x7b] = INAT_VARIANT,
+	[0x7c] = INAT_VARIANT,
+	[0x7d] = INAT_VARIANT,
+	[0x7e] = INAT_VARIANT,
+	[0x7f] = INAT_VARIANT,
+	[0x80] = INAT_VARIANT,
+	[0x81] = INAT_VARIANT,
+	[0x82] = INAT_VARIANT,
+	[0x83] = INAT_VARIANT,
+	[0x88] = INAT_VARIANT,
+	[0x89] = INAT_VARIANT,
+	[0x8a] = INAT_VARIANT,
+	[0x8b] = INAT_VARIANT,
+	[0x8c] = INAT_VARIANT,
+	[0x8d] = INAT_VARIANT,
+	[0x8e] = INAT_VARIANT,
+	[0x8f] = INAT_VARIANT,
+	[0x90] = INAT_VARIANT,
+	[0x91] = INAT_VARIANT,
+	[0x92] = INAT_VARIANT,
+	[0x93] = INAT_VARIANT,
+	[0x96] = INAT_VARIANT,
+	[0x97] = INAT_VARIANT,
+	[0x98] = INAT_VARIANT,
+	[0x99] = INAT_VARIANT,
+	[0x9a] = INAT_VARIANT,
+	[0x9b] = INAT_VARIANT,
+	[0x9c] = INAT_VARIANT,
+	[0x9d] = INAT_VARIANT,
+	[0x9e] = INAT_VARIANT,
+	[0x9f] = INAT_VARIANT,
+	[0xa0] = INAT_VARIANT,
+	[0xa1] = INAT_VARIANT,
+	[0xa2] = INAT_VARIANT,
+	[0xa3] = INAT_VARIANT,
+	[0xa6] = INAT_VARIANT,
+	[0xa7] = INAT_VARIANT,
+	[0xa8] = INAT_VARIANT,
+	[0xa9] = INAT_VARIANT,
+	[0xaa] = INAT_VARIANT,
+	[0xab] = INAT_VARIANT,
+	[0xac] = INAT_VARIANT,
+	[0xad] = INAT_VARIANT,
+	[0xae] = INAT_VARIANT,
+	[0xaf] = INAT_VARIANT,
+	[0xb4] = INAT_VARIANT,
+	[0xb5] = INAT_VARIANT,
+	[0xb6] = INAT_VARIANT,
+	[0xb7] = INAT_VARIANT,
+	[0xb8] = INAT_VARIANT,
+	[0xb9] = INAT_VARIANT,
+	[0xba] = INAT_VARIANT,
+	[0xbb] = INAT_VARIANT,
+	[0xbc] = INAT_VARIANT,
+	[0xbd] = INAT_VARIANT,
+	[0xbe] = INAT_VARIANT,
+	[0xbf] = INAT_VARIANT,
+	[0xc4] = INAT_VARIANT,
+	[0xc6] = INAT_MAKE_GROUP(25),
+	[0xc7] = INAT_MAKE_GROUP(26),
+	[0xc8] = INAT_MODRM | INAT_VARIANT,
+	[0xc9] = INAT_MODRM,
+	[0xca] = INAT_MODRM | INAT_VARIANT,
+	[0xcb] = INAT_MODRM | INAT_VARIANT,
+	[0xcc] = INAT_MODRM | INAT_VARIANT,
+	[0xcd] = INAT_MODRM | INAT_VARIANT,
+	[0xcf] = INAT_VARIANT,
+	[0xdb] = INAT_VARIANT,
+	[0xdc] = INAT_VARIANT,
+	[0xdd] = INAT_VARIANT,
+	[0xde] = INAT_VARIANT,
+	[0xdf] = INAT_VARIANT,
+	[0xf0] = INAT_MODRM | INAT_MODRM | INAT_VARIANT,
+	[0xf1] = INAT_MODRM | INAT_MODRM | INAT_VARIANT,
+	[0xf2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf3] = INAT_MAKE_GROUP(27),
+	[0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT,
+	[0xf6] = INAT_MODRM | INAT_VARIANT,
+	[0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT,
+	[0xf8] = INAT_VARIANT,
+	[0xf9] = INAT_MODRM,
+};
+const insn_attr_t inat_escape_table_2_1[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_MODRM | INAT_VEXOK,
+	[0x01] = INAT_MODRM | INAT_VEXOK,
+	[0x02] = INAT_MODRM | INAT_VEXOK,
+	[0x03] = INAT_MODRM | INAT_VEXOK,
+	[0x04] = INAT_MODRM | INAT_VEXOK,
+	[0x05] = INAT_MODRM | INAT_VEXOK,
+	[0x06] = INAT_MODRM | INAT_VEXOK,
+	[0x07] = INAT_MODRM | INAT_VEXOK,
+	[0x08] = INAT_MODRM | INAT_VEXOK,
+	[0x09] = INAT_MODRM | INAT_VEXOK,
+	[0x0a] = INAT_MODRM | INAT_VEXOK,
+	[0x0b] = INAT_MODRM | INAT_VEXOK,
+	[0x0c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x0d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x0e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x0f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x10] = INAT_MODRM | INAT_MODRM | INAT_VEXOK,
+	[0x11] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x12] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x13] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x14] = INAT_MODRM | INAT_MODRM | INAT_VEXOK,
+	[0x15] = INAT_MODRM | INAT_MODRM | INAT_VEXOK,
+	[0x16] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x17] = INAT_MODRM | INAT_VEXOK,
+	[0x18] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x19] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x1a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x1b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x1c] = INAT_MODRM | INAT_VEXOK,
+	[0x1d] = INAT_MODRM | INAT_VEXOK,
+	[0x1e] = INAT_MODRM | INAT_VEXOK,
+	[0x1f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x20] = INAT_MODRM | INAT_VEXOK,
+	[0x21] = INAT_MODRM | INAT_VEXOK,
+	[0x22] = INAT_MODRM | INAT_VEXOK,
+	[0x23] = INAT_MODRM | INAT_VEXOK,
+	[0x24] = INAT_MODRM | INAT_VEXOK,
+	[0x25] = INAT_MODRM | INAT_VEXOK,
+	[0x26] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x27] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x28] = INAT_MODRM | INAT_VEXOK,
+	[0x29] = INAT_MODRM | INAT_VEXOK,
+	[0x2a] = INAT_MODRM | INAT_VEXOK,
+	[0x2b] = INAT_MODRM | INAT_VEXOK,
+	[0x2c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x2d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x30] = INAT_MODRM | INAT_VEXOK,
+	[0x31] = INAT_MODRM | INAT_VEXOK,
+	[0x32] = INAT_MODRM | INAT_VEXOK,
+	[0x33] = INAT_MODRM | INAT_VEXOK,
+	[0x34] = INAT_MODRM | INAT_VEXOK,
+	[0x35] = INAT_MODRM | INAT_VEXOK,
+	[0x36] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x37] = INAT_MODRM | INAT_VEXOK,
+	[0x38] = INAT_MODRM | INAT_VEXOK,
+	[0x39] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x3a] = INAT_MODRM | INAT_VEXOK,
+	[0x3b] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x3c] = INAT_MODRM | INAT_VEXOK,
+	[0x3d] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x3e] = INAT_MODRM | INAT_VEXOK,
+	[0x3f] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x40] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK,
+	[0x41] = INAT_MODRM | INAT_VEXOK,
+	[0x42] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x43] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x44] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x45] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x46] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x47] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x4c] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x4d] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x4e] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x4f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x50] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x51] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x52] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x53] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x54] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x55] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x58] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x59] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x5b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x62] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x63] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x64] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x65] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x66] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x70] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x71] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x72] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x73] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x75] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x76] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x77] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x78] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x79] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x7a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7c] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7d] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7e] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x7f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x80] = INAT_MODRM,
+	[0x81] = INAT_MODRM,
+	[0x82] = INAT_MODRM,
+	[0x83] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x88] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x89] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x8a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x8b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x8c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x8d] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x8e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x8f] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x90] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x91] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MODRM | INAT_VEXOK,
+	[0x92] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x93] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x96] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x97] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x98] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x99] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9b] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x9f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xa0] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xa1] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xa2] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xa3] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xa6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xa7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xa8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xa9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xaa] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xab] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xac] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xad] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xae] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xaf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xb4] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xb5] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xb6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xb7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xb8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xb9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xba] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xbb] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xbc] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xbd] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xbe] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xbf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xc4] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xc8] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xca] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xcb] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xcc] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xcd] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xcf] = INAT_MODRM | INAT_VEXOK,
+	[0xdb] = INAT_MODRM | INAT_VEXOK,
+	[0xdc] = INAT_MODRM | INAT_VEXOK,
+	[0xdd] = INAT_MODRM | INAT_VEXOK,
+	[0xde] = INAT_MODRM | INAT_VEXOK,
+	[0xdf] = INAT_MODRM | INAT_VEXOK,
+	[0xf0] = INAT_MODRM,
+	[0xf1] = INAT_MODRM,
+	[0xf5] = INAT_MODRM,
+	[0xf6] = INAT_MODRM,
+	[0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf8] = INAT_MODRM,
+};
+const insn_attr_t inat_escape_table_2_2[INAT_OPCODE_TABLE_SIZE] = {
+	[0x10] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x11] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x12] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x13] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x14] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x15] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x20] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x21] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x22] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x23] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x24] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x25] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x26] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x27] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x28] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x29] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x2a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x30] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x31] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x32] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x33] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x34] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x35] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x38] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x39] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x3a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x52] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x72] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf6] = INAT_MODRM,
+	[0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf8] = INAT_MODRM,
+};
+const insn_attr_t inat_escape_table_2_3[INAT_OPCODE_TABLE_SIZE] = {
+	[0x52] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x53] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x68] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x72] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x9a] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x9b] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xaa] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xab] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xf0] = INAT_MODRM | INAT_MODRM,
+	[0xf1] = INAT_MODRM | INAT_MODRM,
+	[0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0xf8] = INAT_MODRM,
+};
+
+/* Table: 3-byte opcode 2 (0x0f 0x3a) */
+const insn_attr_t inat_escape_table_3[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_VARIANT,
+	[0x01] = INAT_VARIANT,
+	[0x02] = INAT_VARIANT,
+	[0x03] = INAT_VARIANT,
+	[0x04] = INAT_VARIANT,
+	[0x05] = INAT_VARIANT,
+	[0x06] = INAT_VARIANT,
+	[0x08] = INAT_VARIANT,
+	[0x09] = INAT_VARIANT,
+	[0x0a] = INAT_VARIANT,
+	[0x0b] = INAT_VARIANT,
+	[0x0c] = INAT_VARIANT,
+	[0x0d] = INAT_VARIANT,
+	[0x0e] = INAT_VARIANT,
+	[0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x14] = INAT_VARIANT,
+	[0x15] = INAT_VARIANT,
+	[0x16] = INAT_VARIANT,
+	[0x17] = INAT_VARIANT,
+	[0x18] = INAT_VARIANT,
+	[0x19] = INAT_VARIANT,
+	[0x1a] = INAT_VARIANT,
+	[0x1b] = INAT_VARIANT,
+	[0x1d] = INAT_VARIANT,
+	[0x1e] = INAT_VARIANT,
+	[0x1f] = INAT_VARIANT,
+	[0x20] = INAT_VARIANT,
+	[0x21] = INAT_VARIANT,
+	[0x22] = INAT_VARIANT,
+	[0x23] = INAT_VARIANT,
+	[0x25] = INAT_VARIANT,
+	[0x26] = INAT_VARIANT,
+	[0x27] = INAT_VARIANT,
+	[0x30] = INAT_VARIANT,
+	[0x31] = INAT_VARIANT,
+	[0x32] = INAT_VARIANT,
+	[0x33] = INAT_VARIANT,
+	[0x38] = INAT_VARIANT,
+	[0x39] = INAT_VARIANT,
+	[0x3a] = INAT_VARIANT,
+	[0x3b] = INAT_VARIANT,
+	[0x3e] = INAT_VARIANT,
+	[0x3f] = INAT_VARIANT,
+	[0x40] = INAT_VARIANT,
+	[0x41] = INAT_VARIANT,
+	[0x42] = INAT_VARIANT,
+	[0x43] = INAT_VARIANT,
+	[0x44] = INAT_VARIANT,
+	[0x46] = INAT_VARIANT,
+	[0x4a] = INAT_VARIANT,
+	[0x4b] = INAT_VARIANT,
+	[0x4c] = INAT_VARIANT,
+	[0x50] = INAT_VARIANT,
+	[0x51] = INAT_VARIANT,
+	[0x54] = INAT_VARIANT,
+	[0x55] = INAT_VARIANT,
+	[0x56] = INAT_VARIANT,
+	[0x57] = INAT_VARIANT,
+	[0x60] = INAT_VARIANT,
+	[0x61] = INAT_VARIANT,
+	[0x62] = INAT_VARIANT,
+	[0x63] = INAT_VARIANT,
+	[0x66] = INAT_VARIANT,
+	[0x67] = INAT_VARIANT,
+	[0x70] = INAT_VARIANT,
+	[0x71] = INAT_VARIANT,
+	[0x72] = INAT_VARIANT,
+	[0x73] = INAT_VARIANT,
+	[0xcc] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0xce] = INAT_VARIANT,
+	[0xcf] = INAT_VARIANT,
+	[0xdf] = INAT_VARIANT,
+	[0xf0] = INAT_VARIANT,
+};
+const insn_attr_t inat_escape_table_3_1[INAT_OPCODE_TABLE_SIZE] = {
+	[0x00] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x01] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x02] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x03] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x05] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x06] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x08] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x09] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0e] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x15] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x16] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x17] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x18] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x19] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x1a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x1b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x1d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x1e] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x1f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x20] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x21] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x22] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x23] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x25] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x26] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x27] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x30] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x31] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x32] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x33] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x38] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x39] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x3a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x3b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x3e] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x3f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x40] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x41] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x42] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x43] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x44] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x46] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x4a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x4b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x4c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x50] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x51] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x54] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x55] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x56] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x57] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x60] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x61] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x62] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x63] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x66] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x67] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x71] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x72] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x73] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0xce] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xcf] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0xdf] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+};
+const insn_attr_t inat_escape_table_3_3[INAT_OPCODE_TABLE_SIZE] = {
+	[0xf0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+};
+
+/* GrpTable: Grp1 */
+
+/* GrpTable: Grp1A */
+
+/* GrpTable: Grp2 */
+
+/* GrpTable: Grp3_1 */
+const insn_attr_t inat_group_table_6[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0x1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM,
+	[0x5] = INAT_MODRM,
+	[0x6] = INAT_MODRM,
+	[0x7] = INAT_MODRM,
+};
+
+/* GrpTable: Grp3_2 */
+const insn_attr_t inat_group_table_7[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM,
+	[0x1] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM,
+	[0x5] = INAT_MODRM,
+	[0x6] = INAT_MODRM,
+	[0x7] = INAT_MODRM,
+};
+
+/* GrpTable: Grp4 */
+const insn_attr_t inat_group_table_8[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+};
+
+/* GrpTable: Grp5 */
+const insn_attr_t inat_group_table_9[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+	[0x2] = INAT_MODRM | INAT_FORCE64,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM | INAT_FORCE64,
+	[0x5] = INAT_MODRM,
+	[0x6] = INAT_MODRM | INAT_FORCE64,
+};
+
+/* GrpTable: Grp6 */
+const insn_attr_t inat_group_table_10[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM,
+	[0x5] = INAT_MODRM,
+};
+
+/* GrpTable: Grp7 */
+const insn_attr_t inat_group_table_11[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM,
+	[0x5] = INAT_VARIANT,
+	[0x6] = INAT_MODRM,
+	[0x7] = INAT_MODRM,
+};
+const insn_attr_t inat_group_table_11_2[INAT_GROUP_TABLE_SIZE] = {
+	[0x5] = INAT_MODRM,
+};
+
+/* GrpTable: Grp8 */
+
+/* GrpTable: Grp9 */
+const insn_attr_t inat_group_table_24[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_MODRM,
+	[0x6] = INAT_MODRM | INAT_MODRM | INAT_VARIANT,
+	[0x7] = INAT_MODRM | INAT_MODRM | INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_24_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x6] = INAT_MODRM,
+};
+const insn_attr_t inat_group_table_24_2[INAT_GROUP_TABLE_SIZE] = {
+	[0x6] = INAT_MODRM,
+	[0x7] = INAT_MODRM,
+};
+
+/* GrpTable: Grp10 */
+
+/* GrpTable: Grp11A */
+const insn_attr_t inat_group_table_4[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM,
+	[0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE),
+};
+
+/* GrpTable: Grp11B */
+const insn_attr_t inat_group_table_5[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM,
+	[0x7] = INAT_MAKE_IMM(INAT_IMM_VWORD32),
+};
+
+/* GrpTable: Grp12 */
+const insn_attr_t inat_group_table_16[INAT_GROUP_TABLE_SIZE] = {
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_16_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+};
+
+/* GrpTable: Grp13 */
+const insn_attr_t inat_group_table_17[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_VARIANT,
+	[0x1] = INAT_VARIANT,
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_17_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+};
+
+/* GrpTable: Grp14 */
+const insn_attr_t inat_group_table_18[INAT_GROUP_TABLE_SIZE] = {
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x3] = INAT_VARIANT,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT,
+	[0x7] = INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_18_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+	[0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK,
+};
+
+/* GrpTable: Grp15 */
+const insn_attr_t inat_group_table_21[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_VARIANT,
+	[0x1] = INAT_VARIANT,
+	[0x2] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x3] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT,
+	[0x4] = INAT_VARIANT,
+	[0x5] = INAT_VARIANT,
+	[0x6] = INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_21_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x6] = INAT_MODRM,
+};
+const insn_attr_t inat_group_table_21_2[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+	[0x4] = INAT_MODRM,
+	[0x5] = INAT_MODRM,
+	[0x6] = INAT_MODRM | INAT_MODRM,
+};
+const insn_attr_t inat_group_table_21_3[INAT_GROUP_TABLE_SIZE] = {
+	[0x6] = INAT_MODRM,
+};
+
+/* GrpTable: Grp16 */
+const insn_attr_t inat_group_table_13[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+	[0x1] = INAT_MODRM,
+	[0x2] = INAT_MODRM,
+	[0x3] = INAT_MODRM,
+};
+
+/* GrpTable: Grp17 */
+const insn_attr_t inat_group_table_27[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+	[0x3] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY,
+};
+
+/* GrpTable: Grp18 */
+const insn_attr_t inat_group_table_25[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_VARIANT,
+	[0x2] = INAT_VARIANT,
+	[0x5] = INAT_VARIANT,
+	[0x6] = INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_25_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x2] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x5] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x6] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+};
+
+/* GrpTable: Grp19 */
+const insn_attr_t inat_group_table_26[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_VARIANT,
+	[0x2] = INAT_VARIANT,
+	[0x5] = INAT_VARIANT,
+	[0x6] = INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_26_1[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x2] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x5] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+	[0x6] = INAT_MODRM | INAT_VEXOK | INAT_EVEXONLY,
+};
+
+/* GrpTable: Grp20 */
+const insn_attr_t inat_group_table_14[INAT_GROUP_TABLE_SIZE] = {
+	[0x0] = INAT_MODRM,
+};
+
+/* GrpTable: Grp21 */
+const insn_attr_t inat_group_table_15[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_VARIANT,
+};
+const insn_attr_t inat_group_table_15_2[INAT_GROUP_TABLE_SIZE] = {
+	[0x1] = INAT_MODRM,
+};
+
+/* GrpTable: GrpP */
+
+/* GrpTable: GrpPDLK */
+
+/* GrpTable: GrpRNG */
+
+#ifndef __BOOT_COMPRESSED
+
+/* Escape opcode map array */
+const insn_attr_t * const inat_escape_tables[INAT_ESC_MAX + 1][INAT_LSTPFX_MAX + 1] = {
+	[1][0] = inat_escape_table_1,
+	[1][1] = inat_escape_table_1_1,
+	[1][2] = inat_escape_table_1_2,
+	[1][3] = inat_escape_table_1_3,
+	[2][0] = inat_escape_table_2,
+	[2][1] = inat_escape_table_2_1,
+	[2][2] = inat_escape_table_2_2,
+	[2][3] = inat_escape_table_2_3,
+	[3][0] = inat_escape_table_3,
+	[3][1] = inat_escape_table_3_1,
+	[3][3] = inat_escape_table_3_3,
+};
+
+/* Group opcode map array */
+const insn_attr_t * const inat_group_tables[INAT_GRP_MAX + 1][INAT_LSTPFX_MAX + 1] = {
+	[4][0] = inat_group_table_4,
+	[5][0] = inat_group_table_5,
+	[6][0] = inat_group_table_6,
+	[7][0] = inat_group_table_7,
+	[8][0] = inat_group_table_8,
+	[9][0] = inat_group_table_9,
+	[10][0] = inat_group_table_10,
+	[11][0] = inat_group_table_11,
+	[11][2] = inat_group_table_11_2,
+	[13][0] = inat_group_table_13,
+	[14][0] = inat_group_table_14,
+	[15][0] = inat_group_table_15,
+	[15][2] = inat_group_table_15_2,
+	[16][0] = inat_group_table_16,
+	[16][1] = inat_group_table_16_1,
+	[17][0] = inat_group_table_17,
+	[17][1] = inat_group_table_17_1,
+	[18][0] = inat_group_table_18,
+	[18][1] = inat_group_table_18_1,
+	[21][0] = inat_group_table_21,
+	[21][1] = inat_group_table_21_1,
+	[21][2] = inat_group_table_21_2,
+	[21][3] = inat_group_table_21_3,
+	[24][0] = inat_group_table_24,
+	[24][1] = inat_group_table_24_1,
+	[24][2] = inat_group_table_24_2,
+	[25][0] = inat_group_table_25,
+	[25][1] = inat_group_table_25_1,
+	[26][0] = inat_group_table_26,
+	[26][1] = inat_group_table_26_1,
+	[27][0] = inat_group_table_27,
+};
+
+/* AVX opcode map array */
+const insn_attr_t * const inat_avx_tables[X86_VEX_M_MAX + 1][INAT_LSTPFX_MAX + 1] = {
+	[1][0] = inat_escape_table_1,
+	[1][1] = inat_escape_table_1_1,
+	[1][2] = inat_escape_table_1_2,
+	[1][3] = inat_escape_table_1_3,
+	[2][0] = inat_escape_table_2,
+	[2][1] = inat_escape_table_2_1,
+	[2][2] = inat_escape_table_2_2,
+	[2][3] = inat_escape_table_2_3,
+	[3][0] = inat_escape_table_3,
+	[3][1] = inat_escape_table_3_1,
+	[3][3] = inat_escape_table_3_3,
+};
+
+#else /* !__BOOT_COMPRESSED */
+
+/* Escape opcode map array */
+static const insn_attr_t *inat_escape_tables[INAT_ESC_MAX + 1][INAT_LSTPFX_MAX + 1];
+
+/* Group opcode map array */
+static const insn_attr_t *inat_group_tables[INAT_GRP_MAX + 1][INAT_LSTPFX_MAX + 1];
+
+/* AVX opcode map array */
+static const insn_attr_t *inat_avx_tables[X86_VEX_M_MAX + 1][INAT_LSTPFX_MAX + 1];
+
+static void inat_init_tables(void)
+{
+	/* Print Escape opcode map array */
+	inat_escape_tables[1][0] = inat_escape_table_1;
+	inat_escape_tables[1][1] = inat_escape_table_1_1;
+	inat_escape_tables[1][2] = inat_escape_table_1_2;
+	inat_escape_tables[1][3] = inat_escape_table_1_3;
+	inat_escape_tables[2][0] = inat_escape_table_2;
+	inat_escape_tables[2][1] = inat_escape_table_2_1;
+	inat_escape_tables[2][2] = inat_escape_table_2_2;
+	inat_escape_tables[2][3] = inat_escape_table_2_3;
+	inat_escape_tables[3][0] = inat_escape_table_3;
+	inat_escape_tables[3][1] = inat_escape_table_3_1;
+	inat_escape_tables[3][3] = inat_escape_table_3_3;
+
+	/* Print Group opcode map array */
+	inat_group_tables[4][0] = inat_group_table_4;
+	inat_group_tables[5][0] = inat_group_table_5;
+	inat_group_tables[6][0] = inat_group_table_6;
+	inat_group_tables[7][0] = inat_group_table_7;
+	inat_group_tables[8][0] = inat_group_table_8;
+	inat_group_tables[9][0] = inat_group_table_9;
+	inat_group_tables[10][0] = inat_group_table_10;
+	inat_group_tables[11][0] = inat_group_table_11;
+	inat_group_tables[11][2] = inat_group_table_11_2;
+	inat_group_tables[13][0] = inat_group_table_13;
+	inat_group_tables[14][0] = inat_group_table_14;
+	inat_group_tables[15][0] = inat_group_table_15;
+	inat_group_tables[15][2] = inat_group_table_15_2;
+	inat_group_tables[16][0] = inat_group_table_16;
+	inat_group_tables[16][1] = inat_group_table_16_1;
+	inat_group_tables[17][0] = inat_group_table_17;
+	inat_group_tables[17][1] = inat_group_table_17_1;
+	inat_group_tables[18][0] = inat_group_table_18;
+	inat_group_tables[18][1] = inat_group_table_18_1;
+	inat_group_tables[21][0] = inat_group_table_21;
+	inat_group_tables[21][1] = inat_group_table_21_1;
+	inat_group_tables[21][2] = inat_group_table_21_2;
+	inat_group_tables[21][3] = inat_group_table_21_3;
+	inat_group_tables[24][0] = inat_group_table_24;
+	inat_group_tables[24][1] = inat_group_table_24_1;
+	inat_group_tables[24][2] = inat_group_table_24_2;
+	inat_group_tables[25][0] = inat_group_table_25;
+	inat_group_tables[25][1] = inat_group_table_25_1;
+	inat_group_tables[26][0] = inat_group_table_26;
+	inat_group_tables[26][1] = inat_group_table_26_1;
+	inat_group_tables[27][0] = inat_group_table_27;
+
+	/* Print AVX opcode map array */
+	inat_avx_tables[1][0] = inat_escape_table_1;
+	inat_avx_tables[1][1] = inat_escape_table_1_1;
+	inat_avx_tables[1][2] = inat_escape_table_1_2;
+	inat_avx_tables[1][3] = inat_escape_table_1_3;
+	inat_avx_tables[2][0] = inat_escape_table_2;
+	inat_avx_tables[2][1] = inat_escape_table_2_1;
+	inat_avx_tables[2][2] = inat_escape_table_2_2;
+	inat_avx_tables[2][3] = inat_escape_table_2_3;
+	inat_avx_tables[3][0] = inat_escape_table_3;
+	inat_avx_tables[3][1] = inat_escape_table_3_1;
+	inat_avx_tables[3][3] = inat_escape_table_3_3;
+}
+#endif
diff --git a/lib/x86/insn/inat.c b/lib/x86/insn/inat.c
new file mode 100644
index 0000000..cb54aaf
--- /dev/null
+++ b/lib/x86/insn/inat.c
@@ -0,0 +1,86 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * x86 instruction attribute tables
+ *
+ * Written by Masami Hiramatsu <mhiramat@redhat.com>
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   tools/arch/x86/lib/inat.c
+ */
+#include "insn.h" /* __ignore_sync_check__ */
+
+/* Attribute tables are generated from opcode map */
+#include "inat-tables.c"
+
+/* Attribute search APIs */
+insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode)
+{
+	return inat_primary_table[opcode];
+}
+
+int inat_get_last_prefix_id(insn_byte_t last_pfx)
+{
+	insn_attr_t lpfx_attr;
+
+	lpfx_attr = inat_get_opcode_attribute(last_pfx);
+	return inat_last_prefix_id(lpfx_attr);
+}
+
+insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, int lpfx_id,
+				      insn_attr_t esc_attr)
+{
+	const insn_attr_t *table;
+	int n;
+
+	n = inat_escape_id(esc_attr);
+
+	table = inat_escape_tables[n][0];
+	if (!table)
+		return 0;
+	if (inat_has_variant(table[opcode]) && lpfx_id) {
+		table = inat_escape_tables[n][lpfx_id];
+		if (!table)
+			return 0;
+	}
+	return table[opcode];
+}
+
+insn_attr_t inat_get_group_attribute(insn_byte_t modrm, int lpfx_id,
+				     insn_attr_t grp_attr)
+{
+	const insn_attr_t *table;
+	int n;
+
+	n = inat_group_id(grp_attr);
+
+	table = inat_group_tables[n][0];
+	if (!table)
+		return inat_group_common_attribute(grp_attr);
+	if (inat_has_variant(table[X86_MODRM_REG(modrm)]) && lpfx_id) {
+		table = inat_group_tables[n][lpfx_id];
+		if (!table)
+			return inat_group_common_attribute(grp_attr);
+	}
+	return table[X86_MODRM_REG(modrm)] |
+	       inat_group_common_attribute(grp_attr);
+}
+
+insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m,
+				   insn_byte_t vex_p)
+{
+	const insn_attr_t *table;
+	if (vex_m > X86_VEX_M_MAX || vex_p > INAT_LSTPFX_MAX)
+		return 0;
+	/* Check the master (no-prefix) table first */
+	table = inat_avx_tables[vex_m][0];
+	if (!table)
+		return 0;
+	if (!inat_is_group(table[opcode]) && vex_p) {
+		/* If this is not a group, get attribute directly */
+		table = inat_avx_tables[vex_m][vex_p];
+		if (!table)
+			return 0;
+	}
+	return table[opcode];
+}
+
diff --git a/lib/x86/insn/inat.h b/lib/x86/insn/inat.h
new file mode 100644
index 0000000..b3103c3
--- /dev/null
+++ b/lib/x86/insn/inat.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _ASM_X86_INAT_H
+#define _ASM_X86_INAT_H
+/*
+ * x86 instruction attributes
+ *
+ * Written by Masami Hiramatsu <mhiramat@redhat.com>
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   tools/arch/x86/include/asm/inat.h
+ */
+#include "inat_types.h"
+
+/*
+ * Internal bits. Don't use bitmasks directly, because these bits are
+ * unstable. You should use checking functions.
+ */
+
+#define INAT_OPCODE_TABLE_SIZE 256
+#define INAT_GROUP_TABLE_SIZE 8
+
+/* Legacy last prefixes */
+#define INAT_PFX_OPNDSZ	1	/* 0x66 */ /* LPFX1 */
+#define INAT_PFX_REPE	2	/* 0xF3 */ /* LPFX2 */
+#define INAT_PFX_REPNE	3	/* 0xF2 */ /* LPFX3 */
+/* Other Legacy prefixes */
+#define INAT_PFX_LOCK	4	/* 0xF0 */
+#define INAT_PFX_CS	5	/* 0x2E */
+#define INAT_PFX_DS	6	/* 0x3E */
+#define INAT_PFX_ES	7	/* 0x26 */
+#define INAT_PFX_FS	8	/* 0x64 */
+#define INAT_PFX_GS	9	/* 0x65 */
+#define INAT_PFX_SS	10	/* 0x36 */
+#define INAT_PFX_ADDRSZ	11	/* 0x67 */
+/* x86-64 REX prefix */
+#define INAT_PFX_REX	12	/* 0x4X */
+/* AVX VEX prefixes */
+#define INAT_PFX_VEX2	13	/* 2-byte VEX prefix */
+#define INAT_PFX_VEX3	14	/* 3-byte VEX prefix */
+#define INAT_PFX_EVEX	15	/* EVEX prefix */
+
+#define INAT_LSTPFX_MAX	3
+#define INAT_LGCPFX_MAX	11
+
+/* Immediate size */
+#define INAT_IMM_BYTE		1
+#define INAT_IMM_WORD		2
+#define INAT_IMM_DWORD		3
+#define INAT_IMM_QWORD		4
+#define INAT_IMM_PTR		5
+#define INAT_IMM_VWORD32	6
+#define INAT_IMM_VWORD		7
+
+/* Legacy prefix */
+#define INAT_PFX_OFFS	0
+#define INAT_PFX_BITS	4
+#define INAT_PFX_MAX    ((1 << INAT_PFX_BITS) - 1)
+#define INAT_PFX_MASK	(INAT_PFX_MAX << INAT_PFX_OFFS)
+/* Escape opcodes */
+#define INAT_ESC_OFFS	(INAT_PFX_OFFS + INAT_PFX_BITS)
+#define INAT_ESC_BITS	2
+#define INAT_ESC_MAX	((1 << INAT_ESC_BITS) - 1)
+#define INAT_ESC_MASK	(INAT_ESC_MAX << INAT_ESC_OFFS)
+/* Group opcodes (1-16) */
+#define INAT_GRP_OFFS	(INAT_ESC_OFFS + INAT_ESC_BITS)
+#define INAT_GRP_BITS	5
+#define INAT_GRP_MAX	((1 << INAT_GRP_BITS) - 1)
+#define INAT_GRP_MASK	(INAT_GRP_MAX << INAT_GRP_OFFS)
+/* Immediates */
+#define INAT_IMM_OFFS	(INAT_GRP_OFFS + INAT_GRP_BITS)
+#define INAT_IMM_BITS	3
+#define INAT_IMM_MASK	(((1 << INAT_IMM_BITS) - 1) << INAT_IMM_OFFS)
+/* Flags */
+#define INAT_FLAG_OFFS	(INAT_IMM_OFFS + INAT_IMM_BITS)
+#define INAT_MODRM	(1 << (INAT_FLAG_OFFS))
+#define INAT_FORCE64	(1 << (INAT_FLAG_OFFS + 1))
+#define INAT_SCNDIMM	(1 << (INAT_FLAG_OFFS + 2))
+#define INAT_MOFFSET	(1 << (INAT_FLAG_OFFS + 3))
+#define INAT_VARIANT	(1 << (INAT_FLAG_OFFS + 4))
+#define INAT_VEXOK	(1 << (INAT_FLAG_OFFS + 5))
+#define INAT_VEXONLY	(1 << (INAT_FLAG_OFFS + 6))
+#define INAT_EVEXONLY	(1 << (INAT_FLAG_OFFS + 7))
+/* Attribute making macros for attribute tables */
+#define INAT_MAKE_PREFIX(pfx)	(pfx << INAT_PFX_OFFS)
+#define INAT_MAKE_ESCAPE(esc)	(esc << INAT_ESC_OFFS)
+#define INAT_MAKE_GROUP(grp)	((grp << INAT_GRP_OFFS) | INAT_MODRM)
+#define INAT_MAKE_IMM(imm)	(imm << INAT_IMM_OFFS)
+
+/* Identifiers for segment registers */
+#define INAT_SEG_REG_IGNORE	0
+#define INAT_SEG_REG_DEFAULT	1
+#define INAT_SEG_REG_CS		2
+#define INAT_SEG_REG_SS		3
+#define INAT_SEG_REG_DS		4
+#define INAT_SEG_REG_ES		5
+#define INAT_SEG_REG_FS		6
+#define INAT_SEG_REG_GS		7
+
+/* Attribute search APIs */
+extern insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode);
+extern int inat_get_last_prefix_id(insn_byte_t last_pfx);
+extern insn_attr_t inat_get_escape_attribute(insn_byte_t opcode,
+					     int lpfx_id,
+					     insn_attr_t esc_attr);
+extern insn_attr_t inat_get_group_attribute(insn_byte_t modrm,
+					    int lpfx_id,
+					    insn_attr_t esc_attr);
+extern insn_attr_t inat_get_avx_attribute(insn_byte_t opcode,
+					  insn_byte_t vex_m,
+					  insn_byte_t vex_pp);
+
+/* Attribute checking functions */
+static inline int inat_is_legacy_prefix(insn_attr_t attr)
+{
+	attr &= INAT_PFX_MASK;
+	return attr && attr <= INAT_LGCPFX_MAX;
+}
+
+static inline int inat_is_address_size_prefix(insn_attr_t attr)
+{
+	return (attr & INAT_PFX_MASK) == INAT_PFX_ADDRSZ;
+}
+
+static inline int inat_is_operand_size_prefix(insn_attr_t attr)
+{
+	return (attr & INAT_PFX_MASK) == INAT_PFX_OPNDSZ;
+}
+
+static inline int inat_is_rex_prefix(insn_attr_t attr)
+{
+	return (attr & INAT_PFX_MASK) == INAT_PFX_REX;
+}
+
+static inline int inat_last_prefix_id(insn_attr_t attr)
+{
+	if ((attr & INAT_PFX_MASK) > INAT_LSTPFX_MAX)
+		return 0;
+	else
+		return attr & INAT_PFX_MASK;
+}
+
+static inline int inat_is_vex_prefix(insn_attr_t attr)
+{
+	attr &= INAT_PFX_MASK;
+	return attr == INAT_PFX_VEX2 || attr == INAT_PFX_VEX3 ||
+	       attr == INAT_PFX_EVEX;
+}
+
+static inline int inat_is_evex_prefix(insn_attr_t attr)
+{
+	return (attr & INAT_PFX_MASK) == INAT_PFX_EVEX;
+}
+
+static inline int inat_is_vex3_prefix(insn_attr_t attr)
+{
+	return (attr & INAT_PFX_MASK) == INAT_PFX_VEX3;
+}
+
+static inline int inat_is_escape(insn_attr_t attr)
+{
+	return attr & INAT_ESC_MASK;
+}
+
+static inline int inat_escape_id(insn_attr_t attr)
+{
+	return (attr & INAT_ESC_MASK) >> INAT_ESC_OFFS;
+}
+
+static inline int inat_is_group(insn_attr_t attr)
+{
+	return attr & INAT_GRP_MASK;
+}
+
+static inline int inat_group_id(insn_attr_t attr)
+{
+	return (attr & INAT_GRP_MASK) >> INAT_GRP_OFFS;
+}
+
+static inline int inat_group_common_attribute(insn_attr_t attr)
+{
+	return attr & ~INAT_GRP_MASK;
+}
+
+static inline int inat_has_immediate(insn_attr_t attr)
+{
+	return attr & INAT_IMM_MASK;
+}
+
+static inline int inat_immediate_size(insn_attr_t attr)
+{
+	return (attr & INAT_IMM_MASK) >> INAT_IMM_OFFS;
+}
+
+static inline int inat_has_modrm(insn_attr_t attr)
+{
+	return attr & INAT_MODRM;
+}
+
+static inline int inat_is_force64(insn_attr_t attr)
+{
+	return attr & INAT_FORCE64;
+}
+
+static inline int inat_has_second_immediate(insn_attr_t attr)
+{
+	return attr & INAT_SCNDIMM;
+}
+
+static inline int inat_has_moffset(insn_attr_t attr)
+{
+	return attr & INAT_MOFFSET;
+}
+
+static inline int inat_has_variant(insn_attr_t attr)
+{
+	return attr & INAT_VARIANT;
+}
+
+static inline int inat_accept_vex(insn_attr_t attr)
+{
+	return attr & INAT_VEXOK;
+}
+
+static inline int inat_must_vex(insn_attr_t attr)
+{
+	return attr & (INAT_VEXONLY | INAT_EVEXONLY);
+}
+
+static inline int inat_must_evex(insn_attr_t attr)
+{
+	return attr & INAT_EVEXONLY;
+}
+#endif
diff --git a/lib/x86/insn/inat_types.h b/lib/x86/insn/inat_types.h
new file mode 100644
index 0000000..5e4ef12
--- /dev/null
+++ b/lib/x86/insn/inat_types.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _ASM_X86_INAT_TYPES_H
+#define _ASM_X86_INAT_TYPES_H
+/*
+ * x86 instruction attributes
+ *
+ * Written by Masami Hiramatsu <mhiramat@redhat.com>
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   tools/arch/x86/include/asm/inat_types.h
+ */
+
+/* Instruction attributes */
+typedef unsigned int insn_attr_t;
+typedef unsigned char insn_byte_t;
+typedef signed int insn_value_t;
+
+#endif
diff --git a/lib/x86/insn/insn.c b/lib/x86/insn/insn.c
new file mode 100644
index 0000000..b877a25
--- /dev/null
+++ b/lib/x86/insn/insn.c
@@ -0,0 +1,778 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * x86 instruction analysis
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004, 2009
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   tools/arch/x86/lib/insn.c
+ */
+
+#include "x86/vm.h"
+
+#include "inat.h"
+#include "insn.h"
+
+#define	EINVAL		22	/* Invalid argument */
+#define	ENODATA		61	/* No data available */
+
+/*
+ * Virt escape sequences to trigger instruction emulation;
+ * ideally these would decode to 'whole' instruction and not destroy
+ * the instruction stream; sadly this is not true for the 'kvm' one :/
+ */
+
+#define __XEN_EMULATE_PREFIX  0x0f,0x0b,0x78,0x65,0x6e  /* ud2 ; .ascii "xen" */
+#define __KVM_EMULATE_PREFIX  0x0f,0x0b,0x6b,0x76,0x6d	/* ud2 ; .ascii "kvm" */
+
+#define leXX_to_cpu(t, r)						\
+({									\
+	__typeof__(t) v;						\
+	switch (sizeof(t)) {						\
+	case 4: v = le32_to_cpu(r); break;				\
+	case 2: v = le16_to_cpu(r); break;				\
+	case 1:	v = r; break;						\
+	default:							\
+		break;					\
+	}								\
+	v;								\
+})
+
+/* Verify next sizeof(t) bytes can be on the same instruction */
+#define validate_next(t, insn, n)	\
+	((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
+
+#define __get_next(t, insn)	\
+	({ t r; memcpy(&r, insn->next_byte, sizeof(t)); insn->next_byte += sizeof(t); leXX_to_cpu(t, r); })
+
+#define __peek_nbyte_next(t, insn, n)	\
+	({ t r; memcpy(&r, (insn)->next_byte + n, sizeof(t)); leXX_to_cpu(t, r); })
+
+#define get_next(t, insn)	\
+	({ if ((!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })
+
+#define peek_nbyte_next(t, insn, n)	\
+	({ if ((!validate_next(t, insn, n))) goto err_out; __peek_nbyte_next(t, insn, n); })
+
+#define peek_next(t, insn)	peek_nbyte_next(t, insn, 0)
+
+/**
+ * insn_init() - initialize struct insn
+ * @insn:	&struct insn to be initialized
+ * @kaddr:	address (in kernel memory) of instruction (or copy thereof)
+ * @buf_len:	length of the insn buffer at @kaddr
+ * @x86_64:	!0 for 64-bit kernel or 64-bit app
+ */
+void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64)
+{
+	/*
+	 * Instructions longer than MAX_INSN_SIZE (15 bytes) are invalid
+	 * even if the input buffer is long enough to hold them.
+	 */
+	if (buf_len > MAX_INSN_SIZE)
+		buf_len = MAX_INSN_SIZE;
+
+	memset(insn, 0, sizeof(*insn));
+	insn->kaddr = kaddr;
+	insn->end_kaddr = kaddr + buf_len;
+	insn->next_byte = kaddr;
+	insn->x86_64 = x86_64 ? 1 : 0;
+	insn->opnd_bytes = 4;
+	if (x86_64)
+		insn->addr_bytes = 8;
+	else
+		insn->addr_bytes = 4;
+}
+
+static const insn_byte_t xen_prefix[] = { __XEN_EMULATE_PREFIX };
+static const insn_byte_t kvm_prefix[] = { __KVM_EMULATE_PREFIX };
+
+static int __insn_get_emulate_prefix(struct insn *insn,
+				     const insn_byte_t *prefix, size_t len)
+{
+	size_t i;
+
+	for (i = 0; i < len; i++) {
+		if (peek_nbyte_next(insn_byte_t, insn, i) != prefix[i])
+			goto err_out;
+	}
+
+	insn->emulate_prefix_size = len;
+	insn->next_byte += len;
+
+	return 1;
+
+err_out:
+	return 0;
+}
+
+static void insn_get_emulate_prefix(struct insn *insn)
+{
+	if (__insn_get_emulate_prefix(insn, xen_prefix, sizeof(xen_prefix)))
+		return;
+
+	__insn_get_emulate_prefix(insn, kvm_prefix, sizeof(kvm_prefix));
+}
+
+/**
+ * insn_get_prefixes - scan x86 instruction prefix bytes
+ * @insn:	&struct insn containing instruction
+ *
+ * Populates the @insn->prefixes bitmap, and updates @insn->next_byte
+ * to point to the (first) opcode.  No effect if @insn->prefixes.got
+ * is already set.
+ *
+ * Returns:
+ * 0:  on success
+ * < 0: on error
+ */
+int insn_get_prefixes(struct insn *insn)
+{
+	struct insn_field *prefixes = &insn->prefixes;
+	insn_attr_t attr;
+	insn_byte_t b, lb;
+	int i, nb;
+
+	if (prefixes->got)
+		return 0;
+
+	insn_get_emulate_prefix(insn);
+
+	nb = 0;
+	lb = 0;
+	b = peek_next(insn_byte_t, insn);
+	attr = inat_get_opcode_attribute(b);
+	while (inat_is_legacy_prefix(attr)) {
+		/* Skip if same prefix */
+		for (i = 0; i < nb; i++)
+			if (prefixes->bytes[i] == b)
+				goto found;
+		if (nb == 4)
+			/* Invalid instruction */
+			break;
+		prefixes->bytes[nb++] = b;
+		if (inat_is_address_size_prefix(attr)) {
+			/* address size switches 2/4 or 4/8 */
+			if (insn->x86_64)
+				insn->addr_bytes ^= 12;
+			else
+				insn->addr_bytes ^= 6;
+		} else if (inat_is_operand_size_prefix(attr)) {
+			/* operand size switches 2/4 */
+			insn->opnd_bytes ^= 6;
+		}
+found:
+		prefixes->nbytes++;
+		insn->next_byte++;
+		lb = b;
+		b = peek_next(insn_byte_t, insn);
+		attr = inat_get_opcode_attribute(b);
+	}
+	/* Set the last prefix */
+	if (lb && lb != insn->prefixes.bytes[3]) {
+		if ((insn->prefixes.bytes[3])) {
+			/* Swap the last prefix */
+			b = insn->prefixes.bytes[3];
+			for (i = 0; i < nb; i++)
+				if (prefixes->bytes[i] == lb)
+					insn_set_byte(prefixes, i, b);
+		}
+		insn_set_byte(&insn->prefixes, 3, lb);
+	}
+
+	/* Decode REX prefix */
+	if (insn->x86_64) {
+		b = peek_next(insn_byte_t, insn);
+		attr = inat_get_opcode_attribute(b);
+		if (inat_is_rex_prefix(attr)) {
+			insn_field_set(&insn->rex_prefix, b, 1);
+			insn->next_byte++;
+			if (X86_REX_W(b))
+				/* REX.W overrides opnd_size */
+				insn->opnd_bytes = 8;
+		}
+	}
+	insn->rex_prefix.got = 1;
+
+	/* Decode VEX prefix */
+	b = peek_next(insn_byte_t, insn);
+	attr = inat_get_opcode_attribute(b);
+	if (inat_is_vex_prefix(attr)) {
+		insn_byte_t b2 = peek_nbyte_next(insn_byte_t, insn, 1);
+		if (!insn->x86_64) {
+			/*
+			 * In 32-bit mode, if the [7:6] bits (mod bits of
+			 * ModRM) on the second byte are not 11b, it is
+			 * LDS or LES or BOUND.
+			 */
+			if (X86_MODRM_MOD(b2) != 3)
+				goto vex_end;
+		}
+		insn_set_byte(&insn->vex_prefix, 0, b);
+		insn_set_byte(&insn->vex_prefix, 1, b2);
+		if (inat_is_evex_prefix(attr)) {
+			b2 = peek_nbyte_next(insn_byte_t, insn, 2);
+			insn_set_byte(&insn->vex_prefix, 2, b2);
+			b2 = peek_nbyte_next(insn_byte_t, insn, 3);
+			insn_set_byte(&insn->vex_prefix, 3, b2);
+			insn->vex_prefix.nbytes = 4;
+			insn->next_byte += 4;
+			if (insn->x86_64 && X86_VEX_W(b2))
+				/* VEX.W overrides opnd_size */
+				insn->opnd_bytes = 8;
+		} else if (inat_is_vex3_prefix(attr)) {
+			b2 = peek_nbyte_next(insn_byte_t, insn, 2);
+			insn_set_byte(&insn->vex_prefix, 2, b2);
+			insn->vex_prefix.nbytes = 3;
+			insn->next_byte += 3;
+			if (insn->x86_64 && X86_VEX_W(b2))
+				/* VEX.W overrides opnd_size */
+				insn->opnd_bytes = 8;
+		} else {
+			/*
+			 * For VEX2, fake VEX3-like byte#2.
+			 * Makes it easier to decode vex.W, vex.vvvv,
+			 * vex.L and vex.pp. Masking with 0x7f sets vex.W == 0.
+			 */
+			insn_set_byte(&insn->vex_prefix, 2, b2 & 0x7f);
+			insn->vex_prefix.nbytes = 2;
+			insn->next_byte += 2;
+		}
+	}
+vex_end:
+	insn->vex_prefix.got = 1;
+
+	prefixes->got = 1;
+
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+/**
+ * insn_get_opcode - collect opcode(s)
+ * @insn:	&struct insn containing instruction
+ *
+ * Populates @insn->opcode, updates @insn->next_byte to point past the
+ * opcode byte(s), and set @insn->attr (except for groups).
+ * If necessary, first collects any preceding (prefix) bytes.
+ * Sets @insn->opcode.value = opcode1.  No effect if @insn->opcode.got
+ * is already 1.
+ *
+ * Returns:
+ * 0:  on success
+ * < 0: on error
+ */
+int insn_get_opcode(struct insn *insn)
+{
+	struct insn_field *opcode = &insn->opcode;
+	int pfx_id, ret;
+	insn_byte_t op;
+
+	if (opcode->got)
+		return 0;
+
+	if (!insn->prefixes.got) {
+		ret = insn_get_prefixes(insn);
+		if (ret)
+			return ret;
+	}
+
+	/* Get first opcode */
+	op = get_next(insn_byte_t, insn);
+	insn_set_byte(opcode, 0, op);
+	opcode->nbytes = 1;
+
+	/* Check if there is VEX prefix or not */
+	if (insn_is_avx(insn)) {
+		insn_byte_t m, p;
+		m = insn_vex_m_bits(insn);
+		p = insn_vex_p_bits(insn);
+		insn->attr = inat_get_avx_attribute(op, m, p);
+		if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) ||
+		    (!inat_accept_vex(insn->attr) &&
+		     !inat_is_group(insn->attr))) {
+			/* This instruction is bad */
+			insn->attr = 0;
+			return -EINVAL;
+		}
+		/* VEX has only 1 byte for opcode */
+		goto end;
+	}
+
+	insn->attr = inat_get_opcode_attribute(op);
+	while (inat_is_escape(insn->attr)) {
+		/* Get escaped opcode */
+		op = get_next(insn_byte_t, insn);
+		opcode->bytes[opcode->nbytes++] = op;
+		pfx_id = insn_last_prefix_id(insn);
+		insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr);
+	}
+
+	if (inat_must_vex(insn->attr)) {
+		/* This instruction is bad */
+		insn->attr = 0;
+		return -EINVAL;
+	}
+end:
+	opcode->got = 1;
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+/**
+ * insn_get_modrm - collect ModRM byte, if any
+ * @insn:	&struct insn containing instruction
+ *
+ * Populates @insn->modrm and updates @insn->next_byte to point past the
+ * ModRM byte, if any.  If necessary, first collects the preceding bytes
+ * (prefixes and opcode(s)).  No effect if @insn->modrm.got is already 1.
+ *
+ * Returns:
+ * 0:  on success
+ * < 0: on error
+ */
+int insn_get_modrm(struct insn *insn)
+{
+	struct insn_field *modrm = &insn->modrm;
+	insn_byte_t pfx_id, mod;
+	int ret;
+
+	if (modrm->got)
+		return 0;
+
+	if (!insn->opcode.got) {
+		ret = insn_get_opcode(insn);
+		if (ret)
+			return ret;
+	}
+
+	if (inat_has_modrm(insn->attr)) {
+		mod = get_next(insn_byte_t, insn);
+		insn_field_set(modrm, mod, 1);
+		if (inat_is_group(insn->attr)) {
+			pfx_id = insn_last_prefix_id(insn);
+			insn->attr = inat_get_group_attribute(mod, pfx_id,
+							      insn->attr);
+			if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) {
+				/* Bad insn */
+				insn->attr = 0;
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (insn->x86_64 && inat_is_force64(insn->attr))
+		insn->opnd_bytes = 8;
+
+	modrm->got = 1;
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+
+/**
+ * insn_rip_relative() - Does instruction use RIP-relative addressing mode?
+ * @insn:	&struct insn containing instruction
+ *
+ * If necessary, first collects the instruction up to and including the
+ * ModRM byte.  No effect if @insn->x86_64 is 0.
+ */
+int insn_rip_relative(struct insn *insn)
+{
+	struct insn_field *modrm = &insn->modrm;
+	int ret;
+
+	if (!insn->x86_64)
+		return 0;
+
+	if (!modrm->got) {
+		ret = insn_get_modrm(insn);
+		if (ret)
+			return 0;
+	}
+	/*
+	 * For rip-relative instructions, the mod field (top 2 bits)
+	 * is zero and the r/m field (bottom 3 bits) is 0x5.
+	 */
+	return (modrm->nbytes && (modrm->bytes[0] & 0xc7) == 0x5);
+}
+
+/**
+ * insn_get_sib() - Get the SIB byte of instruction
+ * @insn:	&struct insn containing instruction
+ *
+ * If necessary, first collects the instruction up to and including the
+ * ModRM byte.
+ *
+ * Returns:
+ * 0: if decoding succeeded
+ * < 0: otherwise.
+ */
+int insn_get_sib(struct insn *insn)
+{
+	insn_byte_t modrm;
+	int ret;
+
+	if (insn->sib.got)
+		return 0;
+
+	if (!insn->modrm.got) {
+		ret = insn_get_modrm(insn);
+		if (ret)
+			return ret;
+	}
+
+	if (insn->modrm.nbytes) {
+		modrm = insn->modrm.bytes[0];
+		if (insn->addr_bytes != 2 &&
+		    X86_MODRM_MOD(modrm) != 3 && X86_MODRM_RM(modrm) == 4) {
+			insn_field_set(&insn->sib,
+				       get_next(insn_byte_t, insn), 1);
+		}
+	}
+	insn->sib.got = 1;
+
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+
+/**
+ * insn_get_displacement() - Get the displacement of instruction
+ * @insn:	&struct insn containing instruction
+ *
+ * If necessary, first collects the instruction up to and including the
+ * SIB byte.
+ * The displacement value is sign-extended.
+ *
+ * Returns:
+ * 0: if decoding succeeded
+ * < 0: otherwise.
+ */
+int insn_get_displacement(struct insn *insn)
+{
+	insn_byte_t mod, rm, base;
+	int ret;
+
+	if (insn->displacement.got)
+		return 0;
+
+	if (!insn->sib.got) {
+		ret = insn_get_sib(insn);
+		if (ret)
+			return ret;
+	}
+
+	if (insn->modrm.nbytes) {
+		/*
+		 * Interpreting the modrm byte:
+		 * mod = 00 - no displacement fields (exceptions below)
+		 * mod = 01 - 1-byte displacement field
+		 * mod = 10 - displacement field is 4 bytes, or 2 bytes if
+		 * 	address size = 2 (0x67 prefix in 32-bit mode)
+		 * mod = 11 - no memory operand
+		 *
+		 * If address size = 2...
+		 * mod = 00, r/m = 110 - displacement field is 2 bytes
+		 *
+		 * If address size != 2...
+		 * mod != 11, r/m = 100 - SIB byte exists
+		 * mod = 00, SIB base = 101 - displacement field is 4 bytes
+		 * mod = 00, r/m = 101 - rip-relative addressing, displacement
+		 * 	field is 4 bytes
+		 */
+		mod = X86_MODRM_MOD(insn->modrm.value);
+		rm = X86_MODRM_RM(insn->modrm.value);
+		base = X86_SIB_BASE(insn->sib.value);
+		if (mod == 3)
+			goto out;
+		if (mod == 1) {
+			insn_field_set(&insn->displacement,
+				       get_next(signed char, insn), 1);
+		} else if (insn->addr_bytes == 2) {
+			if ((mod == 0 && rm == 6) || mod == 2) {
+				insn_field_set(&insn->displacement,
+					       get_next(short, insn), 2);
+			}
+		} else {
+			if ((mod == 0 && rm == 5) || mod == 2 ||
+			    (mod == 0 && base == 5)) {
+				insn_field_set(&insn->displacement,
+					       get_next(int, insn), 4);
+			}
+		}
+	}
+out:
+	insn->displacement.got = 1;
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+/* Decode moffset16/32/64. Return 0 if failed */
+static int __get_moffset(struct insn *insn)
+{
+	switch (insn->addr_bytes) {
+	case 2:
+		insn_field_set(&insn->moffset1, get_next(short, insn), 2);
+		break;
+	case 4:
+		insn_field_set(&insn->moffset1, get_next(int, insn), 4);
+		break;
+	case 8:
+		insn_field_set(&insn->moffset1, get_next(int, insn), 4);
+		insn_field_set(&insn->moffset2, get_next(int, insn), 4);
+		break;
+	default:	/* opnd_bytes must be modified manually */
+		goto err_out;
+	}
+	insn->moffset1.got = insn->moffset2.got = 1;
+
+	return 1;
+
+err_out:
+	return 0;
+}
+
+/* Decode imm v32(Iz). Return 0 if failed */
+static int __get_immv32(struct insn *insn)
+{
+	switch (insn->opnd_bytes) {
+	case 2:
+		insn_field_set(&insn->immediate, get_next(short, insn), 2);
+		break;
+	case 4:
+	case 8:
+		insn_field_set(&insn->immediate, get_next(int, insn), 4);
+		break;
+	default:	/* opnd_bytes must be modified manually */
+		goto err_out;
+	}
+
+	return 1;
+
+err_out:
+	return 0;
+}
+
+/* Decode imm v64(Iv/Ov). Return 0 if failed */
+static int __get_immv(struct insn *insn)
+{
+	switch (insn->opnd_bytes) {
+	case 2:
+		insn_field_set(&insn->immediate1, get_next(short, insn), 2);
+		break;
+	case 4:
+		insn_field_set(&insn->immediate1, get_next(int, insn), 4);
+		insn->immediate1.nbytes = 4;
+		break;
+	case 8:
+		insn_field_set(&insn->immediate1, get_next(int, insn), 4);
+		insn_field_set(&insn->immediate2, get_next(int, insn), 4);
+		break;
+	default:	/* opnd_bytes must be modified manually */
+		goto err_out;
+	}
+	insn->immediate1.got = insn->immediate2.got = 1;
+
+	return 1;
+err_out:
+	return 0;
+}
+
+/* Decode ptr16:16/32(Ap) */
+static int __get_immptr(struct insn *insn)
+{
+	switch (insn->opnd_bytes) {
+	case 2:
+		insn_field_set(&insn->immediate1, get_next(short, insn), 2);
+		break;
+	case 4:
+		insn_field_set(&insn->immediate1, get_next(int, insn), 4);
+		break;
+	case 8:
+		/* ptr16:64 does not exist (no segment) */
+		return 0;
+	default:	/* opnd_bytes must be modified manually */
+		goto err_out;
+	}
+	insn_field_set(&insn->immediate2, get_next(unsigned short, insn), 2);
+	insn->immediate1.got = insn->immediate2.got = 1;
+
+	return 1;
+err_out:
+	return 0;
+}
+
+/**
+ * insn_get_immediate() - Get the immediate in an instruction
+ * @insn:	&struct insn containing instruction
+ *
+ * If necessary, first collects the instruction up to and including the
+ * displacement bytes.
+ * Most immediates are sign-extended. The unsigned value can be
+ * computed by masking with ((1 << (nbytes * 8)) - 1).
+ *
+ * Returns:
+ * 0:  on success
+ * < 0: on error
+ */
+int insn_get_immediate(struct insn *insn)
+{
+	int ret;
+
+	if (insn->immediate.got)
+		return 0;
+
+	if (!insn->displacement.got) {
+		ret = insn_get_displacement(insn);
+		if (ret)
+			return ret;
+	}
+
+	if (inat_has_moffset(insn->attr)) {
+		if (!__get_moffset(insn))
+			goto err_out;
+		goto done;
+	}
+
+	if (!inat_has_immediate(insn->attr))
+		/* no immediates */
+		goto done;
+
+	switch (inat_immediate_size(insn->attr)) {
+	case INAT_IMM_BYTE:
+		insn_field_set(&insn->immediate, get_next(signed char, insn), 1);
+		break;
+	case INAT_IMM_WORD:
+		insn_field_set(&insn->immediate, get_next(short, insn), 2);
+		break;
+	case INAT_IMM_DWORD:
+		insn_field_set(&insn->immediate, get_next(int, insn), 4);
+		break;
+	case INAT_IMM_QWORD:
+		insn_field_set(&insn->immediate1, get_next(int, insn), 4);
+		insn_field_set(&insn->immediate2, get_next(int, insn), 4);
+		break;
+	case INAT_IMM_PTR:
+		if (!__get_immptr(insn))
+			goto err_out;
+		break;
+	case INAT_IMM_VWORD32:
+		if (!__get_immv32(insn))
+			goto err_out;
+		break;
+	case INAT_IMM_VWORD:
+		if (!__get_immv(insn))
+			goto err_out;
+		break;
+	default:
+		/* Here, insn must have an immediate, but failed */
+		goto err_out;
+	}
+	if (inat_has_second_immediate(insn->attr)) {
+		insn_field_set(&insn->immediate2, get_next(signed char, insn), 1);
+	}
+done:
+	insn->immediate.got = 1;
+	return 0;
+
+err_out:
+	return -ENODATA;
+}
+
+/**
+ * insn_get_length() - Get the length of instruction
+ * @insn:	&struct insn containing instruction
+ *
+ * If necessary, first collects the instruction up to and including the
+ * immediates bytes.
+ *
+ * Returns:
+ *  - 0 on success
+ *  - < 0 on error
+ */
+int insn_get_length(struct insn *insn)
+{
+	int ret;
+
+	if (insn->length)
+		return 0;
+
+	if (!insn->immediate.got) {
+		ret = insn_get_immediate(insn);
+		if (ret)
+			return ret;
+	}
+
+	insn->length = (unsigned char)((unsigned long)insn->next_byte
+				     - (unsigned long)insn->kaddr);
+
+	return 0;
+}
+
+/* Ensure this instruction is decoded completely */
+static inline int insn_complete(struct insn *insn)
+{
+	return insn->opcode.got && insn->modrm.got && insn->sib.got &&
+		insn->displacement.got && insn->immediate.got;
+}
+
+/**
+ * insn_decode() - Decode an x86 instruction
+ * @insn:	&struct insn to be initialized
+ * @kaddr:	address (in kernel memory) of instruction (or copy thereof)
+ * @buf_len:	length of the insn buffer at @kaddr
+ * @m:		insn mode, see enum insn_mode
+ *
+ * Returns:
+ * 0: if decoding succeeded
+ * < 0: otherwise.
+ */
+int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m)
+{
+	int ret;
+
+	insn_init(insn, kaddr, buf_len, m == INSN_MODE_64);
+
+	ret = insn_get_length(insn);
+	if (ret)
+		return ret;
+
+	if (insn_complete(insn))
+		return 0;
+
+	return -EINVAL;
+}
+
+/**
+ * insn_has_rep_prefix() - Determine if instruction has a REP prefix
+ * @insn:       Instruction containing the prefix to inspect
+ *
+ * Returns:
+ *
+ * 1 if the instruction has a REP prefix, 0 if not.
+ */
+int insn_has_rep_prefix(struct insn *insn)
+{
+	insn_byte_t p;
+	int i;
+
+	insn_get_prefixes(insn);
+
+	for_each_insn_prefix(insn, i, p) {
+		if (p == 0xf2 || p == 0xf3)
+			return 1;
+	}
+
+	return 0;
+}
diff --git a/lib/x86/insn/insn.h b/lib/x86/insn/insn.h
new file mode 100644
index 0000000..0fbe305
--- /dev/null
+++ b/lib/x86/insn/insn.h
@@ -0,0 +1,280 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _ASM_X86_INSN_H
+#define _ASM_X86_INSN_H
+/*
+ * x86 instruction analysis
+ *
+ * Copyright (C) IBM Corporation, 2009
+ *
+ * Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
+ *   tools/arch/x86/include/asm/insn.h
+ */
+
+#include <asm/byteorder.h>
+/* insn_attr_t is defined in inat.h */
+#include "inat.h" /* __ignore_sync_check__ */
+
+#if defined(__BYTE_ORDER) ? __BYTE_ORDER == __LITTLE_ENDIAN : defined(__LITTLE_ENDIAN)
+
+struct insn_field {
+	union {
+		insn_value_t value;
+		insn_byte_t bytes[4];
+	};
+	/* !0 if we've run insn_get_xxx() for this field */
+	unsigned char got;
+	unsigned char nbytes;
+};
+
+static inline void insn_field_set(struct insn_field *p, insn_value_t v,
+				  unsigned char n)
+{
+	p->value = v;
+	p->nbytes = n;
+}
+
+static inline void insn_set_byte(struct insn_field *p, unsigned char n,
+				 insn_byte_t v)
+{
+	p->bytes[n] = v;
+}
+
+#else
+
+struct insn_field {
+	insn_value_t value;
+	union {
+		insn_value_t little;
+		insn_byte_t bytes[4];
+	};
+	/* !0 if we've run insn_get_xxx() for this field */
+	unsigned char got;
+	unsigned char nbytes;
+};
+
+static inline void insn_field_set(struct insn_field *p, insn_value_t v,
+				  unsigned char n)
+{
+	p->value = v;
+	p->little = __cpu_to_le32(v);
+	p->nbytes = n;
+}
+
+static inline void insn_set_byte(struct insn_field *p, unsigned char n,
+				 insn_byte_t v)
+{
+	p->bytes[n] = v;
+	p->value = __le32_to_cpu(p->little);
+}
+#endif
+
+struct insn {
+	struct insn_field prefixes;	/*
+					 * Prefixes
+					 * prefixes.bytes[3]: last prefix
+					 */
+	struct insn_field rex_prefix;	/* REX prefix */
+	struct insn_field vex_prefix;	/* VEX prefix */
+	struct insn_field opcode;	/*
+					 * opcode.bytes[0]: opcode1
+					 * opcode.bytes[1]: opcode2
+					 * opcode.bytes[2]: opcode3
+					 */
+	struct insn_field modrm;
+	struct insn_field sib;
+	struct insn_field displacement;
+	union {
+		struct insn_field immediate;
+		struct insn_field moffset1;	/* for 64bit MOV */
+		struct insn_field immediate1;	/* for 64bit imm or off16/32 */
+	};
+	union {
+		struct insn_field moffset2;	/* for 64bit MOV */
+		struct insn_field immediate2;	/* for 64bit imm or seg16 */
+	};
+
+	int	emulate_prefix_size;
+	insn_attr_t attr;
+	unsigned char opnd_bytes;
+	unsigned char addr_bytes;
+	unsigned char length;
+	unsigned char x86_64;
+
+	const insn_byte_t *kaddr;	/* kernel address of insn to analyze */
+	const insn_byte_t *end_kaddr;	/* kernel address of last insn in buffer */
+	const insn_byte_t *next_byte;
+};
+
+#define MAX_INSN_SIZE	15
+
+#define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
+#define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
+#define X86_MODRM_RM(modrm) ((modrm) & 0x07)
+
+#define X86_SIB_SCALE(sib) (((sib) & 0xc0) >> 6)
+#define X86_SIB_INDEX(sib) (((sib) & 0x38) >> 3)
+#define X86_SIB_BASE(sib) ((sib) & 0x07)
+
+#define X86_REX_W(rex) ((rex) & 8)
+#define X86_REX_R(rex) ((rex) & 4)
+#define X86_REX_X(rex) ((rex) & 2)
+#define X86_REX_B(rex) ((rex) & 1)
+
+/* VEX bit flags  */
+#define X86_VEX_W(vex)	((vex) & 0x80)	/* VEX3 Byte2 */
+#define X86_VEX_R(vex)	((vex) & 0x80)	/* VEX2/3 Byte1 */
+#define X86_VEX_X(vex)	((vex) & 0x40)	/* VEX3 Byte1 */
+#define X86_VEX_B(vex)	((vex) & 0x20)	/* VEX3 Byte1 */
+#define X86_VEX_L(vex)	((vex) & 0x04)	/* VEX3 Byte2, VEX2 Byte1 */
+/* VEX bit fields */
+#define X86_EVEX_M(vex)	((vex) & 0x03)		/* EVEX Byte1 */
+#define X86_VEX3_M(vex)	((vex) & 0x1f)		/* VEX3 Byte1 */
+#define X86_VEX2_M	1			/* VEX2.M always 1 */
+#define X86_VEX_V(vex)	(((vex) & 0x78) >> 3)	/* VEX3 Byte2, VEX2 Byte1 */
+#define X86_VEX_P(vex)	((vex) & 0x03)		/* VEX3 Byte2, VEX2 Byte1 */
+#define X86_VEX_M_MAX	0x1f			/* VEX3.M Maximum value */
+
+extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64);
+extern int insn_get_prefixes(struct insn *insn);
+extern int insn_get_opcode(struct insn *insn);
+extern int insn_get_modrm(struct insn *insn);
+extern int insn_get_sib(struct insn *insn);
+extern int insn_get_displacement(struct insn *insn);
+extern int insn_get_immediate(struct insn *insn);
+extern int insn_get_length(struct insn *insn);
+
+enum insn_mode {
+	INSN_MODE_32,
+	INSN_MODE_64,
+	/* Mode is determined by the current kernel build. */
+	INSN_MODE_KERN,
+	INSN_NUM_MODES,
+};
+
+extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m);
+extern int insn_has_rep_prefix(struct insn *insn);
+
+#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN)
+
+/* Attribute will be determined after getting ModRM (for opcode groups) */
+static inline void insn_get_attribute(struct insn *insn)
+{
+	insn_get_modrm(insn);
+}
+
+/* Instruction uses RIP-relative addressing */
+extern int insn_rip_relative(struct insn *insn);
+
+static inline int insn_is_avx(struct insn *insn)
+{
+	if (!insn->prefixes.got)
+		insn_get_prefixes(insn);
+	return (insn->vex_prefix.value != 0);
+}
+
+static inline int insn_is_evex(struct insn *insn)
+{
+	if (!insn->prefixes.got)
+		insn_get_prefixes(insn);
+	return (insn->vex_prefix.nbytes == 4);
+}
+
+static inline int insn_has_emulate_prefix(struct insn *insn)
+{
+	return !!insn->emulate_prefix_size;
+}
+
+static inline insn_byte_t insn_vex_m_bits(struct insn *insn)
+{
+	if (insn->vex_prefix.nbytes == 2)	/* 2 bytes VEX */
+		return X86_VEX2_M;
+	else if (insn->vex_prefix.nbytes == 3)	/* 3 bytes VEX */
+		return X86_VEX3_M(insn->vex_prefix.bytes[1]);
+	else					/* EVEX */
+		return X86_EVEX_M(insn->vex_prefix.bytes[1]);
+}
+
+static inline insn_byte_t insn_vex_p_bits(struct insn *insn)
+{
+	if (insn->vex_prefix.nbytes == 2)	/* 2 bytes VEX */
+		return X86_VEX_P(insn->vex_prefix.bytes[1]);
+	else
+		return X86_VEX_P(insn->vex_prefix.bytes[2]);
+}
+
+/* Get the last prefix id from last prefix or VEX prefix */
+static inline int insn_last_prefix_id(struct insn *insn)
+{
+	if (insn_is_avx(insn))
+		return insn_vex_p_bits(insn);	/* VEX_p is a SIMD prefix id */
+
+	if (insn->prefixes.bytes[3])
+		return inat_get_last_prefix_id(insn->prefixes.bytes[3]);
+
+	return 0;
+}
+
+/* Offset of each field from kaddr */
+static inline int insn_offset_rex_prefix(struct insn *insn)
+{
+	return insn->prefixes.nbytes;
+}
+static inline int insn_offset_vex_prefix(struct insn *insn)
+{
+	return insn_offset_rex_prefix(insn) + insn->rex_prefix.nbytes;
+}
+static inline int insn_offset_opcode(struct insn *insn)
+{
+	return insn_offset_vex_prefix(insn) + insn->vex_prefix.nbytes;
+}
+static inline int insn_offset_modrm(struct insn *insn)
+{
+	return insn_offset_opcode(insn) + insn->opcode.nbytes;
+}
+static inline int insn_offset_sib(struct insn *insn)
+{
+	return insn_offset_modrm(insn) + insn->modrm.nbytes;
+}
+static inline int insn_offset_displacement(struct insn *insn)
+{
+	return insn_offset_sib(insn) + insn->sib.nbytes;
+}
+static inline int insn_offset_immediate(struct insn *insn)
+{
+	return insn_offset_displacement(insn) + insn->displacement.nbytes;
+}
+
+/**
+ * for_each_insn_prefix() -- Iterate prefixes in the instruction
+ * @insn: Pointer to struct insn.
+ * @idx:  Index storage.
+ * @prefix: Prefix byte.
+ *
+ * Iterate over the prefix bytes of the given @insn. Each prefix byte is
+ * stored in @prefix and its index in @idx (note that @idx is just a
+ * cursor; do not change it).
+ * Since prefixes.nbytes can be bigger than 4 if some prefixes
+ * are repeated, it cannot be used for looping over the prefixes.
+ */
+#define for_each_insn_prefix(insn, idx, prefix)	\
+	for (idx = 0; idx < ARRAY_SIZE(insn->prefixes.bytes) && (prefix = insn->prefixes.bytes[idx]) != 0; idx++)
+
+#define POP_SS_OPCODE 0x1f
+#define MOV_SREG_OPCODE 0x8e
+
+/*
+ * Intel SDM Vol.3A 6.8.3 states:
+ * "Any single-step trap that would be delivered following the MOV to SS
+ * instruction or POP to SS instruction (because EFLAGS.TF is 1) is
+ * suppressed."
+ * This function returns true if @insn is MOV SS or POP SS. On these
+ * instructions, single stepping is suppressed.
+ */
+static inline int insn_masking_exception(struct insn *insn)
+{
+	return insn->opcode.bytes[0] == POP_SS_OPCODE ||
+		(insn->opcode.bytes[0] == MOV_SREG_OPCODE &&
+		 X86_MODRM_REG(insn->modrm.bytes[0]) == 2);
+}
+
+#endif /* _ASM_X86_INSN_H */
diff --git a/x86/Makefile.common b/x86/Makefile.common
index ae426aa..2496d81 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -25,6 +25,8 @@ cflatobjs += lib/x86/delay.o
 ifeq ($(TARGET_EFI),y)
 cflatobjs += lib/x86/amd_sev.o
 cflatobjs += lib/x86/amd_sev_vc.o
+cflatobjs += lib/x86/insn/insn.o
+cflatobjs += lib/x86/insn/inat.o
 cflatobjs += lib/efi.o
 cflatobjs += x86/efi/reloc_x86_64.o
 endif
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers from Linux
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (2 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 19:09   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing Varad Gautam
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87

Suppress -Waddress-of-packed-member to allow taking addresses of struct
ghcb / struct vmcb_save_area fields.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev.h   | 106 ++++++++++++++++++++++++++++++++++++++++++++
 lib/x86/svm.h       |  37 ++++++++++++++++
 x86/Makefile.x86_64 |   1 +
 3 files changed, 144 insertions(+)

diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
index afbacf3..ed71c18 100644
--- a/lib/x86/amd_sev.h
+++ b/lib/x86/amd_sev.h
@@ -18,6 +18,49 @@
 #include "desc.h"
 #include "asm/page.h"
 #include "efi.h"
+#include "processor.h"
+#include "insn/insn.h"
+#include "svm.h"
+
+struct __attribute__ ((__packed__)) ghcb {
+	struct vmcb_save_area save;
+	u8 reserved_save[2048 - sizeof(struct vmcb_save_area)];
+
+	u8 shared_buffer[2032];
+
+	u8 reserved_1[10];
+	u16 protocol_version;	/* negotiated SEV-ES/GHCB protocol version */
+	u32 ghcb_usage;
+};
+
+/* SEV definitions from linux's include/asm/sev.h */
+#define GHCB_PROTO_OUR		0x0001UL
+#define GHCB_PROTOCOL_MAX	1ULL
+#define GHCB_DEFAULT_USAGE	0ULL
+
+#define	VMGEXIT()			{ asm volatile("rep; vmmcall\n\r"); }
+
+enum es_result {
+	ES_OK,			/* All good */
+	ES_UNSUPPORTED,		/* Requested operation not supported */
+	ES_VMM_ERROR,		/* Unexpected state from the VMM */
+	ES_DECODE_FAILED,	/* Instruction decoding failed */
+	ES_EXCEPTION,		/* Instruction caused exception */
+	ES_RETRY,		/* Retry instruction emulation */
+};
+
+struct es_fault_info {
+	unsigned long vector;
+	unsigned long error_code;
+	unsigned long cr2;
+};
+
+/* ES instruction emulation context */
+struct es_em_ctxt {
+	struct ex_regs *regs;
+	struct insn insn;
+	struct es_fault_info fi;
+};
 
 /*
  * AMD Programmer's Manual Volume 3
@@ -59,6 +102,69 @@ void handle_sev_es_vc(struct ex_regs *regs);
 unsigned long long get_amd_sev_c_bit_mask(void);
 unsigned long long get_amd_sev_addr_upperbound(void);
 
+static int _test_bit(int nr, const volatile unsigned long *addr)
+{
+	const volatile unsigned long *word = addr + BIT_WORD(nr);
+	unsigned long mask = BIT_MASK(nr);
+
+	return (*word & mask) != 0;
+}
+
+/* GHCB Accessor functions from Linux's include/asm/svm.h */
+
+#define GHCB_BITMAP_IDX(field)							\
+	(offsetof(struct vmcb_save_area, field) / sizeof(u64))
+
+#define DEFINE_GHCB_ACCESSORS(field)						\
+	static inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb)	\
+	{									\
+		return _test_bit(GHCB_BITMAP_IDX(field),				\
+				(unsigned long *)&ghcb->save.valid_bitmap);	\
+	}									\
+										\
+	static inline u64 ghcb_get_##field(struct ghcb *ghcb)			\
+	{									\
+		return ghcb->save.field;					\
+	}									\
+										\
+	static inline u64 ghcb_get_##field##_if_valid(struct ghcb *ghcb)	\
+	{									\
+		return ghcb_##field##_is_valid(ghcb) ? ghcb->save.field : 0;	\
+	}									\
+										\
+	static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value)	\
+	{									\
+		set_bit(GHCB_BITMAP_IDX(field),				\
+			  (u8 *)&ghcb->save.valid_bitmap);		\
+		ghcb->save.field = value;					\
+	}
+
+DEFINE_GHCB_ACCESSORS(cpl)
+DEFINE_GHCB_ACCESSORS(rip)
+DEFINE_GHCB_ACCESSORS(rsp)
+DEFINE_GHCB_ACCESSORS(rax)
+DEFINE_GHCB_ACCESSORS(rcx)
+DEFINE_GHCB_ACCESSORS(rdx)
+DEFINE_GHCB_ACCESSORS(rbx)
+DEFINE_GHCB_ACCESSORS(rbp)
+DEFINE_GHCB_ACCESSORS(rsi)
+DEFINE_GHCB_ACCESSORS(rdi)
+DEFINE_GHCB_ACCESSORS(r8)
+DEFINE_GHCB_ACCESSORS(r9)
+DEFINE_GHCB_ACCESSORS(r10)
+DEFINE_GHCB_ACCESSORS(r11)
+DEFINE_GHCB_ACCESSORS(r12)
+DEFINE_GHCB_ACCESSORS(r13)
+DEFINE_GHCB_ACCESSORS(r14)
+DEFINE_GHCB_ACCESSORS(r15)
+DEFINE_GHCB_ACCESSORS(sw_exit_code)
+DEFINE_GHCB_ACCESSORS(sw_exit_info_1)
+DEFINE_GHCB_ACCESSORS(sw_exit_info_2)
+DEFINE_GHCB_ACCESSORS(sw_scratch)
+DEFINE_GHCB_ACCESSORS(xcr0)
+
+#define MSR_AMD64_SEV_ES_GHCB          0xc0010130
+
 #endif /* TARGET_EFI */
 
 #endif /* _X86_AMD_SEV_H_ */
diff --git a/lib/x86/svm.h b/lib/x86/svm.h
index f74b13a..f046455 100644
--- a/lib/x86/svm.h
+++ b/lib/x86/svm.h
@@ -197,6 +197,42 @@ struct __attribute__ ((__packed__)) vmcb_save_area {
 	u64 br_to;
 	u64 last_excp_from;
 	u64 last_excp_to;
+
+	/*
+	 * The following part of the save area is valid only for
+	 * SEV-ES guests when referenced through the GHCB or for
+	 * saving to the host save area.
+	 */
+	u8 reserved_7[72];
+	u32 spec_ctrl;          /* Guest version of SPEC_CTRL at 0x2E0 */
+	u8 reserved_7b[4];
+	u32 pkru;
+	u8 reserved_7a[20];
+	u64 reserved_8;         /* rax already available at 0x01f8 */
+	u64 rcx;
+	u64 rdx;
+	u64 rbx;
+	u64 reserved_9;         /* rsp already available at 0x01d8 */
+	u64 rbp;
+	u64 rsi;
+	u64 rdi;
+	u64 r8;
+	u64 r9;
+	u64 r10;
+	u64 r11;
+	u64 r12;
+	u64 r13;
+	u64 r14;
+	u64 r15;
+	u8 reserved_10[16];
+	u64 sw_exit_code;
+	u64 sw_exit_info_1;
+	u64 sw_exit_info_2;
+	u64 sw_scratch;
+	u8 reserved_11[56];
+	u64 xcr0;
+	u8 valid_bitmap[16];
+	u64 x87_state_gpa;
 };
 
 struct __attribute__ ((__packed__)) vmcb {
@@ -297,6 +333,7 @@ struct __attribute__ ((__packed__)) vmcb {
 #define	SVM_EXIT_WRITE_DR6 	0x036
 #define	SVM_EXIT_WRITE_DR7 	0x037
 #define SVM_EXIT_EXCP_BASE      0x040
+#define SVM_EXIT_LAST_EXCP     0x05f
 #define SVM_EXIT_INTR		0x060
 #define SVM_EXIT_NMI		0x061
 #define SVM_EXIT_SMI		0x062
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index a3cb75a..7d3eb53 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -13,6 +13,7 @@ endif
 
 fcf_protection_full := $(call cc-option, -fcf-protection=full,)
 COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full)
+COMMON_CFLAGS += -Wno-address-of-packed-member
 
 cflatobjs += lib/x86/setjmp64.o
 cflatobjs += lib/x86/intel-iommu.o
-- 
2.32.0



* [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (3 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers " Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 20:54   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/ Varad Gautam
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Lay the groundwork for processing #VC exceptions in the handler.
This includes clearing the GHCB, decoding the insn that triggered
this #VC, and continuing execution after the exception has been
processed.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev_vc.c | 78 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 8226121..142f2cd 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -1,14 +1,92 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
 #include "amd_sev.h"
+#include "svm.h"
 
 extern phys_addr_t ghcb_addr;
 
+static void vc_ghcb_invalidate(struct ghcb *ghcb)
+{
+	ghcb->save.sw_exit_code = 0;
+	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
+}
+
+static bool vc_decoding_needed(unsigned long exit_code)
+{
+	/* Exceptions don't require decoding the instruction */
+	return !(exit_code >= SVM_EXIT_EXCP_BASE &&
+		 exit_code <= SVM_EXIT_LAST_EXCP);
+}
+
+static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
+{
+	unsigned char buffer[MAX_INSN_SIZE];
+	int ret;
+
+	memcpy(buffer, (unsigned char *)ctxt->regs->rip, MAX_INSN_SIZE);
+
+	ret = insn_decode(&ctxt->insn, buffer, MAX_INSN_SIZE, INSN_MODE_64);
+	if (ret < 0)
+		return ES_DECODE_FAILED;
+	else
+		return ES_OK;
+}
+
+static enum es_result vc_init_em_ctxt(struct es_em_ctxt *ctxt,
+				      struct ex_regs *regs,
+				      unsigned long exit_code)
+{
+	enum es_result ret = ES_OK;
+
+	memset(ctxt, 0, sizeof(*ctxt));
+	ctxt->regs = regs;
+
+	if (vc_decoding_needed(exit_code))
+		ret = vc_decode_insn(ctxt);
+
+	return ret;
+}
+
+static void vc_finish_insn(struct es_em_ctxt *ctxt)
+{
+	ctxt->regs->rip += ctxt->insn.length;
+}
+
+static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
+					 struct ghcb *ghcb,
+					 unsigned long exit_code)
+{
+	enum es_result result;
+
+	switch (exit_code) {
+	default:
+		/*
+		 * Unexpected #VC exception
+		 */
+		result = ES_UNSUPPORTED;
+	}
+
+	return result;
+}
+
 void handle_sev_es_vc(struct ex_regs *regs)
 {
 	struct ghcb *ghcb = (struct ghcb *) ghcb_addr;
+	unsigned long exit_code = regs->error_code;
+	struct es_em_ctxt ctxt;
+	enum es_result result;
+
 	if (!ghcb) {
 		/* TODO: kill guest */
 		return;
 	}
+
+	vc_ghcb_invalidate(ghcb);
+	result = vc_init_em_ctxt(&ctxt, regs, exit_code);
+	if (result == ES_OK)
+		result = vc_handle_exitcode(&ctxt, ghcb, exit_code);
+	if (result == ES_OK)
+		vc_finish_insn(&ctxt);
+
+	return;
 }
-- 
2.32.0



* [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (4 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 21:12   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC Varad Gautam
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Processing a CPUID #VC for AMD SEV-ES requires copying xcr0 into the
GHCB. Move the xsave read/write helpers used by the xsave testcase to
lib/x86 so they can be shared as common code.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/xsave.c     | 37 +++++++++++++++++++++++++++++++++++++
 lib/x86/xsave.h     | 16 ++++++++++++++++
 x86/Makefile.common |  1 +
 x86/xsave.c         | 43 +------------------------------------------
 4 files changed, 55 insertions(+), 42 deletions(-)
 create mode 100644 lib/x86/xsave.c
 create mode 100644 lib/x86/xsave.h

diff --git a/lib/x86/xsave.c b/lib/x86/xsave.c
new file mode 100644
index 0000000..1c0f16e
--- /dev/null
+++ b/lib/x86/xsave.c
@@ -0,0 +1,37 @@
+#include "libcflat.h"
+#include "xsave.h"
+#include "processor.h"
+
+int xgetbv_checking(u32 index, u64 *result)
+{
+    u32 eax, edx;
+
+    asm volatile(ASM_TRY("1f")
+            ".byte 0x0f,0x01,0xd0\n\t" /* xgetbv */
+            "1:"
+            : "=a" (eax), "=d" (edx)
+            : "c" (index));
+    *result = eax + ((u64)edx << 32);
+    return exception_vector();
+}
+
+int xsetbv_checking(u32 index, u64 value)
+{
+    u32 eax = value;
+    u32 edx = value >> 32;
+
+    asm volatile(ASM_TRY("1f")
+            ".byte 0x0f,0x01,0xd1\n\t" /* xsetbv */
+            "1:"
+            : : "a" (eax), "d" (edx), "c" (index));
+    return exception_vector();
+}
+
+uint64_t get_supported_xcr0(void)
+{
+    struct cpuid r;
+    r = cpuid_indexed(0xd, 0);
+    printf("eax %x, ebx %x, ecx %x, edx %x\n",
+            r.a, r.b, r.c, r.d);
+    return r.a + ((u64)r.d << 32);
+}
diff --git a/lib/x86/xsave.h b/lib/x86/xsave.h
new file mode 100644
index 0000000..f1851a3
--- /dev/null
+++ b/lib/x86/xsave.h
@@ -0,0 +1,16 @@
+#ifndef _X86_XSAVE_H_
+#define _X86_XSAVE_H_
+
+#define X86_CR4_OSXSAVE			0x00040000
+#define XCR_XFEATURE_ENABLED_MASK       0x00000000
+#define XCR_XFEATURE_ILLEGAL_MASK       0x00000010
+
+#define XSTATE_FP       0x1
+#define XSTATE_SSE      0x2
+#define XSTATE_YMM      0x4
+
+int xgetbv_checking(u32 index, u64 *result);
+int xsetbv_checking(u32 index, u64 value);
+uint64_t get_supported_xcr0(void);
+
+#endif
diff --git a/x86/Makefile.common b/x86/Makefile.common
index 2496d81..aa30948 100644
--- a/x86/Makefile.common
+++ b/x86/Makefile.common
@@ -22,6 +22,7 @@ cflatobjs += lib/x86/acpi.o
 cflatobjs += lib/x86/stack.o
 cflatobjs += lib/x86/fault_test.o
 cflatobjs += lib/x86/delay.o
+cflatobjs += lib/x86/xsave.o
 ifeq ($(TARGET_EFI),y)
 cflatobjs += lib/x86/amd_sev.o
 cflatobjs += lib/x86/amd_sev_vc.o
diff --git a/x86/xsave.c b/x86/xsave.c
index 892bf56..bd8fe11 100644
--- a/x86/xsave.c
+++ b/x86/xsave.c
@@ -1,6 +1,7 @@
 #include "libcflat.h"
 #include "desc.h"
 #include "processor.h"
+#include "xsave.h"
 
 #ifdef __x86_64__
 #define uint64_t unsigned long
@@ -8,48 +9,6 @@
 #define uint64_t unsigned long long
 #endif
 
-static int xgetbv_checking(u32 index, u64 *result)
-{
-    u32 eax, edx;
-
-    asm volatile(ASM_TRY("1f")
-            ".byte 0x0f,0x01,0xd0\n\t" /* xgetbv */
-            "1:"
-            : "=a" (eax), "=d" (edx)
-            : "c" (index));
-    *result = eax + ((u64)edx << 32);
-    return exception_vector();
-}
-
-static int xsetbv_checking(u32 index, u64 value)
-{
-    u32 eax = value;
-    u32 edx = value >> 32;
-
-    asm volatile(ASM_TRY("1f")
-            ".byte 0x0f,0x01,0xd1\n\t" /* xsetbv */
-            "1:"
-            : : "a" (eax), "d" (edx), "c" (index));
-    return exception_vector();
-}
-
-static uint64_t get_supported_xcr0(void)
-{
-    struct cpuid r;
-    r = cpuid_indexed(0xd, 0);
-    printf("eax %x, ebx %x, ecx %x, edx %x\n",
-            r.a, r.b, r.c, r.d);
-    return r.a + ((u64)r.d << 32);
-}
-
-#define X86_CR4_OSXSAVE			0x00040000
-#define XCR_XFEATURE_ENABLED_MASK       0x00000000
-#define XCR_XFEATURE_ILLEGAL_MASK       0x00000010
-
-#define XSTATE_FP       0x1
-#define XSTATE_SSE      0x2
-#define XSTATE_YMM      0x4
-
 static void test_xsave(void)
 {
     unsigned long cr4;
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (5 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/ Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 21:32   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC Varad Gautam
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Handle CPUID #VC exceptions, porting Linux's CPUID #VC processing logic.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev_vc.c | 98 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 142f2cd..9ee67c0 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -2,6 +2,7 @@
 
 #include "amd_sev.h"
 #include "svm.h"
+#include "x86/xsave.h"
 
 extern phys_addr_t ghcb_addr;
 
@@ -52,6 +53,100 @@ static void vc_finish_insn(struct es_em_ctxt *ctxt)
 	ctxt->regs->rip += ctxt->insn.length;
 }
 
+static inline u64 lower_bits(u64 val, unsigned int bits)
+{
+	u64 mask = (1ULL << bits) - 1;
+
+	return (val & mask);
+}
+
+static inline void sev_es_wr_ghcb_msr(u64 val)
+{
+	wrmsr(MSR_AMD64_SEV_ES_GHCB, val);
+}
+
+static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+					  struct es_em_ctxt *ctxt,
+					  u64 exit_code, u64 exit_info_1,
+					  u64 exit_info_2)
+{
+	enum es_result ret;
+
+	/* Fill in protocol and format specifiers */
+	ghcb->protocol_version = GHCB_PROTOCOL_MAX;
+	ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
+
+	ghcb_set_sw_exit_code(ghcb, exit_code);
+	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
+	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
+
+	sev_es_wr_ghcb_msr(__pa(ghcb));
+	VMGEXIT();
+
+	if ((ghcb->save.sw_exit_info_1 & 0xffffffff) == 1) {
+		u64 info = ghcb->save.sw_exit_info_2;
+		unsigned long v;
+
+		info = ghcb->save.sw_exit_info_2;
+		v = info & SVM_EVTINJ_VEC_MASK;
+
+		/* Check if exception information from hypervisor is sane. */
+		if ((info & SVM_EVTINJ_VALID) &&
+		    ((v == GP_VECTOR) || (v == UD_VECTOR)) &&
+		    ((info & SVM_EVTINJ_TYPE_MASK) == SVM_EVTINJ_TYPE_EXEPT)) {
+			ctxt->fi.vector = v;
+			if (info & SVM_EVTINJ_VALID_ERR)
+				ctxt->fi.error_code = info >> 32;
+			ret = ES_EXCEPTION;
+		} else {
+			ret = ES_VMM_ERROR;
+		}
+	} else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
+		ret = ES_VMM_ERROR;
+	} else {
+		ret = ES_OK;
+	}
+
+	return ret;
+}
+
+static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
+				      struct es_em_ctxt *ctxt)
+{
+	struct ex_regs *regs = ctxt->regs;
+	u32 cr4 = read_cr4();
+	enum es_result ret;
+
+	ghcb_set_rax(ghcb, regs->rax);
+	ghcb_set_rcx(ghcb, regs->rcx);
+
+	if (cr4 & X86_CR4_OSXSAVE) {
+		/* Safe to read xcr0 */
+		u64 xcr0;
+		xgetbv_checking(XCR_XFEATURE_ENABLED_MASK, &xcr0);
+		ghcb_set_xcr0(ghcb, xcr0);
+	} else
+		/* xgetbv will cause #GP - use reset value for xcr0 */
+		ghcb_set_xcr0(ghcb, 1);
+
+	ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
+	if (ret != ES_OK)
+		return ret;
+
+	if (!(ghcb_rax_is_valid(ghcb) &&
+	      ghcb_rbx_is_valid(ghcb) &&
+	      ghcb_rcx_is_valid(ghcb) &&
+	      ghcb_rdx_is_valid(ghcb)))
+		return ES_VMM_ERROR;
+
+	regs->rax = ghcb->save.rax;
+	regs->rbx = ghcb->save.rbx;
+	regs->rcx = ghcb->save.rcx;
+	regs->rdx = ghcb->save.rdx;
+
+	return ES_OK;
+}
+
 static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 					 struct ghcb *ghcb,
 					 unsigned long exit_code)
@@ -59,6 +154,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 	enum es_result result;
 
 	switch (exit_code) {
+	case SVM_EXIT_CPUID:
+		result = vc_handle_cpuid(ghcb, ctxt);
+		break;
 	default:
 		/*
 		 * Unexpected #VC exception
-- 
2.32.0



* [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (6 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 21:49   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC Varad Gautam
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for " Varad Gautam
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Handle MSR #VC exceptions, porting Linux's MSR #VC processing logic.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev_vc.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 9ee67c0..401cb29 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -147,6 +147,31 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
 	return ES_OK;
 }
 
+static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
+{
+	struct ex_regs *regs = ctxt->regs;
+	enum es_result ret;
+	u64 exit_info_1;
+
+	/* Is it a WRMSR? */
+	exit_info_1 = (ctxt->insn.opcode.bytes[1] == 0x30) ? 1 : 0;
+
+	ghcb_set_rcx(ghcb, regs->rcx);
+	if (exit_info_1) {
+		ghcb_set_rax(ghcb, regs->rax);
+		ghcb_set_rdx(ghcb, regs->rdx);
+	}
+
+	ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0);
+
+	if ((ret == ES_OK) && (!exit_info_1)) {
+		regs->rax = ghcb->save.rax;
+		regs->rdx = ghcb->save.rdx;
+	}
+
+	return ret;
+}
+
 static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 					 struct ghcb *ghcb,
 					 unsigned long exit_code)
@@ -157,6 +182,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 	case SVM_EXIT_CPUID:
 		result = vc_handle_cpuid(ghcb, ctxt);
 		break;
+	case SVM_EXIT_MSR:
+		result = vc_handle_msr(ghcb, ctxt);
+		break;
 	default:
 		/*
 		 * Unexpected #VC exception
-- 
2.32.0



* [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (7 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-12 23:03   ` Marc Orr
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for " Varad Gautam
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Handle IOIO #VC exceptions for the non-string IN/OUT variants, porting
Linux's IOIO #VC processing logic.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev_vc.c | 146 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 401cb29..88c95e1 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -172,6 +172,149 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 	return ret;
 }
 
+#define IOIO_TYPE_STR  BIT(2)
+#define IOIO_TYPE_IN   1
+#define IOIO_TYPE_INS  (IOIO_TYPE_IN | IOIO_TYPE_STR)
+#define IOIO_TYPE_OUT  0
+#define IOIO_TYPE_OUTS (IOIO_TYPE_OUT | IOIO_TYPE_STR)
+
+#define IOIO_REP       BIT(3)
+
+#define IOIO_ADDR_64   BIT(9)
+#define IOIO_ADDR_32   BIT(8)
+#define IOIO_ADDR_16   BIT(7)
+
+#define IOIO_DATA_32   BIT(6)
+#define IOIO_DATA_16   BIT(5)
+#define IOIO_DATA_8    BIT(4)
+
+#define IOIO_SEG_ES    (0 << 10)
+#define IOIO_SEG_DS    (3 << 10)
+
+static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
+{
+	struct insn *insn = &ctxt->insn;
+	*exitinfo = 0;
+
+	switch (insn->opcode.bytes[0]) {
+	/* INS opcodes */
+	case 0x6c:
+	case 0x6d:
+		*exitinfo |= IOIO_TYPE_INS;
+		*exitinfo |= IOIO_SEG_ES;
+		*exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
+		break;
+
+	/* OUTS opcodes */
+	case 0x6e:
+	case 0x6f:
+		*exitinfo |= IOIO_TYPE_OUTS;
+		*exitinfo |= IOIO_SEG_DS;
+		*exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
+		break;
+
+	/* IN immediate opcodes */
+	case 0xe4:
+	case 0xe5:
+		*exitinfo |= IOIO_TYPE_IN;
+		*exitinfo |= (u8)insn->immediate.value << 16;
+		break;
+
+	/* OUT immediate opcodes */
+	case 0xe6:
+	case 0xe7:
+		*exitinfo |= IOIO_TYPE_OUT;
+		*exitinfo |= (u8)insn->immediate.value << 16;
+		break;
+
+	/* IN register opcodes */
+	case 0xec:
+	case 0xed:
+		*exitinfo |= IOIO_TYPE_IN;
+		*exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
+		break;
+
+	/* OUT register opcodes */
+	case 0xee:
+	case 0xef:
+		*exitinfo |= IOIO_TYPE_OUT;
+		*exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
+		break;
+
+	default:
+		return ES_DECODE_FAILED;
+	}
+
+	switch (insn->opcode.bytes[0]) {
+	case 0x6c:
+	case 0x6e:
+	case 0xe4:
+	case 0xe6:
+	case 0xec:
+	case 0xee:
+		/* Single byte opcodes */
+		*exitinfo |= IOIO_DATA_8;
+		break;
+	default:
+		/* Length determined by instruction parsing */
+		*exitinfo |= (insn->opnd_bytes == 2) ? IOIO_DATA_16
+						     : IOIO_DATA_32;
+	}
+	switch (insn->addr_bytes) {
+	case 2:
+		*exitinfo |= IOIO_ADDR_16;
+		break;
+	case 4:
+		*exitinfo |= IOIO_ADDR_32;
+		break;
+	case 8:
+		*exitinfo |= IOIO_ADDR_64;
+		break;
+	}
+
+	if (insn_has_rep_prefix(insn))
+		*exitinfo |= IOIO_REP;
+
+	return ES_OK;
+}
+
+static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
+{
+	struct ex_regs *regs = ctxt->regs;
+	u64 exit_info_1;
+	enum es_result ret;
+
+	ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
+	if (ret != ES_OK)
+		return ret;
+
+	if (exit_info_1 & IOIO_TYPE_STR) {
+		ret = ES_VMM_ERROR;
+	} else {
+		/* IN/OUT into/from rAX */
+
+		int bits = (exit_info_1 & 0x70) >> 1;
+		u64 rax = 0;
+
+		if (!(exit_info_1 & IOIO_TYPE_IN))
+			rax = lower_bits(regs->rax, bits);
+
+		ghcb_set_rax(ghcb, rax);
+
+		ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, exit_info_1, 0);
+		if (ret != ES_OK)
+			return ret;
+
+		if (exit_info_1 & IOIO_TYPE_IN) {
+			if (!ghcb_rax_is_valid(ghcb))
+				return ES_VMM_ERROR;
+			regs->rax = lower_bits(ghcb->save.rax, bits);
+		}
+	}
+
+	return ret;
+}
+
 static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 					 struct ghcb *ghcb,
 					 unsigned long exit_code)
@@ -185,6 +328,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
 	case SVM_EXIT_MSR:
 		result = vc_handle_msr(ghcb, ctxt);
 		break;
+	case SVM_EXIT_IOIO:
+		result = vc_handle_ioio(ghcb, ctxt);
+		break;
 	default:
 		/*
 		 * Unexpected #VC exception
-- 
2.32.0



* [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for IOIO #VC
  2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
                   ` (8 preceding siblings ...)
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC Varad Gautam
@ 2022-02-09 16:44 ` Varad Gautam
  2022-02-13  1:31   ` Marc Orr
  9 siblings, 1 reply; 25+ messages in thread
From: Varad Gautam @ 2022-02-09 16:44 UTC (permalink / raw)
  To: kvm, pbonzini, drjones
  Cc: marcorr, zxwang42, erdemaktas, rientjes, seanjc, brijesh.singh,
	Thomas.Lendacky, jroedel, bp, varad.gautam

Extend the IOIO #VC handler to support the string variants (INS/OUTS,
with and without REP), porting Linux's IOIO #VC processing logic.

Signed-off-by: Varad Gautam <varad.gautam@suse.com>
---
 lib/x86/amd_sev_vc.c | 108 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 106 insertions(+), 2 deletions(-)

diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
index 88c95e1..c79d9be 100644
--- a/lib/x86/amd_sev_vc.c
+++ b/lib/x86/amd_sev_vc.c
@@ -278,10 +278,46 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	return ES_OK;
 }
 
+static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
+					  void *src, unsigned char *buf,
+					  unsigned int data_size,
+					  unsigned int count,
+					  bool backwards)
+{
+	int i, b = backwards ? -1 : 1;
+
+	for (i = 0; i < count; i++) {
+		void *s = src + (i * data_size * b);
+		unsigned char *d = buf + (i * data_size);
+
+		memcpy(d, s, data_size);
+	}
+
+	return ES_OK;
+}
+
+static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
+					   void *dst, unsigned char *buf,
+					   unsigned int data_size,
+					   unsigned int count,
+					   bool backwards)
+{
+	int i, s = backwards ? -1 : 1;
+
+	for (i = 0; i < count; i++) {
+		void *d = dst + (i * data_size * s);
+		unsigned char *b = buf + (i * data_size);
+
+		memcpy(d, b, data_size);
+	}
+
+	return ES_OK;
+}
+
 static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 {
 	struct ex_regs *regs = ctxt->regs;
-	u64 exit_info_1;
+	u64 exit_info_1, exit_info_2;
 	enum es_result ret;
 
 	ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
@@ -289,7 +325,75 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 		return ret;
 
 	if (exit_info_1 & IOIO_TYPE_STR) {
-		ret = ES_VMM_ERROR;
+		/* (REP) INS/OUTS */
+
+		bool df = ((regs->rflags & X86_EFLAGS_DF) == X86_EFLAGS_DF);
+		unsigned int io_bytes, exit_bytes;
+		unsigned int ghcb_count, op_count;
+		unsigned long es_base;
+		u64 sw_scratch;
+
+		/*
+		 * For the string variants with rep prefix the amount of in/out
+		 * operations per #VC exception is limited so that the kernel
+		 * has a chance to take interrupts and re-schedule while the
+		 * instruction is emulated.
+		 */
+		io_bytes   = (exit_info_1 >> 4) & 0x7;
+		ghcb_count = sizeof(ghcb->shared_buffer) / io_bytes;
+
+		op_count    = (exit_info_1 & IOIO_REP) ? regs->rcx : 1;
+		exit_info_2 = op_count < ghcb_count ? op_count : ghcb_count;
+		exit_bytes  = exit_info_2 * io_bytes;
+
+		es_base = 0;
+
+		/* Read bytes of OUTS into the shared buffer */
+		if (!(exit_info_1 & IOIO_TYPE_IN)) {
+			ret = vc_insn_string_read(ctxt,
+					       (void *)(es_base + regs->rsi),
+					       ghcb->shared_buffer, io_bytes,
+					       exit_info_2, df);
+			if (ret)
+				return ret;
+		}
+
+		/*
+		 * Issue a VMGEXIT to the HV to consume the bytes from the
+		 * shared buffer or to have it write them into the shared buffer
+		 * depending on the instruction: OUTS or INS.
+		 */
+		sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer);
+		ghcb_set_sw_scratch(ghcb, sw_scratch);
+		ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO,
+					  exit_info_1, exit_info_2);
+		if (ret != ES_OK)
+			return ret;
+
+		/* Read bytes from shared buffer into the guest's destination. */
+		if (exit_info_1 & IOIO_TYPE_IN) {
+			ret = vc_insn_string_write(ctxt,
+						   (void *)(es_base + regs->rdi),
+						   ghcb->shared_buffer, io_bytes,
+						   exit_info_2, df);
+			if (ret)
+				return ret;
+
+			if (df)
+				regs->rdi -= exit_bytes;
+			else
+				regs->rdi += exit_bytes;
+		} else {
+			if (df)
+				regs->rsi -= exit_bytes;
+			else
+				regs->rsi += exit_bytes;
+		}
+
+		if (exit_info_1 & IOIO_REP)
+			regs->rcx -= exit_info_2;
+
+		ret = regs->rcx ? ES_RETRY : ES_OK;
 	} else {
 		/* IN/OUT into/from rAX */
 
-- 
2.32.0



* Re: [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler for AMD SEV-ES
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler " Varad Gautam
@ 2022-02-12 16:59   ` Marc Orr
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Orr @ 2022-02-12 16:59 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> AMD SEV-ES defines a new guest exception that gets triggered on
> some vmexits to allow the guest to control what state gets shared
> with the host. kvm-unit-tests currently relies on UEFI to provide
> this #VC exception handler. This leads to the following problems:
>
> 1) The test's page table needs to map the firmware and the shared
>    GHCB used by the firmware.
> 2) The firmware needs to keep its #VC handler in the current IDT
>    so that kvm-unit-tests can copy the #VC entry into its own IDT.
> 3) The firmware #VC handler might use state which is not available
>    anymore after ExitBootServices.
> 4) After ExitBootServices, the firmware needs to get the GHCB address
>    from the GHCB MSR if it needs to use the kvm-unit-test GHCB. This
>    requires keeping an identity mapping, and the GHCB address must be
>    in the MSR at all times where a #VC could happen.
>
> Problems 1) and 2) were temporarily mitigated via commits b114aa57ab
> ("x86 AMD SEV-ES: Set up GHCB page") and 706ede1833 ("x86 AMD SEV-ES:
> Copy UEFI #VC IDT entry") respectively.
>
> However, to make kvm-unit-tests reliable against 3) and 4), the tests
> must supply their own #VC handler [1][2].
>
> Switch the tests to install a #VC handler on early bootup, just after
> GHCB has been mapped. The tests will use this handler by default.
> If --amdsev-efi-vc is passed during ./configure, the tests will
> continue using the UEFI #VC handler.
>
> [1] https://lore.kernel.org/all/Yf0GO8EydyQSdZvu@suse.de/
> [2] https://lore.kernel.org/all/YSA%2FsYhGgMU72tn+@google.com/
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  Makefile             |  3 +++
>  configure            | 21 +++++++++++++++++++++
>  lib/x86/amd_sev.c    | 13 +++++--------
>  lib/x86/amd_sev.h    |  1 +
>  lib/x86/amd_sev_vc.c | 14 ++++++++++++++
>  lib/x86/desc.c       | 15 +++++++++++++++
>  lib/x86/desc.h       |  1 +
>  lib/x86/setup.c      |  8 ++++++++
>  x86/Makefile.common  |  1 +
>  9 files changed, 69 insertions(+), 8 deletions(-)
>  create mode 100644 lib/x86/amd_sev_vc.c
>
> diff --git a/Makefile b/Makefile
> index 4f4ad23..94a0162 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -46,6 +46,9 @@ else
>  $(error Cannot build $(ARCH_NAME) tests as EFI apps)
>  endif
>  EFI_CFLAGS := -DTARGET_EFI
> +ifeq ($(AMDSEV_EFI_VC),y)
> +EFI_CFLAGS += -DAMDSEV_EFI_VC
> +endif
>  # The following CFLAGS and LDFLAGS come from:
>  #   - GNU-EFI/Makefile.defaults
>  #   - GNU-EFI/apps/Makefile
> diff --git a/configure b/configure
> index 2d9c3e0..148d051 100755
> --- a/configure
> +++ b/configure
> @@ -30,6 +30,12 @@ gen_se_header=
>  page_size=
>  earlycon=
>  target_efi=
> +# For AMD SEV-ES, the tests build to use their own #VC exception handler
> +# by default, instead of using the one installed by UEFI. This ensures
> +# that the tests do not depend on UEFI state after ExitBootServices.
> +# To continue using the UEFI #VC handler, ./configure can be run with
> +# --amdsev-efi-vc.
> +amdsev_efi_vc=
>
>  usage() {
>      cat <<-EOF
> @@ -75,6 +81,8 @@ usage() {
>                                    Specify a PL011 compatible UART at address ADDR. Supported
>                                    register stride is 32 bit only.
>             --target-efi           Boot and run from UEFI
> +           --amdsev-efi-vc        Use UEFI-provided #VC handlers on AMD SEV/ES. Requires
> +                                  --target-efi.
>  EOF
>      exit 1
>  }
> @@ -145,6 +153,9 @@ while [[ "$1" = -* ]]; do
>         --target-efi)
>             target_efi=y
>             ;;
> +       --amdsev-efi-vc)
> +           amdsev_efi_vc=y
> +           ;;
>         --help)
>             usage
>             ;;
> @@ -204,8 +215,17 @@ elif [ "$processor" = "arm" ]; then
>      processor="cortex-a15"
>  fi
>
> +if [ "$amdsev_efi_vc" ] && [ "$arch" != "x86_64" ]; then
> +    echo "--amdsev-efi-vc requires arch x86_64."
> +    usage
> +fi
> +
>  if [ "$arch" = "i386" ] || [ "$arch" = "x86_64" ]; then
>      testdir=x86
> +    if [ "$amdsev_efi_vc" ] && [ -z "$target_efi" ]; then
> +        echo "--amdsev-efi-vc requires --target-efi."
> +        usage
> +    fi
>  elif [ "$arch" = "arm" ] || [ "$arch" = "arm64" ]; then
>      testdir=arm
>      if [ "$target" = "qemu" ]; then
> @@ -363,6 +383,7 @@ WA_DIVIDE=$wa_divide
>  GENPROTIMG=${GENPROTIMG-genprotimg}
>  HOST_KEY_DOCUMENT=$host_key_document
>  TARGET_EFI=$target_efi
> +AMDSEV_EFI_VC=$amdsev_efi_vc
>  GEN_SE_HEADER=$gen_se_header
>  EOF
>  if [ "$arch" = "arm" ] || [ "$arch" = "arm64" ]; then
> diff --git a/lib/x86/amd_sev.c b/lib/x86/amd_sev.c
> index 6672214..987b59f 100644
> --- a/lib/x86/amd_sev.c
> +++ b/lib/x86/amd_sev.c
> @@ -14,6 +14,7 @@
>  #include "x86/vm.h"
>
>  static unsigned short amd_sev_c_bit_pos;
> +phys_addr_t ghcb_addr;
>
>  bool amd_sev_enabled(void)
>  {
> @@ -100,14 +101,10 @@ efi_status_t setup_amd_sev_es(void)
>
>         /*
>          * Copy UEFI's #VC IDT entry, so KVM-Unit-Tests can reuse it and does
> -        * not have to re-implement a #VC handler. Also update the #VC IDT code
> -        * segment to use KVM-Unit-Tests segments, KERNEL_CS, so that we do not
> +        * not have to re-implement a #VC handler for #VC exceptions before
> +        * GHCB is mapped. Also update the #VC IDT code segment to use
> +        * KVM-Unit-Tests segments, KERNEL_CS, so that we do not
>          * have to copy the UEFI GDT entries into KVM-Unit-Tests GDT.
> -        *
> -        * TODO: Reusing UEFI #VC handler is a temporary workaround to simplify
> -        * the boot up process, the long-term solution is to implement a #VC
> -        * handler in kvm-unit-tests and load it, so that kvm-unit-tests does
> -        * not depend on specific UEFI #VC handler implementation.
>          */
>         sidt(&idtr);
>         idt = (idt_entry_t *)idtr.base;
> @@ -126,7 +123,7 @@ void setup_ghcb_pte(pgd_t *page_table)
>          * function searches GHCB's L1 pte, creates corresponding L1 ptes if not
>          * found, and unsets the c-bit of GHCB's L1 pte.
>          */
> -       phys_addr_t ghcb_addr, ghcb_base_addr;
> +       phys_addr_t ghcb_base_addr;
>         pteval_t *pte;
>
>         /* Read the current GHCB page addr */
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index 6a10f84..afbacf3 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -54,6 +54,7 @@ efi_status_t setup_amd_sev(void);
>  bool amd_sev_es_enabled(void);
>  efi_status_t setup_amd_sev_es(void);
>  void setup_ghcb_pte(pgd_t *page_table);
> +void handle_sev_es_vc(struct ex_regs *regs);
>
>  unsigned long long get_amd_sev_c_bit_mask(void);
>  unsigned long long get_amd_sev_addr_upperbound(void);
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> new file mode 100644
> index 0000000..8226121
> --- /dev/null
> +++ b/lib/x86/amd_sev_vc.c
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include "amd_sev.h"
> +
> +extern phys_addr_t ghcb_addr;
> +
> +void handle_sev_es_vc(struct ex_regs *regs)
> +{
> +       struct ghcb *ghcb = (struct ghcb *) ghcb_addr;
> +       if (!ghcb) {
> +               /* TODO: kill guest */
> +               return;
> +       }
> +}
> diff --git a/lib/x86/desc.c b/lib/x86/desc.c
> index 16b7256..73aa866 100644
> --- a/lib/x86/desc.c
> +++ b/lib/x86/desc.c
> @@ -3,6 +3,9 @@
>  #include "processor.h"
>  #include <setjmp.h>
>  #include "apic-defs.h"
> +#ifdef TARGET_EFI
> +#include "amd_sev.h"
> +#endif
>
>  /* Boot-related data structures */
>
> @@ -228,6 +231,9 @@ EX_E(ac, 17);
>  EX(mc, 18);
>  EX(xm, 19);
>  EX_E(cp, 21);
> +#ifdef TARGET_EFI
> +EX_E(vc, 29);
> +#endif
>
>  asm (".pushsection .text \n\t"
>       "__handle_exception: \n\t"
> @@ -293,6 +299,15 @@ void setup_idt(void)
>      handle_exception(13, check_exception_table);
>  }
>
> +void setup_amd_sev_es_vc(void)
> +{
> +       if (!amd_sev_es_enabled())
> +               return;
> +
> +       set_idt_entry(29, &vc_fault, 0);
> +       handle_exception(29, handle_sev_es_vc);
> +}
> +
>  unsigned exception_vector(void)
>  {
>      unsigned char vector;
> diff --git a/lib/x86/desc.h b/lib/x86/desc.h
> index 9b81da0..6d95ab3 100644
> --- a/lib/x86/desc.h
> +++ b/lib/x86/desc.h
> @@ -224,6 +224,7 @@ void set_intr_alt_stack(int e, void *fn);
>  void print_current_tss_info(void);
>  handler handle_exception(u8 v, handler fn);
>  void unhandled_exception(struct ex_regs *regs, bool cpu);
> +void setup_amd_sev_es_vc(void);
>
>  bool test_for_exception(unsigned int ex, void (*trigger_func)(void *data),
>                         void *data);
> diff --git a/lib/x86/setup.c b/lib/x86/setup.c
> index bbd3468..9de946b 100644
> --- a/lib/x86/setup.c
> +++ b/lib/x86/setup.c
> @@ -327,6 +327,14 @@ efi_status_t setup_efi(efi_bootinfo_t *efi_bootinfo)
>         smp_init();
>         setup_page_table();
>
> +#ifndef AMDSEV_EFI_VC
> +       /*
> +        * Switch away from the UEFI-installed #VC handler.
> +        * GHCB has already been mapped at this point.
> +        */
> +       setup_amd_sev_es_vc();
> +#endif /* AMDSEV_EFI_VC */
> +
>         return EFI_SUCCESS;
>  }
>
> diff --git a/x86/Makefile.common b/x86/Makefile.common
> index ff02d98..ae426aa 100644
> --- a/x86/Makefile.common
> +++ b/x86/Makefile.common
> @@ -24,6 +24,7 @@ cflatobjs += lib/x86/fault_test.o
>  cflatobjs += lib/x86/delay.o
>  ifeq ($(TARGET_EFI),y)
>  cflatobjs += lib/x86/amd_sev.o
> +cflatobjs += lib/x86/amd_sev_vc.o
>  cflatobjs += lib/efi.o
>  cflatobjs += x86/efi/reloc_x86_64.o
>  endif
> --
> 2.32.0
>

Reviewed-by: Marc Orr <marcorr@google.com>


* Re: [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux Varad Gautam
@ 2022-02-12 17:42   ` Marc Orr
  2022-02-24  9:14     ` Varad Gautam
  0 siblings, 1 reply; 25+ messages in thread
From: Marc Orr @ 2022-02-12 17:42 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Processing #VC exceptions on AMD SEV-ES requires instruction decoding
> logic to set up the right GHCB state before exiting to the host.
>
> Pull in the instruction decoder from Linux for this purpose.
>
> Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/insn/inat-tables.c | 1566 ++++++++++++++++++++++++++++++++++++
>  lib/x86/insn/inat.c        |   86 ++
>  lib/x86/insn/inat.h        |  233 ++++++
>  lib/x86/insn/inat_types.h  |   18 +
>  lib/x86/insn/insn.c        |  778 ++++++++++++++++++
>  lib/x86/insn/insn.h        |  280 +++++++
>  x86/Makefile.common        |    2 +
>  7 files changed, 2963 insertions(+)
>  create mode 100644 lib/x86/insn/inat-tables.c

In Linux, this file is generated. Why not take the scripts to generate
it -- rather than the generated file?

>  create mode 100644 lib/x86/insn/inat.c
>  create mode 100644 lib/x86/insn/inat.h
>  create mode 100644 lib/x86/insn/inat_types.h
>  create mode 100644 lib/x86/insn/insn.c
>  create mode 100644 lib/x86/insn/insn.h

I diffed all of these files against their counterparts in Linus' tree
at SHA1 64222515138e. I saw differences for insn.c and insn.h. Is that
intended?

Also, should we add a README to this directory to explain that the
code was obtained from upstream, how this was done, and when/how to
update it?


* Re: [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers from Linux
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers " Varad Gautam
@ 2022-02-12 19:09   ` Marc Orr
  2022-02-24  9:17     ` Varad Gautam
  0 siblings, 1 reply; 25+ messages in thread
From: Marc Orr @ 2022-02-12 19:09 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
>
> Suppress -Waddress-of-packed-member to allow taking addresses on struct
> ghcb / struct vmcb_save_area fields.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev.h   | 106 ++++++++++++++++++++++++++++++++++++++++++++
>  lib/x86/svm.h       |  37 ++++++++++++++++
>  x86/Makefile.x86_64 |   1 +
>  3 files changed, 144 insertions(+)
>
> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
> index afbacf3..ed71c18 100644
> --- a/lib/x86/amd_sev.h
> +++ b/lib/x86/amd_sev.h
> @@ -18,6 +18,49 @@
>  #include "desc.h"
>  #include "asm/page.h"
>  #include "efi.h"
> +#include "processor.h"
> +#include "insn/insn.h"
> +#include "svm.h"
> +
> +struct __attribute__ ((__packed__)) ghcb {
> +       struct vmcb_save_area save;
> +       u8 reserved_save[2048 - sizeof(struct vmcb_save_area)];
> +
> +       u8 shared_buffer[2032];
> +
> +       u8 reserved_1[10];
> +       u16 protocol_version;   /* negotiated SEV-ES/GHCB protocol version */
> +       u32 ghcb_usage;
> +};
> +
> +/* SEV definitions from linux's include/asm/sev.h */

nit: "include/asm/sev.h" should be "arch/x86/include/asm/sev.h".

Also, while I tend to like verbose comments more than most people, it
might be best to skip this one: once this code diverges from Linux, a
stale reference is just going to cause confusion.

> +#define GHCB_PROTO_OUR         0x0001UL
> +#define GHCB_PROTOCOL_MAX      1ULL
> +#define GHCB_DEFAULT_USAGE     0ULL
> +
> +#define        VMGEXIT()                       { asm volatile("rep; vmmcall\n\r"); }
> +
> +enum es_result {
> +       ES_OK,                  /* All good */
> +       ES_UNSUPPORTED,         /* Requested operation not supported */
> +       ES_VMM_ERROR,           /* Unexpected state from the VMM */
> +       ES_DECODE_FAILED,       /* Instruction decoding failed */
> +       ES_EXCEPTION,           /* Instruction caused exception */
> +       ES_RETRY,               /* Retry instruction emulation */
> +};
> +
> +struct es_fault_info {
> +       unsigned long vector;
> +       unsigned long error_code;
> +       unsigned long cr2;
> +};
> +
> +/* ES instruction emulation context */
> +struct es_em_ctxt {
> +       struct ex_regs *regs;
> +       struct insn insn;
> +       struct es_fault_info fi;
> +};
>
>  /*
>   * AMD Programmer's Manual Volume 3
> @@ -59,6 +102,69 @@ void handle_sev_es_vc(struct ex_regs *regs);
>  unsigned long long get_amd_sev_c_bit_mask(void);
>  unsigned long long get_amd_sev_addr_upperbound(void);
>
> +static int _test_bit(int nr, const volatile unsigned long *addr)
> +{
> +       const volatile unsigned long *word = addr + BIT_WORD(nr);
> +       unsigned long mask = BIT_MASK(nr);
> +
> +       return (*word & mask) != 0;
> +}

This looks like it's copy/pasted from lib/arm/bitops.c? Maybe it's
worth moving this helper into a platform-independent bitops library.

Alternatively, we could add an x86-specific test_bit implementation to
lib/x86/processor.h, where `set_bit()` is defined.

> +
> +/* GHCB Accessor functions from Linux's include/asm/svm.h */
> +
> +#define GHCB_BITMAP_IDX(field)                                                 \
> +       (offsetof(struct vmcb_save_area, field) / sizeof(u64))
> +
> +#define DEFINE_GHCB_ACCESSORS(field)                                           \
> +       static inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb)     \
> +       {                                                                       \
> +               return _test_bit(GHCB_BITMAP_IDX(field),                                \
> +                               (unsigned long *)&ghcb->save.valid_bitmap);     \
> +       }                                                                       \
> +                                                                               \
> +       static inline u64 ghcb_get_##field(struct ghcb *ghcb)                   \
> +       {                                                                       \
> +               return ghcb->save.field;                                        \
> +       }                                                                       \
> +                                                                               \
> +       static inline u64 ghcb_get_##field##_if_valid(struct ghcb *ghcb)        \
> +       {                                                                       \
> +               return ghcb_##field##_is_valid(ghcb) ? ghcb->save.field : 0;    \
> +       }                                                                       \
> +                                                                               \
> +       static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value)       \
> +       {                                                                       \
> +               set_bit(GHCB_BITMAP_IDX(field),                         \
> +                         (u8 *)&ghcb->save.valid_bitmap);              \
> +               ghcb->save.field = value;                                       \
> +       }
> +
> +DEFINE_GHCB_ACCESSORS(cpl)
> +DEFINE_GHCB_ACCESSORS(rip)
> +DEFINE_GHCB_ACCESSORS(rsp)
> +DEFINE_GHCB_ACCESSORS(rax)
> +DEFINE_GHCB_ACCESSORS(rcx)
> +DEFINE_GHCB_ACCESSORS(rdx)
> +DEFINE_GHCB_ACCESSORS(rbx)
> +DEFINE_GHCB_ACCESSORS(rbp)
> +DEFINE_GHCB_ACCESSORS(rsi)
> +DEFINE_GHCB_ACCESSORS(rdi)
> +DEFINE_GHCB_ACCESSORS(r8)
> +DEFINE_GHCB_ACCESSORS(r9)
> +DEFINE_GHCB_ACCESSORS(r10)
> +DEFINE_GHCB_ACCESSORS(r11)
> +DEFINE_GHCB_ACCESSORS(r12)
> +DEFINE_GHCB_ACCESSORS(r13)
> +DEFINE_GHCB_ACCESSORS(r14)
> +DEFINE_GHCB_ACCESSORS(r15)
> +DEFINE_GHCB_ACCESSORS(sw_exit_code)
> +DEFINE_GHCB_ACCESSORS(sw_exit_info_1)
> +DEFINE_GHCB_ACCESSORS(sw_exit_info_2)
> +DEFINE_GHCB_ACCESSORS(sw_scratch)
> +DEFINE_GHCB_ACCESSORS(xcr0)
> +
> +#define MSR_AMD64_SEV_ES_GHCB          0xc0010130

Should this go in lib/x86/msr.h?

> +
>  #endif /* TARGET_EFI */
>
>  #endif /* _X86_AMD_SEV_H_ */
> diff --git a/lib/x86/svm.h b/lib/x86/svm.h
> index f74b13a..f046455 100644
> --- a/lib/x86/svm.h
> +++ b/lib/x86/svm.h
> @@ -197,6 +197,42 @@ struct __attribute__ ((__packed__)) vmcb_save_area {
>         u64 br_to;
>         u64 last_excp_from;
>         u64 last_excp_to;

In upstream Linux @ 64222515138e, above the save area, there was a
change made for SEV-ES. See below. Maybe we should go ahead and pull
this change from Linux while we're here adding the VMSA.

kvm-unit-tests, with this patch applied:

172         u8 reserved_3[112];
173         u64 cr4;

Linux @ 64222515138e:

245         u8 reserved_3[104];
246         u64 xss;                /* Valid for SEV-ES only */
247         u64 cr4;

> +
> +       /*
> +        * The following part of the save area is valid only for
> +        * SEV-ES guests when referenced through the GHCB or for
> +        * saving to the host save area.
> +        */
> +       u8 reserved_7[72];
> +       u32 spec_ctrl;          /* Guest version of SPEC_CTRL at 0x2E0 */
> +       u8 reserved_7b[4];
> +       u32 pkru;
> +       u8 reserved_7a[20];
> +       u64 reserved_8;         /* rax already available at 0x01f8 */
> +       u64 rcx;
> +       u64 rdx;
> +       u64 rbx;
> +       u64 reserved_9;         /* rsp already available at 0x01d8 */
> +       u64 rbp;
> +       u64 rsi;
> +       u64 rdi;
> +       u64 r8;
> +       u64 r9;
> +       u64 r10;
> +       u64 r11;
> +       u64 r12;
> +       u64 r13;
> +       u64 r14;
> +       u64 r15;
> +       u8 reserved_10[16];
> +       u64 sw_exit_code;
> +       u64 sw_exit_info_1;
> +       u64 sw_exit_info_2;
> +       u64 sw_scratch;
> +       u8 reserved_11[56];
> +       u64 xcr0;
> +       u8 valid_bitmap[16];
> +       u64 x87_state_gpa;
>  };
>
>  struct __attribute__ ((__packed__)) vmcb {
> @@ -297,6 +333,7 @@ struct __attribute__ ((__packed__)) vmcb {
>  #define        SVM_EXIT_WRITE_DR6      0x036
>  #define        SVM_EXIT_WRITE_DR7      0x037
>  #define SVM_EXIT_EXCP_BASE      0x040
> +#define SVM_EXIT_LAST_EXCP     0x05f

nit: There is a spacing issue here. When this patch is applied, 0x05f
is not aligned with the constants above and below.

>  #define SVM_EXIT_INTR          0x060
>  #define SVM_EXIT_NMI           0x061
>  #define SVM_EXIT_SMI           0x062
> diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
> index a3cb75a..7d3eb53 100644
> --- a/x86/Makefile.x86_64
> +++ b/x86/Makefile.x86_64
> @@ -13,6 +13,7 @@ endif
>
>  fcf_protection_full := $(call cc-option, -fcf-protection=full,)
>  COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full)
> +COMMON_CFLAGS += -Wno-address-of-packed-member
>
>  cflatobjs += lib/x86/setjmp64.o
>  cflatobjs += lib/x86/intel-iommu.o
> --
> 2.32.0
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing Varad Gautam
@ 2022-02-12 20:54   ` Marc Orr
  2022-02-24  9:32     ` Varad Gautam
  0 siblings, 1 reply; 25+ messages in thread
From: Marc Orr @ 2022-02-12 20:54 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Lay the groundwork for processing #VC exceptions in the handler.
> This includes clearing the GHCB, decoding the insn that triggered
> this #VC, and continuing execution after the exception has been
> processed.

This description does not mention that this code is copied from Linux.
Should we have a comment in this patch description, similar to the
other patches?

Also, in general, I wonder if we need to mention where this code came
from in a comment header at the top of the file.

>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev_vc.c | 78 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 78 insertions(+)
>
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> index 8226121..142f2cd 100644
> --- a/lib/x86/amd_sev_vc.c
> +++ b/lib/x86/amd_sev_vc.c
> @@ -1,14 +1,92 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
>
>  #include "amd_sev.h"
> +#include "svm.h"
>
>  extern phys_addr_t ghcb_addr;
>
> +static void vc_ghcb_invalidate(struct ghcb *ghcb)
> +{
> +       ghcb->save.sw_exit_code = 0;
> +       memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
> +}
> +
> +static bool vc_decoding_needed(unsigned long exit_code)
> +{
> +       /* Exceptions don't require to decode the instruction */
> +       return !(exit_code >= SVM_EXIT_EXCP_BASE &&
> +                exit_code <= SVM_EXIT_LAST_EXCP);
> +}
> +
> +static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
> +{
> +       unsigned char buffer[MAX_INSN_SIZE];
> +       int ret;
> +
> +       memcpy(buffer, (unsigned char *)ctxt->regs->rip, MAX_INSN_SIZE);
> +
> +       ret = insn_decode(&ctxt->insn, buffer, MAX_INSN_SIZE, INSN_MODE_64);
> +       if (ret < 0)
> +               return ES_DECODE_FAILED;
> +       else
> +               return ES_OK;
> +}
> +
> +static enum es_result vc_init_em_ctxt(struct es_em_ctxt *ctxt,
> +                                     struct ex_regs *regs,
> +                                     unsigned long exit_code)
> +{
> +       enum es_result ret = ES_OK;
> +
> +       memset(ctxt, 0, sizeof(*ctxt));
> +       ctxt->regs = regs;
> +
> +       if (vc_decoding_needed(exit_code))
> +               ret = vc_decode_insn(ctxt);
> +
> +       return ret;
> +}
> +
> +static void vc_finish_insn(struct es_em_ctxt *ctxt)
> +{
> +       ctxt->regs->rip += ctxt->insn.length;
> +}
> +
> +static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
> +                                        struct ghcb *ghcb,
> +                                        unsigned long exit_code)
> +{
> +       enum es_result result;
> +
> +       switch (exit_code) {
> +       default:
> +               /*
> +                * Unexpected #VC exception
> +                */
> +               result = ES_UNSUPPORTED;
> +       }
> +
> +       return result;
> +}
> +
>  void handle_sev_es_vc(struct ex_regs *regs)
>  {
>         struct ghcb *ghcb = (struct ghcb *) ghcb_addr;
> +       unsigned long exit_code = regs->error_code;
> +       struct es_em_ctxt ctxt;
> +       enum es_result result;
> +
>         if (!ghcb) {
>                 /* TODO: kill guest */
>                 return;
>         }
> +
> +       vc_ghcb_invalidate(ghcb);
> +       result = vc_init_em_ctxt(&ctxt, regs, exit_code);
> +       if (result == ES_OK)
> +               result = vc_handle_exitcode(&ctxt, ghcb, exit_code);
> +       if (result == ES_OK)
> +               vc_finish_insn(&ctxt);

Should we print an error if the result is not `ES_OK`, like the
function `vc_raw_handle_exception()` does in Linux? Otherwise, this
silent failure is going to be very confusing to whoever runs into it.

> +
> +       return;
>  }
> --
> 2.32.0
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/ Varad Gautam
@ 2022-02-12 21:12   ` Marc Orr
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Orr @ 2022-02-12 21:12 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Processing CPUID #VC for AMD SEV-ES requires copying xcr0 into GHCB.
> Move the xsave read/write helpers used by xsave testcase to lib/x86
> to share as common code.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/xsave.c     | 37 +++++++++++++++++++++++++++++++++++++
>  lib/x86/xsave.h     | 16 ++++++++++++++++
>  x86/Makefile.common |  1 +
>  x86/xsave.c         | 43 +------------------------------------------
>  4 files changed, 55 insertions(+), 42 deletions(-)
>  create mode 100644 lib/x86/xsave.c
>  create mode 100644 lib/x86/xsave.h
>
> diff --git a/lib/x86/xsave.c b/lib/x86/xsave.c
> new file mode 100644
> index 0000000..1c0f16e
> --- /dev/null
> +++ b/lib/x86/xsave.c
> @@ -0,0 +1,37 @@
> +#include "libcflat.h"
> +#include "xsave.h"
> +#include "processor.h"
> +
> +int xgetbv_checking(u32 index, u64 *result)
> +{
> +    u32 eax, edx;
> +
> +    asm volatile(ASM_TRY("1f")
> +            ".byte 0x0f,0x01,0xd0\n\t" /* xgetbv */
> +            "1:"
> +            : "=a" (eax), "=d" (edx)
> +            : "c" (index));
> +    *result = eax + ((u64)edx << 32);
> +    return exception_vector();
> +}
> +
> +int xsetbv_checking(u32 index, u64 value)
> +{
> +    u32 eax = value;
> +    u32 edx = value >> 32;
> +
> +    asm volatile(ASM_TRY("1f")
> +            ".byte 0x0f,0x01,0xd1\n\t" /* xsetbv */
> +            "1:"
> +            : : "a" (eax), "d" (edx), "c" (index));
> +    return exception_vector();
> +}
> +
> +uint64_t get_supported_xcr0(void)
> +{
> +    struct cpuid r;
> +    r = cpuid_indexed(0xd, 0);
> +    printf("eax %x, ebx %x, ecx %x, edx %x\n",
> +            r.a, r.b, r.c, r.d);
> +    return r.a + ((u64)r.d << 32);
> +}
> diff --git a/lib/x86/xsave.h b/lib/x86/xsave.h
> new file mode 100644
> index 0000000..f1851a3
> --- /dev/null
> +++ b/lib/x86/xsave.h
> @@ -0,0 +1,16 @@
> +#ifndef _X86_XSAVE_H_
> +#define _X86_XSAVE_H_
> +
> +#define X86_CR4_OSXSAVE                        0x00040000
> +#define XCR_XFEATURE_ENABLED_MASK       0x00000000
> +#define XCR_XFEATURE_ILLEGAL_MASK       0x00000010
> +
> +#define XSTATE_FP       0x1
> +#define XSTATE_SSE      0x2
> +#define XSTATE_YMM      0x4
> +
> +int xgetbv_checking(u32 index, u64 *result);
> +int xsetbv_checking(u32 index, u64 value);
> +uint64_t get_supported_xcr0(void);
> +
> +#endif
> diff --git a/x86/Makefile.common b/x86/Makefile.common
> index 2496d81..aa30948 100644
> --- a/x86/Makefile.common
> +++ b/x86/Makefile.common
> @@ -22,6 +22,7 @@ cflatobjs += lib/x86/acpi.o
>  cflatobjs += lib/x86/stack.o
>  cflatobjs += lib/x86/fault_test.o
>  cflatobjs += lib/x86/delay.o
> +cflatobjs += lib/x86/xsave.o
>  ifeq ($(TARGET_EFI),y)
>  cflatobjs += lib/x86/amd_sev.o
>  cflatobjs += lib/x86/amd_sev_vc.o
> diff --git a/x86/xsave.c b/x86/xsave.c
> index 892bf56..bd8fe11 100644
> --- a/x86/xsave.c
> +++ b/x86/xsave.c
> @@ -1,6 +1,7 @@
>  #include "libcflat.h"
>  #include "desc.h"
>  #include "processor.h"
> +#include "xsave.h"
>
>  #ifdef __x86_64__
>  #define uint64_t unsigned long
> @@ -8,48 +9,6 @@
>  #define uint64_t unsigned long long
>  #endif
>
> -static int xgetbv_checking(u32 index, u64 *result)
> -{
> -    u32 eax, edx;
> -
> -    asm volatile(ASM_TRY("1f")
> -            ".byte 0x0f,0x01,0xd0\n\t" /* xgetbv */
> -            "1:"
> -            : "=a" (eax), "=d" (edx)
> -            : "c" (index));
> -    *result = eax + ((u64)edx << 32);
> -    return exception_vector();
> -}
> -
> -static int xsetbv_checking(u32 index, u64 value)
> -{
> -    u32 eax = value;
> -    u32 edx = value >> 32;
> -
> -    asm volatile(ASM_TRY("1f")
> -            ".byte 0x0f,0x01,0xd1\n\t" /* xsetbv */
> -            "1:"
> -            : : "a" (eax), "d" (edx), "c" (index));
> -    return exception_vector();
> -}
> -
> -static uint64_t get_supported_xcr0(void)
> -{
> -    struct cpuid r;
> -    r = cpuid_indexed(0xd, 0);
> -    printf("eax %x, ebx %x, ecx %x, edx %x\n",
> -            r.a, r.b, r.c, r.d);
> -    return r.a + ((u64)r.d << 32);
> -}
> -
> -#define X86_CR4_OSXSAVE                        0x00040000
> -#define XCR_XFEATURE_ENABLED_MASK       0x00000000
> -#define XCR_XFEATURE_ILLEGAL_MASK       0x00000010
> -
> -#define XSTATE_FP       0x1
> -#define XSTATE_SSE      0x2
> -#define XSTATE_YMM      0x4
> -
>  static void test_xsave(void)
>  {
>      unsigned long cr4;
> --
> 2.32.0
>

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC Varad Gautam
@ 2022-02-12 21:32   ` Marc Orr
  2022-02-24  9:41     ` Varad Gautam
  0 siblings, 1 reply; 25+ messages in thread
From: Marc Orr @ 2022-02-12 21:32 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Using Linux's CPUID #VC processing logic.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev_vc.c | 98 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 98 insertions(+)
>
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> index 142f2cd..9ee67c0 100644
> --- a/lib/x86/amd_sev_vc.c
> +++ b/lib/x86/amd_sev_vc.c
> @@ -2,6 +2,7 @@
>
>  #include "amd_sev.h"
>  #include "svm.h"
> +#include "x86/xsave.h"
>
>  extern phys_addr_t ghcb_addr;
>
> @@ -52,6 +53,100 @@ static void vc_finish_insn(struct es_em_ctxt *ctxt)
>         ctxt->regs->rip += ctxt->insn.length;
>  }
>
> +static inline u64 lower_bits(u64 val, unsigned int bits)
> +{
> +       u64 mask = (1ULL << bits) - 1;
> +
> +       return (val & mask);
> +}

This isn't used in this patch. I guess it ends up being used later, in
patch 9: "x86: AMD SEV-ES: Handle IOIO #VC". Let's introduce it there
if we're going to put it in this file. Though, again, maybe it's worth
creating a platform agnostic bit library, and put this and
`_test_bit()` (introduced in a previous patch) there.

> +
> +static inline void sev_es_wr_ghcb_msr(u64 val)
> +{
> +       wrmsr(MSR_AMD64_SEV_ES_GHCB, val);
> +}
> +
> +static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
> +                                         struct es_em_ctxt *ctxt,
> +                                         u64 exit_code, u64 exit_info_1,
> +                                         u64 exit_info_2)
> +{
> +       enum es_result ret;
> +
> +       /* Fill in protocol and format specifiers */
> +       ghcb->protocol_version = GHCB_PROTOCOL_MAX;
> +       ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
> +
> +       ghcb_set_sw_exit_code(ghcb, exit_code);
> +       ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
> +       ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
> +
> +       sev_es_wr_ghcb_msr(__pa(ghcb));
> +       VMGEXIT();
> +
> +       if ((ghcb->save.sw_exit_info_1 & 0xffffffff) == 1) {
> +               u64 info = ghcb->save.sw_exit_info_2;
> +               unsigned long v;
> +
> +               info = ghcb->save.sw_exit_info_2;

This line seems redundant, since `info` is already initialized to this
value when it's declared, two lines above. That being said, I see this
is how the code is in Linux as well. I wonder if it was done like this
by accident.

> +               v = info & SVM_EVTINJ_VEC_MASK;
> +
> +               /* Check if exception information from hypervisor is sane. */
> +               if ((info & SVM_EVTINJ_VALID) &&
> +                   ((v == GP_VECTOR) || (v == UD_VECTOR)) &&
> +                   ((info & SVM_EVTINJ_TYPE_MASK) == SVM_EVTINJ_TYPE_EXEPT)) {
> +                       ctxt->fi.vector = v;
> +                       if (info & SVM_EVTINJ_VALID_ERR)
> +                               ctxt->fi.error_code = info >> 32;
> +                       ret = ES_EXCEPTION;
> +               } else {
> +                       ret = ES_VMM_ERROR;
> +               }
> +       } else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
> +               ret = ES_VMM_ERROR;
> +       } else {
> +               ret = ES_OK;
> +       }
> +
> +       return ret;
> +}
> +
> +static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
> +                                     struct es_em_ctxt *ctxt)
> +{
> +       struct ex_regs *regs = ctxt->regs;
> +       u32 cr4 = read_cr4();
> +       enum es_result ret;
> +
> +       ghcb_set_rax(ghcb, regs->rax);
> +       ghcb_set_rcx(ghcb, regs->rcx);
> +
> +       if (cr4 & X86_CR4_OSXSAVE) {
> +               /* Safe to read xcr0 */
> +               u64 xcr0;
> +               xgetbv_checking(XCR_XFEATURE_ENABLED_MASK, &xcr0);
> +               ghcb_set_xcr0(ghcb, xcr0);
> +       } else
> +               /* xgetbv will cause #GP - use reset value for xcr0 */
> +               ghcb_set_xcr0(ghcb, 1);

nit: Consider adding curly braces to the else branch, so that it
matches the if branch.

> +
> +       ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
> +       if (ret != ES_OK)
> +               return ret;
> +
> +       if (!(ghcb_rax_is_valid(ghcb) &&
> +             ghcb_rbx_is_valid(ghcb) &&
> +             ghcb_rcx_is_valid(ghcb) &&
> +             ghcb_rdx_is_valid(ghcb)))
> +               return ES_VMM_ERROR;
> +
> +       regs->rax = ghcb->save.rax;
> +       regs->rbx = ghcb->save.rbx;
> +       regs->rcx = ghcb->save.rcx;
> +       regs->rdx = ghcb->save.rdx;
> +
> +       return ES_OK;
> +}
> +
>  static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>                                          struct ghcb *ghcb,
>                                          unsigned long exit_code)
> @@ -59,6 +154,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>         enum es_result result;
>
>         switch (exit_code) {
> +       case SVM_EXIT_CPUID:
> +               result = vc_handle_cpuid(ghcb, ctxt);
> +               break;
>         default:
>                 /*
>                  * Unexpected #VC exception
> --
> 2.32.0
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC Varad Gautam
@ 2022-02-12 21:49   ` Marc Orr
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Orr @ 2022-02-12 21:49 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Using Linux's MSR #VC processing logic.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev_vc.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
>
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> index 9ee67c0..401cb29 100644
> --- a/lib/x86/amd_sev_vc.c
> +++ b/lib/x86/amd_sev_vc.c
> @@ -147,6 +147,31 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
>         return ES_OK;
>  }
>
> +static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
> +{
> +       struct ex_regs *regs = ctxt->regs;
> +       enum es_result ret;
> +       u64 exit_info_1;
> +
> +       /* Is it a WRMSR? */
> +       exit_info_1 = (ctxt->insn.opcode.bytes[1] == 0x30) ? 1 : 0;
> +
> +       ghcb_set_rcx(ghcb, regs->rcx);
> +       if (exit_info_1) {
> +               ghcb_set_rax(ghcb, regs->rax);
> +               ghcb_set_rdx(ghcb, regs->rdx);
> +       }
> +
> +       ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0);
> +
> +       if ((ret == ES_OK) && (!exit_info_1)) {
> +               regs->rax = ghcb->save.rax;
> +               regs->rdx = ghcb->save.rdx;
> +       }
> +
> +       return ret;
> +}
> +
>  static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>                                          struct ghcb *ghcb,
>                                          unsigned long exit_code)
> @@ -157,6 +182,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>         case SVM_EXIT_CPUID:
>                 result = vc_handle_cpuid(ghcb, ctxt);
>                 break;
> +       case SVM_EXIT_MSR:
> +               result = vc_handle_msr(ghcb, ctxt);
> +               break;
>         default:
>                 /*
>                  * Unexpected #VC exception
> --
> 2.32.0
>

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC Varad Gautam
@ 2022-02-12 23:03   ` Marc Orr
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Orr @ 2022-02-12 23:03 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Using Linux's IOIO #VC processing logic.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev_vc.c | 146 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 146 insertions(+)
>
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> index 401cb29..88c95e1 100644
> --- a/lib/x86/amd_sev_vc.c
> +++ b/lib/x86/amd_sev_vc.c
> @@ -172,6 +172,149 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
>         return ret;
>  }
>
> +#define IOIO_TYPE_STR  BIT(2)
> +#define IOIO_TYPE_IN   1
> +#define IOIO_TYPE_INS  (IOIO_TYPE_IN | IOIO_TYPE_STR)
> +#define IOIO_TYPE_OUT  0
> +#define IOIO_TYPE_OUTS (IOIO_TYPE_OUT | IOIO_TYPE_STR)
> +
> +#define IOIO_REP       BIT(3)
> +
> +#define IOIO_ADDR_64   BIT(9)
> +#define IOIO_ADDR_32   BIT(8)
> +#define IOIO_ADDR_16   BIT(7)
> +
> +#define IOIO_DATA_32   BIT(6)
> +#define IOIO_DATA_16   BIT(5)
> +#define IOIO_DATA_8    BIT(4)
> +
> +#define IOIO_SEG_ES    (0 << 10)
> +#define IOIO_SEG_DS    (3 << 10)
> +
> +static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
> +{
> +       struct insn *insn = &ctxt->insn;
> +       *exitinfo = 0;
> +
> +       switch (insn->opcode.bytes[0]) {
> +       /* INS opcodes */
> +       case 0x6c:
> +       case 0x6d:
> +               *exitinfo |= IOIO_TYPE_INS;
> +               *exitinfo |= IOIO_SEG_ES;
> +               *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
> +               break;
> +
> +       /* OUTS opcodes */
> +       case 0x6e:
> +       case 0x6f:
> +               *exitinfo |= IOIO_TYPE_OUTS;
> +               *exitinfo |= IOIO_SEG_DS;
> +               *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
> +               break;
> +
> +       /* IN immediate opcodes */
> +       case 0xe4:
> +       case 0xe5:
> +               *exitinfo |= IOIO_TYPE_IN;
> +               *exitinfo |= (u8)insn->immediate.value << 16;
> +               break;
> +
> +       /* OUT immediate opcodes */
> +       case 0xe6:
> +       case 0xe7:
> +               *exitinfo |= IOIO_TYPE_OUT;
> +               *exitinfo |= (u8)insn->immediate.value << 16;
> +               break;
> +
> +       /* IN register opcodes */
> +       case 0xec:
> +       case 0xed:
> +               *exitinfo |= IOIO_TYPE_IN;
> +               *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
> +               break;
> +
> +       /* OUT register opcodes */
> +       case 0xee:
> +       case 0xef:
> +               *exitinfo |= IOIO_TYPE_OUT;
> +               *exitinfo |= (ctxt->regs->rdx & 0xffff) << 16;
> +               break;
> +
> +       default:
> +               return ES_DECODE_FAILED;
> +       }
> +
> +       switch (insn->opcode.bytes[0]) {
> +       case 0x6c:
> +       case 0x6e:
> +       case 0xe4:
> +       case 0xe6:
> +       case 0xec:
> +       case 0xee:
> +               /* Single byte opcodes */
> +               *exitinfo |= IOIO_DATA_8;
> +               break;
> +       default:
> +               /* Length determined by instruction parsing */
> +               *exitinfo |= (insn->opnd_bytes == 2) ? IOIO_DATA_16
> +                                                    : IOIO_DATA_32;
> +       }
> +       switch (insn->addr_bytes) {
> +       case 2:
> +               *exitinfo |= IOIO_ADDR_16;
> +               break;
> +       case 4:
> +               *exitinfo |= IOIO_ADDR_32;
> +               break;
> +       case 8:
> +               *exitinfo |= IOIO_ADDR_64;
> +               break;
> +       }
> +
> +       if (insn_has_rep_prefix(insn))
> +               *exitinfo |= IOIO_REP;
> +
> +       return ES_OK;
> +}
> +
> +static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
> +{
> +       struct ex_regs *regs = ctxt->regs;
> +       u64 exit_info_1;
> +       enum es_result ret;
> +
> +       ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
> +       if (ret != ES_OK)
> +               return ret;
> +
> +       if (exit_info_1 & IOIO_TYPE_STR) {
> +               ret = ES_VMM_ERROR;
> +       } else {
> +               /* IN/OUT into/from rAX */
> +
> +               int bits = (exit_info_1 & 0x70) >> 1;
> +               u64 rax = 0;
> +
> +               if (!(exit_info_1 & IOIO_TYPE_IN))
> +                       rax = lower_bits(regs->rax, bits);
> +
> +               ghcb_set_rax(ghcb, rax);
> +
> +               ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, exit_info_1, 0);
> +               if (ret != ES_OK)
> +                       return ret;
> +
> +               if (exit_info_1 & IOIO_TYPE_IN) {
> +                       if (!ghcb_rax_is_valid(ghcb))
> +                               return ES_VMM_ERROR;
> +                       regs->rax = lower_bits(ghcb->save.rax, bits);
> +               }
> +       }
> +
> +       return ret;
> +}
> +
>  static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>                                          struct ghcb *ghcb,
>                                          unsigned long exit_code)
> @@ -185,6 +328,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>         case SVM_EXIT_MSR:
>                 result = vc_handle_msr(ghcb, ctxt);
>                 break;
> +       case SVM_EXIT_IOIO:
> +               result = vc_handle_ioio(ghcb, ctxt);
> +               break;
>         default:
>                 /*
>                  * Unexpected #VC exception
> --
> 2.32.0
>

Reviewed-by: Marc Orr <marcorr@google.com>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for IOIO #VC
  2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for " Varad Gautam
@ 2022-02-13  1:31   ` Marc Orr
  2022-02-24  9:42     ` Varad Gautam
  0 siblings, 1 reply; 25+ messages in thread
From: Marc Orr @ 2022-02-13  1:31 UTC (permalink / raw)
  To: Varad Gautam
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>
> Using Linux's IOIO #VC processing logic.
>
> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
> ---
>  lib/x86/amd_sev_vc.c | 108 ++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 106 insertions(+), 2 deletions(-)
>
> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
> index 88c95e1..c79d9be 100644
> --- a/lib/x86/amd_sev_vc.c
> +++ b/lib/x86/amd_sev_vc.c
> @@ -278,10 +278,46 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
>         return ES_OK;
>  }
>
> +static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
> +                                         void *src, unsigned char *buf,
> +                                         unsigned int data_size,
> +                                         unsigned int count,
> +                                         bool backwards)
> +{
> +       int i, b = backwards ? -1 : 1;
> +
> +       for (i = 0; i < count; i++) {
> +               void *s = src + (i * data_size * b);
> +               unsigned char *d = buf + (i * data_size);
> +
> +               memcpy(d, s, data_size);
> +       }
> +
> +       return ES_OK;
> +}
> +
> +static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
> +                                          void *dst, unsigned char *buf,
> +                                          unsigned int data_size,
> +                                          unsigned int count,
> +                                          bool backwards)
> +{
> +       int i, s = backwards ? -1 : 1;
> +
> +       for (i = 0; i < count; i++) {
> +               void *d = dst + (i * data_size * s);
> +               unsigned char *b = buf + (i * data_size);
> +
> +               memcpy(d, b, data_size);
> +       }
> +
> +       return ES_OK;
> +}
> +
>  static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
>  {
>         struct ex_regs *regs = ctxt->regs;
> -       u64 exit_info_1;
> +       u64 exit_info_1, exit_info_2;
>         enum es_result ret;
>
>         ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
> @@ -289,7 +325,75 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
>                 return ret;
>
>         if (exit_info_1 & IOIO_TYPE_STR) {
> -               ret = ES_VMM_ERROR;
> +               /* (REP) INS/OUTS */
> +
> +               bool df = ((regs->rflags & X86_EFLAGS_DF) == X86_EFLAGS_DF);
> +               unsigned int io_bytes, exit_bytes;
> +               unsigned int ghcb_count, op_count;
> +               unsigned long es_base;
> +               u64 sw_scratch;
> +
> +               /*
> +                * For the string variants with rep prefix the amount of in/out
> +                * operations per #VC exception is limited so that the kernel
> +                * has a chance to take interrupts and re-schedule while the
> +                * instruction is emulated.
> +                */
> +               io_bytes   = (exit_info_1 >> 4) & 0x7;
> +               ghcb_count = sizeof(ghcb->shared_buffer) / io_bytes;
> +
> +               op_count    = (exit_info_1 & IOIO_REP) ? regs->rcx : 1;
> +               exit_info_2 = op_count < ghcb_count ? op_count : ghcb_count;
> +               exit_bytes  = exit_info_2 * io_bytes;
> +
> +               es_base = 0;
> +
> +               /* Read bytes of OUTS into the shared buffer */
> +               if (!(exit_info_1 & IOIO_TYPE_IN)) {
> +                       ret = vc_insn_string_read(ctxt,
> +                                              (void *)(es_base + regs->rsi),
> +                                              ghcb->shared_buffer, io_bytes,
> +                                              exit_info_2, df);
> +                       if (ret)
> +                               return ret;
> +               }
> +
> +               /*
> +                * Issue a VMGEXIT to the HV to consume the bytes from the
> +                * shared buffer or to have it write them into the shared buffer
> +                * depending on the instruction: OUTS or INS.
> +                */
> +               sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer);
> +               ghcb_set_sw_scratch(ghcb, sw_scratch);
> +               ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO,
> +                                         exit_info_1, exit_info_2);
> +               if (ret != ES_OK)
> +                       return ret;
> +
> +               /* Read bytes from shared buffer into the guest's destination. */
> +               if (exit_info_1 & IOIO_TYPE_IN) {
> +                       ret = vc_insn_string_write(ctxt,
> +                                                  (void *)(es_base + regs->rdi),
> +                                                  ghcb->shared_buffer, io_bytes,
> +                                                  exit_info_2, df);
> +                       if (ret)
> +                               return ret;
> +
> +                       if (df)
> +                               regs->rdi -= exit_bytes;
> +                       else
> +                               regs->rdi += exit_bytes;
> +               } else {
> +                       if (df)
> +                               regs->rsi -= exit_bytes;
> +                       else
> +                               regs->rsi += exit_bytes;
> +               }
> +
> +               if (exit_info_1 & IOIO_REP)
> +                       regs->rcx -= exit_info_2;
> +
> +               ret = regs->rcx ? ES_RETRY : ES_OK;
>         } else {
>                 /* IN/OUT into/from rAX */
>
> --
> 2.32.0
>

I was able to run both the amd_sev and msr tests under SEV-ES using
this built-in #VC handler on my setup. Obviously, that doesn't
exercise all of this #VC handler code. But I also compared it against
what's in Linux and read through all of it as well. Great job, Varad!

Reviewed-by: Marc Orr <marcorr@google.com>
Tested-by: Marc Orr <marcorr@google.com>


* Re: [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux
  2022-02-12 17:42   ` Marc Orr
@ 2022-02-24  9:14     ` Varad Gautam
  0 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-24  9:14 UTC (permalink / raw)
  To: Marc Orr
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On 2/12/22 6:42 PM, Marc Orr wrote:
> On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>>
>> Processing #VC exceptions on AMD SEV-ES requires instruction decoding
>> logic to set up the right GHCB state before exiting to the host.
>>
>> Pull in the instruction decoder from Linux for this purpose.
>>
>> Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
>>
>> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
>> ---
>>  lib/x86/insn/inat-tables.c | 1566 ++++++++++++++++++++++++++++++++++++
>>  lib/x86/insn/inat.c        |   86 ++
>>  lib/x86/insn/inat.h        |  233 ++++++
>>  lib/x86/insn/inat_types.h  |   18 +
>>  lib/x86/insn/insn.c        |  778 ++++++++++++++++++
>>  lib/x86/insn/insn.h        |  280 +++++++
>>  x86/Makefile.common        |    2 +
>>  7 files changed, 2963 insertions(+)
>>  create mode 100644 lib/x86/insn/inat-tables.c
> 
> In Linux, this file is generated. Why not take the scripts to generate
> it -- rather than the generated file?
> 

Sounds better, I will generate it in v3.

>>  create mode 100644 lib/x86/insn/inat.c
>>  create mode 100644 lib/x86/insn/inat.h
>>  create mode 100644 lib/x86/insn/inat_types.h
>>  create mode 100644 lib/x86/insn/insn.c
>>  create mode 100644 lib/x86/insn/insn.h
> 
> I diffed all of these files against their counterparts in Linus' tree
> at SHA1 64222515138e. I saw differences for insn.c and insn.h. Is that
> intended?
> 

The diff is because I needed to fix up some of the insn decoder code to
build here (e.g., include paths, unavailable definitions). But I see how
that would lead to confusion whenever these files need an update, and
it's better to minimize the diff.

I'll go with taking the insn decoder code as-is from Linux, and try
to keep the required definitions in an additional .h that glues the
decoder to KUT.

> Also, should we add a README to this directory to explain that the
> code was obtained from upstream, how this was done, and when/how to
> update it?
> 

Makes sense.



* Re: [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers from Linux
  2022-02-12 19:09   ` Marc Orr
@ 2022-02-24  9:17     ` Varad Gautam
  0 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-24  9:17 UTC (permalink / raw)
  To: Marc Orr
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On 2/12/22 8:09 PM, Marc Orr wrote:
> On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>>
>> Origin: Linux 64222515138e43da1fcf288f0289ef1020427b87
>>
>> Suppress -Waddress-of-packed-member to allow taking addresses on struct
>> ghcb / struct vmcb_save_area fields.
>>
>> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
>> ---
>>  lib/x86/amd_sev.h   | 106 ++++++++++++++++++++++++++++++++++++++++++++
>>  lib/x86/svm.h       |  37 ++++++++++++++++
>>  x86/Makefile.x86_64 |   1 +
>>  3 files changed, 144 insertions(+)
>>
>> diff --git a/lib/x86/amd_sev.h b/lib/x86/amd_sev.h
>> index afbacf3..ed71c18 100644
>> --- a/lib/x86/amd_sev.h
>> +++ b/lib/x86/amd_sev.h
>> @@ -18,6 +18,49 @@
>>  #include "desc.h"
>>  #include "asm/page.h"
>>  #include "efi.h"
>> +#include "processor.h"
>> +#include "insn/insn.h"
>> +#include "svm.h"
>> +
>> +struct __attribute__ ((__packed__)) ghcb {
>> +       struct vmcb_save_area save;
>> +       u8 reserved_save[2048 - sizeof(struct vmcb_save_area)];
>> +
>> +       u8 shared_buffer[2032];
>> +
>> +       u8 reserved_1[10];
>> +       u16 protocol_version;   /* negotiated SEV-ES/GHCB protocol version */
>> +       u32 ghcb_usage;
>> +};
>> +
>> +/* SEV definitions from linux's include/asm/sev.h */
> 
> nit: "include/asm/sev.h" should be "arch/x86/include/asm/sev.h".
> 
> Also, while I feel that I like verbose comments more than many, it
> might be best to skip this one. Because when this code diverges from
> Linux, it's just going to cause confusion.
> 

Ack, dropping the comment.

>> +#define GHCB_PROTO_OUR         0x0001UL
>> +#define GHCB_PROTOCOL_MAX      1ULL
>> +#define GHCB_DEFAULT_USAGE     0ULL
>> +
>> +#define        VMGEXIT()                       { asm volatile("rep; vmmcall\n\r"); }
>> +
>> +enum es_result {
>> +       ES_OK,                  /* All good */
>> +       ES_UNSUPPORTED,         /* Requested operation not supported */
>> +       ES_VMM_ERROR,           /* Unexpected state from the VMM */
>> +       ES_DECODE_FAILED,       /* Instruction decoding failed */
>> +       ES_EXCEPTION,           /* Instruction caused exception */
>> +       ES_RETRY,               /* Retry instruction emulation */
>> +};
>> +
>> +struct es_fault_info {
>> +       unsigned long vector;
>> +       unsigned long error_code;
>> +       unsigned long cr2;
>> +};
>> +
>> +/* ES instruction emulation context */
>> +struct es_em_ctxt {
>> +       struct ex_regs *regs;
>> +       struct insn insn;
>> +       struct es_fault_info fi;
>> +};
>>
>>  /*
>>   * AMD Programmer's Manual Volume 3
>> @@ -59,6 +102,69 @@ void handle_sev_es_vc(struct ex_regs *regs);
>>  unsigned long long get_amd_sev_c_bit_mask(void);
>>  unsigned long long get_amd_sev_addr_upperbound(void);
>>
>> +static int _test_bit(int nr, const volatile unsigned long *addr)
>> +{
>> +       const volatile unsigned long *word = addr + BIT_WORD(nr);
>> +       unsigned long mask = BIT_MASK(nr);
>> +
>> +       return (*word & mask) != 0;
>> +}
> 
> This looks like it's copy/pasted from lib/arm/bitops.c? Maybe it's
> worth moving this helper into a platform independent bitops library.
> 
> Alternatively, we could add an x86-specific test_bit implementation to
> lib/x86/processor.h, where `set_bit()` is defined.
> 

lib/x86/processor.h sounds like a decent place for both test_bit() and
lower_bits() later.

>> +
>> +/* GHCB Accessor functions from Linux's include/asm/svm.h */
>> +
>> +#define GHCB_BITMAP_IDX(field)                                                 \
>> +       (offsetof(struct vmcb_save_area, field) / sizeof(u64))
>> +
>> +#define DEFINE_GHCB_ACCESSORS(field)                                           \
>> +       static inline bool ghcb_##field##_is_valid(const struct ghcb *ghcb)     \
>> +       {                                                                       \
>> +               return _test_bit(GHCB_BITMAP_IDX(field),                                \
>> +                               (unsigned long *)&ghcb->save.valid_bitmap);     \
>> +       }                                                                       \
>> +                                                                               \
>> +       static inline u64 ghcb_get_##field(struct ghcb *ghcb)                   \
>> +       {                                                                       \
>> +               return ghcb->save.field;                                        \
>> +       }                                                                       \
>> +                                                                               \
>> +       static inline u64 ghcb_get_##field##_if_valid(struct ghcb *ghcb)        \
>> +       {                                                                       \
>> +               return ghcb_##field##_is_valid(ghcb) ? ghcb->save.field : 0;    \
>> +       }                                                                       \
>> +                                                                               \
>> +       static inline void ghcb_set_##field(struct ghcb *ghcb, u64 value)       \
>> +       {                                                                       \
>> +               set_bit(GHCB_BITMAP_IDX(field),                         \
>> +                         (u8 *)&ghcb->save.valid_bitmap);              \
>> +               ghcb->save.field = value;                                       \
>> +       }
>> +
>> +DEFINE_GHCB_ACCESSORS(cpl)
>> +DEFINE_GHCB_ACCESSORS(rip)
>> +DEFINE_GHCB_ACCESSORS(rsp)
>> +DEFINE_GHCB_ACCESSORS(rax)
>> +DEFINE_GHCB_ACCESSORS(rcx)
>> +DEFINE_GHCB_ACCESSORS(rdx)
>> +DEFINE_GHCB_ACCESSORS(rbx)
>> +DEFINE_GHCB_ACCESSORS(rbp)
>> +DEFINE_GHCB_ACCESSORS(rsi)
>> +DEFINE_GHCB_ACCESSORS(rdi)
>> +DEFINE_GHCB_ACCESSORS(r8)
>> +DEFINE_GHCB_ACCESSORS(r9)
>> +DEFINE_GHCB_ACCESSORS(r10)
>> +DEFINE_GHCB_ACCESSORS(r11)
>> +DEFINE_GHCB_ACCESSORS(r12)
>> +DEFINE_GHCB_ACCESSORS(r13)
>> +DEFINE_GHCB_ACCESSORS(r14)
>> +DEFINE_GHCB_ACCESSORS(r15)
>> +DEFINE_GHCB_ACCESSORS(sw_exit_code)
>> +DEFINE_GHCB_ACCESSORS(sw_exit_info_1)
>> +DEFINE_GHCB_ACCESSORS(sw_exit_info_2)
>> +DEFINE_GHCB_ACCESSORS(sw_scratch)
>> +DEFINE_GHCB_ACCESSORS(xcr0)
>> +
>> +#define MSR_AMD64_SEV_ES_GHCB          0xc0010130
> 
> Should this go in lib/x86/msr.h?
> 
>> +
>>  #endif /* TARGET_EFI */
>>
>>  #endif /* _X86_AMD_SEV_H_ */
>> diff --git a/lib/x86/svm.h b/lib/x86/svm.h
>> index f74b13a..f046455 100644
>> --- a/lib/x86/svm.h
>> +++ b/lib/x86/svm.h
>> @@ -197,6 +197,42 @@ struct __attribute__ ((__packed__)) vmcb_save_area {
>>         u64 br_to;
>>         u64 last_excp_from;
>>         u64 last_excp_to;
> 
> In upstream Linux @ 64222515138e, above the save area, there was a
> change made for ES. See below. Maybe we should go ahead pull this
> change from Linux while we're here adding the VMSA.
> 

I'll update this in v3.

> kvm-unit-tests, with this patch applied:
> 
> 172         u8 reserved_3[112];
> 173         u64 cr4;
> 
> Linux @ 64222515138e:
> 
> 245         u8 reserved_3[104];
> 246         u64 xss;                /* Valid for SEV-ES only */
> 247         u64 cr4;
> 
>> +
>> +       /*
>> +        * The following part of the save area is valid only for
>> +        * SEV-ES guests when referenced through the GHCB or for
>> +        * saving to the host save area.
>> +        */
>> +       u8 reserved_7[72];
>> +       u32 spec_ctrl;          /* Guest version of SPEC_CTRL at 0x2E0 */
>> +       u8 reserved_7b[4];
>> +       u32 pkru;
>> +       u8 reserved_7a[20];
>> +       u64 reserved_8;         /* rax already available at 0x01f8 */
>> +       u64 rcx;
>> +       u64 rdx;
>> +       u64 rbx;
>> +       u64 reserved_9;         /* rsp already available at 0x01d8 */
>> +       u64 rbp;
>> +       u64 rsi;
>> +       u64 rdi;
>> +       u64 r8;
>> +       u64 r9;
>> +       u64 r10;
>> +       u64 r11;
>> +       u64 r12;
>> +       u64 r13;
>> +       u64 r14;
>> +       u64 r15;
>> +       u8 reserved_10[16];
>> +       u64 sw_exit_code;
>> +       u64 sw_exit_info_1;
>> +       u64 sw_exit_info_2;
>> +       u64 sw_scratch;
>> +       u8 reserved_11[56];
>> +       u64 xcr0;
>> +       u8 valid_bitmap[16];
>> +       u64 x87_state_gpa;
>>  };
>>
>>  struct __attribute__ ((__packed__)) vmcb {
>> @@ -297,6 +333,7 @@ struct __attribute__ ((__packed__)) vmcb {
>>  #define        SVM_EXIT_WRITE_DR6      0x036
>>  #define        SVM_EXIT_WRITE_DR7      0x037
>>  #define SVM_EXIT_EXCP_BASE      0x040
>> +#define SVM_EXIT_LAST_EXCP     0x05f
> 
> nit: There is a spacing issue here. When this patch is applied, 0x05f
> is not aligned with the constants above and below.
> 

Ack.

>>  #define SVM_EXIT_INTR          0x060
>>  #define SVM_EXIT_NMI           0x061
>>  #define SVM_EXIT_SMI           0x062
>> diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
>> index a3cb75a..7d3eb53 100644
>> --- a/x86/Makefile.x86_64
>> +++ b/x86/Makefile.x86_64
>> @@ -13,6 +13,7 @@ endif
>>
>>  fcf_protection_full := $(call cc-option, -fcf-protection=full,)
>>  COMMON_CFLAGS += -mno-red-zone -mno-sse -mno-sse2 $(fcf_protection_full)
>> +COMMON_CFLAGS += -Wno-address-of-packed-member
>>
>>  cflatobjs += lib/x86/setjmp64.o
>>  cflatobjs += lib/x86/intel-iommu.o
>> --
>> 2.32.0
>>
> 



* Re: [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing
  2022-02-12 20:54   ` Marc Orr
@ 2022-02-24  9:32     ` Varad Gautam
  0 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-24  9:32 UTC (permalink / raw)
  To: Marc Orr
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On 2/12/22 9:54 PM, Marc Orr wrote:
> On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>>
>> Lay the groundwork for processing #VC exceptions in the handler.
>> This includes clearing the GHCB, decoding the insn that triggered
>> this #VC, and continuing execution after the exception has been
>> processed.
> 
> This description does not mention that this code is copied from Linux.
> Should we have a comment in this patch description, similar to the
> other patches?
> 
> Also, in general, I wonder if we need to mention where this code came
> from in a comment header at the top of the file.
> 
>>
>> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
>> ---
>>  lib/x86/amd_sev_vc.c | 78 ++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 78 insertions(+)
>>
>> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
>> index 8226121..142f2cd 100644
>> --- a/lib/x86/amd_sev_vc.c
>> +++ b/lib/x86/amd_sev_vc.c
>> @@ -1,14 +1,92 @@
>>  /* SPDX-License-Identifier: GPL-2.0 */
>>
>>  #include "amd_sev.h"
>> +#include "svm.h"
>>
>>  extern phys_addr_t ghcb_addr;
>>
>> +static void vc_ghcb_invalidate(struct ghcb *ghcb)
>> +{
>> +       ghcb->save.sw_exit_code = 0;
>> +       memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
>> +}
>> +
>> +static bool vc_decoding_needed(unsigned long exit_code)
>> +{
>> +       /* Exceptions don't require to decode the instruction */
>> +       return !(exit_code >= SVM_EXIT_EXCP_BASE &&
>> +                exit_code <= SVM_EXIT_LAST_EXCP);
>> +}
>> +
>> +static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
>> +{
>> +       unsigned char buffer[MAX_INSN_SIZE];
>> +       int ret;
>> +
>> +       memcpy(buffer, (unsigned char *)ctxt->regs->rip, MAX_INSN_SIZE);
>> +
>> +       ret = insn_decode(&ctxt->insn, buffer, MAX_INSN_SIZE, INSN_MODE_64);
>> +       if (ret < 0)
>> +               return ES_DECODE_FAILED;
>> +       else
>> +               return ES_OK;
>> +}
>> +
>> +static enum es_result vc_init_em_ctxt(struct es_em_ctxt *ctxt,
>> +                                     struct ex_regs *regs,
>> +                                     unsigned long exit_code)
>> +{
>> +       enum es_result ret = ES_OK;
>> +
>> +       memset(ctxt, 0, sizeof(*ctxt));
>> +       ctxt->regs = regs;
>> +
>> +       if (vc_decoding_needed(exit_code))
>> +               ret = vc_decode_insn(ctxt);
>> +
>> +       return ret;
>> +}
>> +
>> +static void vc_finish_insn(struct es_em_ctxt *ctxt)
>> +{
>> +       ctxt->regs->rip += ctxt->insn.length;
>> +}
>> +
>> +static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>> +                                        struct ghcb *ghcb,
>> +                                        unsigned long exit_code)
>> +{
>> +       enum es_result result;
>> +
>> +       switch (exit_code) {
>> +       default:
>> +               /*
>> +                * Unexpected #VC exception
>> +                */
>> +               result = ES_UNSUPPORTED;
>> +       }
>> +
>> +       return result;
>> +}
>> +
>>  void handle_sev_es_vc(struct ex_regs *regs)
>>  {
>>         struct ghcb *ghcb = (struct ghcb *) ghcb_addr;
>> +       unsigned long exit_code = regs->error_code;
>> +       struct es_em_ctxt ctxt;
>> +       enum es_result result;
>> +
>>         if (!ghcb) {
>>                 /* TODO: kill guest */
>>                 return;
>>         }
>> +
>> +       vc_ghcb_invalidate(ghcb);
>> +       result = vc_init_em_ctxt(&ctxt, regs, exit_code);
>> +       if (result == ES_OK)
>> +               result = vc_handle_exitcode(&ctxt, ghcb, exit_code);
>> +       if (result == ES_OK)
>> +               vc_finish_insn(&ctxt);
> 
> Should we print an error if the result is not `ES_OK`, like the
> function `vc_raw_handle_exception()` does in Linux? Otherwise, this
> silent failure is going to be very confusing to whoever runs into it.
> 

Changed in v3.

>> +
>> +       return;
>>  }
>> --
>> 2.32.0
>>
> 



* Re: [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC
  2022-02-12 21:32   ` Marc Orr
@ 2022-02-24  9:41     ` Varad Gautam
  0 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-24  9:41 UTC (permalink / raw)
  To: Marc Orr
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp



On 2/12/22 10:32 PM, Marc Orr wrote:
> On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>>
>> Using Linux's CPUID #VC processing logic.
>>
>> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
>> ---
>>  lib/x86/amd_sev_vc.c | 98 ++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 98 insertions(+)
>>
>> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
>> index 142f2cd..9ee67c0 100644
>> --- a/lib/x86/amd_sev_vc.c
>> +++ b/lib/x86/amd_sev_vc.c
>> @@ -2,6 +2,7 @@
>>
>>  #include "amd_sev.h"
>>  #include "svm.h"
>> +#include "x86/xsave.h"
>>
>>  extern phys_addr_t ghcb_addr;
>>
>> @@ -52,6 +53,100 @@ static void vc_finish_insn(struct es_em_ctxt *ctxt)
>>         ctxt->regs->rip += ctxt->insn.length;
>>  }
>>
>> +static inline u64 lower_bits(u64 val, unsigned int bits)
>> +{
>> +       u64 mask = (1ULL << bits) - 1;
>> +
>> +       return (val & mask);
>> +}
> 
> This isn't used in this patch. I guess it ends up being used later, in
> path 9: "x86: AMD SEV-ES: Handle IOIO #VC". Let's introduce it there
> if we're going to put it in this file. Though, again, maybe it's worth
> creating a platform agnostic bit library, and put this and
> `_test_bit()` (introduced in a previous patch) there.
> 

Ack, it makes sense to introduce it later (and at a different place).

>> +
>> +static inline void sev_es_wr_ghcb_msr(u64 val)
>> +{
>> +       wrmsr(MSR_AMD64_SEV_ES_GHCB, val);
>> +}
>> +
>> +static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
>> +                                         struct es_em_ctxt *ctxt,
>> +                                         u64 exit_code, u64 exit_info_1,
>> +                                         u64 exit_info_2)
>> +{
>> +       enum es_result ret;
>> +
>> +       /* Fill in protocol and format specifiers */
>> +       ghcb->protocol_version = GHCB_PROTOCOL_MAX;
>> +       ghcb->ghcb_usage       = GHCB_DEFAULT_USAGE;
>> +
>> +       ghcb_set_sw_exit_code(ghcb, exit_code);
>> +       ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
>> +       ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
>> +
>> +       sev_es_wr_ghcb_msr(__pa(ghcb));
>> +       VMGEXIT();
>> +
>> +       if ((ghcb->save.sw_exit_info_1 & 0xffffffff) == 1) {
>> +               u64 info = ghcb->save.sw_exit_info_2;
>> +               unsigned long v;
>> +
>> +               info = ghcb->save.sw_exit_info_2;
> 
> This line seems redundant, since `info` is already initialized to this
> value when it's declared, two lines above. That being said, I see this
> is how the code is in Linux as well. I wonder if it was done like this
> on accident.
> 

Nice catch, it seems so. It's harmless, but I will drop it in v3.

>> +               v = info & SVM_EVTINJ_VEC_MASK;
>> +
>> +               /* Check if exception information from hypervisor is sane. */
>> +               if ((info & SVM_EVTINJ_VALID) &&
>> +                   ((v == GP_VECTOR) || (v == UD_VECTOR)) &&
>> +                   ((info & SVM_EVTINJ_TYPE_MASK) == SVM_EVTINJ_TYPE_EXEPT)) {
>> +                       ctxt->fi.vector = v;
>> +                       if (info & SVM_EVTINJ_VALID_ERR)
>> +                               ctxt->fi.error_code = info >> 32;
>> +                       ret = ES_EXCEPTION;
>> +               } else {
>> +                       ret = ES_VMM_ERROR;
>> +               }
>> +       } else if (ghcb->save.sw_exit_info_1 & 0xffffffff) {
>> +               ret = ES_VMM_ERROR;
>> +       } else {
>> +               ret = ES_OK;
>> +       }
>> +
>> +       return ret;
>> +}
>> +
>> +static enum es_result vc_handle_cpuid(struct ghcb *ghcb,
>> +                                     struct es_em_ctxt *ctxt)
>> +{
>> +       struct ex_regs *regs = ctxt->regs;
>> +       u32 cr4 = read_cr4();
>> +       enum es_result ret;
>> +
>> +       ghcb_set_rax(ghcb, regs->rax);
>> +       ghcb_set_rcx(ghcb, regs->rcx);
>> +
>> +       if (cr4 & X86_CR4_OSXSAVE) {
>> +               /* Safe to read xcr0 */
>> +               u64 xcr0;
>> +               xgetbv_checking(XCR_XFEATURE_ENABLED_MASK, &xcr0);
>> +               ghcb_set_xcr0(ghcb, xcr0);
>> +       } else
>> +               /* xgetbv will cause #GP - use reset value for xcr0 */
>> +               ghcb_set_xcr0(ghcb, 1);
> 
> nit: Consider adding curly braces to the else branch, so that it
> matches the if branch.
> 

Will do.

>> +
>> +       ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0);
>> +       if (ret != ES_OK)
>> +               return ret;
>> +
>> +       if (!(ghcb_rax_is_valid(ghcb) &&
>> +             ghcb_rbx_is_valid(ghcb) &&
>> +             ghcb_rcx_is_valid(ghcb) &&
>> +             ghcb_rdx_is_valid(ghcb)))
>> +               return ES_VMM_ERROR;
>> +
>> +       regs->rax = ghcb->save.rax;
>> +       regs->rbx = ghcb->save.rbx;
>> +       regs->rcx = ghcb->save.rcx;
>> +       regs->rdx = ghcb->save.rdx;
>> +
>> +       return ES_OK;
>> +}
>> +
>>  static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>>                                          struct ghcb *ghcb,
>>                                          unsigned long exit_code)
>> @@ -59,6 +154,9 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
>>         enum es_result result;
>>
>>         switch (exit_code) {
>> +       case SVM_EXIT_CPUID:
>> +               result = vc_handle_cpuid(ghcb, ctxt);
>> +               break;
>>         default:
>>                 /*
>>                  * Unexpected #VC exception
>> --
>> 2.32.0
>>
> 



* Re: [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for IOIO #VC
  2022-02-13  1:31   ` Marc Orr
@ 2022-02-24  9:42     ` Varad Gautam
  0 siblings, 0 replies; 25+ messages in thread
From: Varad Gautam @ 2022-02-24  9:42 UTC (permalink / raw)
  To: Marc Orr
  Cc: kvm list, Paolo Bonzini, Andrew Jones, Zixuan Wang, Erdem Aktas,
	David Rientjes, Sean Christopherson, Singh, Brijesh, Lendacky,
	Thomas, Joerg Roedel, bp

On 2/13/22 2:31 AM, Marc Orr wrote:
> On Wed, Feb 9, 2022 at 8:44 AM Varad Gautam <varad.gautam@suse.com> wrote:
>>
>> Using Linux's IOIO #VC processing logic.
>>
>> Signed-off-by: Varad Gautam <varad.gautam@suse.com>
>> ---
>>  lib/x86/amd_sev_vc.c | 108 ++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 106 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/x86/amd_sev_vc.c b/lib/x86/amd_sev_vc.c
>> index 88c95e1..c79d9be 100644
>> --- a/lib/x86/amd_sev_vc.c
>> +++ b/lib/x86/amd_sev_vc.c
>> @@ -278,10 +278,46 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
>>         return ES_OK;
>>  }
>>
>> +static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
>> +                                         void *src, unsigned char *buf,
>> +                                         unsigned int data_size,
>> +                                         unsigned int count,
>> +                                         bool backwards)
>> +{
>> +       int i, b = backwards ? -1 : 1;
>> +
>> +       for (i = 0; i < count; i++) {
>> +               void *s = src + (i * data_size * b);
>> +               unsigned char *d = buf + (i * data_size);
>> +
>> +               memcpy(d, s, data_size);
>> +       }
>> +
>> +       return ES_OK;
>> +}
>> +
>> +static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
>> +                                          void *dst, unsigned char *buf,
>> +                                          unsigned int data_size,
>> +                                          unsigned int count,
>> +                                          bool backwards)
>> +{
>> +       int i, s = backwards ? -1 : 1;
>> +
>> +       for (i = 0; i < count; i++) {
>> +               void *d = dst + (i * data_size * s);
>> +               unsigned char *b = buf + (i * data_size);
>> +
>> +               memcpy(d, b, data_size);
>> +       }
>> +
>> +       return ES_OK;
>> +}
>> +
>>  static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
>>  {
>>         struct ex_regs *regs = ctxt->regs;
>> -       u64 exit_info_1;
>> +       u64 exit_info_1, exit_info_2;
>>         enum es_result ret;
>>
>>         ret = vc_ioio_exitinfo(ctxt, &exit_info_1);
>> @@ -289,7 +325,75 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
>>                 return ret;
>>
>>         if (exit_info_1 & IOIO_TYPE_STR) {
>> -               ret = ES_VMM_ERROR;
>> +               /* (REP) INS/OUTS */
>> +
>> +               bool df = ((regs->rflags & X86_EFLAGS_DF) == X86_EFLAGS_DF);
>> +               unsigned int io_bytes, exit_bytes;
>> +               unsigned int ghcb_count, op_count;
>> +               unsigned long es_base;
>> +               u64 sw_scratch;
>> +
>> +               /*
>> +                * For the string variants with a rep prefix, the number of in/out
>> +                * operations per #VC exception is limited so that the kernel
>> +                * has a chance to take interrupts and re-schedule while the
>> +                * instruction is emulated.
>> +                */
>> +               io_bytes   = (exit_info_1 >> 4) & 0x7;
>> +               ghcb_count = sizeof(ghcb->shared_buffer) / io_bytes;
>> +
>> +               op_count    = (exit_info_1 & IOIO_REP) ? regs->rcx : 1;
>> +               exit_info_2 = op_count < ghcb_count ? op_count : ghcb_count;
>> +               exit_bytes  = exit_info_2 * io_bytes;
>> +
>> +               es_base = 0;
>> +
>> +               /* Read bytes of OUTS into the shared buffer */
>> +               if (!(exit_info_1 & IOIO_TYPE_IN)) {
>> +                       ret = vc_insn_string_read(ctxt,
>> +                                              (void *)(es_base + regs->rsi),
>> +                                              ghcb->shared_buffer, io_bytes,
>> +                                              exit_info_2, df);
>> +                       if (ret)
>> +                               return ret;
>> +               }
>> +
>> +               /*
>> +                * Issue a VMGEXIT to the HV to consume the bytes from the
>> +                * shared buffer or to have it write them into the shared buffer
>> +                * depending on the instruction: OUTS or INS.
>> +                */
>> +               sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer);
>> +               ghcb_set_sw_scratch(ghcb, sw_scratch);
>> +               ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO,
>> +                                         exit_info_1, exit_info_2);
>> +               if (ret != ES_OK)
>> +                       return ret;
>> +
>> +               /* Read bytes from shared buffer into the guest's destination. */
>> +               if (exit_info_1 & IOIO_TYPE_IN) {
>> +                       ret = vc_insn_string_write(ctxt,
>> +                                                  (void *)(es_base + regs->rdi),
>> +                                                  ghcb->shared_buffer, io_bytes,
>> +                                                  exit_info_2, df);
>> +                       if (ret)
>> +                               return ret;
>> +
>> +                       if (df)
>> +                               regs->rdi -= exit_bytes;
>> +                       else
>> +                               regs->rdi += exit_bytes;
>> +               } else {
>> +                       if (df)
>> +                               regs->rsi -= exit_bytes;
>> +                       else
>> +                               regs->rsi += exit_bytes;
>> +               }
>> +
>> +               if (exit_info_1 & IOIO_REP)
>> +                       regs->rcx -= exit_info_2;
>> +
>> +               ret = regs->rcx ? ES_RETRY : ES_OK;
>>         } else {
>>                 /* IN/OUT into/from rAX */
>>
>> --
>> 2.32.0
>>
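As a note for anyone reviewing the chunking math in vc_handle_ioio() above: the number of operations emulated per #VC can be sketched standalone as below. The IOIO_REP value and the shared-buffer size are illustrative stand-ins mirroring the patch's definitions, not taken verbatim from this series.

```c
#include <stdint.h>

/* Illustrative stand-ins for the patch's definitions. */
#define IOIO_REP             (1u << 3)
#define GHCB_SHARED_BUF_SIZE 2032u    /* assumed size of ghcb->shared_buffer */

/*
 * Mirrors the exit_info_2 computation in vc_handle_ioio(): bits 6:4 of
 * exit_info_1 are a one-hot operand-size field (SZ8/SZ16/SZ32), so the
 * extracted value is directly the byte count (1, 2, or 4). The per-#VC
 * operation count is the REP count capped to what fits in the GHCB
 * shared buffer.
 */
static uint64_t ioio_chunk(uint64_t exit_info_1, uint64_t rcx)
{
	unsigned int io_bytes   = (exit_info_1 >> 4) & 0x7;
	unsigned int ghcb_count = GHCB_SHARED_BUF_SIZE / io_bytes;
	unsigned int op_count   = (exit_info_1 & IOIO_REP) ? rcx : 1;

	return op_count < ghcb_count ? op_count : ghcb_count;
}
```

This is why a large REP OUTS/INS takes several #VC round trips: each exit handles at most one shared buffer's worth of data, and the ES_RETRY return re-enters the handler until rcx drains to zero.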
> 
> I was able to run both the amd_sev and msr tests under SEV-ES using
> this built-in #VC handler on my setup. Obviously, that doesn't
> exercise all of this #VC handler code. But I also compared it against
> what's in Linux and read through all of it as well. Great job, Varad!
> 
> Reviewed-by: Marc Orr <marcorr@google.com>
> Tested-by: Marc Orr <marcorr@google.com>
> 

Thank you Marc for going over the series! I'll have the v3 out soon.
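For the archive, the direction-flag handling in vc_insn_string_read()/vc_insn_string_write() boils down to the sketch below: with DF set the guest pointer walks downward element by element while the shared buffer still fills upward. The function name and harness here are mine, not part of the patch.

```c
#include <string.h>

/*
 * Mirrors vc_insn_string_read(): copy `count` elements of `data_size`
 * bytes each from the guest source into a linear buffer. When
 * `backwards` is set (RFLAGS.DF), successive elements are read at
 * decreasing addresses, but the buffer is always filled front to back.
 */
static void string_read(const unsigned char *src, unsigned char *buf,
			unsigned int data_size, unsigned int count,
			int backwards)
{
	int step = backwards ? -1 : 1;

	for (unsigned int i = 0; i < count; i++)
		memcpy(buf + i * data_size,
		       src + (int)(i * data_size) * step, data_size);
}
```

The write direction is symmetric, which is also why the handler adjusts rsi/rdi by the full exit_bytes in the matching direction after the VMGEXIT completes.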

Regards,
Varad




Thread overview: 25+ messages
2022-02-09 16:44 [kvm-unit-tests PATCH v2 00/10] Add #VC exception handling for AMD SEV-ES Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 01/10] x86: AMD SEV-ES: Setup #VC exception handler " Varad Gautam
2022-02-12 16:59   ` Marc Orr
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 02/10] x86: Move svm.h to lib/x86/ Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 03/10] lib: x86: Import insn decoder from Linux Varad Gautam
2022-02-12 17:42   ` Marc Orr
2022-02-24  9:14     ` Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 04/10] x86: AMD SEV-ES: Pull related GHCB definitions and helpers " Varad Gautam
2022-02-12 19:09   ` Marc Orr
2022-02-24  9:17     ` Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 05/10] x86: AMD SEV-ES: Prepare for #VC processing Varad Gautam
2022-02-12 20:54   ` Marc Orr
2022-02-24  9:32     ` Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 06/10] lib/x86: Move xsave helpers to lib/ Varad Gautam
2022-02-12 21:12   ` Marc Orr
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 07/10] x86: AMD SEV-ES: Handle CPUID #VC Varad Gautam
2022-02-12 21:32   ` Marc Orr
2022-02-24  9:41     ` Varad Gautam
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 08/10] x86: AMD SEV-ES: Handle MSR #VC Varad Gautam
2022-02-12 21:49   ` Marc Orr
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 09/10] x86: AMD SEV-ES: Handle IOIO #VC Varad Gautam
2022-02-12 23:03   ` Marc Orr
2022-02-09 16:44 ` [kvm-unit-tests PATCH v2 10/10] x86: AMD SEV-ES: Handle string IO for " Varad Gautam
2022-02-13  1:31   ` Marc Orr
2022-02-24  9:42     ` Varad Gautam
