kvm.vger.kernel.org archive mirror
* [kvm-unit-tests v4 0/4] x86: Add test cases for LAM
@ 2023-05-04  8:47 Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Binbin Wu @ 2023-05-04  8:47 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu

Intel Linear Address Masking (LAM) [1] modifies the canonicality checking
applied to *64-bit* linear addresses, allowing software to use the
untranslated address bits for metadata.
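
As a minimal illustration of the idea (a hypothetical sketch, not code from
this series; LAM48 assumed, where bits 62:48 carry metadata):

	/* Hypothetical sketch of LAM48-style pointer tagging. */
	#define LAM48_METADATA_MASK	0x7fff000000000000ull	/* bits 62:48 */

	static inline u64 tag_pointer_lam48(u64 ptr, u64 tag)
	{
		/*
		 * Place metadata in bits 62:48; with LAM48 active, the CPU
		 * masks these bits (sign-extending from bit 47) before the
		 * address is translated.
		 */
		return (ptr & ~LAM48_METADATA_MASK) |
		       ((tag << 48) & LAM48_METADATA_MASK);
	}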

This patch series adds test cases for KVM LAM:

Patch 1 changes the vmlaunch tests to allow setting of the CR3 LAM bits.
Patches 2~4 add test cases for LAM supervisor mode and user mode, including:
- For supervisor mode
  CR4.LAM_SUP toggle
  Memory/MMIO access with tagged pointer
  INVLPG
  INVPCID
  INVVPID (also used to cover VMX instruction VMExit path)
- For user mode
  CR3 LAM bits toggle 
  Memory/MMIO access with tagged pointer

[1] Intel ISE https://cdrdv2.intel.com/v1/dl/getContent/671368
    Chapter Linear Address Masking (LAM)
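
For reference, the three masking modes covered (summarizing the spec cited
above; the LAM_SUP width follows the paging mode in use):

    Mode     Enable bit         Masked metadata bits
    LAM_U48  CR3.LAM_U48 (62)   62:48
    LAM_U57  CR3.LAM_U57 (61)   62:57
    LAM_SUP  CR4.LAM_SUP (28)   62:48 (4-level) / 62:57 (5-level)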

---
Changelog
v3 --> v4:
Remove an extra space in patch 1.
Remove an unnecessary local variable in patch 2.
Add some comments about the linear address used for LAM tests in patch 2 and patch 3.
Add a static assert to check the physical address range for LAM user mode tests in patch 3.
Remove the unnecessary identical mapping code in patch 3.
Move common definitions and helper function to header file in patch 2.
In patch 4, test INVVPID with LAM only when individual-address invalidation is supported.
Add Reviewed-by from Chao Gao to patches 2~4.

v2 --> v3:
Simplify the implementation of set_metadata() (suggested by Chao Gao).
Move the definition of X86_FEATURE_LAM to patch 1.
Remove the strcpy case and unify memory & MMIO access using the MOV instruction.
Some code cleanups.
Add Reviewed-by from Chao Gao to patch 1.

v1 --> v2:
Add cases to test INVLPG, INVPCID, and INVVPID with LAM_SUP.
Add cases to test LAM_{U48,U57}.

Binbin Wu (3):
  x86: Allow setting of CR3 LAM bits if LAM supported
  x86: Add test cases for LAM_{U48,U57}
  x86: Add test case for INVVPID with LAM

Robert Hoo (1):
  x86: Add test case for LAM_SUP

 lib/x86/processor.h |  15 +++
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 319 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 x86/vmx_tests.c     |  58 +++++++-
 5 files changed, 401 insertions(+), 2 deletions(-)
 create mode 100644 x86/lam.c


base-commit: 02d8befe99f8205d4caea402d8b0800354255681
-- 
2.25.1


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported
  2023-05-04  8:47 [kvm-unit-tests v4 0/4] x86: Add test cases for LAM Binbin Wu
@ 2023-05-04  8:47 ` Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 2/4] x86: Add test case for LAM_SUP Binbin Wu
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Binbin Wu @ 2023-05-04  8:47 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu

If LAM is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57
(bit 61) to be set in the CR3 field.

Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57
in the vmlaunch tests when LAM is supported.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h | 3 +++
 x86/vmx_tests.c     | 6 +++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6555056..901df98 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -55,6 +55,8 @@
 #define X86_CR0_PG		BIT(X86_CR0_PG_BIT)
 
 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
+#define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U48_BIT	(62)
 
 #define X86_CR4_VME_BIT		(0)
 #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
@@ -249,6 +251,7 @@ static inline bool is_intel(void)
 #define	X86_FEATURE_FLUSH_L1D		(CPUID(0x7, 0, EDX, 28))
 #define	X86_FEATURE_ARCH_CAPABILITIES	(CPUID(0x7, 0, EDX, 29))
 #define	X86_FEATURE_PKS			(CPUID(0x7, 0, ECX, 31))
+#define	X86_FEATURE_LAM			(CPUID(0x7, 1, EAX, 26))
 
 /*
  * Extended Leafs, a.k.a. AMD defined
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 7952ccb..217befe 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -6998,7 +6998,11 @@ static void test_host_ctl_regs(void)
 		cr3 = cr3_saved | (1ul << i);
 		vmcs_write(HOST_CR3, cr3);
 		report_prefix_pushf("HOST_CR3 %lx", cr3);
-		test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
+		if (this_cpu_has(X86_FEATURE_LAM) &&
+		    ((i == X86_CR3_LAM_U57_BIT) || (i == X86_CR3_LAM_U48_BIT)))
+			test_vmx_vmlaunch(0);
+		else
+			test_vmx_vmlaunch(VMXERR_ENTRY_INVALID_HOST_STATE_FIELD);
 		report_prefix_pop();
 	}
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [kvm-unit-tests v4 2/4] x86: Add test case for LAM_SUP
  2023-05-04  8:47 [kvm-unit-tests v4 0/4] x86: Add test cases for LAM Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
@ 2023-05-04  8:47 ` Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
  3 siblings, 0 replies; 6+ messages in thread
From: Binbin Wu @ 2023-05-04  8:47 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu

From: Robert Hoo <robert.hu@linux.intel.com>

This unit test covers:
1. CR4.LAM_SUP toggling.
2. Memory & MMIO accesses through a supervisor-mode address with LAM
   metadata.
3. The INVLPG memory operand doesn't contain LAM metadata; if the address
   is in non-canonical form, INVLPG behaves the same as a NOP (no #GP).
4. The INVPCID memory operand (descriptor pointer) may contain LAM
   metadata; however, the address inside the descriptor must be canonical
   (see the sketch below).
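
A hand-worked sketch of those expectations, using the helpers this patch
introduces (assuming CR4.LAM_SUP is set with 4-level paging, i.e. LAM48
masking; "va" stands for any mapped, canonical supervisor-mode address):

	u64 tagged = set_la_non_canonical(va, LAM48_MASK);

	*(volatile u64 *)tagged = 0;	/* no #GP: bits 62:48 are masked */
	invlpg((void *)tagged);		/* no #GP: behaves like a NOP */
	/*
	 * INVPCID with descriptor.addr == tagged => #GP, because LAM does
	 * not apply to the address inside the descriptor.
	 */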

In x86/unittests.cfg, add two test cases/guest configurations, with and
without LAM.

LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368, Chap 7
LINEAR ADDRESS MASKING (LAM)

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h |  10 ++
 x86/Makefile.x86_64 |   1 +
 x86/lam.c           | 243 ++++++++++++++++++++++++++++++++++++++++++++
 x86/unittests.cfg   |  10 ++
 4 files changed, 264 insertions(+)
 create mode 100644 x86/lam.c

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 901df98..6803c9c 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -8,6 +8,14 @@
 #include <stdint.h>
 
 #define NONCANONICAL	0xaaaaaaaaaaaaaaaaull
+#define LAM57_MASK	GENMASK_ULL(62, 57)
+#define LAM48_MASK	GENMASK_ULL(62, 48)
+
+/* Set metadata with non-canonical pattern in mask bits of a linear address */
+static inline u64 set_la_non_canonical(u64 src, u64 mask)
+{
+	return (src & ~mask) | (NONCANONICAL & mask);
+}
 
 #ifdef __x86_64__
 #  define R "r"
@@ -107,6 +115,8 @@
 #define X86_CR4_CET		BIT(X86_CR4_CET_BIT)
 #define X86_CR4_PKS_BIT		(24)
 #define X86_CR4_PKS		BIT(X86_CR4_PKS_BIT)
+#define X86_CR4_LAM_SUP_BIT	(28)
+#define X86_CR4_LAM_SUP		BIT(X86_CR4_LAM_SUP_BIT)
 
 #define X86_EFLAGS_CF_BIT	(0)
 #define X86_EFLAGS_CF		BIT(X86_EFLAGS_CF_BIT)
diff --git a/x86/Makefile.x86_64 b/x86/Makefile.x86_64
index f483dea..fa11eb3 100644
--- a/x86/Makefile.x86_64
+++ b/x86/Makefile.x86_64
@@ -34,6 +34,7 @@ tests += $(TEST_DIR)/rdpru.$(exe)
 tests += $(TEST_DIR)/pks.$(exe)
 tests += $(TEST_DIR)/pmu_lbr.$(exe)
 tests += $(TEST_DIR)/pmu_pebs.$(exe)
+tests += $(TEST_DIR)/lam.$(exe)
 
 ifeq ($(CONFIG_EFI),y)
 tests += $(TEST_DIR)/amd_sev.$(exe)
diff --git a/x86/lam.c b/x86/lam.c
new file mode 100644
index 0000000..4e631c5
--- /dev/null
+++ b/x86/lam.c
@@ -0,0 +1,243 @@
+/*
+ * Intel LAM unit test
+ *
+ * Copyright (C) 2023 Intel
+ *
+ * Author: Robert Hoo <robert.hu@linux.intel.com>
+ *         Binbin Wu <binbin.wu@linux.intel.com>
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or
+ * later.
+ */
+
+#include "libcflat.h"
+#include "processor.h"
+#include "desc.h"
+#include "vmalloc.h"
+#include "alloc_page.h"
+#include "vm.h"
+#include "asm/io.h"
+#include "ioram.h"
+
+#define FLAGS_LAM_ACTIVE	BIT_ULL(0)
+#define FLAGS_LA57		BIT_ULL(1)
+
+struct invpcid_desc {
+	u64 pcid : 12;
+	u64 rsv  : 52;
+	u64 addr;
+};
+
+static inline bool is_la57(void)
+{
+	return !!(read_cr4() & X86_CR4_LA57);
+}
+
+static inline bool lam_sup_active(void)
+{
+	return !!(read_cr4() & X86_CR4_LAM_SUP);
+}
+
+static void cr4_set_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 | X86_CR4_LAM_SUP);
+}
+
+static void cr4_clear_lam_sup(void *data)
+{
+	unsigned long cr4;
+
+	cr4 = read_cr4();
+	write_cr4_safe(cr4 & ~X86_CR4_LAM_SUP);
+}
+
+static void test_cr4_lam_set_clear(bool has_lam)
+{
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+	report((fault != has_lam) && (lam_sup_active() == has_lam),
+	       "Set CR4.LAM_SUP");
+
+	fault = test_for_exception(GP_VECTOR, &cr4_clear_lam_sup, NULL);
+	report(!fault, "Clear CR4.LAM_SUP");
+}
+
+/* Refer to emulator.c */
+static void do_mov(void *mem)
+{
+	unsigned long t1, t2;
+
+	t1 = 0x123456789abcdefull & -1ul;
+	asm volatile("mov %[t1], (%[mem])\n\t"
+		     "mov (%[mem]), %[t2]"
+		     : [t2]"=r"(t2)
+		     : [t1]"r"(t1), [mem]"r"(mem)
+		     : "memory");
+	report(t1 == t2, "Mov result check");
+}
+
+static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
+{
+	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	u64 lam_mask = arg2;
+	u64 *ptr = (u64 *)arg3;
+	bool is_mmio = !!arg4;
+	bool fault;
+
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(!fault, "Test untagged addr (%s)", is_mmio ? "MMIO" : "Memory");
+
+	ptr = (u64 *)set_la_non_canonical((u64)ptr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+	report(fault != lam_active, "Test tagged addr (%s)",
+	       is_mmio ? "MMIO" : "Memory");
+
+	return 0;
+}
+
+static void do_invlpg(void *mem)
+{
+	invlpg(mem);
+}
+
+static void do_invlpg_fep(void *mem)
+{
+	asm volatile(KVM_FEP "invlpg (%0)" ::"r" (mem) : "memory");
+}
+
+/* INVLPG with a tagged address behaves the same as a NOP; no #GP expected. */
+static void test_invlpg(u64 lam_mask, void *va, bool fep)
+{
+	bool fault;
+	u64 *ptr;
+
+	ptr = (u64 *)set_la_non_canonical((u64)va, lam_mask);
+	if (fep)
+		fault = test_for_exception(GP_VECTOR, do_invlpg_fep, ptr);
+	else
+		fault = test_for_exception(GP_VECTOR, do_invlpg, ptr);
+
+	report(!fault, "%sINVLPG with tagged addr", fep ? "fep: " : "");
+}
+
+static void do_invpcid(void *desc)
+{
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)desc;
+
+	asm volatile("invpcid %0, %1" :
+	                              : "m" (*desc_ptr), "r" (0UL)
+	                              : "memory");
+}
+
+/* LAM doesn't apply to the target address in the descriptor of invpcid */
+static void test_invpcid(u64 flags, u64 lam_mask, void *data)
+{
+	/*
+	 * Reuse the memory address for the descriptor, since stack addresses
+	 * in KUT don't follow the kernel address-space partitioning.
+	 */
+	struct invpcid_desc *desc_ptr = (struct invpcid_desc *)data;
+	bool lam_active = !!(flags & FLAGS_LAM_ACTIVE);
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_PCID) ||
+	    !this_cpu_has(X86_FEATURE_INVPCID)) {
+		report_skip("INVPCID not supported");
+		return;
+	}
+
+	memset(desc_ptr, 0, sizeof(struct invpcid_desc));
+	desc_ptr->addr = (u64)data;
+
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(!fault, "INVPCID: untagged pointer + untagged addr");
+
+	desc_ptr->addr = set_la_non_canonical(desc_ptr->addr, lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: untagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+							       lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault, "INVPCID: tagged pointer + tagged addr");
+
+	desc_ptr = (struct invpcid_desc *)data;
+	desc_ptr->addr = (u64)data;
+	desc_ptr = (struct invpcid_desc *)set_la_non_canonical((u64)desc_ptr,
+							       lam_mask);
+	fault = test_for_exception(GP_VECTOR, do_invpcid, desc_ptr);
+	report(fault != lam_active, "INVPCID: tagged pointer + untagged addr");
+}
+
+static void test_lam_sup(bool has_lam, bool fep_available)
+{
+	void *vaddr, *vaddr_mmio;
+	phys_addr_t paddr;
+	u64 lam_mask = LAM48_MASK;
+	u64 flags = 0;
+	bool fault;
+
+	/*
+	 * KUT initializes vfree_top to 0 for X86_64, and each virtual address
+	 * allocation grows downward from vfree_top. The return value of
+	 * alloc_vpage() is therefore guaranteed to be a canonical, kernel-mode
+	 * address, since only a small amount of virtual address space is
+	 * allocated in this test.
+	 */
+	vaddr = alloc_vpage();
+	vaddr_mmio = alloc_vpage();
+	paddr = virt_to_phys(alloc_page());
+	install_page(current_page_table(), paddr, vaddr);
+	install_page(current_page_table(), IORAM_BASE_PHYS, vaddr_mmio);
+
+	test_cr4_lam_set_clear(has_lam);
+
+	/* Set for the following LAM_SUP tests */
+	if (has_lam) {
+		fault = test_for_exception(GP_VECTOR, &cr4_set_lam_sup, NULL);
+		report(!fault && lam_sup_active(), "Set CR4.LAM_SUP");
+	}
+
+	if (lam_sup_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	if (is_la57()) {
+		flags |= FLAGS_LA57;
+		lam_mask = LAM57_MASK;
+	}
+
+	/* Test for normal memory */
+	test_ptr(flags, lam_mask, (u64)vaddr, false);
+	/* Test for MMIO to trigger instruction emulation */
+	test_ptr(flags, lam_mask, (u64)vaddr_mmio, true);
+	test_invpcid(flags, lam_mask, vaddr);
+	test_invlpg(lam_mask, vaddr, false);
+	if (fep_available)
+		test_invlpg(lam_mask, vaddr, true);
+}
+
+int main(int ac, char **av)
+{
+	bool has_lam;
+	bool fep_available = is_fep_available();
+
+	setup_vm();
+
+	has_lam = this_cpu_has(X86_FEATURE_LAM);
+	if (!has_lam)
+		report_info("This CPU doesn't support the LAM feature\n");
+	else
+		report_info("This CPU supports the LAM feature\n");
+
+	if (!fep_available)
+		report_skip("Skipping tests that require forced emulation; "
+			    "use kvm.force_emulation_prefix=1 to enable\n");
+
+	test_lam_sup(has_lam, fep_available);
+
+	return report_summary();
+}
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index f4e7b25..b8a628e 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -492,3 +492,13 @@ file = cet.flat
 arch = x86_64
 smp = 2
 extra_params = -enable-kvm -m 2048 -cpu host
+
+[intel-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host
+
+[intel-no-lam]
+file = lam.flat
+arch = x86_64
+extra_params = -enable-kvm -cpu host,-lam
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [kvm-unit-tests v4 3/4] x86: Add test cases for LAM_{U48,U57}
  2023-05-04  8:47 [kvm-unit-tests v4 0/4] x86: Add test cases for LAM Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 2/4] x86: Add test case for LAM_SUP Binbin Wu
@ 2023-05-04  8:47 ` Binbin Wu
  2023-05-04  8:47 ` [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
  3 siblings, 0 replies; 6+ messages in thread
From: Binbin Wu @ 2023-05-04  8:47 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu

This unit test covers:
1. CR3 LAM bits toggling.
2. Memory/MMIO accesses through a user-mode address containing LAM metadata.
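
A hand-worked illustration of the tagging used here (the address value is
hypothetical; the helper and mask are the ones introduced in patch 2):

	/* User address (bit 63 clear), LAM_U57: metadata in bits 62:57. */
	u64 va     = 0x0000000012345000ull;
	u64 tagged = set_la_non_canonical(va, LAM57_MASK);
	/*
	 * tagged == 0x2a00000012345000: NONCANONICAL (0xaaaa...) contributes
	 * bits 61, 59 and 57. With LAM_U57 active, the hardware masks bits
	 * 62:57 before translation, so the access still reaches va.
	 */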

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 lib/x86/processor.h |  2 ++
 x86/lam.c           | 76 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 6803c9c..d45566a 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -64,7 +64,9 @@ static inline u64 set_la_non_canonical(u64 src, u64 mask)
 
 #define X86_CR3_PCID_MASK	GENMASK(11, 0)
 #define X86_CR3_LAM_U57_BIT	(61)
+#define X86_CR3_LAM_U57		BIT_ULL(X86_CR3_LAM_U57_BIT)
 #define X86_CR3_LAM_U48_BIT	(62)
+#define X86_CR3_LAM_U48		BIT_ULL(X86_CR3_LAM_U48_BIT)
 
 #define X86_CR4_VME_BIT		(0)
 #define X86_CR4_VME		BIT(X86_CR4_VME_BIT)
diff --git a/x86/lam.c b/x86/lam.c
index 4e631c5..2920747 100644
--- a/x86/lam.c
+++ b/x86/lam.c
@@ -18,6 +18,9 @@
 #include "vm.h"
 #include "asm/io.h"
 #include "ioram.h"
+#include "usermode.h"
+
+#define CR3_LAM_BITS_MASK (X86_CR3_LAM_U48 | X86_CR3_LAM_U57)
 
 #define FLAGS_LAM_ACTIVE	BIT_ULL(0)
 #define FLAGS_LA57		BIT_ULL(1)
@@ -38,6 +41,16 @@ static inline bool lam_sup_active(void)
 	return !!(read_cr4() & X86_CR4_LAM_SUP);
 }
 
+static inline bool lam_u48_active(void)
+{
+	return (read_cr3() & CR3_LAM_BITS_MASK) == X86_CR3_LAM_U48;
+}
+
+static inline bool lam_u57_active(void)
+{
+	return !!(read_cr3() & X86_CR3_LAM_U57);
+}
+
 static void cr4_set_lam_sup(void *data)
 {
 	unsigned long cr4;
@@ -83,6 +96,7 @@ static void do_mov(void *mem)
 static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 {
 	bool lam_active = !!(arg1 & FLAGS_LAM_ACTIVE);
+	bool la_57 = !!(arg1 & FLAGS_LA57);
 	u64 lam_mask = arg2;
 	u64 *ptr = (u64 *)arg3;
 	bool is_mmio = !!arg4;
@@ -96,6 +110,17 @@ static u64 test_ptr(u64 arg1, u64 arg2, u64 arg3, u64 arg4)
 	report(fault != lam_active, "Test tagged addr (%s)",
 	       is_mmio ? "MMIO" : "Memory");
 
+	/*
+	 * This test case is only triggered when LAM_U57 is active and 4-level
+	 * paging is in use; then, bits [56:47] not all being 0 triggers #GP.
+	 */
+	if (lam_active && (lam_mask == LAM57_MASK) && !la_57) {
+		ptr = (u64 *)set_la_non_canonical((u64)ptr, LAM48_MASK);
+		fault = test_for_exception(GP_VECTOR, do_mov, ptr);
+		report(fault, "Test non-LAM-canonical addr (%s)",
+		       is_mmio ? "MMIO" : "Memory");
+	}
+
 	return 0;
 }
 
@@ -220,6 +245,56 @@ static void test_lam_sup(bool has_lam, bool fep_available)
 		test_invlpg(lam_mask, vaddr, true);
 }
 
+static void test_lam_user_mode(bool has_lam, u64 lam_mask, u64 mem, u64 mmio)
+{
+	unsigned r;
+	bool raised_vector;
+	unsigned long cr3 = read_cr3() & ~CR3_LAM_BITS_MASK;
+	u64 flags = 0;
+
+	if (is_la57())
+		flags |= FLAGS_LA57;
+
+	if (has_lam) {
+		if (lam_mask == LAM48_MASK) {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U48);
+			report((r == 0) && lam_u48_active(), "Set LAM_U48");
+		} else {
+			r = write_cr3_safe(cr3 | X86_CR3_LAM_U57);
+			report((r == 0) && lam_u57_active(), "Set LAM_U57");
+		}
+	}
+	if (lam_u48_active() || lam_u57_active())
+		flags |= FLAGS_LAM_ACTIVE;
+
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mem,
+		    false, &raised_vector);
+	run_in_user((usermode_func)test_ptr, GP_VECTOR, flags, lam_mask, mmio,
+		    true, &raised_vector);
+}
+
+static void test_lam_user(bool has_lam)
+{
+	phys_addr_t paddr;
+
+	/*
+	 * The physical address of AREA_NORMAL fits in 36 bits, so with
+	 * identity mapping the linear address is treated as a user-mode
+	 * address from LAM's point of view, and the metadata bits are not
+	 * used as address bits for either LAM48 or LAM57.
+	 */
+	paddr = virt_to_phys(alloc_pages_flags(0, AREA_NORMAL));
+	_Static_assert((AREA_NORMAL_PFN & GENMASK(63, 47)) == 0UL,
+			"Identical mapping range check");
+
+	/*
+	 * Physical memory & MMIO have already been identity-mapped in
+	 * setup_mmu().
+	 */
+	test_lam_user_mode(has_lam, LAM48_MASK, paddr, IORAM_BASE_PHYS);
+	test_lam_user_mode(has_lam, LAM57_MASK, paddr, IORAM_BASE_PHYS);
+}
+
 int main(int ac, char **av)
 {
 	bool has_lam;
@@ -238,6 +313,7 @@ int main(int ac, char **av)
 			    "use kvm.force_emulation_prefix=1 to enable\n");
 
 	test_lam_sup(has_lam, fep_available);
+	test_lam_user(has_lam);
 
 	return report_summary();
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM
  2023-05-04  8:47 [kvm-unit-tests v4 0/4] x86: Add test cases for LAM Binbin Wu
                   ` (2 preceding siblings ...)
  2023-05-04  8:47 ` [kvm-unit-tests v4 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
@ 2023-05-04  8:47 ` Binbin Wu
  2023-05-09  1:38   ` Binbin Wu
  3 siblings, 1 reply; 6+ messages in thread
From: Binbin Wu @ 2023-05-04  8:47 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu, binbin.wu

When LAM is on, the linear address of the INVVPID operand can contain
metadata, and the linear address in the INVVPID descriptor can also
contain metadata.

The added cases use a tagged descriptor address and/or a tagged target
invalidation address to make sure the behavior is as expected when LAM
is on.
Also, the INVVPID cases can be used as common test cases for the VMX
instruction VM-exit path.
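
For reference, the 128-bit INVVPID descriptor the test manipulates, as laid
out in the SDM (the sketch mirrors the invvpid_operand type already used by
the INVVPID tests in vmx_tests.c):

	struct invvpid_operand {
		u64 vpid;	/* bits 15:0 hold the VPID; 63:16 reserved */
		u64 gla;	/* guest linear address to invalidate */
	};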

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 x86/vmx_tests.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index 217befe..678c9ec 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -3225,6 +3225,54 @@ static void invvpid_test_not_in_vmx_operation(void)
 	TEST_ASSERT(!vmx_on());
 }
 
+/* LAM applies to the target address inside the descriptor of invvpid */
+static void invvpid_test_lam(void)
+{
+	void *vaddr;
+	struct invvpid_operand *operand;
+	u64 lam_mask = LAM48_MASK;
+	bool fault;
+
+	if (!this_cpu_has(X86_FEATURE_LAM)) {
+		report_skip("LAM is not supported, skip INVVPID with LAM");
+		return;
+	}
+	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
+
+	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
+		lam_mask = LAM57_MASK;
+
+	vaddr = alloc_vpage();
+	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
+	/*
+	 * Since stack addresses in KUT don't follow the kernel address-space
+	 * partitioning rule, reuse the same memory address for the descriptor
+	 * and for the target address inside the INVVPID descriptor.
+	 */
+	operand = (struct invvpid_operand *)vaddr;
+	operand->vpid = 0xffff;
+	operand->gla = (u64)vaddr;
+
+	operand = (struct invvpid_operand *)vaddr;
+	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
+
+	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+								 lam_mask);
+	operand->gla = (u64)vaddr;
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
+
+	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
+								 lam_mask);
+	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
+	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
+	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
+
+	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
+}
+
 /*
  * This does not test real-address mode, virtual-8086 mode, protected mode,
  * or CPL > 0.
@@ -3274,8 +3322,10 @@ static void invvpid_test(void)
 	/*
 	 * The gla operand is only validated for single-address INVVPID.
 	 */
-	if (types & (1u << INVVPID_ADDR))
+	if (types & (1u << INVVPID_ADDR)) {
 		try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
+		invvpid_test_lam();
+	}
 
 	invvpid_test_gp();
 	invvpid_test_ss();
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM
  2023-05-04  8:47 ` [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
@ 2023-05-09  1:38   ` Binbin Wu
  0 siblings, 0 replies; 6+ messages in thread
From: Binbin Wu @ 2023-05-09  1:38 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, pbonzini, chao.gao, robert.hu



On 5/4/2023 4:47 PM, Binbin Wu wrote:
> When LAM is on, the linear address of the INVVPID operand can contain
> metadata, and the linear address in the INVVPID descriptor can also
> contain metadata.
>
> The added cases use a tagged descriptor address and/or a tagged target
> invalidation address to make sure the behavior is as expected when LAM
> is on.
> Also, the INVVPID cases can be used as common test cases for the VMX
> instruction VM-exit path.
>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
> ---
>   x86/vmx_tests.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 51 insertions(+), 1 deletion(-)
>
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index 217befe..678c9ec 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -3225,6 +3225,54 @@ static void invvpid_test_not_in_vmx_operation(void)
>   	TEST_ASSERT(!vmx_on());
>   }
>   
> +/* LAM applies to the target address inside the descriptor of invvpid */
> +static void invvpid_test_lam(void)
> +{
> +	void *vaddr;
> +	struct invvpid_operand *operand;
> +	u64 lam_mask = LAM48_MASK;
> +	bool fault;
> +
> +	if (!this_cpu_has(X86_FEATURE_LAM)) {
> +		report_skip("LAM is not supported, skip INVVPID with LAM");
> +		return;
> +	}
> +	write_cr4_safe(read_cr4() | X86_CR4_LAM_SUP);
> +
> +	if (this_cpu_has(X86_FEATURE_LA57) && read_cr4() & X86_CR4_LA57)
> +		lam_mask = LAM57_MASK;
> +
> +	vaddr = alloc_vpage();
> +	install_page(current_page_table(), virt_to_phys(alloc_page()), vaddr);
> +	/*
> +	 * Since stack addresses in KUT don't follow the kernel address-space
> +	 * partitioning rule, reuse the same memory address for the descriptor
> +	 * and for the target address inside the INVVPID descriptor.
> +	 */
> +	operand = (struct invvpid_operand *)vaddr;
> +	operand->vpid = 0xffff;
> +	operand->gla = (u64)vaddr;
> +
> +	operand = (struct invvpid_operand *)vaddr;
> +	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
> +	report(!fault, "INVVPID (LAM on): untagged pointer + tagged addr");
> +
> +	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
> +								 lam_mask);
> +	operand->gla = (u64)vaddr;
> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
> +	report(!fault, "INVVPID (LAM on): tagged pointer + untagged addr");
> +
> +	operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
> +								 lam_mask);
> +	operand->gla = set_la_non_canonical(operand->gla, lam_mask);
> +	fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
> +	report(!fault, "INVVPID (LAM on): tagged pointer + tagged addr");
The test cases designed for INVVPID with LAM are not right.

Will use two test cases to test INVVPID when LAM is activated:
One tests with a tagged operand, expecting no #GP.
The other tests with a tagged target address inside the descriptor,
expecting failure with VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID set in
the VMX_INST_ERROR field of the VMCS.

The new test code is proposed below:

     ....
     operand = (struct invvpid_operand *)vaddr;
     operand->vpid = 0xffff;
     operand->gla = (u64)vaddr;
     operand = (struct invvpid_operand *)set_la_non_canonical((u64)operand,
                                  lam_mask);
     fault = test_for_exception(GP_VECTOR, ds_invvpid, operand);
     report(!fault, "INVVPID (LAM on): tagged operand");

     /*
      * LAM doesn't apply to the address inside the descriptor; expect
      * failure with VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID set in
      * VMX_INST_ERROR.
      */
     try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);


> +
> +	write_cr4_safe(read_cr4() & ~X86_CR4_LAM_SUP);
> +}
> +
>   /*
>    * This does not test real-address mode, virtual-8086 mode, protected mode,
>    * or CPL > 0.
> @@ -3274,8 +3322,10 @@ static void invvpid_test(void)
>   	/*
>   	 * The gla operand is only validated for single-address INVVPID.
>   	 */
> -	if (types & (1u << INVVPID_ADDR))
> +	if (types & (1u << INVVPID_ADDR)) {
>   		try_invvpid(INVVPID_ADDR, 0xffff, NONCANONICAL);
> +		invvpid_test_lam();
> +	}
>   
>   	invvpid_test_gp();
>   	invvpid_test_ss();


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2023-05-09  1:38 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-04  8:47 [kvm-unit-tests v4 0/4] x86: Add test cases for LAM Binbin Wu
2023-05-04  8:47 ` [kvm-unit-tests v4 1/4] x86: Allow setting of CR3 LAM bits if LAM supported Binbin Wu
2023-05-04  8:47 ` [kvm-unit-tests v4 2/4] x86: Add test case for LAM_SUP Binbin Wu
2023-05-04  8:47 ` [kvm-unit-tests v4 3/4] x86: Add test cases for LAM_{U48,U57} Binbin Wu
2023-05-04  8:47 ` [kvm-unit-tests v4 4/4] x86: Add test case for INVVPID with LAM Binbin Wu
2023-05-09  1:38   ` Binbin Wu
