* [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest()
@ 2019-09-17 18:57 Marc Orr
  2019-09-17 18:57 ` [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs Marc Orr
  2019-09-17 19:14 ` [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest() Krish Sadhukhan
  0 siblings, 2 replies; 9+ messages in thread
From: Marc Orr @ 2019-09-17 18:57 UTC (permalink / raw)
  To: kvm, jmattson, pshier, sean.j.christopherson, krish.sadhukhan; +Cc: Marc Orr

__enter_guest() should only set the launched flag when a launch has
succeeded. Thus, don't set the launched flag when the VMX_ENTRY_FAILURE,
bit 31, is set in the VMCS exit reason.

Signed-off-by: Marc Orr <marcorr@google.com>
---
 x86/vmx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/x86/vmx.c b/x86/vmx.c
index 6079420db33a..7313c78f15c2 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -1820,7 +1820,7 @@ static void __enter_guest(u8 abort_flag, struct vmentry_failure *failure)
 		abort();
 	}
 
-	if (!failure->early) {
+	if (!failure->early && !(vmcs_read(EXI_REASON) & VMX_ENTRY_FAILURE)) {
 		launched = 1;
 		check_for_guest_termination();
 	}
-- 
2.23.0.237.gc6a4ce50a0-goog



* [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 18:57 [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest() Marc Orr
@ 2019-09-17 18:57 ` Marc Orr
  2019-09-17 19:47   ` Sean Christopherson
  2019-09-17 21:37   ` Krish Sadhukhan
  2019-09-17 19:14 ` [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest() Krish Sadhukhan
  1 sibling, 2 replies; 9+ messages in thread
From: Marc Orr @ 2019-09-17 18:57 UTC (permalink / raw)
  To: kvm, jmattson, pshier, sean.j.christopherson, krish.sadhukhan; +Cc: Marc Orr

Exercise nested VMX's atomic MSR switch code (e.g., VM-entry MSR-load
list) at the maximum number of MSRs supported, as described in the SDM,
in the appendix chapter titled "MISCELLANEOUS DATA".

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
---
 lib/alloc_page.c  |   5 ++
 lib/alloc_page.h  |   1 +
 x86/unittests.cfg |   2 +-
 x86/vmx_tests.c   | 131 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 138 insertions(+), 1 deletion(-)

diff --git a/lib/alloc_page.c b/lib/alloc_page.c
index 97d13395ff08..ed236389537e 100644
--- a/lib/alloc_page.c
+++ b/lib/alloc_page.c
@@ -53,6 +53,11 @@ void free_pages(void *mem, unsigned long size)
 	spin_unlock(&lock);
 }
 
+void free_pages_by_order(void *mem, unsigned long order)
+{
+	free_pages(mem, 1ul << (order + PAGE_SHIFT));
+}
+
 void *alloc_page()
 {
 	void *p;
diff --git a/lib/alloc_page.h b/lib/alloc_page.h
index 5cdfec57a0a8..739a91def979 100644
--- a/lib/alloc_page.h
+++ b/lib/alloc_page.h
@@ -14,5 +14,6 @@ void *alloc_page(void);
 void *alloc_pages(unsigned long order);
 void free_page(void *page);
 void free_pages(void *mem, unsigned long size);
+void free_pages_by_order(void *mem, unsigned long order);
 
 #endif
diff --git a/x86/unittests.cfg b/x86/unittests.cfg
index 694ee3d42f3a..05122cf91ea1 100644
--- a/x86/unittests.cfg
+++ b/x86/unittests.cfg
@@ -227,7 +227,7 @@ extra_params = -cpu qemu64,+umip
 
 [vmx]
 file = vmx.flat
-extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test"
+extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test -atomic_switch_overflow_msrs_test"
 arch = x86_64
 groups = vmx
 
diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
index f035f24a771a..fb665f38b1e5 100644
--- a/x86/vmx_tests.c
+++ b/x86/vmx_tests.c
@@ -8570,6 +8570,134 @@ static int invalid_msr_entry_failure(struct vmentry_failure *failure)
 	return VMX_TEST_VMEXIT;
 }
 
+/*
+ * The max number of MSRs in an atomic switch MSR list is:
+ * (111B + 1) * 512 = 4096
+ *
+ * Each list entry consumes:
+ * 4-byte MSR index + 4 bytes reserved + 8-byte data = 16 bytes
+ *
+ * Allocate 128 kB to cover max_msr_list_size (i.e., 64 kB) and then some.
+ */
+static const u32 msr_list_page_order = 5;
+
+static void atomic_switch_msr_limit_test_guest(void)
+{
+	vmcall();
+}
+
+static void populate_msr_list(struct vmx_msr_entry *msr_list,
+			      size_t byte_capacity, int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		msr_list[i].index = MSR_IA32_TSC;
+		msr_list[i].reserved = 0;
+		msr_list[i].value = 0x1234567890abcdef;
+	}
+
+	memset(msr_list + count, 0xff,
+	       byte_capacity - count * sizeof(*msr_list));
+}
+
+static int max_msr_list_size(void)
+{
+	u32 vmx_misc = rdmsr(MSR_IA32_VMX_MISC);
+	u32 factor = ((vmx_misc & GENMASK(27, 25)) >> 25) + 1;
+
+	return factor * 512;
+}
+
+static void atomic_switch_msrs_test(int count)
+{
+	struct vmx_msr_entry *vm_enter_load;
+        struct vmx_msr_entry *vm_exit_load;
+        struct vmx_msr_entry *vm_exit_store;
+	int max_allowed = max_msr_list_size();
+	int byte_capacity = 1ul << (msr_list_page_order + PAGE_SHIFT);
+	/* Exceeding the max MSR list size at exit trigers KVM to abort. */
+	int exit_count = count > max_allowed ? max_allowed : count;
+	int cleanup_count = count > max_allowed ? 2 : 1;
+	int i;
+
+	/*
+	 * Check for the IA32_TSC MSR,
+	 * available with the "TSC flag" and used to populate the MSR lists.
+	 */
+	if (!(cpuid(1).d & (1 << 4))) {
+		report_skip(__func__);
+		return;
+	}
+
+	/* Set L2 guest. */
+	test_set_guest(atomic_switch_msr_limit_test_guest);
+
+	/* Setup atomic MSR switch lists. */
+	vm_enter_load = alloc_pages(msr_list_page_order);
+	vm_exit_load = alloc_pages(msr_list_page_order);
+	vm_exit_store = alloc_pages(msr_list_page_order);
+
+	vmcs_write(ENTER_MSR_LD_ADDR, (u64)vm_enter_load);
+	vmcs_write(EXIT_MSR_LD_ADDR, (u64)vm_exit_load);
+	vmcs_write(EXIT_MSR_ST_ADDR, (u64)vm_exit_store);
+
+	/*
+	 * VM-Enter should succeed up to the max number of MSRs per list, and
+	 * should not consume junk beyond the last entry.
+	 */
+	populate_msr_list(vm_enter_load, byte_capacity, count);
+	populate_msr_list(vm_exit_load, byte_capacity, exit_count);
+	populate_msr_list(vm_exit_store, byte_capacity, exit_count);
+
+	vmcs_write(ENT_MSR_LD_CNT, count);
+	vmcs_write(EXI_MSR_LD_CNT, exit_count);
+	vmcs_write(EXI_MSR_ST_CNT, exit_count);
+
+	if (count <= max_allowed) {
+		enter_guest();
+		assert_exit_reason(VMX_VMCALL);
+		skip_exit_vmcall();
+	} else {
+		u32 exit_reason;
+		u32 exit_reason_want;
+		u32 exit_qual;
+
+		enter_guest_with_invalid_guest_state();
+
+		exit_reason = vmcs_read(EXI_REASON);
+		exit_reason_want = VMX_FAIL_MSR | VMX_ENTRY_FAILURE;
+		report("exit_reason, %u, is %u.",
+		       exit_reason == exit_reason_want, exit_reason,
+		       exit_reason_want);
+
+		exit_qual = vmcs_read(EXI_QUALIFICATION);
+		report("exit_qual, %u, is %u.", exit_qual == max_allowed + 1,
+		       exit_qual, max_allowed + 1);
+	}
+
+	/* Cleanup. */
+	vmcs_write(ENT_MSR_LD_CNT, 0);
+	vmcs_write(EXI_MSR_LD_CNT, 0);
+	vmcs_write(EXI_MSR_ST_CNT, 0);
+	for (i = 0; i < cleanup_count; i++) {
+		enter_guest();
+		skip_exit_vmcall();
+	}
+	free_pages_by_order(vm_enter_load, msr_list_page_order);
+	free_pages_by_order(vm_exit_load, msr_list_page_order);
+	free_pages_by_order(vm_exit_store, msr_list_page_order);
+}
+
+static void atomic_switch_max_msrs_test(void)
+{
+	atomic_switch_msrs_test(max_msr_list_size());
+}
+
+static void atomic_switch_overflow_msrs_test(void)
+{
+	atomic_switch_msrs_test(max_msr_list_size() + 1);
+}
 
 #define TEST(name) { #name, .v2 = name }
 
@@ -8660,5 +8788,8 @@ struct vmx_test vmx_tests[] = {
 	TEST(ept_access_test_paddr_read_execute_ad_enabled),
 	TEST(ept_access_test_paddr_not_present_page_fault),
 	TEST(ept_access_test_force_2m_page),
+	/* Atomic MSR switch tests. */
+	TEST(atomic_switch_max_msrs_test),
+	TEST(atomic_switch_overflow_msrs_test),
 	{ NULL, NULL, NULL, NULL, NULL, {0} },
 };
-- 
2.23.0.237.gc6a4ce50a0-goog



* Re: [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest()
  2019-09-17 18:57 [kvm-unit-tests PATCH v3 1/2] x86: nvmx: fix bug in __enter_guest() Marc Orr
  2019-09-17 18:57 ` [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs Marc Orr
@ 2019-09-17 19:14 ` Krish Sadhukhan
  1 sibling, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2019-09-17 19:14 UTC (permalink / raw)
  To: Marc Orr, kvm, jmattson, pshier, sean.j.christopherson



On 09/17/2019 11:57 AM, Marc Orr wrote:
> __enter_guest() should only set the launched flag when a launch has
> succeeded. Thus, don't set the launched flag when the VMX_ENTRY_FAILURE,
> bit 31, is set in the VMCS exit reason.
>
> Signed-off-by: Marc Orr <marcorr@google.com>
> ---
>   x86/vmx.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/x86/vmx.c b/x86/vmx.c
> index 6079420db33a..7313c78f15c2 100644
> --- a/x86/vmx.c
> +++ b/x86/vmx.c
> @@ -1820,7 +1820,7 @@ static void __enter_guest(u8 abort_flag, struct vmentry_failure *failure)
>   		abort();
>   	}
>   
> -	if (!failure->early) {
> +	if (!failure->early && !(vmcs_read(EXI_REASON) & VMX_ENTRY_FAILURE)) {
>   		launched = 1;
>   		check_for_guest_termination();
>   	}
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>


* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 18:57 ` [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs Marc Orr
@ 2019-09-17 19:47   ` Sean Christopherson
  2019-09-17 20:16     ` Marc Orr
  2019-09-17 21:37   ` Krish Sadhukhan
  1 sibling, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2019-09-17 19:47 UTC (permalink / raw)
  To: Marc Orr; +Cc: kvm, jmattson, pshier, krish.sadhukhan

On Tue, Sep 17, 2019 at 11:57:53AM -0700, Marc Orr wrote:
> Exercise nested VMX's atomic MSR switch code (e.g., VM-entry MSR-load
> list) at the maximum number of MSRs supported, as described in the SDM,
> in the appendix chapter titled "MISCELLANEOUS DATA".
> 
> Suggested-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Marc Orr <marcorr@google.com>
> ---
>  lib/alloc_page.c  |   5 ++
>  lib/alloc_page.h  |   1 +
>  x86/unittests.cfg |   2 +-
>  x86/vmx_tests.c   | 131 ++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 138 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> index 97d13395ff08..ed236389537e 100644
> --- a/lib/alloc_page.c
> +++ b/lib/alloc_page.c
> @@ -53,6 +53,11 @@ void free_pages(void *mem, unsigned long size)
>  	spin_unlock(&lock);
>  }
>  
> +void free_pages_by_order(void *mem, unsigned long order)
> +{
> +	free_pages(mem, 1ul << (order + PAGE_SHIFT));
> +}
> +
>  void *alloc_page()
>  {
>  	void *p;
> diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> index 5cdfec57a0a8..739a91def979 100644
> --- a/lib/alloc_page.h
> +++ b/lib/alloc_page.h
> @@ -14,5 +14,6 @@ void *alloc_page(void);
>  void *alloc_pages(unsigned long order);
>  void free_page(void *page);
>  void free_pages(void *mem, unsigned long size);
> +void free_pages_by_order(void *mem, unsigned long order);
>  
>  #endif
> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> index 694ee3d42f3a..05122cf91ea1 100644
> --- a/x86/unittests.cfg
> +++ b/x86/unittests.cfg
> @@ -227,7 +227,7 @@ extra_params = -cpu qemu64,+umip
>  
>  [vmx]
>  file = vmx.flat
> -extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test"
> +extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test -atomic_switch_overflow_msrs_test"
>  arch = x86_64
>  groups = vmx
>  
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index f035f24a771a..fb665f38b1e5 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -8570,6 +8570,134 @@ static int invalid_msr_entry_failure(struct vmentry_failure *failure)
>  	return VMX_TEST_VMEXIT;
>  }
>  
> +/*
> + * The max number of MSRs in an atomic switch MSR list is:
> + * (111B + 1) * 512 = 4096
> + *
> + * Each list entry consumes:
> + * 4-byte MSR index + 4 bytes reserved + 8-byte data = 16 bytes
> + *
> + * Allocate 128 kB to cover max_msr_list_size (i.e., 64 kB) and then some.
> + */
> +static const u32 msr_list_page_order = 5;
> +
> +static void atomic_switch_msr_limit_test_guest(void)
> +{
> +	vmcall();
> +}
> +
> +static void populate_msr_list(struct vmx_msr_entry *msr_list,
> +			      size_t byte_capacity, int count)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		msr_list[i].index = MSR_IA32_TSC;
> +		msr_list[i].reserved = 0;
> +		msr_list[i].value = 0x1234567890abcdef;
> +	}
> +
> +	memset(msr_list + count, 0xff,
> +	       byte_capacity - count * sizeof(*msr_list));
> +}
> +
> +static int max_msr_list_size(void)
> +{
> +	u32 vmx_misc = rdmsr(MSR_IA32_VMX_MISC);
> +	u32 factor = ((vmx_misc & GENMASK(27, 25)) >> 25) + 1;
> +
> +	return factor * 512;
> +}
> +
> +static void atomic_switch_msrs_test(int count)
> +{
> +	struct vmx_msr_entry *vm_enter_load;
> +        struct vmx_msr_entry *vm_exit_load;
> +        struct vmx_msr_entry *vm_exit_store;

Spaces need to be replaced with tabs.

> +	int max_allowed = max_msr_list_size();
> +	int byte_capacity = 1ul << (msr_list_page_order + PAGE_SHIFT);
> +	/* Exceeding the max MSR list size at exit trigers KVM to abort. */

s/trigers/triggers

The whole sentence is kinda funky.  Maybe

	/* KVM signals VM-Abort if an exit MSR list exceeds the max size. */

> +	int exit_count = count > max_allowed ? max_allowed : count;

If only we had MIN()... ;-)

> +	int cleanup_count = count > max_allowed ? 2 : 1;
> +	int i;
> +
> +	/*
> +	 * Check for the IA32_TSC MSR,
> +	 * available with the "TSC flag" and used to populate the MSR lists.
> +	 */
> +	if (!(cpuid(1).d & (1 << 4))) {
> +		report_skip(__func__);
> +		return;
> +	}
> +
> +	/* Set L2 guest. */
> +	test_set_guest(atomic_switch_msr_limit_test_guest);
> +
> +	/* Setup atomic MSR switch lists. */
> +	vm_enter_load = alloc_pages(msr_list_page_order);
> +	vm_exit_load = alloc_pages(msr_list_page_order);
> +	vm_exit_store = alloc_pages(msr_list_page_order);
> +
> +	vmcs_write(ENTER_MSR_LD_ADDR, (u64)vm_enter_load);
> +	vmcs_write(EXIT_MSR_LD_ADDR, (u64)vm_exit_load);
> +	vmcs_write(EXIT_MSR_ST_ADDR, (u64)vm_exit_store);
> +
> +	/*
> +	 * VM-Enter should succeed up to the max number of MSRs per list, and
> +	 * should not consume junk beyond the last entry.
> +	 */
> +	populate_msr_list(vm_enter_load, byte_capacity, count);
> +	populate_msr_list(vm_exit_load, byte_capacity, exit_count);
> +	populate_msr_list(vm_exit_store, byte_capacity, exit_count);
> +
> +	vmcs_write(ENT_MSR_LD_CNT, count);
> +	vmcs_write(EXI_MSR_LD_CNT, exit_count);
> +	vmcs_write(EXI_MSR_ST_CNT, exit_count);
> +
> +	if (count <= max_allowed) {
> +		enter_guest();
> +		assert_exit_reason(VMX_VMCALL);
> +		skip_exit_vmcall();
> +	} else {
> +		u32 exit_reason;
> +		u32 exit_reason_want;
> +		u32 exit_qual;
> +
> +		enter_guest_with_invalid_guest_state();
> +
> +		exit_reason = vmcs_read(EXI_REASON);
> +		exit_reason_want = VMX_FAIL_MSR | VMX_ENTRY_FAILURE;
> +		report("exit_reason, %u, is %u.",
> +		       exit_reason == exit_reason_want, exit_reason,
> +		       exit_reason_want);
> +
> +		exit_qual = vmcs_read(EXI_QUALIFICATION);
> +		report("exit_qual, %u, is %u.", exit_qual == max_allowed + 1,
> +		       exit_qual, max_allowed + 1);
> +	}
> +
> +	/* Cleanup. */
> +	vmcs_write(ENT_MSR_LD_CNT, 0);
> +	vmcs_write(EXI_MSR_LD_CNT, 0);
> +	vmcs_write(EXI_MSR_ST_CNT, 0);
> +	for (i = 0; i < cleanup_count; i++) {
> +		enter_guest();
> +		skip_exit_vmcall();
> +	}

I'm missing something, why do we need to reenter the guest after setting
the count to 0?

> +	free_pages_by_order(vm_enter_load, msr_list_page_order);
> +	free_pages_by_order(vm_exit_load, msr_list_page_order);
> +	free_pages_by_order(vm_exit_store, msr_list_page_order);
> +}
> +
> +static void atomic_switch_max_msrs_test(void)
> +{
> +	atomic_switch_msrs_test(max_msr_list_size());
> +}
> +
> +static void atomic_switch_overflow_msrs_test(void)
> +{
> +	atomic_switch_msrs_test(max_msr_list_size() + 1);
> +}
>  
>  #define TEST(name) { #name, .v2 = name }
>  
> @@ -8660,5 +8788,8 @@ struct vmx_test vmx_tests[] = {
>  	TEST(ept_access_test_paddr_read_execute_ad_enabled),
>  	TEST(ept_access_test_paddr_not_present_page_fault),
>  	TEST(ept_access_test_force_2m_page),
> +	/* Atomic MSR switch tests. */
> +	TEST(atomic_switch_max_msrs_test),
> +	TEST(atomic_switch_overflow_msrs_test),
>  	{ NULL, NULL, NULL, NULL, NULL, {0} },
>  };
> -- 
> 2.23.0.237.gc6a4ce50a0-goog
> 


* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 19:47   ` Sean Christopherson
@ 2019-09-17 20:16     ` Marc Orr
  2019-09-17 22:28       ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Marc Orr @ 2019-09-17 20:16 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, Jim Mattson, Peter Shier, Krish Sadhukhan

> > +static void atomic_switch_msrs_test(int count)
> > +{
> > +     struct vmx_msr_entry *vm_enter_load;
> > +        struct vmx_msr_entry *vm_exit_load;
> > +        struct vmx_msr_entry *vm_exit_store;
>
> Spaces need to be replaced with tabs.

Done.

> > +     /* Exceeding the max MSR list size at exit trigers KVM to abort. */
>
> s/trigers/triggers
>
> The whole sentence is kinda funky.  Maybe

Done.

> > +     int exit_count = count > max_allowed ? max_allowed : count;
>
> If only we had MIN()... ;-)

Done, thanks :-).
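
(Assuming the MIN() helper from lib/libcflat.h, the updated line presumably
ends up roughly as:

	int exit_count = MIN(count, max_allowed);

in the next revision.)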

> > +     /* Cleanup. */
> > +     vmcs_write(ENT_MSR_LD_CNT, 0);
> > +     vmcs_write(EXI_MSR_LD_CNT, 0);
> > +     vmcs_write(EXI_MSR_ST_CNT, 0);
> > +     for (i = 0; i < cleanup_count; i++) {
> > +             enter_guest();
> > +             skip_exit_vmcall();
> > +     }
>
> I'm missing something, why do we need to reenter the guest after setting
> the count to 0?

It's for the failure code path, which fails to get into the guest and
therefore never skips the single vmcall. I've refactored the code to
make this clear. Let me know what you think.
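
As a sketch only (not necessarily the exact refactor), the intent is
something along these lines, using just the helpers already in the patch:

	/*
	 * An overflowed VM-Entry never reaches L2, so in that case the
	 * guest's single vmcall() still has to be consumed before the
	 * test can finish.
	 */
	if (count > max_allowed) {
		enter_guest();
		skip_exit_vmcall();
	}
	/* Let L2 run to completion. */
	enter_guest();
	skip_exit_vmcall();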


* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 18:57 ` [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs Marc Orr
  2019-09-17 19:47   ` Sean Christopherson
@ 2019-09-17 21:37   ` Krish Sadhukhan
  2019-09-18 22:24     ` Marc Orr
  1 sibling, 1 reply; 9+ messages in thread
From: Krish Sadhukhan @ 2019-09-17 21:37 UTC (permalink / raw)
  To: Marc Orr, kvm, jmattson, pshier, sean.j.christopherson



On 09/17/2019 11:57 AM, Marc Orr wrote:
> Exercise nested VMX's atomic MSR switch code (e.g., VM-entry MSR-load
> list) at the maximum number of MSRs supported, as described in the SDM,
> in the appendix chapter titled "MISCELLANEOUS DATA".
>
> Suggested-by: Jim Mattson <jmattson@google.com>
> Signed-off-by: Marc Orr <marcorr@google.com>
> ---
>   lib/alloc_page.c  |   5 ++
>   lib/alloc_page.h  |   1 +
>   x86/unittests.cfg |   2 +-
>   x86/vmx_tests.c   | 131 ++++++++++++++++++++++++++++++++++++++++++++++
>   4 files changed, 138 insertions(+), 1 deletion(-)
>
> diff --git a/lib/alloc_page.c b/lib/alloc_page.c
> index 97d13395ff08..ed236389537e 100644
> --- a/lib/alloc_page.c
> +++ b/lib/alloc_page.c
> @@ -53,6 +53,11 @@ void free_pages(void *mem, unsigned long size)
>   	spin_unlock(&lock);
>   }
>   
> +void free_pages_by_order(void *mem, unsigned long order)
> +{
> +	free_pages(mem, 1ul << (order + PAGE_SHIFT));
> +}
> +
>   void *alloc_page()
>   {
>   	void *p;
> diff --git a/lib/alloc_page.h b/lib/alloc_page.h
> index 5cdfec57a0a8..739a91def979 100644
> --- a/lib/alloc_page.h
> +++ b/lib/alloc_page.h
> @@ -14,5 +14,6 @@ void *alloc_page(void);
>   void *alloc_pages(unsigned long order);
>   void free_page(void *page);
>   void free_pages(void *mem, unsigned long size);
> +void free_pages_by_order(void *mem, unsigned long order);
>   
>   #endif
> diff --git a/x86/unittests.cfg b/x86/unittests.cfg
> index 694ee3d42f3a..05122cf91ea1 100644
> --- a/x86/unittests.cfg
> +++ b/x86/unittests.cfg
> @@ -227,7 +227,7 @@ extra_params = -cpu qemu64,+umip
>   
>   [vmx]
>   file = vmx.flat
> -extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test"
> +extra_params = -cpu host,+vmx -append "-exit_monitor_from_l2_test -ept_access* -vmx_smp* -vmx_vmcs_shadow_test -atomic_switch_overflow_msrs_test"
>   arch = x86_64
>   groups = vmx
>   
> diff --git a/x86/vmx_tests.c b/x86/vmx_tests.c
> index f035f24a771a..fb665f38b1e5 100644
> --- a/x86/vmx_tests.c
> +++ b/x86/vmx_tests.c
> @@ -8570,6 +8570,134 @@ static int invalid_msr_entry_failure(struct vmentry_failure *failure)
>   	return VMX_TEST_VMEXIT;
>   }
>   
> +/*
> + * The max number of MSRs in an atomic switch MSR list is:
> + * (111B + 1) * 512 = 4096
> + *
> + * Each list entry consumes:
> + * 4-byte MSR index + 4 bytes reserved + 8-byte data = 16 bytes
> + *
> + * Allocate 128 kB to cover max_msr_list_size (i.e., 64 kB) and then some.
> + */
> +static const u32 msr_list_page_order = 5;
> +
> +static void atomic_switch_msr_limit_test_guest(void)
> +{
> +	vmcall();
> +}
> +
> +static void populate_msr_list(struct vmx_msr_entry *msr_list,
> +			      size_t byte_capacity, int count)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		msr_list[i].index = MSR_IA32_TSC;
> +		msr_list[i].reserved = 0;
> +		msr_list[i].value = 0x1234567890abcdef;
> +	}
> +
> +	memset(msr_list + count, 0xff,
> +	       byte_capacity - count * sizeof(*msr_list));
> +}
> +
> +static int max_msr_list_size(void)
> +{
> +	u32 vmx_misc = rdmsr(MSR_IA32_VMX_MISC);
> +	u32 factor = ((vmx_misc & GENMASK(27, 25)) >> 25) + 1;
> +
> +	return factor * 512;
> +}
> +
> +static void atomic_switch_msrs_test(int count)
> +{
> +	struct vmx_msr_entry *vm_enter_load;
> +        struct vmx_msr_entry *vm_exit_load;
> +        struct vmx_msr_entry *vm_exit_store;

Is it possible to re-use the existing pointers in vmx_tests.c,

     struct vmx_msr_entry *exit_msr_store, *entry_msr_load, *exit_msr_load;


  instead of using local pointers?

> +	int max_allowed = max_msr_list_size();
> +	int byte_capacity = 1ul << (msr_list_page_order + PAGE_SHIFT);
> +	/* Exceeding the max MSR list size at exit trigers KVM to abort. */
> +	int exit_count = count > max_allowed ? max_allowed : count;
> +	int cleanup_count = count > max_allowed ? 2 : 1;
> +	int i;
> +
> +	/*
> +	 * Check for the IA32_TSC MSR,
> +	 * available with the "TSC flag" and used to populate the MSR lists.
> +	 */
> +	if (!(cpuid(1).d & (1 << 4))) {
> +		report_skip(__func__);
> +		return;
> +	}
> +
> +	/* Set L2 guest. */
> +	test_set_guest(atomic_switch_msr_limit_test_guest);

Is it possible to directly pass the vmcall() function instead of 
creating a wrapper?

> +
> +	/* Setup atomic MSR switch lists. */
> +	vm_enter_load = alloc_pages(msr_list_page_order);
> +	vm_exit_load = alloc_pages(msr_list_page_order);
> +	vm_exit_store = alloc_pages(msr_list_page_order);
> +
> +	vmcs_write(ENTER_MSR_LD_ADDR, (u64)vm_enter_load);
> +	vmcs_write(EXIT_MSR_LD_ADDR, (u64)vm_exit_load);
> +	vmcs_write(EXIT_MSR_ST_ADDR, (u64)vm_exit_store);
> +
> +	/*
> +	 * VM-Enter should succeed up to the max number of MSRs per list, and
> +	 * should not consume junk beyond the last entry.
> +	 */
> +	populate_msr_list(vm_enter_load, byte_capacity, count);
> +	populate_msr_list(vm_exit_load, byte_capacity, exit_count);
> +	populate_msr_list(vm_exit_store, byte_capacity, exit_count);
> +
> +	vmcs_write(ENT_MSR_LD_CNT, count);
> +	vmcs_write(EXI_MSR_LD_CNT, exit_count);
> +	vmcs_write(EXI_MSR_ST_CNT, exit_count);
> +
> +	if (count <= max_allowed) {
> +		enter_guest();
> +		assert_exit_reason(VMX_VMCALL);

This is redundant because skip_exit_vmcall() calls this also.

> +		skip_exit_vmcall();
> +	} else {
> +		u32 exit_reason;
> +		u32 exit_reason_want;
> +		u32 exit_qual;
> +
> +		enter_guest_with_invalid_guest_state();
> +
> +		exit_reason = vmcs_read(EXI_REASON);
> +		exit_reason_want = VMX_FAIL_MSR | VMX_ENTRY_FAILURE;
> +		report("exit_reason, %u, is %u.",
> +		       exit_reason == exit_reason_want, exit_reason,
> +		       exit_reason_want);
> +
> +		exit_qual = vmcs_read(EXI_QUALIFICATION);
> +		report("exit_qual, %u, is %u.", exit_qual == max_allowed + 1,
> +		       exit_qual, max_allowed + 1);
> +	}
> +
> +	/* Cleanup. */
> +	vmcs_write(ENT_MSR_LD_CNT, 0);
> +	vmcs_write(EXI_MSR_LD_CNT, 0);
> +	vmcs_write(EXI_MSR_ST_CNT, 0);
> +	for (i = 0; i < cleanup_count; i++) {
> +		enter_guest();
> +		skip_exit_vmcall();
> +	}
> +	free_pages_by_order(vm_enter_load, msr_list_page_order);
> +	free_pages_by_order(vm_exit_load, msr_list_page_order);
> +	free_pages_by_order(vm_exit_store, msr_list_page_order);

Since the 2nd argument to the function is not changing, is there any 
particular reason for keeping it?

> +}
> +
> +static void atomic_switch_max_msrs_test(void)
> +{
> +	atomic_switch_msrs_test(max_msr_list_size());
> +}
> +
> +static void atomic_switch_overflow_msrs_test(void)
> +{
> +	atomic_switch_msrs_test(max_msr_list_size() + 1);
> +}
>   
>   #define TEST(name) { #name, .v2 = name }
>   
> @@ -8660,5 +8788,8 @@ struct vmx_test vmx_tests[] = {
>   	TEST(ept_access_test_paddr_read_execute_ad_enabled),
>   	TEST(ept_access_test_paddr_not_present_page_fault),
>   	TEST(ept_access_test_force_2m_page),
> +	/* Atomic MSR switch tests. */
> +	TEST(atomic_switch_max_msrs_test),
> +	TEST(atomic_switch_overflow_msrs_test),

Actually, it should be a single executable. 
'atomic_switch_max_msrs_test' is the positive version and 
'atomic_switch_overflow_msrs_test' is the negative version of the 
MSR-list tests, and so these should be enveloped in the same test.

>   	{ NULL, NULL, NULL, NULL, NULL, {0} },
>   };



* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 20:16     ` Marc Orr
@ 2019-09-17 22:28       ` Sean Christopherson
  2019-09-17 22:35         ` Marc Orr
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2019-09-17 22:28 UTC (permalink / raw)
  To: Marc Orr; +Cc: kvm, Jim Mattson, Peter Shier, Krish Sadhukhan

On Tue, Sep 17, 2019 at 01:16:27PM -0700, Marc Orr wrote:
> > > +     /* Cleanup. */
> > > +     vmcs_write(ENT_MSR_LD_CNT, 0);
> > > +     vmcs_write(EXI_MSR_LD_CNT, 0);
> > > +     vmcs_write(EXI_MSR_ST_CNT, 0);
> > > +     for (i = 0; i < cleanup_count; i++) {
> > > +             enter_guest();
> > > +             skip_exit_vmcall();
> > > +     }
> >
> > I'm missing something, why do we need to reenter the guest after setting
> > the count to 0?
> 
> It's for the failure code path, which fails to get into the guest and
> skip the single vmcall(). I've refactored the code to make this clear.
> Let me know what you think.

Why is not entering the guest a problem?


* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 22:28       ` Sean Christopherson
@ 2019-09-17 22:35         ` Marc Orr
  0 siblings, 0 replies; 9+ messages in thread
From: Marc Orr @ 2019-09-17 22:35 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, Jim Mattson, Peter Shier, Krish Sadhukhan

> > > > +     /* Cleanup. */
> > > > +     vmcs_write(ENT_MSR_LD_CNT, 0);
> > > > +     vmcs_write(EXI_MSR_LD_CNT, 0);
> > > > +     vmcs_write(EXI_MSR_ST_CNT, 0);
> > > > +     for (i = 0; i < cleanup_count; i++) {
> > > > +             enter_guest();
> > > > +             skip_exit_vmcall();
> > > > +     }
> > >
> > > I'm missing something, why do we need to reenter the guest after setting
> > > the count to 0?
> >
> > It's for the failure code path, which fails to get into the guest and
> > skip the single vmcall(). I've refactored the code to make this clear.
> > Let me know what you think.
>
> Why is not entering the guest a problem?

The vmx tests check that the L2 guest has completed. So we need to
advance the L2 RIP past the single vmcall. Technically, we don't need
to enter the guest to do that. Entering the guest and calling
skip_exit_vmcall() feels like a convenient, clean way to do this. But
I'm happy to directly advance the RIP if you think that's better. Let
me know what you think.
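
For illustration only (assuming the GUEST_RIP encoding from vmx.h, and that
vmcall is a 3-byte instruction), directly advancing the RIP would look
roughly like:

	/* Skip L2's vmcall without re-entering the guest. */
	vmcs_write(GUEST_RIP, vmcs_read(GUEST_RIP) + 3);

but re-using enter_guest() + skip_exit_vmcall() avoids hard-coding the
instruction length.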


* Re: [kvm-unit-tests PATCH v3 2/2] x86: nvmx: test max atomic switch MSRs
  2019-09-17 21:37   ` Krish Sadhukhan
@ 2019-09-18 22:24     ` Marc Orr
  0 siblings, 0 replies; 9+ messages in thread
From: Marc Orr @ 2019-09-18 22:24 UTC (permalink / raw)
  To: Krish Sadhukhan; +Cc: kvm, Jim Mattson, Peter Shier, Sean Christopherson

> > +     struct vmx_msr_entry *vm_enter_load;
> > +        struct vmx_msr_entry *vm_exit_load;
> > +        struct vmx_msr_entry *vm_exit_store;
>
> Is it possible to re-use the existing pointers in vmx_tests.c,
>
>      struct vmx_msr_entry *exit_msr_store, *entry_msr_load, *exit_msr_load;
>
>
>   instead of using local pointers ?

Done.

> > +     int max_allowed = max_msr_list_size();
> > +     int byte_capacity = 1ul << (msr_list_page_order + PAGE_SHIFT);
> > +     /* Exceeding the max MSR list size at exit trigers KVM to abort. */
> > +     int exit_count = count > max_allowed ? max_allowed : count;
> > +     int cleanup_count = count > max_allowed ? 2 : 1;
> > +     int i;
> > +
> > +     /*
> > +      * Check for the IA32_TSC MSR,
> > +      * available with the "TSC flag" and used to populate the MSR lists.
> > +      */
> > +     if (!(cpuid(1).d & (1 << 4))) {
> > +             report_skip(__func__);
> > +             return;
> > +     }
> > +
> > +     /* Set L2 guest. */
> > +     test_set_guest(atomic_switch_msr_limit_test_guest);
>
> Is it possible to directly pass the vmcall() function instead of
> creating a wrapper ?

Done.

> > +     if (count <= max_allowed) {
> > +             enter_guest();
> > +             assert_exit_reason(VMX_VMCALL);
>
> This is redundant because skip_exit_vmcall() calls this also.

Done.

> > +     free_pages_by_order(vm_enter_load, msr_list_page_order);
> > +     free_pages_by_order(vm_exit_load, msr_list_page_order);
> > +     free_pages_by_order(vm_exit_store, msr_list_page_order);
>
> Since the 2nd argument to the function is not changing, is there any
> particular reason for keeping it ?

The second argument has to match what was passed into the respective
alloc_pages() call, per my understanding.

> > +     /* Atomic MSR switch tests. */
> > +     TEST(atomic_switch_max_msrs_test),
> > +     TEST(atomic_switch_overflow_msrs_test),
>
> Actually, it should be a single executable.
> 'atomic_switch_max_msrs_test' is the positive version and
> 'atomic_switch_overflow_msrs_test' is the negative version of the
> MSR-lists, and so these should be enveloped in the same test.

They're separated because the negative version,
atomic_switch_overflow_msrs_test, may behave unexpectedly on real
hardware. This way, the negative version can easily be off by default
in the unittests.cfg file, and opted into as desired. Let me know if
there's a different, preferred, way to do this (e.g., via a flag) with
the two scenarios enveloped within a single test. However, this is
what Sean suggested in a prior code review and it seems like a good
solution to me.
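
For example, a dedicated unittests.cfg stanza (hypothetical, not part of
this patch) could opt the negative test back in where it is known to be
safe, following the same format as the existing [vmx] entry:

[vmx_atomic_switch_overflow]
file = vmx.flat
extra_params = -cpu host,+vmx -append "atomic_switch_overflow_msrs_test"
arch = x86_64
groups = vmx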


