* [kvm-unit-tests PATCH 0/5] x86: svm: fixes
@ 2020-06-30  9:45 Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 1/5] x86: Remove boot_idt assembly assignment Nadav Amit
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

Excluding the first patch, these are bug fixes for the svm tests that make
the tests more robust and less KVM-dependent.

The fourth patch might indicate there is a bug in KVM, but I did not get
to check it.

Nadav Amit (5):
  x86: Remove boot_idt assembly assignment
  x86: svm: check TSC adjust support
  x86: svm: flush TLB on each test
  x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare
  x86: svm: avoid advancing rip incorrectly on exc_inject

 x86/cstart64.S  |  3 ---
 x86/svm.c       |  1 +
 x86/svm_tests.c | 15 ++++++++++-----
 3 files changed, 11 insertions(+), 8 deletions(-)

-- 
2.25.1



* [kvm-unit-tests PATCH 1/5] x86: Remove boot_idt assembly assignment
  2020-06-30  9:45 [kvm-unit-tests PATCH 0/5] x86: svm: fixes Nadav Amit
@ 2020-06-30  9:45 ` Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 2/5] x86: svm: check TSC adjust support Nadav Amit
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

boot_idt is now a proper symbol, so remove the stale assembly assignments
that set it to zero.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 x86/cstart64.S | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/x86/cstart64.S b/x86/cstart64.S
index b44d0ae..fabcdbf 100644
--- a/x86/cstart64.S
+++ b/x86/cstart64.S
@@ -2,15 +2,12 @@
 #include "apic-defs.h"
 
 .globl boot_idt
-boot_idt = 0
 
 .globl idt_descr
 .globl tss_descr
 .globl gdt64_desc
 .globl online_cpus
 
-boot_idt = 0
-
 ipi_vector = 0x20
 
 max_cpus = MAX_TEST_CPUS
-- 
2.25.1


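For context: the removed "boot_idt = 0" lines define boot_idt as an absolute
assembler symbol with value zero, while the commit message says boot_idt is
now a real symbol. A minimal sketch of the kind of C-side definition this
implies; the field layout, array size and location are assumptions for
illustration, not taken from this thread:

#include <stdint.h>

/* Assumed C-side definition that makes the assembly assignment stale. */
typedef struct {
	uint16_t offset0;
	uint16_t selector;
	uint16_t ist : 3, : 5, type : 4, : 1, dpl : 2, p : 1;
	uint16_t offset1;
	uint32_t offset2;
	uint32_t reserved;
} idt_entry_t;

idt_entry_t boot_idt[256];	/* a real data symbol, no absolute assignment */

The .globl boot_idt kept by the patch then merely marks the symbol global,
with the actual definition coming from elsewhere in the tree.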

* [kvm-unit-tests PATCH 2/5] x86: svm: check TSC adjust support
  2020-06-30  9:45 [kvm-unit-tests PATCH 0/5] x86: svm: fixes Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 1/5] x86: Remove boot_idt assembly assignment Nadav Amit
@ 2020-06-30  9:45 ` Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 3/5] x86: svm: flush TLB on each test Nadav Amit
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

MSR_IA32_TSC_ADJUST may be supported by KVM on AMD machines, but it does
not appear in the AMD manual. Check CPUID to see whether it is supported
before running the relevant tests.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 x86/svm_tests.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index a2c993d..92cefaf 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -893,6 +893,11 @@ static bool npt_rw_l1mmio_check(struct svm_test *test)
 #define TSC_OFFSET_VALUE    (~0ull << 48)
 static bool ok;
 
+static bool tsc_adjust_supported(void)
+{
+    return this_cpu_has(X86_FEATURE_TSC_ADJUST);
+}
+
 static void tsc_adjust_prepare(struct svm_test *test)
 {
     default_prepare(test);
@@ -2010,7 +2015,7 @@ struct svm_test svm_tests[] = {
     { "npt_rw_l1mmio", npt_supported, npt_rw_l1mmio_prepare,
       default_prepare_gif_clear, npt_rw_l1mmio_test,
       default_finished, npt_rw_l1mmio_check },
-    { "tsc_adjust", default_supported, tsc_adjust_prepare,
+    { "tsc_adjust", tsc_adjust_supported, tsc_adjust_prepare,
       default_prepare_gif_clear, tsc_adjust_test,
       default_finished, tsc_adjust_check },
     { "latency_run_exit", default_supported, latency_prepare,
-- 
2.25.1


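For reference, this_cpu_has(X86_FEATURE_TSC_ADJUST) boils down to the CPUID
enumeration of IA32_TSC_ADJUST, which lives in leaf 7, subleaf 0, EBX bit 1.
A freestanding sketch of the same check; the helper name is made up here and
the kvm-unit-tests plumbing obviously differs:

#include <stdbool.h>
#include <cpuid.h>	/* GCC/Clang wrapper around the cpuid instruction */

/*
 * Hypothetical standalone equivalent of tsc_adjust_supported():
 * IA32_TSC_ADJUST support is enumerated by CPUID.(EAX=7,ECX=0):EBX[1].
 */
static bool cpu_has_tsc_adjust(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return false;

	return ebx & (1u << 1);
}

Gating the test on this bit keeps it meaningful outside KVM, since per the
commit message KVM may expose the MSR even where AMD's documentation does
not.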

* [kvm-unit-tests PATCH 3/5] x86: svm: flush TLB on each test
  2020-06-30  9:45 [kvm-unit-tests PATCH 0/5] x86: svm: fixes Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 1/5] x86: Remove boot_idt assembly assignment Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 2/5] x86: svm: check TSC adjust support Nadav Amit
@ 2020-06-30  9:45 ` Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare Nadav Amit
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 5/5] x86: svm: avoid advancing rip incorrectly on exc_inject Nadav Amit
  4 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

Several svm tests change PTEs but do not flush the TLB. To avoid chasing
stale translations or hitting new bugs in the future, flush the TLB on
every test.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 x86/svm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/x86/svm.c b/x86/svm.c
index f35c063..0fcad8d 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -170,6 +170,7 @@ void vmcb_ident(struct vmcb *vmcb)
 	if (npt_supported()) {
 		ctrl->nested_ctl = 1;
 		ctrl->nested_cr3 = (u64)pml4e;
+		ctrl->tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
 	}
 }
 
-- 
2.25.1


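For context, tlb_ctl tells VMRUN how much of the TLB to flush when entering
the guest. The encodings below follow the names Linux uses in
arch/x86/include/asm/svm.h (values recalled from memory, so verify them
against the AMD APM before reuse):

/* VMCB TLB_CONTROL encodings. */
#define TLB_CONTROL_DO_NOTHING		0	/* keep cached translations */
#define TLB_CONTROL_FLUSH_ALL_ASID	1	/* flush the entire TLB on VMRUN */
#define TLB_CONTROL_FLUSH_ASID		3	/* flush only the current ASID */
#define TLB_CONTROL_FLUSH_ASID_LOCAL	7	/* flush this ASID's non-global entries */

Since vmcb_ident() runs for every test, setting this there (on NPT-capable
hosts, where the tests edit nested page tables) means the guest always
enters with a flushed TLB, so page-table changes made by one test cannot
linger as stale translations in the next.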

* [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare
  2020-06-30  9:45 [kvm-unit-tests PATCH 0/5] x86: svm: fixes Nadav Amit
                   ` (2 preceding siblings ...)
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 3/5] x86: svm: flush TLB on each test Nadav Amit
@ 2020-06-30  9:45 ` Nadav Amit
  2020-06-30 11:06   ` Paolo Bonzini
  2020-06-30 16:54   ` Paolo Bonzini
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 5/5] x86: svm: avoid advancing rip incorrectly on exc_inject Nadav Amit
  4 siblings, 2 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

According to the AMD manual, bit 8 in the PDPE is not reserved, but bit 7 is.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 x86/svm_tests.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 92cefaf..323031f 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -825,13 +825,13 @@ static void npt_rsvd_pfwalk_prepare(struct svm_test *test)
     vmcb_ident(vmcb);
 
     pdpe = npt_get_pdpe();
-    pdpe[0] |= (1ULL << 8);
+    pdpe[0] |= (1ULL << 7);
 }
 
 static bool npt_rsvd_pfwalk_check(struct svm_test *test)
 {
     u64 *pdpe = npt_get_pdpe();
-    pdpe[0] &= ~(1ULL << 8);
+    pdpe[0] &= ~(1ULL << 7);
 
     return (vmcb->control.exit_code == SVM_EXIT_NPF)
             && (vmcb->control.exit_info_1 == 0x20000000eULL);
-- 
2.25.1



* [kvm-unit-tests PATCH 5/5] x86: svm: avoid advancing rip incorrectly on exc_inject
  2020-06-30  9:45 [kvm-unit-tests PATCH 0/5] x86: svm: fixes Nadav Amit
                   ` (3 preceding siblings ...)
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare Nadav Amit
@ 2020-06-30  9:45 ` Nadav Amit
  4 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30  9:45 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, Nadav Amit

exc_inject advances the rip on every stage, so it can do so 3 times, but
the guest only runs 2 vmmcall instructions. So, if a failure happens on
the last stage, there is no vmmcall instruction to trigger an exit.

Advance the rip only in the two stages in which vmmcall is expected to
run.

Signed-off-by: Nadav Amit <namit@vmware.com>
---
 x86/svm_tests.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 323031f..a20aa37 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -1593,8 +1593,6 @@ static void exc_inject_test(struct svm_test *test)
 
 static bool exc_inject_finished(struct svm_test *test)
 {
-    vmcb->save.rip += 3;
-
     switch (get_test_stage(test)) {
     case 0:
         if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
@@ -1602,6 +1600,7 @@ static bool exc_inject_finished(struct svm_test *test)
                    vmcb->control.exit_code);
             return true;
         }
+        vmcb->save.rip += 3;
         vmcb->control.event_inj = NMI_VECTOR | SVM_EVTINJ_TYPE_EXEPT | SVM_EVTINJ_VALID;
         break;
 
@@ -1621,6 +1620,7 @@ static bool exc_inject_finished(struct svm_test *test)
                    vmcb->control.exit_code);
             return true;
         }
+        vmcb->save.rip += 3;
         report(count_exc == 1, "divide overflow exception injected");
         report(!(vmcb->control.event_inj & SVM_EVTINJ_VALID), "eventinj.VALID cleared");
         break;
-- 
2.25.1


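The constant 3 is the length of vmmcall (encoded as 0f 01 d9), so rip += 3
simply steps the guest past the vmmcall that caused the exit. A
stripped-down sketch of the fixed pattern; the struct is a stand-in for the
real vmcb layout and the exit code value is recalled from the SVM spec:

#include <stdint.h>

#define SVM_EXIT_VMMCALL	0x081

/* Hypothetical stand-in for the vmcb fields used here. */
struct mini_vmcb {
	uint64_t exit_code;
	uint64_t rip;
};

/*
 * Only advance rip when the exit really came from a vmmcall; on an
 * unexpected exit there is no 3-byte instruction to skip, which is what
 * the old unconditional "vmcb->save.rip += 3" got wrong.
 */
static void skip_vmmcall_if_any(struct mini_vmcb *vmcb)
{
	if (vmcb->exit_code == SVM_EXIT_VMMCALL)
		vmcb->rip += 3;
}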

* Re: [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare Nadav Amit
@ 2020-06-30 11:06   ` Paolo Bonzini
  2020-06-30 16:54   ` Paolo Bonzini
  1 sibling, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-06-30 11:06 UTC (permalink / raw)
  To: Nadav Amit; +Cc: kvm

On 30/06/20 11:45, Nadav Amit wrote:
> According to the AMD manual, bit 8 in the PDPE is not reserved, but bit 7 is.

Indeed.  Maybe it was a problem in the previous versions because I
remember adding this check explicitly.  I've sent a patch.

Paolo

> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  x86/svm_tests.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 92cefaf..323031f 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -825,13 +825,13 @@ static void npt_rsvd_pfwalk_prepare(struct svm_test *test)
>      vmcb_ident(vmcb);
>  
>      pdpe = npt_get_pdpe();
> -    pdpe[0] |= (1ULL << 8);
> +    pdpe[0] |= (1ULL << 7);
>  }
>  
>  static bool npt_rsvd_pfwalk_check(struct svm_test *test)
>  {
>      u64 *pdpe = npt_get_pdpe();
> -    pdpe[0] &= ~(1ULL << 8);
> +    pdpe[0] &= ~(1ULL << 7);
>  
>      return (vmcb->control.exit_code == SVM_EXIT_NPF)
>              && (vmcb->control.exit_info_1 == 0x20000000eULL);
> 



* Re: [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare
  2020-06-30  9:45 ` [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare Nadav Amit
  2020-06-30 11:06   ` Paolo Bonzini
@ 2020-06-30 16:54   ` Paolo Bonzini
  2020-06-30 17:37     ` Nadav Amit
  1 sibling, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2020-06-30 16:54 UTC (permalink / raw)
  To: Nadav Amit; +Cc: kvm

On 30/06/20 11:45, Nadav Amit wrote:
> According to the AMD manual, bit 8 in the PDPE is not reserved, but bit 7 is.
> 
> Signed-off-by: Nadav Amit <namit@vmware.com>
> ---
>  x86/svm_tests.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
> index 92cefaf..323031f 100644
> --- a/x86/svm_tests.c
> +++ b/x86/svm_tests.c
> @@ -825,13 +825,13 @@ static void npt_rsvd_pfwalk_prepare(struct svm_test *test)
>      vmcb_ident(vmcb);
>  
>      pdpe = npt_get_pdpe();
> -    pdpe[0] |= (1ULL << 8);
> +    pdpe[0] |= (1ULL << 7);
>  }
>  
>  static bool npt_rsvd_pfwalk_check(struct svm_test *test)
>  {
>      u64 *pdpe = npt_get_pdpe();
> -    pdpe[0] &= ~(1ULL << 8);
> +    pdpe[0] &= ~(1ULL << 7);
>  
>      return (vmcb->control.exit_code == SVM_EXIT_NPF)
>              && (vmcb->control.exit_info_1 == 0x20000000eULL);
> 

Wait, bit 7 is not reserved, it's the PS bit.  We need to use the PML4E instead (and
then using bit 7 or bit 8 is irrelevant):

diff --git a/x86/svm.c b/x86/svm.c
index 0fcad8d..62907fd 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -49,6 +49,11 @@ u64 *npt_get_pdpe(void)
 	return pdpe;
 }
 
+u64 *npt_get_pml4e(void)
+{
+	return pml4e;
+}
+
 bool smp_supported(void)
 {
 	return cpu_count() > 1;
diff --git a/x86/svm.h b/x86/svm.h
index 645deb7..8d688b6 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -365,6 +365,7 @@ typedef void (*test_guest_func)(struct svm_test *);
 u64 *npt_get_pte(u64 address);
 u64 *npt_get_pde(u64 address);
 u64 *npt_get_pdpe(void);
+u64 *npt_get_pml4e(void);
 bool smp_supported(void);
 bool default_supported(void);
 void default_prepare(struct svm_test *test);
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 92cefaf..b540527 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -824,13 +824,13 @@ static void npt_rsvd_pfwalk_prepare(struct svm_test *test)
     u64 *pdpe;
     vmcb_ident(vmcb);
 
-    pdpe = npt_get_pdpe();
+    pdpe = npt_get_pml4e();
     pdpe[0] |= (1ULL << 8);
 }
 
 static bool npt_rsvd_pfwalk_check(struct svm_test *test)
 {
-    u64 *pdpe = npt_get_pdpe();
+    u64 *pdpe = npt_get_pml4e();
     pdpe[0] &= ~(1ULL << 8);
 
     return (vmcb->control.exit_code == SVM_EXIT_NPF)


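To spell out the reasoning above: in AMD long mode the PDPE's bit 7 is the
PS bit rather than a reserved bit, while a PML4E keeps bits 7 and 8
must-be-zero, so at that level either bit forces a reserved-bit nested page
fault. A tiny illustrative helper (bit positions recalled from the APM, so
double-check before relying on them):

#include <stdint.h>

/* Poison a reserved bit in a PML4 entry; bit 7 would serve equally well. */
static inline uint64_t poison_pml4e_reserved(uint64_t pml4e)
{
	return pml4e | (1ULL << 8);
}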

* Re: [kvm-unit-tests PATCH 4/5] x86: svm: wrong reserved bit in npt_rsvd_pfwalk_prepare
  2020-06-30 16:54   ` Paolo Bonzini
@ 2020-06-30 17:37     ` Nadav Amit
  0 siblings, 0 replies; 9+ messages in thread
From: Nadav Amit @ 2020-06-30 17:37 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm

> On Jun 30, 2020, at 9:54 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> 
> On 30/06/20 11:45, Nadav Amit wrote:
>> According to the AMD manual, bit 8 in the PDPE is not reserved, but bit 7 is.
>> 
>> Signed-off-by: Nadav Amit <namit@vmware.com>
>> ---
>> x86/svm_tests.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/x86/svm_tests.c b/x86/svm_tests.c
>> index 92cefaf..323031f 100644
>> --- a/x86/svm_tests.c
>> +++ b/x86/svm_tests.c
>> @@ -825,13 +825,13 @@ static void npt_rsvd_pfwalk_prepare(struct svm_test *test)
>>     vmcb_ident(vmcb);
>> 
>>     pdpe = npt_get_pdpe();
>> -    pdpe[0] |= (1ULL << 8);
>> +    pdpe[0] |= (1ULL << 7);
>> }
>> 
>> static bool npt_rsvd_pfwalk_check(struct svm_test *test)
>> {
>>     u64 *pdpe = npt_get_pdpe();
>> -    pdpe[0] &= ~(1ULL << 8);
>> +    pdpe[0] &= ~(1ULL << 7);
>> 
>>     return (vmcb->control.exit_code == SVM_EXIT_NPF)
>>             && (vmcb->control.exit_info_1 == 0x20000000eULL);
> 
> Wait, bit 7 is not reserved, it's the PS bit.  We need to use the PML4E instead (and
> then using bit 7 or bit 8 is irrelevant):

Err.. I remembered that bit 7 is not zero for no reason.

Now I wonder why it caused #PF at all. To be fair, I did not run it on
bare metal yet, since I do not have access to such a machine.



