* [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN
@ 2021-02-23 19:19 Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped Krish Sadhukhan
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-23 19:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

v2 -> v3:
        Patch# 1: It's a new patch for SVM. It fixes the SVM bug that advances
                  the RIP following the #VMEXIT from a VMRUN that is being
                  single-stepped.
        Patch# 2: It's a new patch for the test framework. It adds a utility
                  function to read the current RIP.
        Patch# 3: It's a new patch for the test framework. It adds an
                  assembly label to the VMRUN instruction so that the RIP
                  of VMRUN can be known to tests.
        Patch# 4: It's the updated test from v2. The test uses the VMRUN
                  instruction label, added by the previous patch, in order
                  to know its RIP. The part of the test that single-steps
                  VMRUN uses the difference between the VMRUN RIP and the
                  next RIP to determine success.

[PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the
[PATCH 2/4 v3] KVM: X86: Add a utility function to read current RIP
[PATCH 3/4 v3] KVM: nSVM: Add assembly label to VMRUN instruction
[PATCH 4/4 v3] KVM: nSVM: Test effect of host RFLAGS.TF on VMRUN

 arch/x86/kvm/svm/svm.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Krish Sadhukhan (1):
      nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped

 lib/x86/processor.h |   7 ++++
 x86/svm.c           |  16 ++++++--
 x86/svm_tests.c     | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 134 insertions(+), 4 deletions(-)

Krish Sadhukhan (3):
      KVM: X86: Add a utility function to read current RIP
      KVM: nSVM: Add assembly label to VMRUN instruction
      nSVM: Test effect of host RFLAGS.TF on VMRUN
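The success criterion described for patch 4 above comes down to a single length check: VMRUN encodes as 0F 01 D8 and is therefore three bytes long, so the RIP reported by the single-step #DB handler should be the VMRUN RIP plus three. A minimal sketch of that check (the function and macro names here are illustrative, not from the series):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* VMRUN encodes as 0F 01 D8, i.e. three bytes.  If the single-step #DB
 * for a host-side VMRUN is delivered with the correct RIP, the trapping
 * RIP is exactly the VMRUN RIP plus the instruction length. */
#define VMRUN_INSN_LEN 3

static bool vmrun_single_step_ok(uint64_t vmrun_rip, uint64_t post_vmrun_rip)
{
	return post_vmrun_rip - vmrun_rip == VMRUN_INSN_LEN;
}
```

This is the same comparison the test in patch 4 performs against the `vmrun_rip` label added by patch 3.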


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped
  2021-02-23 19:19 [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN Krish Sadhukhan
@ 2021-02-23 19:19 ` Krish Sadhukhan
  2021-02-23 22:42   ` Sean Christopherson
  2021-02-23 19:19 ` [PATCH 2/4 v3] KVM: X86: Add a utility function to read current RIP Krish Sadhukhan
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-23 19:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

Currently, svm_vcpu_run() advances the RIP following VMRUN completion when
control returns to host. This works fine if there is no trap flag set
on the VMRUN instruction i.e., if VMRUN is not single-stepped. But if
VMRUN is single-stepped, this advancement of the RIP leads to an incorrect
RIP in the #DB handler invoked for the single-step trap. Therefore, check
if the VMRUN instruction is single-stepped and if so, do not advance the RIP
when the #DB intercept #VMEXIT happens.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 arch/x86/kvm/svm/svm.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3442d44ca53b..427d32213f51 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3740,6 +3740,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 	instrumentation_end();
 }
 
+static bool single_step_vmrun = false;
+
 static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -3800,6 +3802,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	svm_vcpu_enter_exit(vcpu, svm);
 
+	if (svm->vmcb->control.exit_code == SVM_EXIT_VMRUN &&
+	    (svm->vmcb->save.rflags & X86_EFLAGS_TF))
+                single_step_vmrun = true;
+
 	/*
 	 * We do not use IBRS in the kernel. If this vCPU has used the
 	 * SPEC_CTRL MSR it may have left it on; save the value and
@@ -3827,7 +3833,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 		vcpu->arch.cr2 = svm->vmcb->save.cr2;
 		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
 		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
-		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
+		if (single_step_vmrun && svm->vmcb->control.exit_code ==
+		    SVM_EXIT_EXCP_BASE + DB_VECTOR)
+			single_step_vmrun = false;
+		else
+			vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
 	}
 
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
-- 
2.27.0



* [PATCH 2/4 v3] KVM: X86: Add a utility function to read current RIP
  2021-02-23 19:19 [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped Krish Sadhukhan
@ 2021-02-23 19:19 ` Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 3/4 v3] KVM: nSVM: Add assembly label to VMRUN instruction Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 4/4 v3] KVM: nSVM: Test effect of host RFLAGS.TF on VMRUN Krish Sadhukhan
  3 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-23 19:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

Some tests may need to know the current RIP in order to compare test
results. Add a function to read the current RIP. This will be used by a
test in this series.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 lib/x86/processor.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 87112bd..5604874 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -489,6 +489,13 @@ static inline unsigned long long rdtsc(void)
 	return r;
 }
 
+static inline u64 read_rip(void)
+{
+	u64 val;
+	asm volatile ("lea (%%rip),%0" : "=r"(val) : : "memory");
+	return val;
+}
+
 /*
  * Per the advice in the SDM, volume 2, the sequence "mfence; lfence"
  * executed immediately before rdtsc ensures that rdtsc will be
-- 
2.27.0



* [PATCH 3/4 v3] KVM: nSVM: Add assembly label to VMRUN instruction
  2021-02-23 19:19 [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 2/4 v3] KVM: X86: Add a utility function to read current RIP Krish Sadhukhan
@ 2021-02-23 19:19 ` Krish Sadhukhan
  2021-02-23 19:19 ` [PATCH 4/4 v3] KVM: nSVM: Test effect of host RFLAGS.TF on VMRUN Krish Sadhukhan
  3 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-23 19:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

Add an assembly label to the VMRUN instruction so that its RIP can be known
to test cases. This will be used by the test in the next patch.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 x86/svm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index a1808c7..77fba8b 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -208,14 +208,15 @@ struct regs get_regs(void)
 
 struct svm_test *v2_test;
 
-#define ASM_VMRUN_CMD                           \
+#define ASM_PRE_VMRUN_CMD                       \
                 "vmload %%rax\n\t"              \
                 "mov regs+0x80, %%r15\n\t"      \
                 "mov %%r15, 0x170(%%rax)\n\t"   \
                 "mov regs, %%r15\n\t"           \
                 "mov %%r15, 0x1f8(%%rax)\n\t"   \
                 LOAD_GPR_C                      \
-                "vmrun %%rax\n\t"               \
+
+#define ASM_POST_VMRUN_CMD                      \
                 SAVE_GPR_C                      \
                 "mov 0x170(%%rax), %%r15\n\t"   \
                 "mov %%r15, regs+0x80\n\t"      \
@@ -232,7 +233,9 @@ int svm_vmrun(void)
 	regs.rdi = (ulong)v2_test;
 
 	asm volatile (
-		ASM_VMRUN_CMD
+		ASM_PRE_VMRUN_CMD
+                "vmrun %%rax\n\t"               \
+		ASM_POST_VMRUN_CMD
 		:
 		: "a" (virt_to_phys(vmcb))
 		: "memory", "r15");
@@ -240,6 +243,8 @@ int svm_vmrun(void)
 	return (vmcb->control.exit_code);
 }
 
+extern void *vmrun_rip;
+
 static void test_run(struct svm_test *test)
 {
 	u64 vmcb_phys = virt_to_phys(vmcb);
@@ -258,7 +263,10 @@ static void test_run(struct svm_test *test)
 			"sti \n\t"
 			"call *%c[PREPARE_GIF_CLEAR](%[test]) \n \t"
 			"mov %[vmcb_phys], %%rax \n\t"
-			ASM_VMRUN_CMD
+			ASM_PRE_VMRUN_CMD
+			".global vmrun_rip\n\t"		\
+			"vmrun_rip: vmrun %%rax\n\t"    \
+			ASM_POST_VMRUN_CMD
 			"cli \n\t"
 			"stgi"
 			: // inputs clobbered by the guest:
-- 
2.27.0



* [PATCH 4/4 v3] KVM: nSVM: Test effect of host RFLAGS.TF on VMRUN
  2021-02-23 19:19 [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN Krish Sadhukhan
                   ` (2 preceding siblings ...)
  2021-02-23 19:19 ` [PATCH 3/4 v3] KVM: nSVM: Add assembly label to VMRUN instruction Krish Sadhukhan
@ 2021-02-23 19:19 ` Krish Sadhukhan
  3 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-23 19:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc

According to section "VMRUN and TF/RF Bits in EFLAGS" in AMD APM vol 2,

    "From the host point of view, VMRUN acts like a single instruction,
     even though an arbitrary number of guest instructions may execute
     before a #VMEXIT effectively completes the VMRUN. As a single
     host instruction, VMRUN interacts with EFLAGS.TF like ordinary
     instructions. EFLAGS.TF causes a #DB trap after the VMRUN completes
     on the host side (i.e., after the #VMEXIT from the guest)."

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
---
 x86/svm_tests.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 29a0b59..466a13c 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -2002,6 +2002,118 @@ static bool init_intercept_check(struct svm_test *test)
     return init_intercept;
 }
 
+/*
+ * Setting host EFLAGS.TF causes a #DB trap after the VMRUN completes on the
+ * host side (i.e., after the #VMEXIT from the guest).
+ *
+ * [AMD APM]
+ */
+static volatile u8 host_rflags_guest_main_flag = 0;
+static volatile u8 host_rflags_db_handler_flag = 0;
+static volatile bool host_rflags_ss_on_vmrun = false;
+static volatile bool host_rflags_vmrun_reached = false;
+static volatile bool host_rflags_set_tf = false;
+static u64 post_vmrun_rip;
+
+extern void *vmrun_rip;
+
+static void host_rflags_db_handler(struct ex_regs *r)
+{
+	if (host_rflags_ss_on_vmrun) {
+		if (host_rflags_vmrun_reached) {
+			r->rflags &= ~X86_EFLAGS_TF;
+			post_vmrun_rip = r->rip;
+		} else {
+			if (r->rip == (u64)(&vmrun_rip))
+				host_rflags_vmrun_reached = true;
+		}
+	} else {
+		r->rflags &= ~X86_EFLAGS_TF;
+	}
+}
+
+static void host_rflags_prepare(struct svm_test *test)
+{
+	default_prepare(test);
+	handle_exception(DB_VECTOR, host_rflags_db_handler);
+	set_test_stage(test, 0);
+}
+
+static void host_rflags_prepare_gif_clear(struct svm_test *test)
+{
+	if (host_rflags_set_tf)
+		write_rflags(read_rflags() | X86_EFLAGS_TF);
+}
+
+static void host_rflags_test(struct svm_test *test)
+{
+	while (1) {
+		if (get_test_stage(test) > 0 && host_rflags_set_tf &&
+		    (!host_rflags_ss_on_vmrun) &&
+		    (!host_rflags_db_handler_flag))
+			host_rflags_guest_main_flag = 1;
+		vmmcall();
+		if (get_test_stage(test) == 3)
+			break;
+	}
+}
+
+static bool host_rflags_finished(struct svm_test *test)
+{
+	switch (get_test_stage(test)) {
+	case 0:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL) {
+			report(false, "Unexpected VMEXIT. Exit reason 0x%x",
+			    vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
+		/*
+		 * Setting host EFLAGS.TF not immediately before VMRUN causes a
+		 * #DB trap before the first guest instruction is executed.
+		 */
+		host_rflags_set_tf = true;
+		break;
+	case 1:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		    (!host_rflags_guest_main_flag)) {
+			report(false, "Unexpected VMEXIT or #DB handler"
+			    " invoked before guest main. Exit reason 0x%x",
+			    vmcb->control.exit_code);
+			return true;
+		}
+		vmcb->save.rip += 3;
+		/*
+		 * Setting host EFLAGS.TF immediately before VMRUN causes a #DB
+		 * trap after VMRUN completes on the host side (i.e., after
+		 * #VMEXIT from the guest).
+		 */
+		host_rflags_ss_on_vmrun = true;
+		break;
+	case 2:
+		if (vmcb->control.exit_code != SVM_EXIT_VMMCALL ||
+		    post_vmrun_rip - (u64)(&vmrun_rip) != 3) {
+			report(false, "Unexpected VMEXIT or RIP mismatch."
+			    " Exit reason 0x%x, VMRUN RIP: %lx, post-VMRUN"
+			    " RIP: %lx", vmcb->control.exit_code,
+			    (u64)(&vmrun_rip), post_vmrun_rip);
+			return true;
+		}
+		host_rflags_set_tf = false;
+		vmcb->save.rip += 3;
+		break;
+	default:
+		return true;
+	}
+	inc_test_stage(test);
+	return get_test_stage(test) == 4;
+}
+
+static bool host_rflags_check(struct svm_test *test)
+{
+	return get_test_stage(test) == 3;
+}
+
 #define TEST(name) { #name, .v2 = name }
 
 /*
@@ -2492,6 +2604,9 @@ struct svm_test svm_tests[] = {
     { "svm_init_intercept_test", smp_supported, init_intercept_prepare,
       default_prepare_gif_clear, init_intercept_test,
       init_intercept_finished, init_intercept_check, .on_vcpu = 2 },
+    { "host_rflags", default_supported, host_rflags_prepare,
+      host_rflags_prepare_gif_clear, host_rflags_test,
+      host_rflags_finished, host_rflags_check },
     TEST(svm_cr4_osxsave_test),
     TEST(svm_guest_state_test),
     TEST(svm_vmrun_errata_test),
-- 
2.27.0



* Re: [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped
  2021-02-23 19:19 ` [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped Krish Sadhukhan
@ 2021-02-23 22:42   ` Sean Christopherson
  2021-02-24 21:18     ` Krish Sadhukhan
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2021-02-23 22:42 UTC (permalink / raw)
  To: Krish Sadhukhan; +Cc: kvm, pbonzini, jmattson

On Tue, Feb 23, 2021, Krish Sadhukhan wrote:
> Currently, svm_vcpu_run() advances the RIP following VMRUN completion when
> control returns to host. This works fine if there is no trap flag set
> on the VMRUN instruction i.e., if VMRUN is not single-stepped. But if
> VMRUN is single-stepped, this advancement of the RIP leads to an incorrect
> RIP in the #DB handler invoked for the single-step trap. Therefore, check
> if the VMRUN instruction is single-stepped and if so, do not advance the RIP
> when the #DB intercept #VMEXIT happens.

This really needs to clarify which VMRUN, i.e. L0 vs. L1.  AFAICT, you're
talking about both at separate times.  Is this an issue with L1 single-stepping
its VMRUN, L0 single-stepping its VMRUN, L0 single-stepping L1's VMRUN, ???
 
> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
> ---
>  arch/x86/kvm/svm/svm.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 3442d44ca53b..427d32213f51 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3740,6 +3740,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>  	instrumentation_end();
>  }
>  
> +static bool single_step_vmrun = false;

Sharing a global flag amongst all vCPUs isn't going to fare well...
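Purely as an illustration of the per-vCPU alternative being pointed at here (not code from the thread): the conventional fix is to keep the flag in the per-vCPU structure — in KVM that would be a member of struct vcpu_svm — rather than at file scope, so concurrent vCPUs cannot clobber each other's state. The struct and function names below are stand-ins:

```c
#include <stdbool.h>

/* demo_vcpu_svm stands in for struct vcpu_svm: the pending-single-step
 * flag lives in per-vCPU state instead of a file-scope global. */
struct demo_vcpu_svm {
	bool single_step_vmrun;	/* was: a global shared by all vCPUs */
};

static void demo_note_single_stepped_vmrun(struct demo_vcpu_svm *svm)
{
	svm->single_step_vmrun = true;
}

/* Read-and-clear, so the flag cannot leak into a later, unrelated exit. */
static bool demo_consume_single_step(struct demo_vcpu_svm *svm)
{
	bool was_set = svm->single_step_vmrun;

	svm->single_step_vmrun = false;
	return was_set;
}
```

Per-vCPU state fixes the sharing problem, though as noted below it does not by itself address the stale-flag cases (failed VMRUN, userspace state changes).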

> +
>  static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_svm *svm = to_svm(vcpu);
> @@ -3800,6 +3802,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	svm_vcpu_enter_exit(vcpu, svm);
>  
> +	if (svm->vmcb->control.exit_code == SVM_EXIT_VMRUN &&
> +	    (svm->vmcb->save.rflags & X86_EFLAGS_TF))
> +                single_step_vmrun = true;
> +
>  	/*
>  	 * We do not use IBRS in the kernel. If this vCPU has used the
>  	 * SPEC_CTRL MSR it may have left it on; save the value and
> @@ -3827,7 +3833,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  		vcpu->arch.cr2 = svm->vmcb->save.cr2;
>  		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
>  		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
> -		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
> +		if (single_step_vmrun && svm->vmcb->control.exit_code ==
> +		    SVM_EXIT_EXCP_BASE + DB_VECTOR)
> +			single_step_vmrun = false;

Even if you fix the global flag issue, this can't possibly work if userspace
changes state, if VMRUN fails and leaves a timebomb, and probably any number of
other conditions.

> +		else
> +			vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
>  	}
>  
>  	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
> -- 
> 2.27.0
> 


* Re: [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped
  2021-02-23 22:42   ` Sean Christopherson
@ 2021-02-24 21:18     ` Krish Sadhukhan
  2021-02-24 21:59       ` Sean Christopherson
  0 siblings, 1 reply; 9+ messages in thread
From: Krish Sadhukhan @ 2021-02-24 21:18 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, pbonzini, jmattson


On 2/23/21 2:42 PM, Sean Christopherson wrote:
> On Tue, Feb 23, 2021, Krish Sadhukhan wrote:
>> Currently, svm_vcpu_run() advances the RIP following VMRUN completion when
>> control returns to host. This works fine if there is no trap flag set
>> on the VMRUN instruction i.e., if VMRUN is not single-stepped. But if
>> VMRUN is single-stepped, this advancement of the RIP leads to an incorrect
>> RIP in the #DB handler invoked for the single-step trap. Therefore, check
>> if the VMRUN instruction is single-stepped and if so, do not advance the RIP
>> when the #DB intercept #VMEXIT happens.
> This really needs to clarify which VMRUN, i.e. L0 vs. L1.  AFAICT, you're
> talking about both at separate times.  Is this an issue with L1 single-stepping
> its VMRUN, L0 single-stepping its VMRUN, L0 single-stepping L1's VMRUN, ???


The issue is seen when L1 single-steps its own VMRUN. I will fix the 
wording.

I don't know if there's a way to test L0 single-stepping its own or L1's 
VMRUN.

>   
>> Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
>> ---
>>   arch/x86/kvm/svm/svm.c | 12 +++++++++++-
>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 3442d44ca53b..427d32213f51 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -3740,6 +3740,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu,
>>   	instrumentation_end();
>>   }
>>   
>> +static bool single_step_vmrun = false;
> Sharing a global flag amongst all vCPUs isn't going to fare well...


I will fix this.

>
>> +
>>   static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>   {
>>   	struct vcpu_svm *svm = to_svm(vcpu);
>> @@ -3800,6 +3802,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>   
>>   	svm_vcpu_enter_exit(vcpu, svm);
>>   
>> +	if (svm->vmcb->control.exit_code == SVM_EXIT_VMRUN &&
>> +	    (svm->vmcb->save.rflags & X86_EFLAGS_TF))
>> +                single_step_vmrun = true;
>> +
>>   	/*
>>   	 * We do not use IBRS in the kernel. If this vCPU has used the
>>   	 * SPEC_CTRL MSR it may have left it on; save the value and
>> @@ -3827,7 +3833,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>   		vcpu->arch.cr2 = svm->vmcb->save.cr2;
>>   		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
>>   		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
>> -		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
>> +		if (single_step_vmrun && svm->vmcb->control.exit_code ==
>> +		    SVM_EXIT_EXCP_BASE + DB_VECTOR)
>> +			single_step_vmrun = false;
> Even if you fix the global flag issue, this can't possibly work if userspace
> changes state, if VMRUN fails and leaves a timebomb, and probably any number of
> other conditions.


  Are you saying that I need to adjust the RIP in cases where VMRUN fails ?

>> +		else
>> +			vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
>>   	}
>>   
>>   	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
>> -- 
>> 2.27.0
>>


* Re: [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped
  2021-02-24 21:18     ` Krish Sadhukhan
@ 2021-02-24 21:59       ` Sean Christopherson
  2021-03-22 19:00         ` Krish Sadhukhan
  0 siblings, 1 reply; 9+ messages in thread
From: Sean Christopherson @ 2021-02-24 21:59 UTC (permalink / raw)
  To: Krish Sadhukhan; +Cc: kvm, pbonzini, jmattson

On Wed, Feb 24, 2021, Krish Sadhukhan wrote:
> 
> On 2/23/21 2:42 PM, Sean Christopherson wrote:
> > On Tue, Feb 23, 2021, Krish Sadhukhan wrote:
> > > Currently, svm_vcpu_run() advances the RIP following VMRUN completion when
> > > control returns to host. This works fine if there is no trap flag set
> > > on the VMRUN instruction i.e., if VMRUN is not single-stepped. But if
> > > VMRUN is single-stepped, this advancement of the RIP leads to an incorrect
> > > RIP in the #DB handler invoked for the single-step trap. Therefore, check

Whose #DB handler?  L1's?

> > > if the VMRUN instruction is single-stepped and if so, do not advance the RIP
> > > when the #DB intercept #VMEXIT happens.
> > This really needs to clarify which VMRUN, i.e. L0 vs. L1.  AFAICT, you're
> > talking about both at separate times.  Is this an issue with L1 single-stepping
> > its VMRUN, L0 single-stepping its VMRUN, L0 single-stepping L1's VMRUN, ???
> 
> 
> The issue is seen when L1 single-steps its own VMRUN. I will fix the
> wording.

...

> > > @@ -3827,7 +3833,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> > >   		vcpu->arch.cr2 = svm->vmcb->save.cr2;
> > >   		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
> > >   		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
> > > -		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
> > > +		if (single_step_vmrun && svm->vmcb->control.exit_code ==
> > > +		    SVM_EXIT_EXCP_BASE + DB_VECTOR)
> > > +			single_step_vmrun = false;
> > Even if you fix the global flag issue, this can't possibly work if userspace
> > changes state, if VMRUN fails and leaves a timebomb, and probably any number of
> > other conditions.
> 
> 
>  Are you saying that I need to adjust the RIP in cases where VMRUN fails ?

If VMRUN fails, the #DB exit will never occur and single_step_vmrun will be left
set.  Ditto if a higher priority exit occurs, though I'm not sure that can cause
problems in practice.  Anyways, my point is that setting a flag that must be
consumed on an exact instruction is almost always fragile, there are just too
many corner cases that pop up.

Can you elaborate more on who/what incorrectly advances RIP?  The changelog says
"svm_vcpu_run() advances the RIP", but it's not advancing anything it's just
grabbing RIP from the VMCB, which IIUC is L2's RIP.


* Re: [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped
  2021-02-24 21:59       ` Sean Christopherson
@ 2021-03-22 19:00         ` Krish Sadhukhan
  0 siblings, 0 replies; 9+ messages in thread
From: Krish Sadhukhan @ 2021-03-22 19:00 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: kvm, pbonzini, jmattson


On 2/24/21 1:59 PM, Sean Christopherson wrote:
> On Wed, Feb 24, 2021, Krish Sadhukhan wrote:
>> On 2/23/21 2:42 PM, Sean Christopherson wrote:
>>> On Tue, Feb 23, 2021, Krish Sadhukhan wrote:
>>>> Currently, svm_vcpu_run() advances the RIP following VMRUN completion when
>>>> control returns to host. This works fine if there is no trap flag set
>>>> on the VMRUN instruction i.e., if VMRUN is not single-stepped. But if
>>>> VMRUN is single-stepped, this advancement of the RIP leads to an incorrect
>>>> RIP in the #DB handler invoked for the single-step trap. Therefore, check
> Whose #DB handler?  L1's?


Yes

>>>> if the VMRUN instruction is single-stepped and if so, do not advance the RIP
>>>> when the #DB intercept #VMEXIT happens.
>>> This really needs to clarify which VMRUN, i.e. L0 vs. L1.  AFAICT, you're
>>> talking about both at separate times.  Is this an issue with L1 single-stepping
>>> its VMRUN, L0 single-stepping its VMRUN, L0 single-stepping L1's VMRUN, ???
>> The issue is seen when L1 single-steps its own VMRUN. I will fix the
>> wording.
> ...
>
>>>> @@ -3827,7 +3833,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>>>>    		vcpu->arch.cr2 = svm->vmcb->save.cr2;
>>>>    		vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
>>>>    		vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
>>>> -		vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
>>>> +		if (single_step_vmrun && svm->vmcb->control.exit_code ==
>>>> +		    SVM_EXIT_EXCP_BASE + DB_VECTOR)
>>>> +			single_step_vmrun = false;
>>> Even if you fix the global flag issue, this can't possibly work if userspace
>>> changes state, if VMRUN fails and leaves a timebomb, and probably any number of
>>> other conditions.
>>   Are you saying that I need to adjust the RIP in cases where VMRUN fails ?
> If VMRUN fails, the #DB exit will never occur and single_step_vmrun will be left
> set.  Ditto if a higher priority exit occurs, though I'm not sure that can cause
> problems in practice.  Anyways, my point is that setting a flag that must be
> consumed on an exact instruction is almost always fragile, there are just too
> many corner cases that pop up.
>
> Can you elaborate more on who/what incorrectly advances RIP?


The RIP advances because KVM does not act on the pending RFLAGS.TF when
it resumes L1 for the first time after L2 exits. The #DB intercept is
triggered only after the instruction following VMRUN has executed in
svm_vcpu_run(), and hence the #DB handler for L1 sees the wrong RIP.

I have just sent out v4 which fixes the problem in a better way.

>   The changelog says
> "svm_vcpu_run() advances the RIP", but it's not advancing anything it's just
> grabbing RIP from the VMCB, which IIUC is L2's RIP.


end of thread, other threads:[~2021-03-22 19:01 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-23 19:19 [PATCH 0/4 v3] nSVM: Test host RFLAGS.TF on VMRUN Krish Sadhukhan
2021-02-23 19:19 ` [PATCH 1/4 v3] KVM: nSVM: Do not advance RIP following VMRUN completion if the latter is single-stepped Krish Sadhukhan
2021-02-23 22:42   ` Sean Christopherson
2021-02-24 21:18     ` Krish Sadhukhan
2021-02-24 21:59       ` Sean Christopherson
2021-03-22 19:00         ` Krish Sadhukhan
2021-02-23 19:19 ` [PATCH 2/4 v3] KVM: X86: Add a utility function to read current RIP Krish Sadhukhan
2021-02-23 19:19 ` [PATCH 3/4 v3] KVM: nSVM: Add assembly label to VMRUN instruction Krish Sadhukhan
2021-02-23 19:19 ` [PATCH 4/4 v3] KVM: nSVM: Test effect of host RFLAGS.TF on VMRUN Krish Sadhukhan
