linux-kernel.vger.kernel.org archive mirror
* [PATCH RESEND 1/3] KVM: SVM: Get rid of handle_fastpath_set_msr_irqoff()
@ 2020-09-09  2:57 Wanpeng Li
  2020-09-09  2:57 ` [PATCH RESEND 2/3] KVM: SVM: Move svm_complete_interrupts() into svm_vcpu_run() Wanpeng Li
  2020-09-09  2:57 ` [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts() Wanpeng Li
  0 siblings, 2 replies; 5+ messages in thread
From: Wanpeng Li @ 2020-09-09  2:57 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Paul K., # v5.8-rc1+

From: Wanpeng Li <wanpengli@tencent.com>

Analysis from Sean:

 | svm->next_rip is reset in svm_vcpu_run() only after calling 
 | svm_exit_handlers_fastpath(), which will cause SVM's 
 | skip_emulated_instruction() to write a stale RIP.
 
Drop the handle_fastpath_set_msr_irqoff() call from svm_exit_handlers_fastpath()
as a quick fix.
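
A small stand-alone user-space toy illustrates the hazard (illustrative only,
not the kernel flow; the struct and helper names merely echo the SVM ones):

#include <stdio.h>
#include <stdint.h>

/* Toy stand-ins for the guest rip and svm->next_rip. */
struct toy_vcpu {
	uint64_t rip;
	uint64_t next_rip;
};

/* Stand-in for skip_emulated_instruction() as used by the MSR fastpath. */
static void toy_fastpath_skip(struct toy_vcpu *vcpu)
{
	vcpu->rip = vcpu->next_rip;
}

int main(void)
{
	struct toy_vcpu vcpu = { .rip = 0x1000, .next_rip = 0x0ff2 /* stale */ };
	uint64_t hw_next_rip = 0x1002;	/* what this exit actually reported */

	/* Buggy ordering: the fastpath skips before next_rip is refreshed. */
	toy_fastpath_skip(&vcpu);
	vcpu.next_rip = hw_next_rip;	/* refreshed too late */
	printf("fastpath first: rip = 0x%llx (stale)\n",
	       (unsigned long long)vcpu.rip);

	/* Intended ordering: refresh next_rip, then let the fastpath skip. */
	vcpu = (struct toy_vcpu){ .rip = 0x1000, .next_rip = hw_next_rip };
	toy_fastpath_skip(&vcpu);
	printf("refresh first:  rip = 0x%llx\n", (unsigned long long)vcpu.rip);

	return 0;
}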

Reported-by: Paul K. <kronenpj@kronenpj.dyndns.org>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paul K. <kronenpj@kronenpj.dyndns.org>
Cc: <stable@vger.kernel.org> # v5.8-rc1+
Fixes: 404d5d7bff0d ("KVM: X86: Introduce more exit_fastpath_completion enum values")
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/svm/svm.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 19e622a..c61bc3b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3349,11 +3349,6 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 
 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
-	if (!is_guest_mode(vcpu) &&
-	    to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
-	    to_svm(vcpu)->vmcb->control.exit_info_1)
-		return handle_fastpath_set_msr_irqoff(vcpu);
-
 	return EXIT_FASTPATH_NONE;
 }
 
-- 
2.7.4



* [PATCH RESEND 2/3] KVM: SVM: Move svm_complete_interrupts() into svm_vcpu_run()
  2020-09-09  2:57 [PATCH RESEND 1/3] KVM: SVM: Get rid of handle_fastpath_set_msr_irqoff() Wanpeng Li
@ 2020-09-09  2:57 ` Wanpeng Li
  2020-09-09  2:57 ` [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts() Wanpeng Li
  1 sibling, 0 replies; 5+ messages in thread
From: Wanpeng Li @ 2020-09-09  2:57 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Paul K.

From: Wanpeng Li <wanpengli@tencent.com>

Move svm_complete_interrupts() into svm_vcpu_run() to align SVM with VMX
with respect to completing interrupts.

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paul K. <kronenpj@kronenpj.dyndns.org>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/svm/svm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c61bc3b..74bcf0a 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2938,8 +2938,6 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	if (npt_enabled)
 		vcpu->arch.cr3 = svm->vmcb->save.cr3;
 
-	svm_complete_interrupts(svm);
-
 	if (is_guest_mode(vcpu)) {
 		int vmexit;
 
@@ -3530,6 +3528,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 		     SVM_EXIT_EXCP_BASE + MC_VECTOR))
 		svm_handle_mce(svm);
 
+	svm_complete_interrupts(svm);
+
 	vmcb_mark_all_clean(svm->vmcb);
 	return exit_fastpath;
 }
-- 
2.7.4



* [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts()
  2020-09-09  2:57 [PATCH RESEND 1/3] KVM: SVM: Get rid of handle_fastpath_set_msr_irqoff() Wanpeng Li
  2020-09-09  2:57 ` [PATCH RESEND 2/3] KVM: SVM: Move svm_complete_interrupts() into svm_vcpu_run() Wanpeng Li
@ 2020-09-09  2:57 ` Wanpeng Li
  2020-09-12  6:15   ` Paolo Bonzini
  1 sibling, 1 reply; 5+ messages in thread
From: Wanpeng Li @ 2020-09-09  2:57 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Paul K.

From: Wanpeng Li <wanpengli@tencent.com>

Move the call to svm_exit_handlers_fastpath() after svm_complete_interrupts(),
since svm_complete_interrupts() consumes rip, and reenable the
handle_fastpath_set_msr_irqoff() call in svm_exit_handlers_fastpath().
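
A minimal user-space sketch of the dependency (again illustrative only; the
helpers are toy stand-ins, not the kernel functions, and the 2-byte WRMSR
length is just an assumption for the example):

#include <stdio.h>
#include <stdint.h>

/* Toy state: the guest rip plus the rip observed by "completion". */
struct toy_vcpu {
	uint64_t rip;
	uint64_t completed_at;
};

/* Loose analogue of svm_complete_interrupts(): it reads (consumes) rip. */
static void toy_complete_interrupts(struct toy_vcpu *vcpu)
{
	vcpu->completed_at = vcpu->rip;
}

/* Loose analogue of the MSR fastpath: it advances rip past the instruction. */
static void toy_fastpath(struct toy_vcpu *vcpu)
{
	vcpu->rip += 2;		/* pretend WRMSR is 2 bytes long */
}

int main(void)
{
	struct toy_vcpu vcpu = { .rip = 0x1000 };

	/* Wrong order: completion sees a rip the fastpath already moved. */
	toy_fastpath(&vcpu);
	toy_complete_interrupts(&vcpu);
	printf("fastpath first:   completion saw rip = 0x%llx\n",
	       (unsigned long long)vcpu.completed_at);

	/* This series' order: complete interrupts, then run the fastpath. */
	vcpu = (struct toy_vcpu){ .rip = 0x1000 };
	toy_complete_interrupts(&vcpu);
	toy_fastpath(&vcpu);
	printf("completion first: completion saw rip = 0x%llx\n",
	       (unsigned long long)vcpu.completed_at);

	return 0;
}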

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Paul K. <kronenpj@kronenpj.dyndns.org>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/svm/svm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 74bcf0a..ac819f0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3347,6 +3347,11 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 
 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
+	if (!is_guest_mode(vcpu) &&
+	    to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
+	    to_svm(vcpu)->vmcb->control.exit_info_1)
+		return handle_fastpath_set_msr_irqoff(vcpu);
+
 	return EXIT_FASTPATH_NONE;
 }
 
@@ -3495,7 +3500,6 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	stgi();
 
 	/* Any pending NMI will happen here */
-	exit_fastpath = svm_exit_handlers_fastpath(vcpu);
 
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_after_interrupt(&svm->vcpu);
@@ -3529,6 +3533,7 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 		svm_handle_mce(svm);
 
 	svm_complete_interrupts(svm);
+	exit_fastpath = svm_exit_handlers_fastpath(vcpu);
 
 	vmcb_mark_all_clean(svm->vmcb);
 	return exit_fastpath;
-- 
2.7.4



* Re: [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts()
  2020-09-09  2:57 ` [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts() Wanpeng Li
@ 2020-09-12  6:15   ` Paolo Bonzini
  2020-09-14 15:48     ` Sean Christopherson
  0 siblings, 1 reply; 5+ messages in thread
From: Paolo Bonzini @ 2020-09-12  6:15 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Paul K.

The overall patch is fairly simple:

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 03dd7bac8034..d6ce75e107c0 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2938,8 +2938,6 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 	if (npt_enabled)
 		vcpu->arch.cr3 = svm->vmcb->save.cr3;

-	svm_complete_interrupts(svm);
-
 	if (is_guest_mode(vcpu)) {
 		int vmexit;

@@ -3504,7 +3502,6 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 	stgi();

 	/* Any pending NMI will happen here */
-	exit_fastpath = svm_exit_handlers_fastpath(vcpu);

 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_after_interrupt(&svm->vcpu);
@@ -3537,6 +3534,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
 		     SVM_EXIT_EXCP_BASE + MC_VECTOR))
 		svm_handle_mce(svm);

+	svm_complete_interrupts(svm);
+	exit_fastpath = svm_exit_handlers_fastpath(vcpu);
+
 	vmcb_mark_all_clean(svm->vmcb);
 	return exit_fastpath;
 }

so I will just squash everything.

Paolo



* Re: [PATCH RESEND 3/3] KVM: SVM: Reenable handle_fastpath_set_msr_irqoff() after complete_interrupts()
  2020-09-12  6:15   ` Paolo Bonzini
@ 2020-09-14 15:48     ` Sean Christopherson
  0 siblings, 0 replies; 5+ messages in thread
From: Sean Christopherson @ 2020-09-14 15:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Wanpeng Li, linux-kernel, kvm, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Paul K.

On Sat, Sep 12, 2020 at 08:15:46AM +0200, Paolo Bonzini wrote:
> The overall patch is fairly simple:
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 03dd7bac8034..d6ce75e107c0 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2938,8 +2938,6 @@ static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
>  	if (npt_enabled)
>  		vcpu->arch.cr3 = svm->vmcb->save.cr3;
> 
> -	svm_complete_interrupts(svm);
> -
>  	if (is_guest_mode(vcpu)) {
>  		int vmexit;
> 
> @@ -3504,7 +3502,6 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	stgi();
> 
>  	/* Any pending NMI will happen here */
> -	exit_fastpath = svm_exit_handlers_fastpath(vcpu);
> 
>  	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
>  		kvm_after_interrupt(&svm->vcpu);
> @@ -3537,6 +3534,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  		     SVM_EXIT_EXCP_BASE + MC_VECTOR))
>  		svm_handle_mce(svm);
> 
> +	svm_complete_interrupts(svm);
> +	exit_fastpath = svm_exit_handlers_fastpath(vcpu);
> +
>  	vmcb_mark_all_clean(svm->vmcb);
>  	return exit_fastpath;
>  }
> 
> so I will just squash everything.

The thought behind the multi-patch series was to allow automatically applying
the fix to the 5.8 stable tree without having to take on the risk of moving
svm_complete_interrupts().


