* [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI
@ 2020-04-14 20:11 Cathy Avery
2020-04-14 20:11 ` [PATCH 1/2] " Cathy Avery
From: Cathy Avery @ 2020-04-14 20:11 UTC (permalink / raw)
To: linux-kernel, kvm, pbonzini; +Cc: vkuznets, wei.huang2
Moved nested NMI exit to new check_nested_events.
The second patch fixes the NMI pending race condition that now occurs.
Cathy Avery (2):
KVM: SVM: Implement check_nested_events for NMI
KVM: x86: check_nested_events if there is an injectable NMI
arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/svm.h | 15 ---------------
arch/x86/kvm/x86.c | 15 +++++++++++----
4 files changed, 33 insertions(+), 20 deletions(-)
--
2.20.1
* [PATCH 1/2] KVM: SVM: Implement check_nested_events for NMI
2020-04-14 20:11 [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Cathy Avery
@ 2020-04-14 20:11 ` Cathy Avery
2020-04-14 20:11 ` [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI Cathy Avery
From: Cathy Avery @ 2020-04-14 20:11 UTC (permalink / raw)
To: linux-kernel, kvm, pbonzini; +Cc: vkuznets, wei.huang2
Migrate nested guest NMI intercept processing
to new check_nested_events.
Signed-off-by: Cathy Avery <cavery@redhat.com>
---
arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/svm.h | 15 ---------------
3 files changed, 22 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 90a1ca939627..1ba8ef46b0b5 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -764,6 +764,20 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
return vmexit;
}
+static bool nested_exit_on_nmi(struct vcpu_svm *svm)
+{
+ return (svm->nested.intercept & (1ULL << INTERCEPT_NMI));
+}
+
+static void nested_svm_nmi(struct vcpu_svm *svm)
+{
+ svm->vmcb->control.exit_code = SVM_EXIT_NMI;
+ svm->vmcb->control.exit_info_1 = 0;
+ svm->vmcb->control.exit_info_2 = 0;
+
+ svm->nested.exit_required = true;
+}
+
static void nested_svm_intr(struct vcpu_svm *svm)
{
svm->vmcb->control.exit_code = SVM_EXIT_INTR;
@@ -793,6 +807,13 @@ int svm_check_nested_events(struct kvm_vcpu *vcpu)
return 0;
}
+ if (vcpu->arch.nmi_pending && nested_exit_on_nmi(svm)) {
+ if (block_nested_events)
+ return -EBUSY;
+ nested_svm_nmi(svm);
+ return 0;
+ }
+
return 0;
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2be5bbae3a40..84c338c55348 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3053,7 +3053,7 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
int ret;
ret = !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
!(svm->vcpu.arch.hflags & HF_NMI_MASK);
- ret = ret && gif_set(svm) && nested_svm_nmi(svm);
+ ret = ret && gif_set(svm);
return ret;
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index df3474f4fb02..9be2b890ff3c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -369,21 +369,6 @@ void disable_nmi_singlestep(struct vcpu_svm *svm);
#define NESTED_EXIT_DONE 1 /* Exit caused nested vmexit */
#define NESTED_EXIT_CONTINUE 2 /* Further checks needed */
-/* This function returns true if it is save to enable the nmi window */
-static inline bool nested_svm_nmi(struct vcpu_svm *svm)
-{
- if (!is_guest_mode(&svm->vcpu))
- return true;
-
- if (!(svm->nested.intercept & (1ULL << INTERCEPT_NMI)))
- return true;
-
- svm->vmcb->control.exit_code = SVM_EXIT_NMI;
- svm->nested.exit_required = true;
-
- return false;
-}
-
static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
{
return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
--
2.20.1
* [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-14 20:11 [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Cathy Avery
2020-04-14 20:11 ` [PATCH 1/2] " Cathy Avery
@ 2020-04-14 20:11 ` Cathy Avery
2020-04-23 14:42 ` Sean Christopherson
2020-04-15 9:49 ` [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Vitaly Kuznetsov
2020-04-23 13:43 ` Paolo Bonzini
From: Cathy Avery @ 2020-04-14 20:11 UTC (permalink / raw)
To: linux-kernel, kvm, pbonzini; +Cc: vkuznets, wei.huang2
With NMI intercept moved to check_nested_events there is a race
condition where vcpu->arch.nmi_pending is set late causing
the execution of check_nested_events to not setup correctly
for nested.exit_required. A second call to check_nested_events
allows the injectable nmi to be detected in time in order to
require immediate exit from L2 to L1.
Signed-off-by: Cathy Avery <cavery@redhat.com>
---
arch/x86/kvm/x86.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 027dfd278a97..ecfafcd93536 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7734,10 +7734,17 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
vcpu->arch.smi_pending = false;
++vcpu->arch.smi_count;
enter_smm(vcpu);
- } else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
- --vcpu->arch.nmi_pending;
- vcpu->arch.nmi_injected = true;
- kvm_x86_ops.set_nmi(vcpu);
+ } else if (vcpu->arch.nmi_pending) {
+ if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
+ r = kvm_x86_ops.check_nested_events(vcpu);
+ if (r != 0)
+ return r;
+ }
+ if (kvm_x86_ops.nmi_allowed(vcpu)) {
+ --vcpu->arch.nmi_pending;
+ vcpu->arch.nmi_injected = true;
+ kvm_x86_ops.set_nmi(vcpu);
+ }
} else if (kvm_cpu_has_injectable_intr(vcpu)) {
/*
* Because interrupts can be injected asynchronously, we are
--
2.20.1
* Re: [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI
2020-04-14 20:11 [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Cathy Avery
2020-04-14 20:11 ` [PATCH 1/2] " Cathy Avery
2020-04-14 20:11 ` [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI Cathy Avery
@ 2020-04-15 9:49 ` Vitaly Kuznetsov
2020-04-15 12:45 ` Paolo Bonzini
2020-04-23 13:43 ` Paolo Bonzini
From: Vitaly Kuznetsov @ 2020-04-15 9:49 UTC (permalink / raw)
To: Cathy Avery, pbonzini; +Cc: wei.huang2, Jim Mattson, linux-kernel, kvm
Cathy Avery <cavery@redhat.com> writes:
> Moved nested NMI exit to new check_nested_events.
> The second patch fixes the NMI pending race condition that now occurs.
>
> Cathy Avery (2):
> KVM: SVM: Implement check_nested_events for NMI
> KVM: x86: check_nested_events if there is an injectable NMI
>
Not directly related to this series but I just noticed that we have the
following comment in inject_pending_event():
	/* try to inject new event if pending */
	if (vcpu->arch.exception.pending) {
		...
		if (vcpu->arch.exception.nr == DB_VECTOR) {
			/*
			 * This code assumes that nSVM doesn't use
			 * check_nested_events(). If it does, the
			 * DR6/DR7 changes should happen before L1
			 * gets a #VMEXIT for an intercepted #DB in
			 * L2. (Under VMX, on the other hand, the
			 * DR6/DR7 changes should not happen in the
			 * event of a VM-exit to L1 for an intercepted
			 * #DB in L2.)
			 */
			kvm_deliver_exception_payload(vcpu);
			if (vcpu->arch.dr7 & DR7_GD) {
				vcpu->arch.dr7 &= ~DR7_GD;
				kvm_update_dr7(vcpu);
			}
		}

		kvm_x86_ops.queue_exception(vcpu);
	}
As we already implement check_nested_events() on SVM, do we need to do
anything here? CC: Jim who added the guardian (f10c729ff9652).
> arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/svm/svm.h | 15 ---------------
> arch/x86/kvm/x86.c | 15 +++++++++++----
> 4 files changed, 33 insertions(+), 20 deletions(-)
--
Vitaly
* Re: [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI
2020-04-15 9:49 ` [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Vitaly Kuznetsov
@ 2020-04-15 12:45 ` Paolo Bonzini
From: Paolo Bonzini @ 2020-04-15 12:45 UTC (permalink / raw)
To: Vitaly Kuznetsov, Cathy Avery; +Cc: wei.huang2, Jim Mattson, linux-kernel, kvm
On 15/04/20 11:49, Vitaly Kuznetsov wrote:
> Not directly related to this series but I just noticed that we have the
> following comment in inject_pending_event():
>
> 	/* try to inject new event if pending */
> 	if (vcpu->arch.exception.pending) {
> 		...
> 		if (vcpu->arch.exception.nr == DB_VECTOR) {
> 			/*
> 			 * This code assumes that nSVM doesn't use
> 			 * check_nested_events(). If it does, the
> 			 * DR6/DR7 changes should happen before L1
> 			 * gets a #VMEXIT for an intercepted #DB in
> 			 * L2. (Under VMX, on the other hand, the
> 			 * DR6/DR7 changes should not happen in the
> 			 * event of a VM-exit to L1 for an intercepted
> 			 * #DB in L2.)
> 			 */
> 			kvm_deliver_exception_payload(vcpu);
> 			if (vcpu->arch.dr7 & DR7_GD) {
> 				vcpu->arch.dr7 &= ~DR7_GD;
> 				kvm_update_dr7(vcpu);
> 			}
> 		}
>
> 		kvm_x86_ops.queue_exception(vcpu);
> 	}
>
> As we already implement check_nested_events() on SVM, do we need to do
> anything here? CC: Jim who added the guardian (f10c729ff9652).
It's (still) okay because we don't use check_nested_events() for exceptions.
Regarding this series, I think we should implement the NMI injection
test for VMX and see if it requires the same change as patch 2.
Paolo
* Re: [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI
2020-04-14 20:11 [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Cathy Avery
2020-04-15 9:49 ` [PATCH 0/2] KVM: SVM: Implement check_nested_events for NMI Vitaly Kuznetsov
@ 2020-04-23 13:43 ` Paolo Bonzini
From: Paolo Bonzini @ 2020-04-23 13:43 UTC (permalink / raw)
To: Cathy Avery, linux-kernel, kvm; +Cc: vkuznets, wei.huang2
On 14/04/20 22:11, Cathy Avery wrote:
> Moved nested NMI exit to new check_nested_events.
> The second patch fixes the NMI pending race condition that now occurs.
>
> Cathy Avery (2):
> KVM: SVM: Implement check_nested_events for NMI
> KVM: x86: check_nested_events if there is an injectable NMI
>
> arch/x86/kvm/svm/nested.c | 21 +++++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/svm/svm.h | 15 ---------------
> arch/x86/kvm/x86.c | 15 +++++++++++----
> 4 files changed, 33 insertions(+), 20 deletions(-)
>
Thanks, I've integrated this in Sean's event reinjection series and will
post soon my own take on both.
Paolo
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-14 20:11 ` [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI Cathy Avery
@ 2020-04-23 14:42 ` Sean Christopherson
2020-04-23 15:10 ` Paolo Bonzini
2020-04-23 15:36 ` Cathy Avery
From: Sean Christopherson @ 2020-04-23 14:42 UTC (permalink / raw)
To: Cathy Avery; +Cc: linux-kernel, kvm, pbonzini, vkuznets, wei.huang2
On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
> With NMI intercept moved to check_nested_events there is a race
> condition where vcpu->arch.nmi_pending is set late causing
How is nmi_pending set late? The KVM_{G,S}ET_VCPU_EVENTS paths can't set
it because the current KVM_RUN thread holds the mutex, and the only other
call to process_nmi() is in the request path of vcpu_enter_guest, which has
already executed.
> the execution of check_nested_events to not setup correctly
> for nested.exit_required. A second call to check_nested_events
> allows the injectable nmi to be detected in time in order to
> require immediate exit from L2 to L1.
>
> Signed-off-by: Cathy Avery <cavery@redhat.com>
> ---
> arch/x86/kvm/x86.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 027dfd278a97..ecfafcd93536 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7734,10 +7734,17 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
> vcpu->arch.smi_pending = false;
> ++vcpu->arch.smi_count;
> enter_smm(vcpu);
> - } else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
> - --vcpu->arch.nmi_pending;
> - vcpu->arch.nmi_injected = true;
> - kvm_x86_ops.set_nmi(vcpu);
> + } else if (vcpu->arch.nmi_pending) {
> + if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
> + r = kvm_x86_ops.check_nested_events(vcpu);
> + if (r != 0)
> + return r;
> + }
> + if (kvm_x86_ops.nmi_allowed(vcpu)) {
> + --vcpu->arch.nmi_pending;
> + vcpu->arch.nmi_injected = true;
> + kvm_x86_ops.set_nmi(vcpu);
> + }
> } else if (kvm_cpu_has_injectable_intr(vcpu)) {
> /*
> * Because interrupts can be injected asynchronously, we are
> --
> 2.20.1
>
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 14:42 ` Sean Christopherson
@ 2020-04-23 15:10 ` Paolo Bonzini
2020-04-23 15:35 ` Sean Christopherson
2020-04-23 15:36 ` Cathy Avery
From: Paolo Bonzini @ 2020-04-23 15:10 UTC (permalink / raw)
To: Sean Christopherson, Cathy Avery; +Cc: linux-kernel, kvm, vkuznets, wei.huang2
On 23/04/20 16:42, Sean Christopherson wrote:
> On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
>> With NMI intercept moved to check_nested_events there is a race
>> condition where vcpu->arch.nmi_pending is set late causing
> How is nmi_pending set late? The KVM_{G,S}ET_VCPU_EVENTS paths can't set
> it because the current KVM_RUN thread holds the mutex, and the only other
> call to process_nmi() is in the request path of vcpu_enter_guest, which has
> already executed.
>
I think the actual cause is priority inversion between NMI and
interrupts, because NMI is added last in patch 1.
Paolo
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 15:10 ` Paolo Bonzini
@ 2020-04-23 15:35 ` Sean Christopherson
2020-04-23 15:43 ` Paolo Bonzini
From: Sean Christopherson @ 2020-04-23 15:35 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Cathy Avery, linux-kernel, kvm, vkuznets, wei.huang2
On Thu, Apr 23, 2020 at 05:10:45PM +0200, Paolo Bonzini wrote:
> On 23/04/20 16:42, Sean Christopherson wrote:
> > On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
> >> With NMI intercept moved to check_nested_events there is a race
> >> condition where vcpu->arch.nmi_pending is set late causing
> > How is nmi_pending set late? The KVM_{G,S}ET_VCPU_EVENTS paths can't set
> > it because the current KVM_RUN thread holds the mutex, and the only other
> > call to process_nmi() is in the request path of vcpu_enter_guest, which has
> > already executed.
> >
>
> I think the actual cause is priority inversion between NMI and
> interrupts, because NMI is added last in patch 1.
Ah, that makes more sense. I stared/glared at this exact code for a long
while and came to the conclusion that the "late" behavior was exclusive to
interrupts; it would have been a shame if all that glaring was for naught.
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 14:42 ` Sean Christopherson
2020-04-23 15:10 ` Paolo Bonzini
@ 2020-04-23 15:36 ` Cathy Avery
2020-04-23 15:45 ` Paolo Bonzini
From: Cathy Avery @ 2020-04-23 15:36 UTC (permalink / raw)
To: Sean Christopherson; +Cc: linux-kernel, kvm, pbonzini, vkuznets, wei.huang2
On 4/23/20 10:42 AM, Sean Christopherson wrote:
> On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
>> With NMI intercept moved to check_nested_events there is a race
>> condition where vcpu->arch.nmi_pending is set late causing
> How is nmi_pending set late? The KVM_{G,S}ET_VCPU_EVENTS paths can't set
> it because the current KVM_RUN thread holds the mutex, and the only other
> call to process_nmi() is in the request path of vcpu_enter_guest, which has
> already executed.
You will have to forgive me as I am new to KVM and any help would be
most appreciated. This is what I noticed when an NMI intercept is
processed now that it is implemented in check_nested_events.
When check_nested_events is called from inject_pending_event,
check_nested_events needs to have already been called (from
kvm_vcpu_running with vcpu->arch.nmi_pending = 1) to set up the NMI
intercept and set svm->nested.exit_required. Otherwise we do not exit
from the second check_nested_events call (code below) with a return of
-EBUSY, which allows us to immediately vmexit.
	/*
	 * Call check_nested_events() even if we reinjected a previous event
	 * in order for caller to determine if it should require immediate-exit
	 * from L2 to L1 due to pending L1 events which require exit
	 * from L2 to L1.
	 */
	if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
		r = kvm_x86_ops.check_nested_events(vcpu);
		if (r != 0)
			return r;
	}
Unfortunately, when kvm_vcpu_running is called, vcpu->arch.nmi_pending
is not yet set.
Here is the trace snippet (with some debug) without the second call to
check_nested_events.
Thanks,
Cathy
qemu-system-x86-2029 [040] 232.168269: kvm_entry: vcpu 0
qemu-system-x86-2029 [040] 232.168271: kvm_exit: reason EXIT_MSR rip 0x405371 info 1 0
qemu-system-x86-2029 [040] 232.168272: kvm_nested_vmexit: rip 405371 reason EXIT_MSR info1 1 info2 0 int_info 0 int_info_err 0
qemu-system-x86-2029 [040] 232.168273: kvm_apic: apic_write APIC_ICR2 = 0x0
qemu-system-x86-2029 [040] 232.168274: kvm_apic: apic_write APIC_ICR = 0x44400
qemu-system-x86-2029 [040] 232.168275: kvm_apic_ipi: dst 0 vec 0 (NMI|physical|assert|edge|self)
qemu-system-x86-2029 [040] 232.168277: kvm_apic_accept_irq: apicid 0 vec 0 (NMI|edge)
qemu-system-x86-2029 [040] 232.168278: kvm_msr: msr_write 830 = 0x44400
qemu-system-x86-2029 [040] 232.168279: bprint: svm_check_nested_events: svm_check_nested_events reinj = 0, exit_req = 0
qemu-system-x86-2029 [040] 232.168279: bprint: svm_check_nested_events: svm_check_nested_events nmi pending = 0
qemu-system-x86-2029 [040] 232.168279: bputs: vcpu_enter_guest: inject_pending_event 1
qemu-system-x86-2029 [040] 232.168279: bprint: svm_check_nested_events: svm_check_nested_events reinj = 0, exit_req = 0
qemu-system-x86-2029 [040] 232.168279: bprint: svm_check_nested_events: svm_check_nested_events nmi pending = 1
qemu-system-x86-2029 [040] 232.168280: bprint: svm_nmi_allowed: svm_nmi_allowed ret 1
qemu-system-x86-2029 [040] 232.168280: bputs: svm_inject_nmi: svm_inject_nmi
qemu-system-x86-2029 [040] 232.168280: bprint: vcpu_enter_guest: nmi_pending 0
qemu-system-x86-2029 [040] 232.168281: kvm_entry: vcpu 0
qemu-system-x86-2029 [040] 232.168282: kvm_exit: reason EXIT_NMI rip 0x405373 info 1 0
qemu-system-x86-2029 [040] 232.168284: kvm_nested_vmexit_inject: reason EXIT_NMI info1 1 info2 0 int_info 0 int_info_err 0
qemu-system-x86-2029 [040] 232.168285: kvm_entry: vcpu 0
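The timing described above can be sketched as a small userspace model (this
is a hypothetical illustration, not KVM code; struct and function names are
invented): the first check runs before the self-IPI NMI is latched, so only
a second check, made once nmi_pending is visible, can schedule the nested
vmexit.

```c
#include <stdbool.h>

/* Minimal stand-in for the vCPU state involved -- names are invented. */
struct vcpu_model {
	bool nmi_pending;	/* latched NMI, set between the two checks */
	bool exit_on_nmi;	/* L1 intercepts NMI */
	bool exit_required;	/* nested vmexit scheduled */
};

/* Simplified stand-in for svm_check_nested_events(). */
static void model_check_nested_events(struct vcpu_model *v)
{
	if (v->nmi_pending && v->exit_on_nmi)
		v->exit_required = true;
}

/*
 * Returns true if the pending NMI was turned into a nested vmexit.
 * The first check runs before the NMI is latched and so misses it;
 * only a second check, after the NMI is pending, catches it.
 */
static bool model_inject(bool second_call)
{
	struct vcpu_model v = { .exit_on_nmi = true };

	model_check_nested_events(&v);		/* early check: sees nmi_pending == 0 */
	v.nmi_pending = true;			/* NMI latched "late" */
	if (second_call)
		model_check_nested_events(&v);	/* second check from the inject path */
	return v.exit_required;
}
```

With only the early check the exit is missed; the second call is what makes
the pending NMI visible in time.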
>> the execution of check_nested_events to not setup correctly
>> for nested.exit_required. A second call to check_nested_events
>> allows the injectable nmi to be detected in time in order to
>> require immediate exit from L2 to L1.
>>
>> Signed-off-by: Cathy Avery <cavery@redhat.com>
>> ---
>> arch/x86/kvm/x86.c | 15 +++++++++++----
>> 1 file changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 027dfd278a97..ecfafcd93536 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -7734,10 +7734,17 @@ static int inject_pending_event(struct kvm_vcpu *vcpu)
>> vcpu->arch.smi_pending = false;
>> ++vcpu->arch.smi_count;
>> enter_smm(vcpu);
>> - } else if (vcpu->arch.nmi_pending && kvm_x86_ops.nmi_allowed(vcpu)) {
>> - --vcpu->arch.nmi_pending;
>> - vcpu->arch.nmi_injected = true;
>> - kvm_x86_ops.set_nmi(vcpu);
>> + } else if (vcpu->arch.nmi_pending) {
>> + if (is_guest_mode(vcpu) && kvm_x86_ops.check_nested_events) {
>> + r = kvm_x86_ops.check_nested_events(vcpu);
>> + if (r != 0)
>> + return r;
>> + }
>> + if (kvm_x86_ops.nmi_allowed(vcpu)) {
>> + --vcpu->arch.nmi_pending;
>> + vcpu->arch.nmi_injected = true;
>> + kvm_x86_ops.set_nmi(vcpu);
>> + }
>> } else if (kvm_cpu_has_injectable_intr(vcpu)) {
>> /*
>> * Because interrupts can be injected asynchronously, we are
>> --
>> 2.20.1
>>
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 15:35 ` Sean Christopherson
@ 2020-04-23 15:43 ` Paolo Bonzini
2020-04-23 18:32 ` Paolo Bonzini
From: Paolo Bonzini @ 2020-04-23 15:43 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Cathy Avery, linux-kernel, kvm, vkuznets, wei.huang2
On 23/04/20 17:35, Sean Christopherson wrote:
> On Thu, Apr 23, 2020 at 05:10:45PM +0200, Paolo Bonzini wrote:
>> On 23/04/20 16:42, Sean Christopherson wrote:
>>> On Tue, Apr 14, 2020 at 04:11:07PM -0400, Cathy Avery wrote:
>>>> With NMI intercept moved to check_nested_events there is a race
>>>> condition where vcpu->arch.nmi_pending is set late causing
>>> How is nmi_pending set late? The KVM_{G,S}ET_VCPU_EVENTS paths can't set
>>> it because the current KVM_RUN thread holds the mutex, and the only other
>>> call to process_nmi() is in the request path of vcpu_enter_guest, which has
>>> already executed.
>>>
>> I think the actual cause is priority inversion between NMI and
>> interrupts, because NMI is added last in patch 1.
> Ah, that makes more sense. I stared/glared at this exact code for a long
> while and came to the conclusion that the "late" behavior was exclusive to
> interrupts, would have been a shame if all that glaring was for naught.
>
Ah no, it's a bug in Cathy's patch and it's a weird one.
The problem is that on AMD you exit guest mode with the NMI latched and
GIF=0. So check_nested_events should enable the NMI window in addition
to causing a vmexit.
So why does it work? Because on AMD we don't have (yet)
nested_run_pending, so we just check if we already have a vmexit
scheduled and if so return -EBUSY. The second call causes
inject_pending_event to return -EBUSY and thus go through KVM_REQ_EVENT
again, which enables the NMI window.
Paolo
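The -EBUSY path Paolo describes can be sketched as a toy userspace model
(a hypothetical illustration only, not KVM's actual flow; all names are
invented): once a vmexit is already scheduled, a further check returns
-EBUSY, which propagates out of the inject path and causes the caller to
go through the event pass again and open the NMI window.

```c
#include <stdbool.h>
#include <errno.h>

/* Invented names modeling the state involved in the -EBUSY path. */
struct vcpu_state {
	bool nmi_pending;
	bool exit_required;	/* nested vmexit already scheduled */
	bool nmi_window_open;	/* NMI-window exiting enabled */
};

static int model_check_nested_events(struct vcpu_state *v)
{
	if (v->exit_required)
		return -EBUSY;		/* vmexit pending: cannot inject now */
	if (v->nmi_pending)
		v->exit_required = true;	/* schedule the NMI vmexit */
	return 0;
}

/* Stand-in for the second check made from the inject path. */
static int model_inject_pending_event(struct vcpu_state *v)
{
	return model_check_nested_events(v);
}

/* Caller loop: on -EBUSY, re-request events and open the NMI window. */
static void model_enter_guest(struct vcpu_state *v)
{
	model_check_nested_events(v);		/* first pass schedules the vmexit */
	if (model_inject_pending_event(v) == -EBUSY)
		v->nmi_window_open = true;	/* models KVM_REQ_EVENT + NMI window */
}
```

The second call is what turns the already-scheduled vmexit into -EBUSY and
thus, indirectly, into an open NMI window.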
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 15:36 ` Cathy Avery
@ 2020-04-23 15:45 ` Paolo Bonzini
2020-04-23 18:33 ` Sean Christopherson
From: Paolo Bonzini @ 2020-04-23 15:45 UTC (permalink / raw)
To: Cathy Avery, Sean Christopherson; +Cc: linux-kernel, kvm, vkuznets, wei.huang2
On 23/04/20 17:36, Cathy Avery wrote:
>
> You will have to forgive me as I am new to KVM and any help would be
> most appreciated.
No problem---this is a _really_ hairy part. At least every time we make
some changes it suddenly starts making more sense (both hacks and bugs
decrease over time).
Paolo
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 15:43 ` Paolo Bonzini
@ 2020-04-23 18:32 ` Paolo Bonzini
From: Paolo Bonzini @ 2020-04-23 18:32 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Cathy Avery, linux-kernel, kvm, vkuznets, wei.huang2
On 23/04/20 17:43, Paolo Bonzini wrote:
>>
> Ah no, it's a bug in Cathy's patch and it's a weird one.
>
> The problem is that on AMD you exit guest mode with the NMI latched and
> GIF=0. So check_nested_events should enable the NMI window in addition
> to causing a vmexit.
>
> So why does it work? Because on AMD we don't have (yet)
> nested_run_pending, so we just check if we already have a vmexit
> scheduled and if so return -EBUSY. The second call causes
> inject_pending_event to return -EBUSY and thus go through KVM_REQ_EVENT
> again, which enables the NMI window.
... and this means that suddenly your event handling series has become
twice as large so I'm taking it over.
Paolo
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 15:45 ` Paolo Bonzini
@ 2020-04-23 18:33 ` Sean Christopherson
2020-04-23 18:47 ` Paolo Bonzini
From: Sean Christopherson @ 2020-04-23 18:33 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Cathy Avery, linux-kernel, kvm, vkuznets, wei.huang2
On Thu, Apr 23, 2020 at 05:45:07PM +0200, Paolo Bonzini wrote:
> On 23/04/20 17:36, Cathy Avery wrote:
> >
> > You will have to forgive me as I am new to KVM and any help would be
> > most appreciated.
>
> No problem---this is a _really_ hairy part. At least every time we make
> some changes it suddenly starts making more sense (both hacks and bugs
> decrease over time).
LOL, no kidding, what sadist gave Cathy nested NMI as a ramp task? :-D
* Re: [PATCH 2/2] KVM: x86: check_nested_events if there is an injectable NMI
2020-04-23 18:33 ` Sean Christopherson
@ 2020-04-23 18:47 ` Paolo Bonzini
From: Paolo Bonzini @ 2020-04-23 18:47 UTC (permalink / raw)
To: Sean Christopherson; +Cc: Cathy Avery, linux-kernel, kvm, vkuznets, wei.huang2
On 23/04/20 20:33, Sean Christopherson wrote:
> On Thu, Apr 23, 2020 at 05:45:07PM +0200, Paolo Bonzini wrote:
>> On 23/04/20 17:36, Cathy Avery wrote:
>>>
>>> You will have to forgive me as I am new to KVM and any help would be
>>> most appreciated.
>>
>> No problem---this is a _really_ hairy part. At least every time we make
>> some changes it suddenly starts making more sense (both hacks and bugs
>> decrease over time).
>
> LOL, no kidding, what sadist gave Cathy nested NMI as a ramp task? :-D
The suggestion was to write more tests, she decided to extend that to
fixing them!!
Paolo