kvm.vger.kernel.org archive mirror
* [PATCH 0/4] KVM: nSVM: first step towards fixing event injection
@ 2020-03-05 10:13 Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-05 10:13 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: cavery, vkuznets, jan.kiszka, wei.huang2

Event injection in nSVM does not use check_nested_events, which means it
is basically broken.  As a first step, this fixes interrupt injection
which is probably the most complicated case due to the interactions
with V_INTR_MASKING and the host EFLAGS.IF.

This series fixes Cathy's test case, which I sent earlier.

Paolo

Paolo Bonzini (4):
  KVM: nSVM: do not change host intercepts while nested VM is running
  KVM: nSVM: ignore L1 interrupt window while running L2 with
    V_INTR_MASKING=1
  KVM: nSVM: implement check_nested_events for interrupts
  KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2

 arch/x86/kvm/svm.c | 172 ++++++++++++++++++++++++++++++++---------------------
 1 file changed, 103 insertions(+), 69 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running
  2020-03-05 10:13 [PATCH 0/4] KVM: nSVM: first step towards fixing event injection Paolo Bonzini
@ 2020-03-05 10:13 ` Paolo Bonzini
  2020-03-06 14:42   ` Vitaly Kuznetsov
  2020-03-05 10:13 ` [PATCH 2/4] KVM: nSVM: ignore L1 interrupt window while running L2 with V_INTR_MASKING=1 Paolo Bonzini
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-05 10:13 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: cavery, vkuznets, jan.kiszka, wei.huang2

Instead of touching the host intercepts so that the bitwise OR in
recalc_intercepts just works, mask away uninteresting intercepts
directly in recalc_intercepts.

This is cleaner and keeps the logic in one place, even for intercepts
that can change while L2 is running.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
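Not part of the patch: the ordering above (mask L1's CR8 intercepts
first, OR in L2's afterwards) can be demonstrated standalone.  The bit
positions below are hypothetical stand-ins, not the real
INTERCEPT_CR8_* values:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical bit positions, standing in for INTERCEPT_CR8_READ/WRITE */
#define CR8_READ  (1u << 4)
#define CR8_WRITE (1u << 12)

/*
 * Sketch of the recalc_intercepts ordering: start from L1's (host)
 * intercepts, mask the CR8 bits away while V_INTR_MASKING is active,
 * then OR in L2's own intercepts.
 */
static uint32_t recalc_cr(uint32_t host, uint32_t guest, int vintr_mask)
{
	uint32_t c = host;

	if (vintr_mask)
		c &= ~(CR8_READ | CR8_WRITE);
	return c | guest;	/* L2's own intercepts always survive */
}

int main(void)
{
	/* L1 intercepts CR8, L2 does not: the merged bitmap drops L1's bits */
	assert((recalc_cr(CR8_READ | CR8_WRITE, 0, 1) &
		(CR8_READ | CR8_WRITE)) == 0);
	/* If L2 itself intercepts CR8 writes, that bit is kept */
	assert(recalc_cr(CR8_READ, CR8_WRITE, 1) == CR8_WRITE);
	/* Without V_INTR_MASKING, L1's bits pass through unchanged */
	assert(recalc_cr(CR8_READ, 0, 0) == CR8_READ);
	puts("ok");
	return 0;
}
```
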
 arch/x86/kvm/svm.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 247e31d21b96..14cb5c194008 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -519,10 +519,24 @@ static void recalc_intercepts(struct vcpu_svm *svm)
 	h = &svm->nested.hsave->control;
 	g = &svm->nested;
 
-	c->intercept_cr = h->intercept_cr | g->intercept_cr;
-	c->intercept_dr = h->intercept_dr | g->intercept_dr;
-	c->intercept_exceptions = h->intercept_exceptions | g->intercept_exceptions;
-	c->intercept = h->intercept | g->intercept;
+	c->intercept_cr = h->intercept_cr;
+	c->intercept_dr = h->intercept_dr;
+	c->intercept_exceptions = h->intercept_exceptions;
+	c->intercept = h->intercept;
+
+	if (svm->vcpu.arch.hflags & HF_VINTR_MASK) {
+		/* We only want the cr8 intercept bits of L1 */
+		c->intercept_cr &= ~(1U << INTERCEPT_CR8_READ);
+		c->intercept_cr &= ~(1U << INTERCEPT_CR8_WRITE);
+	}
+
+	/* We don't want to see VMMCALLs from a nested guest */
+	c->intercept &= ~(1ULL << INTERCEPT_VMMCALL);
+
+	c->intercept_cr |= g->intercept_cr;
+	c->intercept_dr |= g->intercept_dr;
+	c->intercept_exceptions |= g->intercept_exceptions;
+	c->intercept |= g->intercept;
 }
 
 static inline struct vmcb *get_host_vmcb(struct vcpu_svm *svm)
@@ -3590,15 +3604,6 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	else
 		svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
 
-	if (svm->vcpu.arch.hflags & HF_VINTR_MASK) {
-		/* We only want the cr8 intercept bits of the guest */
-		clr_cr_intercept(svm, INTERCEPT_CR8_READ);
-		clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
-	}
-
-	/* We don't want to see VMMCALLs from a nested guest */
-	clr_intercept(svm, INTERCEPT_VMMCALL);
-
 	svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
 	svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;
 
-- 
1.8.3.1




* [PATCH 2/4] KVM: nSVM: ignore L1 interrupt window while running L2 with V_INTR_MASKING=1
  2020-03-05 10:13 [PATCH 0/4] KVM: nSVM: first step towards fixing event injection Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running Paolo Bonzini
@ 2020-03-05 10:13 ` Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2 Paolo Bonzini
  3 siblings, 0 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-05 10:13 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: cavery, vkuznets, jan.kiszka, wei.huang2

If a nested VM is started while an IRQ was pending and with
V_INTR_MASKING=1, the behavior of the guest depends on host IF.  If it
is 1, the VM should exit immediately, before executing the first
instruction of the guest, because VMRUN sets GIF back to 1.

If it is 0 and the host has VGIF, however, at the time of the VMRUN
instruction L0 is running the guest with a pending interrupt window
request.  This interrupt window request is completely irrelevant to
L2, since IF only controls virtual interrupts, so this patch drops
INTERCEPT_VINTR from the VMCB while running L2 under these circumstances.
To simplify the code, both steps of enabling the interrupt window
(setting the VINTR intercept and requesting a fake virtual interrupt
in svm_inject_irq) are grouped in the svm_set_vintr function, and
likewise for dismissing the interrupt window request in svm_clear_vintr.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm.c | 55 ++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 14cb5c194008..25827b79cf96 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -528,6 +528,13 @@ static void recalc_intercepts(struct vcpu_svm *svm)
 		/* We only want the cr8 intercept bits of L1 */
 		c->intercept_cr &= ~(1U << INTERCEPT_CR8_READ);
 		c->intercept_cr &= ~(1U << INTERCEPT_CR8_WRITE);
+
+		/*
+		 * Once running L2 with HF_VINTR_MASK, EFLAGS.IF does not
+		 * affect any interrupt we may want to inject; therefore,
+		 * interrupt window vmexits are irrelevant to L0.
+		 */
+		c->intercept &= ~(1ULL << INTERCEPT_VINTR);
 	}
 
 	/* We don't want to see VMMCALLs from a nested guest */
@@ -641,6 +648,11 @@ static inline void clr_intercept(struct vcpu_svm *svm, int bit)
 	recalc_intercepts(svm);
 }
 
+static inline bool is_intercept(struct vcpu_svm *svm, int bit)
+{
+	return (svm->vmcb->control.intercept & (1ULL << bit)) != 0;
+}
+
 static inline bool vgif_enabled(struct vcpu_svm *svm)
 {
 	return !!(svm->vmcb->control.int_ctl & V_GIF_ENABLE_MASK);
@@ -2438,14 +2450,38 @@ static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 	}
 }
 
+static inline void svm_enable_vintr(struct vcpu_svm *svm)
+{
+	struct vmcb_control_area *control;
+
+	/* The following fields are ignored when AVIC is enabled */
+	WARN_ON(kvm_vcpu_apicv_active(&svm->vcpu));
+
+	/*
+	 * This is just a dummy VINTR to actually cause a vmexit to happen.
+	 * Actual injection of virtual interrupts happens through EVENTINJ.
+	 */
+	control = &svm->vmcb->control;
+	control->int_vector = 0x0;
+	control->int_ctl &= ~V_INTR_PRIO_MASK;
+	control->int_ctl |= V_IRQ_MASK |
+		((/*control->int_vector >> 4*/ 0xf) << V_INTR_PRIO_SHIFT);
+	mark_dirty(svm->vmcb, VMCB_INTR);
+}
+
 static void svm_set_vintr(struct vcpu_svm *svm)
 {
 	set_intercept(svm, INTERCEPT_VINTR);
+	if (is_intercept(svm, INTERCEPT_VINTR))
+		svm_enable_vintr(svm);
 }
 
 static void svm_clear_vintr(struct vcpu_svm *svm)
 {
 	clr_intercept(svm, INTERCEPT_VINTR);
+
+	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
+	mark_dirty(svm->vmcb, VMCB_INTR);
 }
 
 static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg)
@@ -3833,11 +3869,8 @@ static int clgi_interception(struct vcpu_svm *svm)
 	disable_gif(svm);
 
 	/* After a CLGI no interrupts should come */
-	if (!kvm_vcpu_apicv_active(&svm->vcpu)) {
+	if (!kvm_vcpu_apicv_active(&svm->vcpu))
 		svm_clear_vintr(svm);
-		svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
-		mark_dirty(svm->vmcb, VMCB_INTR);
-	}
 
 	return ret;
 }
@@ -5123,19 +5156,6 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
 	++vcpu->stat.nmi_injections;
 }
 
-static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
-{
-	struct vmcb_control_area *control;
-
-	/* The following fields are ignored when AVIC is enabled */
-	control = &svm->vmcb->control;
-	control->int_vector = irq;
-	control->int_ctl &= ~V_INTR_PRIO_MASK;
-	control->int_ctl |= V_IRQ_MASK |
-		((/*control->int_vector >> 4*/ 0xf) << V_INTR_PRIO_SHIFT);
-	mark_dirty(svm->vmcb, VMCB_INTR);
-}
-
 static void svm_set_irq(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -5559,7 +5579,6 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 		 */
 		svm_toggle_avic_for_irq_window(vcpu, false);
 		svm_set_vintr(svm);
-		svm_inject_irq(svm, 0x0);
 	}
 }
 
-- 
1.8.3.1




* [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts
  2020-03-05 10:13 [PATCH 0/4] KVM: nSVM: first step towards fixing event injection Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running Paolo Bonzini
  2020-03-05 10:13 ` [PATCH 2/4] KVM: nSVM: ignore L1 interrupt window while running L2 with V_INTR_MASKING=1 Paolo Bonzini
@ 2020-03-05 10:13 ` Paolo Bonzini
  2020-03-05 23:51   ` kbuild test robot
  2020-03-07  1:18   ` kbuild test robot
  2020-03-05 10:13 ` [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2 Paolo Bonzini
  3 siblings, 2 replies; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-05 10:13 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: cavery, vkuznets, jan.kiszka, wei.huang2

The current implementation of physical interrupt delivery to a nested guest
is quite broken.  It relies on svm_interrupt_allowed returning false if
VINTR=1 so that the interrupt can be injected from enable_irq_window,
but this does not work for guests that do not intercept HLT or that rely
on clearing the host IF to block physical interrupts while L2 runs.

This patch could be split into two logical parts, but including only
one of them breaks tests, so I am combining both changes.

The first and easiest is simply to return true for svm_interrupt_allowed
if HF_VINTR_MASK is set and HIF is set.  This way the semantics of
svm_interrupt_allowed are respected: svm_interrupt_allowed being false
does not mean "call enable_irq_window", it means "interrupts cannot
be injected now".

After doing this, however, we need another place to inject the
interrupt, and fortunately we already have one, check_nested_events,
which nested SVM does not implement but which is meant exactly for this
purpose.  It is called before interrupts are injected, and it can
therefore do the L2->L1 switch while leaving inject_pending_event
none the wiser.

This patch was developed together with Cathy Avery, who wrote the
test and did a lot of the initial debugging.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm.c | 68 ++++++++++++++++++++++++------------------------------
 1 file changed, 30 insertions(+), 38 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 25827b79cf96..0d773406f7ac 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3133,43 +3133,36 @@ static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 	return vmexit;
 }
 
-/* This function returns true if it is save to enable the irq window */
-static inline bool nested_svm_intr(struct vcpu_svm *svm)
+static void nested_svm_intr(struct vcpu_svm *svm)
 {
-	if (!is_guest_mode(&svm->vcpu))
-		return true;
-
-	if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK))
-		return true;
-
-	if (!(svm->vcpu.arch.hflags & HF_HIF_MASK))
-		return false;
-
-	/*
-	 * if vmexit was already requested (by intercepted exception
-	 * for instance) do not overwrite it with "external interrupt"
-	 * vmexit.
-	 */
-	if (svm->nested.exit_required)
-		return false;
-
 	svm->vmcb->control.exit_code   = SVM_EXIT_INTR;
 	svm->vmcb->control.exit_info_1 = 0;
 	svm->vmcb->control.exit_info_2 = 0;
 
-	if (svm->nested.intercept & 1ULL) {
-		/*
-		 * The #vmexit can't be emulated here directly because this
-		 * code path runs with irqs and preemption disabled. A
-		 * #vmexit emulation might sleep. Only signal request for
-		 * the #vmexit here.
-		 */
-		svm->nested.exit_required = true;
-		trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
-		return false;
+	/* nested_svm_vmexit() is called afterwards, from handle_exit */
+	svm->nested.exit_required = true;
+	trace_kvm_nested_intr_vmexit(svm->vmcb->save.rip);
+}
+
+static bool nested_exit_on_intr(struct vcpu_svm *svm)
+{
+	return (svm->nested.intercept & 1ULL);
+}
+
+static int svm_check_nested_events(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+	bool block_nested_events =
+		kvm_event_needs_reinjection(vcpu) || svm->nested.exit_required;
+
+	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(svm)) {
+		if (block_nested_events)
+			return -EBUSY;
+		nested_svm_intr(svm);
+		return 0;
 	}
 
-	return true;
+	return 0;
 }
 
 /* This function returns true if it is save to enable the nmi window */
@@ -5544,18 +5537,15 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	struct vmcb *vmcb = svm->vmcb;
-	int ret;
 
 	if (!gif_set(svm) ||
 	     (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
 		return 0;
 
-	ret = !!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
-
-	if (is_guest_mode(vcpu))
-		return ret && !(svm->vcpu.arch.hflags & HF_VINTR_MASK);
-
-	return ret;
+	if (is_guest_mode(vcpu) && (svm->vcpu.arch.hflags & HF_VINTR_MASK))
+		return !!(svm->vcpu.arch.hflags & HF_HIF_MASK);
+	else
+		return !!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
 }
 
 static void enable_irq_window(struct kvm_vcpu *vcpu)
@@ -5570,7 +5560,7 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
 	 * enabled, the STGI interception will not occur. Enable the irq
 	 * window under the assumption that the hardware will set the GIF.
 	 */
-	if ((vgif_enabled(svm) || gif_set(svm)) && nested_svm_intr(svm)) {
+	if (vgif_enabled(svm) || gif_set(svm)) {
 		/*
 		 * IRQ window is not needed when AVIC is enabled,
 		 * unless we have pending ExtINT since it cannot be injected
@@ -7465,6 +7455,8 @@ static void svm_pre_update_apicv_exec_ctrl(struct kvm *kvm, bool activate)
 	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
 
 	.apic_init_signal_blocked = svm_apic_init_signal_blocked,
+
+	.check_nested_events = svm_check_nested_events,
 };
 
 static int __init svm_init(void)
-- 
1.8.3.1




* [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2
  2020-03-05 10:13 [PATCH 0/4] KVM: nSVM: first step towards fixing event injection Paolo Bonzini
                   ` (2 preceding siblings ...)
  2020-03-05 10:13 ` [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts Paolo Bonzini
@ 2020-03-05 10:13 ` Paolo Bonzini
  2020-03-05 10:46   ` Jan Kiszka
  3 siblings, 1 reply; 9+ messages in thread
From: Paolo Bonzini @ 2020-03-05 10:13 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: cavery, vkuznets, jan.kiszka, wei.huang2

This patch reproduces for nSVM the change that was made for nVMX in
commit b5861e5cf2fc ("KVM: nVMX: Fix loss of pending IRQ/NMI before
entering L2").  While I do not have a test that breaks without it, I
cannot see why it would not be necessary since all events are unblocked
by VMRUN's setting of GIF back to 1.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 0d773406f7ac..3df62257889a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3574,6 +3574,10 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
 static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 				 struct vmcb *nested_vmcb, struct kvm_host_map *map)
 {
+	bool evaluate_pending_interrupts =
+		is_intercept(svm, INTERCEPT_VINTR) ||
+		is_intercept(svm, INTERCEPT_IRET);
+
 	if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
 		svm->vcpu.arch.hflags |= HF_HIF_MASK;
 	else
@@ -3660,7 +3664,21 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 
 	svm->nested.vmcb = vmcb_gpa;
 
+	/*
+	 * If L1 had a pending IRQ/NMI before executing VMRUN,
+	 * which wasn't delivered because it was disallowed (e.g.
+	 * interrupts disabled), L0 needs to evaluate if this pending
+	 * event should cause an exit from L2 to L1 or be delivered
+	 * directly to L2.
+	 *
+	 * Usually this would be handled by the processor noticing an
+	 * IRQ/NMI window request.  However, VMRUN can unblock interrupts
+	 * by implicitly setting GIF, so force L0 to perform pending event
+	 * evaluation by requesting a KVM_REQ_EVENT.
+	 */
 	enable_gif(svm);
+	if (unlikely(evaluate_pending_interrupts))
+		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
 	mark_all_dirty(svm->vmcb);
 }
-- 
1.8.3.1



* Re: [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2
  2020-03-05 10:13 ` [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2 Paolo Bonzini
@ 2020-03-05 10:46   ` Jan Kiszka
  0 siblings, 0 replies; 9+ messages in thread
From: Jan Kiszka @ 2020-03-05 10:46 UTC (permalink / raw)
  To: Paolo Bonzini, linux-kernel, kvm; +Cc: cavery, vkuznets, wei.huang2

On 05.03.20 11:13, Paolo Bonzini wrote:
> This patch reproduces for nSVM the change that was made for nVMX in
> commit b5861e5cf2fc ("KVM: nVMX: Fix loss of pending IRQ/NMI before
> entering L2").  While I do not have a test that breaks without it, I
> cannot see why it would not be necessary since all events are unblocked
> by VMRUN's setting of GIF back to 1.

I suspect running Jailhouse enable/disable in a tight loop as a KVM guest 
can stress this fairly well. At least that was the case the last time I 
tried (4 years ago or so): it broke things.

Unfortunately, we have no up-to-date configuration for such a setup. 
Some old pieces are lying around here; I could try to hand them over if 
someone is interested and has the time that I currently lack.

Jan

> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   arch/x86/kvm/svm.c | 18 ++++++++++++++++++
>   1 file changed, 18 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 0d773406f7ac..3df62257889a 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3574,6 +3574,10 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
>   static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
>   				 struct vmcb *nested_vmcb, struct kvm_host_map *map)
>   {
> +	bool evaluate_pending_interrupts =
> +		is_intercept(svm, INTERCEPT_VINTR) ||
> +		is_intercept(svm, INTERCEPT_IRET);
> +
>   	if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
>   		svm->vcpu.arch.hflags |= HF_HIF_MASK;
>   	else
> @@ -3660,7 +3664,21 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
>   
>   	svm->nested.vmcb = vmcb_gpa;
>   
> +	/*
> +	 * If L1 had a pending IRQ/NMI before executing VMRUN,
> +	 * which wasn't delivered because it was disallowed (e.g.
> +	 * interrupts disabled), L0 needs to evaluate if this pending
> +	 * event should cause an exit from L2 to L1 or be delivered
> +	 * directly to L2.
> +	 *
> +	 * Usually this would be handled by the processor noticing an
> +	 * IRQ/NMI window request.  However, VMRUN can unblock interrupts
> +	 * by implicitly setting GIF, so force L0 to perform pending event
> +	 * evaluation by requesting a KVM_REQ_EVENT.
> +	 */
>   	enable_gif(svm);
> +	if (unlikely(evaluate_pending_interrupts))
> +		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
>   
>   	mark_all_dirty(svm->vmcb);
>   }
> 

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


* Re: [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts
  2020-03-05 10:13 ` [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts Paolo Bonzini
@ 2020-03-05 23:51   ` kbuild test robot
  2020-03-07  1:18   ` kbuild test robot
  1 sibling, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2020-03-05 23:51 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kbuild-all, linux-kernel, kvm, cavery, vkuznets, jan.kiszka, wei.huang2

[-- Attachment #1: Type: text/plain, Size: 7109 bytes --]

Hi Paolo,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/linux-next]
[also build test ERROR on linus/master v5.6-rc4 next-20200305]
[cannot apply to linux/master vhost/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Paolo-Bonzini/KVM-nSVM-first-step-towards-fixing-event-injection/20200306-015933
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: x86_64-rhel (attached as .config)
compiler: gcc-7 (Debian 7.5.0-5) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> arch/x86//kvm/svm.c:7538:25: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
     .check_nested_events = svm_check_nested_events,
                            ^~~~~~~~~~~~~~~~~~~~~~~
   arch/x86//kvm/svm.c:7538:25: note: (near initialization for 'svm_x86_ops.check_nested_events')
   cc1: all warnings being treated as errors

vim +7538 arch/x86//kvm/svm.c

  7396	
  7397	static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
  7398		.cpu_has_kvm_support = has_svm,
  7399		.disabled_by_bios = is_disabled,
  7400		.hardware_setup = svm_hardware_setup,
  7401		.hardware_unsetup = svm_hardware_teardown,
  7402		.check_processor_compatibility = svm_check_processor_compat,
  7403		.hardware_enable = svm_hardware_enable,
  7404		.hardware_disable = svm_hardware_disable,
  7405		.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
  7406		.has_emulated_msr = svm_has_emulated_msr,
  7407	
  7408		.vcpu_create = svm_create_vcpu,
  7409		.vcpu_free = svm_free_vcpu,
  7410		.vcpu_reset = svm_vcpu_reset,
  7411	
  7412		.vm_alloc = svm_vm_alloc,
  7413		.vm_free = svm_vm_free,
  7414		.vm_init = svm_vm_init,
  7415		.vm_destroy = svm_vm_destroy,
  7416	
  7417		.prepare_guest_switch = svm_prepare_guest_switch,
  7418		.vcpu_load = svm_vcpu_load,
  7419		.vcpu_put = svm_vcpu_put,
  7420		.vcpu_blocking = svm_vcpu_blocking,
  7421		.vcpu_unblocking = svm_vcpu_unblocking,
  7422	
  7423		.update_bp_intercept = update_bp_intercept,
  7424		.get_msr_feature = svm_get_msr_feature,
  7425		.get_msr = svm_get_msr,
  7426		.set_msr = svm_set_msr,
  7427		.get_segment_base = svm_get_segment_base,
  7428		.get_segment = svm_get_segment,
  7429		.set_segment = svm_set_segment,
  7430		.get_cpl = svm_get_cpl,
  7431		.get_cs_db_l_bits = kvm_get_cs_db_l_bits,
  7432		.decache_cr0_guest_bits = svm_decache_cr0_guest_bits,
  7433		.decache_cr4_guest_bits = svm_decache_cr4_guest_bits,
  7434		.set_cr0 = svm_set_cr0,
  7435		.set_cr3 = svm_set_cr3,
  7436		.set_cr4 = svm_set_cr4,
  7437		.set_efer = svm_set_efer,
  7438		.get_idt = svm_get_idt,
  7439		.set_idt = svm_set_idt,
  7440		.get_gdt = svm_get_gdt,
  7441		.set_gdt = svm_set_gdt,
  7442		.get_dr6 = svm_get_dr6,
  7443		.set_dr6 = svm_set_dr6,
  7444		.set_dr7 = svm_set_dr7,
  7445		.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
  7446		.cache_reg = svm_cache_reg,
  7447		.get_rflags = svm_get_rflags,
  7448		.set_rflags = svm_set_rflags,
  7449	
  7450		.tlb_flush = svm_flush_tlb,
  7451		.tlb_flush_gva = svm_flush_tlb_gva,
  7452	
  7453		.run = svm_vcpu_run,
  7454		.handle_exit = handle_exit,
  7455		.skip_emulated_instruction = skip_emulated_instruction,
  7456		.update_emulated_instruction = NULL,
  7457		.set_interrupt_shadow = svm_set_interrupt_shadow,
  7458		.get_interrupt_shadow = svm_get_interrupt_shadow,
  7459		.patch_hypercall = svm_patch_hypercall,
  7460		.set_irq = svm_set_irq,
  7461		.set_nmi = svm_inject_nmi,
  7462		.queue_exception = svm_queue_exception,
  7463		.cancel_injection = svm_cancel_injection,
  7464		.interrupt_allowed = svm_interrupt_allowed,
  7465		.nmi_allowed = svm_nmi_allowed,
  7466		.get_nmi_mask = svm_get_nmi_mask,
  7467		.set_nmi_mask = svm_set_nmi_mask,
  7468		.enable_nmi_window = enable_nmi_window,
  7469		.enable_irq_window = enable_irq_window,
  7470		.update_cr8_intercept = update_cr8_intercept,
  7471		.set_virtual_apic_mode = svm_set_virtual_apic_mode,
  7472		.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
  7473		.check_apicv_inhibit_reasons = svm_check_apicv_inhibit_reasons,
  7474		.pre_update_apicv_exec_ctrl = svm_pre_update_apicv_exec_ctrl,
  7475		.load_eoi_exitmap = svm_load_eoi_exitmap,
  7476		.hwapic_irr_update = svm_hwapic_irr_update,
  7477		.hwapic_isr_update = svm_hwapic_isr_update,
  7478		.sync_pir_to_irr = kvm_lapic_find_highest_irr,
  7479		.apicv_post_state_restore = avic_post_state_restore,
  7480	
  7481		.set_tss_addr = svm_set_tss_addr,
  7482		.set_identity_map_addr = svm_set_identity_map_addr,
  7483		.get_tdp_level = get_npt_level,
  7484		.get_mt_mask = svm_get_mt_mask,
  7485	
  7486		.get_exit_info = svm_get_exit_info,
  7487	
  7488		.get_lpage_level = svm_get_lpage_level,
  7489	
  7490		.cpuid_update = svm_cpuid_update,
  7491	
  7492		.rdtscp_supported = svm_rdtscp_supported,
  7493		.invpcid_supported = svm_invpcid_supported,
  7494		.mpx_supported = svm_mpx_supported,
  7495		.xsaves_supported = svm_xsaves_supported,
  7496		.umip_emulated = svm_umip_emulated,
  7497		.pt_supported = svm_pt_supported,
  7498		.pku_supported = svm_pku_supported,
  7499	
  7500		.set_supported_cpuid = svm_set_supported_cpuid,
  7501	
  7502		.has_wbinvd_exit = svm_has_wbinvd_exit,
  7503	
  7504		.read_l1_tsc_offset = svm_read_l1_tsc_offset,
  7505		.write_l1_tsc_offset = svm_write_l1_tsc_offset,
  7506	
  7507		.set_tdp_cr3 = set_tdp_cr3,
  7508	
  7509		.check_intercept = svm_check_intercept,
  7510		.handle_exit_irqoff = svm_handle_exit_irqoff,
  7511	
  7512		.request_immediate_exit = __kvm_request_immediate_exit,
  7513	
  7514		.sched_in = svm_sched_in,
  7515	
  7516		.pmu_ops = &amd_pmu_ops,
  7517		.deliver_posted_interrupt = svm_deliver_avic_intr,
  7518		.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
  7519		.update_pi_irte = svm_update_pi_irte,
  7520		.setup_mce = svm_setup_mce,
  7521	
  7522		.smi_allowed = svm_smi_allowed,
  7523		.pre_enter_smm = svm_pre_enter_smm,
  7524		.pre_leave_smm = svm_pre_leave_smm,
  7525		.enable_smi_window = enable_smi_window,
  7526	
  7527		.mem_enc_op = svm_mem_enc_op,
  7528		.mem_enc_reg_region = svm_register_enc_region,
  7529		.mem_enc_unreg_region = svm_unregister_enc_region,
  7530	
  7531		.nested_enable_evmcs = NULL,
  7532		.nested_get_evmcs_version = NULL,
  7533	
  7534		.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
  7535	
  7536		.apic_init_signal_blocked = svm_apic_init_signal_blocked,
  7537	
> 7538		.check_nested_events = svm_check_nested_events,
  7539	};
  7540	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 44253 bytes --]


* Re: [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running
  2020-03-05 10:13 ` [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running Paolo Bonzini
@ 2020-03-06 14:42   ` Vitaly Kuznetsov
  0 siblings, 0 replies; 9+ messages in thread
From: Vitaly Kuznetsov @ 2020-03-06 14:42 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: cavery, jan.kiszka, wei.huang2, linux-kernel, kvm

Paolo Bonzini <pbonzini@redhat.com> writes:

> Instead of touching the host intercepts so that the bitwise OR in
> recalc_intercepts just works, mask away uninteresting intercepts
> directly in recalc_intercepts.
>
> This is cleaner and keeps the logic in one place even for intercepts
> that can change even while L2 is running.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/svm.c | 31 ++++++++++++++++++-------------
>  1 file changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 247e31d21b96..14cb5c194008 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -519,10 +519,24 @@ static void recalc_intercepts(struct vcpu_svm *svm)
>  	h = &svm->nested.hsave->control;
>  	g = &svm->nested;
>  
> -	c->intercept_cr = h->intercept_cr | g->intercept_cr;
> -	c->intercept_dr = h->intercept_dr | g->intercept_dr;
> -	c->intercept_exceptions = h->intercept_exceptions | g->intercept_exceptions;
> -	c->intercept = h->intercept | g->intercept;
> +	c->intercept_cr = h->intercept_cr;
> +	c->intercept_dr = h->intercept_dr;
> +	c->intercept_exceptions = h->intercept_exceptions;
> +	c->intercept = h->intercept;
> +
> +	if (svm->vcpu.arch.hflags & HF_VINTR_MASK) {
> +		/* We only want the cr8 intercept bits of L1 */
> +		c->intercept_cr &= ~(1U << INTERCEPT_CR8_READ);
> +		c->intercept_cr &= ~(1U << INTERCEPT_CR8_WRITE);
> +	}
> +
> +	/* We don't want to see VMMCALLs from a nested guest */
> +	c->intercept &= ~(1ULL << INTERCEPT_VMMCALL);
> +
> +	c->intercept_cr |= g->intercept_cr;
> +	c->intercept_dr |= g->intercept_dr;
> +	c->intercept_exceptions |= g->intercept_exceptions;
> +	c->intercept |= g->intercept;
>  }
>  
>  static inline struct vmcb *get_host_vmcb(struct vcpu_svm *svm)
> @@ -3590,15 +3604,6 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
>  	else
>  		svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
>  
> -	if (svm->vcpu.arch.hflags & HF_VINTR_MASK) {
> -		/* We only want the cr8 intercept bits of the guest */
> -		clr_cr_intercept(svm, INTERCEPT_CR8_READ);
> -		clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
> -	}
> -
> -	/* We don't want to see VMMCALLs from a nested guest */
> -	clr_intercept(svm, INTERCEPT_VMMCALL);
> -
>  	svm->vcpu.arch.tsc_offset += nested_vmcb->control.tsc_offset;
>  	svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset;

FWIW,

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

-- 
Vitaly



* Re: [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts
  2020-03-05 10:13 ` [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts Paolo Bonzini
  2020-03-05 23:51   ` kbuild test robot
@ 2020-03-07  1:18   ` kbuild test robot
  1 sibling, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2020-03-07  1:18 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: kbuild-all, linux-kernel, kvm, cavery, vkuznets, jan.kiszka, wei.huang2

Hi Paolo,

I love your patch! Perhaps something to improve:

[auto build test WARNING on kvm/linux-next]
[also build test WARNING on linus/master v5.6-rc4 next-20200306]
[cannot apply to linux/master vhost/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify
the base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Paolo-Bonzini/KVM-nSVM-first-step-towards-fixing-event-injection/20200306-015933
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-174-g094d5a94-dirty
        make ARCH=x86_64 allmodconfig
        make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add the following tag:
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

>> arch/x86/kvm/svm.c:7538:32: sparse: sparse: incorrect type in initializer (different argument counts)
>> arch/x86/kvm/svm.c:7538:32: sparse:    expected int ( *check_nested_events )( ... )
>> arch/x86/kvm/svm.c:7538:32: sparse:    got int ( * )( ... )
   arch/x86/include/asm/paravirt.h:200:9: sparse: sparse: cast truncates bits from constant value (100000000 becomes 0)
   arch/x86/include/asm/paravirt.h:200:9: sparse: sparse: cast truncates bits from constant value (100000000 becomes 0)
   arch/x86/include/asm/bitops.h:77:37: sparse: sparse: cast truncates bits from constant value (ffffff7f becomes 7f)
   arch/x86/kvm/svm.c:6920:60: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6920:60: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6943:14: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6949:59: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6949:59: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6963:14: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6988:70: sparse: sparse: dereference of noderef expression
   arch/x86/kvm/svm.c:6988:70: sparse: sparse: dereference of noderef expression

vim +7538 arch/x86/kvm/svm.c

  7396	
  7397	static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
  7398		.cpu_has_kvm_support = has_svm,
  7399		.disabled_by_bios = is_disabled,
  7400		.hardware_setup = svm_hardware_setup,
  7401		.hardware_unsetup = svm_hardware_teardown,
  7402		.check_processor_compatibility = svm_check_processor_compat,
  7403		.hardware_enable = svm_hardware_enable,
  7404		.hardware_disable = svm_hardware_disable,
  7405		.cpu_has_accelerated_tpr = svm_cpu_has_accelerated_tpr,
  7406		.has_emulated_msr = svm_has_emulated_msr,
  7407	
  7408		.vcpu_create = svm_create_vcpu,
  7409		.vcpu_free = svm_free_vcpu,
  7410		.vcpu_reset = svm_vcpu_reset,
  7411	
  7412		.vm_alloc = svm_vm_alloc,
  7413		.vm_free = svm_vm_free,
  7414		.vm_init = svm_vm_init,
  7415		.vm_destroy = svm_vm_destroy,
  7416	
  7417		.prepare_guest_switch = svm_prepare_guest_switch,
  7418		.vcpu_load = svm_vcpu_load,
  7419		.vcpu_put = svm_vcpu_put,
  7420		.vcpu_blocking = svm_vcpu_blocking,
  7421		.vcpu_unblocking = svm_vcpu_unblocking,
  7422	
  7423		.update_bp_intercept = update_bp_intercept,
  7424		.get_msr_feature = svm_get_msr_feature,
  7425		.get_msr = svm_get_msr,
  7426		.set_msr = svm_set_msr,
  7427		.get_segment_base = svm_get_segment_base,
  7428		.get_segment = svm_get_segment,
  7429		.set_segment = svm_set_segment,
  7430		.get_cpl = svm_get_cpl,
  7431		.get_cs_db_l_bits = kvm_get_cs_db_l_bits,
  7432		.decache_cr0_guest_bits = svm_decache_cr0_guest_bits,
  7433		.decache_cr4_guest_bits = svm_decache_cr4_guest_bits,
  7434		.set_cr0 = svm_set_cr0,
  7435		.set_cr3 = svm_set_cr3,
  7436		.set_cr4 = svm_set_cr4,
  7437		.set_efer = svm_set_efer,
  7438		.get_idt = svm_get_idt,
  7439		.set_idt = svm_set_idt,
  7440		.get_gdt = svm_get_gdt,
  7441		.set_gdt = svm_set_gdt,
  7442		.get_dr6 = svm_get_dr6,
  7443		.set_dr6 = svm_set_dr6,
  7444		.set_dr7 = svm_set_dr7,
  7445		.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
  7446		.cache_reg = svm_cache_reg,
  7447		.get_rflags = svm_get_rflags,
  7448		.set_rflags = svm_set_rflags,
  7449	
  7450		.tlb_flush = svm_flush_tlb,
  7451		.tlb_flush_gva = svm_flush_tlb_gva,
  7452	
  7453		.run = svm_vcpu_run,
  7454		.handle_exit = handle_exit,
  7455		.skip_emulated_instruction = skip_emulated_instruction,
  7456		.update_emulated_instruction = NULL,
  7457		.set_interrupt_shadow = svm_set_interrupt_shadow,
  7458		.get_interrupt_shadow = svm_get_interrupt_shadow,
  7459		.patch_hypercall = svm_patch_hypercall,
  7460		.set_irq = svm_set_irq,
  7461		.set_nmi = svm_inject_nmi,
  7462		.queue_exception = svm_queue_exception,
  7463		.cancel_injection = svm_cancel_injection,
  7464		.interrupt_allowed = svm_interrupt_allowed,
  7465		.nmi_allowed = svm_nmi_allowed,
  7466		.get_nmi_mask = svm_get_nmi_mask,
  7467		.set_nmi_mask = svm_set_nmi_mask,
  7468		.enable_nmi_window = enable_nmi_window,
  7469		.enable_irq_window = enable_irq_window,
  7470		.update_cr8_intercept = update_cr8_intercept,
  7471		.set_virtual_apic_mode = svm_set_virtual_apic_mode,
  7472		.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
  7473		.check_apicv_inhibit_reasons = svm_check_apicv_inhibit_reasons,
  7474		.pre_update_apicv_exec_ctrl = svm_pre_update_apicv_exec_ctrl,
  7475		.load_eoi_exitmap = svm_load_eoi_exitmap,
  7476		.hwapic_irr_update = svm_hwapic_irr_update,
  7477		.hwapic_isr_update = svm_hwapic_isr_update,
  7478		.sync_pir_to_irr = kvm_lapic_find_highest_irr,
  7479		.apicv_post_state_restore = avic_post_state_restore,
  7480	
  7481		.set_tss_addr = svm_set_tss_addr,
  7482		.set_identity_map_addr = svm_set_identity_map_addr,
  7483		.get_tdp_level = get_npt_level,
  7484		.get_mt_mask = svm_get_mt_mask,
  7485	
  7486		.get_exit_info = svm_get_exit_info,
  7487	
  7488		.get_lpage_level = svm_get_lpage_level,
  7489	
  7490		.cpuid_update = svm_cpuid_update,
  7491	
  7492		.rdtscp_supported = svm_rdtscp_supported,
  7493		.invpcid_supported = svm_invpcid_supported,
  7494		.mpx_supported = svm_mpx_supported,
  7495		.xsaves_supported = svm_xsaves_supported,
  7496		.umip_emulated = svm_umip_emulated,
  7497		.pt_supported = svm_pt_supported,
  7498		.pku_supported = svm_pku_supported,
  7499	
  7500		.set_supported_cpuid = svm_set_supported_cpuid,
  7501	
  7502		.has_wbinvd_exit = svm_has_wbinvd_exit,
  7503	
  7504		.read_l1_tsc_offset = svm_read_l1_tsc_offset,
  7505		.write_l1_tsc_offset = svm_write_l1_tsc_offset,
  7506	
  7507		.set_tdp_cr3 = set_tdp_cr3,
  7508	
  7509		.check_intercept = svm_check_intercept,
  7510		.handle_exit_irqoff = svm_handle_exit_irqoff,
  7511	
  7512		.request_immediate_exit = __kvm_request_immediate_exit,
  7513	
  7514		.sched_in = svm_sched_in,
  7515	
  7516		.pmu_ops = &amd_pmu_ops,
  7517		.deliver_posted_interrupt = svm_deliver_avic_intr,
  7518		.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
  7519		.update_pi_irte = svm_update_pi_irte,
  7520		.setup_mce = svm_setup_mce,
  7521	
  7522		.smi_allowed = svm_smi_allowed,
  7523		.pre_enter_smm = svm_pre_enter_smm,
  7524		.pre_leave_smm = svm_pre_leave_smm,
  7525		.enable_smi_window = enable_smi_window,
  7526	
  7527		.mem_enc_op = svm_mem_enc_op,
  7528		.mem_enc_reg_region = svm_register_enc_region,
  7529		.mem_enc_unreg_region = svm_unregister_enc_region,
  7530	
  7531		.nested_enable_evmcs = NULL,
  7532		.nested_get_evmcs_version = NULL,
  7533	
  7534		.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
  7535	
  7536		.apic_init_signal_blocked = svm_apic_init_signal_blocked,
  7537	
> 7538		.check_nested_events = svm_check_nested_events,
  7539	};
  7540	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


Thread overview: 9+ messages
2020-03-05 10:13 [PATCH 0/4] KVM: nSVM: first step towards fixing event injection Paolo Bonzini
2020-03-05 10:13 ` [PATCH 1/4] KVM: nSVM: do not change host intercepts while nested VM is running Paolo Bonzini
2020-03-06 14:42   ` Vitaly Kuznetsov
2020-03-05 10:13 ` [PATCH 2/4] KVM: nSVM: ignore L1 interrupt window while running L2 with V_INTR_MASKING=1 Paolo Bonzini
2020-03-05 10:13 ` [PATCH 3/4] KVM: nSVM: implement check_nested_events for interrupts Paolo Bonzini
2020-03-05 23:51   ` kbuild test robot
2020-03-07  1:18   ` kbuild test robot
2020-03-05 10:13 ` [PATCH 4/4] KVM: nSVM: avoid loss of pending IRQ/NMI before entering L2 Paolo Bonzini
2020-03-05 10:46   ` Jan Kiszka

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).