linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath
@ 2020-04-23  9:01 Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath Wanpeng Li
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

In our cloud environment observations, IPIs and timers cause most vmexits. 
Following the single-target IPI fastpath, this series optimizes tscdeadline 
timer latency by introducing a tscdeadline timer emulation fastpath that 
skips various KVM-related checks when possible: after a vmexit due to 
tscdeadline timer emulation, handle it and vmentry immediately, without 
the usual round of KVM checks, whenever it is safe to do so. 
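The intended control flow can be sketched as a small userspace model (plain C, hypothetical exit-reason values; not the kernel implementation):

```c
#include <assert.h>

enum exit_fastpath { FASTPATH_NONE, FASTPATH_CONT_RUN };

/* Hypothetical exit reasons handled on the fast path in this model. */
#define EXIT_TSCDEADLINE_WRITE 1
#define EXIT_PREEMPTION_TIMER  2

static enum exit_fastpath handle_exit_fast(int exit_reason)
{
	switch (exit_reason) {
	case EXIT_TSCDEADLINE_WRITE:
	case EXIT_PREEMPTION_TIMER:
		return FASTPATH_CONT_RUN;	/* handled with irqs off */
	default:
		return FASTPATH_NONE;
	}
}

/*
 * Model of the run loop: a CONT_RUN result re-enters the guest
 * immediately, skipping the full exit path; anything else falls back
 * to the slow path.  Returns the number of exits handled entirely on
 * the fast path.
 */
static int run_vcpu(const int *exit_reasons, int n)
{
	int fast = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (handle_exit_fast(exit_reasons[i]) == FASTPATH_CONT_RUN)
			fast++;		/* vmentry again right away */
		/* else: return to the full vcpu_enter_guest() path */
	}
	return fast;
}
```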

Tested on an SKX (Skylake) server.

cyclictest in guest (w/o mwait exposed, adaptive advance lapic timer at its default of -1):

5632.75ns -> 4559.25ns, 19%

kvm-unit-test/vmexit.flat:

w/o APICv, w/o advance timer:
tscdeadline_immed: 4780.75 -> 3851    19.4%
tscdeadline:       7474    -> 6528.5  12.7%

w/o APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 4845.75 -> 3930.5  18.9%
tscdeadline:       6048    -> 5871.75    3%

w/ APICv, w/o advance timer:
tscdeadline_immed: 2919    -> 2467.75 15.5%
tscdeadline:       5661.75 -> 5188.25  8.4%

w/ APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 3018.5  -> 2561    15.2%
tscdeadline:       4663.75 -> 4537     2.7%

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>

v1 -> v2:
 * move more code from vmx.c to lapic.c
 * remove redundant checks
 * check more conditions to bail out of CONT_RUN
 * don't break AMD
 * don't special-case LVTT
 * clean up the code

Wanpeng Li (5):
  KVM: LAPIC: Introduce interrupt delivery fastpath
  KVM: X86: Introduce need_cancel_enter_guest helper
  KVM: VMX: Introduce generic fastpath handler
  KVM: X86: TSCDEADLINE MSR emulation fastpath
  KVM: VMX: Handle preemption timer fastpath

 arch/x86/include/asm/kvm_host.h |  2 +
 arch/x86/kvm/lapic.c            | 98 +++++++++++++++++++++++++++++++++++++++--
 arch/x86/kvm/lapic.h            |  2 +
 arch/x86/kvm/svm/avic.c         |  5 +++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/vmx/vmx.c          | 69 ++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c              | 42 ++++++++++++++----
 arch/x86/kvm/x86.h              |  1 +
 9 files changed, 199 insertions(+), 22 deletions(-)

-- 
2.7.4



* [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
@ 2020-04-23  9:01 ` Wanpeng Li
  2020-04-23  9:25   ` Paolo Bonzini
  2020-04-23  9:01 ` [PATCH v2 2/5] KVM: X86: Introduce need_cancel_enter_guest helper Wanpeng Li
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

From: Wanpeng Li <wanpengli@tencent.com>

Introduce an interrupt delivery fastpath. In my testing,
kvm_x86_ops.deliver_posted_interrupt() has higher latency than
vmx_sync_pir_to_irr(): it has to wait for the vmentry, and only after that
can the external interrupt be handled, the notification vector acked, the
posted-interrupt descriptor read, etc., which is slower than evaluating and
delivering the interrupt during vmentry. For non-APICv, inject directly,
since we will not go through inject_pending_event().

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/lapic.c            | 32 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/svm/avic.c         |  5 +++++
 arch/x86/kvm/svm/svm.c          |  1 +
 arch/x86/kvm/svm/svm.h          |  1 +
 arch/x86/kvm/vmx/vmx.c          | 23 +++++++++++++++++------
 6 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f26df2c..f809763 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1157,6 +1157,7 @@ struct kvm_x86_ops {
 	void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
 	int (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
+	bool (*pi_test_and_set_pir_on)(struct kvm_vcpu *vcpu, int vector);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
 	int (*get_tdp_level)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 38f7dc9..7703142 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1259,6 +1259,30 @@ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)
 	kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq, NULL);
 }
 
+static void fast_deliver_interrupt(struct kvm_lapic *apic, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+
+	kvm_lapic_clear_vector(vector, apic->regs + APIC_TMR);
+
+	if (vcpu->arch.apicv_active) {
+		if (kvm_x86_ops.pi_test_and_set_pir_on(vcpu, vector))
+			return;
+
+		kvm_x86_ops.sync_pir_to_irr(vcpu);
+	} else {
+		kvm_lapic_set_irr(vector, apic);
+		if (kvm_cpu_has_injectable_intr(vcpu)) {
+			if (kvm_x86_ops.interrupt_allowed(vcpu)) {
+				kvm_queue_interrupt(vcpu,
+					kvm_cpu_get_interrupt(vcpu), false);
+				kvm_x86_ops.set_irq(vcpu);
+			} else
+				kvm_x86_ops.enable_irq_window(vcpu);
+		}
+	}
+}
+
 static u32 apic_get_tmcct(struct kvm_lapic *apic)
 {
 	ktime_t remaining, now;
@@ -2351,6 +2375,14 @@ int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type)
 	return 0;
 }
 
+static void kvm_apic_local_deliver_fast(struct kvm_lapic *apic, int lvt_type)
+{
+	u32 reg = kvm_lapic_get_reg(apic, lvt_type);
+
+	if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED))
+		fast_deliver_interrupt(apic, reg & APIC_VECTOR_MASK);
+}
+
 void kvm_apic_nmi_wd_deliver(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index e80daa9..ab9e0fd 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -905,6 +905,11 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 	return ret;
 }
 
+bool svm_pi_test_and_set_pir_on(struct kvm_vcpu *vcpu, int vector)
+{
+	return false;
+}
+
 bool svm_check_apicv_inhibit_reasons(ulong bit)
 {
 	ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index eb95283..fd0cab3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4035,6 +4035,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.deliver_posted_interrupt = svm_deliver_avic_intr,
 	.dy_apicv_has_pending_interrupt = svm_dy_apicv_has_pending_interrupt,
 	.update_pi_irte = svm_update_pi_irte,
+	.pi_test_and_set_pir_on = svm_pi_test_and_set_pir_on,
 	.setup_mce = svm_setup_mce,
 
 	.smi_allowed = svm_smi_allowed,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ca95204..8a62a8b 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -457,6 +457,7 @@ int svm_update_pi_irte(struct kvm *kvm, unsigned int host_irq,
 		       uint32_t guest_irq, bool set);
 void svm_vcpu_blocking(struct kvm_vcpu *vcpu);
 void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
+bool svm_pi_test_and_set_pir_on(struct kvm_vcpu *vcpu, int vector);
 
 /* sev.c */
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 766303b..fd20cb3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3883,6 +3883,21 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
 	}
 	return -1;
 }
+
+static bool vmx_pi_test_and_set_pir_on(struct kvm_vcpu *vcpu, int vector)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
+		return true;
+
+	/* If a previous notification has sent the IPI, nothing to do.  */
+	if (pi_test_and_set_on(&vmx->pi_desc))
+		return true;
+
+	return false;
+}
+
 /*
  * Send interrupt to vcpu via posted interrupt way.
  * 1. If target vcpu is running(non-root mode), send posted interrupt
@@ -3892,7 +3907,6 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
  */
 static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int r;
 
 	r = vmx_deliver_nested_posted_interrupt(vcpu, vector);
@@ -3902,11 +3916,7 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 	if (!vcpu->arch.apicv_active)
 		return -1;
 
-	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
-		return 0;
-
-	/* If a previous notification has sent the IPI, nothing to do.  */
-	if (pi_test_and_set_on(&vmx->pi_desc))
+	if (vmx_pi_test_and_set_pir_on(vcpu, vector))
 		return 0;
 
 	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
@@ -7826,6 +7836,7 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.hwapic_isr_update = vmx_hwapic_isr_update,
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
+	.pi_test_and_set_pir_on = vmx_pi_test_and_set_pir_on,
 	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
 	.dy_apicv_has_pending_interrupt = vmx_dy_apicv_has_pending_interrupt,
 
-- 
2.7.4



* [PATCH v2 2/5] KVM: X86: Introduce need_cancel_enter_guest helper
  2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath Wanpeng Li
@ 2020-04-23  9:01 ` Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 3/5] KVM: VMX: Introduce generic fastpath handler Wanpeng Li
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

From: Wanpeng Li <wanpengli@tencent.com>

Introduce the kvm_need_cancel_enter_guest() helper; it will be used by later patches.
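
The helper just folds the existing bail-out conditions into a single predicate; as a standalone model (boolean flags standing in for the real vcpu/thread state, names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the real vcpu/thread state. */
struct vcpu_model {
	bool exiting_guest_mode;	/* vcpu->mode == EXITING_GUEST_MODE */
	bool request_pending;		/* kvm_request_pending(vcpu) */
	bool need_resched;		/* need_resched() */
	bool signal_pending;		/* signal_pending(current) */
};

/*
 * Mirrors the logic of kvm_need_cancel_enter_guest(): any one of the
 * conditions forces a return to the slow path before vmentry.
 */
static bool need_cancel_enter_guest(const struct vcpu_model *v)
{
	return v->exiting_guest_mode || v->request_pending ||
	       v->need_resched || v->signal_pending;
}
```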

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/x86.c | 10 ++++++++--
 arch/x86/kvm/x86.h |  1 +
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 59958ce..4561104 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1581,6 +1581,13 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);
 
+bool kvm_need_cancel_enter_guest(struct kvm_vcpu *vcpu)
+{
+	return (vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu)
+	    || need_resched() || signal_pending(current));
+}
+EXPORT_SYMBOL_GPL(kvm_need_cancel_enter_guest);
+
 /*
  * The fast path for frequent and performance sensitive wrmsr emulation,
  * i.e. the sending of IPI, sending IPI early in the VM-Exit flow reduces
@@ -8373,8 +8380,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
 		kvm_x86_ops.sync_pir_to_irr(vcpu);
 
-	if (vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu)
-	    || need_resched() || signal_pending(current)) {
+	if (kvm_need_cancel_enter_guest(vcpu)) {
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		smp_wmb();
 		local_irq_enable();
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 7b5ed8e..1906e7e 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -364,5 +364,6 @@ static inline bool kvm_dr7_valid(u64 data)
 void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
 u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu);
+bool kvm_need_cancel_enter_guest(struct kvm_vcpu *vcpu);
 
 #endif
-- 
2.7.4



* [PATCH v2 3/5] KVM: VMX: Introduce generic fastpath handler
  2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 2/5] KVM: X86: Introduce need_cancel_enter_guest helper Wanpeng Li
@ 2020-04-23  9:01 ` Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath Wanpeng Li
  2020-04-23  9:01 ` [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath Wanpeng Li
  4 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

From: Wanpeng Li <wanpengli@tencent.com>

Introduce a generic fastpath handler to dispatch the individual fastpaths 
(MSR write fastpath, VMX-preemption timer fastpath, etc.).

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/vmx/vmx.c          | 24 +++++++++++++++++++-----
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f809763..bcddf93 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -188,6 +188,7 @@ enum {
 enum exit_fastpath_completion {
 	EXIT_FASTPATH_NONE,
 	EXIT_FASTPATH_SKIP_EMUL_INS,
+	EXIT_FASTPATH_CONT_RUN,
 };
 
 struct x86_emulate_ctxt;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index fd20cb3..2613e58 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6569,6 +6569,20 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
 	}
 }
 
+static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+{
+	if (!is_guest_mode(vcpu)) {
+		switch (to_vmx(vcpu)->exit_reason) {
+		case EXIT_REASON_MSR_WRITE:
+			return handle_fastpath_set_msr_irqoff(vcpu);
+		default:
+			return EXIT_FASTPATH_NONE;
+		}
+	}
+
+	return EXIT_FASTPATH_NONE;
+}
+
 bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
 
 static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
@@ -6577,6 +6591,7 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	unsigned long cr3, cr4;
 
+cont_run:
 	/* Record the guest's net vcpu time for enforced NMI injections. */
 	if (unlikely(!enable_vnmi &&
 		     vmx->loaded_vmcs->soft_vnmi_blocked))
@@ -6743,17 +6758,16 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
 		return EXIT_FASTPATH_NONE;
 
-	if (!is_guest_mode(vcpu) && vmx->exit_reason == EXIT_REASON_MSR_WRITE)
-		exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu);
-	else
-		exit_fastpath = EXIT_FASTPATH_NONE;
-
 	vmx->loaded_vmcs->launched = 1;
 	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
 
 	vmx_recover_nmi_blocking(vmx);
 	vmx_complete_interrupts(vmx);
 
+	exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
+	if (exit_fastpath == EXIT_FASTPATH_CONT_RUN)
+		goto cont_run;
+
 	return exit_fastpath;
 }
 
-- 
2.7.4



* [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath
  2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
                   ` (2 preceding siblings ...)
  2020-04-23  9:01 ` [PATCH v2 3/5] KVM: VMX: Introduce generic fastpath handler Wanpeng Li
@ 2020-04-23  9:01 ` Wanpeng Li
  2020-04-23  9:37   ` Paolo Bonzini
  2020-04-23  9:01 ` [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath Wanpeng Li
  4 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

From: Wanpeng Li <wanpengli@tencent.com>

Implement a TSC-deadline MSR emulation fastpath: after a vmexit caused by 
a wrmsr to the TSC-deadline MSR, handle it as soon as possible and vmentry 
immediately, skipping the usual KVM checks when possible.
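
The dispatch this adds can be modeled in isolation (MSR numbers computed as in the kernel headers, 'blocked' standing in for the kvm_need_cancel_enter_guest()/reinjection checks; a sketch, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MODEL_MSR_X2APIC_ICR   0x830	/* APIC_BASE_MSR + (APIC_ICR >> 4) */
#define MODEL_MSR_TSCDEADLINE  0x6e0	/* MSR_IA32_TSCDEADLINE */

enum fastpath { NONE, SKIP_EMUL_INS, CONT_RUN };

/*
 * Rough model of the dispatch in handle_fastpath_set_msr_irqoff():
 * x2APIC ICR writes keep the existing SKIP_EMUL_INS behavior,
 * TSC-deadline writes may continue running the guest directly, and
 * everything else goes back to the slow path.
 */
static enum fastpath wrmsr_fastpath(uint32_t msr, bool blocked)
{
	switch (msr) {
	case MODEL_MSR_X2APIC_ICR:
		return SKIP_EMUL_INS;
	case MODEL_MSR_TSCDEADLINE:
		return blocked ? NONE : CONT_RUN;
	default:
		return NONE;
	}
}
```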

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/lapic.c | 47 ++++++++++++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/lapic.h |  1 +
 arch/x86/kvm/x86.c   | 32 ++++++++++++++++++++++++++------
 3 files changed, 71 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 7703142..d652bd9 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1898,6 +1898,8 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
 
+static void kvm_inject_apic_timer_irqs_fast(struct kvm_vcpu *vcpu);
+
 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
 {
 	restart_apic_timer(vcpu->arch.apic);
@@ -2189,17 +2191,48 @@ u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu)
 	return apic->lapic_timer.tscdeadline;
 }
 
-void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+static int __kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
 
 	if (!lapic_in_kernel(vcpu) || apic_lvtt_oneshot(apic) ||
 			apic_lvtt_period(apic))
-		return;
+		return 0;
 
 	hrtimer_cancel(&apic->lapic_timer.timer);
 	apic->lapic_timer.tscdeadline = data;
-	start_apic_timer(apic);
+
+	return 1;
+}
+
+void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
+{
+	if (__kvm_set_lapic_tscdeadline_msr(vcpu, data))
+		start_apic_timer(vcpu->arch.apic);
+}
+
+static int tscdeadline_expired_timer_fast(struct kvm_vcpu *vcpu)
+{
+	if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
+		kvm_clear_request(KVM_REQ_PENDING_TIMER, vcpu);
+		kvm_inject_apic_timer_irqs_fast(vcpu);
+		atomic_set(&vcpu->arch.apic->lapic_timer.pending, 0);
+	}
+
+	return 0;
+}
+
+int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	if (__kvm_set_lapic_tscdeadline_msr(vcpu, data)) {
+		atomic_set(&apic->lapic_timer.pending, 0);
+		if (start_hv_timer(apic))
+			return tscdeadline_expired_timer_fast(vcpu);
+	}
+
+	return 1;
 }
 
 void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
@@ -2492,6 +2525,14 @@ void kvm_inject_apic_timer_irqs(struct kvm_vcpu *vcpu)
 	}
 }
 
+static void kvm_inject_apic_timer_irqs_fast(struct kvm_vcpu *vcpu)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	kvm_apic_local_deliver_fast(apic, APIC_LVTT);
+	apic->lapic_timer.tscdeadline = 0;
+}
+
 int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu)
 {
 	int vector = kvm_apic_has_interrupt(vcpu);
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 7f15f9e..5ef1364 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -251,6 +251,7 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
 bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
 void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
 bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu);
+int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data);
 
 static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4561104..112f1c4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1616,27 +1616,47 @@ static int handle_fastpath_set_x2apic_icr_irqoff(struct kvm_vcpu *vcpu, u64 data
 	return 1;
 }
 
+static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
+{
+	if (!kvm_x86_ops.set_hv_timer ||
+		kvm_mwait_in_guest(vcpu->kvm) ||
+		kvm_can_post_timer_interrupt(vcpu))
+		return 1;
+
+	return kvm_set_lapic_tscdeadline_msr_fast(vcpu, data);
+}
+
 enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 {
 	u32 msr = kvm_rcx_read(vcpu);
 	u64 data;
-	int ret = 0;
+	int ret = EXIT_FASTPATH_NONE;
 
 	switch (msr) {
 	case APIC_BASE_MSR + (APIC_ICR >> 4):
 		data = kvm_read_edx_eax(vcpu);
-		ret = handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
+		if (!handle_fastpath_set_x2apic_icr_irqoff(vcpu, data))
+			ret = EXIT_FASTPATH_SKIP_EMUL_INS;
+		break;
+	case MSR_IA32_TSCDEADLINE:
+		if (!(kvm_need_cancel_enter_guest(vcpu) ||
+			kvm_event_needs_reinjection(vcpu))) {
+			data = kvm_read_edx_eax(vcpu);
+			if (!handle_fastpath_set_tscdeadline(vcpu, data))
+				ret = EXIT_FASTPATH_CONT_RUN;
+		}
 		break;
 	default:
-		return EXIT_FASTPATH_NONE;
+		ret = EXIT_FASTPATH_NONE;
 	}
 
-	if (!ret) {
+	if (ret != EXIT_FASTPATH_NONE) {
 		trace_kvm_msr_write(msr, data);
-		return EXIT_FASTPATH_SKIP_EMUL_INS;
+		if (ret == EXIT_FASTPATH_CONT_RUN)
+			kvm_skip_emulated_instruction(vcpu);
 	}
 
-	return EXIT_FASTPATH_NONE;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
 
-- 
2.7.4



* [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
                   ` (3 preceding siblings ...)
  2020-04-23  9:01 ` [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath Wanpeng Li
@ 2020-04-23  9:01 ` Wanpeng Li
  2020-04-23  9:40   ` Paolo Bonzini
  2020-04-26  7:38   ` kbuild test robot
  4 siblings, 2 replies; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:01 UTC (permalink / raw)
  To: linux-kernel, kvm
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

From: Wanpeng Li <wanpengli@tencent.com>

Implement a preemption timer fastpath: after the VMX-preemption timer 
counts down to zero and the vmexit fires, handle it as soon as possible 
and vmentry immediately, skipping the usual KVM checks when possible.

Tested on an SKX (Skylake) server.

cyclictest in guest (w/o mwait exposed, adaptive advance lapic timer at its default of -1):

5632.75ns -> 4559.25ns, 19%

kvm-unit-test/vmexit.flat:

w/o APICv, w/o advance timer:
tscdeadline_immed: 4780.75 -> 3851    19.4%
tscdeadline:       7474    -> 6528.5  12.7%

w/o APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 4845.75 -> 3930.5  18.9%
tscdeadline:       6048    -> 5871.75    3%

w/ APICv, w/o advance timer:
tscdeadline_immed: 2919    -> 2467.75 15.5%
tscdeadline:       5661.75 -> 5188.25  8.4%

w/ APICv, w/ adaptive advance timer default -1:
tscdeadline_immed: 3018.5  -> 2561    15.2%
tscdeadline:       4663.75 -> 4537     2.7%

Tested-by: Haiwei Li <lihaiwei@tencent.com>
Cc: Haiwei Li <lihaiwei@tencent.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/lapic.c   | 19 +++++++++++++++++++
 arch/x86/kvm/lapic.h   |  1 +
 arch/x86/kvm/vmx/vmx.c | 22 ++++++++++++++++++++++
 3 files changed, 42 insertions(+)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index d652bd9..2741931 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1899,6 +1899,25 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
 EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer);
 
 static void kvm_inject_apic_timer_irqs_fast(struct kvm_vcpu *vcpu);
+bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+	struct kvm_timer *ktimer = &apic->lapic_timer;
+
+	if (!apic_lvtt_tscdeadline(apic) ||
+		!ktimer->hv_timer_in_use ||
+		atomic_read(&ktimer->pending))
+		return 0;
+
+	WARN_ON(swait_active(&vcpu->wq));
+	cancel_hv_timer(apic);
+
+	ktimer->expired_tscdeadline = ktimer->tscdeadline;
+	kvm_inject_apic_timer_irqs_fast(vcpu);
+
+	return 1;
+}
+EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer_fast);
 
 void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 5ef1364..1b5abd8 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -252,6 +252,7 @@ bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
 void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
 bool kvm_can_post_timer_interrupt(struct kvm_vcpu *vcpu);
 int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data);
+bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu);
 
 static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 2613e58..527d1c1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6569,12 +6569,34 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
 	}
 }
 
+static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
+
+static enum exit_fastpath_completion handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_vmx *vmx = to_vmx(vcpu);
+
+	if (kvm_need_cancel_enter_guest(vcpu) ||
+		kvm_event_needs_reinjection(vcpu))
+		return EXIT_FASTPATH_NONE;
+
+	if (!vmx->req_immediate_exit &&
+		!unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled) &&
+		kvm_lapic_expired_hv_timer_fast(vcpu)) {
+		trace_kvm_exit(EXIT_REASON_PREEMPTION_TIMER, vcpu, KVM_ISA_VMX);
+		return EXIT_FASTPATH_CONT_RUN;
+	}
+
+	return EXIT_FASTPATH_NONE;
+}
+
 static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
 	if (!is_guest_mode(vcpu)) {
 		switch (to_vmx(vcpu)->exit_reason) {
 		case EXIT_REASON_MSR_WRITE:
 			return handle_fastpath_set_msr_irqoff(vcpu);
+		case EXIT_REASON_PREEMPTION_TIMER:
+			return handle_fastpath_preemption_timer(vcpu);
 		default:
 			return EXIT_FASTPATH_NONE;
 		}
-- 
2.7.4



* Re: [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:01 ` [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath Wanpeng Li
@ 2020-04-23  9:25   ` Paolo Bonzini
  2020-04-23  9:35     ` Wanpeng Li
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23  9:25 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Haiwei Li

On 23/04/20 11:01, Wanpeng Li wrote:
> +static void fast_deliver_interrupt(struct kvm_lapic *apic, int vector)
> +{
> +	struct kvm_vcpu *vcpu = apic->vcpu;
> +
> +	kvm_lapic_clear_vector(vector, apic->regs + APIC_TMR);
> +
> +	if (vcpu->arch.apicv_active) {
> +		if (kvm_x86_ops.pi_test_and_set_pir_on(vcpu, vector))
> +			return;
> +
> +		kvm_x86_ops.sync_pir_to_irr(vcpu);
> +	} else {
> +		kvm_lapic_set_irr(vector, apic);
> +		if (kvm_cpu_has_injectable_intr(vcpu)) {
> +			if (kvm_x86_ops.interrupt_allowed(vcpu)) {
> +				kvm_queue_interrupt(vcpu,
> +					kvm_cpu_get_interrupt(vcpu), false);
> +				kvm_x86_ops.set_irq(vcpu);
> +			} else
> +				kvm_x86_ops.enable_irq_window(vcpu);
> +		}
> +	}
> +}
> +

Ok, got it now.  The problem is that deliver_posted_interrupt goes through

        if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
                kvm_vcpu_kick(vcpu);

Would it help to make the above

        if (vcpu != kvm_get_running_vcpu() &&
	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
                kvm_vcpu_kick(vcpu);

?  If that is enough for the APICv case, it's good enough.

Paolo



* Re: [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:25   ` Paolo Bonzini
@ 2020-04-23  9:35     ` Wanpeng Li
  2020-04-23  9:39       ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:35 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On Thu, 23 Apr 2020 at 17:25, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/04/20 11:01, Wanpeng Li wrote:
> > +static void fast_deliver_interrupt(struct kvm_lapic *apic, int vector)
> > +{
> > +     struct kvm_vcpu *vcpu = apic->vcpu;
> > +
> > +     kvm_lapic_clear_vector(vector, apic->regs + APIC_TMR);
> > +
> > +     if (vcpu->arch.apicv_active) {
> > +             if (kvm_x86_ops.pi_test_and_set_pir_on(vcpu, vector))
> > +                     return;
> > +
> > +             kvm_x86_ops.sync_pir_to_irr(vcpu);
> > +     } else {
> > +             kvm_lapic_set_irr(vector, apic);
> > +             if (kvm_cpu_has_injectable_intr(vcpu)) {
> > +                     if (kvm_x86_ops.interrupt_allowed(vcpu)) {
> > +                             kvm_queue_interrupt(vcpu,
> > +                                     kvm_cpu_get_interrupt(vcpu), false);
> > +                             kvm_x86_ops.set_irq(vcpu);
> > +                     } else
> > +                             kvm_x86_ops.enable_irq_window(vcpu);
> > +             }
> > +     }
> > +}
> > +
>
> Ok, got it now.  The problem is that deliver_posted_interrupt goes through
>
>         if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
>                 kvm_vcpu_kick(vcpu);
>
> Would it help to make the above
>
>         if (vcpu != kvm_get_running_vcpu() &&
>             !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
>                 kvm_vcpu_kick(vcpu);
>
> ?  If that is enough for the APICv case, it's good enough.

We will not exit from vmx_vcpu_run to vcpu_enter_guest, so it will not
help, right?

    Wanpeng


* Re: [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath
  2020-04-23  9:01 ` [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath Wanpeng Li
@ 2020-04-23  9:37   ` Paolo Bonzini
  2020-04-23  9:54     ` Wanpeng Li
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23  9:37 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Haiwei Li

On 23/04/20 11:01, Wanpeng Li wrote:
> +
> +void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
> +{
> +	if (__kvm_set_lapic_tscdeadline_msr(vcpu, data))
> +		start_apic_timer(vcpu->arch.apic);
> +}
> +
> +int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data)
> +{
> +	struct kvm_lapic *apic = vcpu->arch.apic;
> +
> +	if (__kvm_set_lapic_tscdeadline_msr(vcpu, data)) {
> +		atomic_set(&apic->lapic_timer.pending, 0);
> +		if (start_hv_timer(apic))
> +			return tscdeadline_expired_timer_fast(vcpu);
> +	}
> +
> +	return 1;
>  }
>
> +static int tscdeadline_expired_timer_fast(struct kvm_vcpu *vcpu)
> +{
> +	if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
> +		kvm_clear_request(KVM_REQ_PENDING_TIMER, vcpu);
> +		kvm_inject_apic_timer_irqs_fast(vcpu);
> +		atomic_set(&vcpu->arch.apic->lapic_timer.pending, 0);
> +	}
> +
> +	return 0;
> +}

This could also be handled in apic_timer_expired.  For example you can
add an argument from_timer_fn and do

	if (!from_timer_fn) {
		WARN_ON(kvm_get_running_vcpu() != vcpu);
		kvm_inject_apic_timer_irqs_fast(vcpu);
		return;
	}

        if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
                ...
	}
	atomic_inc(&apic->lapic_timer.pending);
	kvm_set_pending_timer(vcpu);

and then you don't need kvm_set_lapic_tscdeadline_msr_fast and
everything else.  Anyway thanks, this is already much better.

Paolo



* Re: [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:35     ` Wanpeng Li
@ 2020-04-23  9:39       ` Paolo Bonzini
  2020-04-23  9:44         ` Wanpeng Li
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23  9:39 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On 23/04/20 11:35, Wanpeng Li wrote:
>> Ok, got it now.  The problem is that deliver_posted_interrupt goes through
>>
>>         if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
>>                 kvm_vcpu_kick(vcpu);
>>
>> Would it help to make the above
>>
>>         if (vcpu != kvm_get_running_vcpu() &&
>>             !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
>>                 kvm_vcpu_kick(vcpu);
>>
>> ?  If that is enough for the APICv case, it's good enough.
> We will not exit from vmx_vcpu_run to vcpu_enter_guest, so it will not
> help, right?

Oh indeed---the call to sync_pir_to_irr is in vcpu_enter_guest.  You can
add it to patch 3 right before "goto cont_run", since AMD does not need it.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23  9:01 ` [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath Wanpeng Li
@ 2020-04-23  9:40   ` Paolo Bonzini
  2020-04-23  9:56     ` Wanpeng Li
  2020-04-26  7:38   ` kbuild test robot
  1 sibling, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23  9:40 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, Haiwei Li

On 23/04/20 11:01, Wanpeng Li wrote:
> +bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_lapic *apic = vcpu->arch.apic;
> +	struct kvm_timer *ktimer = &apic->lapic_timer;
> +
> +	if (!apic_lvtt_tscdeadline(apic) ||
> +		!ktimer->hv_timer_in_use ||
> +		atomic_read(&ktimer->pending))
> +		return 0;
> +
> +	WARN_ON(swait_active(&vcpu->wq));
> +	cancel_hv_timer(apic);
> +
> +	ktimer->expired_tscdeadline = ktimer->tscdeadline;
> +	kvm_inject_apic_timer_irqs_fast(vcpu);
> +
> +	return 1;
> +}
> +EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer_fast);

Please re-evaluate if this is needed (or which parts are needed) after
cleaning up patch 4.  Anyway again---this is already better, I don't
like the duplicated code but at least I can understand what's going on.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:39       ` Paolo Bonzini
@ 2020-04-23  9:44         ` Wanpeng Li
  2020-04-23  9:52           ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On Thu, 23 Apr 2020 at 17:41, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/04/20 11:35, Wanpeng Li wrote:
> >> Ok, got it now.  The problem is that deliver_posted_interrupt goes through
> >>
> >>         if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
> >>                 kvm_vcpu_kick(vcpu);
> >>
> >> Would it help to make the above
> >>
> >>         if (vcpu != kvm_get_running_vcpu() &&
> >>             !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
> >>                 kvm_vcpu_kick(vcpu);
> >>
> >> ?  If that is enough for the APICv case, it's good enough.
> > We will not exit from vmx_vcpu_run to vcpu_enter_guest, so it will not
> > help, right?
>
> Oh indeed---the call to sync_pir_to_irr is in vcpu_enter_guest.  You can
> add it to patch 3 right before "goto cont_run", since AMD does not need it.

Just move kvm_x86_ops.sync_pir_to_irr(vcpu)? What about the set-PIR/ON
part for APICv, and the non-APICv path, in fast_deliver_interrupt()?

    Wanpeng

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath
  2020-04-23  9:44         ` Wanpeng Li
@ 2020-04-23  9:52           ` Paolo Bonzini
  0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23  9:52 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On 23/04/20 11:44, Wanpeng Li wrote:
>>>> Would it help to make the above
>>>> 
>>>>         if (vcpu != kvm_get_running_vcpu() &&
>>>>             !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
>>>>                 kvm_vcpu_kick(vcpu);
>>>> 
>>>> ?  If that is enough for the APICv case, it's good enough.
>>>
>>> We will not exit from vmx_vcpu_run to vcpu_enter_guest, so it will not
>>> help, right?
>>
>> Oh indeed---the call to sync_pir_to_irr is in vcpu_enter_guest.  You can
>> add it to patch 3 right before "goto cont_run", since AMD does not need it.
>
> Just move kvm_x86_ops.sync_pir_to_irr(vcpu)? What about the set-PIR/ON
> part for APICv, and the non-APICv path, in fast_deliver_interrupt()?

That should be handled by deliver_posted_interrupt with no performance
penalty, if you add "vcpu != kvm_get_running_vcpu()" before it calls
kvm_vcpu_trigger_posted_interrupt.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath
  2020-04-23  9:37   ` Paolo Bonzini
@ 2020-04-23  9:54     ` Wanpeng Li
  2020-04-23 10:28       ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:54 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On Thu, 23 Apr 2020 at 17:39, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/04/20 11:01, Wanpeng Li wrote:
> > +
> > +void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data)
> > +{
> > +     if (__kvm_set_lapic_tscdeadline_msr(vcpu, data))
> > +             start_apic_timer(vcpu->arch.apic);
> > +}
> > +
> > +int kvm_set_lapic_tscdeadline_msr_fast(struct kvm_vcpu *vcpu, u64 data)
> > +{
> > +     struct kvm_lapic *apic = vcpu->arch.apic;
> > +
> > +     if (__kvm_set_lapic_tscdeadline_msr(vcpu, data)) {
> > +             atomic_set(&apic->lapic_timer.pending, 0);
> > +             if (start_hv_timer(apic))
> > +                     return tscdeadline_expired_timer_fast(vcpu);
> > +     }
> > +
> > +     return 1;
> >  }
> >
> > +static int tscdeadline_expired_timer_fast(struct kvm_vcpu *vcpu)
> > +{
> > +     if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
> > +             kvm_clear_request(KVM_REQ_PENDING_TIMER, vcpu);
> > +             kvm_inject_apic_timer_irqs_fast(vcpu);
> > +             atomic_set(&vcpu->arch.apic->lapic_timer.pending, 0);
> > +     }
> > +
> > +     return 0;
> > +}
>
> This could also be handled in apic_timer_expired.  For example you can
> add an argument from_timer_fn and do
>
>         if (!from_timer_fn) {
>                 WARN_ON(kvm_get_running_vcpu() != vcpu);
>                 kvm_inject_apic_timer_irqs_fast(vcpu);
>                 return;
>         }
>
>         if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
>                 ...
>         }
>         atomic_inc(&apic->lapic_timer.pending);
>         kvm_set_pending_timer(vcpu);
>
> and then you don't need kvm_set_lapic_tscdeadline_msr_fast and

I guess you mean don't need tscdeadline_expired_timer_fast().

    Wanpeng

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23  9:40   ` Paolo Bonzini
@ 2020-04-23  9:56     ` Wanpeng Li
  2020-04-23 10:29       ` Paolo Bonzini
  0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2020-04-23  9:56 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On Thu, 23 Apr 2020 at 17:40, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/04/20 11:01, Wanpeng Li wrote:
> > +bool kvm_lapic_expired_hv_timer_fast(struct kvm_vcpu *vcpu)
> > +{
> > +     struct kvm_lapic *apic = vcpu->arch.apic;
> > +     struct kvm_timer *ktimer = &apic->lapic_timer;
> > +
> > +     if (!apic_lvtt_tscdeadline(apic) ||
> > +             !ktimer->hv_timer_in_use ||
> > +             atomic_read(&ktimer->pending))
> > +             return 0;
> > +
> > +     WARN_ON(swait_active(&vcpu->wq));
> > +     cancel_hv_timer(apic);
> > +
> > +     ktimer->expired_tscdeadline = ktimer->tscdeadline;
> > +     kvm_inject_apic_timer_irqs_fast(vcpu);
> > +
> > +     return 1;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_lapic_expired_hv_timer_fast);
>
> Please re-evaluate if this is needed (or which parts are needed) after
> cleaning up patch 4.  Anyway again---this is already better, I don't
> like the duplicated code but at least I can understand what's going on.

Except for the apic_lvtt_tscdeadline(apic) check, the rest is duplicated.
What do you think about the apic_lvtt_tscdeadline(apic) check?

    Wanpeng

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath
  2020-04-23  9:54     ` Wanpeng Li
@ 2020-04-23 10:28       ` Paolo Bonzini
  0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23 10:28 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On 23/04/20 11:54, Wanpeng Li wrote:
>> and then you don't need kvm_set_lapic_tscdeadline_msr_fast and
>
> I guess you mean don't need tscdeadline_expired_timer_fast().

Both.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23  9:56     ` Wanpeng Li
@ 2020-04-23 10:29       ` Paolo Bonzini
  2020-04-24  6:38         ` Wanpeng Li
  0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2020-04-23 10:29 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On 23/04/20 11:56, Wanpeng Li wrote:
>> Please re-evaluate if this is needed (or which parts are needed) after
>> cleaning up patch 4.  Anyway again---this is already better, I don't
>> like the duplicated code but at least I can understand what's going on.
> Except for the apic_lvtt_tscdeadline(apic) check, the rest is duplicated.
> What do you think about the apic_lvtt_tscdeadline(apic) check?

We have to take a look again after you clean up patch 4.  My hope is to
reuse the slowpath code as much as possible, by introducing some
optimizations here and there.

Paolo


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23 10:29       ` Paolo Bonzini
@ 2020-04-24  6:38         ` Wanpeng Li
  0 siblings, 0 replies; 19+ messages in thread
From: Wanpeng Li @ 2020-04-24  6:38 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: LKML, kvm, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Haiwei Li

On Thu, 23 Apr 2020 at 18:29, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 23/04/20 11:56, Wanpeng Li wrote:
> >> Please re-evaluate if this is needed (or which parts are needed) after
> >> cleaning up patch 4.  Anyway again---this is already better, I don't
> >> like the duplicated code but at least I can understand what's going on.
> > Except for the apic_lvtt_tscdeadline(apic) check, the rest is duplicated.
> > What do you think about the apic_lvtt_tscdeadline(apic) check?
>
> We have to take a look again after you clean up patch 4.  My hope is to
> reuse the slowpath code as much as possible, by introducing some
> optimizations here and there.

I found that we do not need to move the if (vcpu->arch.apicv_active)
check from __apic_accept_irq() to a separate function, if I understand
you correctly; please see patch v3 3/5. In addition, I observe
kvm-unit-tests failures (#UD etc.) if need_cancel_enter_guest() is
checked after the generic fastpath handler. I didn't dig too deep, and
just moved the check before the generic fastpath handler to be safe in
patch v3 2/5.

    Wanpeng

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath
  2020-04-23  9:01 ` [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath Wanpeng Li
  2020-04-23  9:40   ` Paolo Bonzini
@ 2020-04-26  7:38   ` kbuild test robot
  1 sibling, 0 replies; 19+ messages in thread
From: kbuild test robot @ 2020-04-26  7:38 UTC (permalink / raw)
  To: Wanpeng Li, linux-kernel, kvm
  Cc: kbuild-all, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Haiwei Li

[-- Attachment #1: Type: text/plain, Size: 1480 bytes --]

Hi Wanpeng,

I love your patch! Yet something to improve:

[auto build test ERROR on kvm/linux-next]
[also build test ERROR on next-20200424]
[cannot apply to tip/auto-latest linus/master linux/master v5.7-rc2]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Wanpeng-Li/KVM-VMX-Tscdeadline-timer-emulation-fastpath/20200426-132300
base:   https://git.kernel.org/pub/scm/virt/kvm/kvm.git linux-next
config: i386-randconfig-d001-20200426 (attached as .config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> arch/x86/kvm/vmx/vmx.c:6572:13: error: 'vmx_cancel_hv_timer' declared 'static' but never defined [-Werror=unused-function]
    static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
                ^~~~~~~~~~~~~~~~~~~
   cc1: all warnings being treated as errors

vim +6572 arch/x86/kvm/vmx/vmx.c

  6571	
> 6572	static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu);
  6573	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 36316 bytes --]

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2020-04-26  7:51 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-23  9:01 [PATCH v2 0/5] KVM: VMX: Tscdeadline timer emulation fastpath Wanpeng Li
2020-04-23  9:01 ` [PATCH v2 1/5] KVM: LAPIC: Introduce interrupt delivery fastpath Wanpeng Li
2020-04-23  9:25   ` Paolo Bonzini
2020-04-23  9:35     ` Wanpeng Li
2020-04-23  9:39       ` Paolo Bonzini
2020-04-23  9:44         ` Wanpeng Li
2020-04-23  9:52           ` Paolo Bonzini
2020-04-23  9:01 ` [PATCH v2 2/5] KVM: X86: Introduce need_cancel_enter_guest helper Wanpeng Li
2020-04-23  9:01 ` [PATCH v2 3/5] KVM: VMX: Introduce generic fastpath handler Wanpeng Li
2020-04-23  9:01 ` [PATCH v2 4/5] KVM: X86: TSCDEADLINE MSR emulation fastpath Wanpeng Li
2020-04-23  9:37   ` Paolo Bonzini
2020-04-23  9:54     ` Wanpeng Li
2020-04-23 10:28       ` Paolo Bonzini
2020-04-23  9:01 ` [PATCH v2 5/5] KVM: VMX: Handle preemption timer fastpath Wanpeng Li
2020-04-23  9:40   ` Paolo Bonzini
2020-04-23  9:56     ` Wanpeng Li
2020-04-23 10:29       ` Paolo Bonzini
2020-04-24  6:38         ` Wanpeng Li
2020-04-26  7:38   ` kbuild test robot
