kvm.vger.kernel.org archive mirror
* [PATCH v3 0/3] KVM: Yield to IPI target if necessary
@ 2019-05-30  1:05 Wanpeng Li
  2019-05-30  1:05 ` [PATCH v3 1/3] KVM: X86: " Wanpeng Li
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-05-30  1:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář

The idea is borrowed from Xen: when sending a call-function IPI-many to
vCPUs, yield if any of the IPI target vCPUs was preempted. A 17%
performance increase of the ebizzy benchmark can be observed in an
over-subscribed environment. (Measured with kvm-pv-tlb disabled,
exercising the TLB-flush call-function IPI-many path, since
call-function IPIs are not easily triggered by userspace workloads.)
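
In outline, the series adds a guest-side hook and a matching host-side
hypercall; a condensed, illustrative sketch only (the real code is in
patches 1 and 2 below):

  /*
   * Guest side (patch 1): after kicking the IPI targets, donate the
   * caller's time slice to the first target the host has preempted.
   */
  for_each_cpu(cpu, mask) {
  	if (vcpu_is_preempted(cpu)) {
  		kvm_hypercall1(KVM_HC_SCHED_YIELD,
  			       per_cpu(x86_cpu_to_apicid, cpu));
  		break;
  	}
  }

  /*
   * Host side (patch 2): the KVM_HC_SCHED_YIELD handler maps the
   * destination APIC ID back to a vCPU and calls kvm_vcpu_yield_to()
   * on it.
   */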

v2 -> v3:
 * add bounds-check on dest_id

v1 -> v2:
 * check map is not NULL
 * check map->phys_map[dest_id] is not NULL
 * make kvm_sched_yield static
 * change dest_id to unsigned long

Wanpeng Li (3):
  KVM: X86: Yield to IPI target if necessary
  KVM: X86: Implement PV sched yield hypercall
  KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest

 Documentation/virtual/kvm/cpuid.txt      |  4 ++++
 Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
 arch/x86/include/uapi/asm/kvm_para.h     |  1 +
 arch/x86/kernel/kvm.c                    | 21 +++++++++++++++++++++
 arch/x86/kvm/cpuid.c                     |  3 ++-
 arch/x86/kvm/x86.c                       | 26 ++++++++++++++++++++++++++
 include/uapi/linux/kvm_para.h            |  1 +
 7 files changed, 66 insertions(+), 1 deletion(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 1/3] KVM: X86: Yield to IPI target if necessary
  2019-05-30  1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
@ 2019-05-30  1:05 ` Wanpeng Li
  2019-05-30  1:05 ` [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall Wanpeng Li
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-05-30  1:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Liran Alon

From: Wanpeng Li <wanpengli@tencent.com>

When sending a call-function IPI-many to vCPUs, yield if any of the
IPI target vCPUs was preempted. We simply yield to the first preempted
target vCPU we find, since the state of the target vCPUs can change
underneath us and chasing it would only invite races.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
 arch/x86/include/uapi/asm/kvm_para.h     |  1 +
 arch/x86/kernel/kvm.c                    | 21 +++++++++++++++++++++
 include/uapi/linux/kvm_para.h            |  1 +
 4 files changed, 34 insertions(+)

diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index da24c13..da21065 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -141,3 +141,14 @@ a0 corresponds to the APIC ID in the third argument (a2), bit 1
 corresponds to the APIC ID a2+1, and so on.
 
 Returns the number of CPUs to which the IPIs were delivered successfully.
+
+7. KVM_HC_SCHED_YIELD
+------------------------
+Architecture: x86
+Status: active
+Purpose: Hypercall used to yield if the IPI target vCPU is preempted
+
+a0: destination APIC ID
+
+Usage example: When sending a call-function IPI-many to vCPUs, yield if
+any of the IPI target vCPUs was preempted.
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 19980ec..d0bf77c 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -29,6 +29,7 @@
 #define KVM_FEATURE_PV_TLB_FLUSH	9
 #define KVM_FEATURE_ASYNC_PF_VMEXIT	10
 #define KVM_FEATURE_PV_SEND_IPI	11
+#define KVM_FEATURE_PV_SCHED_YIELD	12
 
 #define KVM_HINTS_REALTIME      0
 
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 3f0cc82..54400c2 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -540,6 +540,21 @@ static void kvm_setup_pv_ipi(void)
 	pr_info("KVM setup pv IPIs\n");
 }
 
+static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
+{
+	int cpu;
+
+	native_send_call_func_ipi(mask);
+
+	/* Make sure other vCPUs get a chance to run if they need to. */
+	for_each_cpu(cpu, mask) {
+		if (vcpu_is_preempted(cpu)) {
+			kvm_hypercall1(KVM_HC_SCHED_YIELD, per_cpu(x86_cpu_to_apicid, cpu));
+			break;
+		}
+	}
+}
+
 static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
 {
 	native_smp_prepare_cpus(max_cpus);
@@ -651,6 +666,12 @@ static void __init kvm_guest_init(void)
 #ifdef CONFIG_SMP
 	smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
 	smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu;
+	if (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
+	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
+	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
+		smp_ops.send_call_func_ipi = kvm_smp_send_call_func_ipi;
+		pr_info("KVM setup pv sched yield\n");
+	}
 	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/kvm:online",
 				      kvm_cpu_online, kvm_cpu_down_prepare) < 0)
 		pr_err("kvm_guest: Failed to install cpu hotplug callbacks\n");
diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
index 6c0ce49..8b86609 100644
--- a/include/uapi/linux/kvm_para.h
+++ b/include/uapi/linux/kvm_para.h
@@ -28,6 +28,7 @@
 #define KVM_HC_MIPS_CONSOLE_OUTPUT	8
 #define KVM_HC_CLOCK_PAIRING		9
 #define KVM_HC_SEND_IPI		10
+#define KVM_HC_SCHED_YIELD		11
 
 /*
  * hypercalls use architecture specific
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
  2019-05-30  1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
  2019-05-30  1:05 ` [PATCH v3 1/3] KVM: X86: " Wanpeng Li
@ 2019-05-30  1:05 ` Wanpeng Li
  2019-06-10 14:17   ` Radim Krčmář
  2019-05-30  1:05 ` [PATCH v3 3/3] KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest Wanpeng Li
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2019-05-30  1:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Liran Alon

From: Wanpeng Li <wanpengli@tencent.com>

The target vCPUs are in the runnable state after vcpu_kick and are
therefore suitable yield targets. This patch implements the sched yield
hypercall.

A 17% performance increase of the ebizzy benchmark can be observed in an
over-subscribed environment. (Measured with kvm-pv-tlb disabled,
exercising the TLB-flush call-function IPI-many path, since
call-function IPIs are not easily triggered by userspace workloads.)

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 arch/x86/kvm/x86.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e7e57de..8575b36 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
 	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
 }
 
+static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
+{
+	struct kvm_vcpu *target = NULL;
+	struct kvm_apic_map *map = NULL;
+
+	rcu_read_lock();
+	map = rcu_dereference(kvm->arch.apic_map);
+
+	if (unlikely(!map) || dest_id > map->max_apic_id)
+		goto out;
+
+	if (map->phys_map[dest_id]->vcpu) {
+		target = map->phys_map[dest_id]->vcpu;
+		rcu_read_unlock();
+		kvm_vcpu_yield_to(target);
+	}
+
+out:
+	if (!target)
+		rcu_read_unlock();
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -7218,6 +7240,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_SEND_IPI:
 		ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
 		break;
+	case KVM_HC_SCHED_YIELD:
+		kvm_sched_yield(vcpu->kvm, a0);
+		ret = 0;
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 3/3] KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest
  2019-05-30  1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
  2019-05-30  1:05 ` [PATCH v3 1/3] KVM: X86: " Wanpeng Li
  2019-05-30  1:05 ` [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall Wanpeng Li
@ 2019-05-30  1:05 ` Wanpeng Li
  2019-06-10  5:58 ` [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
  2019-06-10 14:34 ` Radim Krčmář
  4 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-05-30  1:05 UTC (permalink / raw)
  To: linux-kernel, kvm; +Cc: Paolo Bonzini, Radim Krčmář, Liran Alon

From: Wanpeng Li <wanpengli@tencent.com>

Expose the PV_SCHED_YIELD feature bit to the guest; the guest can check
this feature bit before using paravirtualized sched yield.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
 Documentation/virtual/kvm/cpuid.txt | 4 ++++
 arch/x86/kvm/cpuid.c                | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
index 97ca194..1c39683 100644
--- a/Documentation/virtual/kvm/cpuid.txt
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -66,6 +66,10 @@ KVM_FEATURE_PV_SEND_IPI            ||    11 || guest checks this feature bit
                                    ||       || before using paravirtualized
                                    ||       || send IPIs.
 ------------------------------------------------------------------------------
+KVM_FEATURE_PV_SCHED_YIELD         ||    12 || guest checks this feature bit
+                                   ||       || before using paravirtualized
+                                   ||       || sched yield.
+------------------------------------------------------------------------------
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
                                    ||       || per-cpu warps are expected in
                                    ||       || kvmclock.
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e18a9f9..c018fc8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -643,7 +643,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 			     (1 << KVM_FEATURE_PV_UNHALT) |
 			     (1 << KVM_FEATURE_PV_TLB_FLUSH) |
 			     (1 << KVM_FEATURE_ASYNC_PF_VMEXIT) |
-			     (1 << KVM_FEATURE_PV_SEND_IPI);
+			     (1 << KVM_FEATURE_PV_SEND_IPI) |
+			     (1 << KVM_FEATURE_PV_SCHED_YIELD);
 
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-05-30  1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
                   ` (2 preceding siblings ...)
  2019-05-30  1:05 ` [PATCH v3 3/3] KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest Wanpeng Li
@ 2019-06-10  5:58 ` Wanpeng Li
  2019-06-10 14:34 ` Radim Krčmář
  4 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-06-10  5:58 UTC (permalink / raw)
  To: LKML, kvm; +Cc: Paolo Bonzini, Radim Krčmář

ping, :)
On Thu, 30 May 2019 at 09:05, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> yield if any of the IPI target vCPUs was preempted. 17% performance
> increasement of ebizzy benchmark can be observed in an over-subscribe
> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> IPI-many since call-function is not easy to be trigged by userspace
> workload).
>
> v2 -> v3:
>  * add bounds-check on dest_id
>
> v1 -> v2:
>  * check map is not NULL
>  * check map->phys_map[dest_id] is not NULL
>  * make kvm_sched_yield static
>  * change dest_id to unsinged long
>
> Wanpeng Li (3):
>   KVM: X86: Yield to IPI target if necessary
>   KVM: X86: Implement PV sched yield hypercall
>   KVM: X86: Expose PV_SCHED_YIELD CPUID feature bit to guest
>
>  Documentation/virtual/kvm/cpuid.txt      |  4 ++++
>  Documentation/virtual/kvm/hypercalls.txt | 11 +++++++++++
>  arch/x86/include/uapi/asm/kvm_para.h     |  1 +
>  arch/x86/kernel/kvm.c                    | 21 +++++++++++++++++++++
>  arch/x86/kvm/cpuid.c                     |  3 ++-
>  arch/x86/kvm/x86.c                       | 26 ++++++++++++++++++++++++++
>  include/uapi/linux/kvm_para.h            |  1 +
>  7 files changed, 66 insertions(+), 1 deletion(-)
>
> --
> 2.7.4
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
  2019-05-30  1:05 ` [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall Wanpeng Li
@ 2019-06-10 14:17   ` Radim Krčmář
  2019-06-11  8:47     ` Wanpeng Li
  0 siblings, 1 reply; 18+ messages in thread
From: Radim Krčmář @ 2019-06-10 14:17 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: linux-kernel, kvm, Paolo Bonzini, Liran Alon

2019-05-30 09:05+0800, Wanpeng Li:
> From: Wanpeng Li <wanpengli@tencent.com>
> 
> The target vCPUs are in runnable state after vcpu_kick and suitable 
> as a yield target. This patch implements the sched yield hypercall.
> 
> 17% performance increasement of ebizzy benchmark can be observed in an 
> over-subscribe environment. (w/ kvm-pv-tlb disabled, testing TLB flush 
> call-function IPI-many since call-function is not easy to be trigged 
> by userspace workload).
> 
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Liran Alon <liran.alon@oracle.com>
> Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
>  	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
>  }
>  
> +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> +{
> +	struct kvm_vcpu *target = NULL;
> +	struct kvm_apic_map *map = NULL;
> +
> +	rcu_read_lock();
> +	map = rcu_dereference(kvm->arch.apic_map);
> +
> +	if (unlikely(!map) || dest_id > map->max_apic_id)
> +		goto out;
> +
> +	if (map->phys_map[dest_id]->vcpu) {

This should check for map->phys_map[dest_id].

> +		target = map->phys_map[dest_id]->vcpu;
> +		rcu_read_unlock();
> +		kvm_vcpu_yield_to(target);
> +	}
> +
> +out:
> +	if (!target)
> +		rcu_read_unlock();

Also, I find the following logic clearer

  {
  	struct kvm_vcpu *target = NULL;
  	struct kvm_apic_map *map;
  	
  	rcu_read_lock();
  	map = rcu_dereference(kvm->arch.apic_map);
  	
  	if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id])
  		target = map->phys_map[dest_id]->vcpu;
  	
  	rcu_read_unlock();
  	
  	if (target)
  		kvm_vcpu_yield_to(target);
  }

thanks.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-05-30  1:05 [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
                   ` (3 preceding siblings ...)
  2019-06-10  5:58 ` [PATCH v3 0/3] KVM: Yield to IPI target if necessary Wanpeng Li
@ 2019-06-10 14:34 ` Radim Krčmář
  2019-06-11  1:11   ` Sean Christopherson
  2019-06-11 10:26   ` Wanpeng Li
  4 siblings, 2 replies; 18+ messages in thread
From: Radim Krčmář @ 2019-06-10 14:34 UTC (permalink / raw)
  To: Wanpeng Li; +Cc: linux-kernel, kvm, Paolo Bonzini

2019-05-30 09:05+0800, Wanpeng Li:
> The idea is from Xen, when sending a call-function IPI-many to vCPUs, 
> yield if any of the IPI target vCPUs was preempted. 17% performance 
> increasement of ebizzy benchmark can be observed in an over-subscribe 
> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function 
> IPI-many since call-function is not easy to be trigged by userspace 
> workload).

Have you checked if we could gain performance by having the yield as an
extension to our PV IPI call?

It would allow us to skip the VM entry/exit overhead on the caller.
(The benefit of that might be negligible and it also poses a
 complication when splitting the target mask into several PV IPI
 hypercalls.)

Thanks.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-10 14:34 ` Radim Krčmář
@ 2019-06-11  1:11   ` Sean Christopherson
  2019-06-11  1:45     ` Wanpeng Li
  2019-06-11 10:26   ` Wanpeng Li
  1 sibling, 1 reply; 18+ messages in thread
From: Sean Christopherson @ 2019-06-11  1:11 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: Wanpeng Li, linux-kernel, kvm, Paolo Bonzini

On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> 2019-05-30 09:05+0800, Wanpeng Li:
> > The idea is from Xen, when sending a call-function IPI-many to vCPUs, 
> > yield if any of the IPI target vCPUs was preempted. 17% performance 
> > increasement of ebizzy benchmark can be observed in an over-subscribe 
> > environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function 
> > IPI-many since call-function is not easy to be trigged by userspace 
> > workload).
> 
> Have you checked if we could gain performance by having the yield as an
> extension to our PV IPI call?
> 
> It would allow us to skip the VM entry/exit overhead on the caller.
> (The benefit of that might be negligible and it also poses a
>  complication when splitting the target mask into several PV IPI
>  hypercalls.)

Tangentially related to splitting PV IPI hypercalls, are there any major
hurdles to supporting shorthand?  Not having to generate the mask for
->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy way to
shave cycles for the affected flows.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-11  1:11   ` Sean Christopherson
@ 2019-06-11  1:45     ` Wanpeng Li
  2019-06-11  1:48       ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2019-06-11  1:45 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: Radim Krčmář, LKML, kvm, Paolo Bonzini

On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> > 2019-05-30 09:05+0800, Wanpeng Li:
> > > The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> > > yield if any of the IPI target vCPUs was preempted. 17% performance
> > > increasement of ebizzy benchmark can be observed in an over-subscribe
> > > environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> > > IPI-many since call-function is not easy to be trigged by userspace
> > > workload).
> >
> > Have you checked if we could gain performance by having the yield as an
> > extension to our PV IPI call?
> >
> > It would allow us to skip the VM entry/exit overhead on the caller.
> > (The benefit of that might be negligible and it also poses a
> >  complication when splitting the target mask into several PV IPI
> >  hypercalls.)
>
> Tangetially related to splitting PV IPI hypercalls, are there any major
> hurdles to supporting shorthand?  Not having to generate the mask for
> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
> shave cycles for affected flows.

Not sure why shorthand is not used for native x2apic mode.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-11  1:45     ` Wanpeng Li
@ 2019-06-11  1:48       ` Nadav Amit
  2019-06-11 10:02         ` Wanpeng Li
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-11  1:48 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> 
> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
>>> 2019-05-30 09:05+0800, Wanpeng Li:
>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
>>>> IPI-many since call-function is not easy to be trigged by userspace
>>>> workload).
>>> 
>>> Have you checked if we could gain performance by having the yield as an
>>> extension to our PV IPI call?
>>> 
>>> It would allow us to skip the VM entry/exit overhead on the caller.
>>> (The benefit of that might be negligible and it also poses a
>>> complication when splitting the target mask into several PV IPI
>>> hypercalls.)
>> 
>> Tangetially related to splitting PV IPI hypercalls, are there any major
>> hurdles to supporting shorthand?  Not having to generate the mask for
>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
>> shave cycles for affected flows.
> 
> Not sure why shorthand is not used for native x2apic mode.

Why do you say so? native_send_call_func_ipi() checks if allbutself
shorthand should be used and does so (even though the check can be more
efficient - I’m looking at that code right now…)
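
For reference, the check in question looked roughly like this around
v5.2 (reconstructed from memory and simplified, not verbatim kernel
code):

  static void native_send_call_func_ipi(const struct cpumask *mask)
  {
  	cpumask_var_t allbutself;

  	if (!alloc_cpumask_var(&allbutself, GFP_ATOMIC)) {
  		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
  		return;
  	}

  	/* Build "every online CPU except me" and compare it to the mask. */
  	cpumask_copy(allbutself, cpu_online_mask);
  	cpumask_clear_cpu(smp_processor_id(), allbutself);

  	if (cpumask_equal(mask, allbutself) &&
  	    cpumask_equal(cpu_online_mask, cpu_callout_mask))
  		apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
  	else
  		apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);

  	free_cpumask_var(allbutself);
  }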

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
  2019-06-10 14:17   ` Radim Krčmář
@ 2019-06-11  8:47     ` Wanpeng Li
  0 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-06-11  8:47 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: LKML, kvm, Paolo Bonzini, Liran Alon

On Mon, 10 Jun 2019 at 22:17, Radim Krčmář <rkrcmar@redhat.com> wrote:
>
> 2019-05-30 09:05+0800, Wanpeng Li:
> > From: Wanpeng Li <wanpengli@tencent.com>
> >
> > The target vCPUs are in runnable state after vcpu_kick and suitable
> > as a yield target. This patch implements the sched yield hypercall.
> >
> > 17% performance increasement of ebizzy benchmark can be observed in an
> > over-subscribe environment. (w/ kvm-pv-tlb disabled, testing TLB flush
> > call-function IPI-many since call-function is not easy to be trigged
> > by userspace workload).
> >
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Radim Krčmář <rkrcmar@redhat.com>
> > Cc: Liran Alon <liran.alon@oracle.com>
> > Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
> > ---
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
> >       kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
> >  }
> >
> > +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> > +{
> > +     struct kvm_vcpu *target = NULL;
> > +     struct kvm_apic_map *map = NULL;
> > +
> > +     rcu_read_lock();
> > +     map = rcu_dereference(kvm->arch.apic_map);
> > +
> > +     if (unlikely(!map) || dest_id > map->max_apic_id)
> > +             goto out;
> > +
> > +     if (map->phys_map[dest_id]->vcpu) {
>
> This should check for map->phys_map[dest_id].

Yeah, I made a mistake here.

>
> > +             target = map->phys_map[dest_id]->vcpu;
> > +             rcu_read_unlock();
> > +             kvm_vcpu_yield_to(target);
> > +     }
> > +
> > +out:
> > +     if (!target)
> > +             rcu_read_unlock();
>
> Also, I find the following logic clearer
>
>   {
>         struct kvm_vcpu *target = NULL;
>         struct kvm_apic_map *map;
>
>         rcu_read_lock();
>         map = rcu_dereference(kvm->arch.apic_map);
>
>         if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id])
>                 target = map->phys_map[dest_id]->vcpu;
>
>         rcu_read_unlock();
>
>         if (target)
>                 kvm_vcpu_yield_to(target);
>   }

Much better, thanks.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-11  1:48       ` Nadav Amit
@ 2019-06-11 10:02         ` Wanpeng Li
  2019-06-11 16:57           ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2019-06-11 10:02 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
>
> > On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >
> > On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> > <sean.j.christopherson@intel.com> wrote:
> >> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> >>> 2019-05-30 09:05+0800, Wanpeng Li:
> >>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> >>>> yield if any of the IPI target vCPUs was preempted. 17% performance
> >>>> increasement of ebizzy benchmark can be observed in an over-subscribe
> >>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> >>>> IPI-many since call-function is not easy to be trigged by userspace
> >>>> workload).
> >>>
> >>> Have you checked if we could gain performance by having the yield as an
> >>> extension to our PV IPI call?
> >>>
> >>> It would allow us to skip the VM entry/exit overhead on the caller.
> >>> (The benefit of that might be negligible and it also poses a
> >>> complication when splitting the target mask into several PV IPI
> >>> hypercalls.)
> >>
> >> Tangetially related to splitting PV IPI hypercalls, are there any major
> >> hurdles to supporting shorthand?  Not having to generate the mask for
> >> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
> >> shave cycles for affected flows.
> >
> > Not sure why shorthand is not used for native x2apic mode.
>
> Why do you say so? native_send_call_func_ipi() checks if allbutself
> shorthand should be used and does so (even though the check can be more
> efficient - I’m looking at that code right now…)

Please look further into the apic/x2apic drivers: only apic_flat writes
the APIC_DEST_ALLBUT/APIC_DEST_ALLINC shorthands into the ICR.
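
In other words, a shorthand IPI is a single ICR write that covers all
CPUs but self, while the x2apic physical mask path issues one ICR write
per destination. Schematically (helper names are illustrative, not the
exact driver code):

  /* Shorthand, as apic_flat can use: one ICR write, no mask needed. */
  send_ipi_shorthand(APIC_DEST_ALLBUT, vector);

  /* x2apic physical mask path: one ICR write per destination CPU. */
  for_each_cpu(cpu, mask)
  	send_ipi_dest(per_cpu(x86_cpu_to_apicid, cpu), vector);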

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-10 14:34 ` Radim Krčmář
  2019-06-11  1:11   ` Sean Christopherson
@ 2019-06-11 10:26   ` Wanpeng Li
  1 sibling, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-06-11 10:26 UTC (permalink / raw)
  To: Radim Krčmář; +Cc: LKML, kvm, Paolo Bonzini

On Mon, 10 Jun 2019 at 22:34, Radim Krčmář <rkrcmar@redhat.com> wrote:
>
> 2019-05-30 09:05+0800, Wanpeng Li:
> > The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> > yield if any of the IPI target vCPUs was preempted. 17% performance
> > increasement of ebizzy benchmark can be observed in an over-subscribe
> > environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> > IPI-many since call-function is not easy to be trigged by userspace
> > workload).
>
> Have you checked if we could gain performance by having the yield as an
> extension to our PV IPI call?

It would extend the IRQ-disabled time in __send_ipi_mask(). In
addition, I think sched yield can also be used to optimize other
synchronization primitives in the guest.
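
For context, a heavily simplified schematic of __send_ipi_mask() in
arch/x86/kernel/kvm.c: the whole bitmap walk and the KVM_HC_SEND_IPI
hypercall(s) already run with interrupts disabled, so folding a yield
into that hypercall would lengthen the IRQ-off window.

  static void __send_ipi_mask(const struct cpumask *mask, int vector)
  {
  	unsigned long flags;
  	int cpu;

  	local_irq_save(flags);

  	for_each_cpu(cpu, mask) {
  		/*
  		 * Accumulate target APIC IDs into a bitmap, issuing a
  		 * KVM_HC_SEND_IPI hypercall whenever the bitmap window
  		 * fills up.
  		 */
  	}

  	/* Final KVM_HC_SEND_IPI hypercall for the remaining bitmap. */

  	local_irq_restore(flags);
  }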

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-11 10:02         ` Wanpeng Li
@ 2019-06-11 16:57           ` Nadav Amit
  2019-06-12  1:18             ` Wanpeng Li
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-11 16:57 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

> On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> 
> On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
>>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
>>> 
>>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
>>> <sean.j.christopherson@intel.com> wrote:
>>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
>>>>> 2019-05-30 09:05+0800, Wanpeng Li:
>>>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
>>>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
>>>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
>>>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
>>>>>> IPI-many since call-function is not easy to be trigged by userspace
>>>>>> workload).
>>>>> 
>>>>> Have you checked if we could gain performance by having the yield as an
>>>>> extension to our PV IPI call?
>>>>> 
>>>>> It would allow us to skip the VM entry/exit overhead on the caller.
>>>>> (The benefit of that might be negligible and it also poses a
>>>>> complication when splitting the target mask into several PV IPI
>>>>> hypercalls.)
>>>> 
>>>> Tangetially related to splitting PV IPI hypercalls, are there any major
>>>> hurdles to supporting shorthand?  Not having to generate the mask for
>>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
>>>> shave cycles for affected flows.
>>> 
>>> Not sure why shorthand is not used for native x2apic mode.
>> 
>> Why do you say so? native_send_call_func_ipi() checks if allbutself
>> shorthand should be used and does so (even though the check can be more
>> efficient - I’m looking at that code right now…)
> 
> Please continue to follow the apic/x2apic driver. Just apic_flat set
> APIC_DEST_ALLBUT/APIC_DEST_ALLINC to ICR.

Indeed - I was sure by the name that it does it correctly. That’s stupid.

I’ll add it to the patch-set I am working on (TLB shootdown improvements),
if you don’t mind.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-11 16:57           ` Nadav Amit
@ 2019-06-12  1:18             ` Wanpeng Li
  2019-06-12  1:37               ` Nadav Amit
  0 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2019-06-12  1:18 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

On Wed, 12 Jun 2019 at 00:57, Nadav Amit <nadav.amit@gmail.com> wrote:
>
> > On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >
> > On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
> >>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >>>
> >>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> >>> <sean.j.christopherson@intel.com> wrote:
> >>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> >>>>> 2019-05-30 09:05+0800, Wanpeng Li:
> >>>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> >>>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
> >>>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
> >>>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> >>>>>> IPI-many since call-function is not easy to be trigged by userspace
> >>>>>> workload).
> >>>>>
> >>>>> Have you checked if we could gain performance by having the yield as an
> >>>>> extension to our PV IPI call?
> >>>>>
> >>>>> It would allow us to skip the VM entry/exit overhead on the caller.
> >>>>> (The benefit of that might be negligible and it also poses a
> >>>>> complication when splitting the target mask into several PV IPI
> >>>>> hypercalls.)
> >>>>
> >>>> Tangetially related to splitting PV IPI hypercalls, are there any major
> >>>> hurdles to supporting shorthand?  Not having to generate the mask for
> >>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
> >>>> shave cycles for affected flows.
> >>>
> >>> Not sure why shorthand is not used for native x2apic mode.
> >>
> >> Why do you say so? native_send_call_func_ipi() checks if allbutself
> >> shorthand should be used and does so (even though the check can be more
> >> efficient - I’m looking at that code right now…)
> >
> > Please continue to follow the apic/x2apic driver. Just apic_flat set
> > APIC_DEST_ALLBUT/APIC_DEST_ALLINC to ICR.
>
> Indeed - I was sure by the name that it does it correctly. That’s stupid.
>
> I’ll add it to the patch-set I am working on (TLB shootdown improvements),
> if you don’t mind.

The shorthands were originally avoided to stay safe w.r.t. CPU hotplug:
https://lwn.net/Articles/138365/
https://lwn.net/Articles/138368/
I'm not sure native shorthand support would be acceptable, so I will
experiment with kvm_send_ipi_allbutself() and kvm_send_ipi_all(). :)

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-12  1:18             ` Wanpeng Li
@ 2019-06-12  1:37               ` Nadav Amit
  2019-06-28  9:12                 ` Wanpeng Li
  0 siblings, 1 reply; 18+ messages in thread
From: Nadav Amit @ 2019-06-12  1:37 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

> On Jun 11, 2019, at 6:18 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> 
> On Wed, 12 Jun 2019 at 00:57, Nadav Amit <nadav.amit@gmail.com> wrote:
>>> On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
>>> 
>>> On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
>>>>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
>>>>> 
>>>>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
>>>>> <sean.j.christopherson@intel.com> wrote:
>>>>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
>>>>>>> 2019-05-30 09:05+0800, Wanpeng Li:
>>>>>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
>>>>>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
>>>>>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
>>>>>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
>>>>>>>> IPI-many since call-function is not easy to be trigged by userspace
>>>>>>>> workload).
>>>>>>> 
>>>>>>> Have you checked if we could gain performance by having the yield as an
>>>>>>> extension to our PV IPI call?
>>>>>>> 
>>>>>>> It would allow us to skip the VM entry/exit overhead on the caller.
>>>>>>> (The benefit of that might be negligible and it also poses a
>>>>>>> complication when splitting the target mask into several PV IPI
>>>>>>> hypercalls.)
>>>>>> 
>>>>>> Tangetially related to splitting PV IPI hypercalls, are there any major
>>>>>> hurdles to supporting shorthand?  Not having to generate the mask for
>>>>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
>>>>>> shave cycles for affected flows.
>>>>> 
>>>>> Not sure why shorthand is not used for native x2apic mode.
>>>> 
>>>> Why do you say so? native_send_call_func_ipi() checks if allbutself
>>>> shorthand should be used and does so (even though the check can be more
>>>> efficient - I’m looking at that code right now…)
>>> 
>>> Please continue to follow the apic/x2apic driver. Just apic_flat set
>>> APIC_DEST_ALLBUT/APIC_DEST_ALLINC to ICR.
>> 
>> Indeed - I was sure by the name that it does it correctly. That’s stupid.
>> 
>> I’ll add it to the patch-set I am working on (TLB shootdown improvements),
>> if you don’t mind.
> 
> Original for hotplug cpu safe.
> https://lwn.net/Articles/138365/
> https://lwn.net/Articles/138368/
> Not sure shortcut native support is acceptable, I will play my
> kvm_send_ipi_allbutself and kvm_send_ipi_all. :)

Yes, I saw these threads before. But I think the test in
native_send_call_func_ipi() should take care of it.

I’ll recheck.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-12  1:37               ` Nadav Amit
@ 2019-06-28  9:12                 ` Wanpeng Li
  2019-06-28  9:18                   ` Wanpeng Li
  0 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2019-06-28  9:12 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

On Wed, 12 Jun 2019 at 09:37, Nadav Amit <nadav.amit@gmail.com> wrote:
>
> > On Jun 11, 2019, at 6:18 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >
> > On Wed, 12 Jun 2019 at 00:57, Nadav Amit <nadav.amit@gmail.com> wrote:
> >>> On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >>>
> >>> On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
> >>>>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> >>>>>
> >>>>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> >>>>> <sean.j.christopherson@intel.com> wrote:
> >>>>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> >>>>>>> 2019-05-30 09:05+0800, Wanpeng Li:
> >>>>>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> >>>>>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
> >>>>>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
> >>>>>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> >>>>>>>> IPI-many since call-function is not easy to be trigged by userspace
> >>>>>>>> workload).
> >>>>>>>
> >>>>>>> Have you checked if we could gain performance by having the yield as an
> >>>>>>> extension to our PV IPI call?
> >>>>>>>
> >>>>>>> It would allow us to skip the VM entry/exit overhead on the caller.
> >>>>>>> (The benefit of that might be negligible and it also poses a
> >>>>>>> complication when splitting the target mask into several PV IPI
> >>>>>>> hypercalls.)
> >>>>>>
> >>>>>> Tangetially related to splitting PV IPI hypercalls, are there any major
> >>>>>> hurdles to supporting shorthand?  Not having to generate the mask for
> >>>>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
> >>>>>> shave cycles for affected flows.
> >>>>>
> >>>>> Not sure why shorthand is not used for native x2apic mode.
> >>>>
> >>>> Why do you say so? native_send_call_func_ipi() checks if allbutself
> >>>> shorthand should be used and does so (even though the check can be more
> >>>> efficient - I’m looking at that code right now…)
> >>>
> >>> Please continue to follow the apic/x2apic driver. Just apic_flat set
> >>> APIC_DEST_ALLBUT/APIC_DEST_ALLINC to ICR.
> >>
> >> Indeed - I was sure by the name that it does it correctly. That’s stupid.
> >>
> >> I’ll add it to the patch-set I am working on (TLB shootdown improvements),
> >> if you don’t mind.
> >
> > Original for hotplug cpu safe.
> > https://lwn.net/Articles/138365/
> > https://lwn.net/Articles/138368/
> > Not sure shortcut native support is acceptable, I will play my
> > kvm_send_ipi_allbutself and kvm_send_ipi_all. :)
>
> Yes, I saw these threads before. But I think the test in
> native_send_call_func_ipi() should take care of it.

Good news: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=WIP.x86/ipi
Thomas, who is also the author of the hotplug state machine, is now
introducing shorthand support to the native kernel. I will add the
support to kvm_send_ipi_allbutself() and kvm_send_ipi_all() after his
work is complete.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/3] KVM: Yield to IPI target if necessary
  2019-06-28  9:12                 ` Wanpeng Li
@ 2019-06-28  9:18                   ` Wanpeng Li
  0 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2019-06-28  9:18 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Sean Christopherson, Radim Krčmář,
	LKML, kvm, Paolo Bonzini

On Fri, 28 Jun 2019 at 17:12, Wanpeng Li <kernellwp@gmail.com> wrote:
>
> On Wed, 12 Jun 2019 at 09:37, Nadav Amit <nadav.amit@gmail.com> wrote:
> >
> > > On Jun 11, 2019, at 6:18 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >
> > > On Wed, 12 Jun 2019 at 00:57, Nadav Amit <nadav.amit@gmail.com> wrote:
> > >>> On Jun 11, 2019, at 3:02 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >>>
> > >>> On Tue, 11 Jun 2019 at 09:48, Nadav Amit <nadav.amit@gmail.com> wrote:
> > >>>>> On Jun 10, 2019, at 6:45 PM, Wanpeng Li <kernellwp@gmail.com> wrote:
> > >>>>>
> > >>>>> On Tue, 11 Jun 2019 at 09:11, Sean Christopherson
> > >>>>> <sean.j.christopherson@intel.com> wrote:
> > >>>>>> On Mon, Jun 10, 2019 at 04:34:20PM +0200, Radim Krčmář wrote:
> > >>>>>>> 2019-05-30 09:05+0800, Wanpeng Li:
> > >>>>>>>> The idea is from Xen, when sending a call-function IPI-many to vCPUs,
> > >>>>>>>> yield if any of the IPI target vCPUs was preempted. 17% performance
> > >>>>>>>> increasement of ebizzy benchmark can be observed in an over-subscribe
> > >>>>>>>> environment. (w/ kvm-pv-tlb disabled, testing TLB flush call-function
> > >>>>>>>> IPI-many since call-function is not easy to be trigged by userspace
> > >>>>>>>> workload).
> > >>>>>>>
> > >>>>>>> Have you checked if we could gain performance by having the yield as an
> > >>>>>>> extension to our PV IPI call?
> > >>>>>>>
> > >>>>>>> It would allow us to skip the VM entry/exit overhead on the caller.
> > >>>>>>> (The benefit of that might be negligible and it also poses a
> > >>>>>>> complication when splitting the target mask into several PV IPI
> > >>>>>>> hypercalls.)
> > >>>>>>
> > >>>>>> Tangetially related to splitting PV IPI hypercalls, are there any major
> > >>>>>> hurdles to supporting shorthand?  Not having to generate the mask for
> > >>>>>> ->send_IPI_allbutself and ->kvm_send_ipi_all seems like an easy to way
> > >>>>>> shave cycles for affected flows.
> > >>>>>
> > >>>>> Not sure why shorthand is not used for native x2apic mode.
> > >>>>
> > >>>> Why do you say so? native_send_call_func_ipi() checks if allbutself
> > >>>> shorthand should be used and does so (even though the check can be more
> > >>>> efficient - I’m looking at that code right now…)
> > >>>
> > >>> Please continue to follow the apic/x2apic driver. Just apic_flat set
> > >>> APIC_DEST_ALLBUT/APIC_DEST_ALLINC to ICR.
> > >>
> > >> Indeed - I was sure by the name that it does it correctly. That’s stupid.
> > >>
> > >> I’ll add it to the patch-set I am working on (TLB shootdown improvements),
> > >> if you don’t mind.
> > >
> > > Original for hotplug cpu safe.
> > > https://lwn.net/Articles/138365/
> > > https://lwn.net/Articles/138368/
> > > Not sure shortcut native support is acceptable, I will play my
> > > kvm_send_ipi_allbutself and kvm_send_ipi_all. :)
> >
> > Yes, I saw these threads before. But I think the test in
> > native_send_call_func_ipi() should take care of it.
>
> Good news, https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=WIP.x86/ipi
> Thomas who also is the hotplug state machine author introduces
> shorthands support to native kernel now, I will add the support to
> kvm_send_ipi_allbutself() and kvm_send_ipi_all() after his work
> complete.

Hmm, we should fall back to the native shorthands when they are supported.

Regards,
Wanpeng Li

^ permalink raw reply	[flat|nested] 18+ messages in thread

