linux-hyperv.vger.kernel.org archive mirror
* [PATCH v6 0/5] Add a unified parameter "nopvspin"
@ 2019-10-15  1:19 Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan

There are cases where folks want to disable spinlock optimizations for
debug/test purposes. Xen and Hyper-V already have the parameters
"xen_nopvspin" and "hv_nopvspin" to support that, but KVM doesn't.

Patch 3 adds that feature to KVM guests with a new "nopvspin" parameter.

For compatibility reasons, the original parameters "xen_nopvspin" and
"hv_nopvspin" are retained and marked obsolete.

v6:
PATCH1: add Reviewed-by                                  [Vitaly Kuznetsov]
PATCH2: change 'pv' to 'PV', add Reviewed-by             [Vitaly Kuznetsov]
PATCH3: refactor 'if' branch in kvm_spinlock_init()      [Vitaly Kuznetsov]

v5:
PATCH1: new patch to revert a currently unnecessary commit;
        the code is a bit simpler after that change.     [Boris Ostrovsky]
PATCH3: fold 'if' statement, add comments on virt_spin_lock_key,
        reorder with PATCH2 to better reflect the dependency
PATCH4: fold 'if' statement, add Reviewed-by             [Boris Ostrovsky]
PATCH5: add Reviewed-by                                  [Michael Kelley]

v4:
PATCH1: use variable name nopvspin instead of pvspin,
        define it as __initdata, change the print message,
        update the patch description                  [Sean Christopherson]
PATCH2: remove Suggested-by, use "kvm-guest:" prefix  [Sean Christopherson]
PATCH3: make variables nopvspin and xen_pvspin coexist,
        remove Reviewed-by due to code change         [Sean Christopherson]
PATCH4: make variables nopvspin and hv_pvspin coexist [Sean Christopherson]

v3:
PATCH2: Fix indentation

v2:
PATCH1: split the print code change out into separate PATCH2,
        update patch description              [Vitaly Kuznetsov]
PATCH2: new patch with print code change      [Vitaly Kuznetsov]
PATCH3: add Reviewed-by                       [Juergen Gross]

Zhenzhong Duan (5):
  Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key
    get initialized"
  x86/kvm: Change print code to use pr_*() format
  x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  xen: Mark "xen_nopvspin" parameter obsolete
  x86/hyperv: Mark "hv_nopvspin" parameter obsolete

 Documentation/admin-guide/kernel-parameters.txt | 14 ++++-
 arch/x86/hyperv/hv_spinlock.c                   |  4 ++
 arch/x86/include/asm/qspinlock.h                |  1 +
 arch/x86/kernel/kvm.c                           | 76 ++++++++++++++++---------
 arch/x86/xen/spinlock.c                         |  4 +-
 kernel/locking/qspinlock.c                      |  7 +++
 6 files changed, 75 insertions(+), 31 deletions(-)

-- 
1.8.3.1


* [PATCH v6 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized"
  2019-10-15  1:19 [PATCH v6 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
@ 2019-10-15  1:19 ` Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan, H. Peter Anvin

This reverts commit 34226b6b70980a8f81fff3c09a2c889f77edeeff.

Commit 8990cac6e5ea ("x86/jump_label: Initialize static branching
early") added a jump_label_init() call in setup_arch() so that static
keys are initialized early, which lets us use the original, simpler
code again.

A similar change for Xen was made in commit 090d54bcbc54 ("Revert
"x86/paravirt: Set up the virt_spin_lock_key after static keys get
initialized"").

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/kernel/kvm.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index e820568..3bc6a266 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -527,13 +527,6 @@ static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
 	}
 }
 
-static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
-{
-	native_smp_prepare_cpus(max_cpus);
-	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
-		static_branch_disable(&virt_spin_lock_key);
-}
-
 static void __init kvm_smp_prepare_boot_cpu(void)
 {
 	/*
@@ -633,7 +626,6 @@ static void __init kvm_guest_init(void)
 		apic_set_eoi_write(kvm_guest_apic_eoi_write);
 
 #ifdef CONFIG_SMP
-	smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
 	smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu;
 	if (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
@@ -835,8 +827,10 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
-	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+		static_branch_disable(&virt_spin_lock_key);
 		return;
+	}
 
 	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
 	if (num_possible_cpus() == 1)
-- 
1.8.3.1


* [PATCH v6 2/5] x86/kvm: Change print code to use pr_*() format
  2019-10-15  1:19 [PATCH v6 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
@ 2019-10-15  1:19 ` Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan, H. Peter Anvin

pr_*() is preferred over printk(KERN_* ...). After this change, all
messages printed from arch/x86/kernel/kvm.c will have the "kvm_guest: xxx"
prefix.
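
For example, pr_info("setup PV IPIs\n") now shows up in the kernel log as
"kvm_guest: setup PV IPIs".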

No functional change.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/kernel/kvm.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 3bc6a266..249f14a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -7,6 +7,8 @@
  *   Authors: Anthony Liguori <aliguori@us.ibm.com>
  */
 
+#define pr_fmt(fmt) "kvm_guest: " fmt
+
 #include <linux/context_tracking.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
@@ -286,8 +288,8 @@ static void kvm_register_steal_time(void)
 		return;
 
 	wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
-	pr_info("kvm-stealtime: cpu %d, msr %llx\n",
-		cpu, (unsigned long long) slow_virt_to_phys(st));
+	pr_info("stealtime: cpu %d, msr %llx\n", cpu,
+		(unsigned long long) slow_virt_to_phys(st));
 }
 
 static DEFINE_PER_CPU_DECRYPTED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
@@ -321,8 +323,7 @@ static void kvm_guest_cpu_init(void)
 
 		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
 		__this_cpu_write(apf_reason.enabled, 1);
-		printk(KERN_INFO"KVM setup async PF for cpu %d\n",
-		       smp_processor_id());
+		pr_info("setup async PF for cpu %d\n", smp_processor_id());
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) {
@@ -347,8 +348,7 @@ static void kvm_pv_disable_apf(void)
 	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
 	__this_cpu_write(apf_reason.enabled, 0);
 
-	printk(KERN_INFO"Unregister pv shared memory for cpu %d\n",
-	       smp_processor_id());
+	pr_info("Unregister PV shared memory for cpu %d\n", smp_processor_id());
 }
 
 static void kvm_pv_guest_cpu_reboot(void *unused)
@@ -469,7 +469,8 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
 		} else {
 			ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
 				(unsigned long)(ipi_bitmap >> BITS_PER_LONG), min, icr);
-			WARN_ONCE(ret < 0, "KVM: failed to send PV IPI: %ld", ret);
+			WARN_ONCE(ret < 0, "kvm_guest: failed to send PV IPI: %ld",
+				  ret);
 			min = max = apic_id;
 			ipi_bitmap = 0;
 		}
@@ -479,7 +480,8 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
 	if (ipi_bitmap) {
 		ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
 			(unsigned long)(ipi_bitmap >> BITS_PER_LONG), min, icr);
-		WARN_ONCE(ret < 0, "KVM: failed to send PV IPI: %ld", ret);
+		WARN_ONCE(ret < 0, "kvm_guest: failed to send PV IPI: %ld",
+			  ret);
 	}
 
 	local_irq_restore(flags);
@@ -509,7 +511,7 @@ static void kvm_setup_pv_ipi(void)
 {
 	apic->send_IPI_mask = kvm_send_ipi_mask;
 	apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
-	pr_info("KVM setup pv IPIs\n");
+	pr_info("setup PV IPIs\n");
 }
 
 static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
@@ -631,11 +633,11 @@ static void __init kvm_guest_init(void)
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		smp_ops.send_call_func_ipi = kvm_smp_send_call_func_ipi;
-		pr_info("KVM setup pv sched yield\n");
+		pr_info("setup PV sched yield\n");
 	}
 	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/kvm:online",
 				      kvm_cpu_online, kvm_cpu_down_prepare) < 0)
-		pr_err("kvm_guest: Failed to install cpu hotplug callbacks\n");
+		pr_err("failed to install cpu hotplug callbacks\n");
 #else
 	sev_map_percpu_data();
 	kvm_guest_cpu_init();
@@ -738,7 +740,7 @@ static __init int kvm_setup_pv_tlb_flush(void)
 			zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
 				GFP_KERNEL, cpu_to_node(cpu));
 		}
-		pr_info("KVM setup pv remote TLB flush\n");
+		pr_info("setup PV remote TLB flush\n");
 	}
 
 	return 0;
@@ -866,8 +868,8 @@ static void kvm_enable_host_haltpoll(void *i)
 void arch_haltpoll_enable(unsigned int cpu)
 {
 	if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL)) {
-		pr_err_once("kvm: host does not support poll control\n");
-		pr_err_once("kvm: host upgrade recommended\n");
+		pr_err_once("host does not support poll control\n");
+		pr_err_once("host upgrade recommended\n");
 		return;
 	}
 
-- 
1.8.3.1


* [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-15  1:19 [PATCH v6 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
@ 2019-10-15  1:19 ` Zhenzhong Duan
  2019-10-21  2:36   ` Zhenzhong Duan
  2019-10-21 11:14   ` Vitaly Kuznetsov
  2019-10-15  1:19 ` [PATCH v6 4/5] xen: Mark "xen_nopvspin" parameter obsolete Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 5/5] x86/hyperv: Mark "hv_nopvspin" " Zhenzhong Duan
  4 siblings, 2 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan, Jonathan Corbet, H. Peter Anvin,
	Will Deacon

There are cases where a guest tries to switch spinlocks to bare metal
behavior (e.g. by setting "xen_nopvspin" on XEN platform and
"hv_nopvspin" on HYPER_V).

That feature is missing on KVM, so add a new parameter "nopvspin" to
disable PV spinlocks for KVM guests.

The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
parameters in future patches.

Define the variable nopvspin as global because it will be used in the
future patches mentioned above.
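
With this patch, a KVM guest booted with "nopvspin" on its kernel command
line logs

    kvm_guest: PV spinlocks disabled, forced by "nopvspin" parameter.

(assuming none of the other disable conditions hits first) and uses the
native qspinlock instead.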

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 Documentation/admin-guide/kernel-parameters.txt |  5 ++++
 arch/x86/include/asm/qspinlock.h                |  1 +
 arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++++---
 kernel/locking/qspinlock.c                      |  7 +++++
 4 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f..bd49ed2 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5334,6 +5334,11 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
+	nopvspin	[X86,KVM]
+			Disables the qspinlock slow path using PV optimizations
+			which allow the hypervisor to 'idle' the guest on lock
+			contention.
+
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 444d6fd..d86ab94 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
 extern void __pv_init_lock_hash(void);
 extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
 extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+extern bool nopvspin;
 
 #define	queued_spin_unlock queued_spin_unlock
 /**
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 249f14a..e9c76d8 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -825,18 +825,44 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
  */
 void __init kvm_spinlock_init(void)
 {
-	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+	/*
+	 * PV spinlocks is disabled if no host side support, then native
+	 * qspinlock will be used. As native qspinlock is a fair lock, there is
+	 * lock holder preemption issue using it in a guest, imaging one pCPU
+	 * running 10 vCPUs of same guest contending same lock.
+	 *
+	 * virt_spin_lock() is introduced as an optimization for that scenario
+	 * which is enabled by virt_spin_lock_key key. To use that optimization,
+	 * virt_spin_lock_key isn't disabled here.
+	 */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+		pr_info("PV spinlocks disabled, no host support.\n");
 		return;
+	}
 
+	/*
+	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
+	 * are available.
+	 */
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
+		static_branch_disable(&virt_spin_lock_key);
+		return;
+	}
+
+	if (num_possible_cpus() == 1) {
+		pr_info("PV spinlocks disabled, single CPU.\n");
 		static_branch_disable(&virt_spin_lock_key);
 		return;
 	}
 
-	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1)
+	if (nopvspin) {
+		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
+		static_branch_disable(&virt_spin_lock_key);
 		return;
+	}
+
+	pr_info("PV spinlocks enabled\n");
 
 	__pv_init_lock_hash();
 	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2473f10..75193d6 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 #include "qspinlock_paravirt.h"
 #include "qspinlock.c"
 
+bool nopvspin __initdata;
+static __init int parse_nopvspin(char *arg)
+{
+	nopvspin = true;
+	return 0;
+}
+early_param("nopvspin", parse_nopvspin);
 #endif
-- 
1.8.3.1


* [PATCH v6 4/5] xen: Mark "xen_nopvspin" parameter obsolete
  2019-10-15  1:19 [PATCH v6 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
                   ` (2 preceding siblings ...)
  2019-10-15  1:19 ` [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
@ 2019-10-15  1:19 ` Zhenzhong Duan
  2019-10-15  1:19 ` [PATCH v6 5/5] x86/hyperv: Mark "hv_nopvspin" " Zhenzhong Duan
  4 siblings, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan, Jonathan Corbet, H. Peter Anvin

Map "xen_nopvspin" to "nopvspin", fix stale description of "xen_nopvspin"
as we use qspinlock now.
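
With this patch applied, booting with either "nopvspin" or the old
"xen_nopvspin" disables the Xen PV spinlock path; the latter additionally
prints a deprecation notice pointing to "nopvspin".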

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 7 ++++---
 arch/x86/xen/spinlock.c                         | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index bd49ed2..85059dd 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5307,8 +5307,9 @@
 			panic() code such as dumping handler.
 
 	xen_nopvspin	[X86,XEN]
-			Disables the ticketlock slowpath using Xen PV
-			optimizations.
+			Disables the qspinlock slowpath using Xen PV optimizations.
+			This parameter is obsoleted by "nopvspin" parameter, which
+			has equivalent effect for XEN platform.
 
 	xen_nopv	[X86]
 			Disables the PV optimizations forcing the HVM guest to
@@ -5334,7 +5335,7 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
-	nopvspin	[X86,KVM]
+	nopvspin	[X86,XEN,KVM]
 			Disables the qspinlock slow path using PV optimizations
 			which allow the hypervisor to 'idle' the guest on lock
 			contention.
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 6deb490..799f4eb 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -114,9 +114,8 @@ void xen_uninit_lock_cpu(int cpu)
  */
 void __init xen_init_spinlocks(void)
 {
-
 	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1)
+	if (num_possible_cpus() == 1 || nopvspin)
 		xen_pvspin = false;
 
 	if (!xen_pvspin) {
@@ -137,6 +136,7 @@ void __init xen_init_spinlocks(void)
 
 static __init int xen_parse_nopvspin(char *arg)
 {
+	pr_notice("\"xen_nopvspin\" is deprecated, please use \"nopvspin\" instead\n");
 	xen_pvspin = false;
 	return 0;
 }
-- 
1.8.3.1


* [PATCH v6 5/5] x86/hyperv: Mark "hv_nopvspin" parameter obsolete
  2019-10-15  1:19 [PATCH v6 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
                   ` (3 preceding siblings ...)
  2019-10-15  1:19 ` [PATCH v6 4/5] xen: Mark "xen_nopvspin" parameter obsolete Zhenzhong Duan
@ 2019-10-15  1:19 ` Zhenzhong Duan
  4 siblings, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-15  1:19 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Zhenzhong Duan, Jonathan Corbet, H. Peter Anvin

Map "hv_nopvspin" to "nopvspin".

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 6 +++++-
 arch/x86/hyperv/hv_spinlock.c                   | 4 ++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 85059dd..78648bb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1436,6 +1436,10 @@
 	hv_nopvspin	[X86,HYPER_V] Disables the paravirt spinlock optimizations
 				      which allow the hypervisor to 'idle' the
 				      guest on lock contention.
+				      This parameter is obsoleted by "nopvspin"
+				      parameter, which has equivalent effect for
+				      HYPER_V platform.
+
 
 	keep_bootcon	[KNL]
 			Do not unregister boot console at start. This is only
@@ -5335,7 +5339,7 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
-	nopvspin	[X86,XEN,KVM]
+	nopvspin	[X86,XEN,KVM,HYPER_V]
 			Disables the qspinlock slow path using PV optimizations
 			which allow the hypervisor to 'idle' the guest on lock
 			contention.
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 07f21a0..47c7d6c 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -64,6 +64,9 @@ __visible bool hv_vcpu_is_preempted(int vcpu)
 
 void __init hv_init_spinlocks(void)
 {
+	if (nopvspin)
+		hv_pvspin = false;
+
 	if (!hv_pvspin || !apic ||
 	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
 	    !(ms_hyperv.features & HV_X64_MSR_GUEST_IDLE_AVAILABLE)) {
@@ -82,6 +85,7 @@ void __init hv_init_spinlocks(void)
 
 static __init int hv_parse_nopvspin(char *arg)
 {
+	pr_notice("\"hv_nopvspin\" is deprecated, please use \"nopvspin\" instead\n");
 	hv_pvspin = false;
 	return 0;
 }
-- 
1.8.3.1


* Re: [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-15  1:19 ` [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
@ 2019-10-21  2:36   ` Zhenzhong Duan
  2019-10-21 11:14   ` Vitaly Kuznetsov
  1 sibling, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  2:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: vkuznets, linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal,
	tglx, mingo, bp, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, sstabellini,
	peterz, Jonathan Corbet, H. Peter Anvin, Will Deacon

Hi Vitaly,

This patch is based on your suggestion on v5; I'd appreciate your further
review. :) Thanks

Zhenzhong

On 2019/10/15 9:19, Zhenzhong Duan wrote:
> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" on XEN platform and
> "hv_nopvspin" on HYPER_V).
>
> That feature is missing on KVM, so add a new parameter "nopvspin" to
> disable PV spinlocks for KVM guests.
>
> The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
> parameters in future patches.
>
> Define the variable nopvspin as global because it will be used in the
> future patches mentioned above.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krcmar <rkrcmar@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>   Documentation/admin-guide/kernel-parameters.txt |  5 ++++
>   arch/x86/include/asm/qspinlock.h                |  1 +
>   arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++++---
>   kernel/locking/qspinlock.c                      |  7 +++++
>   4 files changed, 43 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f..bd49ed2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5334,6 +5334,11 @@
>   			as generic guest with no PV drivers. Currently support
>   			XEN HVM, KVM, HYPER_V and VMWARE guest.
>   
> +	nopvspin	[X86,KVM]
> +			Disables the qspinlock slow path using PV optimizations
> +			which allow the hypervisor to 'idle' the guest on lock
> +			contention.
> +
>   	xirc2ps_cs=	[NET,PCMCIA]
>   			Format:
>   			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 444d6fd..d86ab94 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
>   extern void __pv_init_lock_hash(void);
>   extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>   extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
> +extern bool nopvspin;
>   
>   #define	queued_spin_unlock queued_spin_unlock
>   /**
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 249f14a..e9c76d8 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -825,18 +825,44 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>    */
>   void __init kvm_spinlock_init(void)
>   {
> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> +	/*
> +	 * PV spinlocks is disabled if no host side support, then native
> +	 * qspinlock will be used. As native qspinlock is a fair lock, there is
> +	 * lock holder preemption issue using it in a guest, imaging one pCPU
> +	 * running 10 vCPUs of same guest contending same lock.
> +	 *
> +	 * virt_spin_lock() is introduced as an optimization for that scenario
> +	 * which is enabled by virt_spin_lock_key key. To use that optimization,
> +	 * virt_spin_lock_key isn't disabled here.
> +	 */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support.\n");
>   		return;
> +	}
>   
> +	/*
> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> +	 * are available.
> +	 */
>   	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> +		static_branch_disable(&virt_spin_lock_key);
> +		return;
> +	}
> +
> +	if (num_possible_cpus() == 1) {
> +		pr_info("PV spinlocks disabled, single CPU.\n");
>   		static_branch_disable(&virt_spin_lock_key);
>   		return;
>   	}
>   
> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> -	if (num_possible_cpus() == 1)
> +	if (nopvspin) {
> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> +		static_branch_disable(&virt_spin_lock_key);
>   		return;
> +	}
> +
> +	pr_info("PV spinlocks enabled\n");
>   
>   	__pv_init_lock_hash();
>   	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2473f10..75193d6 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>   #include "qspinlock_paravirt.h"
>   #include "qspinlock.c"
>   
> +bool nopvspin __initdata;
> +static __init int parse_nopvspin(char *arg)
> +{
> +	nopvspin = true;
> +	return 0;
> +}
> +early_param("nopvspin", parse_nopvspin);
>   #endif

* Re: [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-15  1:19 ` [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
  2019-10-21  2:36   ` Zhenzhong Duan
@ 2019-10-21 11:14   ` Vitaly Kuznetsov
  2019-10-22  2:45     ` Zhenzhong Duan
  1 sibling, 1 reply; 9+ messages in thread
From: Vitaly Kuznetsov @ 2019-10-21 11:14 UTC (permalink / raw)
  To: Zhenzhong Duan, linux-kernel
  Cc: linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal, tglx, mingo,
	bp, pbonzini, rkrcmar, sean.j.christopherson, wanpengli,
	jmattson, joro, boris.ostrovsky, jgross, sstabellini, peterz,
	Zhenzhong Duan, Jonathan Corbet, H. Peter Anvin, Will Deacon

Zhenzhong Duan <zhenzhong.duan@oracle.com> writes:

> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" on XEN platform and
> "hv_nopvspin" on HYPER_V).
>
> That feature is missing on KVM, so add a new parameter "nopvspin" to
> disable PV spinlocks for KVM guests.
>
> The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
> parameters in future patches.
>
> Define the variable nopvspin as global because it will be used in the
> future patches mentioned above.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krcmar <rkrcmar@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  5 ++++
>  arch/x86/include/asm/qspinlock.h                |  1 +
>  arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++++---
>  kernel/locking/qspinlock.c                      |  7 +++++
>  4 files changed, 43 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f..bd49ed2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5334,6 +5334,11 @@
>  			as generic guest with no PV drivers. Currently support
>  			XEN HVM, KVM, HYPER_V and VMWARE guest.
>  
> +	nopvspin	[X86,KVM]
> +			Disables the qspinlock slow path using PV optimizations
> +			which allow the hypervisor to 'idle' the guest on lock
> +			contention.
> +
>  	xirc2ps_cs=	[NET,PCMCIA]
>  			Format:
>  			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 444d6fd..d86ab94 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
>  extern void __pv_init_lock_hash(void);
>  extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>  extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
> +extern bool nopvspin;
>  
>  #define	queued_spin_unlock queued_spin_unlock
>  /**
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 249f14a..e9c76d8 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -825,18 +825,44 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>   */
>  void __init kvm_spinlock_init(void)
>  {
> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> +	/*
> +	 * PV spinlocks is disabled if no host side support, then native
> +	 * qspinlock will be used. As native qspinlock is a fair lock, there is
> +	 * lock holder preemption issue using it in a guest, imaging one pCPU
> +	 * running 10 vCPUs of same guest contending same lock.
> +	 *
> +	 * virt_spin_lock() is introduced as an optimization for that scenario
> +	 * which is enabled by virt_spin_lock_key key. To use that optimization,
> +	 * virt_spin_lock_key isn't disabled here.
> +	 */

My take (if I properly understood what you say) would be:

"In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
preferred over native qspinlock when vCPU is preempted."
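
(For context, virt_spin_lock() is just the test-and-set fallback from
arch/x86/include/asm/qspinlock.h; roughly:

	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

so it is unfair, but a waiter is never stuck behind another waiter that
has been preempted, which can happen with the fair, queued native
slowpath.)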

> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support.\n");
>  		return;
> +	}
>  
> +	/*
> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> +	 * are available.
> +	 */
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> +		static_branch_disable(&virt_spin_lock_key);
> +		return;
> +	}
> +
> +	if (num_possible_cpus() == 1) {
> +		pr_info("PV spinlocks disabled, single CPU.\n");
>  		static_branch_disable(&virt_spin_lock_key);
>  		return;
>  	}
>  
> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> -	if (num_possible_cpus() == 1)
> +	if (nopvspin) {
> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> +		static_branch_disable(&virt_spin_lock_key);
>  		return;

You could've replaced this 'static_branch_disable(); return;' pattern
with a goto to the end of the function to save a few lines but this
looks good anyways.
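
Untested, just to illustrate what I mean; the three "disable" branches
could share a single exit:

	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
		goto out;
	}

	if (num_possible_cpus() == 1) {
		pr_info("PV spinlocks disabled, single CPU.\n");
		goto out;
	}

	if (nopvspin) {
		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
		goto out;
	}

	pr_info("PV spinlocks enabled\n");

	__pv_init_lock_hash();
	/* ... rest of the PV spinlock setup ... */
	return;

out:
	static_branch_disable(&virt_spin_lock_key);

with the KVM_FEATURE_PV_UNHALT check above still returning early without
touching virt_spin_lock_key.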

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>

> +	}
> +
> +	pr_info("PV spinlocks enabled\n");
>  
>  	__pv_init_lock_hash();
>  	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2473f10..75193d6 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  #include "qspinlock_paravirt.h"
>  #include "qspinlock.c"
>  
> +bool nopvspin __initdata;
> +static __init int parse_nopvspin(char *arg)
> +{
> +	nopvspin = true;
> +	return 0;
> +}
> +early_param("nopvspin", parse_nopvspin);
>  #endif

-- 
Vitaly

* Re: [PATCH v6 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-21 11:14   ` Vitaly Kuznetsov
@ 2019-10-22  2:45     ` Zhenzhong Duan
  0 siblings, 0 replies; 9+ messages in thread
From: Zhenzhong Duan @ 2019-10-22  2:45 UTC (permalink / raw)
  To: Vitaly Kuznetsov, linux-kernel
  Cc: linux-hyperv, kvm, kys, haiyangz, sthemmin, sashal, tglx, mingo,
	bp, pbonzini, rkrcmar, sean.j.christopherson, wanpengli,
	jmattson, joro, boris.ostrovsky, jgross, sstabellini, peterz,
	Jonathan Corbet, H. Peter Anvin, Will Deacon


On 2019/10/21 19:14, Vitaly Kuznetsov wrote:
>> index 249f14a..e9c76d8 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -825,18 +825,44 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>    */
>>   void __init kvm_spinlock_init(void)
>>   {
>> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> +	/*
>> +	 * PV spinlocks is disabled if no host side support, then native
>> +	 * qspinlock will be used. As native qspinlock is a fair lock, there is
>> +	 * lock holder preemption issue using it in a guest, imaging one pCPU
>> +	 * running 10 vCPUs of same guest contending same lock.
>> +	 *
>> +	 * virt_spin_lock() is introduced as an optimization for that scenario
>> +	 * which is enabled by virt_spin_lock_key key. To use that optimization,
>> +	 * virt_spin_lock_key isn't disabled here.
>> +	 */
> My take (if I properly understood what you say) would be:
>
> "In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> preferred over native qspinlock when vCPU is preempted."

Yes, that's what I meant; maybe I didn't explain it clearly due to my poor
English. I'll use your explanation instead.

>
>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>> +		pr_info("PV spinlocks disabled, no host support.\n");
>>   		return;
>> +	}
>>   
>> +	/*
>> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>> +	 * are available.
>> +	 */
>>   	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>> +		static_branch_disable(&virt_spin_lock_key);
>> +		return;
>> +	}
>> +
>> +	if (num_possible_cpus() == 1) {
>> +		pr_info("PV spinlocks disabled, single CPU.\n");
>>   		static_branch_disable(&virt_spin_lock_key);
>>   		return;
>>   	}
>>   
>> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
>> -	if (num_possible_cpus() == 1)
>> +	if (nopvspin) {
>> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>> +		static_branch_disable(&virt_spin_lock_key);
>>   		return;
> You could've replaced this 'static_branch_disable(); return;' pattern
> with a goto to the end of the function to save a few lines but this
> looks good anyways.
>
> Reviewed-by: Vitaly Kuznetsov<vkuznets@redhat.com>

OK, will do, thanks for the review.

Zhenzhong

