* [PATCH v7 0/5] Add a unified parameter "nopvspin"
@ 2019-10-21  9:11 Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan

All the patches have a Reviewed-by now, so I think v7 could be the
final version.

There are cases where people want to disable the spinlock optimization
for debugging or testing. Xen and Hyper-V already have the parameters
"xen_nopvspin" and "hv_nopvspin" for that, but KVM doesn't.

This series adds that capability to KVM guests with a new "nopvspin"
parameter.

For compatibility reasons, the original parameters "xen_nopvspin" and
"hv_nopvspin" are retained and marked obsolete.

v7:
PATCH3: update comment and use goto, add RB              [Vitaly Kuznetsov]

v6:
PATCH1: add Reviewed-by                                  [Vitaly Kuznetsov]
PATCH2: change 'pv' to 'PV', add Reviewed-by             [Vitaly Kuznetsov]
PATCH3: refactor 'if' branch in kvm_spinlock_init()      [Vitaly Kuznetsov]

v5:
PATCH1: new patch to revert a currently unnecessary commit,
        the code is a bit simpler after that change.     [Boris Ostrovsky]
PATCH3: fold 'if' statement, add comments on virt_spin_lock_key,
        reorder with PATCH2 to better reflect the dependency
PATCH4: fold 'if' statement, add Reviewed-by             [Boris Ostrovsky]
PATCH5: add Reviewed-by                                  [Michael Kelley]

v4:
PATCH1: use the variable name nopvspin instead of pvspin,
        define it as __initdata, change the print message,
        update the patch description                  [Sean Christopherson]
PATCH2: remove Suggested-by, use "kvm-guest:" prefix  [Sean Christopherson]
PATCH3: make the variables nopvspin and xen_pvspin coexist,
        remove Reviewed-by due to code change         [Sean Christopherson]
PATCH4: make the variables nopvspin and hv_pvspin coexist [Sean Christopherson]

v3:
PATCH2: Fix indentation

v2:
PATCH1: move the print code change into a separate PATCH2,
        update the patch description          [Vitaly Kuznetsov]
PATCH2: new patch with print code change      [Vitaly Kuznetsov]
PATCH3: add Reviewed-by                       [Juergen Gross]

Zhenzhong Duan (5):
  Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key
    get initialized"
  x86/kvm: Change print code to use pr_*() format
  x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  xen: Mark "xen_nopvspin" parameter obsolete
  x86/hyperv: Mark "hv_nopvspin" parameter obsolete

 Documentation/admin-guide/kernel-parameters.txt | 14 ++++-
 arch/x86/hyperv/hv_spinlock.c                   |  4 ++
 arch/x86/include/asm/qspinlock.h                |  1 +
 arch/x86/kernel/kvm.c                           | 74 +++++++++++++++----------
 arch/x86/xen/spinlock.c                         |  4 +-
 kernel/locking/qspinlock.c                      |  7 +++
 6 files changed, 71 insertions(+), 33 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized"
  2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
@ 2019-10-21  9:11 ` Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan, H. Peter Anvin

This reverts commit 34226b6b70980a8f81fff3c09a2c889f77edeeff.

Commit 8990cac6e5ea ("x86/jump_label: Initialize static branching
early") added a jump_label_init() call in setup_arch() so that static
keys are initialized early, which lets us use the original, simpler
code again.

A similar change for Xen was made in commit 090d54bcbc54 ("Revert
"x86/paravirt: Set up the virt_spin_lock_key after static keys get
initialized"").

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/kernel/kvm.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index e820568..3bc6a266 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -527,13 +527,6 @@ static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
 	}
 }
 
-static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
-{
-	native_smp_prepare_cpus(max_cpus);
-	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
-		static_branch_disable(&virt_spin_lock_key);
-}
-
 static void __init kvm_smp_prepare_boot_cpu(void)
 {
 	/*
@@ -633,7 +626,6 @@ static void __init kvm_guest_init(void)
 		apic_set_eoi_write(kvm_guest_apic_eoi_write);
 
 #ifdef CONFIG_SMP
-	smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
 	smp_ops.smp_prepare_boot_cpu = kvm_smp_prepare_boot_cpu;
 	if (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) &&
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
@@ -835,8 +827,10 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
-	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+		static_branch_disable(&virt_spin_lock_key);
 		return;
+	}
 
 	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
 	if (num_possible_cpus() == 1)
-- 
1.8.3.1



* [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format
  2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
@ 2019-10-21  9:11 ` Zhenzhong Duan
  2019-10-22 21:01   ` Sean Christopherson
  2019-10-21  9:11 ` [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan, H. Peter Anvin

pr_*() is preferred over printk(KERN_* ...). After this change, all
messages printed from arch/x86/kernel/kvm.c use the "kvm_guest: xxx"
style.

No functional change.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 arch/x86/kernel/kvm.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 3bc6a266..249f14a 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -7,6 +7,8 @@
  *   Authors: Anthony Liguori <aliguori@us.ibm.com>
  */
 
+#define pr_fmt(fmt) "kvm_guest: " fmt
+
 #include <linux/context_tracking.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
@@ -286,8 +288,8 @@ static void kvm_register_steal_time(void)
 		return;
 
 	wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
-	pr_info("kvm-stealtime: cpu %d, msr %llx\n",
-		cpu, (unsigned long long) slow_virt_to_phys(st));
+	pr_info("stealtime: cpu %d, msr %llx\n", cpu,
+		(unsigned long long) slow_virt_to_phys(st));
 }
 
 static DEFINE_PER_CPU_DECRYPTED(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
@@ -321,8 +323,7 @@ static void kvm_guest_cpu_init(void)
 
 		wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
 		__this_cpu_write(apf_reason.enabled, 1);
-		printk(KERN_INFO"KVM setup async PF for cpu %d\n",
-		       smp_processor_id());
+		pr_info("setup async PF for cpu %d\n", smp_processor_id());
 	}
 
 	if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) {
@@ -347,8 +348,7 @@ static void kvm_pv_disable_apf(void)
 	wrmsrl(MSR_KVM_ASYNC_PF_EN, 0);
 	__this_cpu_write(apf_reason.enabled, 0);
 
-	printk(KERN_INFO"Unregister pv shared memory for cpu %d\n",
-	       smp_processor_id());
+	pr_info("Unregister PV shared memory for cpu %d\n", smp_processor_id());
 }
 
 static void kvm_pv_guest_cpu_reboot(void *unused)
@@ -469,7 +469,8 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
 		} else {
 			ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
 				(unsigned long)(ipi_bitmap >> BITS_PER_LONG), min, icr);
-			WARN_ONCE(ret < 0, "KVM: failed to send PV IPI: %ld", ret);
+			WARN_ONCE(ret < 0, "kvm_guest: failed to send PV IPI: %ld",
+				  ret);
 			min = max = apic_id;
 			ipi_bitmap = 0;
 		}
@@ -479,7 +480,8 @@ static void __send_ipi_mask(const struct cpumask *mask, int vector)
 	if (ipi_bitmap) {
 		ret = kvm_hypercall4(KVM_HC_SEND_IPI, (unsigned long)ipi_bitmap,
 			(unsigned long)(ipi_bitmap >> BITS_PER_LONG), min, icr);
-		WARN_ONCE(ret < 0, "KVM: failed to send PV IPI: %ld", ret);
+		WARN_ONCE(ret < 0, "kvm_guest: failed to send PV IPI: %ld",
+			  ret);
 	}
 
 	local_irq_restore(flags);
@@ -509,7 +511,7 @@ static void kvm_setup_pv_ipi(void)
 {
 	apic->send_IPI_mask = kvm_send_ipi_mask;
 	apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
-	pr_info("KVM setup pv IPIs\n");
+	pr_info("setup PV IPIs\n");
 }
 
 static void kvm_smp_send_call_func_ipi(const struct cpumask *mask)
@@ -631,11 +633,11 @@ static void __init kvm_guest_init(void)
 	    !kvm_para_has_hint(KVM_HINTS_REALTIME) &&
 	    kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
 		smp_ops.send_call_func_ipi = kvm_smp_send_call_func_ipi;
-		pr_info("KVM setup pv sched yield\n");
+		pr_info("setup PV sched yield\n");
 	}
 	if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/kvm:online",
 				      kvm_cpu_online, kvm_cpu_down_prepare) < 0)
-		pr_err("kvm_guest: Failed to install cpu hotplug callbacks\n");
+		pr_err("failed to install cpu hotplug callbacks\n");
 #else
 	sev_map_percpu_data();
 	kvm_guest_cpu_init();
@@ -738,7 +740,7 @@ static __init int kvm_setup_pv_tlb_flush(void)
 			zalloc_cpumask_var_node(per_cpu_ptr(&__pv_tlb_mask, cpu),
 				GFP_KERNEL, cpu_to_node(cpu));
 		}
-		pr_info("KVM setup pv remote TLB flush\n");
+		pr_info("setup PV remote TLB flush\n");
 	}
 
 	return 0;
@@ -866,8 +868,8 @@ static void kvm_enable_host_haltpoll(void *i)
 void arch_haltpoll_enable(unsigned int cpu)
 {
 	if (!kvm_para_has_feature(KVM_FEATURE_POLL_CONTROL)) {
-		pr_err_once("kvm: host does not support poll control\n");
-		pr_err_once("kvm: host upgrade recommended\n");
+		pr_err_once("host does not support poll control\n");
+		pr_err_once("host upgrade recommended\n");
 		return;
 	}
 
-- 
1.8.3.1



* [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
@ 2019-10-21  9:11 ` Zhenzhong Duan
  2019-10-22 11:36   ` Vitaly Kuznetsov
  2019-10-21  9:11 ` [PATCH v7 4/5] xen: Mark "xen_nopvspin" parameter obsolete Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 5/5] x86/hyperv: Mark "hv_nopvspin" " Zhenzhong Duan
  4 siblings, 1 reply; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan, Jonathan Corbet,
	H. Peter Anvin

There are cases where a guest wants to switch spinlocks to bare-metal
behavior (e.g. by setting "xen_nopvspin" on Xen or "hv_nopvspin" on
Hyper-V).

That capability is missing on KVM, so add a new parameter "nopvspin"
to disable PV spinlocks for KVM guests.

The new "nopvspin" parameter will also replace the Xen- and
Hyper-V-specific parameters in future patches.

Define the variable nopvspin as global because it will be used by
those future patches.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 Documentation/admin-guide/kernel-parameters.txt |  5 ++++
 arch/x86/include/asm/qspinlock.h                |  1 +
 arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++-----
 kernel/locking/qspinlock.c                      |  7 +++++
 4 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a84a83f..bd49ed2 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5334,6 +5334,11 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
+	nopvspin	[X86,KVM]
+			Disables the qspinlock slow path using PV optimizations
+			which allow the hypervisor to 'idle' the guest on lock
+			contention.
+
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 444d6fd..d86ab94 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
 extern void __pv_init_lock_hash(void);
 extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
 extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
+extern bool nopvspin;
 
 #define	queued_spin_unlock queued_spin_unlock
 /**
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 249f14a..3945aa5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
  */
 void __init kvm_spinlock_init(void)
 {
-	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+	/*
+	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+	 * preferred over native qspinlock when vCPU is preempted.
+	 */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+		pr_info("PV spinlocks disabled, no host support.\n");
 		return;
+	}
 
+	/*
+	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
+	 * are available.
+	 */
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
-		static_branch_disable(&virt_spin_lock_key);
-		return;
+		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
+		goto out;
 	}
 
-	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1)
-		return;
+	if (num_possible_cpus() == 1) {
+		pr_info("PV spinlocks disabled, single CPU.\n");
+		goto out;
+	}
+
+	if (nopvspin) {
+		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
+		goto out;
+	}
+
+	pr_info("PV spinlocks enabled\n");
 
 	__pv_init_lock_hash();
 	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
@@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
 		pv_ops.lock.vcpu_is_preempted =
 			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
 	}
+out:
+	static_branch_disable(&virt_spin_lock_key);
 }
 
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 2473f10..75193d6 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 #include "qspinlock_paravirt.h"
 #include "qspinlock.c"
 
+bool nopvspin __initdata;
+static __init int parse_nopvspin(char *arg)
+{
+	nopvspin = true;
+	return 0;
+}
+early_param("nopvspin", parse_nopvspin);
 #endif
-- 
1.8.3.1



* [PATCH v7 4/5] xen: Mark "xen_nopvspin" parameter obsolete
  2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
                   ` (2 preceding siblings ...)
  2019-10-21  9:11 ` [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
@ 2019-10-21  9:11 ` Zhenzhong Duan
  2019-10-21  9:11 ` [PATCH v7 5/5] x86/hyperv: Mark "hv_nopvspin" " Zhenzhong Duan
  4 siblings, 0 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan, Jonathan Corbet,
	Stefano Stabellini, H. Peter Anvin

Map "xen_nopvspin" to "nopvspin" and fix the stale description of
"xen_nopvspin", as we use qspinlock now.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 7 ++++---
 arch/x86/xen/spinlock.c                         | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index bd49ed2..85059dd 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5307,8 +5307,9 @@
 			panic() code such as dumping handler.
 
 	xen_nopvspin	[X86,XEN]
-			Disables the ticketlock slowpath using Xen PV
-			optimizations.
+			Disables the qspinlock slowpath using Xen PV optimizations.
+			This parameter is obsoleted by "nopvspin" parameter, which
+			has equivalent effect for XEN platform.
 
 	xen_nopv	[X86]
 			Disables the PV optimizations forcing the HVM guest to
@@ -5334,7 +5335,7 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
-	nopvspin	[X86,KVM]
+	nopvspin	[X86,XEN,KVM]
 			Disables the qspinlock slow path using PV optimizations
 			which allow the hypervisor to 'idle' the guest on lock
 			contention.
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 6deb490..799f4eb 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -114,9 +114,8 @@ void xen_uninit_lock_cpu(int cpu)
  */
 void __init xen_init_spinlocks(void)
 {
-
 	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
-	if (num_possible_cpus() == 1)
+	if (num_possible_cpus() == 1 || nopvspin)
 		xen_pvspin = false;
 
 	if (!xen_pvspin) {
@@ -137,6 +136,7 @@ void __init xen_init_spinlocks(void)
 
 static __init int xen_parse_nopvspin(char *arg)
 {
+	pr_notice("\"xen_nopvspin\" is deprecated, please use \"nopvspin\" instead\n");
 	xen_pvspin = false;
 	return 0;
 }
-- 
1.8.3.1



* [PATCH v7 5/5] x86/hyperv: Mark "hv_nopvspin" parameter obsolete
  2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
                   ` (3 preceding siblings ...)
  2019-10-21  9:11 ` [PATCH v7 4/5] xen: Mark "xen_nopvspin" parameter obsolete Zhenzhong Duan
@ 2019-10-21  9:11 ` Zhenzhong Duan
  4 siblings, 0 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-21  9:11 UTC (permalink / raw)
  To: linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	vkuznets, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Zhenzhong Duan, Jonathan Corbet,
	H. Peter Anvin

Map "hv_nopvspin" to "nopvspin".

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sasha Levin <sashal@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 6 +++++-
 arch/x86/hyperv/hv_spinlock.c                   | 4 ++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 85059dd..78648bb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1436,6 +1436,10 @@
 	hv_nopvspin	[X86,HYPER_V] Disables the paravirt spinlock optimizations
 				      which allow the hypervisor to 'idle' the
 				      guest on lock contention.
+				      This parameter is obsoleted by "nopvspin"
+				      parameter, which has equivalent effect for
+				      HYPER_V platform.
+
 
 	keep_bootcon	[KNL]
 			Do not unregister boot console at start. This is only
@@ -5335,7 +5339,7 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
-	nopvspin	[X86,XEN,KVM]
+	nopvspin	[X86,XEN,KVM,HYPER_V]
 			Disables the qspinlock slow path using PV optimizations
 			which allow the hypervisor to 'idle' the guest on lock
 			contention.
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index 07f21a0..47c7d6c 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -64,6 +64,9 @@ __visible bool hv_vcpu_is_preempted(int vcpu)
 
 void __init hv_init_spinlocks(void)
 {
+	if (nopvspin)
+		hv_pvspin = false;
+
 	if (!hv_pvspin || !apic ||
 	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
 	    !(ms_hyperv.features & HV_X64_MSR_GUEST_IDLE_AVAILABLE)) {
@@ -82,6 +85,7 @@ void __init hv_init_spinlocks(void)
 
 static __init int hv_parse_nopvspin(char *arg)
 {
+	pr_notice("\"hv_nopvspin\" is deprecated, please use \"nopvspin\" instead\n");
 	hv_pvspin = false;
 	return 0;
 }
-- 
1.8.3.1



* Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-21  9:11 ` [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
@ 2019-10-22 11:36   ` Vitaly Kuznetsov
  2019-10-22 12:46     ` Zhenzhong Duan
  0 siblings, 1 reply; 13+ messages in thread
From: Vitaly Kuznetsov @ 2019-10-22 11:36 UTC (permalink / raw)
  To: Zhenzhong Duan, linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, peterz, will,
	linux-hyperv, kvm, mikelley, kys, haiyangz, sthemmin, sashal,
	Zhenzhong Duan, Jonathan Corbet, H. Peter Anvin

Zhenzhong Duan <zhenzhong.duan@oracle.com> writes:

> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" on XEN platform and
> "hv_nopvspin" on HYPER_V).
>
> That feature is missed on KVM, add a new parameter "nopvspin" to disable
> PV spinlocks for KVM guest.
>
> The new 'nopvspin' parameter will also replace Xen and Hyper-V specific
> parameters in future patches.
>
> Define variable nopvsin as global because it will be used in future
> patches as above.
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krcmar <rkrcmar@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  5 ++++
>  arch/x86/include/asm/qspinlock.h                |  1 +
>  arch/x86/kernel/kvm.c                           | 34 ++++++++++++++++++++-----
>  kernel/locking/qspinlock.c                      |  7 +++++
>  4 files changed, 40 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index a84a83f..bd49ed2 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -5334,6 +5334,11 @@
>  			as generic guest with no PV drivers. Currently support
>  			XEN HVM, KVM, HYPER_V and VMWARE guest.
>  
> +	nopvspin	[X86,KVM]
> +			Disables the qspinlock slow path using PV optimizations
> +			which allow the hypervisor to 'idle' the guest on lock
> +			contention.
> +
>  	xirc2ps_cs=	[NET,PCMCIA]
>  			Format:
>  			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 444d6fd..d86ab94 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -32,6 +32,7 @@ static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lo
>  extern void __pv_init_lock_hash(void);
>  extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>  extern void __raw_callee_save___pv_queued_spin_unlock(struct qspinlock *lock);
> +extern bool nopvspin;
>  
>  #define	queued_spin_unlock queued_spin_unlock
>  /**
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 249f14a..3945aa5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>   */
>  void __init kvm_spinlock_init(void)
>  {
> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> +	/*
> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> +	 * preferred over native qspinlock when vCPU is preempted.
> +	 */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support.\n");
>  		return;
> +	}
>  
> +	/*
> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> +	 * are available.
> +	 */
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> -		static_branch_disable(&virt_spin_lock_key);
> -		return;
> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> +		goto out;
>  	}
>  
> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> -	if (num_possible_cpus() == 1)
> -		return;
> +	if (num_possible_cpus() == 1) {
> +		pr_info("PV spinlocks disabled, single CPU.\n");
> +		goto out;
> +	}
> +
> +	if (nopvspin) {
> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> +		goto out;
> +	}
> +
> +	pr_info("PV spinlocks enabled\n");
>  
>  	__pv_init_lock_hash();
>  	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>  		pv_ops.lock.vcpu_is_preempted =
>  			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>  	}
> +out:
> +	static_branch_disable(&virt_spin_lock_key);

You probably need to add a 'return' before 'out:', as it seems you're
disabling virt_spin_lock_key in all cases now.

>  }
>  
>  #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2473f10..75193d6 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -580,4 +580,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  #include "qspinlock_paravirt.h"
>  #include "qspinlock.c"
>  
> +bool nopvspin __initdata;
> +static __init int parse_nopvspin(char *arg)
> +{
> +	nopvspin = true;
> +	return 0;
> +}
> +early_param("nopvspin", parse_nopvspin);
>  #endif

-- 
Vitaly


* Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-22 11:36   ` Vitaly Kuznetsov
@ 2019-10-22 12:46     ` Zhenzhong Duan
  2019-10-22 13:11       ` Vitaly Kuznetsov
  2019-10-22 21:03       ` Sean Christopherson
  0 siblings, 2 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-22 12:46 UTC (permalink / raw)
  To: Vitaly Kuznetsov, linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, peterz, will,
	linux-hyperv, kvm, mikelley, kys, haiyangz, sthemmin, sashal,
	Jonathan Corbet, H. Peter Anvin

Hi Vitaly,

On 2019/10/22 19:36, Vitaly Kuznetsov wrote:

> Zhenzhong Duan<zhenzhong.duan@oracle.com>  writes:
>
...snip

>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 249f14a..3945aa5 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>    */
>>   void __init kvm_spinlock_init(void)
>>   {
>> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> +	/*
>> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>> +	 * preferred over native qspinlock when vCPU is preempted.
>> +	 */
>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>> +		pr_info("PV spinlocks disabled, no host support.\n");
>>   		return;
>> +	}
>>   
>> +	/*
>> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>> +	 * are available.
>> +	 */
>>   	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>> -		static_branch_disable(&virt_spin_lock_key);
>> -		return;
>> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>> +		goto out;
>>   	}
>>   
>> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
>> -	if (num_possible_cpus() == 1)
>> -		return;
>> +	if (num_possible_cpus() == 1) {
>> +		pr_info("PV spinlocks disabled, single CPU.\n");
>> +		goto out;
>> +	}
>> +
>> +	if (nopvspin) {
>> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>> +		goto out;
>> +	}
>> +
>> +	pr_info("PV spinlocks enabled\n");
>>   
>>   	__pv_init_lock_hash();
>>   	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>>   		pv_ops.lock.vcpu_is_preempted =
>>   			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>>   	}
>> +out:
>> +	static_branch_disable(&virt_spin_lock_key);
> You probably need to add 'return' before 'out:' as it seems you're
> disabling virt_spin_lock_key in all cases now).

virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
case, which is the only case where the virt_spin_lock() optimization is used.

When PV qspinlock is enabled, virt_spin_lock() isn't called in
__pv_queued_spin_lock_slowpath(), in which case we don't care about
virt_spin_lock_key's value.

So adding a 'return' or not are both OK; I chose to save a line.
Let me know if you prefer to add a 'return' and I'll change it.

btw: __pv_queued_spin_lock_slowpath() is an alias of queued_spin_lock_slowpath()

Thanks
Zhenzhong



* Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-22 12:46     ` Zhenzhong Duan
@ 2019-10-22 13:11       ` Vitaly Kuznetsov
  2019-10-22 21:03       ` Sean Christopherson
  1 sibling, 0 replies; 13+ messages in thread
From: Vitaly Kuznetsov @ 2019-10-22 13:11 UTC (permalink / raw)
  To: Zhenzhong Duan, linux-kernel
  Cc: tglx, mingo, bp, x86, pbonzini, rkrcmar, sean.j.christopherson,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, peterz, will,
	linux-hyperv, kvm, mikelley, kys, haiyangz, sthemmin, sashal,
	Jonathan Corbet, H. Peter Anvin

Zhenzhong Duan <zhenzhong.duan@oracle.com> writes:

> Hi Vitaly,
>
> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
>
>> Zhenzhong Duan<zhenzhong.duan@oracle.com>  writes:
>>
> ...snip
>
>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>> index 249f14a..3945aa5 100644
>>> --- a/arch/x86/kernel/kvm.c
>>> +++ b/arch/x86/kernel/kvm.c
>>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>>    */
>>>   void __init kvm_spinlock_init(void)
>>>   {
>>> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>>> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>>> +	/*
>>> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>>> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>>> +	 * preferred over native qspinlock when vCPU is preempted.
>>> +	 */
>>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>>> +		pr_info("PV spinlocks disabled, no host support.\n");
>>>   		return;
>>> +	}
>>>   
>>> +	/*
>>> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>>> +	 * are available.
>>> +	 */
>>>   	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>> -		static_branch_disable(&virt_spin_lock_key);
>>> -		return;
>>> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>>> +		goto out;
>>>   	}
>>>   
>>> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
>>> -	if (num_possible_cpus() == 1)
>>> -		return;
>>> +	if (num_possible_cpus() == 1) {
>>> +		pr_info("PV spinlocks disabled, single CPU.\n");
>>> +		goto out;
>>> +	}
>>> +
>>> +	if (nopvspin) {
>>> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>>> +		goto out;
>>> +	}
>>> +
>>> +	pr_info("PV spinlocks enabled\n");
>>>   
>>>   	__pv_init_lock_hash();
>>>   	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>>>   		pv_ops.lock.vcpu_is_preempted =
>>>   			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>>>   	}
>>> +out:
>>> +	static_branch_disable(&virt_spin_lock_key);
>> You probably need to add 'return' before 'out:' as it seems you're
>> disabling virt_spin_lock_key in all cases now).
>
> virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
> case, which is the only case where the virt_spin_lock() optimization is used.
>
> When PV qspinlock is enabled, virt_spin_lock() isn't called in
> __pv_queued_spin_lock_slowpath(), in which case we don't care about
> virt_spin_lock_key's value.
>

True, my bad: I thought we still needed it enabled for something.

> So adding a 'return' or not are both OK; I chose to save a line.
> Let me know if you prefer to add a 'return' and I'll change it.

No, please ignore.

>
> btw: __pv_queued_spin_lock_slowpath() is an alias of queued_spin_lock_slowpath()
>
> Thanks
> Zhenzhong
>

-- 
Vitaly


* Re: [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format
  2019-10-21  9:11 ` [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
@ 2019-10-22 21:01   ` Sean Christopherson
  2019-10-23  1:29     ` Zhenzhong Duan
  0 siblings, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2019-10-22 21:01 UTC (permalink / raw)
  To: Zhenzhong Duan
  Cc: linux-kernel, tglx, mingo, bp, x86, pbonzini, rkrcmar, vkuznets,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, peterz, will,
	linux-hyperv, kvm, mikelley, kys, haiyangz, sthemmin, sashal,
	H. Peter Anvin

On Mon, Oct 21, 2019 at 05:11:13PM +0800, Zhenzhong Duan wrote:
> pr_*() is preferred over printk(KERN_* ...); after this change, all prints
> in arch/x86/kernel/kvm.c will have the "kvm_guest: xxx" style.
> 
> No functional change.
> 
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krcmar <rkrcmar@redhat.com>
> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> ---
>  arch/x86/kernel/kvm.c | 30 ++++++++++++++++--------------
>  1 file changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 3bc6a266..249f14a 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -7,6 +7,8 @@
>   *   Authors: Anthony Liguori <aliguori@us.ibm.com>
>   */
>  
> +#define pr_fmt(fmt) "kvm_guest: " fmt

Sort of a silly nit, especially since I suggested kvm_guest...

What about using kvm-guest instead of kvm_guest, to be consistent with
kvm-clock, the other prolific logger in a KVM guest?

E.g.

  kvm-clock: cpu 1, msr 551e041, secondary cpu clock
  kvm-guest: setup async PF for cpu 1
  kvm-guest: stealtime: cpu 1, msr 277695f40
  kvm-clock: cpu 2, msr 551e081, secondary cpu clock
  kvm-guest: setup async PF for cpu 2
  kvm-guest: stealtime: cpu 2, msr 277715f40
  kvm-clock: cpu 3, msr 551e0c1, secondary cpu clock
  kvm-guest: setup async PF for cpu 3
  kvm-guest: stealtime: cpu 3, msr 277795f40
  kvm-clock: cpu 4, msr 551e101, secondary cpu clock
  
instead of

  kvm-clock: cpu 1, msr 551e041, secondary cpu clock
  kvm_guest: setup async PF for cpu 1
  kvm_guest: stealtime: cpu 1, msr 277695f40
  kvm-clock: cpu 2, msr 551e081, secondary cpu clock
  kvm_guest: setup async PF for cpu 2
  kvm_guest: stealtime: cpu 2, msr 277715f40
  kvm-clock: cpu 3, msr 551e0c1, secondary cpu clock
  kvm_guest: setup async PF for cpu 3
  kvm_guest: stealtime: cpu 3, msr 277795f40
  kvm-clock: cpu 4, msr 551e101, secondary cpu clock


* Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-22 12:46     ` Zhenzhong Duan
  2019-10-22 13:11       ` Vitaly Kuznetsov
@ 2019-10-22 21:03       ` Sean Christopherson
  2019-10-23  1:36         ` Zhenzhong Duan
  1 sibling, 1 reply; 13+ messages in thread
From: Sean Christopherson @ 2019-10-22 21:03 UTC (permalink / raw)
  To: Zhenzhong Duan
  Cc: Vitaly Kuznetsov, linux-kernel, tglx, mingo, bp, x86, pbonzini,
	rkrcmar, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Jonathan Corbet, H. Peter Anvin

On Tue, Oct 22, 2019 at 08:46:46PM +0800, Zhenzhong Duan wrote:
> Hi Vitaly,
> 
> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
> 
> >Zhenzhong Duan<zhenzhong.duan@oracle.com>  writes:
> >
> ...snip
> 
> >>diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> >>index 249f14a..3945aa5 100644
> >>--- a/arch/x86/kernel/kvm.c
> >>+++ b/arch/x86/kernel/kvm.c
> >>@@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
> >>   */
> >>  void __init kvm_spinlock_init(void)
> >>  {
> >>-	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
> >>-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> >>+	/*
> >>+	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> >>+	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> >>+	 * preferred over native qspinlock when vCPU is preempted.
> >>+	 */
> >>+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> >>+		pr_info("PV spinlocks disabled, no host support.\n");
> >>  		return;
> >>+	}
> >>+	/*
> >>+	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
> >>+	 * are available.
> >>+	 */
> >>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
> >>-		static_branch_disable(&virt_spin_lock_key);
> >>-		return;
> >>+		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
> >>+		goto out;
> >>  	}
> >>-	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> >>-	if (num_possible_cpus() == 1)
> >>-		return;
> >>+	if (num_possible_cpus() == 1) {
> >>+		pr_info("PV spinlocks disabled, single CPU.\n");
> >>+		goto out;
> >>+	}
> >>+
> >>+	if (nopvspin) {
> >>+		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
> >>+		goto out;
> >>+	}
> >>+
> >>+	pr_info("PV spinlocks enabled\n");
> >>  	__pv_init_lock_hash();
> >>  	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
> >>@@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
> >>  		pv_ops.lock.vcpu_is_preempted =
> >>  			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
> >>  	}
> >>+out:
> >>+	static_branch_disable(&virt_spin_lock_key);
> >You probably need to add 'return' before 'out:' as it seems you're
> >disabling virt_spin_lock_key in all cases now).
> 
> virt_spin_lock_key is kept enabled in !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
> case which is the only case virt_spin_lock() optimization is used.
> 
> When PV qspinlock is enabled, virt_spin_lock() isn't called in
> __pv_queued_spin_lock_slowpath() in which case we don't care
> virt_spin_lock_key's value.
> 
> So adding 'return' or not are both ok, I chosed to save a line,
> let me know if you prefer to add a 'return' and I'll change it.

It'd be worth adding a comment here if you end up spinning another version
to change the logging prefix.  The logic is sound and I like the end
result, but I had the same knee-jerk "this can't be right!?!?" reaction as
Vitaly.


* Re: [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format
  2019-10-22 21:01   ` Sean Christopherson
@ 2019-10-23  1:29     ` Zhenzhong Duan
  0 siblings, 0 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-23  1:29 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: linux-kernel, tglx, mingo, bp, x86, pbonzini, rkrcmar, vkuznets,
	wanpengli, jmattson, joro, boris.ostrovsky, jgross, peterz, will,
	linux-hyperv, kvm, mikelley, kys, haiyangz, sthemmin, sashal,
	H. Peter Anvin


On 2019/10/23 5:01, Sean Christopherson wrote:
> On Mon, Oct 21, 2019 at 05:11:13PM +0800, Zhenzhong Duan wrote:
>> pr_*() is preferred over printk(KERN_* ...); after this change, all prints
>> in arch/x86/kernel/kvm.c will have the "kvm_guest: xxx" style.
>>
>> No functional change.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
>> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Radim Krcmar <rkrcmar@redhat.com>
>> Cc: Sean Christopherson <sean.j.christopherson@intel.com>
>> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Cc: Wanpeng Li <wanpengli@tencent.com>
>> Cc: Jim Mattson <jmattson@google.com>
>> Cc: Joerg Roedel <joro@8bytes.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Borislav Petkov <bp@alien8.de>
>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>> ---
>>   arch/x86/kernel/kvm.c | 30 ++++++++++++++++--------------
>>   1 file changed, 16 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 3bc6a266..249f14a 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -7,6 +7,8 @@
>>    *   Authors: Anthony Liguori <aliguori@us.ibm.com>
>>    */
>>   
>> +#define pr_fmt(fmt) "kvm_guest: " fmt
> Sort of a silly nit, especially since I suggested kvm_guest...
>
> What about using kvm-guest instead of kvm_guest to be consistent with
> kvm-clock, the other prolific logger in a KVM guest.
>
> E.g.
>
>    kvm-clock: cpu 1, msr 551e041, secondary cpu clock
>    kvm-guest: setup async PF for cpu 1
>    kvm-guest: stealtime: cpu 1, msr 277695f40
>    kvm-clock: cpu 2, msr 551e081, secondary cpu clock
>    kvm-guest: setup async PF for cpu 2
>    kvm-guest: stealtime: cpu 2, msr 277715f40
>    kvm-clock: cpu 3, msr 551e0c1, secondary cpu clock
>    kvm-guest: setup async PF for cpu 3
>    kvm-guest: stealtime: cpu 3, msr 277795f40
>    kvm-clock: cpu 4, msr 551e101, secondary cpu clock
>    
> instead of
>
>    kvm-clock: cpu 1, msr 551e041, secondary cpu clock
>    kvm_guest: setup async PF for cpu 1
>    kvm_guest: stealtime: cpu 1, msr 277695f40
>    kvm-clock: cpu 2, msr 551e081, secondary cpu clock
>    kvm_guest: setup async PF for cpu 2
>    kvm_guest: stealtime: cpu 2, msr 277715f40
>    kvm-clock: cpu 3, msr 551e0c1, secondary cpu clock
>    kvm_guest: setup async PF for cpu 3
>    kvm_guest: stealtime: cpu 3, msr 277795f40
>    kvm-clock: cpu 4, msr 551e101, secondary cpu clock

Good suggestion, will do; thanks for pointing it out.

Zhenzhong



* Re: [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks
  2019-10-22 21:03       ` Sean Christopherson
@ 2019-10-23  1:36         ` Zhenzhong Duan
  0 siblings, 0 replies; 13+ messages in thread
From: Zhenzhong Duan @ 2019-10-23  1:36 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Vitaly Kuznetsov, linux-kernel, tglx, mingo, bp, x86, pbonzini,
	rkrcmar, wanpengli, jmattson, joro, boris.ostrovsky, jgross,
	peterz, will, linux-hyperv, kvm, mikelley, kys, haiyangz,
	sthemmin, sashal, Jonathan Corbet, H. Peter Anvin


On 2019/10/23 5:03, Sean Christopherson wrote:
> On Tue, Oct 22, 2019 at 08:46:46PM +0800, Zhenzhong Duan wrote:
>> Hi Vitaly,
>>
>> On 2019/10/22 19:36, Vitaly Kuznetsov wrote:
>>
>>> Zhenzhong Duan<zhenzhong.duan@oracle.com>  writes:
>>>
>> ...snip
>>
>>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>>>> index 249f14a..3945aa5 100644
>>>> --- a/arch/x86/kernel/kvm.c
>>>> +++ b/arch/x86/kernel/kvm.c
>>>> @@ -825,18 +825,36 @@ __visible bool __kvm_vcpu_is_preempted(long cpu)
>>>>    */
>>>>   void __init kvm_spinlock_init(void)
>>>>   {
>>>> -	/* Does host kernel support KVM_FEATURE_PV_UNHALT? */
>>>> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>>>> +	/*
>>>> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
>>>> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
>>>> +	 * preferred over native qspinlock when vCPU is preempted.
>>>> +	 */
>>>> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
>>>> +		pr_info("PV spinlocks disabled, no host support.\n");
>>>>   		return;
>>>> +	}
>>>> +	/*
>>>> +	 * Disable PV qspinlock and use native qspinlock when dedicated pCPUs
>>>> +	 * are available.
>>>> +	 */
>>>>   	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
>>>> -		static_branch_disable(&virt_spin_lock_key);
>>>> -		return;
>>>> +		pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints.\n");
>>>> +		goto out;
>>>>   	}
>>>> -	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
>>>> -	if (num_possible_cpus() == 1)
>>>> -		return;
>>>> +	if (num_possible_cpus() == 1) {
>>>> +		pr_info("PV spinlocks disabled, single CPU.\n");
>>>> +		goto out;
>>>> +	}
>>>> +
>>>> +	if (nopvspin) {
>>>> +		pr_info("PV spinlocks disabled, forced by \"nopvspin\" parameter.\n");
>>>> +		goto out;
>>>> +	}
>>>> +
>>>> +	pr_info("PV spinlocks enabled\n");
>>>>   	__pv_init_lock_hash();
>>>>   	pv_ops.lock.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>>>> @@ -849,6 +867,8 @@ void __init kvm_spinlock_init(void)
>>>>   		pv_ops.lock.vcpu_is_preempted =
>>>>   			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
>>>>   	}
>>>> +out:
>>>> +	static_branch_disable(&virt_spin_lock_key);
>>> You probably need to add 'return' before 'out:' as it seems you're
>>> disabling virt_spin_lock_key in all cases now).
>> virt_spin_lock_key is kept enabled in the !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)
>> case, which is the only case where the virt_spin_lock() optimization is used.
>>
>> When PV qspinlock is enabled, virt_spin_lock() isn't called in
>> __pv_queued_spin_lock_slowpath(), in which case we don't care about
>> virt_spin_lock_key's value.
>>
>> So adding a 'return' or not are both OK; I chose to save a line.
>> Let me know if you prefer to add a 'return' and I'll change it.
> It'd be worth adding a comment here if you end up spinning another version
> to change the logging prefix.  The logic is sound and I like the end
> result, but I had the same knee jerk "this can't be right!?!?" reaction as
> Vitaly.

Sure, will do in next version.

Thanks

Zhenzhong



end of thread, other threads:[~2019-10-23  1:39 UTC | newest]

Thread overview: 13+ messages
2019-10-21  9:11 [PATCH v7 0/5] Add a unified parameter "nopvspin" Zhenzhong Duan
2019-10-21  9:11 ` [PATCH v7 1/5] Revert "KVM: X86: Fix setup the virt_spin_lock_key before static key get initialized" Zhenzhong Duan
2019-10-21  9:11 ` [PATCH v7 2/5] x86/kvm: Change print code to use pr_*() format Zhenzhong Duan
2019-10-22 21:01   ` Sean Christopherson
2019-10-23  1:29     ` Zhenzhong Duan
2019-10-21  9:11 ` [PATCH v7 3/5] x86/kvm: Add "nopvspin" parameter to disable PV spinlocks Zhenzhong Duan
2019-10-22 11:36   ` Vitaly Kuznetsov
2019-10-22 12:46     ` Zhenzhong Duan
2019-10-22 13:11       ` Vitaly Kuznetsov
2019-10-22 21:03       ` Sean Christopherson
2019-10-23  1:36         ` Zhenzhong Duan
2019-10-21  9:11 ` [PATCH v7 4/5] xen: Mark "xen_nopvspin" parameter obsolete Zhenzhong Duan
2019-10-21  9:11 ` [PATCH v7 5/5] x86/hyperv: Mark "hv_nopvspin" " Zhenzhong Duan
