* [PATCH] x86/kvm: Don't use pvqspinlock code if only 1 vCPU
From: Waiman Long @ 2018-07-17 21:59 UTC
  To: Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, kvm, linux-kernel, Joe Mario, Waiman Long

On a VM with only 1 vCPU, the locking fast path will always succeed.
In this case, there is no need to use the PV qspinlock code, which has
higher overhead on the unlock side than the native qspinlock code.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 arch/x86/kernel/kvm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 5b2300b..575c9a5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
 	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
 		return;
 
+	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
+	if (num_possible_cpus() == 1)
+		return;
+
 	__pv_init_lock_hash();
 	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
-- 
1.8.3.1
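
The unlock-side overhead mentioned in the commit message comes from the
PV unlock path needing an atomic cmpxchg so it can detect (and kick, via
a hypercall) a halted waiter vCPU, whereas the native unlock is a single
release store. A simplified sketch of the two paths, with names
following kernel/locking/qspinlock_paravirt.h but not verbatim kernel
source:

	/* Native unlock: one release store of the locked byte. */
	static inline void native_unlock_sketch(struct qspinlock *lock)
	{
		smp_store_release(&lock->locked, 0);
	}

	/*
	 * PV unlock: an atomic cmpxchg, plus a slow path that kicks a
	 * halted waiter vCPU with a hypercall when the lock word shows
	 * one is parked.
	 */
	static inline void pv_unlock_sketch(struct qspinlock *lock)
	{
		u8 locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);

		if (likely(locked == _Q_LOCKED_VAL))
			return;		/* no waiter was parked */

		__pv_queued_spin_unlock_slowpath(lock, locked);
	}

Note that the patch checks num_possible_cpus() rather than
num_online_cpus(): the fast path is only guaranteed to always succeed
if a second vCPU can never be hot-added later.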



* Re: [PATCH] x86/kvm: Don't use pvqspinlock code if only 1 vCPU
From: Paolo Bonzini @ 2018-07-18 11:51 UTC
  To: Waiman Long, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: x86, kvm, linux-kernel, Joe Mario

On 17/07/2018 23:59, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always succeed.
> In this case, there is no need to use the PV qspinlock code, which has
> higher overhead on the unlock side than the native qspinlock code.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>  
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
> 

Queued, thanks.

Paolo


* Re: [PATCH] x86/kvm: Don't use pvqspinlock code if only 1 vCPU
From: Konrad Rzeszutek Wilk @ 2018-07-19  1:15 UTC
  To: Waiman Long
  Cc: Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, kvm,
	linux-kernel, Joe Mario

On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast path will always succeed.
> In this case, there is no need to use the PV qspinlock code, which has
> higher overhead on the unlock side than the native qspinlock code.

Why not make this global? That is, for both KVM and Xen, and any
other virtualized guest that uses this?

> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  arch/x86/kernel/kvm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 5b2300b..575c9a5 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -748,6 +748,10 @@ void __init kvm_spinlock_init(void)
>  	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
>  		return;
>  
> +	/* Don't use the pvqspinlock code if there is only 1 vCPU. */
> +	if (num_possible_cpus() == 1)
> +		return;
> +
>  	__pv_init_lock_hash();
>  	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
> -- 
> 1.8.3.1
> 


* Re: [PATCH] x86/kvm: Don't use pvqspinlock code if only 1 vCPU
From: Waiman Long @ 2018-07-19 13:34 UTC
  To: Konrad Rzeszutek Wilk
  Cc: Paolo Bonzini, Radim Krčmář,
	Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, kvm,
	linux-kernel, Joe Mario

On 07/18/2018 09:15 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 17, 2018 at 05:59:27PM -0400, Waiman Long wrote:
>> On a VM with only 1 vCPU, the locking fast path will always succeed.
>> In this case, there is no need to use the PV qspinlock code, which has
>> higher overhead on the unlock side than the native qspinlock code.
> Why not make this global? That is, for both KVM and Xen, and any
> other virtualized guest that uses this?

Right, I will send another patch for Xen; a sketch of what such a
change could look like follows below. The pvqspinlock code has to be
explicitly opted in. Right now, both Xen and KVM use it in the tree. I
am not sure about out-of-tree modules; there is nothing I can do for
those.

Cheers,
Longman
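
For reference, a hypothetical sketch of the analogous Xen-side check
(function and variable names follow arch/x86/xen/spinlock.c; this is
an illustration, not the patch that was eventually posted):

	void __init xen_init_spinlocks(void)
	{
		/* Don't use the pvqspinlock code if there is only 1 vCPU. */
		if (num_possible_cpus() == 1)
			xen_pvspin = false;

		if (!xen_pvspin) {
			printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
			return;
		}
		/* ... existing pv_lock_ops setup continues below ... */
	}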

