From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755484AbbDIOOR (ORCPT ); Thu, 9 Apr 2015 10:14:17 -0400
Received: from casper.infradead.org ([85.118.1.10]:55234 "EHLO casper.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1753055AbbDIOOD
	(ORCPT ); Thu, 9 Apr 2015 10:14:03 -0400
Date: Thu, 9 Apr 2015 16:13:48 +0200
From: Peter Zijlstra
To: Rik van Riel
Cc: Waiman Long , Thomas Gleixner , Ingo Molnar , "H. Peter Anvin" ,
	linux-arch@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, Paolo Bonzini , Konrad Rzeszutek Wilk ,
	Boris Ostrovsky , "Paul E. McKenney" , Linus Torvalds ,
	Raghavendra K T , David Vrabel , Oleg Nesterov , Daniel J Blueman ,
	Scott J Norton , Douglas Hatch
Subject: Re: [PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
Message-ID: <20150409141348.GX5029@twins.programming.kicks-ass.net>
References: <1428517939-27968-1-git-send-email-Waiman.Long@hp.com>
	<20150409070146.GL27490@worktop.programming.kicks-ass.net>
	<55267BA8.9060009@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55267BA8.9060009@redhat.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> > On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> >> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >> will be used if PV spinlock is not configured in or the hypervisor
> >> isn't either KVM or Xen. The byte lock works fine with small guest
> >> of just a few vCPUs. On a much larger guest, however, byte lock can
> >> have serious performance problem.
> >
> > Who cares?
> There are some people out there running guests with dozens
> of vCPUs. If the code exists to make those setups run better,
> is there a good reason not to use it?

Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.

The question really is: is the added complexity worth the maintenance
burden? And I'm just not convinced !paravirt virt is a performance
critical target.

> Having said that, only KVM and Xen seem to support very
> large guests, and PV spinlock is available there.
>
> I believe both VMware and Hyperv have a 32 VCPU limit, anyway.

Don't we have Hyperv paravirt drivers? They could add support for
paravirt spinlocks too.
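[Editor's note: for readers outside the thread, the "simple unfair byte
lock" fallback being discussed is essentially a test-and-set spinlock.
Below is a minimal userspace sketch using C11 atomics rather than the
kernel's own primitives; the type and function names are illustrative,
not the kernel's.]

```c
#include <stdatomic.h>

/* Sketch of an unfair byte lock: acquisition is a bare atomic
 * test-and-set with no queueing, so whichever CPU's exchange happens
 * to land first wins, regardless of how long others have waited. */
typedef struct {
	atomic_flag locked;
} byte_lock_t;

static void byte_lock_init(byte_lock_t *l)
{
	atomic_flag_clear(&l->locked);
}

static void byte_lock(byte_lock_t *l)
{
	/* Spin until the exchange reports the flag was previously clear.
	 * A late arrival can win over a long-spinning waiter (unfair),
	 * and on a large guest every spinning vCPU burns its timeslice
	 * hammering the same cacheline. */
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;
}

static void byte_unlock(byte_lock_t *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

The lack of any queue is what makes this cheap on a few vCPUs but
pathological on many: contention shows up as cacheline bouncing plus
starvation, which is the performance problem the quoted patch
description refers to.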