From: Raghavendra K T
Subject: Re: [PATCH 0/9] qspinlock stuff -v15
Date: Fri, 27 Mar 2015 12:10:48 +0530
Message-ID: <5514FB70.4010600__22429.140067152$1427438460$gmane$org@linux.vnet.ibm.com>
In-Reply-To: <20150316131613.720617163@infradead.org>
To: Peter Zijlstra
Cc: Waiman.Long@hp.com, linux-arch@vger.kernel.org, riel@redhat.com, kvm@vger.kernel.org, scott.norton@hp.com, x86@kernel.org, paolo.bonzini@gmail.com, oleg@redhat.com, linux-kernel@vger.kernel.org, mingo@redhat.com, david.vrabel@citrix.com, hpa@zytor.com, luto@amacapital.net, xen-devel@lists.xenproject.org, tglx@linutronix.de, paulmck@linux.vnet.ibm.com, torvalds@linux-foundation.org, boris.ostrovsky@oracle.com, virtualization@lists.linux-foundation.org, doug.hatch@hp.com
List-Id: xen-devel@lists.xenproject.org

On 03/16/2015 06:46 PM, Peter Zijlstra wrote:
> Hi Waiman,
>
> As promised; here is the paravirt stuff I did during the trip to BOS last week.
>
> All the !paravirt patches are more or less the same as before (the only real
> change is the copyright lines in the first patch).
>
> The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
> convoluted and I've no real way to test that, but it should be straightforward
> to make work.
>
> I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
> overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
> it both booted and survived a hackbench run (perf bench sched messaging -g 20
> -l 5000).
>
> So while the paravirt code isn't the most optimal code ever conceived, it does work.
>
> Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
> for the native case, which should greatly reduce the cost of having
> CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.
>
> I feel that if someone were to do a Xen patch we can go ahead and merge this
> stuff (finally!).
>
> These patches do not implement the paravirt spinlock debug stats currently
> implemented (separately) by KVM and Xen, but that should not be too hard to do
> on top and in the 'generic' code -- no reason to duplicate all that.
>
> Of course; once this lands people can look at improving the paravirt nonsense.
Last time I had reported some hangs in the KVM case, and I can confirm that
the current set of patches works fine.

Feel free to add
Tested-by: Raghavendra K T #kvm pv

As far as performance is concerned (on my 16-core + HT machine running
16-vcpu guests, both with and without the lfsr hash patchset), I do not have
any significant results to report, though I understand that we could see a
much larger benefit with a large number of vcpus because of the possible
reduction in cache-line bouncing.
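
As a side note on the "movb $0, %arg1" patching mentioned above: on x86 the
native unlock fast path is just a release store of zero to the locked byte,
which is why the indirect paravirt call site can be patched down to a single
byte move on bare metal. A minimal sketch of that idea (simplified stand-in
names here, not the actual kernel code from the patchset):

/*
 * Minimal sketch, not the kernel implementation: the low byte of the
 * lock word holds the "locked" flag, so the native unlock is a single
 * release store of 0 -- which is what "movb $0, (%rdi)" does once the
 * paravirt call site has been patched for the native case.
 */
#include <stdint.h>

struct qspinlock_sketch {
        uint32_t val;                   /* bits 0..7: locked byte */
};

static inline void native_unlock_sketch(struct qspinlock_sketch *lock)
{
        /* Release store of 0 to the locked byte. */
        __atomic_store_n((uint8_t *)&lock->val, (uint8_t)0, __ATOMIC_RELEASE);
}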