From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Bruce Rogers" 
Subject: Re: kvm scaling question
Date: Mon, 14 Sep 2009 17:21:10 -0600
Message-ID: <4AAE7B86020000480008111D@novprvlin0050.provo.novell.com>
References: <4AAA1A0A0200004800080E06@novprvlin0050.provo.novell.com> <20090911215355.GD6244@amt.cnet> <4AAAD703.6090502@amd.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8BIT
Cc: "Thomas Friebel" ,
To: "Andre Przywara" , "Marcelo Tosatti" 
Return-path: 
In-Reply-To: <4AAAD703.6090502@amd.com>
Content-Disposition: inline
Sender: kvm-owner@vger.kernel.org
List-ID: 

On 9/11/2009 at 5:02 PM, Andre Przywara wrote:
> Marcelo Tosatti wrote:
>> On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote:
>>> I am wondering if anyone has investigated how well kvm scales when
>>> supporting many guests, or many vcpus, or both.
>>>
>>> I'll do some investigations into the per-vm memory overhead and
>>> play with bumping the max vcpu limit way beyond 16, but hopefully
>>> someone can comment on issues such as locking problems that are known
>>> to exist and need to be addressed to increase parallelism, general
>>> overhead percentages which can help set consolidation expectations,
>>> etc.
>>
>> I suppose it depends on the guest and workload. With an EPT host and
>> a 16-way Linux guest doing kernel compilations on a recent kernel, I
>> see:
>
> ...
>
>>> Also, when I did a simple experiment with vcpu overcommitment, I was
>>> surprised how quickly performance suffered (just bringing a Linux vm
>>> up), since I would have assumed the additional vcpus would have been
>>> halted the vast majority of the time. On a 2-proc box, overcommitting
>>> a guest to 8 vcpus (I know this isn't a good usage scenario, but it
>>> does provide some insight) caused the boot time to increase
>>> dramatically. At 16 vcpus, it took hours just to reach the gui login
>>> prompt.
>>
>> One probable reason for that is vcpus which hold spinlocks in the
>> guest being scheduled out in favour of vcpus which spin on that same
>> lock.
> We encountered this issue some time ago in Xen. Ticket spinlocks
> make this even worse. More detailed info can be found here:
> http://www.amd64.org/research/virtualization.html#Lock_holder_preemption
>
> Have you tried using paravirtualized spinlocks in the guest kernel?
> http://lkml.indiana.edu/hypermail/linux/kernel/0807.0/2808.html

I'll give that a try. Thanks for the tips.

Bruce
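
P.S. To make sure I understand the suggestion before I try it: as I read
it, a paravirtualized spinlock bounds the busy-wait and yields the vcpu
back to the hypervisor once the lock holder has probably been preempted
(and with ticket locks the problem compounds, since waiters must acquire
in FIFO order, so one preempted waiter at the head of the queue stalls
everyone behind it). Here is a rough sketch of the spin-then-yield idea
as I understand it - not the actual pv-ops code; the pv_spinlock type,
the SPIN_THRESHOLD value, and the sched_yield() stand-in for a real
yield-to-hypervisor hypercall are all just placeholders of mine:

/* Illustrative spin-then-yield lock -- NOT the kernel's pv-ops code. */
#include <stdatomic.h>
#include <sched.h>          /* sched_yield() stands in for a yield hypercall */

#define SPIN_THRESHOLD 1024 /* arbitrary bound, for illustration only */

struct pv_spinlock {
        atomic_flag locked; /* initialize with ATOMIC_FLAG_INIT */
};

static void pv_spin_lock(struct pv_spinlock *lock)
{
        for (;;) {
                /* Spin for a bounded number of attempts. */
                for (int i = 0; i < SPIN_THRESHOLD; i++) {
                        if (!atomic_flag_test_and_set_explicit(&lock->locked,
                                                memory_order_acquire))
                                return;  /* lock acquired */
                }
                /*
                 * Spun too long: assume the holder's vcpu was preempted.
                 * A real guest would issue a yield-to-hypervisor hypercall
                 * here; sched_yield() is only a user-space stand-in.
                 */
                sched_yield();
        }
}

static void pv_spin_unlock(struct pv_spinlock *lock)
{
        atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}

If that matches the approach in the patches you linked, it also explains
why my overcommitted boot got so slow: every contended lock whose holder
was descheduled burned whole timeslices on the spinning vcpus.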