From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeremy Fitzhardinge
Subject: Re: [PATCH RFC V4 0/5] kvm : Paravirt-spinlock support for KVM guests
Date: Tue, 17 Jan 2012 10:59:03 +1100
Message-ID: <4F14B9C7.9090709@goop.org>
References: <20120114182501.8604.68416.sendpatchset@oc5400248562.ibm.com>
 <3EC1B881-0724-49E3-B892-F40BEB07D15D@suse.de>
 <03D10A71-19F8-4278-B7A4-3F618ED6ECF0@goop.org>
 <4F13E613.7090602@redhat.com>
In-Reply-To: <4F13E613.7090602@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Avi Kivity
Cc: Raghavendra K T, linux-doc@vger.kernel.org, Peter Zijlstra, Jan Kiszka,
 Virtualization, Paul Mackerras, "H. Peter Anvin", Stefano Stabellini, Xen,
 Dave Jiang, KVM, Glauber Costa, X86, Ingo Molnar, Rik van Riel,
 Konrad Rzeszutek Wilk, Greg Kroah-Hartman, Sasha Levin, Sedat Dilek,
 Thomas Gleixner, LKML, Dave Hansen, Suzuki Poulose, S
Sender: virtualization-bounces@lists.linux-foundation.org
Errors-To: virtualization-bounces@lists.linux-foundation.org
List-Id: kvm.vger.kernel.org

On 01/16/2012 07:55 PM, Avi Kivity wrote:
> On 01/16/2012 08:40 AM, Jeremy Fitzhardinge wrote:
>>> That means we're spinning for n cycles, then notify the spinlock
>>> holder that we'd like to get kicked and go sleeping. While I'm pretty
>>> sure that it improves the situation, it doesn't solve all of the
>>> issues we have.
>>>
>>> Imagine we have an idle host. All vcpus can freely run and everyone
>>> can fetch the lock as fast as on real machines. We don't need to /
>>> want to go to sleep here. Locks that take too long are bugs that need
>>> to be solved on real hw just as well, so all we do is possibly incur
>>> overhead.
>> I'm not quite sure what your concern is.  The lock is under contention,
>> so there's nothing to do except spin; all this patch adds is a variable
>> decrement/test to the spin loop, but that's not going to waste any more
>> CPU than the non-counting case.  And once it falls into the blocking
>> path, it's a win because the VCPU isn't burning CPU any more.
> The wakeup path is slower though.  The previous lock holder has to
> hypercall, and the new lock holder has to be scheduled, and transition
> from halted state to running (a vmentry).  So it's only a clear win if
> we can do something with the cpu other than go into the idle loop.

Not burning power is a win too.

Actually, what you want is something like "if you preempt a VCPU while
it's spinning in a lock, then push it into the slowpath and don't
reschedule it without a kick".  But I think that interface would have a
lot of fiddly corners.
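To make the spin-then-block tradeoff concrete, here's a rough user-space
sketch of the scheme under discussion.  SPIN_THRESHOLD, the lock layout
and the wait_for_kick()/kick_waiter() primitives are illustrative
stand-ins (on KVM they'd correspond to a halt and a kick hypercall),
not the interface the actual patch series defines:

    /*
     * Illustrative sketch only: spin for a bounded number of
     * iterations, then mark the slowpath and block until kicked.
     * The unlock side only pays for the (slow) kick when a waiter
     * actually went to sleep.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define SPIN_THRESHOLD (1 << 11)  /* spins before blocking */

    struct pv_lock {
            atomic_flag locked;
            atomic_bool slowpath;     /* a waiter blocked; kick on unlock */
    };

    /* Stand-ins for the paravirt halt/kick primitives. */
    extern void wait_for_kick(struct pv_lock *lock);
    extern void kick_waiter(struct pv_lock *lock);

    static void pv_lock_acquire(struct pv_lock *lock)
    {
            for (;;) {
                    /* Fast path: spin a bounded number of times. */
                    for (int i = 0; i < SPIN_THRESHOLD; i++) {
                            if (!atomic_flag_test_and_set_explicit(
                                            &lock->locked,
                                            memory_order_acquire))
                                    return;
                    }
                    /* Contended too long: note it and block until kicked. */
                    atomic_store(&lock->slowpath, true);
                    wait_for_kick(lock);
            }
    }

    static void pv_lock_release(struct pv_lock *lock)
    {
            atomic_flag_clear_explicit(&lock->locked, memory_order_release);
            /* Only hypercall when someone is actually asleep. */
            if (atomic_exchange(&lock->slowpath, false))
                    kick_waiter(lock);
    }

On an idle host the inner loop almost always succeeds, so the only added
cost over a plain spinlock is the counter; the hypercall and vmentry
costs Avi mentions are confined to the slowpath.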