Message-ID: <4F314F2C.4040100@codemonkey.ws>
Date: Tue, 07 Feb 2012 10:19:56 -0600
From: Anthony Liguori
To: Avi Kivity
CC: Rob Earhart, linux-kernel, KVM list, qemu-devel
Subject: Re: [Qemu-devel] [RFC] Next gen kvm api
In-Reply-To: <4F314B2A.4000709@redhat.com>
References: <4F2AB552.2070909@redhat.com> <4F2E80A7.5040908@redhat.com> <4F3025FB.1070802@codemonkey.ws> <4F31132F.3010100@redhat.com> <4F31408F.80901@codemonkey.ws> <4F314B2A.4000709@redhat.com>

On 02/07/2012 10:02 AM, Avi Kivity wrote:
> On 02/07/2012 05:17 PM, Anthony Liguori wrote:
>> On 02/07/2012 06:03 AM, Avi Kivity wrote:
>>> On 02/06/2012 09:11 PM, Anthony Liguori wrote:
>>>>
>>>> I'm not so sure.  ioeventfds and a future mmio-over-socketpair have to
>>>> put the kthread to sleep while it waits for the other end to process
>>>> it.  This is effectively equivalent to a heavyweight exit.  The
>>>> difference in cost is dropping to userspace, which is really
>>>> negligible these days (< 100 cycles).
>>>
>>> On what machine did you measure these wonderful numbers?
>>
>> A syscall is what I mean by "dropping to userspace", not the cost of a
>> heavyweight exit.
>
> Ah.  But then ioeventfd has that as well, unless the other end is in the
> kernel too.
Yes, that was my point exactly :-)

ioeventfd/mmio-over-socketpair to a different thread is not faster than a
synchronous KVM_RUN + writing to an eventfd in userspace, modulo a couple
of cheap syscalls.  The exception is when the other end is in the kernel
and there are magic optimizations (like there are today with ioeventfd).

Regards,

Anthony Liguori

>
>> I think a heavyweight exit is still around a few thousand cycles.
>>
>> Any Nehalem-class or better processor should have a syscall cost of
>> around that, unless I'm wildly mistaken.
>
> That's what I remember too.
>
>>>
>>> But I agree a heavyweight exit is probably faster than a double context
>>> switch on a remote core.
>>
>> I meant, if you already need to take a heavyweight exit (and you do, to
>> schedule something else on the core), then the only additional cost is
>> taking a syscall return to userspace *first* before scheduling another
>> process.  That overhead is pretty low.
>
> Yeah.