Date: Thu, 6 Jan 2011 20:00:31 +0200
From: "Michael S. Tsirkin"
To: Stefan Hajnoczi
Cc: Khoa Huynh, qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH v5 0/4] virtio: Use ioeventfd for virtqueue notify
Message-ID: <20110106180031.GB28917@redhat.com>

On Thu, Jan 06, 2011 at 04:41:50PM +0000, Stefan Hajnoczi wrote:
> Here are 4k sequential read results (cache=none) to check whether we
> see an ioeventfd performance regression with virtio-blk.
>
> The idea is to use a small blocksize with an I/O pattern (sequential
> reads) that is cheap and executes quickly. Therefore we're doing many
> iops, and the cost of the virtqueue kick/notify is especially important.
> We're not trying to stress the disk; we're trying to make the
> difference between ioeventfd=on and ioeventfd=off apparent.
>
> I did 2 runs each for ioeventfd=off and ioeventfd=on. The results are
> similar: 1% and 2% degradation in MB/s and iops. We'd have to do more
> runs to see whether the degradation is statistically significant, but
> the percentage is so low that I'm satisfied.
>
> Are you happy to merge virtio-ioeventfd v6 + your fixups?

BTW, if you could do some migration stress-testing too, that would be
nice. autotest has support for it now.

-- 
MST
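
[Editorial note: the thread does not include Stefan's actual benchmark
configuration. As a sketch only, the workload he describes (4k sequential
reads, many iops, guest cache bypassed) could look roughly like the fio
job below; the device path, runtime, and ioengine are illustrative
assumptions, as is the QEMU command fragment in the comments.]

```ini
# Host side (illustrative QEMU fragment; toggle ioeventfd between runs):
#   -drive file=/path/to/disk.img,if=none,cache=none,id=drive0
#   -device virtio-blk-pci,drive=drive0,ioeventfd=off   (or ioeventfd=on)

# Guest side: cheap, high-iops 4k sequential read job
[global]
ioengine=libaio
direct=1           ; bypass the guest page cache
rw=read            ; sequential reads
bs=4k              ; small block size -> many virtqueue kicks
runtime=60
time_based=1

[seqread]
filename=/dev/vdb  ; assumed virtio-blk test disk
iodepth=1
```

The comparison is then simply the MB/s and iops reported by the guest
for the ioeventfd=off run versus the ioeventfd=on run.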