From: Liu Yuan
Subject: Re: vhost-blk development
Date: Fri, 13 Apr 2012 13:38:39 +0800
To: Michael Baysek
Cc: Stefan Hajnoczi, kvm@vger.kernel.org

On 04/12/2012 12:52 AM, Michael Baysek wrote:
> In this particular case, I did intend to deploy these instances directly to
> the ramdisk. I want to squeeze every drop of performance out of these
> instances for use cases with lots of concurrent accesses. I thought it
> would be possible to achieve improvements an order of magnitude or more
> over SSD, but it seems not to be the case (so far).

Last year I tried virtio-blk over vhost (vhost-blk), which was originally
intended to move the virtio-blk backend into the kernel to reduce system
call overhead and shorten the code path.

In your particular case (ramdisk), I think vhost-blk would show its best
performance improvement, because the biggest time hog, the physical disk
I/O, is removed from the path. At the very least it should do much better
than my last test numbers (+15% throughput and -10% latency), which were
measured on a local disk.

Unfortunately, vhost-blk was not considered useful enough at the time; the
QEMU folks thought it was better to optimize the I/O stack inside QEMU than
to set up another code path for it.

I remember developing vhost-blk against a Linux 3.0 base, so I think it
would not be hard to rebase it onto the latest kernel code or to port it
back to RHEL 6's modified 2.6.32 kernel.

Thanks,
Yuan
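
For context, below is a minimal sketch of the vhost setup pattern the reply
refers to, in which userspace hands virtqueue processing to an in-kernel
worker through a vhost char device. It is not from the original thread: it
uses the existing /dev/vhost-net control device and the VHOST_SET_OWNER /
VHOST_GET_FEATURES ioctls from <linux/vhost.h> as a stand-in, on the
assumption that a vhost-blk backend would be driven the same way through its
own (hypothetical) /dev/vhost-blk device.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

int main(void)
{
	uint64_t features;

	/* Open the vhost control device (requires the vhost_net module). */
	int fd = open("/dev/vhost-net", O_RDWR);
	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}

	/* Tie this vhost instance to the calling process and start its
	 * kernel worker thread; QEMU does the same during vhost setup. */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0) {
		perror("VHOST_SET_OWNER");
		return 1;
	}

	/* Read the virtio feature bits offered by the in-kernel backend. */
	if (ioctl(fd, VHOST_GET_FEATURES, &features) < 0) {
		perror("VHOST_GET_FEATURES");
		return 1;
	}
	printf("vhost features: 0x%llx\n", (unsigned long long)features);

	/* A complete setup would continue with VHOST_SET_MEM_TABLE and the
	 * VHOST_SET_VRING_* ioctls so that the guest's virtqueues are
	 * serviced in the kernel, without a userspace exit per request. */
	close(fd);
	return 0;
}

This only shows the control-plane handshake; the point for the thread is
that with a vhost-style backend the data path stays in the kernel, which is
where the claimed system-call savings come from.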