From: Paolo Bonzini
Date: Sun, 17 Aug 2014 07:29:50 +0200
Subject: Re: [Qemu-devel] [PATCH v1 00/17] dataplane: optimization and multi virtqueue support
To: Kevin Wolf
Cc: tom.leiming@gmail.com, Ming Lei, Fam Zheng, qemu-devel, Stefan Hajnoczi
Message-ID: <53F03DCE.3090909@redhat.com>
In-Reply-To: <53EE6A45.9010400@redhat.com>

On 15/08/2014 22:15, Paolo Bonzini wrote:
>> >                 | Random throughput | Sequential throughput
>> > ----------------+-------------------+-----------------------
>> > master          | 442 MB/s          | 730 MB/s
>> > base            | 453 MB/s          | 757 MB/s
>> > bypass (Ming)   | 461 MB/s          | 734 MB/s
>> > coroutine       | 468 MB/s          | 716 MB/s
>> > bypass (Paolo)  | 476 MB/s          | 682 MB/s
>
> This is pretty large, but it really smells like either a setup problem
> or a kernel bug...

Thinking more about the I/O scheduler, it could simply be that faster
I/O = less coalescing = more bios actually reaching the driver = lower
throughput.  It should be possible to find out whether this is true
using blktrace.

(The reason why sequential I/O is faster is coalescing in the I/O
scheduler.)

Paolo
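
For reference, a minimal sketch of a blktrace run that could be used to
check the merge hypothesis; the device name /dev/vda and the 30-second
capture window are illustrative placeholders, not values taken from the
thread:

    # capture block-layer events on the disk under test for 30 seconds
    # (/dev/vda is a placeholder; point it at the virtio-blk device)
    blktrace -d /dev/vda -o trace -w 30

    # decode the trace; the summary at the end reports how many requests
    # were queued (Q events), merged (M events) and dispatched to the
    # driver (D events)
    blkparse -i trace | tail -n 40

If coalescing is really the difference, the merge counters in the
blkparse summary should be noticeably higher, relative to the number of
dispatches, for the slower configurations than for the faster ones.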