From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4FDEF73E.3010501@redhat.com>
Date: Mon, 18 Jun 2012 17:39:10 +0800
From: Asias He <asias@redhat.com>
To: Stefan Hajnoczi
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v2 0/3] Improve virtio-blk performance
References: <1340002390-3950-1-git-send-email-asias@redhat.com>

On 06/18/2012 05:14 PM, Stefan Hajnoczi wrote:
> On Mon, Jun 18, 2012 at 7:53 AM, Asias He <asias@redhat.com> wrote:
>> Fio test shows it gives 28%, 24%, 21%, 16% IOPS boost and 32%, 17%,
>> 21%, 16% latency improvement for sequential read/write and random
>> read/write respectively.
>
> Sounds great. What storage configuration did you use (single spinning
> disk, SSD, storage array) and are these numbers for parallel I/O or
> sequential I/O?

I used a ramdisk as the backend storage.

> What changed since Minchan worked on this? I remember he wasn't
> satisfied that this was a clear win. Your numbers are strong, so
> either you fixed something important or you are looking at different
> benchmark configurations.

I am using the kvm tool instead of qemu. He wasn't satisfied with the
poor sequential performance. I removed the plug/unplug operations and
the bio completion batching. You can grab Minchan's patch and diff it
against this series to see the details.

Here is the fio config file:

[global]
exec_prerun="echo 3 > /proc/sys/vm/drop_caches"
group_reporting
norandommap
ioscheduler=noop
thread
bs=512
size=4MB
direct=1
filename=/dev/vdb
numjobs=256
ioengine=aio
iodepth=64
loops=3

[seq-read]
stonewall
rw=read

[seq-write]
stonewall
rw=write

[rnd-read]
stonewall
rw=randread

[rnd-write]
stonewall
rw=randwrite

--
Asias
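
For context on the change described above: the plug/unplug pair Asias
removed is the generic block-layer batching interface. Below is a
minimal sketch against the 3.4-era kernel API, not the actual patch;
submit_batched() and submit_direct() are names made up here for
illustration.

/*
 * Sketch only, assuming the 3.4-era block-layer API: per-task plugging
 * holds submitted bios and flushes them to the driver as a batch on
 * unplug, while the unplugged variant hands each bio to the driver
 * immediately.
 */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/fs.h>

static void submit_batched(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* hold bios on the current task */
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]->bi_rw, bios[i]);
	blk_finish_plug(&plug);		/* unplug: flush the whole batch */
}

static void submit_direct(struct bio **bios, int nr)
{
	int i;

	/* No plug: each bio reaches the driver as soon as it is issued. */
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]->bi_rw, bios[i]);
}

Dropping the plug trades batching for lower per-bio submission latency,
which is consistent with the sequential-I/O gains reported above.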
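To reproduce the numbers, the job file above can be saved inside the
guest (e.g. as vblk.fio, a name chosen here for illustration) and run
with "fio vblk.fio". The exec_prerun line drops the guest page cache
before the run, and /dev/vdb is assumed to be the virtio-blk disk under
test.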