From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhi Yong Wu
Subject: [Qemu-devel][RFC]QEMU disk I/O limits
Date: Mon, 30 May 2011 13:09:23 +0800
Message-ID: <20110530050923.GF18832@f12.cn.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kwolf@redhat.com, vgoyal@redhat.com, guijianfeng@cn.fujitsu.com, herbert@gondor.apana.org.au, stefanha@linux.vnet.ibm.com, aliguori@us.ibm.com, raharper@us.ibm.com, luowenj@cn.ibm.com, wuzhy@cn.ibm.com, zhanx@cn.ibm.com, zhaoyang@cn.ibm.com, llim@redhat.com
To: qemu-devel@nongnu.org, kvm@vger.kernel.org
Return-path:
Received: from e28smtp04.in.ibm.com ([122.248.162.4]:59407 "EHLO e28smtp04.in.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751026Ab1E3FKy (ORCPT ); Mon, 30 May 2011 01:10:54 -0400
Received: from d28relay05.in.ibm.com (d28relay05.in.ibm.com [9.184.220.62]) by e28smtp04.in.ibm.com (8.14.4/8.13.1) with ESMTP id p4U5Apml009302 for ; Mon, 30 May 2011 10:40:51 +0530
Received: from d28av01.in.ibm.com (d28av01.in.ibm.com [9.184.220.63]) by d28relay05.in.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id p4U5AnJk2924546 for ; Mon, 30 May 2011 10:40:49 +0530
Received: from d28av01.in.ibm.com (loopback [127.0.0.1]) by d28av01.in.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP id p4U5Amla017759 for ; Mon, 30 May 2011 10:40:49 +0530
Content-Disposition: inline
Sender: kvm-owner@vger.kernel.org
List-ID:

Hello, all,

I plan to work on a feature called "Disk I/O limits" for the qemu-kvm project. This feature will enable the user to cap the amount of disk I/O performed by a VM. It is important when storage resources are shared among multiple VMs: if some VMs are doing excessive disk I/O, they will hurt the performance of the other VMs.

More detail is available here: http://wiki.qemu.org/Features/DiskIOLimits

1.) Why we need per-drive disk I/O limits

As you know, on Linux the cgroup blkio-controller already supports I/O throttling on block devices.
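For reference, the kernel-side mechanism mentioned above is configured by writing to the blkio cgroup's throttle files. A minimal sketch (the cgroup mount point, device numbers, and the "vm1" group name are assumptions for illustration; this requires root):

```shell
# Throttle reads on /dev/sda (major:minor 8:0) to 1 MB/s and 200 IOPS
# for tasks placed in a hypothetical "vm1" blkio cgroup.
mkdir -p /sys/fs/cgroup/blkio/vm1
echo "8:0 1048576" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_bps_device
echo "8:0 200"     > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_iops_device
echo $QEMU_PID     > /sys/fs/cgroup/blkio/vm1/tasks
```

Note that this only works for host block devices, which is exactly why it cannot cover image files, NFS, or Ceph.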
However, there is no single mechanism for disk I/O throttling across all underlying storage types (image file, LVM, NFS, Ceph), and for some types there is no way to throttle at all.

The disk I/O limits feature introduces QEMU block layer I/O limits, together with command-line and QMP interfaces for configuring them. This allows I/O limits to be imposed across all underlying storage types through a single interface.

2.) How disk I/O limits will be implemented

The QEMU block layer will introduce a per-drive disk I/O request queue for disks whose "disk I/O limits" feature is enabled. Limits can be controlled individually for each disk when multiple disks are attached to a VM, enabling use cases such as unlimited local disk access combined with rate-limited shared storage access.

In a multiple-I/O-threads scenario, when an application in a VM issues a block I/O request, the request is intercepted by the QEMU block layer, which calculates the disk's runtime I/O rate and determines whether the request would exceed its limits. If so, the request is put on the per-drive queue; otherwise it is serviced.

3.) How the users enable and play with it

The QEMU -drive option will be extended so that disk I/O limits can be specified on the command line, e.g.

    -drive [iops=xxx,][throughput=xxx]

or

    -drive [iops_rd=xxx,][iops_wr=xxx,][throughput=xxx]

When these arguments are specified, the "disk I/O limits" feature is enabled for that drive. The feature will also provide users with the ability to change per-drive disk I/O limits at runtime using QMP commands.
Regards,

Zhiyong Wu