Date: Fri, 8 Apr 2016 14:40:56 +0200
From: Greg Kurz
Message-ID: <20160408144056.3793c712@bahia.huguette.org>
References: <20160408101054.14b77747@bahia.huguette.org>
Subject: Re: [Qemu-devel] Virtio-9p and cgroup io-throttling
To: Pradeep Kiruvale
Cc: qemu-devel@nongnu.org, "qemu-discuss@nongnu.org"

On Fri, 8 Apr 2016 11:51:05 +0200
Pradeep Kiruvale wrote:

> Hi Greg,
>
> Thanks for your reply.
>
> Below is how I add the limit to blkio:
>
> echo "8:16 8388608" > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device
>

Ok, this just puts a limit of 8 MB/s on writes to /dev/sdb for all tasks in
the test cgroup... but what about the tasks themselves?

> The problem, I guess, is adding these task ids to the "tasks" file in the
> cgroup.
>

Exactly. :)

> These threads are started randomly, and even when I add the PIDs to the
> tasks file the cgroup still does not do IO control.
>

How did you get the PIDs? Are you sure the threads you added to the cgroup
are the ones that write to /dev/sdb?

> Is it possible to reduce the number of threads? I see a different number
> of threads doing IO on different runs.
>

AFAIK, no.

Why don't you simply start QEMU in the cgroup? Unless I am missing
something, all children threads, including the 9p ones, will then be in the
cgroup and honor the throttle settings.
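
Something along these lines should do it (untested sketch; it just reuses
the "test" cgroup, the 8:16 device numbers and the 9p options from your own
commands, so adjust paths and options as needed):

  # create the cgroup and set the write limit (8 MB/s on /dev/sdb, major:minor 8:16)
  mkdir -p /sys/fs/cgroup/blkio/test
  echo "8:16 8388608" > /sys/fs/cgroup/blkio/test/blkio.throttle.write_bps_device

  # move the current shell into the cgroup; QEMU and every thread it spawns
  # (including the 9p worker threads) will inherit it
  echo $$ > /sys/fs/cgroup/blkio/test/tasks

  qemu-system-x86_64 -enable-kvm -name vm0 ... \
      -fsdev local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate \
      -device virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1

If you have the libcgroup tools installed, "cgexec -g blkio:test
qemu-system-x86_64 ..." achieves the same without moving your shell.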
> Regards,
> Pradeep
>

Cheers.

--
Greg

> On 8 April 2016 at 10:10, Greg Kurz wrote:
>
> > On Thu, 7 Apr 2016 11:48:27 +0200
> > Pradeep Kiruvale wrote:
> >
> > > Hi All,
> > >
> > > I am using virtio-9p for sharing a file between host and guest. To test
> > > the shared file I do read/write operations in the guest. To have
> > > controlled IO, I am using cgroup blkio.
> > >
> > > While using cgroup I am facing two issues. Please find them below.
> > >
> > > 1. When I do IO throttling using the cgroup, the read throttling works
> > > fine but the write throttling does not work. It still bypasses the
> > > throttling control and runs at the default speed. Am I missing
> > > something here?
> > >
> >
> > Hi,
> >
> > Can you provide details on your blkio setup?
> >
> > > I use the following commands to create the VM, share the files and
> > > read/write from the guest.
> > >
> > > *Create vm*
> > > qemu-system-x86_64 -balloon none ....... -name vm0 -cpu host -m 128 \
> > >     -smp 1 -enable-kvm -parallel .... \
> > >     -fsdev local,id=sdb1,path=/mnt/sdb1,security_model=none,writeout=immediate \
> > >     -device virtio-9p-pci,fsdev=sdb1,mount_tag=sdb1
> > >
> > > *Mount file*
> > > mount -t 9p -o trans=virtio,version=9p2000.L sdb1 /sdb1_ext4 2>>dd.log && sync
> > >
> > > touch /sdb1_ext4/dddrive
> > >
> > > *Write test*
> > > dd if=/dev/zero of=/sdb1_ext4/dddrive bs=4k count=800000 oflag=direct >> dd.log 2>&1 && sync
> > >
> > > *Read test*
> > > dd if=/sdb1_ext4/dddrive of=/dev/null >> dd.log 2>&1 && sync
> > >
> > > 2. The other issue is that when I run "dd" inside the guest it creates
> > > multiple threads to write/read. I can see those on the host using
> > > iotop. Is this expected behavior?
> > >
> >
> > Yes. QEMU uses a thread pool to handle 9p requests.
> >
> > > Regards,
> > > Pradeep
> >
> > Cheers.
> >
> > --
> > Greg
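
PS: to double-check that the write limit is actually enforced, you can watch
the device from the host while dd runs in the guest, for example (assuming
the same "test" cgroup and /dev/sdb as above):

  iostat -d 1 /dev/sdb
  cat /sys/fs/cgroup/blkio/test/blkio.throttle.io_service_bytes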