Subject: Re: [Qemu-devel] [Qemu-discuss] iolimits for virtio-9p
From: Pradeep Kiruvale <pradeepkiruvale@gmail.com>
Date: Fri, 6 May 2016 09:39:13 +0200
To: Greg Kurz <gkurz@linux.vnet.ibm.com>
Cc: Alberto Garcia <berto@igalia.com>, qemu-devel@nongnu.org, qemu-discuss@nongnu.org
In-Reply-To: <20160506090202.51c60c84@bahia.huguette.org>
References: <20160427083840.GA27160@igalia.com>
 <20160427191215.037c4c5c@bahia.huguette.org>
 <20160502145731.66bdcf27@bahia.huguette.org>
 <20160504174033.39faaada@bahia.huguette.org>
 <20160506090202.51c60c84@bahia.huguette.org>

On 6 May 2016 at 09:02, Greg Kurz <gkurz@linux.vnet.ibm.com> wrote:
> On Fri, 6 May 2016 08:01:09 +0200
> Pradeep Kiruvale <pradeepkiruvale@gmail.com> wrote:
>
> > On 4 May 2016 at 17:40, Greg Kurz <gkurz@linux.vnet.ibm.com> wrote:
> >
> > > On Mon, 2 May 2016 17:49:26 +0200
> > > Pradeep Kiruvale <pradeepkiruvale@gmail.com> wrote:
> > >
> > > > On 2 May 2016 at 14:57, Greg Kurz <gkurz@linux.vnet.ibm.com> wrote:
> > > >
> > > > > On Thu, 28 Apr 2016 11:45:41 +0200
> > > > > Pradeep Kiruvale <pradeepkiruvale@gmail.com> wrote:
> > > > >
> > > > > > On 27 April 2016 at 19:12, Greg Kurz <gkurz@linux.vnet.ibm.com> wrote:
> > > > > >
> > > > > > > On Wed, 27 Apr 2016 16:39:58 +0200
> > > > > > > Pradeep Kiruvale <pradeepkiruvale@gmail.com> wrote:
> > > > > > >
> > > > > > > > On 27 April 2016 at 10:38, Alberto Garcia <berto@igalia.com> wrote:
> > > > > > > >
> > > > > > > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> > > > > > > > >
> > > > > > > > > > Thanks for the reply. I am still in the early phase; I will let
> > > > > > > > > > you know if any changes are needed for the APIs.
> > > > > > > > > >
> > > > > > > > > > We might also have to implement throttle-group.c for 9p devices,
> > > > > > > > > > if we want to apply throttling to a group of devices.
> > > > > > > > >
> > > > > > > > > Fair enough, but again please note that:
> > > > > > > > >
> > > > > > > > > - throttle-group.c is not meant to be generic, but it's tied to
> > > > > > > > >   BlockDriverState / BlockBackend.
> > > > > > > > > - it is currently being rewritten:
> > > > > > > > >   https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > > > > > > >
> > > > > > > > > If you can explain your use case with a bit more detail we can
> > > > > > > > > try to see what can be done about it.
> > > > > > > >
> > > > > > > > We want to use virtio-9p for block I/O instead of virtio-blk-pci.
> > > > > > > > But in case of
> > > > > > >
> > > > > > > 9p is mostly aimed at sharing files... why would you want to use it
> > > > > > > for block I/O instead of a true block device ? And how would you do
> > > > > > > that ?
> > > > > >
> > > > > > *Yes, we want to share the files themselves. So we are using virtio-9p.*
> > > > >
> > > > > You want to pass a disk image to the guest as a plain file on a 9p
> > > > > mount ? And then, what do you do in the guest ? Attach it to a loop
> > > > > device ?
> > > >
> > > > Yes, we would like to mount it as a 9p drive, create a file inside it,
> > > > and read/write that file. This is the experiment we are doing; there is
> > > > no specific use case yet. My work is a feasibility test to see whether
> > > > it works or not.
> > > >
> > > > > > *We want to have QoS on access to these files for every VM.*
> > > > >
> > > > > You won't be able to have QoS on selected files, but it may be
> > > > > possible to introduce limits at the fsdev level: control all write
> > > > > accesses to all files and all read accesses to all files for a 9p
> > > > > device.
> > > >
> > > > That is right, I do not want to have QoS for individual files but for
> > > > the whole fsdev device.
> > > >
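
For illustration, fsdev-level limits of the kind discussed above could be
expressed as throttling suboptions on -fsdev, mirroring the existing -drive
throttling options. No such options existed at the time of this thread; the
throttling.* names below are purely illustrative:

    qemu-system-x86_64 ... \
        -fsdev local,id=fsdev0,path=/srv/share,security_model=mapped-xattr,throttling.bps-write=10485760 \
        -device virtio-9p-pci,fsdev=fsdev0,mount_tag=share0
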
> > > > > >
> > > > > > >
> > > > > > > > virtio-9p we can just use fsdev devices, so we want to apply
> > > > > > > > throttling (QoS) on these devices, and as of now I/O throttling
> > > > > > > > is only possible with the -drive option.
> > > > > > >
> > > > > > > Indeed.
> > > > > > >
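
For reference, the -drive throttling mentioned above is configured through
the throttling.* suboptions of -drive (the values here are only an example):

    qemu-system-x86_64 ... \
        -drive file=disk.qcow2,if=none,id=drive0,throttling.iops-total=200,throttling.bps-write=10485760 \
        -device virtio-blk-pci,drive=drive0
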
> > > > > > > > As a workaround we are doing the throttling using cgroups. It
> > > > > > > > has its own costs.
> > > > > > >
> > > > > > > Can you elaborate ?
> > > > > >
> > > > > > *We saw that we need to create and configure the cgroups, and we
> > > > > > also observed a lot of iowait compared to implementing the
> > > > > > throttling inside QEMU.*
> > > > > > *This we observed by using the virtio-blk-pci devices. (Using
> > > > > > cgroups vs QEMU throttling.)*
> > > > > >
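
A typical cgroup-v1 blkio setup of the kind described above looks roughly
like the following, assuming the backing device is /dev/sdb (major:minor
8:16) and $QEMU_PID is the QEMU process; paths and values are illustrative:

    mkdir /sys/fs/cgroup/blkio/vm1
    echo "8:16 10485760" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.write_bps_device
    echo "$QEMU_PID" > /sys/fs/cgroup/blkio/vm1/tasks
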
> > > > >
> > > > > Just to be sure I get it right.
> > > > >
> > > > > You tried both:
> > > > > 1) run QEMU with -device virtio-blk-pci and -drive throttling.
> > > > > 2) run QEMU with -device virtio-blk-pci in its own cgroup
> > > > >
> > > > > And 1) has better performance and is easier to use than 2) ?
> > > > >
> > > > > And what do you expect with 9p compared to 1) ?
> > > >
> > > > That was just to understand the CPU cost of I/O throttling inside QEMU
> > > > vs using cgroups.
> > > >
> > > > We did the benchmarking to reproduce the numbers and understand the
> > > > cost mentioned in
> > > > http://www.linux-kvm.org/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
> > > >
> > > > Thanks,
> > > > Pradeep
> > >
> > > Ok. So you did compare current QEMU block I/O throttling with cgroups ?
> > > And you observed numbers similar to the link above ?
> >
> > *Yes, I did. I ran the dd command in the guest to do I/O. Recent QEMU is
> > on par with cgroups in terms of CPU utilization.*
> >
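
The exact dd invocation is not given in the thread; a typical run for this
kind of test would be something like the following (the mount point and
sizes are illustrative):

    dd if=/dev/zero of=/mnt/9pshare/testfile bs=1M count=1024 oflag=direct
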
> > > And now you would like to run the same test on a file in a 9p mount
> > > with experimental 9p QoS ?
> > >
> > > *Yes, you are right.*
> >
> > > Maybe possible to reuse the throttle.h API and hack v9fs_write() and
> > > v9fs_read() in 9p.c then.
> > >
> > *OK, I am looking into it. Are there any sample test cases or something
> > about how to apply the throttling APIs to a device?*
> >
>
> The throttling API is currently only used by block devices, and the only
> documentation out there is the code itself...
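
A minimal sketch of what such a hook could look like, assuming the
util/throttle.c API roughly as found in QEMU 2.6 (ThrottleState,
ThrottleTimers, throttle_schedule_timer(), throttle_account()); the
FsThrottle structure and fsdev_co_throttle_request() are illustrative names,
not existing code, and signatures may differ between QEMU versions:

    /* Illustrative only -- not existing QEMU code. */
    typedef struct FsThrottle {
        ThrottleState ts;
        ThrottleTimers tt;
        ThrottleConfig cfg;
        CoQueue throttled_reqs[2];          /* [0] reads, [1] writes */
    } FsThrottle;

    /* Would be called from v9fs_read()/v9fs_write() before doing the I/O. */
    static void coroutine_fn fsdev_co_throttle_request(FsThrottle *fst,
                                                       bool is_write,
                                                       uint64_t bytes)
    {
        if (throttle_enabled(&fst->cfg)) {
            /* Over the limit: queue the request until the throttle timer fires. */
            if (throttle_schedule_timer(&fst->ts, &fst->tt, is_write) ||
                !qemu_co_queue_empty(&fst->throttled_reqs[is_write])) {
                qemu_co_queue_wait(&fst->throttled_reqs[is_write]);
            }
            /* Charge this request's bytes against the leaky bucket. */
            throttle_account(&fst->ts, is_write, bytes);
        }
    }
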

Thanks, I will have a look and get back to you if I have any further
questions regarding this.

Regards,
Pradeep

> >
> > Regards,
> > Pradeep
> >
> > > Cheers.
> > >
> > > --
> > > Greg
> > >
> > > > > > Thanks,
> > > > > > Pradeep
