* [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch
@ 2012-10-12 17:34 Edivaldo de Araujo Pereira
  2012-10-15  8:58 ` Stefan Hajnoczi
                   ` (3 more replies)
  0 siblings, 4 replies; 28+ messages in thread
From: Edivaldo de Araujo Pereira @ 2012-10-12 17:34 UTC (permalink / raw)
  To: qemu-devel

Public bug reported:

I've noticed a significant network performance regression when using
vde_switch, starting about one week ago (2012-10-05); before that date,
I used to get about 1.5 Gbit/s host-to-guest, but now I can only get
about 320 Mbit/s. I didn't find any modification in net/vde.*, just in
hw/virtio*.

My command line: 
 qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
  -localtime -ctrl-grab -usbdevice tablet \
  -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
  -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix

My host runs kernel 3.6.1 and my guest runs kernel 3.5.4; the same
problem happens with other host and guest versions, too.

I know there are better ways of running a guest, but using vde I get a
cleaner environment on the host (just one tun/tap interface to
manage...), which is quite good when running some academic experiments.

Interestingly, at the same time I've noticed a performance improvement
of about 25-30% when using a tun/tap interface, bridged or not.

Thank you very much.

Edivaldo de Araujo Pereira

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1066055

Title:
  Network performance regression with vde_switch

Status in QEMU:
  New

Bug description:
  I've noticed a significant network performance regression when using
  vde_switch, starting about one week ago (2012-10-05); before that
  date, I used to get about 1.5 Gbit/s host-to-guest, but now I can
  only get about 320 Mbit/s. I didn't find any modification in
  net/vde.*, just in hw/virtio*.

  My command line:
   qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
    -localtime -ctrl-grab -usbdevice tablet \
    -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
    -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix

  My host runs kernel 3.6.1 and my guest runs kernel 3.5.4; the same
  problem happens with other host and guest versions, too.

  I know there are better ways of running a guest, but using vde I get
  a cleaner environment on the host (just one tun/tap interface to
  manage...), which is quite good when running some academic
  experiments.

  Interestingly, at the same time I've noticed a performance
  improvement of about 25-30% when using a tun/tap interface, bridged
  or not.

  Thank you very much.

  Edivaldo de Araujo Pereira

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1066055/+subscriptions


* Re: [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch
  2012-10-12 17:34 [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch Edivaldo de Araujo Pereira
@ 2012-10-15  8:58 ` Stefan Hajnoczi
  2012-10-15 21:46 ` [Qemu-devel] [Bug 1066055] " Edivaldo de Araujo Pereira
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-10-15  8:58 UTC (permalink / raw)
  To: Bug 1066055; +Cc: qemu-devel

On Fri, Oct 12, 2012 at 05:34:23PM -0000, Edivaldo de Araujo Pereira wrote:
> I've noticed a significant network performance regression when using
> vde_switch, starting about one week ago (2012-10-05); before that date,
> I used to get about 1.5 Gbit/s host-to-guest, but now I can only get
> about 320 Mbit/s. I didn't find any modification in net/vde.*, just in
> hw/virtio*.
> 
> My command line:
>  qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
>   -localtime -ctrl-grab -usbdevice tablet \
>   -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
>   -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix
> 
> My host runs kernel 3.6.1 and my guest runs kernel 3.5.4; the same
> problem happens with other host and guest versions, too.
> 
> I know there are better ways of running a guest, but using vde I get a
> cleaner environment on the host (just one tun/tap interface to
> manage...), which is quite good when running some academic experiments.
> 
> Interestingly, at the same time I've noticed a performance improvement
> of about 25-30% when using a tun/tap interface, bridged or not.

Hi Edivaldo,
It would be great if you could help find the commit that caused this
regression.

The basic process is:

1. Identify a QEMU release or git tree that gives you 1.5 Gbit/s.
2. Double-check that qemu.git/master suffers reduced performance.
3. git bisect start <bad> <good>
   where <bad> and <good> are the git commits that show differing
   performance (for example, bad=HEAD good=v1.1.0)

Then git will step through the commit history and ask you to test at
each step.  (This is a binary search, so even finding regressions that
happened many commits ago requires only a few steps.)

You can read more about git-bisect(1) here:
http://git-scm.com/book/en/Git-Tools-Debugging-with-Git#Binary-Search
http://www.kernel.org/pub/software/scm/git/docs/git-bisect.html
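
For concreteness, a minimal sketch of what such a bisect session could
look like (the endpoints and the benchmark step here are illustrative
assumptions, not commands taken from this report):

  # mark the endpoints: current master is bad, v1.1.0 was good
  git bisect start HEAD v1.1.0
  # git checks out a revision half-way; build and benchmark it
  make -j4
  # ...run the host-to-guest throughput test (wget, iperf, ...) here...
  git bisect good        # or 'git bisect bad', depending on the result
  # repeat until git prints "<sha1> is the first bad commit", then
  git bisect reset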

The end result is the commit that introduced the regression.  Please
post what you find!

Stefan


* [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-12 17:34 [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch Edivaldo de Araujo Pereira
  2012-10-15  8:58 ` Stefan Hajnoczi
@ 2012-10-15 21:46 ` Edivaldo de Araujo Pereira
  2012-10-16  7:48   ` Stefan Hajnoczi
  2012-11-01 11:42   ` [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch Michael S. Tsirkin
  2012-10-16 12:23 ` Edivaldo de Araujo Pereira
  2018-01-10 21:28 ` Thomas Huth
  3 siblings, 2 replies; 28+ messages in thread
From: Edivaldo de Araujo Pereira @ 2012-10-15 21:46 UTC (permalink / raw)
  To: qemu-devel

Hi Stefan,

Thank you very much for taking the time to help me, and excuse me for
not seeing your answer earlier...

I've run the procedure you pointed me to, and the result is:

0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
Author: Amit Shah <amit.shah@redhat.com>
Date:   Tue Sep 25 00:05:15 2012 +0530

    virtio: Introduce virtqueue_get_avail_bytes()

    The current virtqueue_avail_bytes() is oddly named, and checks if a
    particular number of bytes are available in a vq.  A better API is to
    fetch the number of bytes available in the vq, and let the caller do
    what's interesting with the numbers.

    Introduce virtqueue_get_avail_bytes(), which returns the number of bytes
    for buffers marked for both, in as well as out.  virtqueue_avail_bytes()
    is made a wrapper over this new function.

    Signed-off-by: Amit Shah <amit.shah@redhat.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

:040000 040000 1a58b06a228651cf844621d9ee2f49b525e36c93
e09ea66ce7f6874921670b6aeab5bea921a5227d M      hw

I tried to revert that patch in the latest version, but it obviously
didn't work; I'm trying to figure out the problem, but I don't know the
source code very well, so I think it's going to take some time. For now,
it's all I could do.

Thank you again.
Edivaldo



* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-15 21:46 ` [Qemu-devel] [Bug 1066055] " Edivaldo de Araujo Pereira
@ 2012-10-16  7:48   ` Stefan Hajnoczi
  2012-10-22 11:18     ` Amit Shah
  2012-11-01 11:42   ` [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch Michael S. Tsirkin
  1 sibling, 1 reply; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-10-16  7:48 UTC (permalink / raw)
  To: Bug 1066055; +Cc: qemu-devel

On Mon, Oct 15, 2012 at 09:46:06PM -0000, Edivaldo de Araujo Pereira wrote:
> Hi Stefan,
> 
> Thank you very much for taking the time to help me, and excuse me for
> not seeing your answer earlier...
> 
> I've run the procedure you pointed me to, and the result is:
> 
> 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
> commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> Author: Amit Shah <amit.shah@redhat.com>
> Date:   Tue Sep 25 00:05:15 2012 +0530
> 
>     virtio: Introduce virtqueue_get_avail_bytes()
> 
>     The current virtqueue_avail_bytes() is oddly named, and checks if a
>     particular number of bytes are available in a vq.  A better API is to
>     fetch the number of bytes available in the vq, and let the caller do
>     what's interesting with the numbers.
> 
>     Introduce virtqueue_get_avail_bytes(), which returns the number of bytes
>     for buffers marked for both, in as well as out.  virtqueue_avail_bytes()
>     is made a wrapper over this new function.
> 
>     Signed-off-by: Amit Shah <amit.shah@redhat.com>
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> 
> :040000 040000 1a58b06a228651cf844621d9ee2f49b525e36c93
> e09ea66ce7f6874921670b6aeab5bea921a5227d M      hw
> 
> I tried to revert that patch in the latest version, but it obviously
> didn't work; I'm trying to figure out the problem, but I don't know the
> source code very well, so I think it's going to take some time. For now,
> it's all I could do.

After git-bisect(1) completes it is good to sanity-check the result by
manually testing 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f^ (the commit
just before the bad commit) and 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
(the bad commit).

This will verify that the commit indeed introduces the regression.  I
suggest doing this just to be sure that you've found the bad commit.

Regarding this commit, I notice two things:

1. We will now loop over all vring descriptors because we calculate the
   total in/out length instead of returning early as soon as we see
   there is enough space.  Maybe this makes a difference, although I'm a
   little surprised you see such a huge regression.

2. The comparison semantics have changed from:

     (in_total += vring_desc_len(desc_pa, i)) >= in_bytes

   to:

     (in_bytes && in_bytes < in_total)

   Notice that virtqueue_avail_bytes() now returns 0 when in_bytes ==
   in_total.  Previously, it would return 1.  Perhaps we are starving or
   delaying I/O due to this comparison change.  You can easily change
   '<' to '<=' to see if it fixes the issue.
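
To make the two behaviors concrete, here is a simplified, standalone
sketch of the before/after logic (plain C with invented types; the real
code walks vring descriptors via vring_desc_len() in hw/virtio.c):

  #include <stdbool.h>

  struct desc { unsigned len; bool is_input; };   /* toy descriptor */

  /* Pre-commit behavior: return as soon as enough space is seen. */
  static bool avail_old(const struct desc *d, unsigned n, unsigned in_bytes)
  {
      unsigned in_total = 0;
      for (unsigned i = 0; i < n; i++) {
          if (d[i].is_input && (in_total += d[i].len) >= in_bytes) {
              return true;                  /* early exit */
          }
      }
      return false;
  }

  /* Post-commit behavior: full walk, then a strict '<' compare. */
  static bool avail_new(const struct desc *d, unsigned n, unsigned in_bytes)
  {
      unsigned in_total = 0;
      for (unsigned i = 0; i < n; i++) {    /* always walks everything */
          if (d[i].is_input) {
              in_total += d[i].len;
          }
      }
      return in_bytes && in_bytes < in_total;   /* 0 when equal */
  }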

Stefan


* [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-12 17:34 [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch Edivaldo de Araujo Pereira
  2012-10-15  8:58 ` Stefan Hajnoczi
  2012-10-15 21:46 ` [Qemu-devel] [Bug 1066055] " Edivaldo de Araujo Pereira
@ 2012-10-16 12:23 ` Edivaldo de Araujo Pereira
  2012-10-17 13:51   ` Stefan Hajnoczi
  2018-01-10 21:28 ` Thomas Huth
  3 siblings, 1 reply; 28+ messages in thread
From: Edivaldo de Araujo Pereira @ 2012-10-16 12:23 UTC (permalink / raw)
  To: qemu-devel

Hi Stefan,

I finally managed to revert that commit in the latest snapshot; the
problem was that I also needed to revert a later commit that modified
hw/virtio-serial-bus.c accordingly. With both reverted, the regression
in network performance went away completely, which confirms my earlier
identification of the commit that caused it.

Additionally, I tested your last suggestion, to change '<' to '<=', and
that didn't help; the problem was still there.

By the way, the performance gain of about 25-30% I observed when using
a tun/tap interface was in fact only apparent: it was the result of
greater CPU use, so it showed up only when the host was idle. When I
stress the host with a lot of concurrent guests generating traffic,
there is no gain at all.

Just for confirmation, this is the performance I get with the latest
snapshot (8b4a3df8081f3e6f1061ed5cbb303ad623ade66b), running wget in
the guest:

$ wget -O /dev/null http://172.18.1.1/bpd.iso
--2012-10-16 09:10:18--  http://172.18.1.1/bpd.iso
Connecting to 172.18.1.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 358979584 (342M) [application/x-iso9660-image]
Saving to: `/dev/null'
100%[======================================================>] 358.979.584 29,7M/s   in 11s
2012-10-16 09:10:29 (30,3 MB/s) - `/dev/null' saved [358979584/358979584]

The same wget, using the same snapshot but with that commit reverted,
gives:

$ wget -O /dev/null http://172.18.1.1/bpd.iso
--2012-10-16 09:15:12--  http://172.18.1.1/bpd.iso
Connecting to 172.18.1.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 358979584 (342M) [application/x-iso9660-image]
Saving to: `/dev/null'
100%[======================================================>] 358.979.584  180M/s   in 1,9s
2012-10-16 09:15:14 (180 MB/s) - `/dev/null' saved [358979584/358979584]

So there is no doubt: that commit is the culprit. Since it was intended
only as a source-code cleanup (at least as far as I can see), but comes
at such a cost in performance, I think it would be better to revert it.

Thank you very much, again.
Edivaldo



* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-16 12:23 ` Edivaldo de Araujo Pereira
@ 2012-10-17 13:51   ` Stefan Hajnoczi
  0 siblings, 0 replies; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-10-17 13:51 UTC (permalink / raw)
  To: Amit Shah; +Cc: qemu-devel, Bug 1066055

On Tue, Oct 16, 2012 at 2:23 PM, Edivaldo de Araujo Pereira
<edivaldoapereira@yahoo.com.br> wrote:
> Hi Stefan,
>
> I finally managed to revert that commit in the latest snapshot; the
> problem was that I also needed to revert a later commit that modified
> hw/virtio-serial-bus.c accordingly. With both reverted, the regression
> in network performance went away completely, which confirms my earlier
> identification of the commit that caused it.
>
> Additionally, I tested your last suggestion, to change '<' to '<=', and
> that didn't help; the problem was still there.
>
> By the way, the performance gain of about 25-30% I observed when using
> a tun/tap interface was in fact only apparent: it was the result of
> greater CPU use, so it showed up only when the host was idle. When I
> stress the host with a lot of concurrent guests generating traffic,
> there is no gain at all.
>
> Just for confirmation, this is the performance I get with the latest
> snapshot (8b4a3df8081f3e6f1061ed5cbb303ad623ade66b), running wget in
> the guest:
>
> $ wget -O /dev/null http://172.18.1.1/bpd.iso
> --2012-10-16 09:10:18--  http://172.18.1.1/bpd.iso
> Connecting to 172.18.1.1:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 358979584 (342M) [application/x-iso9660-image]
> Saving to: `/dev/null'
> 100%[======================================================>] 358.979.584 29,7M/s   in 11s
> 2012-10-16 09:10:29 (30,3 MB/s) - `/dev/null' saved [358979584/358979584]
>
> The same wget, using the same snapshot but with that commit reverted,
> gives:
>
> $ wget -O /dev/null http://172.18.1.1/bpd.iso
> --2012-10-16 09:15:12--  http://172.18.1.1/bpd.iso
> Connecting to 172.18.1.1:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 358979584 (342M) [application/x-iso9660-image]
> Saving to: `/dev/null'
> 100%[======================================================>] 358.979.584  180M/s   in 1,9s
> 2012-10-16 09:15:14 (180 MB/s) - `/dev/null' saved [358979584/358979584]
>
> So there is no doubt: that commit is the culprit. Since it was intended
> only as a source-code cleanup (at least as far as I can see), but comes
> at such a cost in performance, I think it would be better to revert it.

Hi Amit,
Edivaldo has identified the following commit as responsible for a
network performance regression he sees:

commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
Author: Amit Shah <amit.shah@redhat.com>
Date:   Tue Sep 25 00:05:15 2012 +0530

    virtio: Introduce virtqueue_get_avail_bytes()

I guess this is because we now iterate the entire descriptor chain to
check available space instead of returning early.

Do you want to propose a patch to fix it?

Stefan


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-16  7:48   ` Stefan Hajnoczi
@ 2012-10-22 11:18     ` Amit Shah
  2012-10-22 13:50       ` Edivaldo de Araujo Pereira
  0 siblings, 1 reply; 28+ messages in thread
From: Amit Shah @ 2012-10-22 11:18 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: edivaldoapereira, qemu-devel, Bug 1066055

On (Tue) 16 Oct 2012 [09:48:09], Stefan Hajnoczi wrote:
> On Mon, Oct 15, 2012 at 09:46:06PM -0000, Edivaldo de Araujo Pereira wrote:
> > Hi Stefan,
> > 
> > Thank you very much for taking the time to help me, and excuse me for
> > not seeing your answer earlier...
> > 
> > I've run the procedure you pointed me to, and the result is:
> > 
> > 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
> > commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> > Author: Amit Shah <amit.shah@redhat.com>
> > Date:   Tue Sep 25 00:05:15 2012 +0530
> > 
> >     virtio: Introduce virtqueue_get_avail_bytes()
> > 
> >     The current virtqueue_avail_bytes() is oddly named, and checks if a
> >     particular number of bytes are available in a vq.  A better API is to
> >     fetch the number of bytes available in the vq, and let the caller do
> >     what's interesting with the numbers.
> > 
> >     Introduce virtqueue_get_avail_bytes(), which returns the number of bytes
> >     for buffers marked for both, in as well as out.  virtqueue_avail_bytes()
> >     is made a wrapper over this new function.
> > 
> >     Signed-off-by: Amit Shah <amit.shah@redhat.com>
> >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > 
> > :040000 040000 1a58b06a228651cf844621d9ee2f49b525e36c93
> > e09ea66ce7f6874921670b6aeab5bea921a5227d M      hw
> > 
> > I tried to revert that patch in the latest version, but it obviously
> > didn't work; I'm trying to figure out the problem, but I don't know the
> > source code very well, so I think it's going to take some time. For now,
> > it's all I could do.
> 
> After git-bisect(1) completes it is good to sanity-check the result by
> manually testing 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f^ (the commit
> just before the bad commit) and 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> (the bad commit).
> 
> This will verify that the commit indeed introduces the regression.  I
> suggest doing this just to be sure that you've found the bad commit.
> 
> Regarding this commit, I notice two things:
> 
> 1. We will now loop over all vring descriptors because we calculate the
>    total in/out length instead of returning early as soon as we see
>    there is enough space.  Maybe this makes a difference, although I'm a
>    little surprised you see such a huge regression.
> 
> 2. The comparison semantics have changed from:
> 
>      (in_total += vring_desc_len(desc_pa, i)) >= in_bytes
> 
>    to:
> 
>      (in_bytes && in_bytes < in_total)
> 
>    Notice that virtqueue_avail_bytes() now returns 0 when in_bytes ==
>    in_total.  Previously, it would return 1.  Perhaps we are starving or
>    delaying I/O due to this comparison change.  You can easily change
>    '<' to '<=' to see if it fixes the issue.

Hi Edivaldo,

Can you try the following patch? It will confirm whether it's the
descriptor walk or the botched compare that's causing the regression.

Thanks,

diff --git a/hw/virtio.c b/hw/virtio.c
index 6821092..bb08ed8 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -406,8 +406,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     unsigned int in_total, out_total;
 
     virtqueue_get_avail_bytes(vq, &in_total, &out_total);
-    if ((in_bytes && in_bytes < in_total)
-        || (out_bytes && out_bytes < out_total)) {
+    if ((in_bytes && in_bytes <= in_total)
+        || (out_bytes && out_bytes <= out_total)) {
         return 1;
     }
     return 0;


		Amit


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-22 11:18     ` Amit Shah
@ 2012-10-22 13:50       ` Edivaldo de Araujo Pereira
  2012-10-23 12:55         ` Stefan Hajnoczi
  0 siblings, 1 reply; 28+ messages in thread
From: Edivaldo de Araujo Pereira @ 2012-10-22 13:50 UTC (permalink / raw)
  To: Stefan Hajnoczi, Amit Shah; +Cc: qemu-devel, Bug 1066055

Dear Amit,

At Stefan's suggestion, I had already tested the modification in your
patch, and it didn't work; but for confirmation I tested it once again,
on the latest snapshot. Same result: it didn't work; the problem is
still there.

I didn't take enough time to understand the code, so unfortunately I
fear there is not much I can do to solve the problem, apart from trying
your suggestions. But I'll try to spend a little more time on it until
we find a solution.

Thank you very much.

Edivaldo

--- On Mon, 10/22/12, Amit Shah <amit.shah@redhat.com> wrote:

> From: Amit Shah <amit.shah@redhat.com>
> Subject: Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
> To: "Stefan Hajnoczi" <stefanha@gmail.com>
> Cc: "Bug 1066055" <1066055@bugs.launchpad.net>, qemu-devel@nongnu.org, edivaldoapereira@yahoo.com.br
> Date: Monday, 22 October 2012, 4:18
>
> [...]
>
> Hi Edivaldo,
> 
> Can you try the following patch? It will confirm whether it's the
> descriptor walk or the botched compare that's causing the regression.
> 
> Thanks,
> 
> diff --git a/hw/virtio.c b/hw/virtio.c
> index 6821092..bb08ed8 100644
> --- a/hw/virtio.c
> +++ b/hw/virtio.c
> @@ -406,8 +406,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>      unsigned int in_total, out_total;
>  
>      virtqueue_get_avail_bytes(vq, &in_total, &out_total);
> -    if ((in_bytes && in_bytes < in_total)
> -        || (out_bytes && out_bytes < out_total)) {
> +    if ((in_bytes && in_bytes <= in_total)
> +        || (out_bytes && out_bytes <= out_total)) {
>          return 1;
>      }
>      return 0;
> 
> 
>         Amit
>


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-22 13:50       ` Edivaldo de Araujo Pereira
@ 2012-10-23 12:55         ` Stefan Hajnoczi
  2012-11-01  9:19           ` Amit Shah
  0 siblings, 1 reply; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-10-23 12:55 UTC (permalink / raw)
  To: Amit Shah; +Cc: Edivaldo de Araujo Pereira, qemu-devel, Bug 1066055

On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> I didn't take enough time to understand the code, so unfortunately I fear there is not much I can do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it until we find a solution.

I've thought a little about how to approach this.  Amit, here's a brain
dump:

The simplest solution is to make virtqueue_avail_bytes() use the old
behavior of stopping early.

However, I wonder if we can actually *improve* performance of existing
code by changing virtio-net.c:virtio_net_receive().  The intuition is
that calling virtio_net_has_buffers() (internally calls
virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
because we're repeatedly traversing the descriptor chain.

We can get rid of this repetition.  A side-effect of this is that we no
longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
how:

The common case in virtio_net_receive() is that we have buffers and they
are large enough for the received packet.  So to optimize for this case:

1. Take the VirtQueueElement off the vring but don't increment
   last_avail_idx yet.  (This is essentially a "peek" operation.)

2. If there is an error or we drop the packet because the
   VirtQueueElement is too small, just bail out and we'll grab the same
   VirtQueueElement again next time.

3. When we've committed filling in this VirtQueueElement, increment
   last_avail_idx.  This is the point of no return.

Essentially we're splitting pop() into peek() and consume().  Peek()
grabs the VirtQueueElement but does not increment last_avail_idx.
Consume() simply increments last_avail_idx and maybe the EVENT_IDX
optimization stuff.

Whether this will improve performance, I'm not sure.  Perhaps
virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
following up with virtqueue_pop() is very cheap already.  But the idea
here is to avoid the virtio_net_has_buffers() because we'll find out
soon enough when we try to pop :).

Another approach would be to drop virtio_net_has_buffers() but continue
to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
VirtIONet across virtio_net_receive() calls in the case where we drop
the packet.  I don't like this approach very much though because it gets
tricky when the guest modifies the vring memory, resets the virtio
device, etc across calls.
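
As a rough standalone sketch of the peek/consume split described above
(the type and function names -- virtqueue_peek()/virtqueue_consume() --
are hypothetical, not QEMU's actual API):

  #include <stdbool.h>

  /* Toy model; the real VirtQueue lives in hw/virtio.c. */
  typedef struct {
      unsigned last_avail_idx;  /* next available element to consume */
      unsigned avail_idx;       /* guest-published end of the avail ring */
  } VirtQueueModel;

  /* peek(): look at the next element without claiming it. */
  static bool virtqueue_peek(VirtQueueModel *vq)
  {
      if (vq->last_avail_idx == vq->avail_idx) {
          return false;         /* ring empty */
      }
      /* ...map the descriptor chain at last_avail_idx into an element... */
      return true;
  }

  /* consume(): commit to the element peek() returned -- the point of
   * no return; would also handle the EVENT_IDX bookkeeping. */
  static void virtqueue_consume(VirtQueueModel *vq)
  {
      vq->last_avail_idx++;
  }

A receive path would then peek(), fill in the element, and only
consume() once the packet has actually been written; bailing out before
consume() means the same element is picked up again on the next call.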

Stefan


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-23 12:55         ` Stefan Hajnoczi
@ 2012-11-01  9:19           ` Amit Shah
  2012-11-01 12:04             ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Amit Shah @ 2012-11-01  9:19 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Edivaldo de Araujo Pereira, Michael S. Tsirkin, qemu-devel, Bug 1066055

On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > I didn't take enough time to understand the code, so unfortunately I fear there is not much I can do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it until we find a solution.
> 
> I've thought a little about how to approach this.  Amit, here's a brain
> dump:
> 
> The simplest solution is to make virtqueue_avail_bytes() use the old
> behavior of stopping early.
> 
> However, I wonder if we can actually *improve* performance of existing
> code by changing virtio-net.c:virtio_net_receive().  The intuition is
> that calling virtio_net_has_buffers() (internally calls
> virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> because we're repeatedly traversing the descriptor chain.
> 
> We can get rid of this repetition.  A side-effect of this is that we no
> longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> how:
> 
> The common case in virtio_net_receive() is that we have buffers and they
> are large enough for the received packet.  So to optimize for this case:
> 
> 1. Take the VirtQueueElement off the vring but don't increment
>    last_avail_idx yet.  (This is essentially a "peek" operation.)
> 
> 2. If there is an error or we drop the packet because the
>    VirtQueueElement is too small, just bail out and we'll grab the same
>    VirtQueueElement again next time.
> 
> 3. When we've committed filling in this VirtQueueElement, increment
>    last_avail_idx.  This is the point of no return.
> 
> Essentially we're splitting pop() into peek() and consume().  Peek()
> grabs the VirtQueueElement but does not increment last_avail_idx.
> Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> optimization stuff.
> 
> Whether this will improve performance, I'm not sure.  Perhaps
> virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> following up with virtqueue_pop() is very cheap already.  But the idea
> here is to avoid the virtio_net_has_buffers() because we'll find out
> soon enough when we try to pop :).

This sounds doable -- adding mst for comments.

> Another approach would be to drop virtio_net_has_buffers() but continue
> to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> VirtIONet across virtio_net_receive() calls in the case where we drop
> the packet.  I don't like this approach very much though because it gets
> tricky when the guest modifies the vring memory, resets the virtio
> device, etc across calls.

Right.

Also, save/load will become slightly complicated in both these
cases, but it might be worth it.

Michael, can you comment pls?


		Amit


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-15 21:46 ` [Qemu-devel] [Bug 1066055] " Edivaldo de Araujo Pereira
  2012-10-16  7:48   ` Stefan Hajnoczi
@ 2012-11-01 11:42   ` Michael S. Tsirkin
  1 sibling, 0 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-01 11:42 UTC (permalink / raw)
  To: Bug 1066055; +Cc: qemu-devel

On Mon, Oct 15, 2012 at 09:46:06PM -0000, Edivaldo de Araujo Pereira wrote:
> Hi Stefan,
> 
> Thank you very much for taking the time to help me, and excuse me for
> not seeing your answer earlier...
> 
> I've run the procedure you pointed me to, and the result is:
> 
> 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
> commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> Author: Amit Shah <amit.shah@redhat.com>
> Date:   Tue Sep 25 00:05:15 2012 +0530
> 
>     virtio: Introduce virtqueue_get_avail_bytes()
> 
>     The current virtqueue_avail_bytes() is oddly named, and checks if a
>     particular number of bytes are available in a vq.  A better API is to
>     fetch the number of bytes available in the vq, and let the caller do
>     what's interesting with the numbers.
> 
>     Introduce virtqueue_get_avail_bytes(), which returns the number of bytes
>     for buffers marked for both, in as well as out.  virtqueue_avail_bytes()
>     is made a wrapper over this new function.
> 
>     Signed-off-by: Amit Shah <amit.shah@redhat.com>
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> 
> :040000 040000 1a58b06a228651cf844621d9ee2f49b525e36c93
> e09ea66ce7f6874921670b6aeab5bea921a5227d M      hw
> 
> I tried to revert that patch in the latest version, but it obviously
> didn't work; I'm trying to figure out the problem, but I don't know the
> source code very well, so I think it's going to take some time. For now,
> it's all I could do.
> 
> Thank you again.
> Edivaldo


Could you please try reverting the following two patches:
git revert 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
git revert ad3005ad8c70a69705149d3ce6d1e51fb76edb15

The result will build, and testing it will confirm that this is the problem.

Amit, could you please clarify what kind of performance gain, if any,
you see from ad3005ad8c70a69705149d3ce6d1e51fb76edb15?

If it's small, maybe we should simply revert?

-- 
MST


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-11-01  9:19           ` Amit Shah
@ 2012-11-01 12:04             ` Michael S. Tsirkin
  2012-11-01 15:12               ` Amit Shah
  2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
  0 siblings, 2 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-01 12:04 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Edivaldo de Araujo Pereira

On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I can do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it until we find a solution.
> > 
> > I've thought a little about how to approach this.  Amit, here's a brain
> > dump:
> > 
> > The simplest solution is to make virtqueue_avail_bytes() use the old
> > behavior of stopping early.
> > 
> > However, I wonder if we can actually *improve* performance of existing
> > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > that calling virtio_net_has_buffers() (internally calls
> > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > because we're repeatedly traversing the descriptor chain.
> > 
> > We can get rid of this repetition.  A side-effect of this is that we no
> > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > how:
> > 
> > The common case in virtio_net_receive() is that we have buffers and they
> > are large enough for the received packet.  So to optimize for this case:
> > 
> > 1. Take the VirtQueueElement off the vring but don't increment
> >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > 
> > 2. If there is an error or we drop the packet because the
> >    VirtQueueElement is too small, just bail out and we'll grab the same
> >    VirtQueueElement again next time.
> > 
> > 3. When we've committed filling in this VirtQueueElement, increment
> >    last_avail_idx.  This is the point of no return.
> > 
> > Essentially we're splitting pop() into peek() and consume().  Peek()
> > grabs the VirtQueueElement but does not increment last_avail_idx.
> > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > optimization stuff.
> > 
> > Whether this will improve performance, I'm not sure.  Perhaps
> > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > following up with virtqueue_pop() is very cheap already.  But the idea
> > here is to avoid the virtio_net_has_buffers() because we'll find out
> > soon enough when we try to pop :).
> 
> This sounds doable -- adding mst for comments.
> 
> > Another approach would be to drop virtio_net_has_buffers() but continue
> > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > VirtIONet across virtio_net_receive() calls in the case where we drop
> > the packet.  I don't like this approach very much though because it gets
> > tricky when the guest modifies the vring memory, resets the virtio
> > device, etc across calls.
> 
> Right.
> 
> Also, save/load will become slightly complicated in both these
> cases, but it might be worth it.
> 
> Michael, can you comment pls?
> 
> 
> 		Amit

It will also complicate switching to/from vhost-net.

If this patch helps serial but degrades speed for -net, I'm inclined
to simply make serial and net use different codepaths.

-- 
MST


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-11-01 12:04             ` Michael S. Tsirkin
@ 2012-11-01 15:12               ` Amit Shah
  2012-11-01 15:50                 ` Michael S. Tsirkin
  2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
  1 sibling, 1 reply; 28+ messages in thread
From: Amit Shah @ 2012-11-01 15:12 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Edivaldo de Araujo Pereira

On (Thu) 01 Nov 2012 [14:04:11], Michael S. Tsirkin wrote:
> On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> > On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I can do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it until we find a solution.
> > > 
> > > I've thought a little about how to approach this.  Amit, here's a brain
> > > dump:
> > > 
> > > The simplest solution is to make virtqueue_avail_bytes() use the old
> > > behavior of stopping early.
> > > 
> > > However, I wonder if we can actually *improve* performance of existing
> > > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > > that calling virtio_net_has_buffers() (internally calls
> > > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > > because we're repeatedly traversing the descriptor chain.
> > > 
> > > We can get rid of this repetition.  A side-effect of this is that we no
> > > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > > how:
> > > 
> > > The common case in virtio_net_receive() is that we have buffers and they
> > > are large enough for the received packet.  So to optimize for this case:
> > > 
> > > 1. Take the VirtQueueElement off the vring but don't increment
> > >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > > 
> > > 2. If there is an error or we drop the packet because the
> > >    VirtQueueElement is too small, just bail out and we'll grab the same
> > >    VirtQueueElement again next time.
> > > 
> > > 3. When we've committed filling in this VirtQueueElement, increment
> > >    last_avail_idx.  This is the point of no return.
> > > 
> > > Essentially we're splitting pop() into peek() and consume().  Peek()
> > > grabs the VirtQueueElement but does not increment last_avail_idx.
> > > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > > optimization stuff.
> > > 
> > > Whether this will improve performance, I'm not sure.  Perhaps
> > > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > > following up with virtqueue_pop() is very cheap already.  But the idea
> > > here is to avoid the virtio_net_has_buffers() because we'll find out
> > > soon enough when we try to pop :).
> > 
> > This sounds doable -- adding mst for comments.
> > 
> > > Another approach would be to drop virtio_net_has_buffers() but continue
> > > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > > VirtIONet across virtio_net_receive() calls in the case where we drop
> > > the packet.  I don't like this approach very much though because it gets
> > > tricky when the guest modifies the vring memory, resets the virtio
> > > device, etc across calls.
> > 
> > Right.
> > 
> > Also, save/load will become slightly complicated in both these
> > cases, but it might be worth it.
> > 
> > Michael, can you comment pls?
> > 
> 
> It will also complicate switching to/from vhost-net.
> 
> If this patch helps serial but degrades speed for -net, I'm inclined
> to simply make serial and net use different codepaths.

There's an opportunity for optimisation; let's not discount it so
quickly.  The reporter also pointed out there was a ~20% difference
with tun/tap, so eliminating the call to virtqueue_avail_bytes() can
help overall.

		Amit


* Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-11-01 15:12               ` Amit Shah
@ 2012-11-01 15:50                 ` Michael S. Tsirkin
  0 siblings, 0 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-01 15:50 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Edivaldo de Araujo Pereira, qemu-devel, Bug 1066055

On Thu, Nov 01, 2012 at 08:42:50PM +0530, Amit Shah wrote:
> On (Thu) 01 Nov 2012 [14:04:11], Michael S. Tsirkin wrote:
> > On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> > > On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > > > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I can do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it until we find a solution.
> > > > 
> > > > I've thought a little about how to approach this.  Amit, here's a brain
> > > > dump:
> > > > 
> > > > The simplest solution is to make virtqueue_avail_bytes() use the old
> > > > behavior of stopping early.
> > > > 
> > > > However, I wonder if we can actually *improve* performance of existing
> > > > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > > > that calling virtio_net_has_buffers() (internally calls
> > > > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > > > because we're repeatedly traversing the descriptor chain.
> > > > 
> > > > We can get rid of this repetition.  A side-effect of this is that we no
> > > > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > > > how:
> > > > 
> > > > The common case in virtio_net_receive() is that we have buffers and they
> > > > are large enough for the received packet.  So to optimize for this case:
> > > > 
> > > > 1. Take the VirtQueueElement off the vring but don't increment
> > > >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > > > 
> > > > 2. If there is an error or we drop the packet because the
> > > >    VirtQueueElement is too small, just bail out and we'll grab the same
> > > >    VirtQueueElement again next time.
> > > > 
> > > > 3. When we've committed filling in this VirtQueueElement, increment
> > > >    last_avail_idx.  This is the point of no return.
> > > > 
> > > > Essentially we're splitting pop() into peek() and consume().  Peek()
> > > > grabs the VirtQueueElement but does not increment last_avail_idx.
> > > > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > > > optimization stuff.
> > > > 
> > > > Whether this will improve performance, I'm not sure.  Perhaps
> > > > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > > > following up with virtqueue_pop() is very cheap already.  But the idea
> > > > here is to avoid the virtio_net_has_buffers() because we'll find out
> > > > soon enough when we try to pop :).
> > > 
> > > This sounds doable -- adding mst for comments.
> > > 
> > > > Another approach would be to drop virtio_net_has_buffers() but continue
> > > > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > > > VirtIONet across virtio_net_receive() calls in the case where we drop
> > > > the packet.  I don't like this approach very much though because it gets
> > > > tricky when the guest modifies the vring memory, resets the virtio
> > > > device, etc across calls.
> > > 
> > > Right.
> > > 
> > > Also, save/load will become slightly complicated in both these
> > > cases, but it might be worth it.
> > > 
> > > Michael, can you comment pls?
> > > 
> > 
> > It will also complicate switching to/from vhost-net.
> > 
> > If this patch helps serial but degrades speed for -net, I'm inclined
> > to simply make serial and net use different codepaths.
> 
> There's an opportunity for optimisation; let's not discount it so
> quickly.  The reporter also pointed out there was a ~20% difference
> with tun/tap,

Seems to be related to host power management.

> so eliminating the call to virtqueue_avail_bytes() can
> help overall.
> 
> 		Amit


* [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-01 12:04             ` Michael S. Tsirkin
  2012-11-01 15:12               ` Amit Shah
@ 2012-11-01 16:07               ` Michael S. Tsirkin
  2012-11-02  9:56                 ` Amit Shah
                                   ` (2 more replies)
  1 sibling, 3 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-01 16:07 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Edivaldo de Araujo Pereira, qemu-devel,
	Anthony Liguori, Bug 1066055

Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
a regression in virtio-net performance because it looks
into the ring aggressively while we really only care
about a single packet worth of buffers.
To fix, add parameters limiting lookahead, and
use in virtqueue_avail_bytes.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>

---

Edivaldo, could you please confirm this fixes the regression?

diff --git a/hw/virtio-serial-bus.c b/hw/virtio-serial-bus.c
index d20bd8b..a761cdc 100644
--- a/hw/virtio-serial-bus.c
+++ b/hw/virtio-serial-bus.c
@@ -297,7 +297,7 @@ size_t virtio_serial_guest_ready(VirtIOSerialPort *port)
     if (use_multiport(port->vser) && !port->guest_connected) {
         return 0;
     }
-    virtqueue_get_avail_bytes(vq, &bytes, NULL);
+    virtqueue_get_avail_bytes(vq, &bytes, NULL, UINT_MAX, 0);
     return bytes;
 }
 
diff --git a/hw/virtio.c b/hw/virtio.c
index ec8b7d8..f40a8c5 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -336,7 +336,8 @@ static unsigned virtqueue_next_desc(hwaddr desc_pa,
 }
 
 void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
-                               unsigned int *out_bytes)
+                               unsigned int *out_bytes,
+                               unsigned max_in_bytes, unsigned max_out_bytes)
 {
     unsigned int idx;
     unsigned int total_bufs, in_total, out_total;
@@ -385,6 +386,9 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
             } else {
                 out_total += vring_desc_len(desc_pa, i);
             }
+            if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
+                goto done;
+            }
         } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
 
         if (!indirect)
@@ -392,6 +396,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
         else
             total_bufs++;
     }
+done:
     if (in_bytes) {
         *in_bytes = in_total;
     }
@@ -405,12 +410,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
 {
     unsigned int in_total, out_total;
 
-    virtqueue_get_avail_bytes(vq, &in_total, &out_total);
-    if ((in_bytes && in_bytes < in_total)
-        || (out_bytes && out_bytes < out_total)) {
-        return 1;
-    }
-    return 0;
+    virtqueue_get_avail_bytes(vq, &in_total, &out_total, in_bytes, out_bytes);
+    return in_bytes <= in_total && out_bytes <= out_total;
 }
 
 void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
diff --git a/hw/virtio.h b/hw/virtio.h
index ac482be..1278328 100644
--- a/hw/virtio.h
+++ b/hw/virtio.h
@@ -150,7 +150,8 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
 int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
                           unsigned int out_bytes);
 void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
-                               unsigned int *out_bytes);
+                               unsigned int *out_bytes,
+                               unsigned max_in_bytes, unsigned max_out_bytes);
 
 void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
 


* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
@ 2012-11-02  9:56                 ` Amit Shah
  2012-11-02 10:18                 ` Stefan Hajnoczi
  2012-11-27 16:25                 ` Michael S. Tsirkin
  2 siblings, 0 replies; 28+ messages in thread
From: Amit Shah @ 2012-11-02  9:56 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Edivaldo de Araujo Pereira, qemu-devel,
	Anthony Liguori, Bug 1066055

On (Thu) 01 Nov 2012 [18:07:21], Michael S. Tsirkin wrote:
> Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> a regression in virtio-net performance because it looks
> into the ring aggressively while we really only care
> about a single packet worth of buffers.
> To fix, add parameters limiting lookahead, and
> use in virtqueue_avail_bytes.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>

Acked-by: Amit Shah <amit.shah@redhat.com>

		Amit


* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
  2012-11-02  9:56                 ` Amit Shah
@ 2012-11-02 10:18                 ` Stefan Hajnoczi
  2012-11-02 14:48                   ` Michael S. Tsirkin
  2012-11-27 16:25                 ` Michael S. Tsirkin
  2 siblings, 1 reply; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-11-02 10:18 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Amit Shah, Edivaldo de Araujo Pereira, qemu-devel,
	Anthony Liguori, Bug 1066055

On Thu, Nov 1, 2012 at 5:07 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> a regression in virtio-net performance because it looks
> into the ring aggressively while we really only care
> about a single packet worth of buffers.
> To fix, add parameters limiting lookahead, and
> use in virtqueue_avail_bytes.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>

Nice, much simpler than the ideas I had.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-02 10:18                 ` Stefan Hajnoczi
@ 2012-11-02 14:48                   ` Michael S. Tsirkin
  2012-11-02 19:44                     ` Stefan Hajnoczi
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-02 14:48 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Amit Shah, Edivaldo de Araujo Pereira, qemu-devel,
	Anthony Liguori, Bug 1066055

On Fri, Nov 02, 2012 at 11:18:18AM +0100, Stefan Hajnoczi wrote:
> On Thu, Nov 1, 2012 at 5:07 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> > a regression in virtio-net performance because it looks
> > into the ring aggressively while we really only care
> > about a single packet worth of buffers.
> > To fix, add parameters limiting lookahead, and
> > use in virtqueue_avail_bytes.
> >
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
> 
> Nice, much simpler than the ideas I had.
> 
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

Anthony, could you apply this out of band, please, so this stops
biting people?

Thanks,
MST

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-02 14:48                   ` Michael S. Tsirkin
@ 2012-11-02 19:44                     ` Stefan Hajnoczi
  0 siblings, 0 replies; 28+ messages in thread
From: Stefan Hajnoczi @ 2012-11-02 19:44 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Amit Shah, Edivaldo de Araujo Pereira, qemu-devel,
	Anthony Liguori, Bug 1066055

On Fri, Nov 2, 2012 at 3:48 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Fri, Nov 02, 2012 at 11:18:18AM +0100, Stefan Hajnoczi wrote:
>> On Thu, Nov 1, 2012 at 5:07 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>> > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
>> > a regression in virtio-net performance because it looks
>> > into the ring aggressively while we really only care
>> > about a single packet worth of buffers.
>> > To fix, add parameters limiting lookahead, and
>> > use in virtqueue_avail_bytes.
>> >
>> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>> > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
>>
>> Nice, much simpler than the ideas I had.
>>
>> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> Anthony, could you apply this out of band, please, so this stops
> biting people?

Especially for the 1.3 release so that we don't have a virtio
performance regression.

Stefan

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
  2012-11-02  9:56                 ` Amit Shah
  2012-11-02 10:18                 ` Stefan Hajnoczi
@ 2012-11-27 16:25                 ` Michael S. Tsirkin
  2012-11-27 16:54                   ` Edivaldo de Araujo Pereira
                                     ` (2 more replies)
  2 siblings, 3 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-27 16:25 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Anthony Liguori,
	Edivaldo de Araujo Pereira

On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
> Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> a regression in virtio-net performance because it looks
> into the ring aggressively while we really only care
> about a single packet worth of buffers.
> To fix, add parameters limiting lookahead, and
> use in virtqueue_avail_bytes.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>

Ping.
Anthony - going to apply this?


> ---
> 
> Edivaldo, could you please confirm this fixes the regression?
> 
> diff --git a/hw/virtio-serial-bus.c b/hw/virtio-serial-bus.c
> index d20bd8b..a761cdc 100644
> --- a/hw/virtio-serial-bus.c
> +++ b/hw/virtio-serial-bus.c
> @@ -297,7 +297,7 @@ size_t virtio_serial_guest_ready(VirtIOSerialPort *port)
>      if (use_multiport(port->vser) && !port->guest_connected) {
>          return 0;
>      }
> -    virtqueue_get_avail_bytes(vq, &bytes, NULL);
> +    virtqueue_get_avail_bytes(vq, &bytes, NULL, UINT_MAX, 0);
>      return bytes;
>  }
>  
> diff --git a/hw/virtio.c b/hw/virtio.c
> index ec8b7d8..f40a8c5 100644
> --- a/hw/virtio.c
> +++ b/hw/virtio.c
> @@ -336,7 +336,8 @@ static unsigned virtqueue_next_desc(hwaddr desc_pa,
>  }
>  
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> -                               unsigned int *out_bytes)
> +                               unsigned int *out_bytes,
> +                               unsigned max_in_bytes, unsigned max_out_bytes)
>  {
>      unsigned int idx;
>      unsigned int total_bufs, in_total, out_total;
> @@ -385,6 +386,9 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>              } else {
>                  out_total += vring_desc_len(desc_pa, i);
>              }
> +            if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
> +                goto done;
> +            }
>          } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>  
>          if (!indirect)
> @@ -392,6 +396,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>          else
>              total_bufs++;
>      }
> +done:
>      if (in_bytes) {
>          *in_bytes = in_total;
>      }
> @@ -405,12 +410,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>  {
>      unsigned int in_total, out_total;
>  
> -    virtqueue_get_avail_bytes(vq, &in_total, &out_total);
> -    if ((in_bytes && in_bytes < in_total)
> -        || (out_bytes && out_bytes < out_total)) {
> -        return 1;
> -    }
> -    return 0;
> +    virtqueue_get_avail_bytes(vq, &in_total, &out_total, in_bytes, out_bytes);
> +    return in_bytes <= in_total && out_bytes <= out_total;
>  }
>  
>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> diff --git a/hw/virtio.h b/hw/virtio.h
> index ac482be..1278328 100644
> --- a/hw/virtio.h
> +++ b/hw/virtio.h
> @@ -150,7 +150,8 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>                            unsigned int out_bytes);
>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> -                               unsigned int *out_bytes);
> +                               unsigned int *out_bytes,
> +                               unsigned max_in_bytes, unsigned max_out_bytes);
>  
>  void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
>  
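
The practical effect of the new limits is easiest to see from the
caller's side: virtqueue_avail_bytes() above now forwards the requested
in_bytes/out_bytes as the walk bounds, so a device asking "is one
packet's worth of buffers available?" no longer pays for a scan of the
entire ring. A minimal caller-side sketch of that pattern (the helper
name and the byte threshold are illustrative, not taken from the
patch):

/* Hypothetical caller sketch; assumes "hw/virtio.h" for VirtQueue and
 * the patched virtqueue_get_avail_bytes() prototype, plus <stdbool.h>.
 * PACKET_BYTES is an assumed MTU-sized threshold for illustration. */
#define PACKET_BYTES 1518

static bool have_room_for_one_packet(VirtQueue *vq)
{
    unsigned int in_total, out_total;

    /* The descriptor walk stops as soon as in_total reaches
     * PACKET_BYTES; the out limit is 0 because guest-readable
     * space does not matter on the receive path. */
    virtqueue_get_avail_bytes(vq, &in_total, &out_total, PACKET_BYTES, 0);
    return in_total >= PACKET_BYTES;
}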

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-27 16:25                 ` Michael S. Tsirkin
@ 2012-11-27 16:54                   ` Edivaldo de Araujo Pereira
  2012-11-27 19:47                   ` Anthony Liguori
  2012-11-28 21:53                   ` Michael S. Tsirkin
  2 siblings, 0 replies; 28+ messages in thread
From: Edivaldo de Araujo Pereira @ 2012-11-27 16:54 UTC (permalink / raw)
  To: Amit Shah, Michael S. Tsirkin
  Cc: Stefan Hajnoczi, qemu-devel, Anthony Liguori, Bug 1066055

Dear friends, 

Please excuse me for not reporting earlier... I confirm that Michael's patch really fixes the problem I reported: the regression went away when I applied it, so I think it is ready to be applied.

Thanks,

Edivaldo de Araújo Pereira


--- On Tue, 27/11/12, Michael S. Tsirkin <mst@redhat.com> wrote:

> From: Michael S. Tsirkin <mst@redhat.com>
> Subject: Re: [PATCH] virtio: limit avail bytes lookahead
> To: "Amit Shah" <amit.shah@redhat.com>
> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Edivaldo de Araujo Pereira" <edivaldoapereira@yahoo.com.br>, qemu-devel@nongnu.org, "Anthony Liguori" <anthony@codemonkey.ws>, "Bug 1066055" <1066055@bugs.launchpad.net>
> Date: Tuesday, 27 November 2012, 8:25
> On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
> > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> > a regression in virtio-net performance because it looks
> > into the ring aggressively while we really only care
> > about a single packet worth of buffers.
> > To fix, add parameters limiting lookahead, and
> > use in virtqueue_avail_bytes.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
> 
> Ping.
> Anthony - going to apply this?
> 
> 
> > ---
> > 
> > Edivaldo, could you please confirm this fixes the regression?
> > 
> > diff --git a/hw/virtio-serial-bus.c b/hw/virtio-serial-bus.c
> > index d20bd8b..a761cdc 100644
> > --- a/hw/virtio-serial-bus.c
> > +++ b/hw/virtio-serial-bus.c
> > @@ -297,7 +297,7 @@ size_t virtio_serial_guest_ready(VirtIOSerialPort *port)
> >      if (use_multiport(port->vser) && !port->guest_connected) {
> >          return 0;
> >      }
> > -    virtqueue_get_avail_bytes(vq, &bytes, NULL);
> > +    virtqueue_get_avail_bytes(vq, &bytes, NULL, UINT_MAX, 0);
> >      return bytes;
> >  }
> >  
> > diff --git a/hw/virtio.c b/hw/virtio.c
> > index ec8b7d8..f40a8c5 100644
> > --- a/hw/virtio.c
> > +++ b/hw/virtio.c
> > @@ -336,7 +336,8 @@ static unsigned virtqueue_next_desc(hwaddr desc_pa,
> >  }
> >  
> >  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> > -                               unsigned int *out_bytes)
> > +                               unsigned int *out_bytes,
> > +                               unsigned max_in_bytes, unsigned max_out_bytes)
> >  {
> >      unsigned int idx;
> >      unsigned int total_bufs, in_total, out_total;
> > @@ -385,6 +386,9 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> >              } else {
> >                  out_total += vring_desc_len(desc_pa, i);
> >              }
> > +            if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
> > +                goto done;
> > +            }
> >          } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
> >  
> >          if (!indirect)
> > @@ -392,6 +396,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> >          else
> >              total_bufs++;
> >      }
> > +done:
> >      if (in_bytes) {
> >          *in_bytes = in_total;
> >      }
> > @@ -405,12 +410,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
> >  {
> >      unsigned int in_total, out_total;
> >  
> > -    virtqueue_get_avail_bytes(vq, &in_total, &out_total);
> > -    if ((in_bytes && in_bytes < in_total)
> > -        || (out_bytes && out_bytes < out_total)) {
> > -        return 1;
> > -    }
> > -    return 0;
> > +    virtqueue_get_avail_bytes(vq, &in_total, &out_total, in_bytes, out_bytes);
> > +    return in_bytes <= in_total && out_bytes <= out_total;
> >  }
> >  
> >  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
> > diff --git a/hw/virtio.h b/hw/virtio.h
> > index ac482be..1278328 100644
> > --- a/hw/virtio.h
> > +++ b/hw/virtio.h
> > @@ -150,7 +150,8 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
> >  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
> >                            unsigned int out_bytes);
> >  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> > -                               unsigned int *out_bytes);
> > +                               unsigned int *out_bytes,
> > +                               unsigned max_in_bytes, unsigned max_out_bytes);
> >  
> >  void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
> >  
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-27 16:25                 ` Michael S. Tsirkin
  2012-11-27 16:54                   ` Edivaldo de Araujo Pereira
@ 2012-11-27 19:47                   ` Anthony Liguori
  2012-11-28 21:53                   ` Michael S. Tsirkin
  2 siblings, 0 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-11-27 19:47 UTC (permalink / raw)
  To: Michael S. Tsirkin, Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Edivaldo de Araujo Pereira

"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
>> Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
>> a regression in virtio-net performance because it looks
>> into the ring aggressively while we really only care
>> about a single packet worth of buffers.
>> To fix, add parameters limiting lookahead, and
>> use in virtqueue_avail_bytes.
>> 
>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>> Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
>
> Ping.
> Anthony - going to apply this?

Yes, I've got it queued now.  In the future, please post patches as new top-level threads.

Regards,

Anthony Liguori

>
>
>> ---
>> 
>> Edivaldo, could you please confirm this fixes the regression?
>> 
>> diff --git a/hw/virtio-serial-bus.c b/hw/virtio-serial-bus.c
>> index d20bd8b..a761cdc 100644
>> --- a/hw/virtio-serial-bus.c
>> +++ b/hw/virtio-serial-bus.c
>> @@ -297,7 +297,7 @@ size_t virtio_serial_guest_ready(VirtIOSerialPort *port)
>>      if (use_multiport(port->vser) && !port->guest_connected) {
>>          return 0;
>>      }
>> -    virtqueue_get_avail_bytes(vq, &bytes, NULL);
>> +    virtqueue_get_avail_bytes(vq, &bytes, NULL, UINT_MAX, 0);
>>      return bytes;
>>  }
>>  
>> diff --git a/hw/virtio.c b/hw/virtio.c
>> index ec8b7d8..f40a8c5 100644
>> --- a/hw/virtio.c
>> +++ b/hw/virtio.c
>> @@ -336,7 +336,8 @@ static unsigned virtqueue_next_desc(hwaddr desc_pa,
>>  }
>>  
>>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>> -                               unsigned int *out_bytes)
>> +                               unsigned int *out_bytes,
>> +                               unsigned max_in_bytes, unsigned max_out_bytes)
>>  {
>>      unsigned int idx;
>>      unsigned int total_bufs, in_total, out_total;
>> @@ -385,6 +386,9 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>>              } else {
>>                  out_total += vring_desc_len(desc_pa, i);
>>              }
>> +            if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
>> +                goto done;
>> +            }
>>          } while ((i = virtqueue_next_desc(desc_pa, i, max)) != max);
>>  
>>          if (!indirect)
>> @@ -392,6 +396,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>>          else
>>              total_bufs++;
>>      }
>> +done:
>>      if (in_bytes) {
>>          *in_bytes = in_total;
>>      }
>> @@ -405,12 +410,8 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>>  {
>>      unsigned int in_total, out_total;
>>  
>> -    virtqueue_get_avail_bytes(vq, &in_total, &out_total);
>> -    if ((in_bytes && in_bytes < in_total)
>> -        || (out_bytes && out_bytes < out_total)) {
>> -        return 1;
>> -    }
>> -    return 0;
>> +    virtqueue_get_avail_bytes(vq, &in_total, &out_total, in_bytes, out_bytes);
>> +    return in_bytes <= in_total && out_bytes <= out_total;
>>  }
>>  
>>  void virtqueue_map_sg(struct iovec *sg, hwaddr *addr,
>> diff --git a/hw/virtio.h b/hw/virtio.h
>> index ac482be..1278328 100644
>> --- a/hw/virtio.h
>> +++ b/hw/virtio.h
>> @@ -150,7 +150,8 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem);
>>  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
>>                            unsigned int out_bytes);
>>  void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>> -                               unsigned int *out_bytes);
>> +                               unsigned int *out_bytes,
>> +                               unsigned max_in_bytes, unsigned max_out_bytes);
>>  
>>  void virtio_notify(VirtIODevice *vdev, VirtQueue *vq);
>>  

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-27 16:25                 ` Michael S. Tsirkin
  2012-11-27 16:54                   ` Edivaldo de Araujo Pereira
  2012-11-27 19:47                   ` Anthony Liguori
@ 2012-11-28 21:53                   ` Michael S. Tsirkin
  2012-11-29 13:04                     ` Amit Shah
  2 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-28 21:53 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Anthony Liguori,
	Edivaldo de Araujo Pereira

On Tue, Nov 27, 2012 at 06:25:04PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
> > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> > a regression in virtio-net performance because it looks
> > into the ring aggressively while we really only care
> > about a single packet worth of buffers.
> > To fix, add parameters limiting lookahead, and
> > use in virtqueue_avail_bytes.
> > 
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
> 
> Ping.
> Anthony - going to apply this?

virtio-rng was added since then, so naturally the build broke.
Here's a patch on top to fix it up. I have never used virtio-rng
before, so I could not test it at this hour, but it does fix the build.

I'll take a look at how to test it tomorrow, but any
info would be appreciated.
Amit, could you please review?

-- 


diff --git a/hw/virtio-rng.c b/hw/virtio-rng.c
index df329f2..a73ef8e 100644
--- a/hw/virtio-rng.c
+++ b/hw/virtio-rng.c
@@ -43,11 +43,11 @@ static bool is_guest_ready(VirtIORNG *vrng)
     return false;
 }
 
-static size_t get_request_size(VirtQueue *vq)
+static size_t get_request_size(VirtQueue *vq, unsigned quota)
 {
     unsigned int in, out;
 
-    virtqueue_get_avail_bytes(vq, &in, &out);
+    virtqueue_get_avail_bytes(vq, &in, &out, quota, 0);
     return in;
 }
 
@@ -84,12 +84,18 @@ static void chr_read(void *opaque, const void *buf, size_t size)
 static void virtio_rng_process(VirtIORNG *vrng)
 {
     size_t size;
+    unsigned quota;
 
     if (!is_guest_ready(vrng)) {
         return;
     }
 
-    size = get_request_size(vrng->vq);
+    if (vrng->quota_remaining < 0) {
+        quota = 0;
+    } else {
+        quota = MIN((uint64_t)vrng->quota_remaining, (uint64_t)UINT32_MAX);
+    }
+    size = get_request_size(vrng->vq, quota);
     size = MIN(vrng->quota_remaining, size);
     if (size) {
         rng_backend_request_entropy(vrng->rng, size, chr_read, vrng);
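
On the testing question: in a Linux guest the virtio-rng device is
exposed through the kernel's hwrng framework, normally as /dev/hwrng,
so a quick smoke test is simply to read a few bytes from that device
while the patched QEMU provides the backend. A minimal guest-side
sketch (assuming a Linux guest with the virtio-rng driver bound; the
device path and buffer size are illustrative):

/* Guest-side smoke test sketch: pull entropy through virtio-rng.
 * Assumes the guest kernel exposes the device as /dev/hwrng. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[64];
    int fd = open("/dev/hwrng", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/hwrng");
        return 1;
    }
    /* Each read posts guest buffers on the virtqueue and so drives
     * virtio_rng_process() on the host, exercising the bounded
     * get_request_size() path in the patch above. */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) {
        perror("read");
        close(fd);
        return 1;
    }
    printf("read %zd bytes of entropy\n", n);
    close(fd);
    return 0;
}

Repeated reads against a device configured with a byte quota would also
exercise the quota clamp added above.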

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-28 21:53                   ` Michael S. Tsirkin
@ 2012-11-29 13:04                     ` Amit Shah
  2012-11-29 14:10                       ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Amit Shah @ 2012-11-29 13:04 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Anthony Liguori,
	Edivaldo de Araujo Pereira

On (Wed) 28 Nov 2012 [23:53:08], Michael S. Tsirkin wrote:
> On Tue, Nov 27, 2012 at 06:25:04PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
> > > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> > > a regression in virtio-net performance because it looks
> > > into the ring aggressively while we really only care
> > > about a single packet worth of buffers.
> > > To fix, add parameters limiting lookahead, and
> > > use in virtqueue_avail_bytes.
> > > 
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
> > 
> > Ping.
> > Anthony - going to apply this?
> 
> virtio-rng was added since then, so naturally the build broke.
> Here's a patch on top to fix it up. I have never used virtio-rng
> before, so I could not test it at this hour, but it does fix the build.
> 
> I'll take a look at how to test it tomorrow, but any
> info would be appreciated.
> Amit, could you please review?

Looks fine, I assume you will send a v2 of the patch to Anthony?

		Amit

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-29 13:04                     ` Amit Shah
@ 2012-11-29 14:10                       ` Michael S. Tsirkin
  2012-11-29 19:02                         ` Anthony Liguori
  0 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-29 14:10 UTC (permalink / raw)
  To: Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Anthony Liguori,
	Edivaldo de Araujo Pereira

On Thu, Nov 29, 2012 at 06:34:46PM +0530, Amit Shah wrote:
> On (Wed) 28 Nov 2012 [23:53:08], Michael S. Tsirkin wrote:
> > On Tue, Nov 27, 2012 at 06:25:04PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
> > > > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
> > > > a regression in virtio-net performance because it looks
> > > > into the ring aggressively while we really only care
> > > > about a single packet worth of buffers.
> > > > To fix, add parameters limiting lookahead, and
> > > > use in virtqueue_avail_bytes.
> > > > 
> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
> > > 
> > > Ping.
> > > Anthony - going to apply this?
> > 
> > virtio-rng was added since then, so naturally the build broke.
> > Here's a patch on top to fix it up. I have never used virtio-rng
> > before, so I could not test it at this hour, but it does fix the build.
> > 
> > I'll take a look at how to test it tomorrow, but any
> > info would be appreciated.
> > Amit, could you please review?
> 
> Looks fine, I assume you will send a v2 of the patch to Anthony?
> 
> 		Amit


Anthony volunteered to test this, so there will only be a v2 if he sees
problems.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-29 14:10                       ` Michael S. Tsirkin
@ 2012-11-29 19:02                         ` Anthony Liguori
  2012-11-29 22:39                           ` Michael S. Tsirkin
  0 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-11-29 19:02 UTC (permalink / raw)
  To: Michael S. Tsirkin, Amit Shah
  Cc: Stefan Hajnoczi, Bug 1066055, qemu-devel, Edivaldo de Araujo Pereira

"Michael S. Tsirkin" <mst@redhat.com> writes:

> On Thu, Nov 29, 2012 at 06:34:46PM +0530, Amit Shah wrote:
>> On (Wed) 28 Nov 2012 [23:53:08], Michael S. Tsirkin wrote:
>> > On Tue, Nov 27, 2012 at 06:25:04PM +0200, Michael S. Tsirkin wrote:
>> > > On Thu, Nov 01, 2012 at 06:07:21PM +0200, Michael S. Tsirkin wrote:
>> > > > Commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f introduced
>> > > > a regression in virtio-net performance because it looks
>> > > > into the ring aggressively while we really only care
>> > > > about a single packet worth of buffers.
>> > > > To fix, add parameters limiting lookahead, and
>> > > > use in virtqueue_avail_bytes.
>> > > > 
>> > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>> > > > Reported-by: Edivaldo de Araujo Pereira <edivaldoapereira@yahoo.com.br>
>> > > 
>> > > Ping.
>> > > Anthony - going to apply this?
>> > 
>> > virtio-rng was added since then, so naturally the build broke.
>> > Here's a patch on top to fix it up. I have never used virtio-rng
>> > before, so I could not test it at this hour, but it does fix the build.
>> > 
>> > I'll take a look at how to test it tomorrow, but any
>> > info would be appreciated.
>> > Amit, could you please review?
>> 
>> Looks fine, I assume you will send a v2 of the patch to Anthony?
>> 
>> 		Amit
>
>
> Anthony volunteered to test this, so there will only be a v2 if he sees
> problems.

I need a Signed-off-by, so why don't you just go ahead and send a v2,
and I'll test that.

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead
  2012-11-29 19:02                         ` Anthony Liguori
@ 2012-11-29 22:39                           ` Michael S. Tsirkin
  0 siblings, 0 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2012-11-29 22:39 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Amit Shah, Stefan Hajnoczi, Bug 1066055, qemu-devel,
	Edivaldo de Araujo Pereira

> I need a Signed-off-by, so why don't you just go ahead and send a v2,
> and I'll test that.

OK, I sent a v2, which is this fixup and the original patch squashed
together.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
  2012-10-12 17:34 [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch Edivaldo de Araujo Pereira
                   ` (2 preceding siblings ...)
  2012-10-16 12:23 ` Edivaldo de Araujo Pereira
@ 2018-01-10 21:28 ` Thomas Huth
  3 siblings, 0 replies; 28+ messages in thread
From: Thomas Huth @ 2018-01-10 21:28 UTC (permalink / raw)
  To: qemu-devel

The fix has been included here:
https://git.qemu.org/?p=qemu.git;a=commitdiff;h=e1f7b4812eab992de46c98b
... so closing this bug now.

** Changed in: qemu
       Status: New => Fix Released

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1066055

Title:
  Network performance regression with vde_switch

Status in QEMU:
  Fix Released

Bug description:
  I've noticed a significant network performance regression when using
  vde_switch, starting about one week ago (10/05/2012); before that
  date, I used to get about 1.5 Gbits host to guest, but now I can only
  get about 320 Mbits; I didn't find any modification in net/vde.*, just
  in hw/virtio*.

  My command line: 
   qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
    -localtime -ctrl-grab -usbdevice tablet \
    -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
    -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix

  My host runs a kernel 3.6.1 and my guest runs a kernel 3.5.4; the same
  problem happens with other host and guest versions, too.

  I know there are better ways of running a guest, but using vde I get a
  cleaner environment in the host (just one tun/tap interface to
  manage...), which is quite good when running some academic
  experiments.

  Interestingly, at the same time I've noticed a performance enhancement
  of about 25~30 % when using a tun/tap interface, bridged or not.

  Thank you, very much.

  Edivaldo de Araujo Pereira

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1066055/+subscriptions

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2018-01-10 21:40 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-10-12 17:34 [Qemu-devel] [Bug 1066055] [NEW] Network performance regression with vde_switch Edivaldo de Araujo Pereira
2012-10-15  8:58 ` Stefan Hajnoczi
2012-10-15 21:46 ` [Qemu-devel] [Bug 1066055] " Edivaldo de Araujo Pereira
2012-10-16  7:48   ` Stefan Hajnoczi
2012-10-22 11:18     ` Amit Shah
2012-10-22 13:50       ` Edivaldo de Araujo Pereira
2012-10-23 12:55         ` Stefan Hajnoczi
2012-11-01  9:19           ` Amit Shah
2012-11-01 12:04             ` Michael S. Tsirkin
2012-11-01 15:12               ` Amit Shah
2012-11-01 15:50                 ` Michael S. Tsirkin
2012-11-01 16:07               ` [Qemu-devel] [PATCH] virtio: limit avail bytes lookahead Michael S. Tsirkin
2012-11-02  9:56                 ` Amit Shah
2012-11-02 10:18                 ` Stefan Hajnoczi
2012-11-02 14:48                   ` Michael S. Tsirkin
2012-11-02 19:44                     ` Stefan Hajnoczi
2012-11-27 16:25                 ` Michael S. Tsirkin
2012-11-27 16:54                   ` Edivaldo de Araujo Pereira
2012-11-27 19:47                   ` Anthony Liguori
2012-11-28 21:53                   ` Michael S. Tsirkin
2012-11-29 13:04                     ` Amit Shah
2012-11-29 14:10                       ` Michael S. Tsirkin
2012-11-29 19:02                         ` Anthony Liguori
2012-11-29 22:39                           ` Michael S. Tsirkin
2012-11-01 11:42   ` [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch Michael S. Tsirkin
2012-10-16 12:23 ` Edivaldo de Araujo Pereira
2012-10-17 13:51   ` Stefan Hajnoczi
2018-01-10 21:28 ` Thomas Huth

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.