* [Qemu-devel] slow virtio network with vhost=on and multiple cores
@ 2012-11-05 11:51 Dietmar Maurer
  2012-11-05 12:36 ` Stefan Hajnoczi
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-05 11:51 UTC (permalink / raw)
  To: qemu-devel

Network performance with vhost=on is extremely bad if a guest uses multiple cores:

HOST kernel: RHEL 2.6.32-279.11.1.el6
KVM 1.2.0
GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2

Test with something like (install Debian Squeeze first):

./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -hda debian-squeeze-netinst.raw -netdev type=tap,id=net0,ifname=tap111i0,vhost=on -device virtio-net-pci,netdev=net0,id=net0

Downloading a larger file with wget inside the guest will show the problem. Speed drops from 100MB/s to 15MB/s.
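For reference, a tap device like the one named above is typically prepared on the host and attached to a bridge before qemu is started; a rough sketch (the bridge name vmbr0 is only a placeholder and must match your setup):

  # hypothetical host-side setup for the tap device used above
  ip tuntap add dev tap111i0 mode tap
  ip link set tap111i0 up
  brctl addif vmbr0 tap111i0   # attach the tap to whatever bridge carries the LAN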

Can someone reproduce that bug? Any ideas how to fix that?

- Dietmar 


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-05 11:51 [Qemu-devel] slow virtio network with vhost=on and multiple cores Dietmar Maurer
@ 2012-11-05 12:36 ` Stefan Hajnoczi
  2012-11-05 15:01   ` Dietmar Maurer
                     ` (2 more replies)
  0 siblings, 3 replies; 51+ messages in thread
From: Stefan Hajnoczi @ 2012-11-05 12:36 UTC (permalink / raw)
  To: Dietmar Maurer; +Cc: qemu-devel, Michael S. Tsirkin

On Mon, Nov 5, 2012 at 12:51 PM, Dietmar Maurer <dietmar@proxmox.com> wrote:
> Network performance with vhost=on is extremely bad if a guest uses multiple cores:
>
> HOST kernel: RHEL 2.6.32-279.11.1.el6
> KVM 1.2.0
> GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2
>
> Test with something like (install Debian Squeeze first):
>
> ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -hda debian-squeeze-netinst.raw -netdev type=tap,id=net0,ifname=tap111i0,vhost=on -device virtio-net-pci,netdev=net0,id=net0
>
> Downloading a larger file with wget inside the guest will show the problem. Speed drops from 100MB/s to 15MB/s.
>
> Can someone reproduce that bug? Any ideas how to fix that?

Which exact QEMU are you using: qemu or qemu-kvm?  Distro package or
from source?  Any patches applied?

Was performance better in previous versions?

Is the network path you are downloading across reasonably idle so that
you get reproducible results between runs?

Stefan


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-05 12:36 ` Stefan Hajnoczi
@ 2012-11-05 15:01   ` Dietmar Maurer
  2012-11-06  6:12   ` Dietmar Maurer
  2012-11-06  6:48   ` Dietmar Maurer
  2 siblings, 0 replies; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-05 15:01 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Michael S. Tsirkin

> On Mon, Nov 5, 2012 at 12:51 PM, Dietmar Maurer
> <dietmar@proxmox.com> wrote:
> > Network performance with vhost=on is extremely bad if a guest uses
> multiple cores:
> >
> > HOST kernel: RHEL 2.6.32-279.11.1.el6
> > KVM 1.2.0
> > GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2
> >
> > Test with something like (install Debian Squeeze first):
> >
> > ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -
> hda
> > debian-squeeze-netinst.raw -netdev
> > type=tap,id=net0,ifname=tap111i0,vhost=on -device
> > virtio-net-pci,netdev=net0,id=net0
> >
> > Downloading a larger file with wget inside the guest will show the problem.
> Speed drops from 100MB/s to 15MB/s.
> >
> > Can someone reproduce that bug? Any ideas how to fix that?
> 
> Which exact QEMU are you using: qemu or qemu-kvm?  Distro package or
> from source?  Any patches applied?

qemu-kvm 1.2, compiled from source (no patches)

I would also try with the latest qemu, but it seems the qemu site is down?

> Was performance better in previous versions?

We observed the problem only with version 1.2.

> Is the network path you are downloading across reasonably idle so that you
> get reproducible results between runs?

Sure. Using vhost=on drops performance from 100MB/s to 15MB/s.


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-05 12:36 ` Stefan Hajnoczi
  2012-11-05 15:01   ` Dietmar Maurer
@ 2012-11-06  6:12   ` Dietmar Maurer
  2012-11-06  7:46     ` Peter Lieven
  2012-11-06  6:48   ` Dietmar Maurer
  2 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06  6:12 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Michael S. Tsirkin

> > ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -
> hda
> > debian-squeeze-netinst.raw -netdev
> > type=tap,id=net0,ifname=tap111i0,vhost=on -device
> > virtio-net-pci,netdev=net0,id=net0
> >
> > Downloading a larger file with wget inside the guest will show the problem.
> Speed drops from 100MB/s to 15MB/s.
> >
> > Can someone reproduce that bug? Any ideas how to fix that?
> 
> Which exact QEMU are you using: qemu or qemu-kvm?  Distro package or
> from source?  Any patches applied?

I just tested with the latest sources from git://git.qemu-project.org/qemu.git

./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -hda debian-squeeze-netinst.raw -netdev type=tap,id=net0,ifname=tap111i0,vhost=on -device virtio-net-pci,netdev=net0,id=net0 --enable-kvm

I got incredibly bad performance - can't you reproduce the problem?


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-05 12:36 ` Stefan Hajnoczi
  2012-11-05 15:01   ` Dietmar Maurer
  2012-11-06  6:12   ` Dietmar Maurer
@ 2012-11-06  6:48   ` Dietmar Maurer
  2 siblings, 0 replies; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06  6:48 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Michael S. Tsirkin

> Is the network path you are downloading across reasonably idle so that you
> get reproducible results between runs?

I have now also tested with netperf (to the local host).

Results in short (performance in Mbit/sec):

vhost=off,cores=1: 3982
vhost=off,cores=2: 3930
vhost=off,cores=4: 3912

vhost=on,cores=1: 22065
vhost=on,cores=2: 1392
vhost=on,cores=4: 532

As you can see, vhost performance drops from 22065 to 532 - that is a factor of 40!

Any ideas?
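For reference, these numbers were presumably collected by restarting the guest with the same command line as above, varying only cores=N and vhost=on/off, and running netperf against a netserver on the host; roughly:

  # on the host ("maui" below): start the netperf server
  netserver

  # inside the guest, for each cores/vhost combination:
  netperf -H maui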


--- complete logs ---

With vhost=off / cores=1

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    3982.74  


With vhost=off / cores=2

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    3930.62   

With vhost=off / cores=4

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    3912.36  


With vhost=on /cores=1

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.00    22065.48 

With vhost=on /cores=2

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.01    1392.61

With vhost=on /cores=4

# netperf -H maui
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to maui.maurer-it.com (192.168.2.2) port 0 AF_INET : demo
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  16384  16384    10.05     532.31


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  6:12   ` Dietmar Maurer
@ 2012-11-06  7:46     ` Peter Lieven
  2012-11-06  7:51       ` Dietmar Maurer
  2012-11-06  9:01       ` Dietmar Maurer
  0 siblings, 2 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-06  7:46 UTC (permalink / raw)
  To: Dietmar Maurer; +Cc: Stefan Hajnoczi, qemu-devel, Michael S. Tsirkin

Dietmar Maurer wrote:
>> > ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -
>> hda
>> > debian-squeeze-netinst.raw -netdev
>> > type=tap,id=net0,ifname=tap111i0,vhost=on -device
>> > virtio-net-pci,netdev=net0,id=net0
>> >
>> > Downloading a larger file with wget inside the guest will show the
>> problem.
>> Speed drops from 100MB/s to 15MB/s.
>> >
>> > Can someone reproduce that bug? Any ideas how to fix that?
>>
>> Which exact QEMU are you using: qemu or qemu-kvm?  Distro package or
>> from source?  Any patches applied?
>
> I just tested with latest sources from git://git.qemu-project.org/qemu.git
>
> ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -hda
> debian-squeeze-netinst.raw -netdev
> type=tap,id=net0,ifname=tap111i0,vhost=on -device
> virtio-net-pci,netdev=net0,id=net0 --enable-kvm
>
> got incredible bad performance  - can't you reproduce the problem?

I have seen a similar problem, but it seems to occur only with extremely old
guest kernels. With Ubuntu 12.04 it does not seem to be there. Can you try a
recent guest kernel?


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  7:46     ` Peter Lieven
@ 2012-11-06  7:51       ` Dietmar Maurer
  2012-11-06  9:01       ` Dietmar Maurer
  1 sibling, 0 replies; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06  7:51 UTC (permalink / raw)
  To: Peter Lieven; +Cc: Stefan Hajnoczi, qemu-devel, Michael S. Tsirkin

> > got incredible bad performance  - can't you reproduce the problem?
> 
> I have seen a similar problem, but it seems to be only with extreme old guest
> kernels. With Ubuntu 12.04 it seems not to be there. Can you try a recent
> guest kernel.

I know that it works with newer kernels. But we need to get this working with older ones.

They worked before.

I am starting a bisect now:

v1.1.0 good
v1.2.0-rc0 bad

I will let you know the result.
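For reference, the bisect workflow sketched here (each step means rebuilding qemu, booting the guest and re-running the benchmark):

  git bisect start
  git bisect bad v1.2.0-rc0
  git bisect good v1.1.0
  # after each build/boot/measure cycle, mark the tested commit:
  git bisect good    # or: git bisect bad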


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  7:46     ` Peter Lieven
  2012-11-06  7:51       ` Dietmar Maurer
@ 2012-11-06  9:01       ` Dietmar Maurer
  2012-11-06  9:26         ` Jan Kiszka
  1 sibling, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06  9:01 UTC (permalink / raw)
  To: Peter Lieven, jan.kiszka; +Cc: Stefan Hajnoczi, qemu-devel, Michael S. Tsirkin

OK, the bisect pointed me to this commit:

# git bisect bad
7d37d351dffee60fc7048bbfd8573421f15eb724 is the first bad commit
commit 7d37d351dffee60fc7048bbfd8573421f15eb724
Author: Jan Kiszka <jan.kiszka@siemens.com>
Date:   Thu May 17 10:32:39 2012 -0300

    virtio/vhost: Add support for KVM in-kernel MSI injection
    
    Make use of the new vector notifier to track changes of the MSI-X
    configuration of virtio PCI devices. On enabling events, we establish
    the required virtual IRQ to MSI-X message route and link the signaling
    eventfd file descriptor to this vIRQ line. That way, vhost-generated
    interrupts can be directly delivered to an in-kernel MSI-X consumer like
    the x86 APIC.
    
    Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>

:040000 040000 1734ddc60cd8a85c7187e93e5a4c02e6d7706cf8 f417e63a684f3b92f5ff35d256962a2490890f00 M	hw


This obviously breaks vhost when using multiple cores.
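One way to double-check the bisect result would be to rebuild the current tree with that commit reverted and re-run the benchmark - a sketch, assuming the revert applies cleanly:

  git revert 7d37d351dffee60fc7048bbfd8573421f15eb724
  ./configure --target-list=x86_64-softmmu && make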


> -----Original Message-----
> From: Peter Lieven [mailto:lieven-lists@dlhnet.de]
> Sent: Dienstag, 06. November 2012 08:47
> To: Dietmar Maurer
> Cc: Stefan Hajnoczi; qemu-devel@nongnu.org; Michael S. Tsirkin
> Subject: Re: [Qemu-devel] slow virtio network with vhost=on and multiple
> cores
> 
> Dietmar Maurer wrote:
> >> > ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m
> 512 -
> >> hda
> >> > debian-squeeze-netinst.raw -netdev
> >> > type=tap,id=net0,ifname=tap111i0,vhost=on -device
> >> > virtio-net-pci,netdev=net0,id=net0
> >> >
> >> > Downloading a larger file with wget inside the guest will show the
> >> problem.
> >> Speed drops from 100MB/s to 15MB/s.
> >> >
> >> > Can someone reproduce that bug? Any ideas how to fix that?
> >>
> >> Which exact QEMU are you using: qemu or qemu-kvm?  Distro package or
> >> from source?  Any patches applied?
> >
> > I just tested with latest sources from
> > git://git.qemu-project.org/qemu.git
> >
> > ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -
> hda
> > debian-squeeze-netinst.raw -netdev
> > type=tap,id=net0,ifname=tap111i0,vhost=on -device
> > virtio-net-pci,netdev=net0,id=net0 --enable-kvm
> >
> > got incredible bad performance  - can't you reproduce the problem?
> 
> I have seen a similar problem, ...


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  9:01       ` Dietmar Maurer
@ 2012-11-06  9:26         ` Jan Kiszka
  2012-11-06  9:46           ` Dietmar Maurer
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Kiszka @ 2012-11-06  9:26 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin


On 2012-11-06 10:01, Dietmar Maurer wrote:
> OK, bisect point me to this commit:
> 
> # git bisect bad
> 7d37d351dffee60fc7048bbfd8573421f15eb724 is the first bad commit
> commit 7d37d351dffee60fc7048bbfd8573421f15eb724
> Author: Jan Kiszka <jan.kiszka@siemens.com>
> Date:   Thu May 17 10:32:39 2012 -0300
> 
>     virtio/vhost: Add support for KVM in-kernel MSI injection
>     
>     Make use of the new vector notifier to track changes of the MSI-X
>     configuration of virtio PCI devices. On enabling events, we establish
>     the required virtual IRQ to MSI-X message route and link the signaling
>     eventfd file descriptor to this vIRQ line. That way, vhost-generated
>     interrupts can be directly delivered to an in-kernel MSI-X consumer like
>     the x86 APIC.
>     
>     Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>     Signed-off-by: Avi Kivity <avi@redhat.com>
> 
> :040000 040000 1734ddc60cd8a85c7187e93e5a4c02e6d7706cf8 f417e63a684f3b92f5ff35d256962a2490890f00 M	hw
> 
> 
> This obviously breaks vhost when using multiple cores.

With "obviously" you mean you already have a clue why?

I'll try to reproduce.

Jan





* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  9:26         ` Jan Kiszka
@ 2012-11-06  9:46           ` Dietmar Maurer
  2012-11-06 10:12             ` Jan Kiszka
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06  9:46 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin

> > This obviously breaks vhost when using multiple cores.
> 
> With "obviously" you mean you already have a clue why?
> 
> I'll try to reproduce.

No, sorry - I just meant that the performance regression is obvious (a factor of 20 to 40).


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06  9:46           ` Dietmar Maurer
@ 2012-11-06 10:12             ` Jan Kiszka
  2012-11-06 11:24               ` Dietmar Maurer
  0 siblings, 1 reply; 51+ messages in thread
From: Jan Kiszka @ 2012-11-06 10:12 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin


On 2012-11-06 10:46, Dietmar Maurer wrote:
>>> This obviously breaks vhost when using multiple cores.
>>
>> With "obviously" you mean you already have a clue why?
>>
>> I'll try to reproduce.
> 
> No, sorry - just meant the performance regression is obvious (factor 20 to 40).
> 

OK. Did you try to bisect over qemu-kvm as well? I'm wondering if there
was a version of vhost there that didn't have this regression but was
also using direct IRQ delivery inside the kernel.

Jan




* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06 10:12             ` Jan Kiszka
@ 2012-11-06 11:24               ` Dietmar Maurer
  2012-11-08  9:39                 ` Jan Kiszka
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-06 11:24 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin

> On 2012-11-06 10:46, Dietmar Maurer wrote:
> >>> This obviously breaks vhost when using multiple cores.
> >>
> >> With "obviously" you mean you already have a clue why?
> >>
> >> I'll try to reproduce.
> >
> > No, sorry - just meant the performance regression is obvious (factor 20 to
> 40).
> >
> 
> OK. Did you try to bisect over qemu-kvm as well?

No (I thought it was the same code base?)


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-06 11:24               ` Dietmar Maurer
@ 2012-11-08  9:39                 ` Jan Kiszka
  2012-11-08 10:55                   ` Peter Lieven
  2012-11-08 12:03                   ` Dietmar Maurer
  0 siblings, 2 replies; 51+ messages in thread
From: Jan Kiszka @ 2012-11-08  9:39 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin


On 2012-11-06 12:24, Dietmar Maurer wrote:
>> On 2012-11-06 10:46, Dietmar Maurer wrote:
>>>>> This obviously breaks vhost when using multiple cores.
>>>>
>>>> With "obviously" you mean you already have a clue why?
>>>>
>>>> I'll try to reproduce.
>>>
>>> No, sorry - just meant the performance regression is obvious (factor 20 to
>> 40).
>>>
>>
>> OK. Did you try to bisect over qemu-kvm as well?
> 
> No (I thought that is the same code base?)

Already answered, though accidentally in private only: it is the same
code base now, but qemu-kvm has a different history and may contain
versions that didn't suffer from the issue.

Meanwhile I quickly tried to reproduce but didn't succeed so far
(>10GBit between host and guest with vhost=on and 2 guest cores).
However, I finally realized that we are talking about a pretty special
host kernel which I don't have around. I guess this is better dealt with
by Red Hat folks. Specifically, they should know what features that
kernel exposes and what it lacks.

Jan




* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-08  9:39                 ` Jan Kiszka
@ 2012-11-08 10:55                   ` Peter Lieven
  2012-11-08 12:03                   ` Dietmar Maurer
  1 sibling, 0 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-08 10:55 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Stefan Hajnoczi, Peter Lieven, Michael S. Tsirkin,
	Dietmar Maurer, qemu-devel

Jan Kiszka wrote:
> On 2012-11-06 12:24, Dietmar Maurer wrote:
>>> On 2012-11-06 10:46, Dietmar Maurer wrote:
>>>>>> This obviously breaks vhost when using multiple cores.
>>>>>
>>>>> With "obviously" you mean you already have a clue why?
>>>>>
>>>>> I'll try to reproduce.
>>>>
>>>> No, sorry - just meant the performance regression is obvious (factor
>>>> 20 to
>>> 40).
>>>>
>>>
>>> OK. Did you try to bisect over qemu-kvm as well?
>>
>> No (I thought that is the same code base?)
>
> Already answered, though accidentally in private only: it is the same
> code base now, but qemu-kvm has a different history and may contain
> versions that didn't suffer from the issue.
>
> Meanwhile I quickly tried to reproduce but didn't succeed so far
> (>10GBit between host and guest with vhost=on and 2 guest cores).
> However, I finally realized that we are talking about a pretty special
> host kernel which I don't have around. I guess this is better dealt with
> by Red Hat folks. Specifically, they should know what features that
> kernel exposes and what it lacks.

Hi Jan,

I see the same issue with an Ubuntu 12.04 (3.2.0) host kernel as well.
The performance drop can likewise only be seen if the guest kernel
is rather old (e.g. 2.6.32).

Maybe the old guest kernel is missing a feature the new injection routines
need and the failure path is very slow?

Peter

>
> Jan
>
>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-08  9:39                 ` Jan Kiszka
  2012-11-08 10:55                   ` Peter Lieven
@ 2012-11-08 12:03                   ` Dietmar Maurer
  2012-11-08 15:02                     ` Peter Lieven
  1 sibling, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-08 12:03 UTC (permalink / raw)
  To: Jan Kiszka; +Cc: Stefan Hajnoczi, Peter Lieven, qemu-devel, Michael S. Tsirkin

> Meanwhile I quickly tried to reproduce but didn't succeed so far (>10GBit
> between host and guest with vhost=on and 2 guest cores).
> However, I finally realized that we are talking about a pretty special host
> kernel which I don't have around. I guess this is better dealt with by Red Hat
> folks. Specifically, they should know what features that kernel exposes and
> what it lacks.

You can also reproduce the bug with Ubuntu 12.04 on the host - it is not Red Hat specific.


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-08 12:03                   ` Dietmar Maurer
@ 2012-11-08 15:02                     ` Peter Lieven
  2012-11-09  5:55                       ` Dietmar Maurer
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-08 15:02 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, qemu-devel,
	Michael S. Tsirkin

Dietmar Maurer wrote:
>> Meanwhile I quickly tried to reproduce but didn't succeed so far
>> (>10GBit
>> between host and guest with vhost=on and 2 guest cores).
>> However, I finally realized that we are talking about a pretty special
>> host
>> kernel which I don't have around. I guess this is better dealt with by
>> Red Hat
>> folks. Specifically, they should know what features that kernel exposes
>> and
>> what it lacks.
>
> You can also reproduce the bug with Ubuntu 12.04 on the host - it is not
> red-hat related.

Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off
as a command-line option to qemu-kvm-1.2.0?
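Spelled out against Dietmar's earlier command line, that suggestion amounts to roughly:

  ./x86_64-softmmu/qemu-system-x86_64 --enable-kvm --machine pc,kernel_irqchip=off \
      -smp sockets=1,cores=2 -m 512 -hda debian-squeeze-netinst.raw \
      -netdev type=tap,id=net0,ifname=tap111i0,vhost=on \
      -device virtio-net-pci,netdev=net0,id=net0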

Peter


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-08 15:02                     ` Peter Lieven
@ 2012-11-09  5:55                       ` Dietmar Maurer
  2012-11-09 17:27                         ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-09  5:55 UTC (permalink / raw)
  To: Peter Lieven; +Cc: Stefan Hajnoczi, Jan Kiszka, qemu-devel, Michael S. Tsirkin

> Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off as
> cmdline option to qemu-kvm-1.2.0?

I get full speed if I use that flag.


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-09  5:55                       ` Dietmar Maurer
@ 2012-11-09 17:27                         ` Peter Lieven
  2012-11-09 17:51                           ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-09 17:27 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, qemu-devel,
	Michael S. Tsirkin

Dietmar Maurer wrote:
>> Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off
>> as
>> cmdline option to qemu-kvm-1.2.0?
>
> I get full speed if i use that flag.
>
>

I also tried to reproduce it and can confirm your findings: host Ubuntu
12.04 LTS (kernel 3.2) with vanilla qemu-kvm 1.2.0 and vhost-net, and
Ubuntu 10.04.4 (Linux 2.6.32) as guest. vhost-net performance drops by
approximately a factor of 50-100 if I do not disable kernel_irqchip. Normal
virtio and e1000 seem to work fine.


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-09 17:27                         ` Peter Lieven
@ 2012-11-09 17:51                           ` Peter Lieven
  2012-11-09 18:03                             ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-09 17:51 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel, Jan Kiszka,
	Dietmar Maurer

It seems that with the in-kernel irqchip the interrupts are distributed across
all vCPUs; without the in-kernel irqchip all interrupts are on CPU0. Maybe
this is related.
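The listings below are the guest's interrupt counters, i.e. what something like this shows in each configuration:

  cat /proc/interrupts
  # to watch which CPUs the virtio vectors hit while the benchmark runs:
  watch -d 'cat /proc/interrupts'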

without inkernel irqchip
           CPU0       CPU1       CPU2       CPU3
  0:         16          0          0          0   IO-APIC-edge      timer
  1:         23          0          0          0   IO-APIC-edge      i8042
  4:          1          0          0          0   IO-APIC-edge
  6:          4          0          0          0   IO-APIC-edge      floppy
  7:          0          0          0          0   IO-APIC-edge      parport0
  8:          0          0          0          0   IO-APIC-edge      rtc0
  9:          0          0          0          0   IO-APIC-fasteoi   acpi
 11:         76          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb1
 12:        102          0          0          0   IO-APIC-edge      i8042
 14:          0          0          0          0   IO-APIC-edge      ata_piix
 15:      16881          0          0          0   IO-APIC-edge      ata_piix
 24:          0          0          0          0   PCI-MSI-edge      virtio1-config
 25:       5225          0          0          0   PCI-MSI-edge      virtio1-requests
 26:          0          0          0          0   PCI-MSI-edge      virtio0-config
 27:      72493          0          0          0   PCI-MSI-edge      virtio0-input
...

with inkernel irqchip
           CPU0       CPU1       CPU2       CPU3
  0:         16          0          0          0   IO-APIC-edge      timer
  1:          0          3          3          1   IO-APIC-edge      i8042
  4:          0          0          1          0   IO-APIC-edge
  6:          1          0          1          2   IO-APIC-edge      floppy
  7:          0          0          0          0   IO-APIC-edge      parport0
  8:          0          0          0          0   IO-APIC-edge      rtc0
  9:          0          0          0          0   IO-APIC-fasteoi   acpi
 11:          7          9          4          1   IO-APIC-fasteoi   uhci_hcd:usb1
 12:         30         27         29         34   IO-APIC-edge      i8042
 14:          0          0          0          0   IO-APIC-edge      ata_piix
 15:        943        937        950        943   IO-APIC-edge      ata_piix
 24:          0          0          0          0   PCI-MSI-edge      virtio0-config
 25:        930        978        980        947   PCI-MSI-edge      virtio0-input
 26:          0          0          1          0   PCI-MSI-edge      virtio0-output
 27:          0          0          0          0   PCI-MSI-edge      virtio1-config
 28:        543        541        542        553   PCI-MSI-edge      virtio1-requests
...


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-09 17:51                           ` Peter Lieven
@ 2012-11-09 18:03                             ` Peter Lieven
  2012-11-13 11:44                               ` Peter Lieven
  2012-11-13 11:49                               ` Peter Lieven
  0 siblings, 2 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-09 18:03 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel, Jan Kiszka,
	Dietmar Maurer

Remark:
If I disable interrupts on CPU1-3 for virtio, the performance is OK again.

Now we need someone with deeper knowledge of the in-kernel irqchip and of
virtio/vhost driver development to say whether this is a regression in qemu-kvm
or a problem with the old virtio drivers when they receive the interrupt on
different CPUs.
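Presumably "disabling interrupts on CPU1-3" means pinning the virtio MSI vectors to CPU0 inside the guest, along these lines (the IRQ numbers are the ones from the in-kernel irqchip listing quoted below and will differ on other systems):

  echo 1 > /proc/irq/25/smp_affinity   # virtio0-input    -> CPU0 only
  echo 1 > /proc/irq/26/smp_affinity   # virtio0-output   -> CPU0 only
  echo 1 > /proc/irq/28/smp_affinity   # virtio1-requests -> CPU0 only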

Peter Lieven wrote:
> it seems that with in-kernel irqchip the interrupts are distributed across
> all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
> this is related.
>
> without inkernel irqchip
>            CPU0       CPU1       CPU2       CPU3
>   0:         16          0          0          0   IO-APIC-edge      timer
>   1:         23          0          0          0   IO-APIC-edge      i8042
>   4:          1          0          0          0   IO-APIC-edge
>   6:          4          0          0          0   IO-APIC-edge
> floppy
>   7:          0          0          0          0   IO-APIC-edge
> parport0
>   8:          0          0          0          0   IO-APIC-edge      rtc0
>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
>  11:         76          0          0          0   IO-APIC-fasteoi
> uhci_hcd:usb1
>  12:        102          0          0          0   IO-APIC-edge      i8042
>  14:          0          0          0          0   IO-APIC-edge
> ata_piix
>  15:      16881          0          0          0   IO-APIC-edge
> ata_piix
>  24:          0          0          0          0   PCI-MSI-edge
> virtio1-config
>  25:       5225          0          0          0   PCI-MSI-edge
> virtio1-requests
>  26:          0          0          0          0   PCI-MSI-edge
> virtio0-config
>  27:      72493          0          0          0   PCI-MSI-edge
> virtio0-input
> ...
>
> with inkernel irqchip
>            CPU0       CPU1       CPU2       CPU3
>   0:         16          0          0          0   IO-APIC-edge      timer
>   1:          0          3          3          1   IO-APIC-edge      i8042
>   4:          0          0          1          0   IO-APIC-edge
>   6:          1          0          1          2   IO-APIC-edge
> floppy
>   7:          0          0          0          0   IO-APIC-edge
> parport0
>   8:          0          0          0          0   IO-APIC-edge      rtc0
>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
>  11:          7          9          4          1   IO-APIC-fasteoi
> uhci_hcd:usb1
>  12:         30         27         29         34   IO-APIC-edge      i8042
>  14:          0          0          0          0   IO-APIC-edge
> ata_piix
>  15:        943        937        950        943   IO-APIC-edge
> ata_piix
>  24:          0          0          0          0   PCI-MSI-edge
> virtio0-config
>  25:        930        978        980        947   PCI-MSI-edge
> virtio0-input
>  26:          0          0          1          0   PCI-MSI-edge
> virtio0-output
>  27:          0          0          0          0   PCI-MSI-edge
> virtio1-config
>  28:        543        541        542        553   PCI-MSI-edge
> virtio1-requests
> ...
>
>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-09 18:03                             ` Peter Lieven
@ 2012-11-13 11:44                               ` Peter Lieven
  2012-11-13 11:49                               ` Peter Lieven
  1 sibling, 0 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-13 11:44 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jan Kiszka, Dietmar Maurer,
	qemu-devel


On 09.11.2012 19:03, Peter Lieven wrote:
> Remark:
> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>
> Now we need someone with deeper knowledge of the in-kernel irqchip and the
> virtio/vhost driver development to say if this is a regression in qemu-kvm
> or a problem with the old virtio drivers if they receive the interrupt on
> different CPUs.
anyone?


>
> Peter Lieven wrote:
>> it seems that with in-kernel irqchip the interrupts are distributed across
>> all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
>> this is related.
>>
>> without inkernel irqchip
>>             CPU0       CPU1       CPU2       CPU3
>>    0:         16          0          0          0   IO-APIC-edge      timer
>>    1:         23          0          0          0   IO-APIC-edge      i8042
>>    4:          1          0          0          0   IO-APIC-edge
>>    6:          4          0          0          0   IO-APIC-edge
>> floppy
>>    7:          0          0          0          0   IO-APIC-edge
>> parport0
>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>   11:         76          0          0          0   IO-APIC-fasteoi
>> uhci_hcd:usb1
>>   12:        102          0          0          0   IO-APIC-edge      i8042
>>   14:          0          0          0          0   IO-APIC-edge
>> ata_piix
>>   15:      16881          0          0          0   IO-APIC-edge
>> ata_piix
>>   24:          0          0          0          0   PCI-MSI-edge
>> virtio1-config
>>   25:       5225          0          0          0   PCI-MSI-edge
>> virtio1-requests
>>   26:          0          0          0          0   PCI-MSI-edge
>> virtio0-config
>>   27:      72493          0          0          0   PCI-MSI-edge
>> virtio0-input
>> ...
>>
>> with inkernel irqchip
>>             CPU0       CPU1       CPU2       CPU3
>>    0:         16          0          0          0   IO-APIC-edge      timer
>>    1:          0          3          3          1   IO-APIC-edge      i8042
>>    4:          0          0          1          0   IO-APIC-edge
>>    6:          1          0          1          2   IO-APIC-edge
>> floppy
>>    7:          0          0          0          0   IO-APIC-edge
>> parport0
>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>   11:          7          9          4          1   IO-APIC-fasteoi
>> uhci_hcd:usb1
>>   12:         30         27         29         34   IO-APIC-edge      i8042
>>   14:          0          0          0          0   IO-APIC-edge
>> ata_piix
>>   15:        943        937        950        943   IO-APIC-edge
>> ata_piix
>>   24:          0          0          0          0   PCI-MSI-edge
>> virtio0-config
>>   25:        930        978        980        947   PCI-MSI-edge
>> virtio0-input
>>   26:          0          0          1          0   PCI-MSI-edge
>> virtio0-output
>>   27:          0          0          0          0   PCI-MSI-edge
>> virtio1-config
>>   28:        543        541        542        553   PCI-MSI-edge
>> virtio1-requests
>> ...
>>
>>
>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-09 18:03                             ` Peter Lieven
  2012-11-13 11:44                               ` Peter Lieven
@ 2012-11-13 11:49                               ` Peter Lieven
  2012-11-13 16:22                                 ` Michael S. Tsirkin
  1 sibling, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-13 11:49 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Michael S. Tsirkin, Jan Kiszka, Dietmar Maurer,
	qemu-devel


On 09.11.2012 19:03, Peter Lieven wrote:
> Remark:
> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>
> Now we need someone with deeper knowledge of the in-kernel irqchip and the
> virtio/vhost driver development to say if this is a regression in qemu-kvm
> or a problem with the old virtio drivers if they receive the interrupt on
> different CPUs.
anyone?


>
> Peter Lieven wrote:
>> it seems that with in-kernel irqchip the interrupts are distributed across
>> all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
>> this is related.
>>
>> without inkernel irqchip
>>             CPU0       CPU1       CPU2       CPU3
>>    0:         16          0          0          0   IO-APIC-edge      timer
>>    1:         23          0          0          0   IO-APIC-edge      i8042
>>    4:          1          0          0          0   IO-APIC-edge
>>    6:          4          0          0          0   IO-APIC-edge
>> floppy
>>    7:          0          0          0          0   IO-APIC-edge
>> parport0
>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>   11:         76          0          0          0   IO-APIC-fasteoi
>> uhci_hcd:usb1
>>   12:        102          0          0          0   IO-APIC-edge      i8042
>>   14:          0          0          0          0   IO-APIC-edge
>> ata_piix
>>   15:      16881          0          0          0   IO-APIC-edge
>> ata_piix
>>   24:          0          0          0          0   PCI-MSI-edge
>> virtio1-config
>>   25:       5225          0          0          0   PCI-MSI-edge
>> virtio1-requests
>>   26:          0          0          0          0   PCI-MSI-edge
>> virtio0-config
>>   27:      72493          0          0          0   PCI-MSI-edge
>> virtio0-input
>> ...
>>
>> with inkernel irqchip
>>             CPU0       CPU1       CPU2       CPU3
>>    0:         16          0          0          0   IO-APIC-edge      timer
>>    1:          0          3          3          1   IO-APIC-edge      i8042
>>    4:          0          0          1          0   IO-APIC-edge
>>    6:          1          0          1          2   IO-APIC-edge
>> floppy
>>    7:          0          0          0          0   IO-APIC-edge
>> parport0
>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>   11:          7          9          4          1   IO-APIC-fasteoi
>> uhci_hcd:usb1
>>   12:         30         27         29         34   IO-APIC-edge      i8042
>>   14:          0          0          0          0   IO-APIC-edge
>> ata_piix
>>   15:        943        937        950        943   IO-APIC-edge
>> ata_piix
>>   24:          0          0          0          0   PCI-MSI-edge
>> virtio0-config
>>   25:        930        978        980        947   PCI-MSI-edge
>> virtio0-input
>>   26:          0          0          1          0   PCI-MSI-edge
>> virtio0-output
>>   27:          0          0          0          0   PCI-MSI-edge
>> virtio1-config
>>   28:        543        541        542        553   PCI-MSI-edge
>> virtio1-requests
>> ...
>>
>>
>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:22                                 ` Michael S. Tsirkin
@ 2012-11-13 16:21                                   ` Peter Lieven
  2012-11-13 16:26                                     ` Michael S. Tsirkin
  2012-11-13 16:33                                   ` Michael S. Tsirkin
  1 sibling, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-13 16:21 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On 13.11.2012 17:22, Michael S. Tsirkin wrote:
> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
>> On 09.11.2012 19:03, Peter Lieven wrote:
>>> Remark:
>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>>>
>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
>>> or a problem with the old virtio drivers if they receive the interrupt on
>>> different CPUs.
>> anyone?
> Looks like the problem is not in the guest: I tried ubuntu guest
> on a rhel host, I got 8GB/s with vhost and 4GB/s without
> on a host to guest banchmark.
which ubuntu version in the guest?

Peter

>
>
>
>>> Peter Lieven wrote:
>>>> it seems that with in-kernel irqchip the interrupts are distributed across
>>>> all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
>>>> this is related.
>>>>
>>>> without inkernel irqchip
>>>>             CPU0       CPU1       CPU2       CPU3
>>>>    0:         16          0          0          0   IO-APIC-edge      timer
>>>>    1:         23          0          0          0   IO-APIC-edge      i8042
>>>>    4:          1          0          0          0   IO-APIC-edge
>>>>    6:          4          0          0          0   IO-APIC-edge
>>>> floppy
>>>>    7:          0          0          0          0   IO-APIC-edge
>>>> parport0
>>>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>>>   11:         76          0          0          0   IO-APIC-fasteoi
>>>> uhci_hcd:usb1
>>>>   12:        102          0          0          0   IO-APIC-edge      i8042
>>>>   14:          0          0          0          0   IO-APIC-edge
>>>> ata_piix
>>>>   15:      16881          0          0          0   IO-APIC-edge
>>>> ata_piix
>>>>   24:          0          0          0          0   PCI-MSI-edge
>>>> virtio1-config
>>>>   25:       5225          0          0          0   PCI-MSI-edge
>>>> virtio1-requests
>>>>   26:          0          0          0          0   PCI-MSI-edge
>>>> virtio0-config
>>>>   27:      72493          0          0          0   PCI-MSI-edge
>>>> virtio0-input
>>>> ...
>>>>
>>>> with inkernel irqchip
>>>>             CPU0       CPU1       CPU2       CPU3
>>>>    0:         16          0          0          0   IO-APIC-edge      timer
>>>>    1:          0          3          3          1   IO-APIC-edge      i8042
>>>>    4:          0          0          1          0   IO-APIC-edge
>>>>    6:          1          0          1          2   IO-APIC-edge
>>>> floppy
>>>>    7:          0          0          0          0   IO-APIC-edge
>>>> parport0
>>>>    8:          0          0          0          0   IO-APIC-edge      rtc0
>>>>    9:          0          0          0          0   IO-APIC-fasteoi   acpi
>>>>   11:          7          9          4          1   IO-APIC-fasteoi
>>>> uhci_hcd:usb1
>>>>   12:         30         27         29         34   IO-APIC-edge      i8042
>>>>   14:          0          0          0          0   IO-APIC-edge
>>>> ata_piix
>>>>   15:        943        937        950        943   IO-APIC-edge
>>>> ata_piix
>>>>   24:          0          0          0          0   PCI-MSI-edge
>>>> virtio0-config
>>>>   25:        930        978        980        947   PCI-MSI-edge
>>>> virtio0-input
>>>>   26:          0          0          1          0   PCI-MSI-edge
>>>> virtio0-output
>>>>   27:          0          0          0          0   PCI-MSI-edge
>>>> virtio1-config
>>>>   28:        543        541        542        553   PCI-MSI-edge
>>>> virtio1-requests
>>>> ...
>>>>
>>>>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 11:49                               ` Peter Lieven
@ 2012-11-13 16:22                                 ` Michael S. Tsirkin
  2012-11-13 16:21                                   ` Peter Lieven
  2012-11-13 16:33                                   ` Michael S. Tsirkin
  0 siblings, 2 replies; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 16:22 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> 
> On 09.11.2012 19:03, Peter Lieven wrote:
> >Remark:
> >If i disable interrupts on CPU1-3 for virtio the performance is ok again.
> >
> >Now we need someone with deeper knowledge of the in-kernel irqchip and the
> >virtio/vhost driver development to say if this is a regression in qemu-kvm
> >or a problem with the old virtio drivers if they receive the interrupt on
> >different CPUs.
> anyone?

Looks like the problem is not in the guest: I tried an Ubuntu guest
on a RHEL host and got 8GB/s with vhost and 4GB/s without
on a host-to-guest benchmark.



> 
> >
> >Peter Lieven wrote:
> >>it seems that with in-kernel irqchip the interrupts are distributed across
> >>all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
> >>this is related.
> >>
> >>without inkernel irqchip
> >>            CPU0       CPU1       CPU2       CPU3
> >>   0:         16          0          0          0   IO-APIC-edge      timer
> >>   1:         23          0          0          0   IO-APIC-edge      i8042
> >>   4:          1          0          0          0   IO-APIC-edge
> >>   6:          4          0          0          0   IO-APIC-edge
> >>floppy
> >>   7:          0          0          0          0   IO-APIC-edge
> >>parport0
> >>   8:          0          0          0          0   IO-APIC-edge      rtc0
> >>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> >>  11:         76          0          0          0   IO-APIC-fasteoi
> >>uhci_hcd:usb1
> >>  12:        102          0          0          0   IO-APIC-edge      i8042
> >>  14:          0          0          0          0   IO-APIC-edge
> >>ata_piix
> >>  15:      16881          0          0          0   IO-APIC-edge
> >>ata_piix
> >>  24:          0          0          0          0   PCI-MSI-edge
> >>virtio1-config
> >>  25:       5225          0          0          0   PCI-MSI-edge
> >>virtio1-requests
> >>  26:          0          0          0          0   PCI-MSI-edge
> >>virtio0-config
> >>  27:      72493          0          0          0   PCI-MSI-edge
> >>virtio0-input
> >>...
> >>
> >>with inkernel irqchip
> >>            CPU0       CPU1       CPU2       CPU3
> >>   0:         16          0          0          0   IO-APIC-edge      timer
> >>   1:          0          3          3          1   IO-APIC-edge      i8042
> >>   4:          0          0          1          0   IO-APIC-edge
> >>   6:          1          0          1          2   IO-APIC-edge
> >>floppy
> >>   7:          0          0          0          0   IO-APIC-edge
> >>parport0
> >>   8:          0          0          0          0   IO-APIC-edge      rtc0
> >>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> >>  11:          7          9          4          1   IO-APIC-fasteoi
> >>uhci_hcd:usb1
> >>  12:         30         27         29         34   IO-APIC-edge      i8042
> >>  14:          0          0          0          0   IO-APIC-edge
> >>ata_piix
> >>  15:        943        937        950        943   IO-APIC-edge
> >>ata_piix
> >>  24:          0          0          0          0   PCI-MSI-edge
> >>virtio0-config
> >>  25:        930        978        980        947   PCI-MSI-edge
> >>virtio0-input
> >>  26:          0          0          1          0   PCI-MSI-edge
> >>virtio0-output
> >>  27:          0          0          0          0   PCI-MSI-edge
> >>virtio1-config
> >>  28:        543        541        542        553   PCI-MSI-edge
> >>virtio1-requests
> >>...
> >>
> >>
> >


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:21                                   ` Peter Lieven
@ 2012-11-13 16:26                                     ` Michael S. Tsirkin
  2012-11-13 16:27                                       ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 16:26 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On Tue, Nov 13, 2012 at 05:21:50PM +0100, Peter Lieven wrote:
> On 13.11.2012 17:22, Michael S. Tsirkin wrote:
> >On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> >>On 09.11.2012 19:03, Peter Lieven wrote:
> >>>Remark:
> >>>If i disable interrupts on CPU1-3 for virtio the performance is ok again.
> >>>
> >>>Now we need someone with deeper knowledge of the in-kernel irqchip and the
> >>>virtio/vhost driver development to say if this is a regression in qemu-kvm
> >>>or a problem with the old virtio drivers if they receive the interrupt on
> >>>different CPUs.
> >>anyone?
> >Looks like the problem is not in the guest: I tried ubuntu guest
> >on a rhel host, I got 8GB/s with vhost and 4GB/s without
> >on a host to guest banchmark.
> which ubuntu version in the guest?
> 
> Peter

ubuntu-10.04.4-server-amd64.iso

> >
> >
> >
> >>>Peter Lieven wrote:
> >>>>it seems that with in-kernel irqchip the interrupts are distributed across
> >>>>all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
> >>>>this is related.
> >>>>
> >>>>without inkernel irqchip
> >>>>            CPU0       CPU1       CPU2       CPU3
> >>>>   0:         16          0          0          0   IO-APIC-edge      timer
> >>>>   1:         23          0          0          0   IO-APIC-edge      i8042
> >>>>   4:          1          0          0          0   IO-APIC-edge
> >>>>   6:          4          0          0          0   IO-APIC-edge
> >>>>floppy
> >>>>   7:          0          0          0          0   IO-APIC-edge
> >>>>parport0
> >>>>   8:          0          0          0          0   IO-APIC-edge      rtc0
> >>>>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> >>>>  11:         76          0          0          0   IO-APIC-fasteoi
> >>>>uhci_hcd:usb1
> >>>>  12:        102          0          0          0   IO-APIC-edge      i8042
> >>>>  14:          0          0          0          0   IO-APIC-edge
> >>>>ata_piix
> >>>>  15:      16881          0          0          0   IO-APIC-edge
> >>>>ata_piix
> >>>>  24:          0          0          0          0   PCI-MSI-edge
> >>>>virtio1-config
> >>>>  25:       5225          0          0          0   PCI-MSI-edge
> >>>>virtio1-requests
> >>>>  26:          0          0          0          0   PCI-MSI-edge
> >>>>virtio0-config
> >>>>  27:      72493          0          0          0   PCI-MSI-edge
> >>>>virtio0-input
> >>>>...
> >>>>
> >>>>with inkernel irqchip
> >>>>            CPU0       CPU1       CPU2       CPU3
> >>>>   0:         16          0          0          0   IO-APIC-edge      timer
> >>>>   1:          0          3          3          1   IO-APIC-edge      i8042
> >>>>   4:          0          0          1          0   IO-APIC-edge
> >>>>   6:          1          0          1          2   IO-APIC-edge
> >>>>floppy
> >>>>   7:          0          0          0          0   IO-APIC-edge
> >>>>parport0
> >>>>   8:          0          0          0          0   IO-APIC-edge      rtc0
> >>>>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> >>>>  11:          7          9          4          1   IO-APIC-fasteoi
> >>>>uhci_hcd:usb1
> >>>>  12:         30         27         29         34   IO-APIC-edge      i8042
> >>>>  14:          0          0          0          0   IO-APIC-edge
> >>>>ata_piix
> >>>>  15:        943        937        950        943   IO-APIC-edge
> >>>>ata_piix
> >>>>  24:          0          0          0          0   PCI-MSI-edge
> >>>>virtio0-config
> >>>>  25:        930        978        980        947   PCI-MSI-edge
> >>>>virtio0-input
> >>>>  26:          0          0          1          0   PCI-MSI-edge
> >>>>virtio0-output
> >>>>  27:          0          0          0          0   PCI-MSI-edge
> >>>>virtio1-config
> >>>>  28:        543        541        542        553   PCI-MSI-edge
> >>>>virtio1-requests
> >>>>...
> >>>>
> >>>>


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:26                                     ` Michael S. Tsirkin
@ 2012-11-13 16:27                                       ` Peter Lieven
  2012-11-13 16:59                                         ` Dietmar Maurer
  2012-11-13 17:03                                         ` Michael S. Tsirkin
  0 siblings, 2 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-13 16:27 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel


Am 13.11.2012 um 17:26 schrieb Michael S. Tsirkin:

> On Tue, Nov 13, 2012 at 05:21:50PM +0100, Peter Lieven wrote:
>> On 13.11.2012 17:22, Michael S. Tsirkin wrote:
>>> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
>>>> On 09.11.2012 19:03, Peter Lieven wrote:
>>>>> Remark:
>>>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>>>>> 
>>>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
>>>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
>>>>> or a problem with the old virtio drivers if they receive the interrupt on
>>>>> different CPUs.
>>>> anyone?
>>> Looks like the problem is not in the guest: I tried ubuntu guest
>>> on a rhel host, I got 8GB/s with vhost and 4GB/s without
>>> on a host to guest banchmark.
>> which ubuntu version in the guest?
>> 
>> Peter
> 
> ubuntu-10.04.4-server-amd64.iso

Can you try with vnet_hdr=on on the nic?
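vnet_hdr is a tap netdev option, so that presumably means adding something like

  -netdev type=tap,id=net0,ifname=tap111i0,vhost=on,vnet_hdr=on

to the command line used above.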

The bug seems to be relevant only when vhost-net is used.

Dietmar, do you see any implications with normal virtio?

Peter


* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:22                                 ` Michael S. Tsirkin
  2012-11-13 16:21                                   ` Peter Lieven
@ 2012-11-13 16:33                                   ` Michael S. Tsirkin
  2012-11-13 16:35                                     ` Peter Lieven
                                                       ` (2 more replies)
  1 sibling, 3 replies; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 16:33 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On Tue, Nov 13, 2012 at 06:22:56PM +0200, Michael S. Tsirkin wrote:
> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> > 
> > On 09.11.2012 19:03, Peter Lieven wrote:
> > >Remark:
> > >If i disable interrupts on CPU1-3 for virtio the performance is ok again.
> > >
> > >Now we need someone with deeper knowledge of the in-kernel irqchip and the
> > >virtio/vhost driver development to say if this is a regression in qemu-kvm
> > >or a problem with the old virtio drivers if they receive the interrupt on
> > >different CPUs.
> > anyone?
> 
> Looks like the problem is not in the guest: I tried ubuntu guest
> on a rhel host, I got 8GB/s with vhost and 4GB/s without
> on a host to guest banchmark.
> 

I tried with upstream qemu on the RHEL kernel and that's even a bit faster.
So it's the Ubuntu kernel. Vanilla 2.6.32 didn't have vhost at all,
so maybe their vhost backport is broken in some way.
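A quick way to see what vhost support a given host kernel actually ships would be something like:

  modinfo vhost_net
  ls -l /dev/vhost-net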

> 
> > 
> > >
> > >Peter Lieven wrote:
> > >>it seems that with in-kernel irqchip the interrupts are distributed across
> > >>all vpcus. without in-kernel irqchip all interrupts are on cpu0. maybe
> > >>this is related.
> > >>
> > >>without inkernel irqchip
> > >>            CPU0       CPU1       CPU2       CPU3
> > >>   0:         16          0          0          0   IO-APIC-edge      timer
> > >>   1:         23          0          0          0   IO-APIC-edge      i8042
> > >>   4:          1          0          0          0   IO-APIC-edge
> > >>   6:          4          0          0          0   IO-APIC-edge
> > >>floppy
> > >>   7:          0          0          0          0   IO-APIC-edge
> > >>parport0
> > >>   8:          0          0          0          0   IO-APIC-edge      rtc0
> > >>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> > >>  11:         76          0          0          0   IO-APIC-fasteoi
> > >>uhci_hcd:usb1
> > >>  12:        102          0          0          0   IO-APIC-edge      i8042
> > >>  14:          0          0          0          0   IO-APIC-edge
> > >>ata_piix
> > >>  15:      16881          0          0          0   IO-APIC-edge
> > >>ata_piix
> > >>  24:          0          0          0          0   PCI-MSI-edge
> > >>virtio1-config
> > >>  25:       5225          0          0          0   PCI-MSI-edge
> > >>virtio1-requests
> > >>  26:          0          0          0          0   PCI-MSI-edge
> > >>virtio0-config
> > >>  27:      72493          0          0          0   PCI-MSI-edge
> > >>virtio0-input
> > >>...
> > >>
> > >>with inkernel irqchip
> > >>            CPU0       CPU1       CPU2       CPU3
> > >>   0:         16          0          0          0   IO-APIC-edge      timer
> > >>   1:          0          3          3          1   IO-APIC-edge      i8042
> > >>   4:          0          0          1          0   IO-APIC-edge
> > >>   6:          1          0          1          2   IO-APIC-edge
> > >>floppy
> > >>   7:          0          0          0          0   IO-APIC-edge
> > >>parport0
> > >>   8:          0          0          0          0   IO-APIC-edge      rtc0
> > >>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
> > >>  11:          7          9          4          1   IO-APIC-fasteoi
> > >>uhci_hcd:usb1
> > >>  12:         30         27         29         34   IO-APIC-edge      i8042
> > >>  14:          0          0          0          0   IO-APIC-edge
> > >>ata_piix
> > >>  15:        943        937        950        943   IO-APIC-edge
> > >>ata_piix
> > >>  24:          0          0          0          0   PCI-MSI-edge
> > >>virtio0-config
> > >>  25:        930        978        980        947   PCI-MSI-edge
> > >>virtio0-input
> > >>  26:          0          0          1          0   PCI-MSI-edge
> > >>virtio0-output
> > >>  27:          0          0          0          0   PCI-MSI-edge
> > >>virtio1-config
> > >>  28:        543        541        542        553   PCI-MSI-edge
> > >>virtio1-requests
> > >>...
> > >>
> > >>
> > >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:33                                   ` Michael S. Tsirkin
@ 2012-11-13 16:35                                     ` Peter Lieven
  2012-11-13 16:46                                       ` Michael S. Tsirkin
  2012-11-13 17:03                                     ` Dietmar Maurer
  2012-11-19 13:49                                     ` Peter Lieven
  2 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-13 16:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel


On 13.11.2012 at 17:33, Michael S. Tsirkin wrote:

> On Tue, Nov 13, 2012 at 06:22:56PM +0200, Michael S. Tsirkin wrote:
>> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
>>> 
>>> On 09.11.2012 19:03, Peter Lieven wrote:
>>>> Remark:
>>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>>>> 
>>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
>>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
>>>> or a problem with the old virtio drivers if they receive the interrupt on
>>>> different CPUs.
>>> anyone?
>> 
>> Looks like the problem is not in the guest: I tried ubuntu guest
>> on a rhel host, I got 8GB/s with vhost and 4GB/s without
>> on a host to guest banchmark.
>> 
> 
> Tried with upstream qemu on rhel kernel and that's even a bit faster.
> So it's ubuntu kernel. vanilla 2.6.32 didn't have vhost at all
> so maybe their vhost backport is broken insome way.

That might be. I think Dietmar was reporting that he had problems
with Debian; they likely use the same backport.

Is it correct that with kernel_irqchip the IRQs are
delivered to all vCPUs? Without kernel_irqchip (in qemu-kvm 1.0.1,
for instance) they were delivered only to vCPU 0. That scenario
was working.

Peter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:35                                     ` Peter Lieven
@ 2012-11-13 16:46                                       ` Michael S. Tsirkin
  0 siblings, 0 replies; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 16:46 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On Tue, Nov 13, 2012 at 05:35:55PM +0100, Peter Lieven wrote:
> 
> Am 13.11.2012 um 17:33 schrieb Michael S. Tsirkin:
> 
> > On Tue, Nov 13, 2012 at 06:22:56PM +0200, Michael S. Tsirkin wrote:
> >> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> >>> 
> >>> On 09.11.2012 19:03, Peter Lieven wrote:
> >>>> Remark:
> >>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
> >>>> 
> >>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
> >>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
> >>>> or a problem with the old virtio drivers if they receive the interrupt on
> >>>> different CPUs.
> >>> anyone?
> >> 
> >> Looks like the problem is not in the guest: I tried ubuntu guest
> >> on a rhel host, I got 8GB/s with vhost and 4GB/s without
> >> on a host to guest banchmark.
> >> 
> > 
> > Tried with upstream qemu on rhel kernel and that's even a bit faster.
> > So it's ubuntu kernel. vanilla 2.6.32 didn't have vhost at all
> > so maybe their vhost backport is broken insome way.
> 
> That might be. I think Dietmar was reporting that he had problems
> with Debian. They likely use the same back port.
> 
> Is it correct that with kernel_irqchip the IRQs are
> delivered to all vCPUs? Without kernel_irqchip (in qemu-kvm 1.0.1
> for instance) they where delivered only to vCPU 0. This scenario
> was working.
> 
> Peter

You need to look at how the MSI tables are programmed to check whether
that is OK - the guest can program MSI to deliver interrupts like that.
Unfortunately pciutils does not dump this, so you'll have to write a bit
of C code if you want to check it.
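
A rough, untested sketch of what that bit of C could look like: it walks the device's config space for the MSI-X capability and dumps each vector's message address/data, from which you can read the programmed destination field (address bits 12-19). The device path 0000:00:03.0 is only a placeholder for the virtio NIC, and mmap()ing the table BAR through sysfs may be refused on some kernels.

/* msix-dump.c: dump the MSI-X table of a PCI device (run as root). */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>

#define DEV "/sys/bus/pci/devices/0000:00:03.0"   /* placeholder BDF */

int main(void)
{
    uint8_t cfg[256];
    int fd = open(DEV "/config", O_RDONLY);

    if (fd < 0 || read(fd, cfg, sizeof(cfg)) < 64) {
        perror("config");
        return 1;
    }
    close(fd);

    uint8_t pos = cfg[0x34];                      /* capability list head */
    while (pos && cfg[pos] != 0x11)               /* 0x11 = MSI-X cap ID  */
        pos = cfg[pos + 1];
    if (!pos) {
        fprintf(stderr, "no MSI-X capability\n");
        return 1;
    }

    uint16_t ctl  = cfg[pos + 2] | (cfg[pos + 3] << 8);
    uint32_t tbl  = cfg[pos + 4] | (cfg[pos + 5] << 8) |
                    (cfg[pos + 6] << 16) | (cfg[pos + 7] << 24);
    int      nvec = (ctl & 0x7ff) + 1;            /* table size            */
    int      bir  = tbl & 7;                      /* BAR holding the table */
    uint32_t off  = tbl & ~7u;                    /* offset inside the BAR */

    char res[128];
    snprintf(res, sizeof(res), DEV "/resource%d", bir);
    fd = open(res, O_RDONLY);
    if (fd < 0) {
        perror(res);
        return 1;
    }

    long   pg   = sysconf(_SC_PAGESIZE);
    off_t  base = off & ~(pg - 1);
    size_t len  = (off - base) + (size_t)nvec * 16;
    uint8_t *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, base);
    if (map == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    volatile uint32_t *e = (volatile uint32_t *)(map + (off - base));
    for (int i = 0; i < nvec; i++, e += 4)        /* 16 bytes per entry    */
        printf("vector %2d: addr 0x%08x data 0x%08x dest 0x%02x%s\n",
               i, e[0], e[2], (e[0] >> 12) & 0xff,
               (e[3] & 1) ? " [masked]" : "");
    return 0;
}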

-- 
MST

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:27                                       ` Peter Lieven
@ 2012-11-13 16:59                                         ` Dietmar Maurer
  2012-11-13 17:03                                         ` Michael S. Tsirkin
  1 sibling, 0 replies; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-13 16:59 UTC (permalink / raw)
  To: Peter Lieven, Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, qemu-devel

> the bug seems to be only relevant when vhost-net is used.
> 
> Dietmar, see you implications with normal virtio?

no, only with vhost=on

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:27                                       ` Peter Lieven
  2012-11-13 16:59                                         ` Dietmar Maurer
@ 2012-11-13 17:03                                         ` Michael S. Tsirkin
  1 sibling, 0 replies; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 17:03 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On Tue, Nov 13, 2012 at 05:27:17PM +0100, Peter Lieven wrote:
> 
> Am 13.11.2012 um 17:26 schrieb Michael S. Tsirkin:
> 
> > On Tue, Nov 13, 2012 at 05:21:50PM +0100, Peter Lieven wrote:
> >> On 13.11.2012 17:22, Michael S. Tsirkin wrote:
> >>> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> >>>> On 09.11.2012 19:03, Peter Lieven wrote:
> >>>>> Remark:
> >>>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
> >>>>> 
> >>>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
> >>>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
> >>>>> or a problem with the old virtio drivers if they receive the interrupt on
> >>>>> different CPUs.
> >>>> anyone?
> >>> Looks like the problem is not in the guest: I tried ubuntu guest
> >>> on a rhel host, I got 8GB/s with vhost and 4GB/s without
> >>> on a host to guest banchmark.
> >> which ubuntu version in the guest?
> >> 
> >> Peter
> > 
> > ubuntu-10.04.4-server-amd64.iso
> 
> can you try with vnet_hdr=on on the nic.

You mean on tap? Same thing.

> the bug seems to be only relevant when vhost-net is used.
> 
> Dietmar, see you implications with normal virtio?
> 
> Peter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:33                                   ` Michael S. Tsirkin
  2012-11-13 16:35                                     ` Peter Lieven
@ 2012-11-13 17:03                                     ` Dietmar Maurer
  2012-11-13 17:07                                       ` Michael S. Tsirkin
  2012-11-19 13:49                                     ` Peter Lieven
  2 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-13 17:03 UTC (permalink / raw)
  To: Michael S. Tsirkin, Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, qemu-devel

> Tried with upstream qemu on rhel kernel and that's even a bit faster.
> So it's ubuntu kernel. vanilla 2.6.32 didn't have vhost at all so maybe their
> vhost backport is broken insome way.

You can also reproduce the problem with RHEL 6.2 as the guest,
but it seems RHEL 6.3 fixed it.

There seem to be a few vhost-specific patches between those two versions.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 17:03                                     ` Dietmar Maurer
@ 2012-11-13 17:07                                       ` Michael S. Tsirkin
  2012-11-13 17:38                                         ` Dietmar Maurer
  0 siblings, 1 reply; 51+ messages in thread
From: Michael S. Tsirkin @ 2012-11-13 17:07 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Jan Kiszka, Peter Lieven, Peter Lieven, qemu-devel, Stefan Hajnoczi

On Tue, Nov 13, 2012 at 05:03:20PM +0000, Dietmar Maurer wrote:
> > Tried with upstream qemu on rhel kernel and that's even a bit faster.
> > So it's ubuntu kernel. vanilla 2.6.32 didn't have vhost at all so maybe their
> > vhost backport is broken insome way.
> 
> You can also reproduce the problem with RHEL6.2 as guest
> But it seems RHEL 6.3 fixed it.

RHEL6.2 on ubuntu host?

> There seem to be a few vhost specific patches between those 2 versions.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 17:07                                       ` Michael S. Tsirkin
@ 2012-11-13 17:38                                         ` Dietmar Maurer
  2012-11-15 18:26                                           ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-13 17:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jan Kiszka, Peter Lieven, Peter Lieven, qemu-devel, Stefan Hajnoczi

> > You can also reproduce the problem with RHEL6.2 as guest But it seems
> > RHEL 6.3 fixed it.
> 
> RHEL6.2 on ubuntu host?

I only tested with RHEL6.3 kernel on host.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 17:38                                         ` Dietmar Maurer
@ 2012-11-15 18:26                                           ` Peter Lieven
  2012-11-16 10:44                                             ` Dietmar Maurer
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-11-15 18:26 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Peter Lieven, Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Jan Kiszka, Peter Lieven

Dietmar Maurer wrote:
>> > You can also reproduce the problem with RHEL6.2 as guest But it seems
>> > RHEL 6.3 fixed it.
>>
>> RHEL6.2 on ubuntu host?
>
> I only tested with RHEL6.3 kernel on host.

Can you check if there is a difference in interrupt delivery between those
two?

cat /proc/interrupts should be sufficient after some traffic has flown.
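
For example, this quick untested sketch prints the per-vCPU interrupt deltas for the virtio vectors over a few seconds, which makes it obvious where they actually land (it assumes the relevant /proc/interrupts lines contain the string "virtio" and at most 32 CPUs):

/* irqdelta.c: sample /proc/interrupts twice and print per-CPU deltas
 * for the virtio vectors. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAXCPU 32
#define MAXIRQ 64

struct snap { char name[16]; long cnt[MAXCPU]; };

static int take(struct snap *s, int *ncpu)
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[1024];
    int n = 0;

    if (!f) {
        perror("/proc/interrupts");
        exit(1);
    }
    if (fgets(line, sizeof(line), f)) {            /* header: CPU0 CPU1 ... */
        *ncpu = 0;
        for (char *p = strtok(line, " \t\n"); p && *ncpu < MAXCPU;
             p = strtok(NULL, " \t\n"))
            (*ncpu)++;
    }
    while (n < MAXIRQ && fgets(line, sizeof(line), f)) {
        if (!strstr(line, "virtio"))               /* only virtio vectors    */
            continue;
        char *p = strtok(line, " \t");             /* IRQ number, e.g. "25:" */
        snprintf(s[n].name, sizeof(s[n].name), "%s", p);
        for (int c = 0; c < *ncpu; c++) {
            p = strtok(NULL, " \t");
            s[n].cnt[c] = p ? atol(p) : 0;
        }
        n++;
    }
    fclose(f);
    return n;
}

int main(void)
{
    struct snap before[MAXIRQ], after[MAXIRQ];
    int ncpu = 0;
    int n = take(before, &ncpu);

    sleep(5);                                      /* let some traffic flow */
    take(after, &ncpu);

    for (int i = 0; i < n; i++) {
        printf("%-6s", before[i].name);
        for (int c = 0; c < ncpu; c++)
            printf(" %8ld", after[i].cnt[c] - before[i].cnt[c]);
        putchar('\n');
    }
    return 0;
}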

peter

>
>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-15 18:26                                           ` Peter Lieven
@ 2012-11-16 10:44                                             ` Dietmar Maurer
  2012-11-16 11:00                                               ` Alexandre DERUMIER
  0 siblings, 1 reply; 51+ messages in thread
From: Dietmar Maurer @ 2012-11-16 10:44 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, qemu-devel,
	Michael S. Tsirkin

> > I only tested with RHEL6.3 kernel on host.
> 
> can you check if there is a difference on interrupt delivery between those
> two?
> 
> cat /proc/interrupts should be sufficient after some traffic has flown.

While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.

Sigh :-/

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-16 10:44                                             ` Dietmar Maurer
@ 2012-11-16 11:00                                               ` Alexandre DERUMIER
  2012-12-03 11:23                                                 ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Alexandre DERUMIER @ 2012-11-16 11:00 UTC (permalink / raw)
  To: Dietmar Maurer
  Cc: Peter Lieven, Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Jan Kiszka, Peter Lieven

>>While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>
>>Sigh :-/

Hi,

I can reproduce the bug on all my Dell servers of different generations (R710 (Intel), R815 (AMD), 2950 (Intel)).

They all use Broadcom bnx2 network cards (I don't know if that is related).

Host kernel: RHEL 6.3 with a 2.6.32 kernel.

Guest kernel: 2.6.32 (Debian Squeeze, Ubuntu).

No problem with guest kernel 3.2.




----- Mail original -----

De: "Dietmar Maurer" <dietmar@proxmox.com>
À: "Peter Lieven" <lieven-lists@dlhnet.de>
Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
Envoyé: Vendredi 16 Novembre 2012 11:44:26
Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores

> > I only tested with RHEL6.3 kernel on host.
>
> can you check if there is a difference on interrupt delivery between those
> two?
>
> cat /proc/interrupts should be sufficient after some traffic has flown.

While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.

Sigh :-/

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-13 16:33                                   ` Michael S. Tsirkin
  2012-11-13 16:35                                     ` Peter Lieven
  2012-11-13 17:03                                     ` Dietmar Maurer
@ 2012-11-19 13:49                                     ` Peter Lieven
  2 siblings, 0 replies; 51+ messages in thread
From: Peter Lieven @ 2012-11-19 13:49 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Stefan Hajnoczi, Peter Lieven, Jan Kiszka, Dietmar Maurer, qemu-devel

On 13.11.2012 17:33, Michael S. Tsirkin wrote:
> On Tue, Nov 13, 2012 at 06:22:56PM +0200, Michael S. Tsirkin wrote:
>> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
>>> On 09.11.2012 19:03, Peter Lieven wrote:
>>>> Remark:
>>>> If i disable interrupts on CPU1-3 for virtio the performance is ok again.
>>>>
>>>> Now we need someone with deeper knowledge of the in-kernel irqchip and the
>>>> virtio/vhost driver development to say if this is a regression in qemu-kvm
>>>> or a problem with the old virtio drivers if they receive the interrupt on
>>>> different CPUs.
>>> anyone?
>> Looks like the problem is not in the guest: I tried ubuntu guest
>> on a rhel host, I got 8GB/s with vhost and 4GB/s without
>> on a host to guest banchmark.
>>
> Tried with upstream qemu on rhel kernel and that's even a bit faster.
> So it's ubuntu kernel. vanilla 2.6.32 didn't have vhost at all
> so maybe their vhost backport is broken insome way.
>
Do you remember in which vanilla kernel version vhost-net
was first officially included?

Peter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-11-16 11:00                                               ` Alexandre DERUMIER
@ 2012-12-03 11:23                                                 ` Peter Lieven
  2012-12-09 18:38                                                   ` Alexandre DERUMIER
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2012-12-03 11:23 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel, Jan Kiszka,
	Peter Lieven, Dietmar Maurer


On 16.11.2012 at 12:00, Alexandre DERUMIER <aderumier@odiso.com> wrote:

>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>> 
>>> Sigh :-/
> 
> Hi,
> 
> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
> 
> They all use broadcom bnx2 network card (don't know if it can be related)
> 
> host kernel : rhel 63 with 2.6.32 kernel
> 
> guest kernel : 2.6.32  (debian squeeze, ubuntu).
> 
> No problem with guest kernel 3.2

Have you had any further progress on this regression/problem?

Thanks,
Peter

> 
> 
> 
> 
> ----- Mail original -----
> 
> De: "Dietmar Maurer" <dietmar@proxmox.com>
> À: "Peter Lieven" <lieven-lists@dlhnet.de>
> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
> Envoyé: Vendredi 16 Novembre 2012 11:44:26
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> 
>>> I only tested with RHEL6.3 kernel on host.
>> 
>> can you check if there is a difference on interrupt delivery between those
>> two?
>> 
>> cat /proc/interrupts should be sufficient after some traffic has flown.
> 
> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
> 
> Sigh :-/

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-12-03 11:23                                                 ` Peter Lieven
@ 2012-12-09 18:38                                                   ` Alexandre DERUMIER
  2013-03-14  9:22                                                     ` Davide Guerri
  0 siblings, 1 reply; 51+ messages in thread
From: Alexandre DERUMIER @ 2012-12-09 18:38 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel, Jan Kiszka,
	Peter Lieven, Dietmar Maurer

>>Have you had any further progress on this regression/problem?

Hi Peter,
I didn't re-test it myself,
but a Proxmox user who had the problem with qemu-kvm 1.2, with both Windows and Linux guests,
doesn't have the problem anymore with qemu 1.3.

http://forum.proxmox.com/threads/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed

I'll try to redo the test myself this week.

Regards,

Alexandre

----- Mail original -----

De: "Peter Lieven" <pl@dlhnet.de>
À: "Alexandre DERUMIER" <aderumier@odiso.com>
Cc: "Dietmar Maurer" <dietmar@proxmox.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>, "Peter Lieven" <lieven-lists@dlhnet.de>
Envoyé: Lundi 3 Décembre 2012 12:23:11
Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores


Am 16.11.2012 um 12:00 schrieb Alexandre DERUMIER <aderumier@odiso.com>:

>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>>
>>> Sigh :-/
>
> Hi,
>
> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
>
> They all use broadcom bnx2 network card (don't know if it can be related)
>
> host kernel : rhel 63 with 2.6.32 kernel
>
> guest kernel : 2.6.32 (debian squeeze, ubuntu).
>
> No problem with guest kernel 3.2

Have you had any further progress on this regression/problem?

Thanks,
Peter

>
>
>
>
> ----- Mail original -----
>
> De: "Dietmar Maurer" <dietmar@proxmox.com>
> À: "Peter Lieven" <lieven-lists@dlhnet.de>
> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
> Envoyé: Vendredi 16 Novembre 2012 11:44:26
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>
>>> I only tested with RHEL6.3 kernel on host.
>>
>> can you check if there is a difference on interrupt delivery between those
>> two?
>>
>> cat /proc/interrupts should be sufficient after some traffic has flown. 
>
> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>
> Sigh :-/

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2012-12-09 18:38                                                   ` Alexandre DERUMIER
@ 2013-03-14  9:22                                                     ` Davide Guerri
  2013-03-14 10:43                                                       ` Alexandre DERUMIER
  0 siblings, 1 reply; 51+ messages in thread
From: Davide Guerri @ 2013-03-14  9:22 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: Peter Lieven, Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Jan Kiszka, Peter Lieven, Dietmar Maurer

[-- Attachment #1: Type: text/plain, Size: 4330 bytes --]

I'd like to reopen this thread because this problem is still here and it's really annoying.
Another possible work-around is to pin the virtio NIC IRQ to one virtual CPU (via /proc/irq/<n>/smp_affinity), but of course this is still sub-optimal.
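
A minimal sketch of that pinning step, assuming (as in the /proc/interrupts dumps earlier in the thread) that the virtio input vector is IRQ 25 - the real number differs per guest, so adjust it; run as root inside the guest:

/* pin-irq.c: restrict one IRQ to vCPU 0 by writing a CPU bitmask
 * to /proc/irq/<n>/smp_affinity. */
#include <stdio.h>

int main(void)
{
    const int irq = 25;                 /* assumed virtio vector number */
    char path[64];

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fputs("1\n", f);                    /* bitmask "1" = vCPU 0 only */
    return fclose(f) ? 1 : 0;
}

The same can obviously be done from a shell with echo; the point is just that the "1" mask keeps all deliveries on vCPU 0.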

Here are some graphs showing the performance of a heavily loaded machine before and after a live migration from KVM 1.0 to KVM 1.2.
Host is Ubuntu 12.10 with kernel 3.5; guest is Ubuntu 10.04.4 LTS with kernel 2.6.32.

History
-------
CPU time:  http://s1299.beta.photobucket.com/user/dguerri/media/CPU-10days_zps4a088ff0.png.html 
CPU Load:  http://s1299.beta.photobucket.com/user/dguerri/media/Load-10days_zpsff27f212.png.html
NET pps:   http://s1299.beta.photobucket.com/user/dguerri/media/pps-10days_zps003dd039.png.html
NET Mbps:  http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-10days_zpsfc3cba8c.png.html

Current
-------
CPU time:  http://s1299.beta.photobucket.com/user/dguerri/media/CPU-2days_zpsd362cac6.png.html
CPU Load:  http://s1299.beta.photobucket.com/user/dguerri/media/load-2days_zpsd4b7b50d.png.html
NET pps:   http://s1299.beta.photobucket.com/user/dguerri/media/pps-2days_zps8f5458c9.png.html
NET Mbps:  http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-2days_zps299338b9.png.html	

Red arrows indicate the kvm version change.
Black arrows indicate when I "pinned" the virtio NIC IRQ to only one CPU (that machine has 2 virtual cores).
As can be seen, the performance penalty is rather high: that machine was almost unusable!

Does version 1.3 fix this issue?

Could someone with the required knowledge look into this, please?
This is a very nasty bug, because I guess I'm not the only one who is unable to upgrade all the machines running a (not so) old kernel... :)

Thanks!

Davide Guerri.




On 09/Dec/2012, at 19:38, Alexandre DERUMIER <aderumier@odiso.com> wrote:

>>> Have you had any further progress on this regression/problem?
> 
> Hi Peter,
> I didn't re-tested myself,
> but a proxmox user who's have the problem with qemu-kvm 1.2, with windows guest and linux guest,
> don't have the problem anymore with qemu 1.3.
> 
> http://forum.proxmox.com/threads/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed
> 
> I'll try to redone test myself this week
> 
> Regards,
> 
> Alexandre
> 
> ----- Mail original -----
> 
> De: "Peter Lieven" <pl@dlhnet.de>
> À: "Alexandre DERUMIER" <aderumier@odiso.com>
> Cc: "Dietmar Maurer" <dietmar@proxmox.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>, "Peter Lieven" <lieven-lists@dlhnet.de>
> Envoyé: Lundi 3 Décembre 2012 12:23:11
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> 
> 
> Am 16.11.2012 um 12:00 schrieb Alexandre DERUMIER <aderumier@odiso.com>:
> 
>>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>>> 
>>>> Sigh :-/
>> 
>> Hi,
>> 
>> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
>> 
>> They all use broadcom bnx2 network card (don't know if it can be related)
>> 
>> host kernel : rhel 63 with 2.6.32 kernel
>> 
>> guest kernel : 2.6.32 (debian squeeze, ubuntu).
>> 
>> No problem with guest kernel 3.2
> 
> Have you had any further progress on this regression/problem?
> 
> Thanks,
> Peter
> 
>> 
>> 
>> 
>> 
>> ----- Mail original -----
>> 
>> De: "Dietmar Maurer" <dietmar@proxmox.com>
>> À: "Peter Lieven" <lieven-lists@dlhnet.de>
>> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
>> Envoyé: Vendredi 16 Novembre 2012 11:44:26
>> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>> 
>>>> I only tested with RHEL6.3 kernel on host.
>>> 
>>> can you check if there is a difference on interrupt delivery between those
>>> two?
>>> 
>>> cat /proc/interrupts should be sufficient after some traffic has flown. 
>> 
>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>> 
>> Sigh :-/
> 
> 


[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5930 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14  9:22                                                     ` Davide Guerri
@ 2013-03-14 10:43                                                       ` Alexandre DERUMIER
  2013-03-14 17:50                                                         ` Michael S. Tsirkin
  0 siblings, 1 reply; 51+ messages in thread
From: Alexandre DERUMIER @ 2013-03-14 10:43 UTC (permalink / raw)
  To: Davide Guerri
  Cc: Peter Lieven, Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Jan Kiszka, Peter Lieven, Dietmar Maurer

I don't think it's fixed in 1.3 or 1.4; some Proxmox users have reported this bug again with guest kernel 2.6.32 (the Proxmox host runs the RHEL 6.3 kernel + qemu 1.4).



----- Mail original -----

De: "Davide Guerri" <d.guerri@unidata.it>
À: "Alexandre DERUMIER" <aderumier@odiso.com>
Cc: "Peter Lieven" <pl@dlhnet.de>, "Michael S. Tsirkin" <mst@redhat.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, qemu-devel@nongnu.org, "Jan Kiszka" <jan.kiszka@web.de>, "Peter Lieven" <lieven-lists@dlhnet.de>, "Dietmar Maurer" <dietmar@proxmox.com>
Envoyé: Jeudi 14 Mars 2013 10:22:22
Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores

I'd like to reopen this thread because this problem is still here and it's really annoying.
Another possible work-around is to pin the virtio nic irq to one virtual CPU (with /proc/smp_affinity) but (of course) this is still sub-optimal.

Here are some graphs showing the performance of a heavy loaded machine after and before a live migration from KVM-1.0 to KVM-1.2.
Host is a Ubuntu 12.10 Kernel 3.5, guest is a Ubuntu 10.04.4 LTS Kernel 2.6.32

History
-------
CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-10days_zps4a088ff0.png.html
CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/Load-10days_zpsff27f212.png.html
NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-10days_zps003dd039.png.html
NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-10days_zpsfc3cba8c.png.html

Current
-------
CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-2days_zpsd362cac6.png.html
CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/load-2days_zpsd4b7b50d.png.html
NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-2days_zps8f5458c9.png.html
NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-2days_zps299338b9.png.html

Red arrows indicate the kvm version change.
Black arrows indicate when I "pinned" the virtio NIC IRQ to only one CPU (that machine has 2 virtual cores).
As can be seen the performance penalty is rather high: that machine was almost unusable!

Does version 1.3 fixes this issue?

Could someone with the required knowledge look into this, please?
Please, this is a very nasty bug because I guess I'm not the only one who is unable to upgrade all the machines with a (not-so) old kernel... :)

Thanks!

Davide Guerri.




On 09/dic/2012, at 19:38, Alexandre DERUMIER <aderumier@odiso.com> wrote:

>>> Have you had any further progress on this regression/problem?
>
> Hi Peter,
> I didn't re-tested myself,
> but a proxmox user who's have the problem with qemu-kvm 1.2, with windows guest and linux guest,
> don't have the problem anymore with qemu 1.3.
>
> http://forum.proxmox.com/threads/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed
>
> I'll try to redone test myself this week
>
> Regards,
>
> Alexandre
>
> ----- Mail original -----
>
> De: "Peter Lieven" <pl@dlhnet.de>
> À: "Alexandre DERUMIER" <aderumier@odiso.com>
> Cc: "Dietmar Maurer" <dietmar@proxmox.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>, "Peter Lieven" <lieven-lists@dlhnet.de>
> Envoyé: Lundi 3 Décembre 2012 12:23:11
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>
>
> Am 16.11.2012 um 12:00 schrieb Alexandre DERUMIER <aderumier@odiso.com>: 
>
>>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>>>
>>>> Sigh :-/
>>
>> Hi,
>>
>> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
>>
>> They all use broadcom bnx2 network card (don't know if it can be related)
>>
>> host kernel : rhel 63 with 2.6.32 kernel
>>
>> guest kernel : 2.6.32 (debian squeeze, ubuntu).
>>
>> No problem with guest kernel 3.2
>
> Have you had any further progress on this regression/problem?
>
> Thanks,
> Peter
>
>>
>>
>>
>>
>> ----- Mail original -----
>>
>> De: "Dietmar Maurer" <dietmar@proxmox.com>
>> À: "Peter Lieven" <lieven-lists@dlhnet.de>
>> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
>> Envoyé: Vendredi 16 Novembre 2012 11:44:26
>> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>>
>>>> I only tested with RHEL6.3 kernel on host.
>>>
>>> can you check if there is a difference on interrupt delivery between those
>>> two?
>>>
>>> cat /proc/interrupts should be sufficient after some traffic has flown.
>>
>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>
>> Sigh :-/
>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14 10:43                                                       ` Alexandre DERUMIER
@ 2013-03-14 17:50                                                         ` Michael S. Tsirkin
  2013-03-14 18:15                                                           ` Davide Guerri
  0 siblings, 1 reply; 51+ messages in thread
From: Michael S. Tsirkin @ 2013-03-14 17:50 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel, Davide Guerri,
	Jan Kiszka, Peter Lieven, Dietmar Maurer


Could be one of the bugs fixed in guest drivers since 2.6.32.
For example, 39d321577405e8e269fd238b278aaf2425fa788a ?
Does it help if you try a more recent guest?

On Thu, Mar 14, 2013 at 11:43:30AM +0100, Alexandre DERUMIER wrote:
> I don't think it's fixed in 1.3 or 1.4, some proxmox users have reported again this bug with guest kernel 2.6.32. (proxmox host is rhel 6.3 kernel + qemu 1.4)
> 
> 
> 
> ----- Mail original -----
> 
> De: "Davide Guerri" <d.guerri@unidata.it>
> À: "Alexandre DERUMIER" <aderumier@odiso.com>
> Cc: "Peter Lieven" <pl@dlhnet.de>, "Michael S. Tsirkin" <mst@redhat.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, qemu-devel@nongnu.org, "Jan Kiszka" <jan.kiszka@web.de>, "Peter Lieven" <lieven-lists@dlhnet.de>, "Dietmar Maurer" <dietmar@proxmox.com>
> Envoyé: Jeudi 14 Mars 2013 10:22:22
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> 
> I'd like to reopen this thread because this problem is still here and it's really annoying.
> Another possible work-around is to pin the virtio nic irq to one virtual CPU (with /proc/smp_affinity) but (of course) this is still sub-optimal.
> 
> Here are some graphs showing the performance of a heavy loaded machine after and before a live migration from KVM-1.0 to KVM-1.2.
> Host is a Ubuntu 12.10 Kernel 3.5, guest is a Ubuntu 10.04.4 LTS Kernel 2.6.32
> 
> History
> -------
> CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-10days_zps4a088ff0.png.html
> CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/Load-10days_zpsff27f212.png.html
> NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-10days_zps003dd039.png.html
> NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-10days_zpsfc3cba8c.png.html
> 
> Current
> -------
> CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-2days_zpsd362cac6.png.html
> CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/load-2days_zpsd4b7b50d.png.html
> NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-2days_zps8f5458c9.png.html
> NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-2days_zps299338b9.png.html
> 
> Red arrows indicate the kvm version change.
> Black arrows indicate when I "pinned" the virtio NIC IRQ to only one CPU (that machine has 2 virtual cores).
> As can be seen the performance penalty is rather high: that machine was almost unusable!
> 
> Does version 1.3 fixes this issue?
> 
> Could someone with the required knowledge look into this, please?
> Please, this is a very nasty bug because I guess I'm not the only one who is unable to upgrade all the machines with a (not-so) old kernel... :)
> 
> Thanks!
> 
> Davide Guerri.
> 
> 
> 
> 
> On 09/dic/2012, at 19:38, Alexandre DERUMIER <aderumier@odiso.com> wrote:
> 
> >>> Have you had any further progress on this regression/problem?
> >
> > Hi Peter,
> > I didn't re-tested myself,
> > but a proxmox user who's have the problem with qemu-kvm 1.2, with windows guest and linux guest,
> > don't have the problem anymore with qemu 1.3.
> >
> > http://forum.proxmox.com/threads/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed
> >
> > I'll try to redone test myself this week
> >
> > Regards,
> >
> > Alexandre
> >
> > ----- Mail original -----
> >
> > De: "Peter Lieven" <pl@dlhnet.de>
> > À: "Alexandre DERUMIER" <aderumier@odiso.com>
> > Cc: "Dietmar Maurer" <dietmar@proxmox.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>, "Peter Lieven" <lieven-lists@dlhnet.de>
> > Envoyé: Lundi 3 Décembre 2012 12:23:11
> > Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> >
> >
> > Am 16.11.2012 um 12:00 schrieb Alexandre DERUMIER <aderumier@odiso.com>: 
> >
> >>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
> >>>>
> >>>> Sigh :-/
> >>
> >> Hi,
> >>
> >> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
> >>
> >> They all use broadcom bnx2 network card (don't know if it can be related)
> >>
> >> host kernel : rhel 63 with 2.6.32 kernel
> >>
> >> guest kernel : 2.6.32 (debian squeeze, ubuntu).
> >>
> >> No problem with guest kernel 3.2
> >
> > Have you had any further progress on this regression/problem?
> >
> > Thanks,
> > Peter
> >
> >>
> >>
> >>
> >>
> >> ----- Mail original -----
> >>
> >> De: "Dietmar Maurer" <dietmar@proxmox.com>
> >> À: "Peter Lieven" <lieven-lists@dlhnet.de>
> >> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
> >> Envoyé: Vendredi 16 Novembre 2012 11:44:26
> >> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> >>
> >>>> I only tested with RHEL6.3 kernel on host.
> >>>
> >>> can you check if there is a difference on interrupt delivery between those
> >>> two?
> >>>
> >>> cat /proc/interrupts should be sufficient after some traffic has flown.
> >>
> >> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
> >>
> >> Sigh :-/
> >
> >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14 17:50                                                         ` Michael S. Tsirkin
@ 2013-03-14 18:15                                                           ` Davide Guerri
  2013-03-14 18:21                                                             ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Davide Guerri @ 2013-03-14 18:15 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel, Alexandre DERUMIER,
	Davide Guerri, Jan Kiszka, Peter Lieven, Dietmar Maurer

[-- Attachment #1: Type: text/plain, Size: 5792 bytes --]

Of course I can do some tests, but a kernel upgrade is not an option here :(

However, I don't think this is only a guest problem, since it was triggered by something between KVM 1.0 and 1.2.



On 14/mar/2013, at 18:50, Michael S. Tsirkin <mst@redhat.com> wrote:

> 
> Could be one of the bugs fixed in guest drivers since 2.6.32.
> For example, 39d321577405e8e269fd238b278aaf2425fa788a ?
> Does it help if you try a more recent guest?
> 
> On Thu, Mar 14, 2013 at 11:43:30AM +0100, Alexandre DERUMIER wrote:
>> I don't think it's fixed in 1.3 or 1.4, some proxmox users have reported again this bug with guest kernel 2.6.32. (proxmox host is rhel 6.3 kernel + qemu 1.4)
>> 
>> 
>> 
>> ----- Mail original -----
>> 
>> De: "Davide Guerri" <d.guerri@unidata.it>
>> À: "Alexandre DERUMIER" <aderumier@odiso.com>
>> Cc: "Peter Lieven" <pl@dlhnet.de>, "Michael S. Tsirkin" <mst@redhat.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, qemu-devel@nongnu.org, "Jan Kiszka" <jan.kiszka@web.de>, "Peter Lieven" <lieven-lists@dlhnet.de>, "Dietmar Maurer" <dietmar@proxmox.com>
>> Envoyé: Jeudi 14 Mars 2013 10:22:22
>> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>> 
>> I'd like to reopen this thread because this problem is still here and it's really annoying.
>> Another possible work-around is to pin the virtio nic irq to one virtual CPU (with /proc/smp_affinity) but (of course) this is still sub-optimal.
>> 
>> Here are some graphs showing the performance of a heavy loaded machine after and before a live migration from KVM-1.0 to KVM-1.2.
>> Host is a Ubuntu 12.10 Kernel 3.5, guest is a Ubuntu 10.04.4 LTS Kernel 2.6.32
>> 
>> History
>> -------
>> CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-10days_zps4a088ff0.png.html
>> CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/Load-10days_zpsff27f212.png.html
>> NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-10days_zps003dd039.png.html
>> NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-10days_zpsfc3cba8c.png.html
>> 
>> Current
>> -------
>> CPU time: http://s1299.beta.photobucket.com/user/dguerri/media/CPU-2days_zpsd362cac6.png.html
>> CPU Load: http://s1299.beta.photobucket.com/user/dguerri/media/load-2days_zpsd4b7b50d.png.html
>> NET pps: http://s1299.beta.photobucket.com/user/dguerri/media/pps-2days_zps8f5458c9.png.html
>> NET Mbps: http://s1299.beta.photobucket.com/user/dguerri/media/Mbps-2days_zps299338b9.png.html
>> 
>> Red arrows indicate the kvm version change.
>> Black arrows indicate when I "pinned" the virtio NIC IRQ to only one CPU (that machine has 2 virtual cores).
>> As can be seen the performance penalty is rather high: that machine was almost unusable!
>> 
>> Does version 1.3 fixes this issue?
>> 
>> Could someone with the required knowledge look into this, please?
>> Please, this is a very nasty bug because I guess I'm not the only one who is unable to upgrade all the machines with a (not-so) old kernel... :)
>> 
>> Thanks!
>> 
>> Davide Guerri.
>> 
>> 
>> 
>> 
>> On 09/dic/2012, at 19:38, Alexandre DERUMIER <aderumier@odiso.com> wrote:
>> 
>>>>> Have you had any further progress on this regression/problem?
>>> 
>>> Hi Peter,
>>> I didn't re-tested myself,
>>> but a proxmox user who's have the problem with qemu-kvm 1.2, with windows guest and linux guest,
>>> don't have the problem anymore with qemu 1.3.
>>> 
>>> http://forum.proxmox.com/threads/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed
>>> 
>>> I'll try to redone test myself this week
>>> 
>>> Regards,
>>> 
>>> Alexandre
>>> 
>>> ----- Mail original -----
>>> 
>>> De: "Peter Lieven" <pl@dlhnet.de>
>>> À: "Alexandre DERUMIER" <aderumier@odiso.com>
>>> Cc: "Dietmar Maurer" <dietmar@proxmox.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>, "Peter Lieven" <lieven-lists@dlhnet.de>
>>> Envoyé: Lundi 3 Décembre 2012 12:23:11
>>> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>>> 
>>> 
>>> Am 16.11.2012 um 12:00 schrieb Alexandre DERUMIER <aderumier@odiso.com>: 
>>> 
>>>>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>>>>> 
>>>>>> Sigh :-/
>>>> 
>>>> Hi,
>>>> 
>>>> I can reproduce the bug on all my dell servers,differents generation (R710 (intel),R815 (amd), 2950 (intel).
>>>> 
>>>> They all use broadcom bnx2 network card (don't know if it can be related)
>>>> 
>>>> host kernel : rhel 63 with 2.6.32 kernel
>>>> 
>>>> guest kernel : 2.6.32 (debian squeeze, ubuntu).
>>>> 
>>>> No problem with guest kernel 3.2
>>> 
>>> Have you had any further progress on this regression/problem?
>>> 
>>> Thanks,
>>> Peter
>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ----- Mail original -----
>>>> 
>>>> De: "Dietmar Maurer" <dietmar@proxmox.com>
>>>> À: "Peter Lieven" <lieven-lists@dlhnet.de>
>>>> Cc: "Stefan Hajnoczi" <stefanha@gmail.com>, "Peter Lieven" <pl@dlhnet.de>, "Jan Kiszka" <jan.kiszka@web.de>, qemu-devel@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>
>>>> Envoyé: Vendredi 16 Novembre 2012 11:44:26
>>>> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
>>>> 
>>>>>> I only tested with RHEL6.3 kernel on host.
>>>>> 
>>>>> can you check if there is a difference on interrupt delivery between those
>>>>> two?
>>>>> 
>>>>> cat /proc/interrupts should be sufficient after some traffic has flown.
>>>> 
>>>> While trying to reproduce the bug, we just detected that it depends on the hardware (mainboard) you run on.
>>>> 
>>>> Sigh :-/
>>> 
>>> 
> 


[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5930 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14 18:15                                                           ` Davide Guerri
@ 2013-03-14 18:21                                                             ` Peter Lieven
  2013-03-14 23:04                                                               ` Davide Guerri
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2013-03-14 18:21 UTC (permalink / raw)
  To: Davide Guerri
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Alexandre DERUMIER, Jan Kiszka, Peter Lieven, Dietmar Maurer


On 14.03.2013 at 19:15, Davide Guerri <d.guerri@unidata.it> wrote:

> Of course I can do some test but a kernel upgrade is not an option here :( 

Disabling the in-kernel irqchip (the default since 1.2.0) should also help; maybe that is an option.

Peter

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14 18:21                                                             ` Peter Lieven
@ 2013-03-14 23:04                                                               ` Davide Guerri
  2013-03-15  7:23                                                                 ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Davide Guerri @ 2013-03-14 23:04 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Alexandre DERUMIER, Jan Kiszka, Peter Lieven, Dietmar Maurer

[-- Attachment #1: Type: text/plain, Size: 527 bytes --]

Yes this is definitely an option :)

Just out of curiosity, what is the effect of the "in-kernel irqchip"?
Is it possible to disable it on a "live" domain?

Cheers,
 Davide


On 14/mar/2013, at 19:21, Peter Lieven <pl@dlhnet.de> wrote:

> 
> Am 14.03.2013 um 19:15 schrieb Davide Guerri <d.guerri@unidata.it>:
> 
>> Of course I can do some test but a kernel upgrade is not an option here :( 
> 
> disabling the in-kernel irqchip (default since 1.2.0) should also help, maybe this is an option.
> 
> Peter
> 
> 


[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 5930 bytes --]

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-14 23:04                                                               ` Davide Guerri
@ 2013-03-15  7:23                                                                 ` Peter Lieven
  2013-03-17  9:08                                                                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 51+ messages in thread
From: Peter Lieven @ 2013-03-15  7:23 UTC (permalink / raw)
  To: Davide Guerri
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, qemu-devel,
	Alexandre DERUMIER, Jan Kiszka, Peter Lieven, Dietmar Maurer

On 15.03.2013 00:04, Davide Guerri wrote:
> Yes this is definitely an option :)
>
> Just for curiosity, what is the effect of "in-kernel irqchip"?

It emulates the irqchip in-kernel (in the KVM kernel module), which
avoids userspace exits to qemu. In your particular case I remember
that disabling it made all IRQs get delivered to vCPU 0 only. So I think this is
a workaround and not the real fix. I think Michael is right that it is a
guest kernel bug. It would be good to find out what it is and ask
the 2.6.32 maintainers to include it. I have further seen that
with more recent guest kernels and the in-kernel irqchip the IRQs are delivered
to vCPU 0 only again (without multiqueue).

> Is it possible to disable it on a "live" domain?

Try it, I don't know. You definitely have to do a live migration for it,
but I have no clue whether the VM will survive it.

Peter

>
> Cheers,
>   Davide
>
>
> On 14/mar/2013, at 19:21, Peter Lieven <pl@dlhnet.de> wrote:
>
>>
>> Am 14.03.2013 um 19:15 schrieb Davide Guerri <d.guerri@unidata.it>:
>>
>>> Of course I can do some test but a kernel upgrade is not an option here :(
>>
>> disabling the in-kernel irqchip (default since 1.2.0) should also help, maybe this is an option.
>>
>> Peter
>>
>>
>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-15  7:23                                                                 ` Peter Lieven
@ 2013-03-17  9:08                                                                   ` Michael S. Tsirkin
  2013-03-18  9:50                                                                     ` Alexandre DERUMIER
  0 siblings, 1 reply; 51+ messages in thread
From: Michael S. Tsirkin @ 2013-03-17  9:08 UTC (permalink / raw)
  To: Peter Lieven
  Cc: Stefan Hajnoczi, qemu-devel, Alexandre DERUMIER, Davide Guerri,
	Jan Kiszka, Peter Lieven, Dietmar Maurer

On Fri, Mar 15, 2013 at 08:23:44AM +0100, Peter Lieven wrote:
> On 15.03.2013 00:04, Davide Guerri wrote:
> >Yes this is definitely an option :)
> >
> >Just for curiosity, what is the effect of "in-kernel irqchip"?
> 
> it emulates the irqchip in-kernel (in the KVM kernel module) which
> avoids userspace exits to qemu. in your particular case I remember
> that it made all IRQs deliverd to vcpu0 on. So I think this is a workaround
> and not the real fix. I think Michael is right that it is a
> client kernel bug. It would be good to find out what it is and ask
> the 2.6.32 maintainers to include it. i further have seen that
> with more recent kernels and inkernel-irqchip the irqs are delivered
> to vcpu0 only again (without multiqueue).
>
> >Is it possible to disable it on a "live" domain?
> 
> try it. i don't know. you definetely have to do a live migration for it,
> but I have no clue if the VM will survice this.
> 
> Peter

I doubt you can migrate VMs between irqchip/non irqchip configurations.

> >
> >Cheers,
> >  Davide
> >
> >
> >On 14/mar/2013, at 19:21, Peter Lieven <pl@dlhnet.de> wrote:
> >
> >>
> >>Am 14.03.2013 um 19:15 schrieb Davide Guerri <d.guerri@unidata.it>:
> >>
> >>>Of course I can do some test but a kernel upgrade is not an option here :(
> >>
> >>disabling the in-kernel irqchip (default since 1.2.0) should also help, maybe this is an option.
> >>
> >>Peter
> >>
> >>
> >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-17  9:08                                                                   ` Michael S. Tsirkin
@ 2013-03-18  9:50                                                                     ` Alexandre DERUMIER
  2013-03-18  9:53                                                                       ` Michael S. Tsirkin
  0 siblings, 1 reply; 51+ messages in thread
From: Alexandre DERUMIER @ 2013-03-18  9:50 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel, Davide Guerri,
	Jan Kiszka, Peter Lieven, Dietmar Maurer

Hello, about this bug:


The latest Proxmox distribution uses the 2.6.32 RHEL 6.3 kernel + qemu 1.4, and has this problem with guests running a 2.6.32 kernel.

Do you think that +x2apic in the guest CPU flags could help?

(I think it's enabled by default in RHEV/oVirt, but not in Proxmox.)



----- Mail original -----

De: "Michael S. Tsirkin" <mst@redhat.com>
À: "Peter Lieven" <pl@dlhnet.de>
Cc: "Davide Guerri" <d.guerri@unidata.it>, "Alexandre DERUMIER" <aderumier@odiso.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, qemu-devel@nongnu.org, "Jan Kiszka" <jan.kiszka@web.de>, "Peter Lieven" <lieven-lists@dlhnet.de>, "Dietmar Maurer" <dietmar@proxmox.com>
Envoyé: Dimanche 17 Mars 2013 10:08:17
Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores

On Fri, Mar 15, 2013 at 08:23:44AM +0100, Peter Lieven wrote:
> On 15.03.2013 00:04, Davide Guerri wrote:
> >Yes this is definitely an option :)
> >
> >Just for curiosity, what is the effect of "in-kernel irqchip"?
>
> it emulates the irqchip in-kernel (in the KVM kernel module) which
> avoids userspace exits to qemu. in your particular case I remember
> that it made all IRQs deliverd to vcpu0 on. So I think this is a workaround
> and not the real fix. I think Michael is right that it is a
> client kernel bug. It would be good to find out what it is and ask
> the 2.6.32 maintainers to include it. i further have seen that
> with more recent kernels and inkernel-irqchip the irqs are delivered
> to vcpu0 only again (without multiqueue).
>
> >Is it possible to disable it on a "live" domain?
>
> try it. i don't know. you definetely have to do a live migration for it, 
> but I have no clue if the VM will survice this.
>
> Peter

I doubt you can migrate VMs between irqchip/non irqchip configurations.

> >
> >Cheers,
> > Davide
> >
> >
> >On 14/mar/2013, at 19:21, Peter Lieven <pl@dlhnet.de> wrote:
> >
> >>
> >>Am 14.03.2013 um 19:15 schrieb Davide Guerri <d.guerri@unidata.it>:
> >>
> >>>Of course I can do some test but a kernel upgrade is not an option here :(
> >>
> >>disabling the in-kernel irqchip (default since 1.2.0) should also help, maybe this is an option.
> >>
> >>Peter
> >>
> >>
> >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-18  9:50                                                                     ` Alexandre DERUMIER
@ 2013-03-18  9:53                                                                       ` Michael S. Tsirkin
  2013-03-25 13:34                                                                         ` Peter Lieven
  0 siblings, 1 reply; 51+ messages in thread
From: Michael S. Tsirkin @ 2013-03-18  9:53 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel, Davide Guerri,
	Jan Kiszka, Peter Lieven, Dietmar Maurer

Do you see all interrupts going to the same CPU? If yes, is
irqbalance running in the guest?

On Mon, Mar 18, 2013 at 10:50:19AM +0100, Alexandre DERUMIER wrote:
> Hello, about this bug,
> 
> 
> Last Proxmox distrib use 2.6.32 rhel 6.3 kernel + qemu 1.4 , and have this problem with guest with 2.6.32 kernel.
> 
> do you think that +x2apic in guest cpu could help ?
> 
> (I think it's enable by default in RHEV/OVIRT ? but not in proxmox)
> 
> 
> 
> ----- Mail original -----
> 
> De: "Michael S. Tsirkin" <mst@redhat.com>
> À: "Peter Lieven" <pl@dlhnet.de>
> Cc: "Davide Guerri" <d.guerri@unidata.it>, "Alexandre DERUMIER" <aderumier@odiso.com>, "Stefan Hajnoczi" <stefanha@gmail.com>, qemu-devel@nongnu.org, "Jan Kiszka" <jan.kiszka@web.de>, "Peter Lieven" <lieven-lists@dlhnet.de>, "Dietmar Maurer" <dietmar@proxmox.com>
> Envoyé: Dimanche 17 Mars 2013 10:08:17
> Objet: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
> 
> On Fri, Mar 15, 2013 at 08:23:44AM +0100, Peter Lieven wrote:
> > On 15.03.2013 00:04, Davide Guerri wrote:
> > >Yes this is definitely an option :)
> > >
> > >Just for curiosity, what is the effect of "in-kernel irqchip"?
> >
> > It emulates the irqchip in-kernel (in the KVM kernel module), which
> > avoids userspace exits to qemu. In your particular case I remember
> > that it made all IRQs get delivered to vcpu0 only. So I think this is a workaround
> > and not the real fix. I think Michael is right that it is a
> > guest kernel bug. It would be good to find out what it is and ask
> > the 2.6.32 maintainers to include the fix. I have further seen that
> > with more recent kernels and in-kernel irqchip the IRQs are delivered
> > to vcpu0 only again (without multiqueue).
> >
> > >Is it possible to disable it on a "live" domain?
> >
> > Try it, I don't know. You definitely have to do a live migration for it,
> > but I have no clue if the VM will survive this.
> >
> > Peter
> 
> I doubt you can migrate VMs between irqchip/non-irqchip configurations.
> 
> > >
> > >Cheers,
> > > Davide
> > >
> > >
> > >On 14/mar/2013, at 19:21, Peter Lieven <pl@dlhnet.de> wrote:
> > >
> > >>
> > >>Am 14.03.2013 um 19:15 schrieb Davide Guerri <d.guerri@unidata.it>:
> > >>
> > >>>Of course I can do some tests, but a kernel upgrade is not an option here :(
> > >>
> > >>Disabling the in-kernel irqchip (default since 1.2.0) should also help; maybe this is an option.
> > >>
> > >>Peter
> > >>
> > >>
> > >

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
  2013-03-18  9:53                                                                       ` Michael S. Tsirkin
@ 2013-03-25 13:34                                                                         ` Peter Lieven
  0 siblings, 0 replies; 51+ messages in thread
From: Peter Lieven @ 2013-03-25 13:34 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel, Alexandre DERUMIER,
	Davide Guerri, Jan Kiszka, Dietmar Maurer

On 18.03.2013 10:53, Michael S. Tsirkin wrote:
> Do you see all interrupts going to the same CPU? If yes, is
> irqbalance running in the guest?
I had the same issue today. The problem is that IRQs are going to all vCPUs. If smp_affinity is set to one CPU only,
performance is OK. irqbalance is running, but that makes no difference.
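
(For reference, a rough sketch of that pinning, assuming the virtio-net vector turned out to be IRQ 25 -- the real number has to be read from /proc/interrupts, and irqbalance will likely rewrite the mask again unless it is stopped or configured to ban that IRQ:

  grep virtio /proc/interrupts          # find the IRQ number(s) of the virtio-net queues
  echo 1 > /proc/irq/25/smp_affinity    # CPU bitmask 0x1 = deliver to vCPU0 only
)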

Peter

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2013-03-25 13:35 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-11-05 11:51 [Qemu-devel] slow virtio network with vhost=on and multiple cores Dietmar Maurer
2012-11-05 12:36 ` Stefan Hajnoczi
2012-11-05 15:01   ` Dietmar Maurer
2012-11-06  6:12   ` Dietmar Maurer
2012-11-06  7:46     ` Peter Lieven
2012-11-06  7:51       ` Dietmar Maurer
2012-11-06  9:01       ` Dietmar Maurer
2012-11-06  9:26         ` Jan Kiszka
2012-11-06  9:46           ` Dietmar Maurer
2012-11-06 10:12             ` Jan Kiszka
2012-11-06 11:24               ` Dietmar Maurer
2012-11-08  9:39                 ` Jan Kiszka
2012-11-08 10:55                   ` Peter Lieven
2012-11-08 12:03                   ` Dietmar Maurer
2012-11-08 15:02                     ` Peter Lieven
2012-11-09  5:55                       ` Dietmar Maurer
2012-11-09 17:27                         ` Peter Lieven
2012-11-09 17:51                           ` Peter Lieven
2012-11-09 18:03                             ` Peter Lieven
2012-11-13 11:44                               ` Peter Lieven
2012-11-13 11:49                               ` Peter Lieven
2012-11-13 16:22                                 ` Michael S. Tsirkin
2012-11-13 16:21                                   ` Peter Lieven
2012-11-13 16:26                                     ` Michael S. Tsirkin
2012-11-13 16:27                                       ` Peter Lieven
2012-11-13 16:59                                         ` Dietmar Maurer
2012-11-13 17:03                                         ` Michael S. Tsirkin
2012-11-13 16:33                                   ` Michael S. Tsirkin
2012-11-13 16:35                                     ` Peter Lieven
2012-11-13 16:46                                       ` Michael S. Tsirkin
2012-11-13 17:03                                     ` Dietmar Maurer
2012-11-13 17:07                                       ` Michael S. Tsirkin
2012-11-13 17:38                                         ` Dietmar Maurer
2012-11-15 18:26                                           ` Peter Lieven
2012-11-16 10:44                                             ` Dietmar Maurer
2012-11-16 11:00                                               ` Alexandre DERUMIER
2012-12-03 11:23                                                 ` Peter Lieven
2012-12-09 18:38                                                   ` Alexandre DERUMIER
2013-03-14  9:22                                                     ` Davide Guerri
2013-03-14 10:43                                                       ` Alexandre DERUMIER
2013-03-14 17:50                                                         ` Michael S. Tsirkin
2013-03-14 18:15                                                           ` Davide Guerri
2013-03-14 18:21                                                             ` Peter Lieven
2013-03-14 23:04                                                               ` Davide Guerri
2013-03-15  7:23                                                                 ` Peter Lieven
2013-03-17  9:08                                                                   ` Michael S. Tsirkin
2013-03-18  9:50                                                                     ` Alexandre DERUMIER
2013-03-18  9:53                                                                       ` Michael S. Tsirkin
2013-03-25 13:34                                                                         ` Peter Lieven
2012-11-19 13:49                                     ` Peter Lieven
2012-11-06  6:48   ` Dietmar Maurer
