* Network I/O performance
@ 2009-05-12  0:28 Fischer, Anna
  2009-05-13  7:23 ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Fischer, Anna @ 2009-05-12  0:28 UTC (permalink / raw)
  To: kvm

I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the tun/tap device model and the Linux bridge kernel module to connect my VM to the network. I have 2 10G Intel 82598 network devices (with the ixgbe driver) attached to my machine and I want to do packet routing in my VM (the VM has two virtual network interfaces configured). Analysing the network performance of the standard QEMU emulated NICs, I get less than 1G of throughput on those 10G links. Surprisingly though, I don't really see CPU utilization being maxed out. This is a dual-core machine, and mpstat shows me that both CPUs are about 40% idle. My VM is more or less unresponsive due to the high network processing load, while the host OS still seems to be in good shape. How can I best tune this setup to achieve the best possible performance with KVM? I know there is virtio and I know there is PCI pass-through, but those models are not an option for me right now.


* Re: Network I/O performance
  2009-05-12  0:28 Network I/O performance Fischer, Anna
@ 2009-05-13  7:23 ` Avi Kivity
  2009-05-13 15:56   ` Fischer, Anna
  0 siblings, 1 reply; 13+ messages in thread
From: Avi Kivity @ 2009-05-13  7:23 UTC (permalink / raw)
  To: Fischer, Anna; +Cc: kvm

Fischer, Anna wrote:
> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the tun/tap device model and the Linux bridge kernel module to connect my VM to the network. I have 2 10G Intel 82598 network devices (with the ixgbe driver) attached to my machine and I want to do packet routing in my VM (the VM has two virtual network interfaces configured). Analysing the network performance of the standard QEMU emulated NICs, I get less that 1G of throughput on those 10G links. Surprisingly though, I don't really see CPU utilization being maxed out. This is a dual core machine, and mpstat shows me that both CPUs are about 40% idle. My VM is more or less unresponsive due to the high network processing load while the host OS still seems to be in good shape. How can I best tune this setup to achieve best possible performance with KVM? I know there is virtIO and I know there is PCI pass-through, but those models are not an option for me right now.
>   

How many cpus are assigned to the guest?  If only one, then 40% idle 
equates to 100% of a core for the guest and 20% for housekeeping.

If this is the case, you could try pinning the vcpu thread ("info cpus" 
from the monitor) to one core.  You should then see 100%/20% cpu load 
distribution.
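
Roughly like this (the thread id and core number below are just examples; 
the exact "info cpus" output varies by version):

  (qemu) info cpus
  * CPU #0: pc=0xffffffff8100a8c6 thread_id=12345

  # on the host, pin that vcpu thread to core 0
  taskset -p -c 0 12345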

wrt emulated NIC performance, I'm guessing you're not doing TCP?  If you 
were, we might do something with TSO.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.




* RE: Network I/O performance
  2009-05-13  7:23 ` Avi Kivity
@ 2009-05-13 15:56   ` Fischer, Anna
  2009-05-17 21:14     ` Avi Kivity
  0 siblings, 1 reply; 13+ messages in thread
From: Fischer, Anna @ 2009-05-13 15:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

> Subject: Re: Network I/O performance
> 
> Fischer, Anna wrote:
> > I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
> the tun/tap device model and the Linux bridge kernel module to connect
> my VM to the network. I have 2 10G Intel 82598 network devices (with
> the ixgbe driver) attached to my machine and I want to do packet
> routing in my VM (the VM has two virtual network interfaces
> configured). Analysing the network performance of the standard QEMU
> emulated NICs, I get less that 1G of throughput on those 10G links.
> Surprisingly though, I don't really see CPU utilization being maxed
> out. This is a dual core machine, and mpstat shows me that both CPUs
> are about 40% idle. My VM is more or less unresponsive due to the high
> network processing load while the host OS still seems to be in good
> shape. How can I best tune this setup to achieve best possible
> performance with KVM? I know there is virtIO and I know there is PCI
> pass-through, but those models are not an option for me right now.
> >
> 
> How many cpus are assigned to the guest?  If only one, then 40% idle
> equates to 100% of a core for the guest and 20% for housekeeping.

No, the machine has a dual core CPU and I have configured the guest with 2 CPUs. So I would want to see KVM using up to 200% of CPU, ideally. There is nothing else running on that machine.
 
> If this is the case, you could try pinning the vcpu thread ("info cpus"
> from the monitor) to one core.  You should then see 100%/20% cpu load
> distribution.
> 
> wrt emulated NIC performance, I'm guessing you're not doing tcp?  If
> you
> were we might do something with TSO.

No, I am measuring UDP throughput performance. I have now tried using a different NIC model, and the e1000 model seems to achieve slightly better performance (although CPU only goes up to 110%). I have also been running virtio now; while its performance with a 2.6.20 guest kernel was very poor too, after changing the guest kernel to 2.6.30 I get reasonable performance and higher CPU utilization (e.g. it goes up to 180-190%). I have to throttle the incoming bandwidth though, because as soon as I go over a certain threshold, CPU usage drops back to 90% and throughput drops too.
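
For reference, I select the NIC model on the QEMU command line roughly like this (only the relevant options shown; tap names are placeholders):

  qemu ... -net nic,model=e1000  -net tap,ifname=tap0,script=no
  qemu ... -net nic,model=virtio -net tap,ifname=tap0,script=no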

I have not seen this with Xen/VMware, where I mostly managed to max out the CPU completely before throughput stopped increasing.

I have also realized that when using the tun/tap configuration with a bridge, packets are replicated on all tap devices when QEMU writes packets to the tun interface. I guess this is a limitation of tun/tap, as it does not know which tap device the packet has to go to. The tap device then eventually drops the packet when the destination MAC is not its own, but it still receives it, which causes more overhead in the system overall.
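
A setup along these lines shows the effect - with both guest NICs and both tap devices ending up on QEMU's default internal vlan 0 (the options here are illustrative, not my exact command line):

  qemu ... \
    -net nic,macaddr=52:54:00:00:00:01 -net tap,ifname=tap0,script=no \
    -net nic,macaddr=52:54:00:00:00:02 -net tap,ifname=tap1,script=no

Anything one guest NIC sends ends up copied to both tap0 and tap1, since all four endpoints sit on the same internal vlan.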

I have not yet experimented much with pinning VCPU threads to cores. I will do that as well.



* Re: Network I/O performance
  2009-05-13 15:56   ` Fischer, Anna
@ 2009-05-17 21:14     ` Avi Kivity
  2009-05-19  1:30       ` Herbert Xu
                         ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Avi Kivity @ 2009-05-17 21:14 UTC (permalink / raw)
  To: Fischer, Anna; +Cc: kvm, Herbert Xu

Fischer, Anna wrote:
>> Subject: Re: Network I/O performance
>>
>> Fischer, Anna wrote:
>>     
>>> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
>>>       
>> the tun/tap device model and the Linux bridge kernel module to connect
>> my VM to the network. I have 2 10G Intel 82598 network devices (with
>> the ixgbe driver) attached to my machine and I want to do packet
>> routing in my VM (the VM has two virtual network interfaces
>> configured). Analysing the network performance of the standard QEMU
>> emulated NICs, I get less that 1G of throughput on those 10G links.
>> Surprisingly though, I don't really see CPU utilization being maxed
>> out. This is a dual core machine, and mpstat shows me that both CPUs
>> are about 40% idle. My VM is more or less unresponsive due to the high
>> network processing load while the host OS still seems to be in good
>> shape. How can I best tune this setup to achieve best possible
>> performance with KVM? I know there is virtIO and I know there is PCI
>> pass-through, but those models are not an option for me right now.
>>     
>> How many cpus are assigned to the guest?  If only one, then 40% idle
>> equates to 100% of a core for the guest and 20% for housekeeping.
>>     
>
> No, the machine has a dual core CPU and I have configured the guest with 2 CPUs. So I would want to see KVM using up to 200% of CPU, ideally. There is nothing else running on that machine.
>   

Well, it really depends on whether the workload can utilize both vcpus.

>  
>   
>> If this is the case, you could try pinning the vcpu thread ("info cpus"
>> from the monitor) to one core.  You should then see 100%/20% cpu load
>> distribution.
>>
>> wrt emulated NIC performance, I'm guessing you're not doing tcp?  If
>> you
>> were we might do something with TSO.
>>     
>
> No, I am measuring UDP throughput performance. I have now tried using a different NIC model, and the e1000 model seems to achieve slightly better performance (CPU goes up to 110% only though). I have also been running virtio now, and while its performance with 2.6.20 was very poor too, when changing the guest kernel to 2.6.30, I get a reasonable performance and higher CPU utilization (e.g. it goes up to 180-190%). I have to throttle the incoming bandwidth though, because as soon as I go over a certain threshold, CPU goes back down to 90% and throughput goes down too. 
>   

Yes, there's a known issue with UDP, where we don't report congestion 
and the queues start dropping packets.  There's a patch for tun queued 
for the next merge window; you'll need a 2.6.31 host for that IIRC 
(Herbert?)

> I have not seen this with Xen/VMware where I mostly managed to max out CPU completely before throughput performance did not go up anymore.
>
> I have also realized that when using the tun/tap configuration with a bridge, packets are replicated on all tap devices when QEMU writes packets to the tun interface. I guess this is a limitation of tun/tap as it does not know to which tap device the packet has to go to. The tap device then eventually drops packets when the destination MAC is not its own, but it still receives the packet which causes more overhead in the system overall.
>   

Right, I guess you'd see this with a real switch as well?  Maybe have 
your guest send a packet out once in a while so the bridge can learn its 
MAC address (we do this after migration, for example).
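
Something like a gratuitous ARP from inside the guest would do (interface and address below are placeholders):

  arping -U -I eth0 -c 1 192.168.0.10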

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



* Re: Network I/O performance
  2009-05-17 21:14     ` Avi Kivity
@ 2009-05-19  1:30       ` Herbert Xu
  2009-05-19  4:53         ` Avi Kivity
  2009-05-19  7:18       ` tun/tap and Vlans (was: Re: Network I/O performance) Lukas Kolbe
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2009-05-19  1:30 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Fischer, Anna, kvm

On Mon, May 18, 2009 at 12:14:34AM +0300, Avi Kivity wrote:
>
>> No, I am measuring UDP throughput performance. I have now tried using a 
>> different NIC model, and the e1000 model seems to achieve slightly 
>> better performance (CPU goes up to 110% only though). I have also been 
>> running virtio now, and while its performance with 2.6.20 was very poor 
>> too, when changing the guest kernel to 2.6.30, I get a reasonable 
>> performance and higher CPU utilization (e.g. it goes up to 180-190%). I 
>> have to throttle the incoming bandwidth though, because as soon as I go 
>> over a certain threshold, CPU goes back down to 90% and throughput goes 
>> down too.   
>
> Yes, there's a known issue with UDP, where we don't report congestion  
> and the queues start dropping packets.  There's a patch for tun queued  
> for the next merge window; you'll need a 2.6.31 host for that IIRC  
> (Herbert?)

It should be in 2.6.30 in fact.  However, this is for outbound
traffic only since inbound traffic shouldn't have this problem
of the guest sending faster than the wire.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: Network I/O performance
  2009-05-19  1:30       ` Herbert Xu
@ 2009-05-19  4:53         ` Avi Kivity
  0 siblings, 0 replies; 13+ messages in thread
From: Avi Kivity @ 2009-05-19  4:53 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Fischer, Anna, kvm

Herbert Xu wrote:
>> Yes, there's a known issue with UDP, where we don't report congestion  
>> and the queues start dropping packets.  There's a patch for tun queued  
>> for the next merge window; you'll need a 2.6.31 host for that IIRC  
>> (Herbert?)
>>     
>
> It should be in 2.6.30 in fact.  However, this is for outbound
> traffic only since inbound traffic shouldn't have this problem
> of the guest sending faster than the wire.
>   

Is there a corresponding qemu change?  Or is this already handled by 
the existing code?

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* tun/tap and Vlans (was: Re: Network I/O performance)
  2009-05-17 21:14     ` Avi Kivity
  2009-05-19  1:30       ` Herbert Xu
@ 2009-05-19  7:18       ` Lukas Kolbe
  2009-05-19  7:45         ` tun/tap and Vlans Avi Kivity
  2009-05-19 21:22       ` Does KVM suffer from ACK-compression as you increase the number of VMs? Andrew de Andrade
  2009-05-20 10:15       ` Network I/O performance Fischer, Anna
  3 siblings, 1 reply; 13+ messages in thread
From: Lukas Kolbe @ 2009-05-19  7:18 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

Hi all,

On a sidenote:

> > I have also realized that when using the tun/tap configuration with
> > a bridge, packets are replicated on all tap devices when QEMU writes
> > packets to the tun interface. I guess this is a limitation of
> > tun/tap as it does not know to which tap device the packet has to go
> > to. The tap device then eventually drops packets when the
> > destination MAC is not its own, but it still receives the packet 
> > which causes more overhead in the system overall.
> 
> Right, I guess you'd see this with a real switch as well?  Maybe have 
> your guest send a packet out once in a while so the bridge can learn its 
> MAC address (we do this after migration, for example).

Does this mean that it is not possible to have each tun device in a
separate bridge that serves a separate VLAN? We have experienced a
strange problem that we couldn't yet explain. Given this setup:

Guest            Host          
kvm1 --- eth0 -+- bridge0 --- vlan1 \
               |                     +-- eth0
kvm2 -+- eth0 -/                     /
      \- eth1 --- bridge1 --- vlan2 +

When sending packets through kvm2/eth0, they appear on both bridges and
both vlans; the same happens when sending packets through kvm2/eth1.
When the guest has only one interface, the packets appear on only one
bridge and one vlan, as they are supposed to.
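
In case it helps, the host side is built roughly like this (commands reconstructed from memory; interface and bridge names as in the diagram):

  vconfig add eth0 1                               # -> eth0.1 (vlan1)
  vconfig add eth0 2                               # -> eth0.2 (vlan2)
  brctl addbr bridge0 && brctl addif bridge0 eth0.1
  brctl addbr bridge1 && brctl addif bridge1 eth0.2
  # the guests' tap devices are then added to bridge0/bridge1 respectively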

Can this be worked around?

-- 
Lukas




* Re: tun/tap and Vlans
  2009-05-19  7:18       ` tun/tap and Vlans (was: Re: Network I/O performance) Lukas Kolbe
@ 2009-05-19  7:45         ` Avi Kivity
  2009-05-19 19:46           ` Lukas Kolbe
  2009-05-20 10:25           ` Fischer, Anna
  0 siblings, 2 replies; 13+ messages in thread
From: Avi Kivity @ 2009-05-19  7:45 UTC (permalink / raw)
  To: Lukas Kolbe; +Cc: kvm

Lukas Kolbe wrote:
>> Right, I guess you'd see this with a real switch as well?  Maybe have 
>> your guest send a packet out once in a while so the bridge can learn its 
>> MAC address (we do this after migration, for example).
>>     
>
> Does this mean that it is not possible for having each tun device in a
> seperate bridge that serves a seperate Vlan? We have experienced a
> strange problem that we couldn't yet explain. Given this setup:
>
> Guest            Host          
> kvm1 --- eth0 -+- bridge0 --- vlan1 \
>                |                     +-- eth0
> kvm2 -+- eth0 -/                     /
>       \- eth1 --- bridge1 --- vlan2 +
>
> When sending packets through kvm2/eth0, they appear on both bridges and
> also vlans, also when sending packets through kvm2/eth1. When the guest
> has only one interface, the packets only appear on one bridge and one
> vlan as it's supposed to be.
>
> Can this be worked around?
>   

This is strange.  Can you post the command line you used to start kvm2?

-- 
error compiling committee.c: too many arguments to function



* Re: tun/tap and Vlans
  2009-05-19  7:45         ` tun/tap and Vlans Avi Kivity
@ 2009-05-19 19:46           ` Lukas Kolbe
  2009-05-20 10:25           ` Fischer, Anna
  1 sibling, 0 replies; 13+ messages in thread
From: Lukas Kolbe @ 2009-05-19 19:46 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

On Tuesday, 2009-05-19 at 10:45 +0300, Avi Kivity wrote:

Hi,

> > Guest            Host          
> > kvm1 --- eth0 -+- bridge0 --- vlan1 \
> >                |                     +-- eth0
> > kvm2 -+- eth0 -/                     /
> >       \- eth1 --- bridge1 --- vlan2 +
> >
> > When sending packets through kvm2/eth0, they appear on both bridges and
> > also vlans, also when sending packets through kvm2/eth1. When the guest
> > has only one interface, the packets only appear on one bridge and one
> > vlan as it's supposed to be.
> >
> > Can this be worked around?
> >   
> 
> This is strange.  Can you post the command line you used to start kvm2?

Please bear with me - this was a few weeks ago and we didn't investigate
further as we had other problems to solve. I'll set up a testbed next
week and hope to report back with more details.

-- 
Lukas





* Does KVM suffer from ACK-compression as you increase the number of VMs?
  2009-05-17 21:14     ` Avi Kivity
  2009-05-19  1:30       ` Herbert Xu
  2009-05-19  7:18       ` tun/tap and Vlans (was: Re: Network I/O performance) Lukas Kolbe
@ 2009-05-19 21:22       ` Andrew de Andrade
  2009-05-20 10:15       ` Network I/O performance Fischer, Anna
  3 siblings, 0 replies; 13+ messages in thread
From: Andrew de Andrade @ 2009-05-19 21:22 UTC (permalink / raw)
  To: kvm

I recently read the following paper from 2004 that discusses
ACK-compression in a VMware GSX 2.5.1 environment:
http://www.cs.clemson.edu/~jmarty/papers/ccn2004.pdf

I was wondering if anyone had checked to see if KVM also suffers from  
ACK-compression as you increase the number of VMs on each host  
(increasing virtualization overhead)?

If it does suffer delays, what solutions exist for remedying this?

In addition to that, I was also curious what the maximum number of VMs  
people have been able to fit on a host, and what bottlenecks they  
encountered as they reached a maximum level of VMs before things fell  
apart.

thanks,

andrew



* RE: Network I/O performance
  2009-05-17 21:14     ` Avi Kivity
                         ` (2 preceding siblings ...)
  2009-05-19 21:22       ` Does KVM suffer from ACK-compression as you increase the number of VMs? Andrew de Andrade
@ 2009-05-20 10:15       ` Fischer, Anna
  3 siblings, 0 replies; 13+ messages in thread
From: Fischer, Anna @ 2009-05-20 10:15 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm, Herbert Xu

> Subject: Re: Network I/O performance
> 
> Fischer, Anna wrote:
> >> Subject: Re: Network I/O performance
> >>
> >> Fischer, Anna wrote:
> >>
> >>> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I
> use
> >>>
> >> the tun/tap device model and the Linux bridge kernel module to
> connect
> >> my VM to the network. I have 2 10G Intel 82598 network devices (with
> >> the ixgbe driver) attached to my machine and I want to do packet
> >> routing in my VM (the VM has two virtual network interfaces
> >> configured). Analysing the network performance of the standard QEMU
> >> emulated NICs, I get less that 1G of throughput on those 10G links.
> >> Surprisingly though, I don't really see CPU utilization being maxed
> >> out. This is a dual core machine, and mpstat shows me that both CPUs
> >> are about 40% idle. My VM is more or less unresponsive due to the
> high
> >> network processing load while the host OS still seems to be in good
> >> shape. How can I best tune this setup to achieve best possible
> >> performance with KVM? I know there is virtIO and I know there is PCI
> >> pass-through, but those models are not an option for me right now.
> >>
> >> How many cpus are assigned to the guest?  If only one, then 40% idle
> >> equates to 100% of a core for the guest and 20% for housekeeping.
> >>
> >
> > No, the machine has a dual core CPU and I have configured the guest
> with 2 CPUs. So I would want to see KVM using up to 200% of CPU,
> ideally. There is nothing else running on that machine.
> >
> 
> Well, it really depends on the workload, whether it can utilize both
> vcpus.
> 
> >
> >
> >> If this is the case, you could try pinning the vcpu thread ("info
> cpus"
> >> from the monitor) to one core.  You should then see 100%/20% cpu
> load
> >> distribution.
> >>
> >> wrt emulated NIC performance, I'm guessing you're not doing tcp?  If
> >> you
> >> were we might do something with TSO.
> >>
> >
> > No, I am measuring UDP throughput performance. I have now tried using
> a different NIC model, and the e1000 model seems to achieve slightly
> better performance (CPU goes up to 110% only though). I have also been
> running virtio now, and while its performance with 2.6.20 was very poor
> too, when changing the guest kernel to 2.6.30, I get a reasonable
> performance and higher CPU utilization (e.g. it goes up to 180-190%). I
> have to throttle the incoming bandwidth though, because as soon as I go
> over a certain threshold, CPU goes back down to 90% and throughput goes
> down too.
> >
> 
> Yes, there's a known issue with UDP, where we don't report congestion
> and the queues start dropping packets.  There's a patch for tun queued
> for the next merge window; you'll need a 2.6.31 host for that IIRC
> (Herbert?)
> 
> > I have not seen this with Xen/VMware where I mostly managed to max
> out CPU completely before throughput performance did not go up anymore.
> >
> > I have also realized that when using the tun/tap configuration with a
> bridge, packets are replicated on all tap devices when QEMU writes
> packets to the tun interface. I guess this is a limitation of tun/tap
> as it does not know to which tap device the packet has to go to. The
> tap device then eventually drops packets when the destination MAC is
> not its own, but it still receives the packet which causes more
> overhead in the system overall.
> >
> 
> Right, I guess you'd see this with a real switch as well?  Maybe have
> your guest send a packet out once in a while so the bridge can learn
> its
> MAC address (we do this after migration, for example).

No, this is not about the bridge - packets are replicated by tun/tap as far as I can see. In fact I run two bridges and attach my two tap interfaces to those (one tap per bridge, to connect it to the external network). Packets that should only go to one bridge are replicated on the other one, too. This is far from ideal, but I guess the issue is that the tun/tap interface is a 1:N mapping, so there is not much you can do.


* RE: tun/tap and Vlans
  2009-05-19  7:45         ` tun/tap and Vlans Avi Kivity
  2009-05-19 19:46           ` Lukas Kolbe
@ 2009-05-20 10:25           ` Fischer, Anna
  2009-05-20 10:38             ` Avi Kivity
  1 sibling, 1 reply; 13+ messages in thread
From: Fischer, Anna @ 2009-05-20 10:25 UTC (permalink / raw)
  To: Avi Kivity, Lukas Kolbe; +Cc: kvm

> Subject: Re: tun/tap and Vlans
> 
> Lukas Kolbe wrote:
> >> Right, I guess you'd see this with a real switch as well?  Maybe
> have
> >> your guest send a packet out once in a while so the bridge can learn
> its
> >> MAC address (we do this after migration, for example).
> >>
> >
> > Does this mean that it is not possible for having each tun device in
> a
> > seperate bridge that serves a seperate Vlan? We have experienced a
> > strange problem that we couldn't yet explain. Given this setup:
> >
> > Guest            Host
> > kvm1 --- eth0 -+- bridge0 --- vlan1 \
> >                |                     +-- eth0
> > kvm2 -+- eth0 -/                     /
> >       \- eth1 --- bridge1 --- vlan2 +
> >
> > When sending packets through kvm2/eth0, they appear on both bridges
> and
> > also vlans, also when sending packets through kvm2/eth1. When the
> guest
> > has only one interface, the packets only appear on one bridge and one
> > vlan as it's supposed to be.
> >
> > Can this be worked around?
> >
> 
> This is strange.  Can you post the command line you used to start kvm2?

This is exactly my scenario as well. 

When QEMU sends packets coming from a VM through the tun interface, they are passed to both tap devices of that VM, simply because it doesn't know which tap device the packet should go to - it just copies the buffer to the tap interface. The tap interface then eventually discards the packet if the destination MAC address doesn't match its own.

What you would need is a 1:1 mapping, e.g. one tun interface per tap device. 



* Re: tun/tap and Vlans
  2009-05-20 10:25           ` Fischer, Anna
@ 2009-05-20 10:38             ` Avi Kivity
  0 siblings, 0 replies; 13+ messages in thread
From: Avi Kivity @ 2009-05-20 10:38 UTC (permalink / raw)
  To: Fischer, Anna; +Cc: Lukas Kolbe, kvm

Fischer, Anna wrote:
>> Subject: Re: tun/tap and Vlans
>>
>> Lukas Kolbe wrote:
>>     
>>>> Right, I guess you'd see this with a real switch as well?  Maybe
>>>>         
>> have
>>     
>>>> your guest send a packet out once in a while so the bridge can learn
>>>>         
>> its
>>     
>>>> MAC address (we do this after migration, for example).
>>>>
>>>>         
>>> Does this mean that it is not possible for having each tun device in
>>>       
>> a
>>     
>>> seperate bridge that serves a seperate Vlan? We have experienced a
>>> strange problem that we couldn't yet explain. Given this setup:
>>>
>>> Guest            Host
>>> kvm1 --- eth0 -+- bridge0 --- vlan1 \
>>>                |                     +-- eth0
>>> kvm2 -+- eth0 -/                     /
>>>       \- eth1 --- bridge1 --- vlan2 +
>>>
>>> When sending packets through kvm2/eth0, they appear on both bridges
>>>       
>> and
>>     
>>> also vlans, also when sending packets through kvm2/eth1. When the
>>>       
>> guest
>>     
>>> has only one interface, the packets only appear on one bridge and one
>>> vlan as it's supposed to be.
>>>
>>> Can this be worked around?
>>>
>>>       
>> This is strange.  Can you post the command line you used to start kvm2?
>>     
>
> This is exactly my scenario as well. 
>
> When QEMU sends packets through the tun interface coming from a VM then those will be passed to both tap devices of that VM. Simply because it doesn't know where to send the packet to. It just copies the buffer to the tap interface. The tap interface then eventually discards the packet if the MAC address doesn't match its own.
>
> What you would need is a 1:1 mapping, e.g. one tun interface per tap device. 
>   

There ought to be a 1:1 mapping through the vlan parameter.
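
Something along these lines should give each guest NIC its own tap device, by putting each nic/tap pair on its own internal vlan (the values below are just examples):

  qemu ... \
    -net nic,vlan=0,macaddr=52:54:00:00:00:01 -net tap,vlan=0,ifname=tap0,script=no \
    -net nic,vlan=1,macaddr=52:54:00:00:00:02 -net tap,vlan=1,ifname=tap1,script=no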

-- 
error compiling committee.c: too many arguments to function



Thread overview: 13+ messages
2009-05-12  0:28 Network I/O performance Fischer, Anna
2009-05-13  7:23 ` Avi Kivity
2009-05-13 15:56   ` Fischer, Anna
2009-05-17 21:14     ` Avi Kivity
2009-05-19  1:30       ` Herbert Xu
2009-05-19  4:53         ` Avi Kivity
2009-05-19  7:18       ` tun/tap and Vlans (was: Re: Network I/O performance) Lukas Kolbe
2009-05-19  7:45         ` tun/tap and Vlans Avi Kivity
2009-05-19 19:46           ` Lukas Kolbe
2009-05-20 10:25           ` Fischer, Anna
2009-05-20 10:38             ` Avi Kivity
2009-05-19 21:22       ` Does KVM suffer from ACK-compression as you increase the number of VMs? Andrew de Andrade
2009-05-20 10:15       ` Network I/O performance Fischer, Anna
