From: Avi Kivity <avi@redhat.com>
To: "Fischer, Anna" <anna.fischer@hp.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Herbert Xu <herbert@gondor.apana.org.au>
Subject: Re: Network I/O performance
Date: Mon, 18 May 2009 00:14:34 +0300	[thread overview]
Message-ID: <4A107E3A.9050209@redhat.com> (raw)
In-Reply-To: <0199E0D51A61344794750DC57738F58E66BC980A2D@GVW1118EXC.americas.hpqcorp.net>

Fischer, Anna wrote:
>> Subject: Re: Network I/O performance
>>
>> Fischer, Anna wrote:
>>     
>>> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
>>>       
>> the tun/tap device model and the Linux bridge kernel module to connect
>> my VM to the network. I have 2 10G Intel 82598 network devices (with
>> the ixgbe driver) attached to my machine and I want to do packet
>> routing in my VM (the VM has two virtual network interfaces
>> configured). Analysing the network performance of the standard QEMU
>> emulated NICs, I get less than 1G of throughput on those 10G links.
>> Surprisingly though, I don't really see CPU utilization being maxed
>> out. This is a dual core machine, and mpstat shows me that both CPUs
>> are about 40% idle. My VM is more or less unresponsive due to the high
>> network processing load while the host OS still seems to be in good
>> shape. How can I best tune this setup to achieve best possible
>> performance with KVM? I know there is virtIO and I know there is PCI
>> pass-through, but those models are not an option for me right now.
>>     
>> How many cpus are assigned to the guest?  If only one, then 40% idle
>> equates to 100% of a core for the guest and 20% for housekeeping.
>>     
>
> No, the machine has a dual core CPU and I have configured the guest with 2 CPUs. So I would want to see KVM using up to 200% of CPU, ideally. There is nothing else running on that machine.
>   

Well, it really depends on the workload, whether it can utilize both vcpus.
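
As a quick check from the host side (a sketch; it assumes the sysstat tools are installed and the qemu binary is named qemu-system-x86_64, which may differ on your system):

```shell
# How many host cores are there to fill?
nproc

# Per-core utilization, five one-second samples: both cores busy under
# load means the guest workload really spreads across the two vcpus.
mpstat -P ALL 1 5

# Per-thread CPU usage of the qemu process: one pegged thread plus one
# idle thread points at a single-threaded bottleneck in the guest.
pidstat -t -p "$(pidof qemu-system-x86_64)" 1 5
```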

>  
>   
>> If this is the case, you could try pinning the vcpu thread ("info cpus"
>> from the monitor) to one core.  You should then see 100%/20% cpu load
>> distribution.
>>
>> wrt emulated NIC performance, I'm guessing you're not doing tcp?  If
>> you
>> were we might do something with TSO.
>>     
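
For reference, the pinning step quoted above can be done from a host shell once "info cpus" in the qemu monitor has printed the vcpu thread ids (12345 and 12346 below are made-up placeholders, not real pids):

```shell
# "info cpus" in the qemu monitor prints one thread_id per vcpu;
# substitute those ids for the placeholder pids 12345 and 12346.
taskset -p -c 0 12345    # pin the vcpu0 thread to core 0
taskset -p -c 1 12346    # pin the vcpu1 thread to core 1

# Verify: the reported affinity masks should now be 1 and 2.
taskset -p 12345
taskset -p 12346
```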
>
> No, I am measuring UDP throughput performance. I have now tried a different NIC model, and the e1000 model achieves slightly better performance (though CPU only goes up to 110%). I have also been running virtio now, and while its performance with a 2.6.20 guest kernel was very poor too, after changing the guest kernel to 2.6.30 I get reasonable performance and higher CPU utilization (e.g. it goes up to 180-190%). I have to throttle the incoming bandwidth though, because as soon as I go over a certain threshold, CPU drops back to 90% and throughput goes down too.
>   

Yes, there's a known issue with UDP, where we don't report congestion 
and the queues start dropping packets.  There's a patch for tun queued 
for the next merge window; you'll need a 2.6.31 host for that IIRC 
(Herbert?)
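
Until that patch lands, dropping excess traffic early on the host at least keeps qemu from burning cycles on packets the guest would drop anyway. A rough sketch using tc ingress policing (needs root; eth1 and the 500mbit rate are assumptions, not values from this thread):

```shell
# Attach an ingress qdisc to the physical NIC feeding the bridge, then
# police all IP traffic to a fixed rate before it reaches the tap path.
tc qdisc add dev eth1 handle ffff: ingress
tc filter add dev eth1 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 500mbit burst 1m drop flowid :1
```

Pick a rate just below the point where CPU and throughput collapse; "tc qdisc del dev eth1 ingress" undoes it.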

> I have not seen this with Xen/VMware, where I usually managed to max out the CPU completely before throughput stopped increasing.
>
> I have also realized that when using the tun/tap configuration with a bridge, packets are replicated on all tap devices when QEMU writes packets to the tun interface. I guess this is a limitation of tun/tap, as it does not know which tap device the packet has to go to. The tap device eventually drops packets whose destination MAC is not its own, but it still receives them, which causes extra overhead in the system overall.
>   

Right, I guess you'd see this with a real switch as well?  Maybe have 
your guest send a packet out once in a while so the bridge can learn its 
MAC address (we do this after migration, for example).
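
A sketch of that workaround (interface, address, and bridge names are placeholders; iputils arping inside the guest, bridge-utils on the host; both need root):

```shell
# In the guest: send a few gratuitous ARPs so the host bridge's
# forwarding table learns which port the guest's MAC sits behind.
arping -A -c 3 -I eth0 192.168.1.10

# On the host: confirm the bridge has learned the guest's MAC; once
# the entry exists, frames stop being flooded to every tap device.
brctl showmacs br0
```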

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



Thread overview: 13+ messages
2009-05-12  0:28 Network I/O performance Fischer, Anna
2009-05-13  7:23 ` Avi Kivity
2009-05-13 15:56   ` Fischer, Anna
2009-05-17 21:14     ` Avi Kivity [this message]
2009-05-19  1:30       ` Herbert Xu
2009-05-19  4:53         ` Avi Kivity
2009-05-19  7:18       ` tun/tap and Vlans (was: Re: Network I/O performance) Lukas Kolbe
2009-05-19  7:45         ` tun/tap and Vlans Avi Kivity
2009-05-19 19:46           ` Lukas Kolbe
2009-05-20 10:25           ` Fischer, Anna
2009-05-20 10:38             ` Avi Kivity
2009-05-19 21:22       ` Does KVM suffer from ACK-compression as you increase the number of VMs? Andrew de Andrade
2009-05-20 10:15       ` Network I/O performance Fischer, Anna
