From: "Fischer, Anna"
Subject: RE: Network I/O performance
Date: Wed, 13 May 2009 15:56:23 +0000
To: Avi Kivity
Cc: "kvm@vger.kernel.org"
Message-ID: <0199E0D51A61344794750DC57738F58E66BC980A2D@GVW1118EXC.americas.hpqcorp.net>
In-Reply-To: <4A0A7556.4070406@redhat.com>

> Subject: Re: Network I/O performance
>
> Fischer, Anna wrote:
> > I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
> > the tun/tap device model and the Linux bridge kernel module to connect
> > my VM to the network. I have two 10G Intel 82598 network devices (with
> > the ixgbe driver) attached to my machine and I want to do packet
> > routing in my VM (the VM has two virtual network interfaces
> > configured). Analysing the network performance of the standard QEMU
> > emulated NICs, I get less than 1G of throughput on those 10G links.
> > Surprisingly though, I don't really see CPU utilization being maxed
> > out. This is a dual-core machine, and mpstat shows me that both CPUs
> > are about 40% idle. My VM is more or less unresponsive due to the high
> > network processing load, while the host OS still seems to be in good
> > shape. How can I best tune this setup to achieve the best possible
> > performance with KVM? I know there is virtio and I know there is PCI
> > pass-through, but those models are not an option for me right now.
>
> How many cpus are assigned to the guest? If only one, then 40% idle
> equates to 100% of a core for the guest and 20% for housekeeping.

No, the machine has a dual-core CPU and I have configured the guest with
2 CPUs, so ideally I would want to see KVM using up to 200% of CPU.
There is nothing else running on that machine.

> If this is the case, you could try pinning the vcpu thread ("info cpus"
> from the monitor) to one core. You should then see 100%/20% cpu load
> distribution.
>
> wrt emulated NIC performance, I'm guessing you're not doing tcp? If you
> were we might do something with TSO.

No, I am measuring UDP throughput performance. I have now tried a
different NIC model, and the e1000 model achieves slightly better
performance (though CPU only goes up to about 110%). I have also been
running virtio now, and while its performance with 2.6.20 was very poor
too, after changing the guest kernel to 2.6.30 I get reasonable
performance and higher CPU utilization (it goes up to 180-190%). I do
have to throttle the incoming bandwidth, though, because as soon as I go
over a certain threshold, CPU drops back to 90% and throughput goes down
as well. I have not seen this with Xen/VMware, where I mostly managed to
max out the CPU completely before throughput stopped increasing.
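Just for illustration, a UDP stream test of this kind can be generated
with something like netperf (this is not necessarily the exact tool or
parameters I am using; the address, run length and message size below
are only placeholders):

  # receiver side (in the guest)
  netserver

  # sender side: 60-second UDP stream with datagrams that fit a 1500-byte MTU
  netperf -H <guest-ip> -l 60 -t UDP_STREAM -- -m 1472

The -m 1472 keeps each datagram within the 1500-byte MTU once the 8-byte
UDP and 20-byte IP headers are added.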
I have also realized that when using the tun/tap configuration with a
bridge, packets are replicated on all tap devices when QEMU writes
packets to the tun interface. I guess this is a limitation of tun/tap,
as it does not know which tap device the packet should go to. The tap
device then eventually drops packets whose destination MAC is not its
own, but it still receives every packet, which causes additional
overhead in the system overall.

I have not yet experimented much with pinning VCPU threads to cores. I
will do that as well.
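From what I understand, the pinning experiment would look roughly like
the following (the thread IDs are example values, and the exact "info
cpus" output format depends on the QEMU/KVM version):

  # in the QEMU monitor: list the vcpus and their host thread IDs
  (qemu) info cpus
  * CPU #0: pc=0x... thread_id=4321
    CPU #1: pc=0x... thread_id=4322

  # on the host: pin each vcpu thread to a core of its own
  taskset -p -c 0 4321
  taskset -p -c 1 4322

On a dual-core host this still leaves the qemu I/O thread and the
host-side bridge/tap processing sharing those same two cores with the
vcpus, which is something to keep in mind when interpreting the CPU
numbers.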