From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Jul 2012 18:18:00 -0000
From: zerocoolx <602336@bugs.launchpad.net>
References: <20100706162521.16079.85635.malonedeb@soybean.canonical.com>
Message-Id: <20120711181800.26536.96925.malone@chaenomeles.canonical.com>
Subject: [Qemu-devel] [Bug 602336] Re: bad network performance with 10Gbit
Reply-To: Bug 602336 <602336@bugs.launchpad.net>
To: qemu-devel@nongnu.org

At the moment I'm using version qemu 0.12.3+noroms-0ubuntu9.18 from my
Ubuntu distribution. I'm trying to compile the latest upstream version
during the next two weeks to verify whether this is still an issue.

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/602336

Title:
  bad network performance with 10Gbit

Status in QEMU:
  Incomplete

Bug description:

Hello,

I have trouble with the network performance inside my virtual machines.
I don't know if this is really a bug, but I didn't find a solution for
this problem in other forums or mailing lists.

My KVM host machine is connected to a 10Gbit network. All interfaces
are configured to an MTU of 4132.
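(For reference, a minimal sketch of how an MTU of 4132 is typically set
with iproute2; the interface names eth0 and br0 are placeholders and
not taken from the report:)

# ip link set dev eth0 mtu 4132   # physical 10Gbit NIC (name assumed)
# ip link set dev br0 mtu 4132    # bridge carrying the VM tap devices, if one is used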
On this host I have no problems and I can use the full bandwidth:

CPU_Info: 2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid

KVM version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM host kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM guest kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM guest OS:
Ubuntu 10.04 LTS
Codename: lucid

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  18.8 GBytes  2.69 Gbits/sec
[  5]  0.0-60.0 sec  15.0 GBytes  2.14 Gbits/sec
[  6]  0.0-60.0 sec  19.3 GBytes  2.76 Gbits/sec
[  3]  0.0-60.0 sec  15.1 GBytes  2.16 Gbits/sec
[SUM]  0.0-60.0 sec  68.1 GBytes  9.75 Gbits/sec

Inside a virtual machine I don't reach this result:

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  5.65 GBytes   808 Mbits/sec
[  4]  0.0-60.0 sec  5.52 GBytes   790 Mbits/sec
[  5]  0.0-60.0 sec  5.66 GBytes   811 Mbits/sec
[  6]  0.0-60.0 sec  5.70 GBytes   816 Mbits/sec
[SUM]  0.0-60.0 sec  22.5 GBytes  3.23 Gbits/sec

I can only use 3.23 Gbit/s of the 10 Gbit/s. I use the virtio driver
for all of my VMs, but I have also tried the e1000 NIC device instead.
When I start the iperf test on multiple VMs simultaneously, I can use
the full bandwidth of the KVM host's interface, but a single VM cannot
use the full bandwidth on its own.

Is this a known limitation, or can I improve this performance? Does
anyone have an idea how I can improve my network performance? It is
very important, because I want to use the network interface to boot
all VMs via AoE (ATA over Ethernet).

If I mount a hard disk via AoE inside a VM, I only get these results:

Write  |CPU |Rewrite |CPU |Read   |CPU
102440 |10  |51343   |5   |104249 |3

On the KVM host I get these results on a mounted AoE device:

Write  |CPU |Rewrite |CPU |Read   |CPU
205597 |19  |139118  |11  |391316 |11

If I mount the AoE device directly on the KVM host and put a virtual
hard disk file on it, I get the following results inside a VM using
this hard disk file:

Write  |CPU |Rewrite |CPU |Read   |CPU
175140 |12  |136113  |24  |599989 |29

I have just tested vhost_net, but without success. I upgraded my kernel
to 2.6.35-6 with vhost_net support and installed the qemu-kvm version
from git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git
(0.12.50), but I still get the same results as before. (A sketch of a
typical virtio/vhost_net command line is included after this message.)

I have already posted my problem in a few forums, but got no reply so
far. I would be very happy if someone can help me.

best regards

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/602336/+subscriptions
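Below is a minimal sketch of the kind of invocation the virtio/vhost_net
test above refers to. The binary name, memory size, disk image path and
tap device name (tap0) are placeholders and not taken from the report,
and the exact option syntax differs between QEMU versions:

# modprobe vhost_net
# qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-pci,netdev=net0

With vhost=on the virtio-net data path is handled by the host kernel's
vhost_net module instead of userspace QEMU, which generally improves
single-VM throughput.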