* Degrading Network performance as KVM/kernel version increases
From: matthew.r.rohrer @ 2010-08-31 23:00 UTC (permalink / raw)
  To: kvm

I have been getting degrading network performance with newer versions of
KVM and was wondering if this was expected?  It seems like a bug, but I
am new to this and maybe I am doing something wrong so I thought I would
ask.

KVM Host OS: Fedora 12 x86_64
KVM Guest OS: Tiny Core Linux (2.6.33.3 kernel)

I have tried multiple host kernels (2.6.31.5, 2.6.31.6, 2.6.32.19 and
2.6.35.4) along with qemu-kvm 0.11.0 and qemu-system-x86_64 0.12.5,
compiled from the qemu-kvm repo.

Setup is: 2 hosts, each with 1 guest, connected by a 10 Gb NIC.

I am using virtio and have checked that hardware acceleration is
working.

Processor usage is less than 50% on host and guests. 

Here is what I am seeing; I will just include guest-to-guest statistics.
I do have more (host to guest, etc.) if interested:

With kernel 2.6.31.5 and qemu-kvm 0.11.0: 1.57 Gb/s (guest 1 to guest 2),
then 1.37 Gb/s (guest 2 to guest 1), with a single iperf thread.
With kernel 2.6.31.5 and qemu-kvm 0.11.0: 3.16 Gb/s (guest 1 to guest 2),
then 4.29 Gb/s (guest 2 to guest 1), with 4 iperf threads (-P 4).

With kernel 2.6.31.5 and qemu-system 0.12.5: 1.02 Gb/s (guest 1 to guest 2),
then 0.420 Gb/s (guest 2 to guest 1), with a single iperf thread.
With kernel 2.6.31.5 and qemu-system 0.12.5: 1.30 Gb/s (guest 1 to guest 2),
then 0.655 Gb/s (guest 2 to guest 1), with 4 iperf threads (-P 4).

With kernel 2.6.31.5 on host 1, 2.6.32.19 on host 2, and qemu-kvm 0.11.0:
0.580 Gb/s (guest 1 to guest 2), then 1.32 Gb/s (guest 2 to guest 1), with
a single iperf thread.

With kernel 2.6.32.19 and qemu-kvm 0.11.0: 0.548 Gb/s (guest 1 to guest 2),
then 0.603 Gb/s (guest 2 to guest 1), with a single iperf thread.
With kernel 2.6.32.19 and qemu-kvm 0.11.0: 0.569 Gb/s (guest 1 to guest 2),
then 0.478 Gb/s (guest 2 to guest 1), with 4 iperf threads (-P 4).

With kernel 2.6.32.19 and qemu-system 0.12.5: 0.571 Gb/s (guest 1 to guest 2),
then 0.500 Gb/s (guest 2 to guest 1), with a single iperf thread.
With kernel 2.6.32.19 and qemu-system 0.12.5: 0.633 Gb/s (guest 1 to guest 2),
then 0.705 Gb/s (guest 2 to guest 1), with 4 iperf threads (-P 4).

With kernel 2.6.35.4 and qemu-system 0.12.5: 0.418 Gb/s (guest 1 to guest 2),
and then I gave up.
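
For reference, the single-thread and 4-thread numbers above come from
iperf runs roughly like the following (the guest address here is just
illustrative):

  # on the receiving guest
  iperf -s

  # on the sending guest: one TCP stream, then 4 parallel streams (-P 4)
  iperf -c 192.168.100.11
  iperf -c 192.168.100.11 -P 4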


My goal is to get as much bandwidth as I can between the 2 guests
running on separate hosts.  The most I have been able to get is ~4 Gb/s
running 4 threads on iperf from guest A to guest B.  I cannot seem to
get much over 1.5 Gb/s from guest to guest with a single iperf thread.
Is there some sort of known send limit per thread?  Is it expected that
the latest versions of the kernel and modules perform worse than earlier
versions in the area of network performance (I am guessing not; am I
doing something wrong?)?  I am using virtio and have checked that
hardware acceleration is working.  4 iperf threads host to host yields
~9.5 Gb/s.  Any ideas on how I can get better performance with newer
versions?  I have tried using vhost in 2.6.35 but I get a "vhost could
not be initialized" error.  The only thing I could find on the vhost
error is that selinux should be off, which it is.

I am looking for ideas on increasing the bandwidth between guests and
thoughts on the degrading performance.

Thanks for your help! --Matt


* Re: Degrading Network performance as KVM/kernel version increases
From: Brian Jackson @ 2010-08-31 23:56 UTC (permalink / raw)
  To: matthew.r.rohrer; +Cc: kvm

  On 8/31/2010 6:00 PM, matthew.r.rohrer@L-3com.com wrote:
> I have been getting degrading network performance with newer versions of
> KVM and was wondering if this was expected?  It seems like a bug, but I
> am new to this and maybe I am doing something wrong so I thought I would
> ask.
>
> KVM Host OS: Fedora 12 x86_64
> KVM Guest OS: Tiny Core Linux (2.6.33.3 kernel)
>
> I have tried multiple host kernels (2.6.31.5, 2.6.31.6, 2.6.32.19 and
> 2.6.35.4) along with qemu-kvm 0.11.0 and qemu-system-x86_64 0.12.5,
> compiled from the qemu-kvm repo.


I can't say anything about the kernel version making things worse. At
least for the qemu-kvm version, you should be using -device and -netdev
instead of -net nic -net tap (see
http://git.qemu.org/qemu.git/tree/docs/qdev-device-use.txt since it's
not in the 0.12 tree).
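
Roughly, the -netdev/-device form of a tap + virtio-net setup looks like
this (the id, ifname, and MAC below are just placeholders):

  qemu-system-x86_64 ... \
      -netdev tap,id=net0,ifname=kvmnet0,script=no \
      -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:30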


> Setup is: 2 hosts, each with 1 guest, connected by a 10 Gb NIC.
>
> I am using virtio and have checked that hardware acceleration is
> working.
>
> Processor usage is less than 50% on host and guests.
>
> Here is what I am seeing; I will just include guest-to-guest statistics.
> I do have more (host to guest, etc.) if interested:

<snip results>

>
> My goal is to get as much bandwidth as I can between the 2 guests
> running on separate hosts.  The most I have been able to get is ~4 Gb/s
> running 4 threads on iperf from guest A to guest B.  I cannot seem to
> get much over 1.5 Gb/s from guest to guest with a single iperf thread.
> Is there some sort of known send limit per thread?  Is it expected that
> the latest versions of the kernel and modules perform worse than earlier
> versions in the area of network performance (I am guessing not; am I
> doing something wrong?)?  I am using virtio and have checked that
> hardware acceleration is working.  4 iperf threads host to host yields
> ~9.5 Gb/s.  Any ideas on how I can get better performance with newer
> versions?  I have tried using vhost in 2.6.35 but I get a "vhost could
> not be initialized" error.  The only thing I could find on the vhost
> error is that selinux should be off, which it is.
>
> I am looking for ideas on increasing the bandwidth between guests and
> thoughts on the degrading performance.


Vhost-net is probably your best bet for maximizing throughput. You might 
try a separate post just for the vhost error if nobody chimes in about 
it here.
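
For what it's worth, with a host kernel and qemu-kvm build that support
vhost-net, enabling it should roughly be a matter of loading the module
and adding vhost=on to the tap netdev (names below are placeholders):

  modprobe vhost_net
  qemu-system-x86_64 ... \
      -netdev tap,id=net0,ifname=kvmnet0,script=no,vhost=on \
      -device virtio-net-pci,netdev=net0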


> Thanks for your help! --Matt



* RE: Degrading Network performance as KVM/kernel version increases
From: matthew.r.rohrer @ 2010-09-01 21:05 UTC (permalink / raw)
  To: Brian Jackson; +Cc: kvm

>I can't say anything about the kernel version making things worse. At
>least for the qemu-kvm version, you should be using -device and -netdev
>instead of -net nic -net tap (see
>http://git.qemu.org/qemu.git/tree/docs/qdev-device-use.txt since it's
>not in the 0.12 tree).

Thanks for your suggestion, Brian; I was not doing this correctly.  After
changing to -device and -netdev I did get a significant performance
increase, although overall performance is still much worse than what I
was getting with the 2.6.31.5 kernel and qemu-kvm 0.11.0.  Below is part
of my startup script.  Any chance you notice anything else wrong?

Before your suggestion (what was working well with 0.11.0 & 2.6.31):

  modprobe kvm
  modprobe kvm_intel
  modprobe tun
  echo -e "Setting up bridge device br0" "\r"
  brctl addbr br0
  ifconfig br0 192.168.100.254 netmask 255.255.255.0 up
  brctl addif br0 eth7
  ifconfig eth7 down
  ifconfig eth7 0.0.0.0
  for ((i=0; i < NUM_OF_DEVICES ; i++)); do
      echo -e "Setting up " "\r"
      tunctl -b -g ${KVMNET_GID} -t kvmnet$i
      brctl addif br0 kvmnet$i
      ifconfig kvmnet$i up 0.0.0.0 promisc
  done
  echo "1" > /proc/sys/net/ipv4/ip_forward
  iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth7 -j MASQUERADE
  for ((i=0; i < NUM_OF_DEVICES ; i++)); do
      echo -e "Creating Virtual disk" $i "\r"
      qemu-img create -f qcow2 vdisk_$i.img $VDISK_SIZE
      echo -e "Starting Virtual Machine" $i "\r"
      /root/bin/qemu-system-x86_64 -cpu host \
          -drive file=./vdisk_$i.img,if=virtio,boot=on -cdrom ./$2 -boot d \
          -net nic,model=virtio,macaddr=52:54:00:12:34:3$i \
          -net tap,ifname=kvmnet$i,script=no \
          -m 1024 \
          -smp 2 \
          -usb \
          -usbdevice tablet \
          -localtime \
          -daemonize \
          -vga std
  done


After changing to -device and -netdev (working better with the latest
stuff but still much worse than 2.6.31):

  modprobe kvm
  modprobe kvm_intel
  modprobe tun
  echo -e "Setting up bridge device br0" "\r"
  brctl addbr br0
  ifconfig br0 192.168.100.254 netmask 255.255.255.0 up
  brctl addif br0 eth7
  ifconfig eth7 down
  ifconfig eth7 0.0.0.0
  for ((i=0; i < NUM_OF_DEVICES ; i++)); do
      echo -e "Setting up " "\r"
      tunctl -b -g ${KVMNET_GID} -t kvmnet$i
      brctl addif br0 kvmnet$i
      ifconfig kvmnet$i up 0.0.0.0 promisc
  done
  echo "1" > /proc/sys/net/ipv4/ip_forward
  iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth7 -j MASQUERADE
  for ((i=0; i < NUM_OF_DEVICES ; i++)); do
      echo -e "Creating Virtual disk" $i "\r"
      qemu-img create -f qcow2 vdisk_$i.img $VDISK_SIZE
      echo -e "Starting Virtual Machine" $i "\r"
      /root/bin/qemu-system-x86_64 -cpu host \
          -drive file=./vdisk_$i.img,if=virtio,boot=on -cdrom ./$2 -boot d \
          -netdev type=tap,id=tap.0,script=no,ifname=kvmnet$i \
          -device virtio-net-pci,netdev=tap.0,mac=52:54:00:12:34:3$i \
          -m 1024 \
          -smp 2 \
          -usb \
          -usbdevice tablet \
          -localtime \
          -daemonize \
          -vga std
  done



>Vhost-net is probably your best bet for maximizing throughput. You might
>try a separate post just for the vhost error if nobody chimes in about
>it here.

I will do that.

Thanks again for the input! --Matt

