From: Martin Petermann <martin@linux.vnet.ibm.com>
To: dlaor@redhat.com
Cc: kvm@vger.kernel.org, ralphw@linux.vnet.ibm.com
Subject: Re: bridge + KVM performance
Date: Mon, 06 Jul 2009 21:27:51 +0200
Message-ID: <1246908471.18699.355.camel@bl3aed4p.de.ibm.com>
In-Reply-To: <4A51E5AB.7070103@redhat.com>

On Mon, 2009-07-06 at 14:53 +0300, Dor Laor wrote:
> On 07/06/2009 12:34 PM, Martin Petermann wrote:
> > I'm currently looking at the network performance between two KVM guests
> > running on the same host. The host system is applied with two quad core
> > Xeons each 3GHz and 32G memory. 2G memory is assigned to the guests,
> > enough that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5)
> > on all the three systems:
> >
> >   ____________________     ____________________
> > |                    |   |                    |
> > |     KVM guest      |   |      KVM guest     |
> > |    ic01vn08man     |   |     ic01vn09man    |
> > |____________________|   |____________________|
> >               \                  /
> >                \                /
> >                 \              /
> >                  \            /
> >                   \          /
> >                ____\________/______
> >               |                    |
> >               |     KVM host       |
> >               |ethernet bridge: br3|
> >               |____________________|
> >
> >
> > On the host I've created a network bridge in the following way
> >
> > [root@ic01in01man ~]# cat /etc/sysconfig/network-scripts/ifcfg-br3
> > DEVICE=br3
> > TYPE=Bridge
> > ONBOOT=yes
> >
> > and installed the bridge with the commands
> >
> > brctl addbr br3
> > ifconfig br3 up
> >
> > Within the configuration files of the KVM guests I added the following
> > sections:
> >
> > ic01vn08man.xml;
> > ...
> >      <interface type='bridge'>
> >        <source bridge='br3'/>
> >        <model type='virtio' />
> >        <mac address="00:ad:be:ef:99:08"/>
> >      </interface>
> > ...
> >
> > ic01vn09man.xml
> > ...
> >      <interface type='bridge'>
> >        <source bridge='br3'/>
> >        <model type='virtio' />
> >        <mac address="00:ad:be:ef:99:09"/>
> >      </interface>
> > ...
> >
> > Within the guests I have configured the network in the following way:
> >
> > [root@ic01vn08man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> > # Virtio Network Device
> > DEVICE=eth3
> > BOOTPROTO=static
> > IPADDR=192.168.100.8
> > NETMASK=255.255.255.0
> > HWADDR=00:AD:BE:EF:99:08
> > ONBOOT=yes
> >
> > [root@ic01vn09man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> > # Virtio Network Device
> > DEVICE=eth3
> > BOOTPROTO=static
> > IPADDR=192.168.100.9
> > NETMASK=255.255.255.0
> > HWADDR=00:AD:BE:EF:99:09
> > ONBOOT=yes
> >
> > If I now test the network performance using the iperf tool
> > (http://sourceforge.net/projects/iperf/)
> >
> > performance between two guests (iperf server is running on other guest
> > ic01vn08man/192.168.100.8: ic01vn09man<->  ic01vn08man):
> >
> > [root@ic01vn09man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> > -w 131072
> > ------------------------------------------------------------
> > Client connecting to 192.168.100.8, TCP port 5001
> > TCP window size:   256 KByte (WARNING: requested   128 KByte)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.9 port 34171 connected with 192.168.100.8 port
> > 5001
> > [  3] local 192.168.100.9 port 34170 connected with 192.168.100.8 port
> > 5001
> > [  4]  0.0-60.1 sec  2.54 GBytes    363 Mbits/sec
> > [  3]  0.0-60.1 sec  2.53 GBytes    361 Mbits/sec
> > [SUM]  0.0-60.1 sec  5.06 GBytes    724 Mbits/sec
> >
> > results within the same guest (iperf server is running on the same
> > system: ic01vn08man<->  ic01vn08man):
> >
> > [root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> > -w 131072
> 
> If you drop the -w 131072 you'll get over 1G performance. Because
> of the bad buffering configuration you get lots of idle time (check
> your CPU consumption).
> Using netperf is recommended instead. You can check one of VMware's
> performance documents to see the large difference the message size
> and socket sizes make.
> 

Thanks for your answer. If I remove the "-w" specification I can see a
throughput of about 1.2 Gbits/sec. Using netperf I can see a similar
throughput:

[root@ic01vn09man netperf-2.4.5]# netperf -f g -H 192.168.100.8 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.8
(192.168.100.8) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^9bits/sec  

 87380  16384  16384    10.01       1.40   
 
Also, changing the message size and socket size options does not help
here. What is limiting the network performance of the guests to little
more than 1 Gbit/sec?
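For completeness, this is the kind of netperf invocation used to vary
those parameters (the sizes shown here are illustrative, not tuned
values):

```shell
# TCP_STREAM test against the other guest; options after "--" are
# netperf test-specific options:
#   -m : send message size in bytes
#   -s / -S : local / remote socket buffer size request
netperf -f g -H 192.168.100.8 -l 60 -- -m 65536 -s 262144 -S 262144
```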

The performance data in the document you mentioned (10Gbps Networking
Performance) shows much better values, even with MTU=1500 and
appropriate socket and message sizes.
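One tuning step that is often suggested for guest-to-guest traffic on a
bridge (not something I have verified on this setup) is raising the MTU
so fewer, larger frames cross the virtio path:

```shell
# On the host: raise the MTU of the bridge (any attached tap devices
# must be raised to the same value as well).
ifconfig br3 mtu 9000

# In each guest: match the MTU on the virtio interface.
ifconfig eth3 mtu 9000
```

Whether this helps depends on the virtio and bridge code paths in this
kernel version, so it is only a suggestion to try.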

> 
> > ------------------------------------------------------------
> > Client connecting to 192.168.100.8, TCP port 5001
> > TCP window size:   256 KByte (WARNING: requested   128 KByte)
> > ------------------------------------------------------------
> > [  4] local 192.168.100.8 port 55418 connected with 192.168.100.8 port
> > 5001
> > [  3] local 192.168.100.8 port 55417 connected with 192.168.100.8 port
> > 5001
> > [  3]  0.0-60.0 sec  46.2 GBytes  6.62 Gbits/sec
> > [  4]  0.0-60.0 sec  45.2 GBytes  6.47 Gbits/sec
> > [SUM]  0.0-60.0 sec  91.4 GBytes  13.1 Gbits/sec
> >
> > 724 Mbits/sec is far from what I expected. The host system is
> > connected via 10G Ethernet, and a similar guest-to-guest
> > performance would be necessary.
> >
> > Regards
> >    Martin
> >
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


