kvm.vger.kernel.org archive mirror
* bridge + KVM performance
@ 2009-07-06  9:34 Martin Petermann
  2009-07-06 11:53 ` Dor Laor
  0 siblings, 1 reply; 7+ messages in thread
From: Martin Petermann @ 2009-07-06  9:34 UTC (permalink / raw)
  To: kvm, ralphw

I'm currently looking at the network performance between two KVM guests
running on the same host. The host system is equipped with two quad-core
3GHz Xeons and 32G of memory. 2G of memory is assigned to the guests,
enough that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5)
on all three systems:

 ____________________     ____________________
|                    |   |                    |
|     KVM guest      |   |      KVM guest     |
|    ic01vn08man     |   |     ic01vn09man    |
|____________________|   |____________________|
             \                  /
              \                /
               \              /
                \            / 
                 \          /
              ____\________/______ 
             |                    |
             |     KVM host       |
             |ethernet bridge: br3|
             |____________________|


On the host I've created a network bridge in the following way:

[root@ic01in01man ~]# cat /etc/sysconfig/network-scripts/ifcfg-br3
DEVICE=br3
TYPE=Bridge
ONBOOT=yes

and created and brought up the bridge with the commands:

brctl addbr br3
ifconfig br3 up
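
(Once the guests are running, the tap devices libvirt creates for them,
typically named vnet0, vnet1, ..., should show up as ports of br3; a
quick sanity check is

brctl show

which should list one vnetX interface under br3 per running guest. Note
that as configured br3 has no physical uplink, so it carries
guest-to-guest traffic only.)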

Within the configuration files of the KVM guests I added the following
sections:

ic01vn08man.xml:
...
    <interface type='bridge'>
      <source bridge='br3'/>
      <model type='virtio' />
      <mac address="00:ad:be:ef:99:08"/>
    </interface>
...

ic01vn09man.xml:
...
    <interface type='bridge'>
      <source bridge='br3'/>
      <model type='virtio' />
      <mac address="00:ad:be:ef:99:09"/>
    </interface>
...
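
(Assuming the guests are managed through libvirt, the edited XML is
re-read with

virsh define ic01vn08man.xml
virsh define ic01vn09man.xml

and the guests are then restarted so the new interface definitions take
effect.)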

Within the guests I have configured the network in the following way:

[root@ic01vn08man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
# Virtio Network Device
DEVICE=eth3
BOOTPROTO=static
IPADDR=192.168.100.8
NETMASK=255.255.255.0
HWADDR=00:AD:BE:EF:99:08
ONBOOT=yes

[root@ic01vn09man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
# Virtio Network Device
DEVICE=eth3
BOOTPROTO=static
IPADDR=192.168.100.9
NETMASK=255.255.255.0
HWADDR=00:AD:BE:EF:99:09
ONBOOT=yes
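
(After editing these files, the interface is brought up inside each
guest with

ifup eth3

or a restart of the network service.)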

I now test the network performance using the iperf tool
(http://sourceforge.net/projects/iperf/).

Performance between the two guests (the iperf server is running on the
other guest, ic01vn08man/192.168.100.8: ic01vn09man <-> ic01vn08man):

[root@ic01vn09man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
-w 131072
------------------------------------------------------------
Client connecting to 192.168.100.8, TCP port 5001
TCP window size:   256 KByte (WARNING: requested   128 KByte)
------------------------------------------------------------
[  4] local 192.168.100.9 port 34171 connected with 192.168.100.8 port
5001
[  3] local 192.168.100.9 port 34170 connected with 192.168.100.8 port
5001
[  4]  0.0-60.1 sec  2.54 GBytes    363 Mbits/sec
[  3]  0.0-60.1 sec  2.53 GBytes    361 Mbits/sec
[SUM]  0.0-60.1 sec  5.06 GBytes    724 Mbits/sec

Results within the same guest (the iperf server is running on the same
system: ic01vn08man <-> ic01vn08man):

[root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
-w 131072
------------------------------------------------------------
Client connecting to 192.168.100.8, TCP port 5001
TCP window size:   256 KByte (WARNING: requested   128 KByte)
------------------------------------------------------------
[  4] local 192.168.100.8 port 55418 connected with 192.168.100.8 port
5001
[  3] local 192.168.100.8 port 55417 connected with 192.168.100.8 port
5001
[  3]  0.0-60.0 sec  46.2 GBytes  6.62 Gbits/sec
[  4]  0.0-60.0 sec  45.2 GBytes  6.47 Gbits/sec
[SUM]  0.0-60.0 sec  91.4 GBytes  13.1 Gbits/sec

724 Mbits/sec is far below what I had expected. The host system is
connected with 10G Ethernet, and I would need comparable performance
between the guests.

Regards
  Martin




* Re: bridge + KVM performance
  2009-07-06  9:34 bridge + KVM performance Martin Petermann
@ 2009-07-06 11:53 ` Dor Laor
  2009-07-06 13:42   ` EOI acceleration for high bandwidth IO Dong, Eddie
  2009-07-06 19:27   ` bridge + KVM performance Martin Petermann
  0 siblings, 2 replies; 7+ messages in thread
From: Dor Laor @ 2009-07-06 11:53 UTC (permalink / raw)
  To: Martin Petermann; +Cc: kvm, ralphw

On 07/06/2009 12:34 PM, Martin Petermann wrote:
> [...]
>
> results within the same guest (iperf server is running on the same
> system: ic01vn08man<->  ic01vn08man):
>
> [root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> -w 131072

If you drop the -w 131072 you'll get over 1 Gbit/s of performance. With
a bad buffering configuration you get a lot of idle time (check your
CPU consumption).
Using netperf is recommended instead. You can look at one of VMware's
performance documents to see the huge difference that message size and
socket size make.
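
(For example, a netperf invocation that varies both, with placeholder
sizes:

netperf -H 192.168.100.8 -t TCP_STREAM -- -m 65536 -s 262144 -S 262144

where -m sets the send message size and -s/-S set the local and remote
socket buffer sizes.)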





* EOI acceleration for high bandwidth IO
  2009-07-06 11:53 ` Dor Laor
@ 2009-07-06 13:42   ` Dong, Eddie
  2009-07-06 14:03     ` Avi Kivity
  2009-07-06 19:27   ` bridge + KVM performance Martin Petermann
  1 sibling, 1 reply; 7+ messages in thread
From: Dong, Eddie @ 2009-07-06 13:42 UTC (permalink / raw)
  Cc: kvm, Dong, Eddie




    EOI is one of the key VM exits under high-bandwidth IO such as VT-d with a 10Gb/s NIC.
    This patch accelerates guest EOI emulation by utilizing the HW VM exit
    information.
    
    Signed-off-by: Eddie Dong <eddie.dong@intel.com>

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index ccafe0d..b63138f 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -875,6 +875,15 @@ void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
 		     | (apic_get_reg(apic, APIC_TASKPRI) & 4));
 }
 
+void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
+{
+	struct kvm_lapic *apic = vcpu->arch.apic;
+
+	if (apic)
+		apic_set_eoi(apic);
+}
+EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
+
 u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
 {
 	struct kvm_lapic *apic = vcpu->arch.apic;
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index 40010b0..3a7a29a 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -27,6 +27,7 @@ int kvm_get_apic_interrupt(struct kvm_vcpu *vcpu);
 void kvm_lapic_reset(struct kvm_vcpu *vcpu);
 u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu);
 void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8);
+void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu);
 void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value);
 u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);
 void kvm_apic_set_version(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3a75db3..6eea29d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3125,6 +3125,12 @@ static int handle_apic_access(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 
 	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
 	offset = exit_qualification & 0xffful;
+	if (((exit_qualification >> 12) & 0xf) == 1 &&
+	    offset == APIC_EOI) {	/* EOI write */
+		kvm_lapic_set_eoi(vcpu);
+		skip_emulated_instruction(vcpu);
+		return 1;
+	}
 
 	er = emulate_instruction(vcpu, kvm_run, 0, 0, 0);
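
(For reference: bits 15:12 of the APIC-access exit qualification encode
the access type, and type 1 is a linear write, so the check above,
written with an explicit temporary, is:

	u32 access_type = (exit_qualification >> 12) & 0xf;

	if (access_type == 1 && offset == APIC_EOI) {
		/* EOI write: take the fast path, skip fetch/decode */
		...
	}

Anything else still falls through to emulate_instruction().)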
 



* Re: EOI acceleration for high bandwidth IO
  2009-07-06 13:42   ` EOI acceleration for high bandwidth IO Dong, Eddie
@ 2009-07-06 14:03     ` Avi Kivity
  2009-07-06 14:34       ` Dong, Eddie
  0 siblings, 1 reply; 7+ messages in thread
From: Avi Kivity @ 2009-07-06 14:03 UTC (permalink / raw)
  To: Dong, Eddie; +Cc: kvm

On 07/06/2009 04:42 PM, Dong, Eddie wrote:
>      EOI is one of the key VM exits under high-bandwidth IO such as VT-d with a 10Gb/s NIC.
>      This patch accelerates guest EOI emulation by utilizing the HW VM exit
>      information.
>

Won't this fail if the guest uses STOSD to issue the EOI?

(of course, no guest does this, just looking for potential problems)

-- 
error compiling committee.c: too many arguments to function



* RE: EOI acceleration for high bandwidth IO
  2009-07-06 14:03     ` Avi Kivity
@ 2009-07-06 14:34       ` Dong, Eddie
  2009-07-06 14:53         ` Avi Kivity
  0 siblings, 1 reply; 7+ messages in thread
From: Dong, Eddie @ 2009-07-06 14:34 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm, Dong, Eddie

Avi Kivity wrote:
> On 07/06/2009 04:42 PM, Dong, Eddie wrote:
>>      EOI is one of the key VM exits under high-bandwidth IO such as VT-d
>>      with a 10Gb/s NIC. This patch accelerates guest EOI emulation by
>>      utilizing the HW VM exit information.
>> 
> 
> Won't this fail if the guest uses STOSD to issue the EOI?
> 
Good catch. Should we use an exclusion list for the opcode?
Or use a decode cache for the hot IP (RO in EPT for the gip)?

We noticed a huge number of vEOIs with a 10Gb/s NIC, roughly 70K EOIs
per second. With SR-IOV it could go up much further, even to the million
level. Decode and emulation cost ~7K cycles, while the short path may
spend only 3-4K cycles.
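
(To put numbers on that: at ~70K EOIs per second and ~7K cycles each,
EOI emulation alone costs ~0.5G cycles per second, about a sixth of one
3GHz core; the short path would roughly halve that.)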

Eddie


* Re: EOI acceleration for high bandwidth IO
  2009-07-06 14:34       ` Dong, Eddie
@ 2009-07-06 14:53         ` Avi Kivity
  0 siblings, 0 replies; 7+ messages in thread
From: Avi Kivity @ 2009-07-06 14:53 UTC (permalink / raw)
  To: Dong, Eddie; +Cc: kvm

On 07/06/2009 05:34 PM, Dong, Eddie wrote:
> Avi Kivity wrote:
>    
>> On 07/06/2009 04:42 PM, Dong, Eddie wrote:
>>      
>>>       EOI is one of the key VM exits under high-bandwidth IO such as VT-d
>>>       with a 10Gb/s NIC. This patch accelerates guest EOI emulation by
>>>       utilizing the HW VM exit information.
>>>
>>>        
>> Won't this fail if the guest uses STOSD to issue the EOI?
>>
>>      
> Good catch. Should we use an exclusion list for the opcode?
>    

That means fetching the opcode and doing partial decoding, which will 
negate the advantage.

> Or use a decode cache for the hot IP (RO in EPT for the gip)?
>    

How can you tell if the code did not change?

I think it's reasonable to assume that the guest won't use STOSD for EOI 
though, and to apply your patch.  There's no risk to the host.

> We noticed a huge number of vEOIs with a 10Gb/s NIC, roughly 70K EOIs
> per second. With SR-IOV it could go up much further, even to the million
> level. Decode and emulation cost ~7K cycles, while the short path may
> spend only 3-4K cycles.
>    

Yes, and I think we can drop the short path further to almost zero by 
using paravirtualization.  It would work for Linux and Windows x86 (with 
something similar to tpr patching).  Unfortunately it won't work on 
Windows x64 since it doesn't allow patching.

We can also expose x2apic (already merged) or the Hyper-V enlightenment;
both convert the EOI into an MSR write, which is fairly fast.
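
(Concretely, the x2apic EOI is a write of zero to MSR 0x80b; in Linux
kernel terms roughly

	wrmsrl(0x80b, 0);	/* x2APIC EOI register */

which exits through the MSR path, with no need to fetch and decode the
guest instruction.)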

-- 
error compiling committee.c: too many arguments to function



* Re: bridge + KVM performance
  2009-07-06 11:53 ` Dor Laor
  2009-07-06 13:42   ` EOI acceleration for high bandwidth IO Dong, Eddie
@ 2009-07-06 19:27   ` Martin Petermann
  1 sibling, 0 replies; 7+ messages in thread
From: Martin Petermann @ 2009-07-06 19:27 UTC (permalink / raw)
  To: dlaor; +Cc: kvm, ralphw

On Mon, 2009-07-06 at 14:53 +0300, Dor Laor wrote:
> On 07/06/2009 12:34 PM, Martin Petermann wrote:
> > [...]
> >
> > results within the same guest (iperf server is running on the same
> > system: ic01vn08man<->  ic01vn08man):
> >
> > [root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> > -w 131072
> 
> If you drop the -w 131072 you'll get over 1 Gbit/s of performance. With
> a bad buffering configuration you get a lot of idle time (check your
> CPU consumption).
> Using netperf is recommended instead. You can look at one of VMware's
> performance documents to see the huge difference that message size and
> socket size make.
> 

Thanks for your answer. If I remove the "-w" option I see a throughput
of about 1.2 Gbits/sec. netperf shows a similar throughput:

[root@ic01vn09man netperf-2.4.5]# netperf -f g -H 192.168.100.8 
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.100.8
(192.168.100.8) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^9bits/sec  

 87380  16384  16384    10.01       1.40   
 
Changing the message size and socket size options also does not help
here. What is limiting the network performance of the guests to just
over 1 Gbit/sec?

The data from the performance document you mentioned ("10Gbps
Networking Performance") shows much better numbers, even with MTU=1500
and appropriate socket and message sizes.



