* Bandwidth limitation with KVM VMs
@ 2009-08-03 16:32 Daniel Bareiro
  2009-08-03 16:52 ` Gregory Haskins
  2009-08-04  9:25 ` Kai Zimmer
From: Daniel Bareiro @ 2009-08-03 16:32 UTC (permalink / raw)
  To: KVM General


Hi all!

I have a KVM VM whose network interface is monitored with MRTG, and it
never registers more than 10 Mbps; at times it even seems to saturate
at that value.

Does KVM impose some bandwidth limit on the virtualized network
interfaces? If so, is there some way to raise that limit?

Thanks in advance.

Regards,
Daniel
-- 
Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Squeeze - Linux user #188.598


* Re: Bandwidth limitation with KVM VMs
  2009-08-03 16:32 Bandwidth limitation with KVM VMs Daniel Bareiro
@ 2009-08-03 16:52 ` Gregory Haskins
  2009-08-04  1:17   ` Daniel Bareiro
  2009-08-04  9:25 ` Kai Zimmer
From: Gregory Haskins @ 2009-08-03 16:52 UTC (permalink / raw)
  To: dbareiro, KVM General


Daniel Bareiro wrote:
> Hi all!
> 
> I have a KVM VM whose network interface is monitored with MRTG, and
> it never registers more than 10 Mbps; at times it even seems to
> saturate at that value.
> 
> Does KVM impose some bandwidth limit on the virtualized network
> interfaces? If so, is there some way to raise that limit?
> 

There is no set artificial limit afaict, though there are a large number
of factors that can affect performance.  Of course, everything has an
ultimate ceiling (KVM included), but I have found this limit in KVM to
be orders of magnitude higher than 10 Mbps.  Properly tuned, you should
easily be able to saturate a GE link at line rate, or even reach
4-5 Gbps on a 10GE link.

If you need even more than that, I would suggest taking a look at my
recently announced project, which focuses on IO performance:

http://developer.novell.com/wiki/index.php/AlacrityVM

However, since you are only hitting 10 Mbps now, there is a ton of
headroom left even on upstream KVM, so you might find it to be
satisfactory as is once you address the current bottleneck in your
setup.

Things to check:  What linkspeed does the host see to the next hop?
How much bandwidth does the host see to the same end-point?  What is
your overall topology, especially for the VM (are you using -net tap,
etc.)?  What MTU are you using?  Etc.
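
For example, a few quick checks along these lines (eth0 and <end-point>
are placeholders for your actual interface and target; assumes iperf is
installed on both ends and bridge-utils on the host):

ethtool eth0 | grep -E 'Speed|Duplex'  # linkspeed toward the next hop
ip link show eth0                      # the MTU appears in this output
brctl show                             # which tap devices sit on which bridge
iperf -c <end-point> -t 30             # TCP bandwidth (run 'iperf -s' there first)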

Good luck!
-Greg



* Re: Bandwidth limitation with KVM VMs
  2009-08-03 16:52 ` Gregory Haskins
@ 2009-08-04  1:17   ` Daniel Bareiro
  2009-08-04  3:01     ` Gregory Haskins
From: Daniel Bareiro @ 2009-08-04  1:17 UTC (permalink / raw)
  To: KVM General


Hi Gregory.

On Monday, 03 August 2009 12:52:28 -0400,
Gregory Haskins wrote:

> > I have a KVM VM whose network interface is monitored with MRTG,
> > and it never registers more than 10 Mbps; at times it even seems
> > to saturate at that value.
> > 
> > Does KVM impose some bandwidth limit on the virtualized network
> > interfaces? If so, is there some way to raise that limit?

> There is no set artificial limit afaict, though there are a large
> number of factors that can affect performance.  Of course, everything
> has an ultimate ceiling (KVM included), but I have found this limit
> in KVM to be orders of magnitude higher than 10 Mbps.  Properly
> tuned, you should easily be able to saturate a GE link at line rate,
> or even reach 4-5 Gbps on a 10GE link.

> However, since you are only hitting 10 Mbps now, there is a ton of
> headroom left even on upstream KVM, so you might find it to be
> satisfactory as is once you address the current bottleneck in your
> setup.
> 
> Things to check:  What linkspeed does the host see to the next hop?
> How much bandwidth does the host see to the same end-point?  What is
> your overall topology, especially for the VM (are you using -net tap,
> etc.)?  What MTU are you using?  Etc.

What strikes me is that when I run 'cfgmaker' from the MRTG server
against the VM's IP, it reports a maximum speed of 1250 kBytes/s, that
is to say, 10 Mbps:

sparky:~# /usr/bin/cfgmaker --global 'WorkDir: /tmp' --global \ 
 'Options[_]: bits,growright' xxxxxxxxxxxxxx@10.1.0.42
[...]
MaxBytes[10.1.0.42_2]: 1250000
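
(MaxBytes is expressed in bytes per second, so 1250000 bytes/s * 8 =
10000000 bit/s = 10 Mbps.)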


Nevertheless, from within the VM I see the following:

aps2:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 32
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: d
        Current message level: 0x00000007 (7)
        Link detected: yes


Do you think that gives some indication?

Thanks for your reply.

Regards,
Daniel
-- 
Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Squeeze - Linux user #188.598


* Re: Bandwidth limitation with KVM VMs
  2009-08-04  1:17   ` Daniel Bareiro
@ 2009-08-04  3:01     ` Gregory Haskins
  2009-08-04  9:48       ` Daniel Bareiro
From: Gregory Haskins @ 2009-08-04  3:01 UTC (permalink / raw)
  To: dbareiro, KVM General


Daniel Bareiro wrote:
> Hi Gregory.
> 
> On Monday, 03 August 2009 12:52:28 -0400,
> Gregory Haskins wrote:
> 
>>> I have a KVM VM whose network interface is monitored with MRTG,
>>> and it never registers more than 10 Mbps; at times it even seems
>>> to saturate at that value.
>>>
>>> Does KVM impose some bandwidth limit on the virtualized network
>>> interfaces? If so, is there some way to raise that limit?
> 
>> There is no set artificial limit afaict, though there are a large
>> number of factors that can affect performance.  Of course, everything
>> has an ultimate ceiling (KVM included), but I have found this limit
>> in KVM to be orders of magnitude higher than 10 Mbps.  Properly
>> tuned, you should easily be able to saturate a GE link at line rate,
>> or even reach 4-5 Gbps on a 10GE link.
> 
>> However, since you are only hitting 10 Mbps now, there is a ton of
>> headroom left even on upstream KVM, so you might find it to be
>> satisfactory as is once you address the current bottleneck in your
>> setup.
>>
>> Things to check:  What linkspeed does the host see to the next hop?
>> How much bandwidth does the host see to the same end-point?  What is
>> your overall topology, especially for the VM (are you using -net
>> tap, etc.)?  What MTU are you using?  Etc.
> 
> What strikes me is that when I run 'cfgmaker' from the MRTG server
> against the VM's IP, it reports a maximum speed of 1250 kBytes/s,
> that is to say, 10 Mbps:
> 
> sparky:~# /usr/bin/cfgmaker --global 'WorkDir: /tmp' --global \ 
>  'Options[_]: bits,growright' xxxxxxxxxxxxxx@10.1.0.42
> [...]
> MaxBytes[10.1.0.42_2]: 1250000
> 
> 
> Nevertheless, from within the VM I see the following:
> 
> aps2:~# ethtool eth0
> Settings for eth0:
>         Supported ports: [ TP MII ]
>         Supported link modes:   10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>         Supports auto-negotiation: Yes
>         Advertised link modes:  10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>         Advertised auto-negotiation: Yes
>         Speed: 100Mb/s
>         Duplex: Full
>         Port: MII
>         PHYAD: 32
>         Transceiver: internal
>         Auto-negotiation: on
>         Supports Wake-on: pumbg
>         Wake-on: d
>         Current message level: 0x00000007 (7)
>         Link detected: yes
> 
> 
> Do you think that gives some indication?

Hmm.. I am not familiar with MRTG/cfgmaker, but from your ethtool output
I suspect you are not using virtio.  How do you launch the guest?

Have you actually run something like netperf or an rsync to see what
type of bandwidth you actually get?  Perhaps this is just the mrtg tool
getting confused about the actual interface capabilities.
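
Something like this would give you a real number (assuming netperf is
installed on both machines; the hostnames and address are taken from
your output):

aps2:~# netserver                                   # receiver inside the VM
sparky:~# netperf -H 10.1.0.42 -t TCP_STREAM -l 30  # 30-second TCP test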

-Greg



* Re: Bandwidth limitation with KVM VMs
  2009-08-03 16:32 Bandwidth limitation with KVM VMs Daniel Bareiro
  2009-08-03 16:52 ` Gregory Haskins
@ 2009-08-04  9:25 ` Kai Zimmer
From: Kai Zimmer @ 2009-08-04  9:25 UTC (permalink / raw)
  To: dbareiro, KVM General

Daniel Bareiro wrote:
> Does KVM impose some bandwidth limit on the virtualized network
> interfaces? If so, is there some way to raise that limit?

Which interface do you use? Have you tried virtio?

regards,
Kai


* Re: Bandwidth limitation with KVM VMs
  2009-08-04  3:01     ` Gregory Haskins
@ 2009-08-04  9:48       ` Daniel Bareiro
  2009-08-04 10:05         ` Kai Zimmer
From: Daniel Bareiro @ 2009-08-04  9:48 UTC (permalink / raw)
  To: KVM General


Hi Gregory.

On Monday, 03 August 2009 23:01:30 -0400,
Gregory Haskins wrote:

> >> There is no set artificial limit afaict, though there are a large
> >> number of factors that can affect performance.  Of course,
> >> everything has an ultimate ceiling (KVM included), but I have
> >> found this limit in KVM to be orders of magnitude higher than
> >> 10 Mbps.  Properly tuned, you should easily be able to saturate a
> >> GE link at line rate, or even reach 4-5 Gbps on a 10GE link.
 
> >> However, since you are only hitting 10 Mbps now, there is a ton
> >> of headroom left even on upstream KVM, so you might find it to be
> >> satisfactory as is once you address the current bottleneck in
> >> your setup.

> >> Things to check:  What linkspeed does the host see to the next
> >> hop?  How much bandwidth does the host see to the same end-point?
> >> What is your overall topology, especially for the VM (are you
> >> using -net tap, etc.)?  What MTU are you using?  Etc.
 
> > What strikes me is that when I run 'cfgmaker' from the MRTG
> > server against the VM's IP, it reports a maximum speed of 1250
> > kBytes/s, that is to say, 10 Mbps:
> > 
> > sparky:~# /usr/bin/cfgmaker --global 'WorkDir: /tmp' --global \ 
> >  'Options[_]: bits,growright' xxxxxxxxxxxxxx@10.1.0.42
> > [...]
> > MaxBytes[10.1.0.42_2]: 1250000
> > 
> > 
> > Nevertheless, from within the VM I see the following:
> > 
> > aps2:~# ethtool eth0
> > Settings for eth0:
> >         Supported ports: [ TP MII ]
> >         Supported link modes:   10baseT/Half 10baseT/Full
> >                                 100baseT/Half 100baseT/Full
> >         Supports auto-negotiation: Yes
> >         Advertised link modes:  10baseT/Half 10baseT/Full
> >                                 100baseT/Half 100baseT/Full
> >         Advertised auto-negotiation: Yes
> >         Speed: 100Mb/s
> >         Duplex: Full
> >         Port: MII
> >         PHYAD: 32
> >         Transceiver: internal
> >         Auto-negotiation: on
> >         Supports Wake-on: pumbg
> >         Wake-on: d
> >         Current message level: 0x00000007 (7)
> >         Link detected: yes
> > 
> > 
> > Do you think that gives some indication?
 
> Hmm.. I am not familiar with MRTG/cfgmaker, but from your ethtool output
> I suspect you are not using virtio.  How do you launch the guest?

The VMs are booted with a command line similar to the following:

root@ss02:~# ps ax|grep aps2|grep -v grep
28711 ?        Sl   8171:06 kvm -hda /dev/vm/aps2-raiz -hdb \
  /dev/vm/aps2-space -hdc /dev/vm/aps2-index -hdd /dev/vm/aps2-cache -m \
  4096 -smp 4 -net nic,vlan=0,macaddr=00:16:3E:00:00:27 -net tap \
  -daemonize -vnc :5 -k es -localtime -monitor \
  telnet:localhost:4005,server,nowait -serial \
  telnet:localhost:4045,server,nowait \

According to what I was reading, it would be necessary to use the
model=virtio option with -net, so I am not using virtio with these VMs.
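
If I understood correctly, the change would be something like this (an
untested sketch based on my current command line):

kvm ... -net nic,vlan=0,macaddr=00:16:3E:00:00:27,model=virtio -net tap ...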

To provide a little more information: this VM is running on a host with
Ubuntu Hardy Heron server amd64, kernel 2.6.24 and kvm-62, both
installed from the Ubuntu repositories. My plan is to upgrade in the
short term to kernel 2.6.30 with KVM-88 compiled by myself, following a
suggestion Avi made to me regarding memory-usage problems I am
having [1].

The host machine has two physical interfaces with the following
configuration:

auto br0
iface br0 inet static
         address 10.1.0.47
         netmask 255.255.0.0
         network 10.1.0.0
         broadcast 10.1.255.255
         bridge_ports eth1
         bridge_stp off
         bridge_maxwait 5

auto eth0
iface eth0 inet static
         address 10.3.0.47
         netmask 255.255.0.0
         network 10.3.0.0
         broadcast 10.3.255.255
         gateway 10.3.0.5


Running ethtool on the host machine, I get the following data:

root@ss02:~# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: g
        Link detected: yes


root@ss02:~# ethtool eth1
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: g
        Link detected: yes


So apparently, even though the host interface attached to the bridge
supports 10/100/1000, the VM's interface is 10/100 only.

> Have you actually run something like netperf or an rsync to see what
> type of bandwidth you actually get?  Perhaps this is just the mrtg
> tool getting confused about the actual interface capabilities.

What strikes me is that I have MRTG registering the bandwidth of
several routers, and in those cases it registers over 10 Mbps without
problems. These are the values I obtain in a snapshot with iptraf:

Iface     Total      IP         NonIP     BadIP       Activity
lo        120        120        0         0            3,00 kbits/sec
eth0      2654       2654       0         0           56,40 kbits/sec
eth1      346250     346250     0         0        18459,20 kbits/sec
tap0      24         24         0         0            0,40 kbits/sec
tap2      128        128        0         0            1,20 kbits/sec 
tap3      8366       8366       0         0          685,40 kbits/sec
tap1      24         24         0         0            0,40 kbits/sec
tap5      19531      19531      0         0         2742,80 kbits/sec
tap4      328258     328258     0         0        16127,80 kbits/sec

Also, according to MRTG, at this precise moment the VM interface is not
saturated at 10 Mbps. Is there some command I can run to find out which
tap device is associated with which VM?

As an additional note, I am observing that the sum of the activity on
the tap devices exceeds the measurement for eth1. Is that possible?

Thanks for your reply.

Regards,
Daniel

[1] http://thread.gmane.org/gmane.comp.emulators.kvm.devel/37704
-- 
Fingerprint: BFB3 08D6 B4D1 31B2 72B9  29CE 6696 BF1B 14E6 1D37
Powered by Debian GNU/Linux Squeeze - Linux user #188.598


* Re: Bandwidth limitation with KVM VMs
  2009-08-04  9:48       ` Daniel Bareiro
@ 2009-08-04 10:05         ` Kai Zimmer
From: Kai Zimmer @ 2009-08-04 10:05 UTC (permalink / raw)
  To: dbareiro, KVM General

Daniel Bareiro wrote:
> root@ss02:~# ps ax|grep aps2|grep -v grep
> 28711 ?        Sl   8171:06 kvm -hda /dev/vm/aps2-raiz -hdb \
>   /dev/vm/aps2-space -hdc /dev/vm/aps2-index -hdd /dev/vm/aps2-cache -m \
>   4096 -smp 4 -net nic,vlan=0,macaddr=00:16:3E:00:00:27 -net tap \
>   -daemonize -vnc :5 -k es -localtime -monitor \
>   telnet:localhost:4005,server,nowait -serial \
>   telnet:localhost:4045,server,nowait \
> 
> According to what I was reading, it would be necessary to use the
> model=virtio option with -net, so I am not using virtio with these
> VMs.

The default NIC is an rtl8139, which is 10/100 MBit. Use
qemu-kvm -net nic,model=?
for a list of the NIC models available for your target.
"model=virtio" will probably give you the best performance, if the
guest OS supports it - try it out.
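
For example (the model list varies with the qemu-kvm build; the lspci
line below is what a virtio guest typically shows):

ss02:~# kvm -net nic,model=?
qemu: Supported NIC models: ne2k_pci,i82551,...,rtl8139,e1000,pcnet,virtio

aps2:~# lspci | grep -i ethernet
00:03.0 Ethernet controller: Qumranet, Inc. Virtio network device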

best regards,
Kai
