* how to tweak kernel to get the best out of kvm?
@ 2010-03-05 15:20 Harald Dunkel
  2010-03-08 11:02 ` Avi Kivity
  0 siblings, 1 reply; 9+ messages in thread
From: Harald Dunkel @ 2010-03-05 15:20 UTC (permalink / raw)
  To: KVM Mailing List

Hi folks,

Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up
all block device or file system bandwidth, so that the kvm guests
become almost unresponsive. This is _very_ bad. I would like to make
sure that the kvm guests do not affect each other, and that all of
them (including the server itself) get a fair share of computing
power and memory.

What config options would you suggest to build and run a Linux
kernel optimized for running kvm clients?

Sorry for asking, but AFAICS some general guidelines for kvm are
missing here. Of course I saw a lot of options in
Documentation/kernel-parameters.txt, but unfortunately I am not a
kernel hacker.

Any helpful comment would be highly appreciated.


Regards

Harri


* Re: how to tweak kernel to get the best out of kvm?
  2010-03-05 15:20 how to tweak kernel to get the best out of kvm? Harald Dunkel
@ 2010-03-08 11:02 ` Avi Kivity
       [not found]   ` <4B979776.1000701@aixigo.de>
  0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2010-03-08 11:02 UTC (permalink / raw)
  To: Harald Dunkel; +Cc: KVM Mailing List

On 03/05/2010 05:20 PM, Harald Dunkel wrote:
>
> Hi folks,
>
> Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up
> all block device or file system bandwidth, so that the kvm guests
> become almost unresponsive. This is _very_ bad. I would like to make
> sure that the kvm guests do not affect each other, and that all of
> them (including the server itself) get a fair share of computing
> power and memory.
>    

Please describe the issue in detail and provide output from 'vmstat' and 'top'.
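
For example (the intervals are just a suggestion), run something like
this on the host while the guests are unresponsive:

  vmstat 1 30     # 30 one-second samples; watch the 'wa' (iowait) column
  top -b -n 3     # a few batch-mode snapshots, easy to attach to a mail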

> What config options would you suggest to build and run a Linux
> kernel optimized for running kvm clients?
>
> Sorry for asking, but AFAICS some general guidelines for kvm are
> missing here. Of course I saw a lot of options in
> Documentation/kernel-parameters.txt, but unfortunately I am not a
> kernel hacker.
>
> Any helpful comment would be highly appreciated.
>    

One way to ensure guests don't affect each other is not to overcommit:
make sure each guest gets its own cores, there is enough memory for all
guests, and guests have separate disks.  Of course that defeats some of
the reasons for virtualizing in the first place; but if you share
resources, some compromises must be made.

If you do share resources, then Linux manages how they are shared.  The 
scheduler will share the processors, the memory management subsystem 
will share memory, and the I/O scheduler will share disk bandwidth.  If 
you see a problem in one of these areas you will need to tune the 
subsystem that is misbehaving.
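
For disk I/O, one quick and reversible experiment is to try a different
I/O scheduler on the host (assuming the disk is sda; adjust to taste):

  cat /sys/block/sda/queue/scheduler      # the active one is in brackets
  echo deadline > /sys/block/sda/queue/scheduler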

There is also a larger effort to improve control over sharing called
control groups (cgroups).  You may want to read up on this as it can
provide very fine-grained control over resource sharing.
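
As a sketch only (the mount point and group name are arbitrary, and
which controllers you have depends on your kernel config), giving one
guest a lower CPU weight could look like:

  mount -t cgroup -o cpu none /dev/cgroup
  mkdir /dev/cgroup/guest1
  echo 512 > /dev/cgroup/guest1/cpu.shares   # default weight is 1024
  echo $QEMU_PID > /dev/cgroup/guest1/tasks  # move the qemu process in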

-- 
error compiling committee.c: too many arguments to function



* Re: how to tweak kernel to get the best out of kvm?
       [not found]   ` <4B979776.1000701@aixigo.de>
@ 2010-03-10 13:15     ` Avi Kivity
  2010-03-10 15:57       ` Javier Guerra Giraldez
  2010-03-11 13:24       ` Harald Dunkel
  0 siblings, 2 replies; 9+ messages in thread
From: Avi Kivity @ 2010-03-10 13:15 UTC (permalink / raw)
  To: Harald Dunkel; +Cc: Harald Dunkel, KVM Mailing List

On 03/10/2010 02:58 PM, Harald Dunkel wrote:
> Hi Avi,
>
> On 03/08/10 12:02, Avi Kivity wrote:
>    
>> On 03/05/2010 05:20 PM, Harald Dunkel wrote:
>>      
>>>
>>> Hi folks,
>>>
>>> Problem: My kvm server (8 cores, 64 GByte RAM, amd64) can eat up
>>> all block device or file system bandwidth, so that the kvm guests
>>> become almost unresponsive. This is _very_ bad. I would like to make
>>> sure that the kvm guests do not affect each other, and that all of
>>> them (including the server itself) get a fair share of computing
>>> power and memory.
>>>
>>>        
>> Please describe the issue in detail and provide output from 'vmstat'
>> and 'top'.
>>
>>      
> Sorry for the delay. I cannot put these services at risk, so I
> have set up a test environment on another host (2 quadcore Xeons,
> ht enabled, 32 GByte RAM, no swap, bridged networking) to
> reproduce the problem.
>
> There are 8 virtual hosts, each with a single CPU, 1 GByte RAM
> and 4 GByte swap on a virtual disk. The virtual disks are image
> files in the local file system. These images are not shared.
>
> For testing each virtual host builds the Linux kernel. In
> parallel I am running rsync to clone a remote virtual machine
> (22 GByte) to the local physical disk.
>
> Attached you can find the requested logs. The kern.log shows
> the problem: the virtual CPUs appear to get stuck. Several
> virtual hosts showed this effect. One vhost was unresponsive
> for more than 30 minutes.
>
> Surely this is a stress test, but I had a similar effect with
> our virtual mail server on the production system while I
> was running a similar rsync session: mailhost was unresponsive
> for more than 2 minutes, then it came back. The other 8 virtual
> hosts on this system were running but idle (AFAICT).
>
>    

You have tons of iowait time, indicating an I/O bottleneck.

What filesystem are you using for the host?  Are you using qcow2 or raw
access?  What's the qemu command line?

Perhaps your filesystem doesn't perform well on synchronous writes.  For 
testing only, you might try cache=writeback.
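
That is, something along these lines (the image path is made up):

  -drive file=/path/to/guest.img,if=virtio,cache=writeback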

> BTW, please note that free memory goes down over time. This
> happens only if the rsync is running. Without rsync the free
> memory is stable.
>    

That's expected.  rsync fills up the guest and host pagecaches; both
drain free memory (the guest only until it has touched all of its
memory).
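
You can watch this on the host with free; what disappears from the
'free' column reappears under 'cached':

  free -m -s 5     # repeat every 5 seconds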

>>> What config options would you suggest to build and run a Linux
>>> kernel optimized for running kvm clients?
>>>
>>> Sorry for asking, but AFAICS some general guidelines for kvm are
>>> missing here. Of course I saw a lot of options in
>>> Documentation/kernel-parameters.txt, but unfortunately I am not a
>>> kernel hacker.
>>>
>>> Any helpful comment would be highly appreciated.
>>>
>>>        
>> One way to ensure guests don't affect each other is not to overcommit:
>> make sure each guest gets its own cores, there is enough memory for all
>> guests, and guests have separate disks.  Of course that defeats some of
>> the reasons for virtualizing in the first place; but if you share
>> resources, some compromises must be made.
>>
>>      
> How many virtual machines would you assume I could run on a
> host with 64 GByte RAM, 2 quad cores, a bonded NIC with
> 4*1 Gbit/s and a hardware RAID? Each vhost is supposed to
> get 4 GByte RAM and 1 CPU.
>    

15 guests should fit comfortably, more with ksm running if the workloads 
are similar, or if you use ballooning.
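
(Rough arithmetic: 15 guests x 4 GByte = 60 GByte, which leaves about
4 GByte for the host itself, qemu overhead and pagecache.)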

>    
>> If you do share resources, then Linux manages how they are shared.  The
>> scheduler will share the processors, the memory management subsystem
>> will share memory, and the I/O scheduler will share disk bandwidth.  If
>> you see a problem in one of these areas you will need to tune the
>> subsystem that is misbehaving.
>>
>>      
> Do you think that the bridge connecting the tunnel devices and
> the real NIC causes the problems? Is there also a subsystem managing
> network access?
>    

Here the problem is likely the host filesystem and/or I/O scheduler.

The optimal layout is to place guest disks in LVM volumes and access
them with -drive file=...,cache=none.  However, file-based access should
also work.
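
For example (the volume group name and size are made up):

  lvcreate -L 8G -n guest0 vg0
  ... -drive file=/dev/vg0/guest0,if=virtio,cache=none,boot=on ...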

-- 
error compiling committee.c: too many arguments to function



* Re: how to tweak kernel to get the best out of kvm?
  2010-03-10 13:15     ` Avi Kivity
@ 2010-03-10 15:57       ` Javier Guerra Giraldez
  2010-03-10 16:00         ` Avi Kivity
  2010-03-11 13:24       ` Harald Dunkel
  1 sibling, 1 reply; 9+ messages in thread
From: Javier Guerra Giraldez @ 2010-03-10 15:57 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Harald Dunkel, Harald Dunkel, KVM Mailing List

On Wed, Mar 10, 2010 at 8:15 AM, Avi Kivity <avi@redhat.com> wrote:
> 15 guests should fit comfortably, more with ksm running if the workloads are
> similar, or if you use ballooning.

is there any simple way to get some stats to see how ksm is doing?

-- 
Javier


* Re: how to tweak kernel to get the best out of kvm?
  2010-03-10 15:57       ` Javier Guerra Giraldez
@ 2010-03-10 16:00         ` Avi Kivity
  0 siblings, 0 replies; 9+ messages in thread
From: Avi Kivity @ 2010-03-10 16:00 UTC (permalink / raw)
  To: Javier Guerra Giraldez; +Cc: Harald Dunkel, Harald Dunkel, KVM Mailing List

On 03/10/2010 05:57 PM, Javier Guerra Giraldez wrote:
> On Wed, Mar 10, 2010 at 8:15 AM, Avi Kivity<avi@redhat.com>  wrote:
>    
>> 15 guests should fit comfortably, more with ksm running if the workloads are
>> similar, or if you use ballooning.
>>      
> is there any simple way to get some stats to see how ksm is doing?
>    

See /sys/kernel/mm/ksm
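
For example (the exact file names depend on the kernel version):

  grep . /sys/kernel/mm/ksm/*

pages_sharing divided by pages_shared gives a rough idea of how much
duplication ksm has found.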

-- 
error compiling committee.c: too many arguments to function



* Re: how to tweak kernel to get the best out of kvm?
  2010-03-10 13:15     ` Avi Kivity
  2010-03-10 15:57       ` Javier Guerra Giraldez
@ 2010-03-11 13:24       ` Harald Dunkel
  2010-03-13  8:54         ` Avi Kivity
  1 sibling, 1 reply; 9+ messages in thread
From: Harald Dunkel @ 2010-03-11 13:24 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Harald Dunkel, KVM Mailing List


Hi Avi,

I forgot to include some important syslog lines from the
host system. See the attachment.

On 03/10/10 14:15, Avi Kivity wrote:
> 
> You have tons of iowait time, indicating an I/O bottleneck.
> 

Is this disk IO or network IO? The rsync session puts a
high load on both, but actually I do not see how a high
load on disk or block IO could make the virtual hosts
unresponsive, as shown in the host's syslog.


> What filesystem are you using for the host?  Are you using qcow2 or raw
> access?  What's the qemu command line?
> 

It is ext3 and qcow2. Currently I am testing with reiserfs
on the host system. The system performance seems to be worse
compared with ext3.

Here is the kvm command line (as generated by libvirt):

/usr/bin/kvm -S -M pc-0.11 -enable-kvm -m 1024 -smp 1 -name test0.0 \
	-uuid 74e71149-4baf-3af0-9c99-f4e50273296f \
	-monitor unix:/var/lib/libvirt/qemu/test0.0.monitor,server,nowait \
	-boot c -drive if=ide,media=cdrom,bus=1,unit=0 \
	-drive file=/export/storage/test0.0.img,if=virtio,boot=on \
	-net nic,macaddr=00:16:36:94:7e:f3,vlan=0,model=virtio,name=net0 \
	-net tap,fd=60,vlan=0,name=hostnet0 -serial pty -parallel none \
	-usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -balloon virtio

>>>      
>> How many virtual machines would you assume I could run on a
>> host with 64 GByte RAM, 2 quad cores, a bonding NIC with
>> 4*1Gbit/sec and a hardware RAID? Each vhost is supposed to
>> get 4 GByte RAM and 1 CPU.
>>    
> 
> 15 guests should fit comfortably, more with ksm running if the workloads
> are similar, or if you use ballooning.
> 

15 vhosts would be nice. ksm is in the kernel, but not in my qemu-kvm
(yet).
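
(For reference, ksm scanning is switched on with
'echo 1 > /sys/kernel/mm/ksm/run', but it only merges memory that an
application has marked with madvise(MADV_MERGEABLE), so a ksm-aware
qemu-kvm is needed for it to help guests.)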

> 
> Here the problem is likely the host filesystem and/or I/O scheduler.
> 
> The optimal layout is placing guest disks in LVM volumes, and accessing
> them with -drive file=...,cache=none.  However, file-based access should
> also work.
> 

I will try LVM tomorrow, when the test with reiserfs is completed.


Many thanx

Harri

[-- Attachment #2: syslog.gz --]
[-- Type: application/gzip, Size: 1164 bytes --]


* Re: how to tweak kernel to get the best out of kvm?
  2010-03-11 13:24       ` Harald Dunkel
@ 2010-03-13  8:54         ` Avi Kivity
  2010-03-15 13:54           ` Harald Dunkel
  0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2010-03-13  8:54 UTC (permalink / raw)
  To: Harald Dunkel; +Cc: Harald Dunkel, KVM Mailing List

On 03/11/2010 03:24 PM, Harald Dunkel wrote:
> Hi Avi,
>
> I had missed to include some important syslog lines from the
> host system. See attachment.
>
> On 03/10/10 14:15, Avi Kivity wrote:
>    
>> You have tons of iowait time, indicating an I/O bottleneck.
>>
>>      
> Is this disk IO or network IO?

Disk.

> The rsync session puts a
> high load on both, but actually I do not see how a high
> load on disk or block IO could make the virtual hosts
> unresponsive, as shown in the host's syslog.
>
>    

qcow2 is still not fully asynchronous, so sometimes when it waits, a 
vcpu waits as well.

>> Here the problem is likely the host filesystem and/or I/O scheduler.
>>
>> The optimal layout is to place guest disks in LVM volumes and access
>> them with -drive file=...,cache=none.  However, file-based access should
>> also work.
>>
>>      
> I will try LVM tomorrow, when the test with reiserfs is completed.
>
>    

If the slowdown is indeed due to I/O, LVM (with cache=none) should
eliminate it completely.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



* Re: how to tweak kernel to get the best out of kvm?
  2010-03-13  8:54         ` Avi Kivity
@ 2010-03-15 13:54           ` Harald Dunkel
  0 siblings, 0 replies; 9+ messages in thread
From: Harald Dunkel @ 2010-03-15 13:54 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Harald Dunkel, KVM Mailing List

On 03/13/10 09:54, Avi Kivity wrote:
> 
> If the slowdown is indeed due to I/O, LVM (with cache=none) should
> eliminate it completely.
> 
As promised I have installed LVM: the difference is remarkable.
My test case (running 8 vhosts in parallel, each building a Linux
kernel) just works. There are no blocked jobs (so far), all
vhosts can be pinged. Great.

Many thanx for your help, and for the nice software, of course.


Regards

Harri


* Re: how to tweak kernel to get the best out of kvm?
@ 2010-04-23  8:47 Alec Istomin
  0 siblings, 0 replies; 9+ messages in thread
From: Alec Istomin @ 2010-04-23  8:47 UTC (permalink / raw)
  To: kvm

Hi there!
I'm trying to understand a kvm performance bottleneck, and I was hoping
to get sharing statistics from ksm. I have not been able to locate them
so far: the /sys entry mentioned below is missing, although the ksm
module is loaded.

I'm using the latest from RHEL 5 x64 (2.6.18-194.el5,
kmod-kvm-83-164.el5). I'm sorry if this is the wrong place for a
redhat-specific question.

Thanks,
 Alec
 
On Wed, 10 Mar 2010 08:01:12 -0800, Avi Kivity wrote:
>  See /sys/kernel/mm/ksm



