From: Marcelo Tosatti <mtosatti@redhat.com>
To: Bruce Rogers <brogers@novell.com>
Cc: kvm@vger.kernel.org
Subject: Re: kvm scaling question
Date: Fri, 11 Sep 2009 18:53:55 -0300
Message-ID: <20090911215355.GD6244@amt.cnet>
In-Reply-To: <4AAA1A0A0200004800080E06@novprvlin0050.provo.novell.com>

On Fri, Sep 11, 2009 at 09:36:10AM -0600, Bruce Rogers wrote:
> I am wondering if anyone has investigated how well kvm scales when supporting many guests, or many vcpus or both.
> 
> I'll do some investigation into the per-VM memory overhead and
> play with bumping the max vcpu limit way beyond 16, but hopefully
> someone can comment on issues such as locking problems that are known
> to exist and need to be addressed to increase parallelism,
> general overhead percentages which can help set consolidation
> expectations, etc.

I suppose it depends on the guest and workload. With an EPT host and a
16-way Linux guest doing kernel compilations, on a recent kernel, I see:

# Samples: 98703304
#
# Overhead          Command                      Shared Object  Symbol
# ........  ...............  .................................  ......
#
    97.15%               sh  [kernel]                           [k] vmx_vcpu_run
     0.27%               sh  [kernel]                           [k] kvm_arch_vcpu_ioctl_
     0.12%               sh  [kernel]                           [k] default_send_IPI_mas
     0.09%               sh  [kernel]                           [k] _spin_lock_irq

Which is pretty good: nearly all of the time is spent in guest mode
(vmx_vcpu_run). Without EPT/NPT, the mmu_lock seems to be the major
bottleneck to parallelism.
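
For reference, a profile like the one above can be gathered on the host
with perf. A minimal sketch, assuming Intel hardware (so the kvm_intel
module) and a system-wide sample taken while the guest compiles:

    # check whether EPT is actually in use ('Y' means enabled)
    cat /sys/module/kvm_intel/parameters/ept

    # sample the whole host for 60 seconds during the guest kernel build,
    # then sort by command/DSO/symbol as in the output above
    perf record -a -- sleep 60
    perf report --sort comm,dso,symbol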

> Also, when I did a simple experiment with vcpu overcommitment, I was
> surprised how quickly performance suffered (just bringing a Linux vm
> up), since I would have assumed the additional vcpus would have been
> halted the vast majority of the time. On a 2-processor box, overcommitment
> to 8 vcpus in a guest (I know this isn't a good usage scenario, but it
> does provide some insight) caused the boot time to increase almost
> exponentially. At 16 vcpus, it took hours just to reach the GUI
> login prompt.

One probable reason for that is that vcpus which hold spinlocks in the
guest get scheduled out in favour of vcpus that spin on the same lock.
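
To see this in action, one can boot an overcommitted guest and watch the
spinlock time climb inside it. A rough sketch, assuming qemu-kvm on a
2-core host; the flags and disk image path are placeholders:

    # boot an 8-vcpu guest on the 2-core host
    qemu-kvm -smp 8 -m 1024 -drive file=/path/to/guest.img -nographic

    # then, inside the guest, run "perf top" and watch spinlock symbols
    # (e.g. _spin_lock) climb as lock holders get preempted on the host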

> Any perspective you can offer would be appreciated.
> 
> Bruce
