All of lore.kernel.org
 help / color / mirror / Atom feed
From: Sasha Levin <levinsasha928@gmail.com>
To: Gleb Natapov <gleb@redhat.com>
Cc: kvm <kvm@vger.kernel.org>
Subject: Re: APIC lookups
Date: Sat, 03 Sep 2011 10:42:20 +0300	[thread overview]
Message-ID: <1315035740.31676.36.camel@lappy> (raw)
In-Reply-To: <20110903073208.GK26451@redhat.com>

On Sat, 2011-09-03 at 10:32 +0300, Gleb Natapov wrote:
> On Fri, Sep 02, 2011 at 10:08:42PM +0300, Sasha Levin wrote:
> > On Fri, 2011-09-02 at 21:13 +0300, Gleb Natapov wrote:
> > > On Fri, Sep 02, 2011 at 08:55:55PM +0300, Sasha Levin wrote:
> > > > Hi,
> > > > 
> > > > I've noticed that kvm_irq_delivery_to_apic() is locating the destination
> > > > APIC by running through kvm_for_each_vcpu() which becomes a scalability
> > > > issue with a large number if vcpus.
> > > > 
> > > > I'm thinking about speeding that up using a radix tree for lookups, and
> > > > was wondering if it sounds right.
> > > > 
> > > We have to call kvm_apic_match_dest() on each apic to see if it should
> > > get the message. Single message can be sent to more than one apic. It is
> > > likely possible to optimize common case of physical addressing fixed
> > > destination, but then just use array of 256 elements, no need for a tree.
> > 
> > I think it's also possible to handle it for logical addressing as well,
> > instead of a simple compare we just need to go through all the IDs that
> > would 'and' with the dest.
> > 
> There are two kinds of logical addressing: flat and cluster. And
> I see nothing that prevents different CPUs be in different mode.
> 

Hm... I thought that when using logical addressing it's either flat or
cluster, not both.

In that case - yes, let's skip that.

> It is better to cache lookup result in irq routing entry to speedup
> following interrupts.
> 
> > > Do you see this function in profiling?
> > 
> > I was running profiling to see which functions get much slower during
> > regular operation (not boot) when you run with large amount of vcpus,
> > and this was one of them.
> > 
> > Though this is probably due to the method we use to find lowest priority
> > and not the lookups themselves.
> > 
> Currently we round robin between all cpus on each interrupt when lowest priority
> delivery is used. We should do it on each N interrupts where N >> 1.

I'll try that and see how it improves performance.

-- 

Sasha.


      reply	other threads:[~2011-09-03  7:42 UTC|newest]

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2011-09-02 17:55 APIC lookups Sasha Levin
2011-09-02 18:07 ` Sasha Levin
2011-09-02 18:13 ` Gleb Natapov
2011-09-02 19:08   ` Sasha Levin
2011-09-03  7:32     ` Gleb Natapov
2011-09-03  7:42       ` Sasha Levin [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1315035740.31676.36.camel@lappy \
    --to=levinsasha928@gmail.com \
    --cc=gleb@redhat.com \
    --cc=kvm@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.