From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: Sheng Yang <sheng@linux.intel.com>
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	xen-devel <xen-devel@lists.xensource.com>,
	Jeremy Fitzhardinge <Jeremy.Fitzhardinge@citrix.com>,
	Keir Fraser <Keir.Fraser@eu.citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Re: [PATCH 5/7] xen: Make event channel work with PV featured HVM
Date: Tue, 9 Feb 2010 18:01:30 +0000
Message-ID: <alpine.DEB.2.00.1002091731500.11349@kaball-desktop>
In-Reply-To: <201002100117.54755.sheng@linux.intel.com>

On Tue, 9 Feb 2010, Sheng Yang wrote:
> Thanks Stefano, I hadn't considered this before...
> 
> But for evtchn/vector mapping, I think there is still a problem in
> this case.
> 
> To support MSI natively, the LAPIC is a must, because the IA32 MSI
> message address/data encode not only the vector number but also
> information like the LAPIC delivery mode, destination mode, etc. So if
> we want to support MSI natively, we need the LAPIC. But discarding the
> LAPIC is the target of this patchset, due to its unnecessary VMExits;
> we would replace it with evtchn.
> 
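(As a point of reference, the MSI address/data layout per the Intel
SDM looks roughly like the sketch below; the macro names are purely
illustrative, not taken from any existing header or from these
patches:)

    /* x86 MSI address register: the vector alone is not enough, the
     * address part also encodes LAPIC routing information. */
    #define MSI_ADDR_BASE          0xfee00000u /* fixed upper bits   */
    #define MSI_ADDR_DEST_ID(id)   (((id) & 0xffu) << 12) /* LAPIC ID */
    #define MSI_ADDR_DEST_MODE_LOG (1u << 2)  /* 0=physical, 1=logical */
    #define MSI_ADDR_REDIR_HINT    (1u << 3)  /* lowest-priority hint  */

    /* x86 MSI data register: */
    #define MSI_DATA_VECTOR(v)     ((v) & 0xffu)       /* bits 7:0    */
    #define MSI_DATA_DELIVERY(m)   (((m) & 0x7u) << 8) /* fixed/lowpri */
    #define MSI_DATA_TRIGGER_LVL   (1u << 15)          /* 0=edge, 1=level */
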
> And still, the target of this patch is to eliminate the overhead of
> interrupt handling. In particular, our target overhead is the *APIC,
> because it causes unnecessary VMExits on current hardware (e.g. on
> EOI). So we introduced evtchn, because it is a mature shared-memory
> based event delivery mechanism with minimal overhead. We replace the
> *APIC with a dynamic IRQ chip, which is more efficient, with no more
> unnecessary VMExits. Because we enable evtchn, we can support PV
> drivers seamlessly - though, as you know, this can also be done by the
> platform PCI driver. The main target of this patch is to benefit
> interrupt-intensive assigned devices, and we would only support
> MSI/MSI-X devices (if you don't mind two more lines of code in Xen, we
> can also get assigned-device support now with MSI2INTx translation,
> but I think that is a little hacky). We are working on evtchn support
> for MSI/MSI-X devices; we already have workable patches, but we want a
> solution for both PV featured HVM and pv_ops dom0, so we are still
> pursuing an approach that upstream Linux can accept.
> 
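(For readers following along: the shared-memory delivery mechanism
referred to above is a two-level pending/mask bitmap in the shared
info page. A simplified sketch of the dispatch loop, condensed from
what drivers/xen/events.c does; handle_port() stands in for the real
per-port handler dispatch:)

    static void evtchn_do_upcall(struct shared_info *s,
                                 struct vcpu_info *v)
    {
        unsigned long sel, pending;
        unsigned int word, bit;

        v->evtchn_upcall_pending = 0;
        /* Atomically grab and clear the per-vcpu selector word. */
        sel = xchg(&v->evtchn_pending_sel, 0);
        while (sel) {
            word = __ffs(sel);
            sel &= ~(1UL << word);
            /* Pending ports in this word, skipping masked ones. */
            pending = s->evtchn_pending[word] & ~s->evtchn_mask[word];
            while (pending) {
                bit = __ffs(pending);
                pending &= ~(1UL << bit);
                sync_clear_bit(bit, &s->evtchn_pending[word]);
                handle_port(word * BITS_PER_LONG + bit);
            }
        }
    }
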
> In fact, I don't think the guest evtchn code was written with
> coexistence with other interrupt delivery mechanisms in mind. Much of
> the code is exclusive and self-contained, so using it exclusively
> seems to me a good idea to keep things simple and natural (and, sure,
> it is the easy way as well). It may be necessary to touch some generic
> code to make evtchn coexist with the *APIC. At the same time, the
> MSI/MSI-X benefit is a must for us, which means no LAPIC...

First you say that for MSI to work the LAPIC is a must, but then you
say that for performance reasons you want to avoid the LAPIC
altogether. Which one is correct?

If you want to avoid the LAPIC, then my suggestion of mapping vectors
into event channels is still a good one (assuming it is actually
possible without touching generic kernel code; to be sure, it needs to
be tried).
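
Something like the following shape on the guest side, purely
hypothetical (none of these names exist anywhere today; the table and
both helpers are made up for illustration):

    /* Hypothetical guest-side table mapping x86 vectors to event
     * channel ports; vectors with no entry keep the *APIC path.
     * evtchn_handle_port() stands in for the normal evtchn dispatch. */
    static int evtchn_port_for_vector[NR_VECTORS];

    static bool deliver_vector_as_evtchn(unsigned int vector)
    {
        int port = evtchn_port_for_vector[vector];

        if (!port)
            return false;           /* fall back to the *APIC path */
        evtchn_handle_port(port);   /* deliver via event channel   */
        return true;
    }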

Regarding making event channels coexist with the *APIC, my suggestion
is actually more similar to what you have already done than you think:
instead of a global switch, just use a per-device (actually
per-vector) switch.
The principal difference would be that in Xen, instead of having all
the assert_irq related changes and a global
if ( is_hvm_pv_evtchn_domain(d) ), your changes would be limited to
vlapic.c, and you would check that the guest enabled event channels as
the delivery mechanism for that particular vector, like
if ( delivery_mode(vlapic, vec) == EVENT_CHANNEL ).
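
In other words, something along these lines (purely illustrative; the
helper names below are made up, not existing Xen functions):

    /* Illustrative only: deliver through an event channel when the
     * guest opted in for this particular vector, otherwise fall back
     * to normal LAPIC emulation.  delivery_mode() and
     * vector_to_evtchn() are hypothetical helpers. */
    static void vlapic_deliver(struct vlapic *vlapic, uint8_t vec)
    {
        if ( delivery_mode(vlapic, vec) == EVENT_CHANNEL )
        {
            evtchn_set_pending(vlapic_domain(vlapic),
                               vector_to_evtchn(vlapic, vec));
            return;
        }
        vlapic_set_irq(vlapic, vec);  /* normal emulated LAPIC path */
    }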

> And I still have a question about "flexibility": how much do we
> benefit if evtchn can coexist with the *APIC? What I can think of is
> some level-triggered interrupts, like USB, but those are rare and not
> useful when we are targeting servers. Well, in that case I think
> PVonHVM could fit the job better...

It is not only about flexibility, but also about code changes in
delicate code paths, and about designing a system that can work with
PCI passthrough and MSI too.
You said that you are working on patches to make MSI devices work:
maybe seeing a working implementation would convince us which approach
is the correct one.


