From: David Woodhouse <dwmw2@infradead.org>
To: Joao Martins <joao.m.martins@oracle.com>,
Ankur Arora <ankur.a.arora@oracle.com>
Cc: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Ingo Molnar" <mingo@redhat.com>,
"Borislav Petkov" <bp@alien8.de>,
"H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 10/39] KVM: x86/xen: support upcall vector
Date: Wed, 02 Dec 2020 19:02:33 +0000 [thread overview]
Message-ID: <052867ae1c997487d85c21e995feb5647ac6c458.camel@infradead.org> (raw)
In-Reply-To: <c43024b3-6508-3b77-870c-da81e74284a4@oracle.com>
On Wed, 2020-12-02 at 18:34 +0000, Joao Martins wrote:
> On 12/2/20 4:47 PM, David Woodhouse wrote:
> > On Wed, 2020-12-02 at 13:12 +0000, Joao Martins wrote:
> > > On 12/2/20 11:17 AM, David Woodhouse wrote:
> > > > I might be more inclined to go for a model where the kernel handles the
> > > > evtchn_pending/evtchn_mask for us. What would go into the irq routing
> > > > table is { vcpu, port# } which get passed to kvm_xen_evtchn_send().
> > >
> > > But wouldn't passing the port to the routing table and handling the
> > > sending of events lead to unnecessary handling of event channels which
> > > aren't handled by the kernel, compared to just caring about injecting
> > > the upcall?
> >
> > Well, I'm generally in favour of *not* doing things in the kernel that
> > don't need to be there.
> >
> > But if the kernel is going to short-circuit the IPIs and VIRQs, then
> > it's already going to have to handle the evtchn_pending/evtchn_mask
> > bitmaps, and actually injecting interrupts.
> >
>
> Right. I was trying to point that out in the discussion we had on the
> next patch. But truth be told, it was more about touting the idea of the
> kernel knowing whether a given event channel is registered for userspace
> handling, rather than fully handling the event channel.
>
> I suppose we are able to provide both options to the VMM anyway,
> i.e. 1) letting it handle things entirely in userspace by intercepting
> EVTCHNOP_send, or 2) through the irq route if we want the kernel to
> offload it.
Right. The kernel takes what it knows about and anything else goes up
to userspace.
I do like the way you've handled the vcpu binding in userspace, and the
kernel just knows that a given port goes to a given target CPU.
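Roughly what I have in mind, as a purely illustrative sketch (the names
here are made up, not the actual KVM code): the kernel keeps a
port-to-target-vCPU binding for the ports userspace has offloaded via
the irq routing table, and anything it doesn't know about exits to the
VMM.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical names throughout -- this is a sketch of the offload
 * split, not the real KVM implementation.
 */
#define MAX_PORTS    64
#define PORT_UNBOUND (-1)

static int port_to_vcpu[MAX_PORTS];

/* What the irq routing entry would carry: { vcpu, port }. */
struct xen_evtchn_route {
	int vcpu;
	unsigned int port;
};

static void evtchn_init(void)
{
	for (int i = 0; i < MAX_PORTS; i++)
		port_to_vcpu[i] = PORT_UNBOUND;
}

static void evtchn_bind(unsigned int port, int vcpu)
{
	port_to_vcpu[port] = vcpu;
}

/*
 * Returns true if the kernel delivered the event to the bound vCPU,
 * false if the port is unknown and must be punted up to userspace.
 */
static bool kvm_xen_evtchn_send(unsigned int port)
{
	if (port >= MAX_PORTS || port_to_vcpu[port] == PORT_UNBOUND)
		return false;
	/* ...set the evtchn_pending bit and inject the upcall... */
	return true;
}
```

So EVTCHNOP_send from the guest only traps out to the VMM for ports the
VMM never asked the kernel to handle.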
>
> > For the VMM
> > API I think we should follow the Xen model, mixing the domain-wide and
> > per-vCPU configuration. It's the best way to faithfully model the
> > behaviour a true Xen guest would experience.
> >
> > So KVM_XEN_ATTR_TYPE_CALLBACK_VIA can be used to set one of
> > • HVMIRQ_callback_vector, taking a vector#
> > • HVMIRQ_callback_gsi for the in-kernel irqchip, taking a GSI#
> >
> > And *maybe* in a later patch it could also handle
> > • HVMIRQ_callback_gsi for split-irqchip, taking an eventfd
> > • HVMIRQ_callback_pci_intx, taking an eventfd (or a pair, for EOI?)
> >
>
> Most of the Xen versions we cared about had callback_vector and the
> per-vCPU callback vector (despite Linux not using the latter). But if
> you're dating back to 3.2 and 4.1 (or certain Windows drivers), I
> suppose gsi and pci-intx are must-haves.
Not sure about GSI, but PCI-INTX is definitely something I've seen in
active use by customers recently. I think SLES10 will use that.
> I feel we could just accommodate it as a subtype in KVM_XEN_ATTR_TYPE_CALLBACK_VIA.
> I don't see the advantage in having another xen attr type.
Yeah, fair enough.
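Something like this, perhaps — an illustrative layout only, mirroring
Xen's HVMIRQ_callback_* types as the subtype (all field and constant
names here are hypothetical, not a real KVM uAPI):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical payload for a KVM_XEN_ATTR_TYPE_CALLBACK_VIA attribute
 * with a 'via' subtype. Names are illustrative.
 */
enum callback_via_type {
	CALLBACK_VIA_VECTOR,	/* HVMIRQ_callback_vector: vector# */
	CALLBACK_VIA_GSI,	/* HVMIRQ_callback_gsi: GSI# */
	CALLBACK_VIA_PCI_INTX,	/* HVMIRQ_callback_pci_intx: dev/intx */
};

struct kvm_xen_callback_via {
	uint32_t type;
	union {
		uint32_t vector;
		uint32_t gsi;
		struct {
			uint8_t dev;
			uint8_t intx;
		} pci;
	} u;
};
```

One attribute, one union, and the split-irqchip/eventfd variants can
grow extra union members later without a new attr type.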
> But I kinda have mixed feelings about having the kernel handle the whole
> event channel ABI, as opposed to only the ones userspace asked to
> offload. It looks a tad unnecessary, beyond the gain to VMMs that then
> don't need to care about the internals of event channels. Performance-wise
> it wouldn't bring anything better. But maybe the former is reason enough
> to consider it.
Yeah, we'll see. Especially when it comes to implementing FIFO event
channels, I'd rather just do it in one place — and if the kernel does
it anyway then it's hardly difficult to hook into that.
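For the 2-level ABI, what the kernel would be taking on amounts to
something like this — a simplified, non-atomic userspace sketch with
trimmed-down structures (the real thing needs atomic test-and-set on
guest-shared memory, and FIFO is a different structure entirely):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/* Cut-down 2-level event channel state, modelled on the Xen
 * shared_info and vcpu_info layouts (single 64-bit word size). */
struct shinfo {
	uint64_t evtchn_pending[16];
	uint64_t evtchn_mask[16];
};

struct vcpuinfo {
	uint8_t  evtchn_upcall_pending;
	uint64_t evtchn_pending_sel;
};

/*
 * Deliver 'port' to vCPU 'v'. Returns true if an upcall should be
 * injected. The pending bit is always set; only an unmasked port
 * propagates into evtchn_pending_sel and evtchn_upcall_pending.
 */
static bool evtchn_deliver(struct shinfo *s, struct vcpuinfo *v,
			   unsigned int port)
{
	unsigned int word = port / BITS_PER_WORD;
	uint64_t bit = 1ULL << (port % BITS_PER_WORD);

	s->evtchn_pending[word] |= bit;
	if (s->evtchn_mask[word] & bit)
		return false;	/* masked: pending set, no upcall */

	v->evtchn_pending_sel |= 1ULL << word;
	v->evtchn_upcall_pending = 1;
	return true;
}
```

Small, but fiddly enough that having it in exactly one place is
appealing.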
But I've been about as coherent as I can be in email, and I think we're
generally aligned on the direction. I'll do some more experiments and
see what I can get working, and what it looks like.
I'm focusing on making the shinfo stuff all use kvm_map_gfn() first.