From: Joao Martins <joao.m.martins@oracle.com>
To: David Woodhouse <dwmw2@infradead.org>,
	Ankur Arora <ankur.a.arora@oracle.com>
Cc: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Borislav Petkov" <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 10/39] KVM: x86/xen: support upcall vector
Date: Tue, 5 Jan 2021 12:11:46 +0000	[thread overview]
Message-ID: <abd12f38-a2fa-1e7c-72e9-4d2296c7b4a2@oracle.com> (raw)
In-Reply-To: <c4a1a6824c534d1f1420f6c64d72b02eb5caf4c6.camel@infradead.org>

On 1/1/21 2:33 PM, David Woodhouse wrote:
> On Wed, 2020-12-02 at 18:34 +0000, Joao Martins wrote:
>>> But if the kernel is going to short-circuit the IPIs and VIRQs, then
>>> it's already going to have to handle the evtchn_pending/evtchn_mask
>>> bitmaps, and actually inject interrupts.
>>>
>>
>> Right. I was trying to point that out in the discussion we had on the
>> next patch. But truth be told, it was more about floating the idea of
>> the kernel knowing whether a given event channel is registered for
>> userspace handling, rather than fully handling the event channel.
>>
>> I suppose we can provide both options to the VMM anyway, i.e.
>> 1) letting it handle everything entirely in userspace by intercepting
>> EVTCHNOP_send, or 2) going through the IRQ route if we want the kernel
>> to offload it.
>>
>>> Given that it has to have that functionality anyway, it seems saner to
>>> let the kernel have full control over it and to just expose
>>> 'evtchn_send' to userspace.
>>>
>>> The alternative is to have userspace trying to play along with the
>>> atomic handling of those bitmasks too 
>>
>> That part is not particularly hard -- I already have it done.
> 
> Right, for 2-level it works out fairly well. I like your model of
> installing { vcpu_id, port } into the KVM IRQ routing table and that's
> enough for the kernel to raise the event channels that it *needs* to
> know about, while userspace can do the others for itself. It's just
> atomic test-and-set bitop stuff with no real need for coordination.
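
(For reference, the 2-level delivery the kernel would do for a routed port
is roughly the below -- a minimal sketch following the Xen
shared_info/vcpu_info layout, not the actual KVM code; the function name
and return convention are made up here:)

#include <linux/bitops.h>
#include <xen/interface/xen.h>

/*
 * Minimal sketch of 2-level event delivery.  Returns true when the
 * caller still needs to inject the upcall vector into the vCPU.
 */
static bool raise_evtchn_2l(struct shared_info *shinfo,
			    struct vcpu_info *vi, unsigned int port)
{
	/* Mark the port pending; nothing more to do if it already was. */
	if (test_and_set_bit(port, (unsigned long *)shinfo->evtchn_pending))
		return false;

	/* A masked port stays pending but raises no upcall. */
	if (test_bit(port, (unsigned long *)shinfo->evtchn_mask))
		return false;

	/* Fold the port into the per-vCPU selector word... */
	set_bit(port / BITS_PER_LONG,
		(unsigned long *)&vi->evtchn_pending_sel);

	/* ...and request the upcall unless one is already pending. */
	return !xchg(&vi->evtchn_upcall_pending, 1);
}

(The only cross-party coordination there is the xchg on
evtchn_upcall_pending, which is why splitting 2-level ports between
kernel and userspace works without further locking.)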
> 
> For FIFO event channels it gets more fun, because the updates aren't
> truly atomic — they require a spinlock around the three operations that
> the host has to perform when linking an event into a queue: 
> 
>  • Set the new port's LINKED bit
>  • Set the previous head's LINK to point to the new port
>  • Store the new port# as the head.
> 
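(The three steps above, sketched under a per-queue lock;
EVTCHN_FIFO_LINKED and EVTCHN_FIFO_LINK_MASK are the real ABI constants
from xen/interface/event_channel.h, while the queue struct and function
are made up for illustration -- real code would additionally need
atomics against the guest consuming events concurrently:)

#include <linux/spinlock.h>
#include <xen/interface/event_channel.h>

struct fifo_queue {
	spinlock_t lock;
	u32 head;	/* port stored as the queue head, 0 if empty */
};

static void fifo_link_port(struct fifo_queue *q, u32 *event_words, u32 port)
{
	u32 *word = &event_words[port];
	u32 *prev;

	spin_lock(&q->lock);

	/* 1) Mark the new port as linked. */
	*word |= 1u << EVTCHN_FIFO_LINKED;

	/* 2) Point the previous head's LINK field at the new port. */
	if (q->head) {
		prev = &event_words[q->head];
		*prev = (*prev & ~EVTCHN_FIFO_LINK_MASK) | port;
	}

	/* 3) Store the new port as the head. */
	q->head = port;

	spin_unlock(&q->lock);
}
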
> One option might be to declare that for FIFO, all updates for a given
> queue *must* be handled either by the kernel or by userspace, with no
> sharing of control.
> 
> Or maybe there's a way to keep the kernel side simple by avoiding the
> tail handling completely. Surely we only really care about kernel
> handling of the *fast* path, where a single event channel is triggered
> and handled immediately? In the high-latency case where we're gathering
> multiple events in a queue before the guest ever gets to process them, 
> we might as well let that be handled by userspace, right?
> 
> So in that case, perhaps the kernel part could forget all the horrid
> nonsense with tracking the tail of the queue. It would handle the event
> in-kernel *only* in the case where the event is the *first* in the
> queue, and the head was previously zero?
> 
> But even that isn't a simple atomic operation though; we still have to
> mark the event LINKED, then update the head pointer to reference it.
> And what if we set the 'LINKED' bit but then find that userspace has
> already linked another port so ->head is no longer zero?
> 
> Can we just clear the LINKED bit and then punt the whole event for
> userspace to (re)handle? Or raise a special event to userspace so it
> knows it needs to go ahead and link the port even though its LINKED bit
> has already been set?
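
(To make the race concrete: a hypothetical "first event only" fast path,
with made-up helpers around the real EVTCHN_FIFO_LINKED bit -- the window
between the two steps is exactly where userspace can slip in:)

/* Hypothetical "link only if the queue was empty" fast path. */
static int fifo_try_fast_link(u32 *word, u32 *head, u32 port)
{
	/* Step 1: publish LINKED on the event word (simplified; a real
	 * word update needs cmpxchg against concurrent guest updates). */
	if (*word & (1u << EVTCHN_FIFO_LINKED))
		return 0;			/* already queued somewhere */
	*word |= 1u << EVTCHN_FIFO_LINKED;

	/* <-- userspace may link another port right here */

	/* Step 2: become the head, but only if the queue was empty. */
	if (cmpxchg(head, 0, port) != 0) {
		/*
		 * Lost the race: the queue is no longer empty and we do
		 * not track the tail, so we cannot complete the link.
		 * Either clear LINKED again and punt the whole event to
		 * userspace, or signal userspace to finish the link --
		 * the two unappealing options above.
		 */
		return -EBUSY;
	}

	return 1;				/* delivered in-kernel */
}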
> 
> None of the available options really fill me with joy; I'm somewhat
> inclined just to declare that the kernel won't support acceleration of
> FIFO event channels at all.
> 
> None of which matters a *huge* amount right now if I was only going to
> leave that as a future optimisation anyway.
> 
ACK.

> What it does actually mean in the short term is that as I update your
> KVM_IRQ_ROUTING_XEN_EVTCHN support, I probably *won't* bother to add a
> 'priority' field to struct kvm_irq_routing_xen_evtchn to make it
> extensible to FIFO event channels. We can always add that later.
> 
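(i.e. the routing entry would carry just the { vcpu_id, port } pair
mentioned above -- a sketch of such a uapi struct, not necessarily the
final layout:)

/* Sketch of a routing entry without a priority field; not necessarily
 * the final uapi layout. */
struct kvm_irq_routing_xen_evtchn {
	__u32 port;
	__u32 vcpu;
};
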
> Does that seem reasonable?
> 
Yes, makes sense IMHO. Guests need to fall back to 2-level anyway if the
EVTCHNOP_init_control hypercall fails, and the way we are handling events
doesn't warrant making FIFO event channel support mandatory.
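
(For context, the guest-side probe is just EVTCHNOP_init_control --
sketched below, modeled on Linux's drivers/xen/events/events_fifo.c;
details such as the vCPU id translation are elided:)

/*
 * Guest-side FIFO probe.  If EVTCHNOP_init_control fails (e.g. -ENOSYS
 * from a host that only implements 2-level), the caller keeps using the
 * 2-level ABI.
 */
static int fifo_init_control(void *control_block, unsigned int cpu)
{
	struct evtchn_init_control init_control = {
		.control_gfn = virt_to_gfn(control_block),
		.offset      = 0,
		.vcpu        = cpu,
	};

	return HYPERVISOR_event_channel_op(EVTCHNOP_init_control,
					   &init_control);
}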

Despite the many FIFO event channel features, IIRC the main driving
motivation was to go beyond the 2-level limit of 1K/4K ports for
32-bit/64-bit guests, up to 128K ports per guest. But that was mostly a
limit for Domain-0, since it hosts all the backend handling in the
traditional deployment (i.e. without driver domains), which caps how many
virtual devices one can host in the system. For dense VM consolidation,
where you host 1K guests with e.g. 3 block devices and 1 network
interface each, you would quickly run out of interdomain event channel
ports in Dom0. But that is not the case here.

We are anyway ABI-limited to HVM_MAX_VCPUS (128), and if we assume 6
event channels per vCPU for the legacy guest (4 IPI events, 1 timer,
1 debug), that still leaves 4096 - 128 * 6 = 3328 ports for interdomain
event channels for a 128-vCPU HVM guest when using the 2-level event
channel ABI.

	Joao

