On Wed, 2020-12-02 at 13:12 +0000, Joao Martins wrote:
> On 12/2/20 11:17 AM, David Woodhouse wrote:
> > I might be more inclined to go for a model where the kernel handles
> > the evtchn_pending/evtchn_mask for us. What would go into the irq
> > routing table is { vcpu, port# } which get passed to
> > kvm_xen_evtchn_send().
>
> But by passing the port into the routing table and handling the
> sending of events in the kernel, wouldn't that lead to unnecessary
> handling of event channels which aren't handled by the kernel,
> compared to just caring about the upcall injection?

Well, I'm generally in favour of *not* doing things in the kernel that
don't need to be there.

But if the kernel is going to short-circuit the IPIs and VIRQs, then
it's already going to have to handle the evtchn_pending/evtchn_mask
bitmaps and actually inject interrupts. Given that it has to have that
functionality anyway, it seems saner to let the kernel have full
control over it and to just expose 'evtchn_send' to userspace.

The alternative is to have userspace trying to play along with the
atomic handling of those bitmasks too, and injecting events through
KVM_INTERRUPT/KVM_SIGNAL_MSI in parallel with the kernel doing so.
That seems like *more* complexity, not less.

> I wanted to mention the GSI callback method too, but wasn't entirely
> sure if eventfd was enough.

That actually works quite nicely, even for a userspace irqchip.

Forgetting Xen for the moment... my model for a userspace I/OAPIC with
interrupt remapping is that during normal runtime, the irqfd is
assigned and everything just works; we can even have IRQ posting for
eventfds which came from VFIO.

When the IOMMU invalidates an IRQ translation, all it does is
*deassign* the irqfd from the KVM IRQ. So the next time that eventfd
fires, it's caught in the userspace event loop instead. Userspace can
then retranslate through the IOMMU and reassign the irqfd for next
time.

So, back to Xen. As things stand with just the hypercall+shinfo
patches I've already rebased, we have enough to do fully functional
Xen hosting. The event channels are slow, but it *can* all be done
entirely in userspace: handling *all* the hypercalls, and delivering
interrupts to the guest in whatever mode is required.

Event channels are a very important optimisation, though.

For the VMM API I think we should follow the Xen model, mixing the
domain-wide and per-vCPU configuration. It's the best way to
faithfully model the behaviour a true Xen guest would experience.

So KVM_XEN_ATTR_TYPE_CALLBACK_VIA can be used to set one of:

 • HVMIRQ_callback_vector, taking a vector#
 • HVMIRQ_callback_gsi for the in-kernel irqchip, taking a GSI#

And *maybe* in a later patch it could also handle:

 • HVMIRQ_callback_gsi for split-irqchip, taking an eventfd
 • HVMIRQ_callback_pci_intx, taking an eventfd (or a pair, for EOI?)

I don't know if the latter two really make sense. After all, the
argument for handling IPI/VIRQ in the kernel kind of falls over if we
have to bounce out to userspace anyway. So it *only* makes sense if
those eventfds actually end up wired *through* userspace to a KVM
IRQFD, as I described above for the IOMMU case.

In addition to that per-domain setup, we'd also have a per-vCPU
KVM_XEN_ATTR_TYPE_VCPU_CALLBACK_VECTOR which takes {vCPU, vector}.
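
For the avoidance of doubt, here's roughly the shape I have in mind
for the { vcpu, port# } routing entry quoted at the top. Purely
illustrative and untested; the struct and semantics below are invented
for this mail, not existing uAPI:

#include <linux/types.h>

struct kvm_irq_routing_xen_evtchn {
	__u32 port;	/* guest event channel port# */
	__u32 vcpu;	/* target vCPU index */
};

/*
 * When a GSI with this routing type is raised (e.g. from an irqfd),
 * the kernel would call kvm_xen_evtchn_send(): atomically set the
 * port's bit in shared_info->evtchn_pending, and if the port isn't
 * set in evtchn_mask, propagate it to the target vCPU's
 * evtchn_pending_sel / evtchn_upcall_pending and inject the upcall.
 */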
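
And to be concrete about the I/OAPIC remapping dance: the
assign/deassign is just the existing KVM_IRQFD ioctl, so nothing new
is needed on the kernel side. A minimal sketch, error handling
elided:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Normal runtime: wire the eventfd straight to the KVM GSI, so events
 * are injected without bouncing through userspace at all. */
static int irqfd_assign(int vm_fd, int event_fd, uint32_t gsi)
{
	struct kvm_irqfd irqfd = { .fd = event_fd, .gsi = gsi };

	return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}

/* On IRQ translation invalidation: detach it again. The next event is
 * then caught in the userspace poll loop, which retranslates through
 * the emulated IOMMU and calls irqfd_assign() for next time. */
static int irqfd_deassign(int vm_fd, int event_fd, uint32_t gsi)
{
	struct kvm_irqfd irqfd = {
		.fd = event_fd,
		.gsi = gsi,
		.flags = KVM_IRQFD_FLAG_DEASSIGN,
	};

	return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}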
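
Something like this for the attribute, perhaps. Again just a sketch
to show the domain-wide vs. per-vCPU split; the field layout is a
guess and none of this is merged uAPI:

#include <linux/types.h>

struct kvm_xen_hvm_attr {
	__u32 type;	/* KVM_XEN_ATTR_TYPE_CALLBACK_VIA, ... */
	union {
		/* Domain-wide: how the upcall is delivered. */
		struct {
			__u32 via;	/* HVMIRQ_callback_vector / _gsi / _pci_intx */
			union {
				__u8  vector;	/* HVMIRQ_callback_vector */
				__u32 gsi;	/* HVMIRQ_callback_gsi, kernel irqchip */
				__s32 eventfd;	/* the *maybe* split-irqchip/INTx cases */
			};
		} callback_via;
		/* Per-vCPU: KVM_XEN_ATTR_TYPE_VCPU_CALLBACK_VECTOR */
		struct {
			__u32 vcpu;
			__u8  vector;
		} vcpu_callback;
	} u;
};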