* [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
@ 2016-10-14 12:30 Wei Wang
  2016-10-14 12:59 ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Wei Wang @ 2016-10-14 12:30 UTC (permalink / raw)
  To: kvm, mst, marcandre.lureau, stefanha, pbonzini; +Cc: Wei Wang

PV interrupts (PVI) enable a guest to send interrupts to another guest via
hypercalls.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 pv_interrupt_controller.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 pv_interrupt_controller.c

diff --git a/pv_interrupt_controller.c b/pv_interrupt_controller.c
new file mode 100644
index 0000000..5f2431d
--- /dev/null
+++ b/pv_interrupt_controller.c
@@ -0,0 +1,27 @@
+
+The pv interrupt (PVI) hypercall is proposed to support one guest sending
+interrupts to another guest using hypercalls. The following pseudocode shows how
+a PVI is sent from the guest:
+
+#define KVM_HC_PVI 9
+kvm_hypercall2(KVM_HC_PVI, guest_uuid, guest_gsi);
+
+The new hypercall number, KVM_HC_PVI, is used for the purpose of sending PVIs.
+guest_uuid is used to identify the guest that the interrupt will be sent to.
+guest_gsi identifies the interrupt source of that guest.
+
+The PVI hypercall handler in KVM iterates the VM list (the vm_list field in
+the kvm struct), finds the guest with the passed guest_uuid, and injects an
+interrupt to the guest with the guest_gsi number.
+
+Finally, a note on the permission to send a PVI from one guest to another.
+In the PVI setup phase, the PVI receiver should get the sender's UUID (e.g. via
+the vhost-user protocol extension implemented between QEMUs), and pass it to KVM.
+Two new fields will be added to the struct kvm{ }:
+
++uuid_t uuid; // the guest uuid
++uuid_t pvi_sender_uuid[MAX_NUM]; // the sender's uuid should be registered here
+
+PVI will not be injected into the receiver guest if the sender's uuid does not
+appear in the receiver's pvi_sender_uuid table.
+
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 12:30 [PATCH RFC] kvm/hypercall: Add the PVI hypercall support Wei Wang
@ 2016-10-14 12:59 ` Paolo Bonzini
  2016-10-14 14:00   ` Wang, Wei W
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2016-10-14 12:59 UTC (permalink / raw)
  To: Wei Wang, kvm, mst, marcandre.lureau, stefanha



On 14/10/2016 14:30, Wei Wang wrote:
> PV interrupts (PVI) enable a guest to send interrupts to another guest via
> hypercalls.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  pv_interrupt_controller.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>  create mode 100644 pv_interrupt_controller.c
> 
> diff --git a/pv_interrupt_controller.c b/pv_interrupt_controller.c
> new file mode 100644
> index 0000000..5f2431d
> --- /dev/null
> +++ b/pv_interrupt_controller.c
> @@ -0,0 +1,27 @@
> +
> +The pv interrupt (PVI) hypercall is proposed to support one guest sending
> +interrupts to another guest using hypercalls. The following pseudocode shows how
> +a PVI is sent from the guest:
> +
> +#define KVM_HC_PVI 9
> +kvm_hypercall2(KVM_HC_PVI, guest_uuid, guest_gsi);
> +
> +The new hypercall number, KVM_HC_PVI, is used for the purpose of sending PVIs.
> +guest_uuid is used to identify the guest that the interrupt will be sent to.
> +guest_gsi identifies the interrupt source of that guest.
> +
> +The PVI hypercall handler in KVM iterates the VM list (the vm_list field in
> +the kvm struct), finds the guest with the passed guest_uuid, and injects an
> +interrupt to the guest with the guest_gsi number.
> +
> +Finally, a note on the permission to send a PVI from one guest to another.
> +In the PVI setup phase, the PVI receiver should get the sender's UUID (e.g. via
> +the vhost-user protocol extension implemented between QEMUs), and pass it to KVM.
> +Two new fields will be added to the struct kvm{ }:
> +
> ++uuid_t uuid; // the guest uuid
> ++uuid_t pvi_sender_uuid[MAX_NUM]; // the sender's uuid should be registered here
> +
> +PVI will not be injected to the receiver guest if the sender's uuid does not appear 
> +in the receiver's pvi_sender_uuid table.
> +
> 

Why would you do that instead of just using the local APIC?...

Paolo


* RE: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 12:59 ` Paolo Bonzini
@ 2016-10-14 14:00   ` Wang, Wei W
  2016-10-14 14:13     ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Wang, Wei W @ 2016-10-14 14:00 UTC (permalink / raw)
  To: Paolo Bonzini, kvm, mst, marcandre.lureau, stefanha

On Friday, October 14, 2016 8:59 PM, Paolo Bonzini wrote:
> On 14/10/2016 14:30, Wei Wang wrote:
> > PV interrupts (PVI) enable a guest to send interrupts to another guest via
> > hypercalls.
> >
> > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > ---
> >  pv_interrupt_controller.c | 27 +++++++++++++++++++++++++++
> >  1 file changed, 27 insertions(+)
> >  create mode 100644 pv_interrupt_controller.c
> >
> > diff --git a/pv_interrupt_controller.c b/pv_interrupt_controller.c new
> > file mode 100644 index 0000000..5f2431d
> > --- /dev/null
> > +++ b/pv_interrupt_controller.c
> > @@ -0,0 +1,27 @@
> > +
> > +The pv interrupt (PVI) hypercall is proposed to support one guest
> > +sending interrupts to another guest using hypercalls. The following
> > +pseudocode shows how a PVI is sent from the guest:
> > +
> > +#define KVM_HC_PVI 9
> > +kvm_hypercall2(KVM_HC_PVI, guest_uuid, guest_gsi);
> > +
> > +The new hypercall number, KVM_HC_PVI, is used for the purpose of sending
> PVIs.
> > +guest_uuid is used to identify the guest that the interrupt will be sent to.
> > +guest_gsi identifies the interrupt source of that guest.
> > +
> > +The PVI hypercall handler in KVM iterates the VM list (the vm_list
> > +field in the kvm struct), finds the guest with the passed guest_uuid,
> > +and injects an interrupt to the guest with the guest_gsi number.
> > +
> > +Finally, a note on the permission to send a PVI from one guest to another.
> > +In the PVI setup phase, the PVI receiver should get the sender's UUID
> > +(e.g. via the vhost-user protocol extension implemented between QEMUs),
> and pass it to KVM.
> > +Two new fields will be added to the struct kvm{ }:
> > +
> > ++uuid_t uuid; // the guest uuid
> > ++uuid_t pvi_sender_uuid[MAX_NUM]; // the sender's uuid should be
> > ++registered here
> > +
> > +PVI will not be injected to the receiver guest if the sender's uuid
> > +does not appear in the receiver's pvi_sender_uuid table.
> > +
> >
> 
> Why would you do that instead of just using the local APIC?...
> 

The interrupt will be delivered to the LAPIC - the hypercall handler injects the interrupt via kvm_set_irq(kvm, GSI, ...), which ultimately goes through the LAPIC, right?

Best,
Wei





* Re: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 14:00   ` Wang, Wei W
@ 2016-10-14 14:13     ` Paolo Bonzini
  2016-10-14 14:56       ` Wang, Wei W
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2016-10-14 14:13 UTC (permalink / raw)
  To: Wang, Wei W, kvm, mst, marcandre.lureau, stefanha



On 14/10/2016 16:00, Wang, Wei W wrote:
>> Why would you do that instead of just using the local APIC?...
> 
> The interrupt will be delivered to LAPIC - the hypercall handler
> injects the interrupt via kvm_set_irq(kvm, GSI,..), which finally
> uses LAPIC, right?

But why do you need that?  You can just deliver it to the appropriate
local APIC interrupt, there's no need to know the GSI.  The guest knows
how it has configured the GSIs.

You haven't explained the use case.

Paolo


* RE: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 14:13     ` Paolo Bonzini
@ 2016-10-14 14:56       ` Wang, Wei W
  2016-10-14 15:07         ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Wang, Wei W @ 2016-10-14 14:56 UTC (permalink / raw)
  To: Paolo Bonzini, kvm, mst, marcandre.lureau, stefanha


On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
> On 14/10/2016 16:00, Wang, Wei W wrote:
> >> Why would you do that instead of just using the local APIC?...
> >
> > The interrupt will be delivered to LAPIC - the hypercall handler
> > injects the interrupt via kvm_set_irq(kvm, GSI,..), which finally uses
> > LAPIC, right?
> 
> But why do you need that?  You can just deliver it to the appropriate local APIC
> interrupt, there's no need to know the GSI.  The guest knows how it has
> configured the GSIs.
> 
> You haven't explained the use case.

Sure. One example is sending an interrupt from a virtio driver (e.g. the vhost-pci-net driver we are working on) in one guest to a virtio-net device in another guest. To inject an interrupt into the virtio-net device, should we give the sender the GSI assigned to the virtio-net device (i.e. the GSI of an RX queue, to notify the virtio-net driver to receive packets from that RX queue)?

Can you please explain more about "just delivering it to the appropriate local APIC"? What would be the source of the interrupt that we are injecting? Thanks.

Best,
Wei


* Re: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 14:56       ` Wang, Wei W
@ 2016-10-14 15:07         ` Paolo Bonzini
  2016-10-14 16:51           ` Wang, Wei W
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2016-10-14 15:07 UTC (permalink / raw)
  To: Wang, Wei W, kvm, mst, marcandre.lureau, stefanha



On 14/10/2016 16:56, Wang, Wei W wrote:
> 
> On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
>> On 14/10/2016 16:00, Wang, Wei W wrote:
>>>> Why would you do that instead of just using the local APIC?...
>>> 
>>> The interrupt will be delivered to LAPIC - the hypercall handler
>>> injects the interrupt via kvm_set_irq(kvm, GSI,..), which finally
>>> uses LAPIC, right?
>> 
>> But why do you need that?  You can just deliver it to the
>> appropriate local APIC interrupt, there's no need to know the GSI.
>> The guest knows how it has configured the GSIs.
>> 
>> You haven't explained the use case.
> 
> Sure. One example here is to send an interrupt from a virtio driver
> (e.g. the vhost-pci-net that we are working on) on a guest to a
> virtio-net device on another guest. In terms of injecting an
> interrupt to the virtio-net device, should we give the sender the
> related GSI assigned to the virtio-net device (i.e. the GSI of an RX
> queue, to notify the virtio-net driver to receive packets from that
> RX queue)?

In terms of vhost-pci, a write to an MMIO register on the vhost side
(the guest->host doorbell) would trigger an irq on the virtio side (the
host->guest doorbell).

There is no need to know GSIs, they are entirely hidden in QEMU.

Paolo

> Can you please explain more about "just delivering it to the
> appropriate local APIC"?  What would be source of the interrupt that
> we are injecting to? Thanks.
> 
> Best, Wei
> 


* RE: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 15:07         ` Paolo Bonzini
@ 2016-10-14 16:51           ` Wang, Wei W
  2016-10-14 18:29             ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Wang, Wei W @ 2016-10-14 16:51 UTC (permalink / raw)
  To: Paolo Bonzini, kvm, mst, marcandre.lureau, stefanha

On Friday, October 14, 2016 11:08 PM, Paolo Bonzini wrote:
> On 14/10/2016 16:56, Wang, Wei W wrote:
> >
> > On Friday, October 14, 2016 10:13 PM, Paolo Bonzini wrote:
> >> On 14/10/2016 16:00, Wang, Wei W wrote:
> >>>> Why would you do that instead of just using the local APIC?...
> >>>
> >>> The interrupt will be delivered to LAPIC - the hypercall handler
> >>> injects the interrupt via kvm_set_irq(kvm, GSI,..), which finally
> >>> uses LAPIC, right?
> >>
> >> But why do you need that?  You can just deliver it to the appropriate
> >> local APIC interrupt, there's no need to know the GSI.
> >> The guest knows how it has configured the GSIs.
> >>
> >> You haven't explained the use case.
> >
> > Sure. One example here is to send an interrupt from a virtio driver
> > (e.g. the vhost-pci-net that we are working on) on a guest to a
> > virtio-net device on another guest. In terms of injecting an interrupt
> > to the virtio-net device, should we give the sender the related GSI
> > assigned to the virtio-net device (i.e. the GSI of an RX queue, to
> > notify the virtio-net driver to receive packets from that RX queue)?
> 
> In terms of vhost-pci, a write to an MMIO register on the vhost side (the guest-
> >host doorbell) would trigger an irq on the virtio side (the
> host->guest doorbell).

Yes, that's the traditional mechanism - ioeventfd and irqfd. They're fine for the current "guest virtio <--> host" notification.

But when it comes to the "guest virtio <--> guest virtio" notification case, it should be clear where the interrupt should go (e.g. which specific device interrupt it is), rather than just trapping to the host. So, instead of simply trapping to the host on an MMIO write, a hypercall gives us the flexibility to pass some parameters.

> There is no need to know GSIs, they are entirely hidden in QEMU.

The GSI number is assigned in QEMU. By reusing the traditional irqfd implementation code in QEMU, a virtq's GSI is stored in the irqfd struct - "VirtIOIRQFD->virq" - and we can pass it (or them, in the multi-queue case) to the sender. I prefer GSI because the KVM irq routing table is indexed by GSI. Would this be acceptable?
Alternatively, we can pass the vector of the virtq.

Best,
Wei






* Re: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 16:51           ` Wang, Wei W
@ 2016-10-14 18:29             ` Paolo Bonzini
  2016-10-17  6:47               ` Wang, Wei W
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2016-10-14 18:29 UTC (permalink / raw)
  To: Wei W Wang; +Cc: kvm, mst, marcandre lureau, stefanha

On Friday, October 14, 2016 6:51:53 PM, Wei W Wang <wei.w.wang@intel.com> wrote:
> When it comes to the "guest virtio<-->guest virtio" notification case, it
> should be clear where the interrupt should go to (e.g. which specific device
> interrupt it is), rather than just trapping to the host. So, instead of
> simply trapping to the host by an MMIO write, hypercall gives us the
> flexibility to pass some parameters.

What parameters do you need?  There is no difference between "which specific
device interrupt you are raising" and "which specific virtqueue you are
kicking".  The latter uses ioeventfd just fine, and VFIO also uses eventfd
successfully.

> The GSI number is assigned in QEMU. By making use of the traditional irqfd
> implementation code in QEMU, a virtq's GSI is stored in the irqfd struct -
> "VirtIOIRQFD->virq", we can pass it or them(the multi-queue case) to the
> sender. I prefer GSI, because the KVM irq routing table is indexed by GSI.
> Would this be acceptable?
> Alternatively, we can pass the vector of the virtq.

No, the hypercall will not be accepted in any form.  The established
protocols for communication between KVM and the outside world, including
other KVM instances, are MMIO write and irqfd.

Paolo


* RE: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-14 18:29             ` Paolo Bonzini
@ 2016-10-17  6:47               ` Wang, Wei W
  2016-10-17  8:59                 ` Paolo Bonzini
  0 siblings, 1 reply; 11+ messages in thread
From: Wang, Wei W @ 2016-10-17  6:47 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, mst, marcandre lureau, stefanha

On Saturday, October 15, 2016 2:30 AM, Paolo Bonzini wrote:
> On Friday, October 14, 2016 6:51:53 PM, Wei W Wang
> <wei.w.wang@intel.com> wrote:
> > When it comes to the "guest virtio<-->guest virtio" notification case,
> > it should be clear where the interrupt should go to (e.g. which
> > specific device interrupt it is), rather than just trapping to the
> > host. So, instead of simply trapping to the host by an MMIO write,
> > hypercall gives us the flexibility to pass some parameters.
> 
> What parameters do you need?  There is no difference between "which specific
> device interrupt you are raising" and "which specific virtqueue you are kicking".
> The latter uses ioeventfd just fine, and VFIO also uses eventfd successfully.

We need two parameters: destination UUID and GSI, to identify the destination VM and the destination queue interrupt.

Please let me elaborate on the two possible solutions - one based on the existing eventfd mechanism and one on the new hypercall mechanism - and how we can use them to achieve notification from the virtio1 driver to the virtio2 driver (across world contexts). We can't deliver interrupts directly from the virtio1 driver to the virtio2 driver, so for both solutions we need a trampoline - the host. A uuid field needs to be added to the kvm struct so that the trampoline knows who is who.

Generally, two steps are needed: 
Step1: virtio1's driver sends the interrupt request to the trampoline;
Step2: the trampoline sends the interrupt request to virtio2's driver.

*Solution 1. eventfd
Step1: achieved by virtio1's ioeventfd;
Step2: achieved by virtio2's irqfd.

In the setup phase, the trampoline makes a connection between virtio1's ioeventfd and virtio2's irqfd.  So, in this solution, we would need a host kernel module to do the trampoline work - connection setup and interrupt request delivery. 

*Solution 2. hypercall
Step1: achieved by hypercall
Step2: achieved by interrupt injection with GSI

We only need to patch the hypercall handler to inject the interrupt to the destination.

Pros & Cons:
From the performance point of view, the eventfd solution has a much longer code path (if we trace the whole path through which the requests are handled), which results in longer latency.
From the design point of view, I think using hypercall makes the design simple and straightforward.

> > The GSI number is assigned in QEMU. By making use of the traditional
> > irqfd implementation code in QEMU, a virtq's GSI is stored in the
> > irqfd struct - "VirtIOIRQFD->virq", we can pass it or them(the
> > multi-queue case) to the sender. I prefer GSI, because the KVM irq routing
> table is indexed by GSI.
> > Would this be acceptable?
> > Alternatively, we can pass the vector of the virtq.
> 
> No, the hypercall will not be accepted in any form.  The established protocols
> for communication between KVM and the outside world, including other KVM
> instances, are MMIO write and irqfd.

Could you please give more details on why a hypercall is not welcome, given that hypercalls have already been implemented in KVM for some usages? Thanks.

Best,
Wei



* Re: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-17  6:47               ` Wang, Wei W
@ 2016-10-17  8:59                 ` Paolo Bonzini
  2016-10-19 10:54                   ` Wang, Wei W
  0 siblings, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2016-10-17  8:59 UTC (permalink / raw)
  To: Wang, Wei W; +Cc: kvm, mst, marcandre lureau, stefanha



On 17/10/2016 08:47, Wang, Wei W wrote:
> Please let me elaborate on the two possible solutions based on the
> existing eventfd mechanism and the new hypercall mechanism - how can
> we use them to achieve the notification from virtio1 driver to
> virtio2 driver (across world contexts). We can't directly deliver
> interrupts from virtio1 driver to virtio2 driver, so here, for both
> solutions, we need a trampoline - the host. A uuid field is necessary
> to be added to the kvm struct, so that the trampoline can know who is
> who.

This is already problematic.  KVM tries really, really hard to avoid any
global state across VMs.  If you define a global UUID, you'll also have
to design how to make it safe against multiple users of KVM, and how it
interacts with features like user namespace.  And you'll also have to
explain it to me, since I'm not at all a security expert.  That may be
harder than the design. :)

> Generally, two steps are needed: 
> Step1: virtio1's driver sends the interrupt request to the trampoline;
> Step2: the trampoline sends the interrupt request to virtio2's driver.
> 
> *Solution 1. eventfd
> Step1: achieved by virtio1's ioeventfd;
> Step2: achieved by virtio2's irqfd.
> 
> In the setup phase, the trampoline makes a connection between
> virtio1's ioeventfd and virtio2's irqfd. So, in this solution, we would
> need a host kernel module to do the trampoline work - connection setup
> and interrupt request delivery. 

No, you don't!  The point is that you can pass the same file descriptor
to KVM_IOEVENTFD and KVM_IRQFD.  The virtio-net VM can pass the irqfd to
the vhost-net VM, via the vhost socket.  This is exactly how things work
for vhost-user.  vhost-pci can additionally use the received file
descriptor as the ioeventfd.

>> No, the hypercall will not be accepted in any form.  The established protocols
>> for communication between KVM and the outside world, including other KVM
>> instances, are MMIO write and irqfd.
> 
> Could you please give more details about why hypercall is not
> welcomed, given the fact that it has already been implemented in KVM for
> some usages? Thanks.

Well, hypercalls aren't really that common in KVM. :)  There are exactly
two, and one of them does nothing except force a vmexit.

Anyway, here are four good reasons why this hypercall is not welcome:

1) irqfd seems to be fast enough for VFIO and existing vhost backends,
so it should be fast enough for vhost-pci as well;

2) if irqfd is not fast enough, optimizing it would benefit VFIO and
existing vhost backends, so we should first look into that anyway;

3) vhost-pci's host part should be basically a vhost-user backend
implemented by QEMU.  Any deviation from that should be considered very
carefully;

4) vhost-pci's first use case should be with DPDK, which does polling
anyway, not interrupts.

Paolo


* RE: [PATCH RFC] kvm/hypercall: Add the PVI hypercall support
  2016-10-17  8:59                 ` Paolo Bonzini
@ 2016-10-19 10:54                   ` Wang, Wei W
  0 siblings, 0 replies; 11+ messages in thread
From: Wang, Wei W @ 2016-10-19 10:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: kvm, mst, marcandre lureau, stefanha

On Monday, October 17, 2016 4:59 PM, Paolo Bonzini wrote: 
> On 17/10/2016 08:47, Wang, Wei W wrote:
> Well, hypercalls aren't really that common in KVM. :)  There are exactly two, and
> one of them does nothing except force a vmexit.
> 
> Anyway, here are four good reasons why this hypercall is not welcome:
> 
> 1) irqfd seems to be fast enough for VFIO and existing vhost backends, so it
> should be fast enough for vhost-pci as well;
> 
> 2) if irqfd is not fast enough, optimizing it would benefit VFIO and existing vhost
> backends, so we should first look into that anyway;
> 
> 3) vhost-pci's host part should be basically a vhost-user backend implemented by
> QEMU.  Any deviation from that should be considered very carefully;
> 
> 4) vhost-pci's first use case should be with DPDK, which does polling anyway, not
> interrupts.
> 

Thanks Paolo for the comments. I will take your suggestions and send out a new version of the design.

Best,
Wei



end of thread, other threads:[~2016-10-19 14:22 UTC | newest]

Thread overview: 11+ messages
2016-10-14 12:30 [PATCH RFC] kvm/hypercall: Add the PVI hypercall support Wei Wang
2016-10-14 12:59 ` Paolo Bonzini
2016-10-14 14:00   ` Wang, Wei W
2016-10-14 14:13     ` Paolo Bonzini
2016-10-14 14:56       ` Wang, Wei W
2016-10-14 15:07         ` Paolo Bonzini
2016-10-14 16:51           ` Wang, Wei W
2016-10-14 18:29             ` Paolo Bonzini
2016-10-17  6:47               ` Wang, Wei W
2016-10-17  8:59                 ` Paolo Bonzini
2016-10-19 10:54                   ` Wang, Wei W
