qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt
@ 2021-09-30  2:33 Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX Cindy Lu
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

these patches add the support for configure interrupt

These codes are all tested in vp-vdpa (support configure interrupt)
vdpa_sim (not support configure interrupt), virtio tap device

test in virtio-pci bus and virtio-mmio bus

Change in v2:
Add support for virtio-mmio bus
active the notifier while the backend support configure interrupt
misc fixes from v1

Change in v3
fix the coding style problems

Change in v4
misc fixes from v3
merge the set_config_notifier to set_guest_notifier
when vdpa start, check the feature by VIRTIO_NET_F_STATUS

Change in v5
misc fixes from v4
split the code to introduce configure interrupt type and the callback function
will init the configure interrupt in all virtio-pci and virtio-mmio bus, but will
only active while using vhost-vdpa driver

Change in v6
misc fixes from v5
decouple virtqueue from interrupt setting and misc process
fix the bug in virtio_net_handle_rx
use -1 as the queue number to identify if the interrupt is configure interrupt

Change in v7
misc fixes from v6
decouple virtqueue from interrupt setting and misc process
decouple virtqueue from vector use/release process
decouple virtqueue from set notifier fd handler process
move config_notifier and masked_config_notifier to VirtIODevice
fix the bug in virtio_net_handle_rx, add more information
add VIRTIO_CONFIG_IRQ_IDX as the queue number to identify if the interrupt is configure interrupt

Change in v8
misc fixes from v7
decouple virtqueue from interrupt setting and misc process
decouple virtqueue from vector use/release process
decouple virtqueue from set notifier fd handler process
move the vhost configure interrupt to vhost_net

Change in v9
misc fixes from v8
address the comments for v8

Cindy Lu (10):
  virtio: introduce macro IRTIO_CONFIG_IRQ_IDX
  virtio-pci: decouple notifier from interrupt process
  virtio-pci: decouple the single vector from the interrupt process
  vhost: add new call back function for config interrupt
  vhost-vdpa: add support for config interrupt call back
  virtio: add support for configure interrupt
  virtio-net: add support for configure interrupt
  vhost: add support for configure interrupt
  virtio-mmio: add support for configure interrupt
  virtio-pci: add support for configure interrupt

 hw/display/vhost-user-gpu.c       |   6 +
 hw/net/vhost_net.c                |  10 ++
 hw/net/virtio-net.c               |  16 +-
 hw/virtio/trace-events            |   2 +
 hw/virtio/vhost-user-fs.c         |   9 +-
 hw/virtio/vhost-vdpa.c            |   7 +
 hw/virtio/vhost-vsock-common.c    |   6 +
 hw/virtio/vhost.c                 |  76 +++++++++
 hw/virtio/virtio-crypto.c         |   6 +
 hw/virtio/virtio-mmio.c           |  27 ++++
 hw/virtio/virtio-pci.c            | 260 ++++++++++++++++++++----------
 hw/virtio/virtio-pci.h            |   4 +-
 hw/virtio/virtio.c                |  29 ++++
 include/hw/virtio/vhost-backend.h |   3 +
 include/hw/virtio/vhost.h         |   4 +
 include/hw/virtio/virtio.h        |   6 +
 include/net/vhost_net.h           |   3 +
 17 files changed, 386 insertions(+), 88 deletions(-)

-- 
2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-10-19  6:47   ` Michael S. Tsirkin
  2021-09-30  2:33 ` [PATCH v9 02/10] virtio-pci: decouple notifier from interrupt process Cindy Lu
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

To support configure interrupt for vhost-vdpa
introduce VIRTIO_CONFIG_IRQ_IDX -1 as config queue index, Then we can reuse
the function guest_notifier_mask and guest_notifier_pending.
Add the check of queue index, if the driver does not support configure
interrupt, the function will just return

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/display/vhost-user-gpu.c    |  6 ++++++
 hw/net/virtio-net.c            | 10 +++++++---
 hw/virtio/vhost-user-fs.c      |  9 +++++++--
 hw/virtio/vhost-vsock-common.c |  6 ++++++
 hw/virtio/virtio-crypto.c      |  6 ++++++
 include/hw/virtio/virtio.h     |  2 ++
 6 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/hw/display/vhost-user-gpu.c b/hw/display/vhost-user-gpu.c
index 49df56cd14..73ad3d84c9 100644
--- a/hw/display/vhost-user-gpu.c
+++ b/hw/display/vhost-user-gpu.c
@@ -485,6 +485,9 @@ vhost_user_gpu_guest_notifier_pending(VirtIODevice *vdev, int idx)
 {
     VhostUserGPU *g = VHOST_USER_GPU(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return false;
+    }
     return vhost_virtqueue_pending(&g->vhost->dev, idx);
 }
 
@@ -493,6 +496,9 @@ vhost_user_gpu_guest_notifier_mask(VirtIODevice *vdev, int idx, bool mask)
 {
     VhostUserGPU *g = VHOST_USER_GPU(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return;
+    }
     vhost_virtqueue_mask(&g->vhost->dev, vdev, idx, mask);
 }
 
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 16d20cdee5..65b7cabcaf 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3152,7 +3152,10 @@ static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
     VirtIONet *n = VIRTIO_NET(vdev);
     NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
     assert(n->vhost_started);
-    return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
+    if (idx != VIRTIO_CONFIG_IRQ_IDX) {
+        return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
+    }
+    return false;
 }
 
 static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
@@ -3161,8 +3164,9 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
     VirtIONet *n = VIRTIO_NET(vdev);
     NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
     assert(n->vhost_started);
-    vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
-                             vdev, idx, mask);
+    if (idx != VIRTIO_CONFIG_IRQ_IDX) {
+        vhost_net_virtqueue_mask(get_vhost_net(nc->peer), vdev, idx, mask);
+    }
 }
 
 static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index c595957983..309c8efabf 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -156,11 +156,13 @@ static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
      */
 }
 
-static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
-                                            bool mask)
+static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx, bool mask)
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return;
+    }
     vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
 }
 
@@ -168,6 +170,9 @@ static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return false;
+    }
     return vhost_virtqueue_pending(&fs->vhost_dev, idx);
 }
 
diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..2112b44802 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -101,6 +101,9 @@ static void vhost_vsock_common_guest_notifier_mask(VirtIODevice *vdev, int idx,
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return;
+    }
     vhost_virtqueue_mask(&vvc->vhost_dev, vdev, idx, mask);
 }
 
@@ -109,6 +112,9 @@ static bool vhost_vsock_common_guest_notifier_pending(VirtIODevice *vdev,
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return false;
+    }
     return vhost_virtqueue_pending(&vvc->vhost_dev, idx);
 }
 
diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
index 54f9bbb789..1d5192f8b4 100644
--- a/hw/virtio/virtio-crypto.c
+++ b/hw/virtio/virtio-crypto.c
@@ -948,6 +948,9 @@ static void virtio_crypto_guest_notifier_mask(VirtIODevice *vdev, int idx,
 
     assert(vcrypto->vhost_started);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return;
+    }
     cryptodev_vhost_virtqueue_mask(vdev, queue, idx, mask);
 }
 
@@ -958,6 +961,9 @@ static bool virtio_crypto_guest_notifier_pending(VirtIODevice *vdev, int idx)
 
     assert(vcrypto->vhost_started);
 
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return false;
+    }
     return cryptodev_vhost_virtqueue_pending(vdev, queue, idx);
 }
 
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index fa711a8912..2766c293f4 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -67,6 +67,8 @@ typedef struct VirtQueueElement
 
 #define VIRTIO_NO_VECTOR 0xffff
 
+#define VIRTIO_CONFIG_IRQ_IDX -1
+
 #define TYPE_VIRTIO_DEVICE "virtio-device"
 OBJECT_DECLARE_TYPE(VirtIODevice, VirtioDeviceClass, VIRTIO_DEVICE)
 
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 02/10] virtio-pci: decouple notifier from interrupt process
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 03/10] virtio-pci: decouple the single vector from the " Cindy Lu
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

To reuse the notifier process in configure interrupt.
Use the virtio_pci_get_notifier function to get the notifier.
the INPUT of this function is the IDX, the OUTPUT is notifier and
the vector

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/virtio-pci.c | 84 +++++++++++++++++++++++++++++-------------
 1 file changed, 58 insertions(+), 26 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 433060ac02..456782c43e 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -704,29 +704,45 @@ static void kvm_virtio_pci_vq_vector_release(VirtIOPCIProxy *proxy,
 }
 
 static int kvm_virtio_pci_irqfd_use(VirtIOPCIProxy *proxy,
-                                 unsigned int queue_no,
+                                 EventNotifier *n,
                                  unsigned int vector)
 {
     VirtIOIRQFD *irqfd = &proxy->vector_irqfd[vector];
-    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    VirtQueue *vq = virtio_get_queue(vdev, queue_no);
-    EventNotifier *n = virtio_queue_get_guest_notifier(vq);
     return kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, n, NULL, irqfd->virq);
 }
 
 static void kvm_virtio_pci_irqfd_release(VirtIOPCIProxy *proxy,
-                                      unsigned int queue_no,
+                                      EventNotifier *n ,
                                       unsigned int vector)
 {
-    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
-    VirtQueue *vq = virtio_get_queue(vdev, queue_no);
-    EventNotifier *n = virtio_queue_get_guest_notifier(vq);
     VirtIOIRQFD *irqfd = &proxy->vector_irqfd[vector];
     int ret;
 
     ret = kvm_irqchip_remove_irqfd_notifier_gsi(kvm_state, n, irqfd->virq);
     assert(ret == 0);
 }
+static int virtio_pci_get_notifier(VirtIOPCIProxy *proxy, int queue_no,
+                                      EventNotifier **n, unsigned int *vector)
+{
+    PCIDevice *dev = &proxy->pci_dev;
+    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+    VirtQueue *vq;
+
+    if (queue_no == VIRTIO_CONFIG_IRQ_IDX) {
+        return -1;
+    } else {
+        if (!virtio_queue_get_num(vdev, queue_no)) {
+            return -1;
+        }
+        *vector = virtio_queue_vector(vdev, queue_no);
+        vq = virtio_get_queue(vdev, queue_no);
+        *n = virtio_queue_get_guest_notifier(vq);
+    }
+    if (*vector >= msix_nr_vectors_allocated(dev)) {
+        return -1;
+    }
+    return 0;
+}
 
 static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
 {
@@ -735,12 +751,15 @@ static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
     unsigned int vector;
     int ret, queue_no;
-
+    EventNotifier *n;
     for (queue_no = 0; queue_no < nvqs; queue_no++) {
         if (!virtio_queue_get_num(vdev, queue_no)) {
             break;
         }
-        vector = virtio_queue_vector(vdev, queue_no);
+        ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+        if (ret < 0) {
+            break;
+        }
         if (vector >= msix_nr_vectors_allocated(dev)) {
             continue;
         }
@@ -752,7 +771,7 @@ static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
          * Otherwise, delay until unmasked in the frontend.
          */
         if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
+            ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
             if (ret < 0) {
                 kvm_virtio_pci_vq_vector_release(proxy, vector);
                 goto undo;
@@ -768,7 +787,11 @@ undo:
             continue;
         }
         if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            kvm_virtio_pci_irqfd_release(proxy, queue_no, vector);
+            ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+            if (ret < 0) {
+                break;
+            }
+            kvm_virtio_pci_irqfd_release(proxy, n, vector);
         }
         kvm_virtio_pci_vq_vector_release(proxy, vector);
     }
@@ -782,12 +805,16 @@ static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
     unsigned int vector;
     int queue_no;
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
-
+    EventNotifier *n;
+    int ret ;
     for (queue_no = 0; queue_no < nvqs; queue_no++) {
         if (!virtio_queue_get_num(vdev, queue_no)) {
             break;
         }
-        vector = virtio_queue_vector(vdev, queue_no);
+        ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+        if (ret < 0) {
+            break;
+        }
         if (vector >= msix_nr_vectors_allocated(dev)) {
             continue;
         }
@@ -795,21 +822,20 @@ static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
          * Otherwise, it was cleaned when masked in the frontend.
          */
         if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            kvm_virtio_pci_irqfd_release(proxy, queue_no, vector);
+            kvm_virtio_pci_irqfd_release(proxy, n, vector);
         }
         kvm_virtio_pci_vq_vector_release(proxy, vector);
     }
 }
 
-static int virtio_pci_vq_vector_unmask(VirtIOPCIProxy *proxy,
+static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
                                        unsigned int queue_no,
                                        unsigned int vector,
-                                       MSIMessage msg)
+                                       MSIMessage msg,
+                                       EventNotifier *n)
 {
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
-    VirtQueue *vq = virtio_get_queue(vdev, queue_no);
-    EventNotifier *n = virtio_queue_get_guest_notifier(vq);
     VirtIOIRQFD *irqfd;
     int ret = 0;
 
@@ -836,14 +862,15 @@ static int virtio_pci_vq_vector_unmask(VirtIOPCIProxy *proxy,
             event_notifier_set(n);
         }
     } else {
-        ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
+        ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
     }
     return ret;
 }
 
-static void virtio_pci_vq_vector_mask(VirtIOPCIProxy *proxy,
+static void virtio_pci_one_vector_mask(VirtIOPCIProxy *proxy,
                                              unsigned int queue_no,
-                                             unsigned int vector)
+                                             unsigned int vector,
+                                             EventNotifier *n)
 {
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
@@ -854,7 +881,7 @@ static void virtio_pci_vq_vector_mask(VirtIOPCIProxy *proxy,
     if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
         k->guest_notifier_mask(vdev, queue_no, true);
     } else {
-        kvm_virtio_pci_irqfd_release(proxy, queue_no, vector);
+        kvm_virtio_pci_irqfd_release(proxy, n, vector);
     }
 }
 
@@ -864,6 +891,7 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    EventNotifier *n;
     int ret, index, unmasked = 0;
 
     while (vq) {
@@ -872,7 +900,8 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
             break;
         }
         if (index < proxy->nvqs_with_notifiers) {
-            ret = virtio_pci_vq_vector_unmask(proxy, index, vector, msg);
+            n = virtio_queue_get_guest_notifier(vq);
+            ret = virtio_pci_one_vector_unmask(proxy, index, vector, msg, n);
             if (ret < 0) {
                 goto undo;
             }
@@ -888,7 +917,8 @@ undo:
     while (vq && unmasked >= 0) {
         index = virtio_get_queue_index(vq);
         if (index < proxy->nvqs_with_notifiers) {
-            virtio_pci_vq_vector_mask(proxy, index, vector);
+            n = virtio_queue_get_guest_notifier(vq);
+            virtio_pci_one_vector_mask(proxy, index, vector, n);
             --unmasked;
         }
         vq = virtio_vector_next_queue(vq);
@@ -901,15 +931,17 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
     VirtIOPCIProxy *proxy = container_of(dev, VirtIOPCIProxy, pci_dev);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtQueue *vq = virtio_vector_first_queue(vdev, vector);
+    EventNotifier *n;
     int index;
 
     while (vq) {
         index = virtio_get_queue_index(vq);
+        n = virtio_queue_get_guest_notifier(vq);
         if (!virtio_queue_get_num(vdev, index)) {
             break;
         }
         if (index < proxy->nvqs_with_notifiers) {
-            virtio_pci_vq_vector_mask(proxy, index, vector);
+            virtio_pci_one_vector_mask(proxy, index, vector, n);
         }
         vq = virtio_vector_next_queue(vq);
     }
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 03/10] virtio-pci: decouple the single vector from the interrupt process
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 02/10] virtio-pci: decouple notifier from interrupt process Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 04/10] vhost: add new call back function for config interrupt Cindy Lu
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

To reuse the interrupt process in configure interrupt
decouple the single vector from the interrupt process. Add new function
the kvm_virtio_pci_vector_use_one and _release_one. these functions are use
for the single vector, the whole process will finish in a loop for vq number.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/virtio-pci.c | 130 +++++++++++++++++++++++------------------
 1 file changed, 72 insertions(+), 58 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 456782c43e..d0a2c2fb81 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -677,7 +677,6 @@ static uint32_t virtio_read_config(PCIDevice *pci_dev,
 }
 
 static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
-                                        unsigned int queue_no,
                                         unsigned int vector)
 {
     VirtIOIRQFD *irqfd = &proxy->vector_irqfd[vector];
@@ -744,87 +743,102 @@ static int virtio_pci_get_notifier(VirtIOPCIProxy *proxy, int queue_no,
     return 0;
 }
 
-static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
+static int kvm_virtio_pci_vector_use_one(VirtIOPCIProxy *proxy, int queue_no)
 {
+    unsigned int vector;
+    int ret;
+    EventNotifier *n;
     PCIDevice *dev = &proxy->pci_dev;
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
-    unsigned int vector;
-    int ret, queue_no;
-    EventNotifier *n;
-    for (queue_no = 0; queue_no < nvqs; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
-            break;
-        }
-        ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
-        if (ret < 0) {
-            break;
-        }
-        if (vector >= msix_nr_vectors_allocated(dev)) {
-            continue;
-        }
-        ret = kvm_virtio_pci_vq_vector_use(proxy, queue_no, vector);
+
+    ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+    if (ret < 0) {
+        return ret;
+    }
+    if (vector >= msix_nr_vectors_allocated(dev)) {
+        return -1;
+    }
+    ret = kvm_virtio_pci_vq_vector_use(proxy, vector);
+    if (ret < 0) {
+        goto undo;
+    }
+    /*
+     * If guest supports masking, set up irqfd now.
+     * Otherwise, delay until unmasked in the frontend.
+     */
+    if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
+        ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
         if (ret < 0) {
+            kvm_virtio_pci_vq_vector_release(proxy, vector);
             goto undo;
         }
-        /* If guest supports masking, set up irqfd now.
-         * Otherwise, delay until unmasked in the frontend.
-         */
-        if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
-            if (ret < 0) {
-                kvm_virtio_pci_vq_vector_release(proxy, vector);
-                goto undo;
-            }
-        }
     }
-    return 0;
 
+    return 0;
 undo:
-    while (--queue_no >= 0) {
-        vector = virtio_queue_vector(vdev, queue_no);
-        if (vector >= msix_nr_vectors_allocated(dev)) {
-            continue;
+
+    vector = virtio_queue_vector(vdev, queue_no);
+    if (vector >= msix_nr_vectors_allocated(dev)) {
+        return ret;
+    }
+    if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
+        ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+        if (ret < 0) {
+            return ret;
         }
-        if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
-            if (ret < 0) {
-                break;
-            }
-            kvm_virtio_pci_irqfd_release(proxy, n, vector);
+        kvm_virtio_pci_irqfd_release(proxy, n, vector);
+    }
+    return ret;
+}
+static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
+{
+    int queue_no;
+    int ret = 0;
+    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+
+    for (queue_no = 0; queue_no < nvqs; queue_no++) {
+        if (!virtio_queue_get_num(vdev, queue_no)) {
+            return -1;
         }
-        kvm_virtio_pci_vq_vector_release(proxy, vector);
+        ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
     }
     return ret;
 }
 
-static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
+
+static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
+                                              int queue_no)
 {
-    PCIDevice *dev = &proxy->pci_dev;
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     unsigned int vector;
-    int queue_no;
-    VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
     EventNotifier *n;
-    int ret ;
+    int ret;
+    VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
+    PCIDevice *dev = &proxy->pci_dev;
+
+    ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
+    if (ret < 0) {
+        return;
+    }
+    if (vector >= msix_nr_vectors_allocated(dev)) {
+        return;
+    }
+    if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
+        kvm_virtio_pci_irqfd_release(proxy, n, vector);
+    }
+    kvm_virtio_pci_vq_vector_release(proxy, vector);
+}
+static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
+{
+    int queue_no;
+    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+
     for (queue_no = 0; queue_no < nvqs; queue_no++) {
         if (!virtio_queue_get_num(vdev, queue_no)) {
             break;
         }
-        ret = virtio_pci_get_notifier(proxy, queue_no, &n, &vector);
-        if (ret < 0) {
-            break;
-        }
-        if (vector >= msix_nr_vectors_allocated(dev)) {
-            continue;
-        }
-        /* If guest supports masking, clean up irqfd now.
-         * Otherwise, it was cleaned when masked in the frontend.
-         */
-        if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
-            kvm_virtio_pci_irqfd_release(proxy, n, vector);
-        }
-        kvm_virtio_pci_vq_vector_release(proxy, vector);
+        kvm_virtio_pci_vector_release_one(proxy, queue_no);
     }
 }
 
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 04/10] vhost: add new call back function for config interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (2 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 03/10] virtio-pci: decouple the single vector from the " Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-10-19  6:52   ` Michael S. Tsirkin
  2021-09-30  2:33 ` [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back Cindy Lu
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

To support the config interrupt, we need to
add a new call back function for config interrupt.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 include/hw/virtio/vhost-backend.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index 8475c5a29d..e732d2e702 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -126,6 +126,8 @@ typedef int (*vhost_get_device_id_op)(struct vhost_dev *dev, uint32_t *dev_id);
 
 typedef bool (*vhost_force_iommu_op)(struct vhost_dev *dev);
 
+typedef int (*vhost_set_config_call_op)(struct vhost_dev *dev,
+                                       int fd);
 typedef struct VhostOps {
     VhostBackendType backend_type;
     vhost_backend_init vhost_backend_init;
@@ -171,6 +173,7 @@ typedef struct VhostOps {
     vhost_vq_get_addr_op  vhost_vq_get_addr;
     vhost_get_device_id_op vhost_get_device_id;
     vhost_force_iommu_op vhost_force_iommu;
+    vhost_set_config_call_op vhost_set_config_call;
 } VhostOps;
 
 extern const VhostOps user_ops;
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (3 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 04/10] vhost: add new call back function for config interrupt Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-10-19  6:54   ` Michael S. Tsirkin
  2021-09-30  2:33 ` [PATCH v9 06/10] virtio: add support for configure interrupt Cindy Lu
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add new call back function in vhost-vdpa, this call back function will
set the fb number to hardware.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/trace-events | 2 ++
 hw/virtio/vhost-vdpa.c | 7 +++++++
 2 files changed, 9 insertions(+)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 8ed19e9d0c..836e73d1f7 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -52,6 +52,8 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
 vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
 vhost_vdpa_set_owner(void *dev) "dev: %p"
 vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
+vhost_vdpa_set_config_call(void *dev, int fd)"dev: %p fd: %d"
+
 
 # virtio.c
 virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 4fa414feea..73764afc61 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -622,6 +622,12 @@ static int vhost_vdpa_set_vring_call(struct vhost_dev *dev,
     trace_vhost_vdpa_set_vring_call(dev, file->index, file->fd);
     return vhost_vdpa_call(dev, VHOST_SET_VRING_CALL, file);
 }
+static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
+                                       int fd)
+{
+    trace_vhost_vdpa_set_config_call(dev, fd);
+    return vhost_vdpa_call(dev, VHOST_VDPA_SET_CONFIG_CALL, &fd);
+}
 
 static int vhost_vdpa_get_features(struct vhost_dev *dev,
                                      uint64_t *features)
@@ -688,4 +694,5 @@ const VhostOps vdpa_ops = {
         .vhost_get_device_id = vhost_vdpa_get_device_id,
         .vhost_vq_get_addr = vhost_vdpa_vq_get_addr,
         .vhost_force_iommu = vhost_vdpa_force_iommu,
+        .vhost_set_config_call = vhost_vdpa_set_config_call,
 };
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 06/10] virtio: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (4 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 07/10] virtio-net: " Cindy Lu
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add the support for configure interrupt in virtio
add notifier_read and set_fd_handler function

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/virtio.c         | 29 +++++++++++++++++++++++++++++
 include/hw/virtio/virtio.h |  4 ++++
 2 files changed, 33 insertions(+)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 31987b103b..bd222edc9e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -3531,7 +3531,14 @@ static void virtio_queue_guest_notifier_read(EventNotifier *n)
         virtio_irq(vq);
     }
 }
+static void virtio_config_guest_notifier_read(EventNotifier *n)
+{
+    VirtIODevice *vdev = container_of(n, VirtIODevice, config_notifier);
 
+    if (event_notifier_test_and_clear(n)) {
+        virtio_notify_config(vdev);
+    }
+}
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                 bool with_irqfd)
 {
@@ -3548,6 +3555,23 @@ void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
     }
 }
 
+void virtio_config_set_guest_notifier_fd_handler(VirtIODevice *vdev,
+                                                 bool assign, bool with_irqfd)
+{
+    EventNotifier *n;
+    n = &vdev->config_notifier;
+    if (assign && !with_irqfd) {
+        event_notifier_set_handler(n, virtio_config_guest_notifier_read);
+    } else {
+        event_notifier_set_handler(n, NULL);
+    }
+    if (!assign) {
+        /* Test and clear notifier before closing it,*/
+        /* in case poll callback didn't have time to run. */
+        virtio_config_guest_notifier_read(n);
+    }
+}
+
 EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
 {
     return &vq->guest_notifier;
@@ -3621,6 +3645,11 @@ EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
     return &vq->host_notifier;
 }
 
+EventNotifier *virtio_config_get_guest_notifier(VirtIODevice *vdev)
+{
+    return &vdev->config_notifier;
+}
+
 void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled)
 {
     vq->host_notifier_enabled = enabled;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 2766c293f4..9e02d155a1 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -110,6 +110,7 @@ struct VirtIODevice
     bool use_guest_notifier_mask;
     AddressSpace *dma_as;
     QLIST_HEAD(, VirtQueue) *vector_queues;
+    EventNotifier config_notifier;
 };
 
 struct VirtioDeviceClass {
@@ -312,11 +313,14 @@ uint16_t virtio_get_queue_index(VirtQueue *vq);
 EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                 bool with_irqfd);
+void virtio_config_set_guest_notifier_fd_handler(VirtIODevice *vdev,
+                                                 bool assign, bool with_irqfd);
 int virtio_device_start_ioeventfd(VirtIODevice *vdev);
 int virtio_device_grab_ioeventfd(VirtIODevice *vdev);
 void virtio_device_release_ioeventfd(VirtIODevice *vdev);
 bool virtio_device_ioeventfd_enabled(VirtIODevice *vdev);
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
+EventNotifier *virtio_config_get_guest_notifier(VirtIODevice *vdev);
 void virtio_queue_set_host_notifier_enabled(VirtQueue *vq, bool enabled);
 void virtio_queue_host_notifier_read(EventNotifier *n);
 void virtio_queue_aio_set_host_notifier_handler(VirtQueue *vq, AioContext *ctx,
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 07/10] virtio-net: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (5 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 06/10] virtio: add support for configure interrupt Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 08/10] vhost: " Cindy Lu
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add support for configure interrupt in virtio_net
The functions are config_pending and config_mask

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/net/vhost_net.c      | 10 ++++++++++
 hw/net/virtio-net.c     |  6 ++++++
 include/net/vhost_net.h |  3 +++
 3 files changed, 19 insertions(+)

diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 10a7780a13..1e78ef8349 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -433,6 +433,16 @@ void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
     vhost_virtqueue_mask(&net->dev, dev, idx, mask);
 }
 
+bool vhost_net_config_pending(VHostNetState *net)
+{
+    return vhost_config_pending(&net->dev);
+}
+
+void vhost_net_config_mask(VHostNetState *net, VirtIODevice *dev,
+                              bool mask)
+{
+    vhost_config_mask(&net->dev, dev, mask);
+}
 VHostNetState *get_vhost_net(NetClientState *nc)
 {
     VHostNetState *vhost_net = 0;
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 65b7cabcaf..005818a45a 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3155,6 +3155,9 @@ static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
     if (idx != VIRTIO_CONFIG_IRQ_IDX) {
         return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
     }
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        return vhost_net_config_pending(get_vhost_net(nc->peer));
+   }
     return false;
 }
 
@@ -3167,6 +3170,9 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
     if (idx != VIRTIO_CONFIG_IRQ_IDX) {
         vhost_net_virtqueue_mask(get_vhost_net(nc->peer), vdev, idx, mask);
     }
+    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
+        vhost_net_config_mask(get_vhost_net(nc->peer), vdev, mask);
+     }
 }
 
 static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 172b0051d8..478c127582 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -36,6 +36,9 @@ int vhost_net_set_config(struct vhost_net *net, const uint8_t *data,
 bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
 void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
                               int idx, bool mask);
+bool vhost_net_config_pending(VHostNetState *net);
+void vhost_net_config_mask(VHostNetState *net, VirtIODevice *dev,
+                              bool mask);
 int vhost_net_notify_migration_done(VHostNetState *net, char* mac_addr);
 VHostNetState *get_vhost_net(NetClientState *nc);
 
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 08/10] vhost: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (6 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 07/10] virtio-net: " Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 09/10] virtio-mmio: " Cindy Lu
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add support for configure interrupt in vhost
the interrupt will start in vhost_dev_start
and stop in vhost_dev_stop

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/vhost.c         | 76 +++++++++++++++++++++++++++++++++++++++
 include/hw/virtio/vhost.h |  4 +++
 2 files changed, 80 insertions(+)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index e8f85a5d2d..3b04027424 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1534,6 +1534,67 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
     }
 }
 
+bool vhost_config_pending(struct vhost_dev *hdev)
+{
+    assert(hdev->vhost_ops);
+    if ((hdev->started == false) ||
+        (hdev->vhost_ops->vhost_set_config_call == NULL)) {
+        return false;
+    }
+
+    EventNotifier *notifier =
+        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier;
+    return event_notifier_test_and_clear(notifier);
+}
+
+void vhost_config_mask(struct vhost_dev *hdev, VirtIODevice *vdev, bool mask)
+{
+    int fd;
+    int r;
+    EventNotifier *notifier =
+        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier;
+    EventNotifier *config_notifier = &vdev->config_notifier;
+    assert(hdev->vhost_ops);
+
+    if ((hdev->started == false) ||
+        (hdev->vhost_ops->vhost_set_config_call == NULL)) {
+        return;
+    }
+    if (mask) {
+        assert(vdev->use_guest_notifier_mask);
+        fd = event_notifier_get_fd(notifier);
+    } else {
+        fd = event_notifier_get_fd(config_notifier);
+    }
+    r = hdev->vhost_ops->vhost_set_config_call(hdev, fd);
+    if (r < 0) {
+        VHOST_OPS_DEBUG("vhost_set_config_call failed");
+    }
+}
+
+static void vhost_stop_config_intr(struct vhost_dev *dev)
+{
+    int fd = -1;
+    assert(dev->vhost_ops);
+    if (dev->vhost_ops->vhost_set_config_call) {
+        dev->vhost_ops->vhost_set_config_call(dev, fd);
+    }
+}
+
+static void vhost_start_config_intr(struct vhost_dev *dev)
+{
+    int r;
+
+    assert(dev->vhost_ops);
+    int fd = event_notifier_get_fd(&dev->vdev->config_notifier);
+    if (dev->vhost_ops->vhost_set_config_call) {
+        r = dev->vhost_ops->vhost_set_config_call(dev, fd);
+        if (!r) {
+            event_notifier_set(&dev->vdev->config_notifier);
+        }
+    }
+}
+
 uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
                             uint64_t features)
 {
@@ -1752,6 +1813,16 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev)
         }
     }
 
+    r = event_notifier_init(
+        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier, 0);
+    if (r < 0) {
+        return r;
+    }
+    event_notifier_test_and_clear(
+        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier);
+    if (!vdev->use_guest_notifier_mask) {
+        vhost_config_mask(hdev, vdev, true);
+    }
     if (hdev->log_enabled) {
         uint64_t log_base;
 
@@ -1785,6 +1856,7 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev)
             vhost_device_iotlb_miss(hdev, vq->used_phys, true);
         }
     }
+    vhost_start_config_intr(hdev);
     return 0;
 fail_log:
     vhost_log_put(hdev, false);
@@ -1810,6 +1882,9 @@ void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev)
 
     /* should only be called after backend is connected */
     assert(hdev->vhost_ops);
+    event_notifier_test_and_clear(
+        &hdev->vqs[VHOST_QUEUE_NUM_CONFIG_INR].masked_config_notifier);
+    event_notifier_test_and_clear(&vdev->config_notifier);
 
     if (hdev->vhost_ops->vhost_dev_start) {
         hdev->vhost_ops->vhost_dev_start(hdev, false);
@@ -1827,6 +1902,7 @@ void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev)
         }
         memory_listener_unregister(&hdev->iommu_listener);
     }
+    vhost_stop_config_intr(hdev);
     vhost_log_put(hdev, true);
     hdev->started = false;
     hdev->vdev = NULL;
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index 045d0fd9f2..e938cc3b4b 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -29,6 +29,7 @@ struct vhost_virtqueue {
     unsigned long long used_phys;
     unsigned used_size;
     EventNotifier masked_notifier;
+    EventNotifier masked_config_notifier;
     struct vhost_dev *dev;
 };
 
@@ -37,6 +38,7 @@ typedef unsigned long vhost_log_chunk_t;
 #define VHOST_LOG_BITS (8 * sizeof(vhost_log_chunk_t))
 #define VHOST_LOG_CHUNK (VHOST_LOG_PAGE * VHOST_LOG_BITS)
 #define VHOST_INVALID_FEATURE_BIT   (0xff)
+#define VHOST_QUEUE_NUM_CONFIG_INR  0
 
 struct vhost_log {
     unsigned long long size;
@@ -110,6 +112,8 @@ int vhost_dev_start(struct vhost_dev *hdev, VirtIODevice *vdev);
 void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev);
 int vhost_dev_enable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev);
 void vhost_dev_disable_notifiers(struct vhost_dev *hdev, VirtIODevice *vdev);
+bool vhost_config_pending(struct vhost_dev *hdev);
+void vhost_config_mask(struct vhost_dev *hdev, VirtIODevice *vdev, bool mask);
 
 /* Test and clear masked event pending status.
  * Should be called after unmask to avoid losing events.
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 09/10] virtio-mmio: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (7 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 08/10] vhost: " Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-09-30  2:33 ` [PATCH v9 10/10] virtio-pci: " Cindy Lu
  2021-10-19  6:56 ` [PATCH v9 00/10] vhost-vdpa: " Michael S. Tsirkin
  10 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add configure interrupt support for virtio-mmio bus. This
interrupt will be working while the backend is vhost-vdpa

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/virtio-mmio.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
index 1af48a1b04..695fd31f9d 100644
--- a/hw/virtio/virtio-mmio.c
+++ b/hw/virtio/virtio-mmio.c
@@ -673,7 +673,30 @@ static int virtio_mmio_set_guest_notifier(DeviceState *d, int n, bool assign,
 
     return 0;
 }
+static int virtio_mmio_set_config_guest_notifier(DeviceState *d, bool assign)
+{
+    VirtIOMMIOProxy *proxy = VIRTIO_MMIO(d);
+    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
+    bool with_irqfd = false;
+    EventNotifier *notifier = virtio_config_get_guest_notifier(vdev);
+    int r = 0;
 
+    if (assign) {
+        r = event_notifier_init(notifier, 0);
+        if (r < 0) {
+            return r;
+        }
+        virtio_config_set_guest_notifier_fd_handler(vdev, assign, with_irqfd);
+    } else {
+        virtio_config_set_guest_notifier_fd_handler(vdev, assign, with_irqfd);
+        event_notifier_cleanup(notifier);
+    }
+    if (vdc->guest_notifier_mask && vdev->use_guest_notifier_mask) {
+        vdc->guest_notifier_mask(vdev, VIRTIO_CONFIG_IRQ_IDX, !assign);
+    }
+    return r;
+}
 static int virtio_mmio_set_guest_notifiers(DeviceState *d, int nvqs,
                                            bool assign)
 {
@@ -695,6 +718,10 @@ static int virtio_mmio_set_guest_notifiers(DeviceState *d, int nvqs,
             goto assign_error;
         }
     }
+    r = virtio_mmio_set_config_guest_notifier(d, assign);
+    if (r < 0) {
+        goto assign_error;
+    }
 
     return 0;
 
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v9 10/10] virtio-pci: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (8 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 09/10] virtio-mmio: " Cindy Lu
@ 2021-09-30  2:33 ` Cindy Lu
  2021-10-19  6:39   ` Michael S. Tsirkin
  2021-10-19  6:50   ` Michael S. Tsirkin
  2021-10-19  6:56 ` [PATCH v9 00/10] vhost-vdpa: " Michael S. Tsirkin
  10 siblings, 2 replies; 20+ messages in thread
From: Cindy Lu @ 2021-09-30  2:33 UTC (permalink / raw)
  To: lulu, mst, jasowang, kraxel, dgilbert, stefanha, arei.gonglei,
	marcandre.lureau
  Cc: qemu-devel

Add support for configure interrupt, The process is used kvm_irqfd_assign
to set the gsi to kernel. When the configure notifier was signal by
host, qemu will inject a msix interrupt to guest

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/virtio-pci.c | 88 +++++++++++++++++++++++++++++++++---------
 hw/virtio/virtio-pci.h |  4 +-
 2 files changed, 72 insertions(+), 20 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index d0a2c2fb81..50179c2ba1 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -728,7 +728,8 @@ static int virtio_pci_get_notifier(VirtIOPCIProxy *proxy, int queue_no,
     VirtQueue *vq;
 
     if (queue_no == VIRTIO_CONFIG_IRQ_IDX) {
-        return -1;
+        *n = virtio_config_get_guest_notifier(vdev);
+        *vector = vdev->config_vector;
     } else {
         if (!virtio_queue_get_num(vdev, queue_no)) {
             return -1;
@@ -806,6 +807,10 @@ static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
     return ret;
 }
 
+static int kvm_virtio_pci_vector_config_use(VirtIOPCIProxy *proxy)
+{
+    return kvm_virtio_pci_vector_use_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
+}
 
 static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
                                               int queue_no)
@@ -829,6 +834,7 @@ static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
     }
     kvm_virtio_pci_vq_vector_release(proxy, vector);
 }
+
 static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
 {
     int queue_no;
@@ -842,6 +848,11 @@ static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
     }
 }
 
+static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
+{
+    kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
+}
+
 static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
                                        unsigned int queue_no,
                                        unsigned int vector,
@@ -923,9 +934,17 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
         }
         vq = virtio_vector_next_queue(vq);
     }
-
+    /* unmask config intr */
+    n = virtio_config_get_guest_notifier(vdev);
+    ret = virtio_pci_one_vector_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector,
+                                       msg, n);
+    if (ret < 0) {
+        goto undo_config;
+    }
     return 0;
-
+undo_config:
+    n = virtio_config_get_guest_notifier(vdev);
+    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
 undo:
     vq = virtio_vector_first_queue(vdev, vector);
     while (vq && unmasked >= 0) {
@@ -959,6 +978,8 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
         }
         vq = virtio_vector_next_queue(vq);
     }
+    n = virtio_config_get_guest_notifier(vdev);
+    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
 }
 
 static void virtio_pci_vector_poll(PCIDevice *dev,
@@ -971,19 +992,17 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
     int queue_no;
     unsigned int vector;
     EventNotifier *notifier;
-    VirtQueue *vq;
-
-    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
-        if (!virtio_queue_get_num(vdev, queue_no)) {
+    int ret;
+    for (queue_no = VIRTIO_CONFIG_IRQ_IDX;
+         queue_no < proxy->nvqs_with_notifiers; queue_no++) {
+        ret = virtio_pci_get_notifier(proxy, queue_no, &notifier, &vector);
+        if (ret < 0) {
             break;
         }
-        vector = virtio_queue_vector(vdev, queue_no);
         if (vector < vector_start || vector >= vector_end ||
             !msix_is_masked(dev, vector)) {
             continue;
         }
-        vq = virtio_get_queue(vdev, queue_no);
-        notifier = virtio_queue_get_guest_notifier(vq);
         if (k->guest_notifier_pending) {
             if (k->guest_notifier_pending(vdev, queue_no)) {
                 msix_set_pending(dev, vector);
@@ -994,23 +1013,42 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
     }
 }
 
+void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
+                                              int n, bool assign,
+                                              bool with_irqfd)
+{
+    if (n == VIRTIO_CONFIG_IRQ_IDX) {
+        virtio_config_set_guest_notifier_fd_handler(vdev, assign, with_irqfd);
+    } else {
+        virtio_queue_set_guest_notifier_fd_handler(vq, assign, with_irqfd);
+    }
+}
+
 static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
                                          bool with_irqfd)
 {
     VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
     VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
     VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
-    VirtQueue *vq = virtio_get_queue(vdev, n);
-    EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
+    VirtQueue *vq = NULL;
+    EventNotifier *notifier = NULL;
+
+    if (n == VIRTIO_CONFIG_IRQ_IDX) {
+        notifier = virtio_config_get_guest_notifier(vdev);
+    } else {
+        vq = virtio_get_queue(vdev, n);
+        notifier = virtio_queue_get_guest_notifier(vq);
+    }
 
     if (assign) {
         int r = event_notifier_init(notifier, 0);
         if (r < 0) {
             return r;
         }
-        virtio_queue_set_guest_notifier_fd_handler(vq, true, with_irqfd);
+        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, true, with_irqfd);
     } else {
-        virtio_queue_set_guest_notifier_fd_handler(vq, false, with_irqfd);
+        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, false,
+                                                 with_irqfd);
         event_notifier_cleanup(notifier);
     }
 
@@ -1052,6 +1090,7 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
         msix_unset_vector_notifiers(&proxy->pci_dev);
         if (proxy->vector_irqfd) {
             kvm_virtio_pci_vector_release(proxy, nvqs);
+            kvm_virtio_pci_vector_config_release(proxy);
             g_free(proxy->vector_irqfd);
             proxy->vector_irqfd = NULL;
         }
@@ -1067,7 +1106,11 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
             goto assign_error;
         }
     }
-
+    r = virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, assign,
+                                      with_irqfd);
+    if (r < 0) {
+        goto config_assign_error;
+    }
     /* Must set vector notifier after guest notifier has been assigned */
     if ((with_irqfd || k->guest_notifier_mask) && assign) {
         if (with_irqfd) {
@@ -1076,11 +1119,14 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
                           msix_nr_vectors_allocated(&proxy->pci_dev));
             r = kvm_virtio_pci_vector_use(proxy, nvqs);
             if (r < 0) {
-                goto assign_error;
+                goto config_assign_error;
             }
         }
-        r = msix_set_vector_notifiers(&proxy->pci_dev,
-                                      virtio_pci_vector_unmask,
+        r = kvm_virtio_pci_vector_config_use(proxy);
+        if (r < 0) {
+            goto config_error;
+        }
+        r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
                                       virtio_pci_vector_mask,
                                       virtio_pci_vector_poll);
         if (r < 0) {
@@ -1095,7 +1141,11 @@ notifiers_error:
         assert(assign);
         kvm_virtio_pci_vector_release(proxy, nvqs);
     }
-
+config_error:
+    kvm_virtio_pci_vector_config_release(proxy);
+config_assign_error:
+    virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, !assign,
+                                  with_irqfd);
 assign_error:
     /* We get here on assignment failure. Recover by undoing for VQs 0 .. n. */
     assert(assign);
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index 2446dcd9ae..b704acc5a8 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -251,5 +251,7 @@ void virtio_pci_types_register(const VirtioPCIDeviceTypeInfo *t);
  * @fixed_queues.
  */
 unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
-
+void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
+                                              int n, bool assign,
+                                              bool with_irqfd);
 #endif
-- 
2.21.3



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 10/10] virtio-pci: add support for configure interrupt
  2021-09-30  2:33 ` [PATCH v9 10/10] virtio-pci: " Cindy Lu
@ 2021-10-19  6:39   ` Michael S. Tsirkin
  2021-10-19  6:50   ` Michael S. Tsirkin
  1 sibling, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:39 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:48AM +0800, Cindy Lu wrote:
> Add support for configure interrupt, The process is used kvm_irqfd_assign
> to set the gsi to kernel. When the configure notifier was signal by
> host, qemu will inject a msix interrupt to guest
> 
> Signed-off-by: Cindy Lu <lulu@redhat.com>

This one makes make check hang on my machine.

Just make, then:
QTEST_QEMU_STORAGE_DAEMON_BINARY=./build/storage-daemon/qemu-storage-daemon \
QTEST_QEMU_BINARY=build/x86_64-softmmu/qemu-system-x86_64 \
./build/tests/qtest/qos-test

and observe it hang.


> ---
>  hw/virtio/virtio-pci.c | 88 +++++++++++++++++++++++++++++++++---------
>  hw/virtio/virtio-pci.h |  4 +-
>  2 files changed, 72 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index d0a2c2fb81..50179c2ba1 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -728,7 +728,8 @@ static int virtio_pci_get_notifier(VirtIOPCIProxy *proxy, int queue_no,
>      VirtQueue *vq;
>  
>      if (queue_no == VIRTIO_CONFIG_IRQ_IDX) {
> -        return -1;
> +        *n = virtio_config_get_guest_notifier(vdev);
> +        *vector = vdev->config_vector;
>      } else {
>          if (!virtio_queue_get_num(vdev, queue_no)) {
>              return -1;
> @@ -806,6 +807,10 @@ static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
>      return ret;
>  }
>  
> +static int kvm_virtio_pci_vector_config_use(VirtIOPCIProxy *proxy)
> +{
> +    return kvm_virtio_pci_vector_use_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
> +}
>  
>  static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
>                                                int queue_no)
> @@ -829,6 +834,7 @@ static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
>      }
>      kvm_virtio_pci_vq_vector_release(proxy, vector);
>  }
> +
>  static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
>  {
>      int queue_no;
> @@ -842,6 +848,11 @@ static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
>      }
>  }
>  
> +static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
> +{
> +    kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
> +}
> +
>  static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
>                                         unsigned int queue_no,
>                                         unsigned int vector,
> @@ -923,9 +934,17 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
>          }
>          vq = virtio_vector_next_queue(vq);
>      }
> -
> +    /* unmask config intr */
> +    n = virtio_config_get_guest_notifier(vdev);
> +    ret = virtio_pci_one_vector_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector,
> +                                       msg, n);
> +    if (ret < 0) {
> +        goto undo_config;
> +    }
>      return 0;
> -
> +undo_config:
> +    n = virtio_config_get_guest_notifier(vdev);
> +    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
>  undo:
>      vq = virtio_vector_first_queue(vdev, vector);
>      while (vq && unmasked >= 0) {
> @@ -959,6 +978,8 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
>          }
>          vq = virtio_vector_next_queue(vq);
>      }
> +    n = virtio_config_get_guest_notifier(vdev);
> +    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
>  }
>  
>  static void virtio_pci_vector_poll(PCIDevice *dev,
> @@ -971,19 +992,17 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
>      int queue_no;
>      unsigned int vector;
>      EventNotifier *notifier;
> -    VirtQueue *vq;
> -
> -    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
> -        if (!virtio_queue_get_num(vdev, queue_no)) {
> +    int ret;
> +    for (queue_no = VIRTIO_CONFIG_IRQ_IDX;
> +         queue_no < proxy->nvqs_with_notifiers; queue_no++) {
> +        ret = virtio_pci_get_notifier(proxy, queue_no, &notifier, &vector);
> +        if (ret < 0) {
>              break;
>          }
> -        vector = virtio_queue_vector(vdev, queue_no);
>          if (vector < vector_start || vector >= vector_end ||
>              !msix_is_masked(dev, vector)) {
>              continue;
>          }
> -        vq = virtio_get_queue(vdev, queue_no);
> -        notifier = virtio_queue_get_guest_notifier(vq);
>          if (k->guest_notifier_pending) {
>              if (k->guest_notifier_pending(vdev, queue_no)) {
>                  msix_set_pending(dev, vector);
> @@ -994,23 +1013,42 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
>      }
>  }
>  
> +void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
> +                                              int n, bool assign,
> +                                              bool with_irqfd)
> +{
> +    if (n == VIRTIO_CONFIG_IRQ_IDX) {
> +        virtio_config_set_guest_notifier_fd_handler(vdev, assign, with_irqfd);
> +    } else {
> +        virtio_queue_set_guest_notifier_fd_handler(vq, assign, with_irqfd);
> +    }
> +}
> +
>  static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
>                                           bool with_irqfd)
>  {
>      VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
>      VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>      VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
> -    VirtQueue *vq = virtio_get_queue(vdev, n);
> -    EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
> +    VirtQueue *vq = NULL;
> +    EventNotifier *notifier = NULL;
> +
> +    if (n == VIRTIO_CONFIG_IRQ_IDX) {
> +        notifier = virtio_config_get_guest_notifier(vdev);
> +    } else {
> +        vq = virtio_get_queue(vdev, n);
> +        notifier = virtio_queue_get_guest_notifier(vq);
> +    }
>  
>      if (assign) {
>          int r = event_notifier_init(notifier, 0);
>          if (r < 0) {
>              return r;
>          }
> -        virtio_queue_set_guest_notifier_fd_handler(vq, true, with_irqfd);
> +        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, true, with_irqfd);
>      } else {
> -        virtio_queue_set_guest_notifier_fd_handler(vq, false, with_irqfd);
> +        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, false,
> +                                                 with_irqfd);
>          event_notifier_cleanup(notifier);
>      }
>  
> @@ -1052,6 +1090,7 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>          msix_unset_vector_notifiers(&proxy->pci_dev);
>          if (proxy->vector_irqfd) {
>              kvm_virtio_pci_vector_release(proxy, nvqs);
> +            kvm_virtio_pci_vector_config_release(proxy);
>              g_free(proxy->vector_irqfd);
>              proxy->vector_irqfd = NULL;
>          }
> @@ -1067,7 +1106,11 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>              goto assign_error;
>          }
>      }
> -
> +    r = virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, assign,
> +                                      with_irqfd);
> +    if (r < 0) {
> +        goto config_assign_error;
> +    }
>      /* Must set vector notifier after guest notifier has been assigned */
>      if ((with_irqfd || k->guest_notifier_mask) && assign) {
>          if (with_irqfd) {
> @@ -1076,11 +1119,14 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>                            msix_nr_vectors_allocated(&proxy->pci_dev));
>              r = kvm_virtio_pci_vector_use(proxy, nvqs);
>              if (r < 0) {
> -                goto assign_error;
> +                goto config_assign_error;
>              }
>          }
> -        r = msix_set_vector_notifiers(&proxy->pci_dev,
> -                                      virtio_pci_vector_unmask,
> +        r = kvm_virtio_pci_vector_config_use(proxy);
> +        if (r < 0) {
> +            goto config_error;
> +        }
> +        r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
>                                        virtio_pci_vector_mask,
>                                        virtio_pci_vector_poll);
>          if (r < 0) {
> @@ -1095,7 +1141,11 @@ notifiers_error:
>          assert(assign);
>          kvm_virtio_pci_vector_release(proxy, nvqs);
>      }
> -
> +config_error:
> +    kvm_virtio_pci_vector_config_release(proxy);
> +config_assign_error:
> +    virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, !assign,
> +                                  with_irqfd);
>  assign_error:
>      /* We get here on assignment failure. Recover by undoing for VQs 0 .. n. */
>      assert(assign);
> diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> index 2446dcd9ae..b704acc5a8 100644
> --- a/hw/virtio/virtio-pci.h
> +++ b/hw/virtio/virtio-pci.h
> @@ -251,5 +251,7 @@ void virtio_pci_types_register(const VirtioPCIDeviceTypeInfo *t);
>   * @fixed_queues.
>   */
>  unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
> -
> +void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
> +                                              int n, bool assign,
> +                                              bool with_irqfd);
>  #endif
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX
  2021-09-30  2:33 ` [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX Cindy Lu
@ 2021-10-19  6:47   ` Michael S. Tsirkin
  0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:47 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:39AM +0800, Cindy Lu wrote:
> To support configure interrupt for vhost-vdpa
> introduce VIRTIO_CONFIG_IRQ_IDX -1 as config queue index, Then we can reuse
> the function guest_notifier_mask and guest_notifier_pending.
> Add the check of queue index, if the driver does not support configure
> interrupt, the function will just return
> 
> Signed-off-by: Cindy Lu <lulu@redhat.com>

typo in subject

Also the commit log and subject do not seem to match what patch is
doing. Description makes it look like a refactoring, but
it isn't. guest_notifier_mask don't exist.
And I'm not sure why it's safe to do nothing e.g. in
pending.




> ---
>  hw/display/vhost-user-gpu.c    |  6 ++++++
>  hw/net/virtio-net.c            | 10 +++++++---
>  hw/virtio/vhost-user-fs.c      |  9 +++++++--
>  hw/virtio/vhost-vsock-common.c |  6 ++++++
>  hw/virtio/virtio-crypto.c      |  6 ++++++
>  include/hw/virtio/virtio.h     |  2 ++
>  6 files changed, 34 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/display/vhost-user-gpu.c b/hw/display/vhost-user-gpu.c
> index 49df56cd14..73ad3d84c9 100644
> --- a/hw/display/vhost-user-gpu.c
> +++ b/hw/display/vhost-user-gpu.c
> @@ -485,6 +485,9 @@ vhost_user_gpu_guest_notifier_pending(VirtIODevice *vdev, int idx)
>  {
>      VhostUserGPU *g = VHOST_USER_GPU(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return false;
> +    }
>      return vhost_virtqueue_pending(&g->vhost->dev, idx);
>  }
>  
> @@ -493,6 +496,9 @@ vhost_user_gpu_guest_notifier_mask(VirtIODevice *vdev, int idx, bool mask)
>  {
>      VhostUserGPU *g = VHOST_USER_GPU(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return;
> +    }
>      vhost_virtqueue_mask(&g->vhost->dev, vdev, idx, mask);
>  }
>  
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 16d20cdee5..65b7cabcaf 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3152,7 +3152,10 @@ static bool virtio_net_guest_notifier_pending(VirtIODevice *vdev, int idx)
>      VirtIONet *n = VIRTIO_NET(vdev);
>      NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>      assert(n->vhost_started);
> -    return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
> +    if (idx != VIRTIO_CONFIG_IRQ_IDX) {
> +        return vhost_net_virtqueue_pending(get_vhost_net(nc->peer), idx);
> +    }
> +    return false;
>  }
>  
>  static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
> @@ -3161,8 +3164,9 @@ static void virtio_net_guest_notifier_mask(VirtIODevice *vdev, int idx,
>      VirtIONet *n = VIRTIO_NET(vdev);
>      NetClientState *nc = qemu_get_subqueue(n->nic, vq2q(idx));
>      assert(n->vhost_started);
> -    vhost_net_virtqueue_mask(get_vhost_net(nc->peer),
> -                             vdev, idx, mask);
> +    if (idx != VIRTIO_CONFIG_IRQ_IDX) {
> +        vhost_net_virtqueue_mask(get_vhost_net(nc->peer), vdev, idx, mask);
> +    }
>  }
>  
>  static void virtio_net_set_config_size(VirtIONet *n, uint64_t host_features)
> diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
> index c595957983..309c8efabf 100644
> --- a/hw/virtio/vhost-user-fs.c
> +++ b/hw/virtio/vhost-user-fs.c
> @@ -156,11 +156,13 @@ static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
>       */
>  }
>  
> -static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
> -                                            bool mask)
> +static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx, bool mask)
>  {
>      VHostUserFS *fs = VHOST_USER_FS(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return;
> +    }
>      vhost_virtqueue_mask(&fs->vhost_dev, vdev, idx, mask);
>  }
>  
> @@ -168,6 +170,9 @@ static bool vuf_guest_notifier_pending(VirtIODevice *vdev, int idx)
>  {
>      VHostUserFS *fs = VHOST_USER_FS(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return false;
> +    }
>      return vhost_virtqueue_pending(&fs->vhost_dev, idx);
>  }
>  
> diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
> index 4ad6e234ad..2112b44802 100644
> --- a/hw/virtio/vhost-vsock-common.c
> +++ b/hw/virtio/vhost-vsock-common.c
> @@ -101,6 +101,9 @@ static void vhost_vsock_common_guest_notifier_mask(VirtIODevice *vdev, int idx,
>  {
>      VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return;
> +    }
>      vhost_virtqueue_mask(&vvc->vhost_dev, vdev, idx, mask);
>  }
>  
> @@ -109,6 +112,9 @@ static bool vhost_vsock_common_guest_notifier_pending(VirtIODevice *vdev,
>  {
>      VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return false;
> +    }
>      return vhost_virtqueue_pending(&vvc->vhost_dev, idx);
>  }
>  
> diff --git a/hw/virtio/virtio-crypto.c b/hw/virtio/virtio-crypto.c
> index 54f9bbb789..1d5192f8b4 100644
> --- a/hw/virtio/virtio-crypto.c
> +++ b/hw/virtio/virtio-crypto.c
> @@ -948,6 +948,9 @@ static void virtio_crypto_guest_notifier_mask(VirtIODevice *vdev, int idx,
>  
>      assert(vcrypto->vhost_started);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return;
> +    }
>      cryptodev_vhost_virtqueue_mask(vdev, queue, idx, mask);
>  }
>  
> @@ -958,6 +961,9 @@ static bool virtio_crypto_guest_notifier_pending(VirtIODevice *vdev, int idx)
>  
>      assert(vcrypto->vhost_started);
>  
> +    if (idx == VIRTIO_CONFIG_IRQ_IDX) {
> +        return false;
> +    }
>      return cryptodev_vhost_virtqueue_pending(vdev, queue, idx);
>  }
>  
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index fa711a8912..2766c293f4 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -67,6 +67,8 @@ typedef struct VirtQueueElement
>  
>  #define VIRTIO_NO_VECTOR 0xffff
>

Add a comment here. E.g. /* special index value used internally for config irqs */
  
> +#define VIRTIO_CONFIG_IRQ_IDX -1
> +
>  #define TYPE_VIRTIO_DEVICE "virtio-device"
>  OBJECT_DECLARE_TYPE(VirtIODevice, VirtioDeviceClass, VIRTIO_DEVICE)
>  
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 10/10] virtio-pci: add support for configure interrupt
  2021-09-30  2:33 ` [PATCH v9 10/10] virtio-pci: " Cindy Lu
  2021-10-19  6:39   ` Michael S. Tsirkin
@ 2021-10-19  6:50   ` Michael S. Tsirkin
  1 sibling, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:50 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:48AM +0800, Cindy Lu wrote:
> Add support for configure interrupt, The process is used kvm_irqfd_assign
> to set the gsi to kernel. When the configure notifier was signal by
> host, qemu will inject a msix interrupt to guest
> 
> Signed-off-by: Cindy Lu <lulu@redhat.com>
> ---
>  hw/virtio/virtio-pci.c | 88 +++++++++++++++++++++++++++++++++---------
>  hw/virtio/virtio-pci.h |  4 +-
>  2 files changed, 72 insertions(+), 20 deletions(-)
> 
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index d0a2c2fb81..50179c2ba1 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -728,7 +728,8 @@ static int virtio_pci_get_notifier(VirtIOPCIProxy *proxy, int queue_no,
>      VirtQueue *vq;
>  
>      if (queue_no == VIRTIO_CONFIG_IRQ_IDX) {
> -        return -1;
> +        *n = virtio_config_get_guest_notifier(vdev);
> +        *vector = vdev->config_vector;
>      } else {
>          if (!virtio_queue_get_num(vdev, queue_no)) {
>              return -1;

So here you are rewriting code you added previously ... not great.


> @@ -806,6 +807,10 @@ static int kvm_virtio_pci_vector_use(VirtIOPCIProxy *proxy, int nvqs)
>      return ret;
>  }
>  
> +static int kvm_virtio_pci_vector_config_use(VirtIOPCIProxy *proxy)
> +{
> +    return kvm_virtio_pci_vector_use_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
> +}
>  
>  static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
>                                                int queue_no)
> @@ -829,6 +834,7 @@ static void kvm_virtio_pci_vector_release_one(VirtIOPCIProxy *proxy,
>      }
>      kvm_virtio_pci_vq_vector_release(proxy, vector);
>  }
> +
>  static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
>  {
>      int queue_no;
> @@ -842,6 +848,11 @@ static void kvm_virtio_pci_vector_release(VirtIOPCIProxy *proxy, int nvqs)
>      }
>  }
>  
> +static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
> +{
> +    kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
> +}
> +
>  static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
>                                         unsigned int queue_no,
>                                         unsigned int vector,
> @@ -923,9 +934,17 @@ static int virtio_pci_vector_unmask(PCIDevice *dev, unsigned vector,
>          }
>          vq = virtio_vector_next_queue(vq);
>      }
> -
> +    /* unmask config intr */
> +    n = virtio_config_get_guest_notifier(vdev);
> +    ret = virtio_pci_one_vector_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector,
> +                                       msg, n);
> +    if (ret < 0) {
> +        goto undo_config;
> +    }
>      return 0;
> -
> +undo_config:
> +    n = virtio_config_get_guest_notifier(vdev);
> +    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
>  undo:
>      vq = virtio_vector_first_queue(vdev, vector);
>      while (vq && unmasked >= 0) {
> @@ -959,6 +978,8 @@ static void virtio_pci_vector_mask(PCIDevice *dev, unsigned vector)
>          }
>          vq = virtio_vector_next_queue(vq);
>      }
> +    n = virtio_config_get_guest_notifier(vdev);
> +    virtio_pci_one_vector_mask(proxy, VIRTIO_CONFIG_IRQ_IDX, vector, n);
>  }
>  
>  static void virtio_pci_vector_poll(PCIDevice *dev,
> @@ -971,19 +992,17 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
>      int queue_no;
>      unsigned int vector;
>      EventNotifier *notifier;
> -    VirtQueue *vq;
> -
> -    for (queue_no = 0; queue_no < proxy->nvqs_with_notifiers; queue_no++) {
> -        if (!virtio_queue_get_num(vdev, queue_no)) {
> +    int ret;
> +    for (queue_no = VIRTIO_CONFIG_IRQ_IDX;
> +         queue_no < proxy->nvqs_with_notifiers; queue_no++) {

Oh, it turns out it's important that this value is -1,
otherwise the loop will just go crazy.



> +        ret = virtio_pci_get_notifier(proxy, queue_no, &notifier, &vector);
> +        if (ret < 0) {
>              break;
>          }
> -        vector = virtio_queue_vector(vdev, queue_no);
>          if (vector < vector_start || vector >= vector_end ||
>              !msix_is_masked(dev, vector)) {
>              continue;
>          }
> -        vq = virtio_get_queue(vdev, queue_no);
> -        notifier = virtio_queue_get_guest_notifier(vq);
>          if (k->guest_notifier_pending) {
>              if (k->guest_notifier_pending(vdev, queue_no)) {
>                  msix_set_pending(dev, vector);
> @@ -994,23 +1013,42 @@ static void virtio_pci_vector_poll(PCIDevice *dev,
>      }
>  }
>  
> +void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
> +                                              int n, bool assign,
> +                                              bool with_irqfd)
> +{
> +    if (n == VIRTIO_CONFIG_IRQ_IDX) {
> +        virtio_config_set_guest_notifier_fd_handler(vdev, assign, with_irqfd);
> +    } else {
> +        virtio_queue_set_guest_notifier_fd_handler(vq, assign, with_irqfd);
> +    }
> +}
> +
>  static int virtio_pci_set_guest_notifier(DeviceState *d, int n, bool assign,
>                                           bool with_irqfd)
>  {
>      VirtIOPCIProxy *proxy = to_virtio_pci_proxy(d);
>      VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>      VirtioDeviceClass *vdc = VIRTIO_DEVICE_GET_CLASS(vdev);
> -    VirtQueue *vq = virtio_get_queue(vdev, n);
> -    EventNotifier *notifier = virtio_queue_get_guest_notifier(vq);
> +    VirtQueue *vq = NULL;
> +    EventNotifier *notifier = NULL;
> +
> +    if (n == VIRTIO_CONFIG_IRQ_IDX) {
> +        notifier = virtio_config_get_guest_notifier(vdev);
> +    } else {
> +        vq = virtio_get_queue(vdev, n);
> +        notifier = virtio_queue_get_guest_notifier(vq);
> +    }
>  
>      if (assign) {
>          int r = event_notifier_init(notifier, 0);
>          if (r < 0) {
>              return r;
>          }
> -        virtio_queue_set_guest_notifier_fd_handler(vq, true, with_irqfd);
> +        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, true, with_irqfd);
>      } else {
> -        virtio_queue_set_guest_notifier_fd_handler(vq, false, with_irqfd);
> +        virtio_pci_set_guest_notifier_fd_handler(vdev, vq, n, false,
> +                                                 with_irqfd);
>          event_notifier_cleanup(notifier);
>      }
>  
> @@ -1052,6 +1090,7 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>          msix_unset_vector_notifiers(&proxy->pci_dev);
>          if (proxy->vector_irqfd) {
>              kvm_virtio_pci_vector_release(proxy, nvqs);
> +            kvm_virtio_pci_vector_config_release(proxy);
>              g_free(proxy->vector_irqfd);
>              proxy->vector_irqfd = NULL;
>          }
> @@ -1067,7 +1106,11 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>              goto assign_error;
>          }
>      }
> -
> +    r = virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, assign,
> +                                      with_irqfd);
> +    if (r < 0) {
> +        goto config_assign_error;
> +    }
>      /* Must set vector notifier after guest notifier has been assigned */
>      if ((with_irqfd || k->guest_notifier_mask) && assign) {
>          if (with_irqfd) {
> @@ -1076,11 +1119,14 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>                            msix_nr_vectors_allocated(&proxy->pci_dev));
>              r = kvm_virtio_pci_vector_use(proxy, nvqs);
>              if (r < 0) {
> -                goto assign_error;
> +                goto config_assign_error;
>              }
>          }
> -        r = msix_set_vector_notifiers(&proxy->pci_dev,
> -                                      virtio_pci_vector_unmask,
> +        r = kvm_virtio_pci_vector_config_use(proxy);
> +        if (r < 0) {
> +            goto config_error;
> +        }
> +        r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
>                                        virtio_pci_vector_mask,
>                                        virtio_pci_vector_poll);
>          if (r < 0) {
> @@ -1095,7 +1141,11 @@ notifiers_error:
>          assert(assign);
>          kvm_virtio_pci_vector_release(proxy, nvqs);
>      }
> -
> +config_error:
> +    kvm_virtio_pci_vector_config_release(proxy);
> +config_assign_error:
> +    virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, !assign,
> +                                  with_irqfd);
>  assign_error:
>      /* We get here on assignment failure. Recover by undoing for VQs 0 .. n. */
>      assert(assign);
> diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> index 2446dcd9ae..b704acc5a8 100644
> --- a/hw/virtio/virtio-pci.h
> +++ b/hw/virtio/virtio-pci.h
> @@ -251,5 +251,7 @@ void virtio_pci_types_register(const VirtioPCIDeviceTypeInfo *t);
>   * @fixed_queues.
>   */
>  unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
> -
> +void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
> +                                              int n, bool assign,
> +                                              bool with_irqfd);
>  #endif
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 04/10] vhost: add new call back function for config interrupt
  2021-09-30  2:33 ` [PATCH v9 04/10] vhost: add new call back function for config interrupt Cindy Lu
@ 2021-10-19  6:52   ` Michael S. Tsirkin
  2021-10-20  2:29     ` Cindy Lu
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:52 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:42AM +0800, Cindy Lu wrote:
> To support the config interrupt, we need to
> add a new call back function for config interrupt.
> 
> Signed-off-by: Cindy Lu <lulu@redhat.com>

Pls make commit log more informative.
Doing what? Called back when?


> ---
>  include/hw/virtio/vhost-backend.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index 8475c5a29d..e732d2e702 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -126,6 +126,8 @@ typedef int (*vhost_get_device_id_op)(struct vhost_dev *dev, uint32_t *dev_id);
>  
>  typedef bool (*vhost_force_iommu_op)(struct vhost_dev *dev);
>  
> +typedef int (*vhost_set_config_call_op)(struct vhost_dev *dev,
> +                                       int fd);
>  typedef struct VhostOps {
>      VhostBackendType backend_type;
>      vhost_backend_init vhost_backend_init;
> @@ -171,6 +173,7 @@ typedef struct VhostOps {
>      vhost_vq_get_addr_op  vhost_vq_get_addr;
>      vhost_get_device_id_op vhost_get_device_id;
>      vhost_force_iommu_op vhost_force_iommu;
> +    vhost_set_config_call_op vhost_set_config_call;
>  } VhostOps;
>  
>  extern const VhostOps user_ops;
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back
  2021-09-30  2:33 ` [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back Cindy Lu
@ 2021-10-19  6:54   ` Michael S. Tsirkin
  2021-10-20  3:20     ` Cindy Lu
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:54 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:43AM +0800, Cindy Lu wrote:
> Add new call back function in vhost-vdpa, this call back function will
> set the fb number to hardware.
> 
> Signed-off-by: Cindy Lu <lulu@redhat.com>

fb being what? you mean fd. said fd doing what exactly?
all this needs to be in the commit log pls.

> ---
>  hw/virtio/trace-events | 2 ++
>  hw/virtio/vhost-vdpa.c | 7 +++++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 8ed19e9d0c..836e73d1f7 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -52,6 +52,8 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
>  vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
>  vhost_vdpa_set_owner(void *dev) "dev: %p"
>  vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> +vhost_vdpa_set_config_call(void *dev, int fd)"dev: %p fd: %d"
> +
>  
>  # virtio.c
>  virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 4fa414feea..73764afc61 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -622,6 +622,12 @@ static int vhost_vdpa_set_vring_call(struct vhost_dev *dev,
>      trace_vhost_vdpa_set_vring_call(dev, file->index, file->fd);
>      return vhost_vdpa_call(dev, VHOST_SET_VRING_CALL, file);
>  }
> +static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
> +                                       int fd)
> +{
> +    trace_vhost_vdpa_set_config_call(dev, fd);
> +    return vhost_vdpa_call(dev, VHOST_VDPA_SET_CONFIG_CALL, &fd);
> +}
>  
>  static int vhost_vdpa_get_features(struct vhost_dev *dev,
>                                       uint64_t *features)
> @@ -688,4 +694,5 @@ const VhostOps vdpa_ops = {
>          .vhost_get_device_id = vhost_vdpa_get_device_id,
>          .vhost_vq_get_addr = vhost_vdpa_vq_get_addr,
>          .vhost_force_iommu = vhost_vdpa_force_iommu,
> +        .vhost_set_config_call = vhost_vdpa_set_config_call,
>  };
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt
  2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
                   ` (9 preceding siblings ...)
  2021-09-30  2:33 ` [PATCH v9 10/10] virtio-pci: " Cindy Lu
@ 2021-10-19  6:56 ` Michael S. Tsirkin
  2021-10-20  2:31   ` Cindy Lu
  10 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2021-10-19  6:56 UTC (permalink / raw)
  To: Cindy Lu
  Cc: jasowang, dgilbert, qemu-devel, arei.gonglei, kraxel, stefanha,
	marcandre.lureau

On Thu, Sep 30, 2021 at 10:33:38AM +0800, Cindy Lu wrote:
> these patches add the support for configure interrupt
> 
> These codes are all tested in vp-vdpa (support configure interrupt)
> vdpa_sim (not support configure interrupt), virtio tap device
> 
> test in virtio-pci bus and virtio-mmio bus


I was inclined to let it slide but it hangs make check
so needs more work.
Meanwhile please go over how the patchset is structured,
and over description of each patch.
I sent some comments but same applied to everything.

Also, pls document the index == -1 hack in more detail.
how does it work and why it's helpful.

Thanks!

> Change in v2:
> Add support for virtio-mmio bus
> active the notifier while the backend support configure interrupt
> misc fixes from v1
> 
> Change in v3
> fix the coding style problems
> 
> Change in v4
> misc fixes from v3
> merge the set_config_notifier to set_guest_notifier
> when vdpa start, check the feature by VIRTIO_NET_F_STATUS
> 
> Change in v5
> misc fixes from v4
> split the code to introduce configure interrupt type and the callback function
> will init the configure interrupt in all virtio-pci and virtio-mmio bus, but will
> only active while using vhost-vdpa driver
> 
> Change in v6
> misc fixes from v5
> decouple virtqueue from interrupt setting and misc process
> fix the bug in virtio_net_handle_rx
> use -1 as the queue number to identify if the interrupt is configure interrupt
> 
> Change in v7
> misc fixes from v6
> decouple virtqueue from interrupt setting and misc process
> decouple virtqueue from vector use/release process
> decouple virtqueue from set notifier fd handler process
> move config_notifier and masked_config_notifier to VirtIODevice
> fix the bug in virtio_net_handle_rx, add more information
> add VIRTIO_CONFIG_IRQ_IDX as the queue number to identify if the interrupt is configure interrupt
> 
> Change in v8
> misc fixes from v7
> decouple virtqueue from interrupt setting and misc process
> decouple virtqueue from vector use/release process
> decouple virtqueue from set notifier fd handler process
> move the vhost configure interrupt to vhost_net
> 
> Change in v9
> misc fixes from v8
> address the comments for v8
> 
> Cindy Lu (10):
>   virtio: introduce macro IRTIO_CONFIG_IRQ_IDX
>   virtio-pci: decouple notifier from interrupt process
>   virtio-pci: decouple the single vector from the interrupt process
>   vhost: add new call back function for config interrupt
>   vhost-vdpa: add support for config interrupt call back
>   virtio: add support for configure interrupt
>   virtio-net: add support for configure interrupt
>   vhost: add support for configure interrupt
>   virtio-mmio: add support for configure interrupt
>   virtio-pci: add support for configure interrupt
> 
>  hw/display/vhost-user-gpu.c       |   6 +
>  hw/net/vhost_net.c                |  10 ++
>  hw/net/virtio-net.c               |  16 +-
>  hw/virtio/trace-events            |   2 +
>  hw/virtio/vhost-user-fs.c         |   9 +-
>  hw/virtio/vhost-vdpa.c            |   7 +
>  hw/virtio/vhost-vsock-common.c    |   6 +
>  hw/virtio/vhost.c                 |  76 +++++++++
>  hw/virtio/virtio-crypto.c         |   6 +
>  hw/virtio/virtio-mmio.c           |  27 ++++
>  hw/virtio/virtio-pci.c            | 260 ++++++++++++++++++++----------
>  hw/virtio/virtio-pci.h            |   4 +-
>  hw/virtio/virtio.c                |  29 ++++
>  include/hw/virtio/vhost-backend.h |   3 +
>  include/hw/virtio/vhost.h         |   4 +
>  include/hw/virtio/virtio.h        |   6 +
>  include/net/vhost_net.h           |   3 +
>  17 files changed, 386 insertions(+), 88 deletions(-)
> 
> -- 
> 2.21.3



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 04/10] vhost: add new call back function for config interrupt
  2021-10-19  6:52   ` Michael S. Tsirkin
@ 2021-10-20  2:29     ` Cindy Lu
  0 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-10-20  2:29 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, dgilbert, QEMU Developers, arei.gonglei, kraxel,
	Stefan Hajnoczi, marcandre.lureau

On Tue, Oct 19, 2021 at 2:52 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 10:33:42AM +0800, Cindy Lu wrote:
> > To support the config interrupt, we need to
> > add a new call back function for config interrupt.
> >
> > Signed-off-by: Cindy Lu <lulu@redhat.com>
>
> Pls make commit log more informative.
> Doing what? Called back when?
>
sure, I will add more information in the commit log
>
> > ---
> >  include/hw/virtio/vhost-backend.h | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> > index 8475c5a29d..e732d2e702 100644
> > --- a/include/hw/virtio/vhost-backend.h
> > +++ b/include/hw/virtio/vhost-backend.h
> > @@ -126,6 +126,8 @@ typedef int (*vhost_get_device_id_op)(struct vhost_dev *dev, uint32_t *dev_id);
> >
> >  typedef bool (*vhost_force_iommu_op)(struct vhost_dev *dev);
> >
> > +typedef int (*vhost_set_config_call_op)(struct vhost_dev *dev,
> > +                                       int fd);
> >  typedef struct VhostOps {
> >      VhostBackendType backend_type;
> >      vhost_backend_init vhost_backend_init;
> > @@ -171,6 +173,7 @@ typedef struct VhostOps {
> >      vhost_vq_get_addr_op  vhost_vq_get_addr;
> >      vhost_get_device_id_op vhost_get_device_id;
> >      vhost_force_iommu_op vhost_force_iommu;
> > +    vhost_set_config_call_op vhost_set_config_call;
> >  } VhostOps;
> >
> >  extern const VhostOps user_ops;
> > --
> > 2.21.3
>



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt
  2021-10-19  6:56 ` [PATCH v9 00/10] vhost-vdpa: " Michael S. Tsirkin
@ 2021-10-20  2:31   ` Cindy Lu
  0 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-10-20  2:31 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, dgilbert, QEMU Developers, arei.gonglei, kraxel,
	Stefan Hajnoczi, marcandre.lureau

On Tue, Oct 19, 2021 at 2:56 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 10:33:38AM +0800, Cindy Lu wrote:
> > these patches add the support for configure interrupt
> >
> > These codes are all tested in vp-vdpa (support configure interrupt)
> > vdpa_sim (not support configure interrupt), virtio tap device
> >
> > test in virtio-pci bus and virtio-mmio bus
>
>
> I was inclined to let it slide but it hangs make check
> so needs more work.
> Meanwhile please go over how the patchset is structured,
> and over description of each patch.
> I sent some comments but same applied to everything.
>
> Also, pls document the index == -1 hack in more detail.
> how does it work and why it's helpful.
>
> Thanks!
>
sure I will check this and add more details
> > Change in v2:
> > Add support for virtio-mmio bus
> > active the notifier while the backend support configure interrupt
> > misc fixes from v1
> >
> > Change in v3
> > fix the coding style problems
> >
> > Change in v4
> > misc fixes from v3
> > merge the set_config_notifier to set_guest_notifier
> > when vdpa start, check the feature by VIRTIO_NET_F_STATUS
> >
> > Change in v5
> > misc fixes from v4
> > split the code to introduce configure interrupt type and the callback function
> > will init the configure interrupt in all virtio-pci and virtio-mmio bus, but will
> > only active while using vhost-vdpa driver
> >
> > Change in v6
> > misc fixes from v5
> > decouple virtqueue from interrupt setting and misc process
> > fix the bug in virtio_net_handle_rx
> > use -1 as the queue number to identify if the interrupt is configure interrupt
> >
> > Change in v7
> > misc fixes from v6
> > decouple virtqueue from interrupt setting and misc process
> > decouple virtqueue from vector use/release process
> > decouple virtqueue from set notifier fd handler process
> > move config_notifier and masked_config_notifier to VirtIODevice
> > fix the bug in virtio_net_handle_rx, add more information
> > add VIRTIO_CONFIG_IRQ_IDX as the queue number to identify if the interrupt is configure interrupt
> >
> > Change in v8
> > misc fixes from v7
> > decouple virtqueue from interrupt setting and misc process
> > decouple virtqueue from vector use/release process
> > decouple virtqueue from set notifier fd handler process
> > move the vhost configure interrupt to vhost_net
> >
> > Change in v9
> > misc fixes from v8
> > address the comments for v8
> >
> > Cindy Lu (10):
> >   virtio: introduce macro IRTIO_CONFIG_IRQ_IDX
> >   virtio-pci: decouple notifier from interrupt process
> >   virtio-pci: decouple the single vector from the interrupt process
> >   vhost: add new call back function for config interrupt
> >   vhost-vdpa: add support for config interrupt call back
> >   virtio: add support for configure interrupt
> >   virtio-net: add support for configure interrupt
> >   vhost: add support for configure interrupt
> >   virtio-mmio: add support for configure interrupt
> >   virtio-pci: add support for configure interrupt
> >
> >  hw/display/vhost-user-gpu.c       |   6 +
> >  hw/net/vhost_net.c                |  10 ++
> >  hw/net/virtio-net.c               |  16 +-
> >  hw/virtio/trace-events            |   2 +
> >  hw/virtio/vhost-user-fs.c         |   9 +-
> >  hw/virtio/vhost-vdpa.c            |   7 +
> >  hw/virtio/vhost-vsock-common.c    |   6 +
> >  hw/virtio/vhost.c                 |  76 +++++++++
> >  hw/virtio/virtio-crypto.c         |   6 +
> >  hw/virtio/virtio-mmio.c           |  27 ++++
> >  hw/virtio/virtio-pci.c            | 260 ++++++++++++++++++++----------
> >  hw/virtio/virtio-pci.h            |   4 +-
> >  hw/virtio/virtio.c                |  29 ++++
> >  include/hw/virtio/vhost-backend.h |   3 +
> >  include/hw/virtio/vhost.h         |   4 +
> >  include/hw/virtio/virtio.h        |   6 +
> >  include/net/vhost_net.h           |   3 +
> >  17 files changed, 386 insertions(+), 88 deletions(-)
> >
> > --
> > 2.21.3
>



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back
  2021-10-19  6:54   ` Michael S. Tsirkin
@ 2021-10-20  3:20     ` Cindy Lu
  0 siblings, 0 replies; 20+ messages in thread
From: Cindy Lu @ 2021-10-20  3:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, dgilbert, QEMU Developers, arei.gonglei, kraxel,
	Stefan Hajnoczi, marcandre.lureau

On Tue, Oct 19, 2021 at 2:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 10:33:43AM +0800, Cindy Lu wrote:
> > Add new call back function in vhost-vdpa, this call back function will
> > set the fb number to hardware.
> >
> > Signed-off-by: Cindy Lu <lulu@redhat.com>
>
> fb being what? you mean fd. said fd doing what exactly?
> all this needs to be in the commit log pls.
>
I will add more information for this
Thanks
> > ---
> >  hw/virtio/trace-events | 2 ++
> >  hw/virtio/vhost-vdpa.c | 7 +++++++
> >  2 files changed, 9 insertions(+)
> >
> > diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> > index 8ed19e9d0c..836e73d1f7 100644
> > --- a/hw/virtio/trace-events
> > +++ b/hw/virtio/trace-events
> > @@ -52,6 +52,8 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
> >  vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
> >  vhost_vdpa_set_owner(void *dev) "dev: %p"
> >  vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> > +vhost_vdpa_set_config_call(void *dev, int fd)"dev: %p fd: %d"
> > +
> >
> >  # virtio.c
> >  virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index 4fa414feea..73764afc61 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -622,6 +622,12 @@ static int vhost_vdpa_set_vring_call(struct vhost_dev *dev,
> >      trace_vhost_vdpa_set_vring_call(dev, file->index, file->fd);
> >      return vhost_vdpa_call(dev, VHOST_SET_VRING_CALL, file);
> >  }
> > +static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
> > +                                       int fd)
> > +{
> > +    trace_vhost_vdpa_set_config_call(dev, fd);
> > +    return vhost_vdpa_call(dev, VHOST_VDPA_SET_CONFIG_CALL, &fd);
> > +}
> >
> >  static int vhost_vdpa_get_features(struct vhost_dev *dev,
> >                                       uint64_t *features)
> > @@ -688,4 +694,5 @@ const VhostOps vdpa_ops = {
> >          .vhost_get_device_id = vhost_vdpa_get_device_id,
> >          .vhost_vq_get_addr = vhost_vdpa_vq_get_addr,
> >          .vhost_force_iommu = vhost_vdpa_force_iommu,
> > +        .vhost_set_config_call = vhost_vdpa_set_config_call,
> >  };
> > --
> > 2.21.3
>



^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2021-10-20  3:39 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-30  2:33 [PATCH v9 00/10] vhost-vdpa: add support for configure interrupt Cindy Lu
2021-09-30  2:33 ` [PATCH v9 01/10] virtio: introduce macro IRTIO_CONFIG_IRQ_IDX Cindy Lu
2021-10-19  6:47   ` Michael S. Tsirkin
2021-09-30  2:33 ` [PATCH v9 02/10] virtio-pci: decouple notifier from interrupt process Cindy Lu
2021-09-30  2:33 ` [PATCH v9 03/10] virtio-pci: decouple the single vector from the " Cindy Lu
2021-09-30  2:33 ` [PATCH v9 04/10] vhost: add new call back function for config interrupt Cindy Lu
2021-10-19  6:52   ` Michael S. Tsirkin
2021-10-20  2:29     ` Cindy Lu
2021-09-30  2:33 ` [PATCH v9 05/10] vhost-vdpa: add support for config interrupt call back Cindy Lu
2021-10-19  6:54   ` Michael S. Tsirkin
2021-10-20  3:20     ` Cindy Lu
2021-09-30  2:33 ` [PATCH v9 06/10] virtio: add support for configure interrupt Cindy Lu
2021-09-30  2:33 ` [PATCH v9 07/10] virtio-net: " Cindy Lu
2021-09-30  2:33 ` [PATCH v9 08/10] vhost: " Cindy Lu
2021-09-30  2:33 ` [PATCH v9 09/10] virtio-mmio: " Cindy Lu
2021-09-30  2:33 ` [PATCH v9 10/10] virtio-pci: " Cindy Lu
2021-10-19  6:39   ` Michael S. Tsirkin
2021-10-19  6:50   ` Michael S. Tsirkin
2021-10-19  6:56 ` [PATCH v9 00/10] vhost-vdpa: " Michael S. Tsirkin
2021-10-20  2:31   ` Cindy Lu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).