qemu-devel.nongnu.org archive mirror
* [PATCH v5 0/6] optimize the downtime for vfio migration
@ 2021-11-03  8:16 Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 1/6] vfio: simplify the conditional statements in vfio_msi_enable Longpeng(Mike)
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

Hi guys,
 
In the vfio migration resume phase, the downtime grows with the
number of unmasked vectors the vfio device has. This series
optimizes that path.
 
You can see the commit message of PATCH 6 for details.
 
Patches 1-3 are simple cleanups and fixes.
Patches 4-5 are preparations for the optimization.
Patch 6 optimizes the vfio MSI-X setup path.

Changes v4->v5:
 - set up the notifier and irqfd in the same function to make
   the code neater.    [Alex]

Changes v3->v4:
 - fix several typos and grammatical errors [Alex]
 - remove the patches that fix and clean the MSIX common part
   from this series [Alex]
 - Patch 6:
    - use vector->use directly and fill it with -1 on error
      paths [Alex]
    - add comment before enable deferring to commit [Alex]
    - move the code that do_use/release on vector 0 into an
      "else" branch [Alex]
    - introduce vfio_prepare_kvm_msi_virq_batch() that enables
      the 'defer_kvm_irq_routing' flag [Alex]
    - introduce vfio_commit_kvm_msi_virq_batch() that clears the
      'defer_kvm_irq_routing' flag and does further work [Alex]

Changes v2->v3:
 - fix two errors [Longpeng]

Changes v1->v2:
 - fix several typos and grammatical errors [Alex, Philippe]
 - split fixups and cleanups into separate patches  [Alex, Philippe]
 - introduce kvm_irqchip_add_deferred_msi_route to
   minimize code changes    [Alex]
 - enable the optimization in msi setup path    [Alex]

Longpeng (Mike) (6):
  vfio: simplify the conditional statements in vfio_msi_enable
  vfio: move re-enabling INTX out of the common helper
  vfio: simplify the failure path in vfio_msi_enable
  kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
  Revert "vfio: Avoid disabling and enabling vectors repeatedly in VFIO
    migration"
  vfio: defer to commit kvm irq routing when enable msi/msix

 accel/kvm/kvm-all.c  |  15 ++++-
 hw/vfio/pci.c        | 176 ++++++++++++++++++++++++++++++++-------------------
 hw/vfio/pci.h        |   1 +
 include/sysemu/kvm.h |   6 ++
 4 files changed, 130 insertions(+), 68 deletions(-)

-- 
1.8.3.1




* [PATCH v5 1/6] vfio: simplify the conditional statements in vfio_msi_enable
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 2/6] vfio: move re-enabling INTX out of the common helper Longpeng(Mike)
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

It's unnecessary to test against the specific return value of
VFIO_DEVICE_SET_IRQS, since any positive return is an error
indicating the number of vectors we should retry with.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 5cdf1d4..dd30806 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -650,7 +650,7 @@ retry:
     if (ret) {
         if (ret < 0) {
             error_report("vfio: Error: Failed to setup MSI fds: %m");
-        } else if (ret != vdev->nr_vectors) {
+        } else {
             error_report("vfio: Error: Failed to enable %d "
                          "MSI vectors, retry with %d", vdev->nr_vectors, ret);
         }
@@ -668,7 +668,7 @@ retry:
         g_free(vdev->msi_vectors);
         vdev->msi_vectors = NULL;
 
-        if (ret > 0 && ret != vdev->nr_vectors) {
+        if (ret > 0) {
             vdev->nr_vectors = ret;
             goto retry;
         }
-- 
1.8.3.1




* [PATCH v5 2/6] vfio: move re-enabling INTX out of the common helper
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 1/6] vfio: simplify the conditional statements in vfio_msi_enable Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 3/6] vfio: simplify the failure path in vfio_msi_enable Longpeng(Mike)
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

Move re-enabling INTX out of the common helper; the callers now
decide whether to re-enable it.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index dd30806..d5e542b 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -690,7 +690,6 @@ retry:
 
 static void vfio_msi_disable_common(VFIOPCIDevice *vdev)
 {
-    Error *err = NULL;
     int i;
 
     for (i = 0; i < vdev->nr_vectors; i++) {
@@ -709,15 +708,11 @@ static void vfio_msi_disable_common(VFIOPCIDevice *vdev)
     vdev->msi_vectors = NULL;
     vdev->nr_vectors = 0;
     vdev->interrupt = VFIO_INT_NONE;
-
-    vfio_intx_enable(vdev, &err);
-    if (err) {
-        error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
-    }
 }
 
 static void vfio_msix_disable(VFIOPCIDevice *vdev)
 {
+    Error *err = NULL;
     int i;
 
     msix_unset_vector_notifiers(&vdev->pdev);
@@ -738,6 +733,10 @@ static void vfio_msix_disable(VFIOPCIDevice *vdev)
     }
 
     vfio_msi_disable_common(vdev);
+    vfio_intx_enable(vdev, &err);
+    if (err) {
+        error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
+    }
 
     memset(vdev->msix->pending, 0,
            BITS_TO_LONGS(vdev->msix->entries) * sizeof(unsigned long));
@@ -747,8 +746,14 @@ static void vfio_msix_disable(VFIOPCIDevice *vdev)
 
 static void vfio_msi_disable(VFIOPCIDevice *vdev)
 {
+    Error *err = NULL;
+
     vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSI_IRQ_INDEX);
     vfio_msi_disable_common(vdev);
+    vfio_intx_enable(vdev, &err);
+    if (err) {
+        error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
+    }
 
     trace_vfio_msi_disable(vdev->vbasedev.name);
 }
-- 
1.8.3.1




* [PATCH v5 3/6] vfio: simplify the failure path in vfio_msi_enable
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 1/6] vfio: simplify the conditional statements in vfio_msi_enable Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 2/6] vfio: move re-enabling INTX out of the common helper Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route Longpeng(Mike)
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

Use vfio_msi_disable_common to simplify the error handling
in vfio_msi_enable.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index d5e542b..1ff84e6 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -47,6 +47,7 @@
 
 static void vfio_disable_interrupts(VFIOPCIDevice *vdev);
 static void vfio_mmap_set_enabled(VFIOPCIDevice *vdev, bool enabled);
+static void vfio_msi_disable_common(VFIOPCIDevice *vdev);
 
 /*
  * Disabling BAR mmaping can be slow, but toggling it around INTx can
@@ -655,24 +656,12 @@ retry:
                          "MSI vectors, retry with %d", vdev->nr_vectors, ret);
         }
 
-        for (i = 0; i < vdev->nr_vectors; i++) {
-            VFIOMSIVector *vector = &vdev->msi_vectors[i];
-            if (vector->virq >= 0) {
-                vfio_remove_kvm_msi_virq(vector);
-            }
-            qemu_set_fd_handler(event_notifier_get_fd(&vector->interrupt),
-                                NULL, NULL, NULL);
-            event_notifier_cleanup(&vector->interrupt);
-        }
-
-        g_free(vdev->msi_vectors);
-        vdev->msi_vectors = NULL;
+        vfio_msi_disable_common(vdev);
 
         if (ret > 0) {
             vdev->nr_vectors = ret;
             goto retry;
         }
-        vdev->nr_vectors = 0;
 
         /*
          * Failing to setup MSI doesn't really fall within any specification.
@@ -680,7 +669,6 @@ retry:
          * out to fall back to INTx for this device.
          */
         error_report("vfio: Error: Failed to enable MSI");
-        vdev->interrupt = VFIO_INT_NONE;
 
         return;
     }
-- 
1.8.3.1




* [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
                   ` (2 preceding siblings ...)
  2021-11-03  8:16 ` [PATCH v5 3/6] vfio: simplify the failure path in vfio_msi_enable Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-12  3:59   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  2021-11-12  9:31   ` Paolo Bonzini
  2021-11-03  8:16 ` [PATCH v5 5/6] Revert "vfio: Avoid disabling and enabling vectors repeatedly in VFIO migration" Longpeng(Mike)
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

Extract a common helper that adds an MSI route for a specific
vector but does not commit it immediately.
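
For illustration, a caller that wants to batch several routes could
do something like this (a rough sketch only; "nr_vectors", "pdev" and
the error handling are placeholders, not part of this patch):

    int i, virq;

    for (i = 0; i < nr_vectors; i++) {
        /* only queues the route in kvm_state, no KVM_SET_GSI_ROUTING yet */
        virq = kvm_irqchip_add_deferred_msi_route(kvm_state, i, pdev);
        if (virq < 0) {
            /* fall back to non-KVM injection for this vector */
        }
    }
    /* flush all queued routes to KVM with a single ioctl */
    kvm_irqchip_commit_routes(kvm_state);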

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 accel/kvm/kvm-all.c  | 15 +++++++++++++--
 include/sysemu/kvm.h |  6 ++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index db8d83b..8627f7c 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1953,7 +1953,7 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
     return kvm_set_irq(s, route->kroute.gsi, 1);
 }
 
-int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
+int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector, PCIDevice *dev)
 {
     struct kvm_irq_routing_entry kroute = {};
     int virq;
@@ -1996,7 +1996,18 @@ int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
 
     kvm_add_routing_entry(s, &kroute);
     kvm_arch_add_msi_route_post(&kroute, vector, dev);
-    kvm_irqchip_commit_routes(s);
+
+    return virq;
+}
+
+int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
+{
+    int virq;
+
+    virq = kvm_irqchip_add_deferred_msi_route(s, vector, dev);
+    if (virq >= 0) {
+        kvm_irqchip_commit_routes(s);
+    }
 
     return virq;
 }
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index a1ab1ee..8de0d9a 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -476,6 +476,12 @@ void kvm_init_cpu_signals(CPUState *cpu);
  * @return: virq (>=0) when success, errno (<0) when failed.
  */
 int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev);
+/**
+ * Add MSI route for specific vector but does not commit to KVM
+ * immediately
+ */
+int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector,
+                                       PCIDevice *dev);
 int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
                                  PCIDevice *dev);
 void kvm_irqchip_commit_routes(KVMState *s);
-- 
1.8.3.1




* [PATCH v5 5/6] Revert "vfio: Avoid disabling and enabling vectors repeatedly in VFIO migration"
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
                   ` (3 preceding siblings ...)
  2021-11-03  8:16 ` [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-03  8:16 ` [PATCH v5 6/6] vfio: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike)
  2021-11-03 20:36 ` [PATCH v5 0/6] optimize the downtime for vfio migration Alex Williamson
  6 siblings, 0 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

Commit ecebe53fe993 ("vfio: Avoid disabling and enabling vectors
repeatedly in VFIO migration") avoids inefficiently disabling and
enabling vectors repeatedly and lets the unmasked vectors be enabled
one by one.

However, we want to batch multiple routes and defer the commit,
committing only once outside the loop that sets the vector notifiers,
so we can no longer enable the vectors one by one inside that loop.

Revert that commit; the next patch takes a different approach that
not only avoids disabling/enabling vectors repeatedly but also
satisfies our requirement to defer the commit.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 20 +++-----------------
 1 file changed, 3 insertions(+), 17 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 1ff84e6..69ad081 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -569,9 +569,6 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
 
 static void vfio_msix_enable(VFIOPCIDevice *vdev)
 {
-    PCIDevice *pdev = &vdev->pdev;
-    unsigned int nr, max_vec = 0;
-
     vfio_disable_interrupts(vdev);
 
     vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->msix->entries);
@@ -590,22 +587,11 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
      * triggering to userspace, then immediately release the vector, leaving
      * the physical device with no vectors enabled, but MSI-X enabled, just
      * like the guest view.
-     * If there are already unmasked vectors (in migration resume phase and
-     * some guest startups) which will be enabled soon, we can allocate all
-     * of them here to avoid inefficiently disabling and enabling vectors
-     * repeatedly later.
      */
-    if (!pdev->msix_function_masked) {
-        for (nr = 0; nr < msix_nr_vectors_allocated(pdev); nr++) {
-            if (!msix_is_masked(pdev, nr)) {
-                max_vec = nr;
-            }
-        }
-    }
-    vfio_msix_vector_do_use(pdev, max_vec, NULL, NULL);
-    vfio_msix_vector_release(pdev, max_vec);
+    vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
+    vfio_msix_vector_release(&vdev->pdev, 0);
 
-    if (msix_set_vector_notifiers(pdev, vfio_msix_vector_use,
+    if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
                                   vfio_msix_vector_release, NULL)) {
         error_report("vfio: msix_set_vector_notifiers failed");
     }
-- 
1.8.3.1




* [PATCH v5 6/6] vfio: defer to commit kvm irq routing when enable msi/msix
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
                   ` (4 preceding siblings ...)
  2021-11-03  8:16 ` [PATCH v5 5/6] Revert "vfio: Avoid disabling and enabling vectors repeatedly in VFIO migration" Longpeng(Mike)
@ 2021-11-03  8:16 ` Longpeng(Mike)
  2021-11-03 20:36 ` [PATCH v5 0/6] optimize the downtime for vfio migration Alex Williamson
  6 siblings, 0 replies; 11+ messages in thread
From: Longpeng(Mike) @ 2021-11-03  8:16 UTC (permalink / raw)
  To: alex.williamson, pbonzini; +Cc: Longpeng(Mike), arei.gonglei, qemu-devel, kvm

In the migration resume phase, all unmasked MSI-X vectors need to be
set up when loading the VF state. The setup takes longer the more VFs
the VM has and the more unmasked vectors each VF has.

The hot spot is kvm_irqchip_commit_routes: each invocation scans and
updates all irqfds that are already assigned, so more vectors mean
more time to process them.

vfio_pci_load_config
  vfio_msix_enable
    msix_set_vector_notifiers
      for (vector = 0; vector < dev->msix_entries_nr; vector++) {
        vfio_msix_vector_do_use
          vfio_add_kvm_msi_virq
            kvm_irqchip_commit_routes <-- expensive
      }

We can reduce the cost by committing only once, outside the loop.
The routes are cached in kvm_state; we commit them first and then
bind an irqfd for each vector.
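
Schematically, the new MSI-X enable path looks like this (simplified
from the patch below; error handling and the masked-vector-0 fallback
are omitted):

  vfio_prepare_kvm_msi_virq_batch(vdev);    /* set defer_kvm_irq_routing */

  msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
                            vfio_msix_vector_release, NULL);
                                            /* vector_use only queues KVM routes */

  vfio_commit_kvm_msi_virq_batch(vdev);     /* one kvm_irqchip_commit_routes(),
                                               then bind an irqfd per vector */

  vfio_enable_vectors(vdev, true);          /* single VFIO_DEVICE_SET_IRQS */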

The test VM has 128 vCPUs and 8 VFs (each with 65 vectors). We
measured the cost of vfio_msix_enable for each VF; more than 90%
of the cost is eliminated.

VF      Count of irqfds[*]  Original (ms)   With this patch (ms)

1st           65            8               2
2nd           130           15              2
3rd           195           22              2
4th           260           24              3
5th           325           36              2
6th           390           44              3
7th           455           51              3
8th           520           58              4
Total                       258             21

[*] Count of irqfds
How many irqfds are already assigned and need to be processed in
this round.

The same optimization is applied to the MSI path too.

Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
 hw/vfio/pci.c | 123 ++++++++++++++++++++++++++++++++++++++++++++--------------
 hw/vfio/pci.h |   1 +
 2 files changed, 95 insertions(+), 29 deletions(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 69ad081..5b3a86d 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -413,30 +413,37 @@ static int vfio_enable_vectors(VFIOPCIDevice *vdev, bool msix)
 static void vfio_add_kvm_msi_virq(VFIOPCIDevice *vdev, VFIOMSIVector *vector,
                                   int vector_n, bool msix)
 {
-    int virq;
-
     if ((msix && vdev->no_kvm_msix) || (!msix && vdev->no_kvm_msi)) {
         return;
     }
 
-    if (event_notifier_init(&vector->kvm_interrupt, 0)) {
+    vector->virq = kvm_irqchip_add_deferred_msi_route(kvm_state, vector_n,
+                                                      &vdev->pdev);
+}
+
+static void vfio_connect_kvm_msi_virq(VFIOMSIVector *vector)
+{
+    if (vector->virq < 0) {
         return;
     }
 
-    virq = kvm_irqchip_add_msi_route(kvm_state, vector_n, &vdev->pdev);
-    if (virq < 0) {
-        event_notifier_cleanup(&vector->kvm_interrupt);
-        return;
+    if (event_notifier_init(&vector->kvm_interrupt, 0)) {
+        goto fail_notifier;
     }
 
     if (kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, &vector->kvm_interrupt,
-                                       NULL, virq) < 0) {
-        kvm_irqchip_release_virq(kvm_state, virq);
-        event_notifier_cleanup(&vector->kvm_interrupt);
-        return;
+                                           NULL, vector->virq) < 0) {
+        goto fail_kvm;
     }
 
-    vector->virq = virq;
+    return;
+
+fail_kvm:
+    event_notifier_cleanup(&vector->kvm_interrupt);
+fail_notifier:
+    kvm_irqchip_release_virq(kvm_state, vector->virq);
+    vector->virq = -1;
+    return;
 }
 
 static void vfio_remove_kvm_msi_virq(VFIOMSIVector *vector)
@@ -492,6 +499,10 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
     } else {
         if (msg) {
             vfio_add_kvm_msi_virq(vdev, vector, nr, true);
+            if (!vdev->defer_kvm_irq_routing) {
+                kvm_irqchip_commit_routes(kvm_state);
+                vfio_connect_kvm_msi_virq(vector);
+            }
         }
     }
 
@@ -501,11 +512,13 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
      * increase them as needed.
      */
     if (vdev->nr_vectors < nr + 1) {
-        vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
         vdev->nr_vectors = nr + 1;
-        ret = vfio_enable_vectors(vdev, true);
-        if (ret) {
-            error_report("vfio: failed to enable vectors, %d", ret);
+        if (!vdev->defer_kvm_irq_routing) {
+            vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
+            ret = vfio_enable_vectors(vdev, true);
+            if (ret) {
+                error_report("vfio: failed to enable vectors, %d", ret);
+            }
         }
     } else {
         Error *err = NULL;
@@ -567,6 +580,30 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
     }
 }
 
+static void vfio_prepare_kvm_msi_virq_batch(VFIOPCIDevice *vdev)
+{
+    assert(!vdev->defer_kvm_irq_routing);
+    vdev->defer_kvm_irq_routing = true;
+}
+
+static void vfio_commit_kvm_msi_virq_batch(VFIOPCIDevice *vdev)
+{
+    int i;
+
+    assert(vdev->defer_kvm_irq_routing);
+    vdev->defer_kvm_irq_routing = false;
+
+    if (!vdev->nr_vectors) {
+        return;
+    }
+
+    kvm_irqchip_commit_routes(kvm_state);
+
+    for (i = 0; i < vdev->nr_vectors; i++) {
+        vfio_connect_kvm_msi_virq(&vdev->msi_vectors[i]);
+    }
+}
+
 static void vfio_msix_enable(VFIOPCIDevice *vdev)
 {
     vfio_disable_interrupts(vdev);
@@ -576,26 +613,45 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
     vdev->interrupt = VFIO_INT_MSIX;
 
     /*
-     * Some communication channels between VF & PF or PF & fw rely on the
-     * physical state of the device and expect that enabling MSI-X from the
-     * guest enables the same on the host.  When our guest is Linux, the
-     * guest driver call to pci_enable_msix() sets the enabling bit in the
-     * MSI-X capability, but leaves the vector table masked.  We therefore
-     * can't rely on a vector_use callback (from request_irq() in the guest)
-     * to switch the physical device into MSI-X mode because that may come a
-     * long time after pci_enable_msix().  This code enables vector 0 with
-     * triggering to userspace, then immediately release the vector, leaving
-     * the physical device with no vectors enabled, but MSI-X enabled, just
-     * like the guest view.
+     * Setting vector notifiers triggers synchronous vector-use
+     * callbacks for each active vector.  Deferring to commit the KVM
+     * routes once rather than per vector provides a substantial
+     * performance improvement.
      */
-    vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
-    vfio_msix_vector_release(&vdev->pdev, 0);
+    vfio_prepare_kvm_msi_virq_batch(vdev);
 
     if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
                                   vfio_msix_vector_release, NULL)) {
         error_report("vfio: msix_set_vector_notifiers failed");
     }
 
+    vfio_commit_kvm_msi_virq_batch(vdev);
+
+    if (vdev->nr_vectors) {
+        int ret;
+
+        ret = vfio_enable_vectors(vdev, true);
+        if (ret) {
+            error_report("vfio: failed to enable vectors, %d", ret);
+        }
+    } else {
+        /*
+         * Some communication channels between VF & PF or PF & fw rely on the
+         * physical state of the device and expect that enabling MSI-X from the
+         * guest enables the same on the host.  When our guest is Linux, the
+         * guest driver call to pci_enable_msix() sets the enabling bit in the
+         * MSI-X capability, but leaves the vector table masked.  We therefore
+         * can't rely on a vector_use callback (from request_irq() in the guest)
+         * to switch the physical device into MSI-X mode because that may come a
+         * long time after pci_enable_msix().  This code enables vector 0 with
+         * triggering to userspace, then immediately release the vector, leaving
+         * the physical device with no vectors enabled, but MSI-X enabled, just
+         * like the guest view.
+         */
+        vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
+        vfio_msix_vector_release(&vdev->pdev, 0);
+    }
+
     trace_vfio_msix_enable(vdev->vbasedev.name);
 }
 
@@ -605,6 +661,13 @@ static void vfio_msi_enable(VFIOPCIDevice *vdev)
 
     vfio_disable_interrupts(vdev);
 
+    /*
+     * Setting vector notifiers needs to enable route for each vector.
+     * Deferring to commit the KVM routes once rather than per vector
+     * provides a substantial performance improvement.
+     */
+    vfio_prepare_kvm_msi_virq_batch(vdev);
+
     vdev->nr_vectors = msi_nr_vectors_allocated(&vdev->pdev);
 retry:
     vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->nr_vectors);
@@ -630,6 +693,8 @@ retry:
         vfio_add_kvm_msi_virq(vdev, vector, i, false);
     }
 
+    vfio_commit_kvm_msi_virq_batch(vdev);
+
     /* Set interrupt type prior to possible interrupts */
     vdev->interrupt = VFIO_INT_MSI;
 
diff --git a/hw/vfio/pci.h b/hw/vfio/pci.h
index 6477751..d3c5177 100644
--- a/hw/vfio/pci.h
+++ b/hw/vfio/pci.h
@@ -171,6 +171,7 @@ struct VFIOPCIDevice {
     bool no_kvm_ioeventfd;
     bool no_vfio_ioeventfd;
     bool enable_ramfb;
+    bool defer_kvm_irq_routing;
     VFIODisplay *dpy;
     Notifier irqchip_change_notifier;
 };
-- 
1.8.3.1




* Re: [PATCH v5 0/6] optimize the downtime for vfio migration
  2021-11-03  8:16 [PATCH v5 0/6] optimize the downtime for vfio migration Longpeng(Mike)
                   ` (5 preceding siblings ...)
  2021-11-03  8:16 ` [PATCH v5 6/6] vfio: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike)
@ 2021-11-03 20:36 ` Alex Williamson
  6 siblings, 0 replies; 11+ messages in thread
From: Alex Williamson @ 2021-11-03 20:36 UTC (permalink / raw)
  To: Longpeng(Mike); +Cc: pbonzini, arei.gonglei, qemu-devel, kvm

On Wed, 3 Nov 2021 16:16:51 +0800
"Longpeng(Mike)" <longpeng2@huawei.com> wrote:

> Hi guys,
>  
> In the vfio migration resume phase, the downtime grows with the
> number of unmasked vectors the vfio device has. This series
> optimizes that path.
>  
> You can see the commit message of PATCH 6 for details.
>  
> Patches 1-3 are simple cleanups and fixes.
> Patches 4-5 are preparations for the optimization.
> Patch 6 optimizes the vfio MSI-X setup path.
> 
> Changes v4->v5:
>  - set up the notifier and irqfd in the same function to make
>    the code neater.    [Alex]

I wish this had been posted a day earlier; QEMU entered soft-freeze
for the 6.2 release yesterday[1].  Since vfio migration is still an
experimental feature, let's pick this up when the next development
window opens, and please try to get an ack from Paolo for the deferred
msi route function in the meantime.  Thanks,

Alex

[1]https://wiki.qemu.org/Planning/6.2

> 
> Changes v3->v4:
>  - fix several typos and grammatical errors [Alex]
>  - remove the patches that fix and clean the MSIX common part
>    from this series [Alex]
>  - Patch 6:
>     - use vector->use directly and fill it with -1 on error
>       paths [Alex]
>     - add comment before enable deferring to commit [Alex]
>     - move the code that do_use/release on vector 0 into an
>       "else" branch [Alex]
>     - introduce vfio_prepare_kvm_msi_virq_batch() that enables
>       the 'defer_kvm_irq_routing' flag [Alex]
>     - introduce vfio_commit_kvm_msi_virq_batch() that clears the
>       'defer_kvm_irq_routing' flag and does further work [Alex]
> 
> Changes v2->v3:
>  - fix two errors [Longpeng]
> 
> Changes v1->v2:
>  - fix several typos and grammatical errors [Alex, Philippe]
>  - split fixups and cleanups into separate patches  [Alex, Philippe]
>  - introduce kvm_irqchip_add_deferred_msi_route to
>    minimize code changes    [Alex]
>  - enable the optimization in msi setup path    [Alex]
> 
> Longpeng (Mike) (6):
>   vfio: simplify the conditional statements in vfio_msi_enable
>   vfio: move re-enabling INTX out of the common helper
>   vfio: simplify the failure path in vfio_msi_enable
>   kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
>   Revert "vfio: Avoid disabling and enabling vectors repeatedly in VFIO
>     migration"
>   vfio: defer to commit kvm irq routing when enable msi/msix
> 
>  accel/kvm/kvm-all.c  |  15 ++++-
>  hw/vfio/pci.c        | 176 ++++++++++++++++++++++++++++++++-------------------
>  hw/vfio/pci.h        |   1 +
>  include/sysemu/kvm.h |   6 ++
>  4 files changed, 130 insertions(+), 68 deletions(-)
> 




* RE: [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
  2021-11-03  8:16 ` [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route Longpeng(Mike)
@ 2021-11-12  3:59   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  2021-11-12  9:31   ` Paolo Bonzini
  1 sibling, 0 replies; 11+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-11-12  3:59 UTC (permalink / raw)
  To: pbonzini; +Cc: alex.williamson, Gonglei (Arei), qemu-devel, kvm

Hi Paolo,

Ping...

Do you have any suggestions about this change? It seems Alex has no
objection to this series now, but we need your ACK. Thanks.


> -----Original Message-----
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> Sent: Wednesday, November 3, 2021 4:17 PM
> To: alex.williamson@redhat.com; pbonzini@redhat.com
> Cc: qemu-devel@nongnu.org; kvm@vger.kernel.org; Gonglei (Arei)
> <arei.gonglei@huawei.com>; Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) <longpeng2@huawei.com>
> Subject: [PATCH v5 4/6] kvm: irqchip: extract
> kvm_irqchip_add_deferred_msi_route
> 
> Extract a common helper that adds an MSI route for a specific
> vector but does not commit it immediately.
> 
> Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
> ---
>  accel/kvm/kvm-all.c  | 15 +++++++++++++--
>  include/sysemu/kvm.h |  6 ++++++
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index db8d83b..8627f7c 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -1953,7 +1953,7 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
>      return kvm_set_irq(s, route->kroute.gsi, 1);
>  }
> 
> -int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector, PCIDevice
> *dev)
>  {
>      struct kvm_irq_routing_entry kroute = {};
>      int virq;
> @@ -1996,7 +1996,18 @@ int kvm_irqchip_add_msi_route(KVMState *s, int vector,
> PCIDevice *dev)
> 
>      kvm_add_routing_entry(s, &kroute);
>      kvm_arch_add_msi_route_post(&kroute, vector, dev);
> -    kvm_irqchip_commit_routes(s);
> +
> +    return virq;
> +}
> +
> +int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> +{
> +    int virq;
> +
> +    virq = kvm_irqchip_add_deferred_msi_route(s, vector, dev);
> +    if (virq >= 0) {
> +        kvm_irqchip_commit_routes(s);
> +    }
> 
>      return virq;
>  }
> diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> index a1ab1ee..8de0d9a 100644
> --- a/include/sysemu/kvm.h
> +++ b/include/sysemu/kvm.h
> @@ -476,6 +476,12 @@ void kvm_init_cpu_signals(CPUState *cpu);
>   * @return: virq (>=0) when success, errno (<0) when failed.
>   */
>  int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev);
> +/**
> + * Add MSI route for specific vector but does not commit to KVM
> + * immediately
> + */
> +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector,
> +                                       PCIDevice *dev);
>  int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
>                                   PCIDevice *dev);
>  void kvm_irqchip_commit_routes(KVMState *s);
> --
> 1.8.3.1




* Re: [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
  2021-11-03  8:16 ` [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route Longpeng(Mike)
  2021-11-12  3:59   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-11-12  9:31   ` Paolo Bonzini
  2021-11-13  9:21     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  1 sibling, 1 reply; 11+ messages in thread
From: Paolo Bonzini @ 2021-11-12  9:31 UTC (permalink / raw)
  To: Longpeng(Mike), alex.williamson; +Cc: arei.gonglei, qemu-devel, kvm

On 11/3/21 09:16, Longpeng(Mike) wrote:
> Extract a common helper that adds an MSI route for a specific
> vector but does not commit it immediately.
> 
> Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>

I think adding the new function is not necessary; I have no problem 
moving the call to kvm_irqchip_commit_routes to the callers.  Perhaps 
you can have an API like this:

typedef struct KVMRouteChange {
     KVMState *s;
     int changes;
} KVMRouteChange;

KVMRouteChange kvm_irqchip_begin_route_changes(KVMState *s)
{
     return (KVMRouteChange) { .s = s, .changes = 0 };
}

void kvm_irqchip_commit_route_changes(KVMRouteChange *c)
{
     if (c->changes) {
         kvm_irqchip_commit_routes(c->s);
         c->changes = 0;
     }
}

int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
{
     KVMState *s = c->s;
     ...
     kvm_add_routing_entry(s, &kroute);
     kvm_arch_add_msi_route_post(&kroute, vector, dev);
     c->changes++;

     return virq;
}

so it's harder for the callers to "forget" kvm_irqchip_commit_route_changes.
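
A hypothetical call site would then look roughly like this (just to
illustrate the proposed API; "nr_vectors", "vectors" and "pdev" are
placeholders):

KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
int i;

for (i = 0; i < nr_vectors; i++) {
    vectors[i].virq = kvm_irqchip_add_msi_route(&c, i, pdev);
}
/* a no-op if no route was actually added */
kvm_irqchip_commit_route_changes(&c);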

Paolo

> ---
>   accel/kvm/kvm-all.c  | 15 +++++++++++++--
>   include/sysemu/kvm.h |  6 ++++++
>   2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index db8d83b..8627f7c 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -1953,7 +1953,7 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
>       return kvm_set_irq(s, route->kroute.gsi, 1);
>   }
>   
> -int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector, PCIDevice *dev)
>   {
>       struct kvm_irq_routing_entry kroute = {};
>       int virq;
> @@ -1996,7 +1996,18 @@ int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
>   
>       kvm_add_routing_entry(s, &kroute);
>       kvm_arch_add_msi_route_post(&kroute, vector, dev);
> -    kvm_irqchip_commit_routes(s);
> +
> +    return virq;
> +}
> +
> +int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> +{
> +    int virq;
> +
> +    virq = kvm_irqchip_add_deferred_msi_route(s, vector, dev);
> +    if (virq >= 0) {
> +        kvm_irqchip_commit_routes(s);
> +    }
>   
>       return virq;
>   }
> diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> index a1ab1ee..8de0d9a 100644
> --- a/include/sysemu/kvm.h
> +++ b/include/sysemu/kvm.h
> @@ -476,6 +476,12 @@ void kvm_init_cpu_signals(CPUState *cpu);
>    * @return: virq (>=0) when success, errno (<0) when failed.
>    */
>   int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev);
> +/**
> + * Add MSI route for specific vector but does not commit to KVM
> + * immediately
> + */
> +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector,
> +                                       PCIDevice *dev);
>   int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
>                                    PCIDevice *dev);
>   void kvm_irqchip_commit_routes(KVMState *s);
> 




* RE: [PATCH v5 4/6] kvm: irqchip: extract kvm_irqchip_add_deferred_msi_route
  2021-11-12  9:31   ` Paolo Bonzini
@ 2021-11-13  9:21     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  0 siblings, 0 replies; 11+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-11-13  9:21 UTC (permalink / raw)
  To: Paolo Bonzini, alex.williamson; +Cc: Gonglei (Arei), qemu-devel, kvm



> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> Sent: Friday, November 12, 2021 5:32 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>; alex.williamson@redhat.com
> Cc: qemu-devel@nongnu.org; kvm@vger.kernel.org; Gonglei (Arei)
> <arei.gonglei@huawei.com>
> Subject: Re: [PATCH v5 4/6] kvm: irqchip: extract
> kvm_irqchip_add_deferred_msi_route
> 
> On 11/3/21 09:16, Longpeng(Mike) wrote:
> > Extract a common helper that adds an MSI route for a specific
> > vector but does not commit it immediately.
> >
> > Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
> 
> I think adding the new function is not necessary; I have no problem
> moving the call to kvm_irqchip_commit_routes to the callers.  Perhaps
> you can have an API like this:
> 
> typedef struct KVMRouteChange {
>      KVMState *s;
>      int changes;
> } KVMRouteChange;
> 
> KVMRouteChange kvm_irqchip_begin_route_changes(KVMState *s)
> {
>      return (KVMRouteChange) { .s = s, .changes = 0 };
> }
> 
> void kvm_irqchip_commit_route_changes(KVMRouteChange *c)
> {
>      if (c->changes) {
>          kvm_irqchip_commit_routes(c->s);
>          c->changes = 0;
>     }
> }
> 
> int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
> {
>      KVMState *s = c->s;
>      ...
>      kvm_add_routing_entry(s, &kroute);
>      kvm_arch_add_msi_route_post(&kroute, vector, dev);
>      c->changes++;
> 
>      return virq;
> }
> 
> so it's harder for the callers to "forget" kvm_irqchip_commit_route_changes.
> 

Makes sense.

We currently have four route-adding functions; the first two commit
inside and the others do not:
 1. kvm_irqchip_add_adapter_route (commits inside)
 2. kvm_irqchip_add_msi_route (commits inside)
 3. kvm_irqchip_add_irq_route (commits outside)
 4. kvm_irqchip_add_hv_sint_route (commits outside)

How about just moving kvm_irqchip_commit_routes() out of
kvm_irqchip_add_msi_route() in this series and implementing the
solution you suggested in a separate series? I think we should also
apply that solution to the s390_adapter routing type and the update
paths.
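
In that case, for this series the vfio side would simply commit by
itself, roughly like this (a sketch only, error handling omitted):

    for (i = 0; i < vdev->nr_vectors; i++) {
        vdev->msi_vectors[i].virq =
            kvm_irqchip_add_msi_route(kvm_state, i, &vdev->pdev);
    }
    kvm_irqchip_commit_routes(kvm_state);   /* now committed by the caller */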


> Paolo
> 
> > ---
> >   accel/kvm/kvm-all.c  | 15 +++++++++++++--
> >   include/sysemu/kvm.h |  6 ++++++
> >   2 files changed, 19 insertions(+), 2 deletions(-)
> >
> > diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> > index db8d83b..8627f7c 100644
> > --- a/accel/kvm/kvm-all.c
> > +++ b/accel/kvm/kvm-all.c
> > @@ -1953,7 +1953,7 @@ int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
> >       return kvm_set_irq(s, route->kroute.gsi, 1);
> >   }
> >
> > -int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> > +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector, PCIDevice
> *dev)
> >   {
> >       struct kvm_irq_routing_entry kroute = {};
> >       int virq;
> > @@ -1996,7 +1996,18 @@ int kvm_irqchip_add_msi_route(KVMState *s, int vector,
> PCIDevice *dev)
> >
> >       kvm_add_routing_entry(s, &kroute);
> >       kvm_arch_add_msi_route_post(&kroute, vector, dev);
> > -    kvm_irqchip_commit_routes(s);
> > +
> > +    return virq;
> > +}
> > +
> > +int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev)
> > +{
> > +    int virq;
> > +
> > +    virq = kvm_irqchip_add_deferred_msi_route(s, vector, dev);
> > +    if (virq >= 0) {
> > +        kvm_irqchip_commit_routes(s);
> > +    }
> >
> >       return virq;
> >   }
> > diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
> > index a1ab1ee..8de0d9a 100644
> > --- a/include/sysemu/kvm.h
> > +++ b/include/sysemu/kvm.h
> > @@ -476,6 +476,12 @@ void kvm_init_cpu_signals(CPUState *cpu);
> >    * @return: virq (>=0) when success, errno (<0) when failed.
> >    */
> >   int kvm_irqchip_add_msi_route(KVMState *s, int vector, PCIDevice *dev);
> > +/**
> > + * Add MSI route for specific vector but does not commit to KVM
> > + * immediately
> > + */
> > +int kvm_irqchip_add_deferred_msi_route(KVMState *s, int vector,
> > +                                       PCIDevice *dev);
> >   int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
> >                                    PCIDevice *dev);
> >   void kvm_irqchip_commit_routes(KVMState *s);
> >


