* [PATCH V2 0/2] vhost-vDPA: vq notification map support
@ 2021-06-02  8:41 Jason Wang
  2021-06-02  8:41 ` [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping Jason Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Jason Wang @ 2021-06-02  8:41 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: si-wei.liu, elic, lingshan.zhu, Jason Wang

Hi All:

This series tries to implement doorbell mapping support for
vhost-vDPA. Tested with virtio-pci vDPA driver.

Please review.

Changes since V1:
- use dev->vq_index to calculate the virtqueue index
- remove the unused host_notifier_set

Jason Wang (2):
  vhost-vdpa: skip ram device from the IOTLB mapping
  vhost-vdpa: map virtqueue notification area if possible

 hw/virtio/vhost-vdpa.c         | 97 ++++++++++++++++++++++++++++++----
 include/hw/virtio/vhost-vdpa.h |  6 +++
 2 files changed, 93 insertions(+), 10 deletions(-)

-- 
2.25.1




* [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping
  2021-06-02  8:41 [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
@ 2021-06-02  8:41 ` Jason Wang
  2021-06-10 20:54   ` Si-Wei Liu
  2021-06-02  8:41 ` [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible Jason Wang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2021-06-02  8:41 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: si-wei.liu, elic, lingshan.zhu, Jason Wang

vDPA is not tied to any specific hardware; for safety and simplicity,
vhost-vDPA doesn't allow MMIO areas to be mapped via the IOTLB. Only
the doorbell can be mapped via mmap(). So this patch skips ram device
regions when building the IOTLB mapping.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)
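
Note (illustrative sketch, not part of the patch): the explicit
memory_region_is_ram_device() check is needed because a region created
with memory_region_init_ram_device_ptr() (e.g. a VFIO BAR, or the
doorbell page mapped in patch 2/2) still reports memory_region_is_ram()
as true, so the pre-existing "!ram && !iommu" test alone would not skip
it. The two helpers below only sketch the before/after behaviour of
vhost_vdpa_listener_skipped_section():

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* What the listener skipped before this patch. */
    static bool skipped_before(MemoryRegionSection *section)
    {
        return !memory_region_is_ram(section->mr) &&
               !memory_region_is_iommu(section->mr);
    }

    /*
     * What it skips after this patch: ram-device regions are backed by
     * host MMIO, and vhost-vDPA must not install them in the IOTLB.
     */
    static bool skipped_after(MemoryRegionSection *section)
    {
        return skipped_before(section) ||
               memory_region_is_ram_device(section->mr);
    }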

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 01d2101d09..dd4321bac2 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -27,6 +27,8 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
 {
     return (!memory_region_is_ram(section->mr) &&
             !memory_region_is_iommu(section->mr)) ||
+           /* vhost-vDPA doesn't allow MMIO to be mapped  */
+            memory_region_is_ram_device(section->mr) ||
            /*
             * Sizing an enabled 64-bit BAR can cause spurious mappings to
             * addresses in the upper part of the 64-bit address space.  These
@@ -171,22 +173,12 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                              vaddr, section->readonly);
     if (ret) {
         error_report("vhost vdpa map fail!");
-        if (memory_region_is_ram_device(section->mr)) {
-            /* Allow unexpected mappings not to be fatal for RAM devices */
-            error_report("map ram fail!");
-          return ;
-        }
         goto fail;
     }
 
     return;
 
 fail:
-    if (memory_region_is_ram_device(section->mr)) {
-        error_report("failed to vdpa_dma_map. pci p2p may not work");
-        return;
-
-    }
     /*
      * On the initfn path, store the first error in the container so we
      * can gracefully fail.  Runtime, there's not much we can do other
-- 
2.25.1




* [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible
  2021-06-02  8:41 [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
  2021-06-02  8:41 ` [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping Jason Wang
@ 2021-06-02  8:41 ` Jason Wang
  2021-06-10 20:53   ` Si-Wei Liu
  2021-06-10  2:30 ` [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
  2021-06-14 21:57 ` no-reply
  3 siblings, 1 reply; 7+ messages in thread
From: Jason Wang @ 2021-06-02  8:41 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: si-wei.liu, elic, lingshan.zhu, Jason Wang

This patch implements vq notification mapping support for
vhost-vDPA. This is simply done by calling mmap()/munmap() on the
vhost-vDPA fd during device start/stop. For devices without
notification mapping support, we gracefully fall back to
eventfd-based notification.

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
Changes since v1:
- use dev->vq_index to calculate the virtqueue index
- remove the unused host_notifier_set
---
 hw/virtio/vhost-vdpa.c         | 85 ++++++++++++++++++++++++++++++++++
 include/hw/virtio/vhost-vdpa.h |  6 +++
 2 files changed, 91 insertions(+)
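
Note (illustrative sketch, not part of the patch): the vhost-vDPA fd
exposes at most one host-page-sized doorbell area per virtqueue at file
offset vq_index * page_size, which is what vhost_vdpa_host_notifier_init()
below mmap()s. A rough userspace sketch of the same interaction (the
device node path is just an example) would be:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        int vq_index = 0;
        int fd = open("/dev/vhost-vdpa-0", O_RDWR); /* example path */

        if (fd < 0) {
            perror("open");
            return 1;
        }

        void *db = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED,
                        fd, (off_t)vq_index * page_size);
        if (db == MAP_FAILED) {
            /* No notification area for this vq: keep eventfd kicks. */
            perror("mmap");
        } else {
            /* virtio-pci style kick: store the vq index. */
            *(volatile uint16_t *)db = vq_index;
            munmap(db, page_size);
        }

        close(fd);
        return 0;
    }

If the parent vDPA driver provides no notification area, the mmap()
fails and QEMU keeps kicking the queue through the eventfd, which is
the graceful fallback mentioned above.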

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index dd4321bac2..f9a86afe64 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -285,12 +285,95 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
     return 0;
 }
 
+static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
+                                            int queue_index)
+{
+    size_t page_size = qemu_real_host_page_size;
+    struct vhost_vdpa *v = dev->opaque;
+    VirtIODevice *vdev = dev->vdev;
+    VhostVDPAHostNotifier *n;
+
+    n = &v->notifier[queue_index];
+
+    if (n->addr) {
+        virtio_queue_set_host_notifier_mr(vdev, queue_index, &n->mr, false);
+        object_unparent(OBJECT(&n->mr));
+        munmap(n->addr, page_size);
+        n->addr = NULL;
+    }
+}
+
+static void vhost_vdpa_host_notifiers_uninit(struct vhost_dev *dev, int n)
+{
+    int i;
+
+    for (i = 0; i < n; i++) {
+        vhost_vdpa_host_notifier_uninit(dev, i);
+    }
+}
+
+static int vhost_vdpa_host_notifier_init(struct vhost_dev *dev, int queue_index)
+{
+    size_t page_size = qemu_real_host_page_size;
+    struct vhost_vdpa *v = dev->opaque;
+    VirtIODevice *vdev = dev->vdev;
+    VhostVDPAHostNotifier *n;
+    int fd = v->device_fd;
+    void *addr;
+    char *name;
+
+    vhost_vdpa_host_notifier_uninit(dev, queue_index);
+
+    n = &v->notifier[queue_index];
+
+    addr = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd,
+                queue_index * page_size);
+    if (addr == MAP_FAILED) {
+        goto err;
+    }
+
+    name = g_strdup_printf("vhost-vdpa/host-notifier@%p mmaps[%d]",
+                           v, queue_index);
+    memory_region_init_ram_device_ptr(&n->mr, OBJECT(vdev), name,
+                                      page_size, addr);
+    g_free(name);
+
+    if (virtio_queue_set_host_notifier_mr(vdev, queue_index, &n->mr, true)) {
+        munmap(addr, page_size);
+        goto err;
+    }
+    n->addr = addr;
+
+    return 0;
+
+err:
+    return -1;
+}
+
+static void vhost_vdpa_host_notifiers_init(struct vhost_dev *dev)
+{
+    int i;
+
+    for (i = dev->vq_index; i < dev->vq_index + dev->nvqs; i++) {
+        if (vhost_vdpa_host_notifier_init(dev, i)) {
+            goto err;
+        }
+    }
+
+    return;
+
+err:
+    vhost_vdpa_host_notifiers_uninit(dev, i);
+    return;
+}
+
 static int vhost_vdpa_cleanup(struct vhost_dev *dev)
 {
     struct vhost_vdpa *v;
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
     v = dev->opaque;
     trace_vhost_vdpa_cleanup(dev, v);
+    vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
     memory_listener_unregister(&v->listener);
 
     dev->opaque = NULL;
@@ -467,6 +550,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
     if (started) {
         uint8_t status = 0;
         memory_listener_register(&v->listener, &address_space_memory);
+        vhost_vdpa_host_notifiers_init(dev);
         vhost_vdpa_set_vring_ready(dev);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
         vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
@@ -476,6 +560,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
         vhost_vdpa_reset_device(dev);
         vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                    VIRTIO_CONFIG_S_DRIVER);
+        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
         memory_listener_unregister(&v->listener);
 
         return 0;
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 9b81a409da..56bef30ec2 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -14,11 +14,17 @@
 
 #include "hw/virtio/virtio.h"
 
+typedef struct VhostVDPAHostNotifier {
+    MemoryRegion mr;
+    void *addr;
+} VhostVDPAHostNotifier;
+
 typedef struct vhost_vdpa {
     int device_fd;
     uint32_t msg_type;
     MemoryListener listener;
     struct vhost_dev *dev;
+    VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
 
 extern AddressSpace address_space_memory;
-- 
2.25.1




* Re: [PATCH V2 0/2] vhost-vDPA: vq notification map support
  2021-06-02  8:41 [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
  2021-06-02  8:41 ` [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping Jason Wang
  2021-06-02  8:41 ` [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible Jason Wang
@ 2021-06-10  2:30 ` Jason Wang
  2021-06-14 21:57 ` no-reply
  3 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2021-06-10  2:30 UTC (permalink / raw)
  To: mst, qemu-devel; +Cc: si-wei.liu, elic, lingshan.zhu


On 2021/6/2 4:41 PM, Jason Wang wrote:
> Hi All:
>
> This series tries to implement doorbell mapping support for
> vhost-vDPA. Tested with virtio-pci vDPA driver.
>
> Please review.
>
> Changes since V1:
> - use dev->vq_index to calculate the virtqueue index
> - remove the unused host_notifier_set
>
> Jason Wang (2):
>    vhost-vdpa: skip ram device from the IOTLB mapping
>    vhost-vdpa: map virtqueue notification area if possible
>
>   hw/virtio/vhost-vdpa.c         | 97 ++++++++++++++++++++++++++++++----
>   include/hw/virtio/vhost-vdpa.h |  6 +++
>   2 files changed, 93 insertions(+), 10 deletions(-)
>

If there are no objections, I will queue this series.

Thanks




* Re: [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible
  2021-06-02  8:41 ` [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible Jason Wang
@ 2021-06-10 20:53   ` Si-Wei Liu
  0 siblings, 0 replies; 7+ messages in thread
From: Si-Wei Liu @ 2021-06-10 20:53 UTC (permalink / raw)
  To: Jason Wang, mst, qemu-devel; +Cc: elic, lingshan.zhu


Looks good.

On 6/2/2021 1:41 AM, Jason Wang wrote:
> This patch implements vq notification mapping support for
> vhost-vDPA. This is simply done by calling mmap()/munmap() on the
> vhost-vDPA fd during device start/stop. For devices without
> notification mapping support, we gracefully fall back to
> eventfd-based notification.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>

> ---
> Changes since v1:
> - use dev->vq_index to calculate the virtqueue index
> - remove the unused host_notifier_set
> ---
>   hw/virtio/vhost-vdpa.c         | 85 ++++++++++++++++++++++++++++++++++
>   include/hw/virtio/vhost-vdpa.h |  6 +++
>   2 files changed, 91 insertions(+)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index dd4321bac2..f9a86afe64 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -285,12 +285,95 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
>       return 0;
>   }
>   
> +static void vhost_vdpa_host_notifier_uninit(struct vhost_dev *dev,
> +                                            int queue_index)
> +{
> +    size_t page_size = qemu_real_host_page_size;
> +    struct vhost_vdpa *v = dev->opaque;
> +    VirtIODevice *vdev = dev->vdev;
> +    VhostVDPAHostNotifier *n;
> +
> +    n = &v->notifier[queue_index];
> +
> +    if (n->addr) {
> +        virtio_queue_set_host_notifier_mr(vdev, queue_index, &n->mr, false);
> +        object_unparent(OBJECT(&n->mr));
> +        munmap(n->addr, page_size);
> +        n->addr = NULL;
> +    }
> +}
> +
> +static void vhost_vdpa_host_notifiers_uninit(struct vhost_dev *dev, int n)
> +{
> +    int i;
> +
> +    for (i = 0; i < n; i++) {
> +        vhost_vdpa_host_notifier_uninit(dev, i);
> +    }
> +}
> +
> +static int vhost_vdpa_host_notifier_init(struct vhost_dev *dev, int queue_index)
> +{
> +    size_t page_size = qemu_real_host_page_size;
> +    struct vhost_vdpa *v = dev->opaque;
> +    VirtIODevice *vdev = dev->vdev;
> +    VhostVDPAHostNotifier *n;
> +    int fd = v->device_fd;
> +    void *addr;
> +    char *name;
> +
> +    vhost_vdpa_host_notifier_uninit(dev, queue_index);
> +
> +    n = &v->notifier[queue_index];
> +
> +    addr = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd,
> +                queue_index * page_size);
> +    if (addr == MAP_FAILED) {
> +        goto err;
> +    }
> +
> +    name = g_strdup_printf("vhost-vdpa/host-notifier@%p mmaps[%d]",
> +                           v, queue_index);
> +    memory_region_init_ram_device_ptr(&n->mr, OBJECT(vdev), name,
> +                                      page_size, addr);
> +    g_free(name);
> +
> +    if (virtio_queue_set_host_notifier_mr(vdev, queue_index, &n->mr, true)) {
> +        munmap(addr, page_size);
> +        goto err;
> +    }
> +    n->addr = addr;
> +
> +    return 0;
> +
> +err:
> +    return -1;
> +}
> +
> +static void vhost_vdpa_host_notifiers_init(struct vhost_dev *dev)
> +{
> +    int i;
> +
> +    for (i = dev->vq_index; i < dev->vq_index + dev->nvqs; i++) {
> +        if (vhost_vdpa_host_notifier_init(dev, i)) {
> +            goto err;
> +        }
> +    }
> +
> +    return;
> +
> +err:
> +    vhost_vdpa_host_notifiers_uninit(dev, i);
> +    return;
> +}
> +
>   static int vhost_vdpa_cleanup(struct vhost_dev *dev)
>   {
>       struct vhost_vdpa *v;
>       assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
>       v = dev->opaque;
>       trace_vhost_vdpa_cleanup(dev, v);
> +    vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
>       memory_listener_unregister(&v->listener);
>   
>       dev->opaque = NULL;
> @@ -467,6 +550,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
>       if (started) {
>           uint8_t status = 0;
>           memory_listener_register(&v->listener, &address_space_memory);
> +        vhost_vdpa_host_notifiers_init(dev);
>           vhost_vdpa_set_vring_ready(dev);
>           vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
>           vhost_vdpa_call(dev, VHOST_VDPA_GET_STATUS, &status);
> @@ -476,6 +560,7 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
>           vhost_vdpa_reset_device(dev);
>           vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
>                                      VIRTIO_CONFIG_S_DRIVER);
> +        vhost_vdpa_host_notifiers_uninit(dev, dev->nvqs);
>           memory_listener_unregister(&v->listener);
>   
>           return 0;
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index 9b81a409da..56bef30ec2 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -14,11 +14,17 @@
>   
>   #include "hw/virtio/virtio.h"
>   
> +typedef struct VhostVDPAHostNotifier {
> +    MemoryRegion mr;
> +    void *addr;
> +} VhostVDPAHostNotifier;
> +
>   typedef struct vhost_vdpa {
>       int device_fd;
>       uint32_t msg_type;
>       MemoryListener listener;
>       struct vhost_dev *dev;
> +    VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
>   } VhostVDPA;
>   
>   extern AddressSpace address_space_memory;




* Re: [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping
  2021-06-02  8:41 ` [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping Jason Wang
@ 2021-06-10 20:54   ` Si-Wei Liu
  0 siblings, 0 replies; 7+ messages in thread
From: Si-Wei Liu @ 2021-06-10 20:54 UTC (permalink / raw)
  To: Jason Wang, mst, qemu-devel; +Cc: elic, lingshan.zhu



On 6/2/2021 1:41 AM, Jason Wang wrote:
> vDPA is not tied to any specific hardware; for safety and simplicity,
> vhost-vDPA doesn't allow MMIO areas to be mapped via the IOTLB. Only
> the doorbell can be mapped via mmap(). So this patch skips ram device
> regions when building the IOTLB mapping.
>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
> ---
>   hw/virtio/vhost-vdpa.c | 12 ++----------
>   1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 01d2101d09..dd4321bac2 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -27,6 +27,8 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
>   {
>       return (!memory_region_is_ram(section->mr) &&
>               !memory_region_is_iommu(section->mr)) ||
> +           /* vhost-vDPA doesn't allow MMIO to be mapped  */
> +            memory_region_is_ram_device(section->mr) ||
>              /*
>               * Sizing an enabled 64-bit BAR can cause spurious mappings to
>               * addresses in the upper part of the 64-bit address space.  These
> @@ -171,22 +173,12 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>                                vaddr, section->readonly);
>       if (ret) {
>           error_report("vhost vdpa map fail!");
> -        if (memory_region_is_ram_device(section->mr)) {
> -            /* Allow unexpected mappings not to be fatal for RAM devices */
> -            error_report("map ram fail!");
> -          return ;
> -        }
>           goto fail;
>       }
>   
>       return;
>   
>   fail:
> -    if (memory_region_is_ram_device(section->mr)) {
> -        error_report("failed to vdpa_dma_map. pci p2p may not work");
> -        return;
> -
> -    }
>       /*
>        * On the initfn path, store the first error in the container so we
>        * can gracefully fail.  Runtime, there's not much we can do other




* Re: [PATCH V2 0/2] vhost-vDPA: vq notification map support
  2021-06-02  8:41 [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
                   ` (2 preceding siblings ...)
  2021-06-10  2:30 ` [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
@ 2021-06-14 21:57 ` no-reply
  3 siblings, 0 replies; 7+ messages in thread
From: no-reply @ 2021-06-14 21:57 UTC (permalink / raw)
  To: jasowang; +Cc: mst, jasowang, qemu-devel, si-wei.liu, elic, lingshan.zhu

Patchew URL: https://patchew.org/QEMU/20210602084106.43186-1-jasowang@redhat.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210602084106.43186-1-jasowang@redhat.com
Subject: [PATCH V2 0/2] vhost-vDPA: vq notification map support

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
   a4716fd..1ea06ab  master     -> master
 - [tag update]      patchew/20210505103702.521457-1-berrange@redhat.com -> patchew/20210505103702.521457-1-berrange@redhat.com
 - [tag update]      patchew/20210510114328.21835-1-david@redhat.com -> patchew/20210510114328.21835-1-david@redhat.com
 - [tag update]      patchew/20210526170432.343588-1-philmd@redhat.com -> patchew/20210526170432.343588-1-philmd@redhat.com
 - [tag update]      patchew/20210601141805.206582-1-peterx@redhat.com -> patchew/20210601141805.206582-1-peterx@redhat.com
 * [new tag]         patchew/20210602084106.43186-1-jasowang@redhat.com -> patchew/20210602084106.43186-1-jasowang@redhat.com
 - [tag update]      patchew/20210602191125.525742-1-josemartins90@gmail.com -> patchew/20210602191125.525742-1-josemartins90@gmail.com
 - [tag update]      patchew/20210603171259.27962-1-peter.maydell@linaro.org -> patchew/20210603171259.27962-1-peter.maydell@linaro.org
Switched to a new branch 'test'

=== OUTPUT BEGIN ===
checkpatch.pl: no revisions returned for revlist 'base..'
=== OUTPUT END ===

Test command exited with code: 255


The full log is available at
http://patchew.org/logs/20210602084106.43186-1-jasowang@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


end of thread, other threads:[~2021-06-14 21:59 UTC | newest]

Thread overview: 7+ messages
-- links below jump to the message on this page --
2021-06-02  8:41 [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
2021-06-02  8:41 ` [PATCH V2 1/2] vhost-vdpa: skip ram device from the IOTLB mapping Jason Wang
2021-06-10 20:54   ` Si-Wei Liu
2021-06-02  8:41 ` [PATCH V2 2/2] vhost-vdpa: map virtqueue notification area if possible Jason Wang
2021-06-10 20:53   ` Si-Wei Liu
2021-06-10  2:30 ` [PATCH V2 0/2] vhost-vDPA: vq notification map support Jason Wang
2021-06-14 21:57 ` no-reply
