* [PATCH v2 0/3] vdpa: Check iova range on memory regions ops
@ 2021-10-05 13:48 Eugenio Pérez
  2021-10-05 13:48 ` [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings Eugenio Pérez
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Eugenio Pérez @ 2021-10-05 13:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, virtualization,
	Stefan Hajnoczi, Eli Cohen, Stefano Garzarella

At this moment vdpa will not send memory regions bigger than 1<<63.
However, the actual iova range supported by the device can be far more
restrictive than that.

Since we can obtain the range through a vdpa ioctl call, save it at
initialization time and check every mapping change against it.

Changes from v1:
* Use of int128_gt instead of a plain uint64_t < comparison on the
  memory range end (see the sketch below).
* Document vhost_vdpa_section_end's return value so it's clear that
  it returns "one past end".
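
A tiny standalone illustration of the wrap-around that motivates the
int128_gt change (not part of the series; it uses the compiler's
__uint128_t in place of QEMU's Int128, and the addresses are made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A section covering the top page of the 64-bit address space. */
    uint64_t start = 0xfffffffffffff000ULL;
    uint64_t size  = 0x1000;
    uint64_t iova_max = 0xffffffffffffffffULL;  /* inclusive device limit */

    uint64_t end64 = start + size;              /* "one past end" wraps to 0 */
    __uint128_t end128 = (__uint128_t)start + size;

    /* The wrapped 64-bit end compares as smaller than everything, so a
     * plain uint64_t check against iova_max gives the wrong answer. */
    printf("end64  = 0x%016llx\n", (unsigned long long)end64);

    /* 128-bit arithmetic keeps 2^64 representable, so a comparison like
     * the patch's int128_gt() stays meaningful. */
    printf("end128 > iova_max: %s\n",
           end128 > (__uint128_t)iova_max ? "true" : "false");
    return 0;
}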

Eugenio Pérez (3):
  vdpa: Skip protected ram IOMMU mappings
  vdpa: Add vhost_vdpa_section_end
  vdpa: Check for iova range at mappings changes

 include/hw/virtio/vhost-vdpa.h |  2 +
 hw/virtio/vhost-vdpa.c         | 87 ++++++++++++++++++++++++++--------
 hw/virtio/trace-events         |  1 +
 3 files changed, 69 insertions(+), 21 deletions(-)

-- 
2.27.0





* [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings
  2021-10-05 13:48 [PATCH v2 0/3] vdpa: Check iova range on memory regions ops Eugenio Pérez
@ 2021-10-05 13:48 ` Eugenio Pérez
  2021-10-12  3:45     ` Jason Wang
  2021-10-05 13:48 ` [PATCH v2 2/3] vdpa: Add vhost_vdpa_section_end Eugenio Pérez
  2021-10-05 13:48 ` [PATCH v2 3/3] vdpa: Check for iova range at mappings changes Eugenio Pérez
  2 siblings, 1 reply; 13+ messages in thread
From: Eugenio Pérez @ 2021-10-05 13:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, virtualization,
	Stefan Hajnoczi, Eli Cohen, Stefano Garzarella

Following the logic of commit 56918a126ae ("memory: Add RAM_PROTECTED
flag to skip IOMMU mappings") with VFIO, skip memory sections
inaccessible via normal mechanisms, including DMA.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 47d7a5a23d..ea1aa71ad8 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -28,6 +28,7 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
 {
     return (!memory_region_is_ram(section->mr) &&
             !memory_region_is_iommu(section->mr)) ||
+            memory_region_is_protected(section->mr) ||
            /* vhost-vDPA doesn't allow MMIO to be mapped  */
             memory_region_is_ram_device(section->mr) ||
            /*
-- 
2.27.0




* [PATCH v2 2/3] vdpa: Add vhost_vdpa_section_end
  2021-10-05 13:48 [PATCH v2 0/3] vdpa: Check iova range on memory regions ops Eugenio Pérez
  2021-10-05 13:48 ` [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings Eugenio Pérez
@ 2021-10-05 13:48 ` Eugenio Pérez
  2021-10-12  3:46     ` Jason Wang
  2021-10-05 13:48 ` [PATCH v2 3/3] vdpa: Check for iova range at mappings changes Eugenio Pérez
  2 siblings, 1 reply; 13+ messages in thread
From: Eugenio Pérez @ 2021-10-05 13:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, virtualization,
	Stefan Hajnoczi, Eli Cohen, Stefano Garzarella

Abstract this operation, which will be reused when validating the region
against the iova range that the device supports.
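
A self-contained illustration of the "one past end" convention this
helper documents, with plain 64-bit integers standing in for Int128 and
a hard-coded 4 KiB page (numbers are made up, not part of the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Section [start, start + size) in guest physical address space. */
    uint64_t start = 0x1234;
    uint64_t size  = 0x3000;
    uint64_t page_mask = ~0xfffULL;

    uint64_t iova  = (start + 0xfff) & page_mask;   /* aligned-up start  */
    uint64_t llend = (start + size) & page_mask;    /* one past the end  */

    if (iova >= llend) {
        puts("nothing mappable once the start is aligned up");
        return 0;
    }
    /* Exclusive end minus aligned start gives the number of bytes to map. */
    printf("map %llu bytes at iova 0x%llx\n",
           (unsigned long long)(llend - iova), (unsigned long long)iova);
    return 0;
}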

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index ea1aa71ad8..be7c63b4ba 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -24,6 +24,19 @@
 #include "trace.h"
 #include "qemu-common.h"
 
+/*
+ * Return one past the end of the section. Be careful with uint64_t
+ * conversions!
+ */
+static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
+{
+    Int128 llend = int128_make64(section->offset_within_address_space);
+    llend = int128_add(llend, section->size);
+    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+
+    return llend;
+}
+
 static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
 {
     return (!memory_region_is_ram(section->mr) &&
@@ -160,10 +173,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     }
 
     iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
-    llend = int128_make64(section->offset_within_address_space);
-    llend = int128_add(llend, section->size);
-    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
-
+    llend = vhost_vdpa_section_end(section);
     if (int128_ge(int128_make64(iova), llend)) {
         return;
     }
@@ -221,9 +231,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
     }
 
     iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
-    llend = int128_make64(section->offset_within_address_space);
-    llend = int128_add(llend, section->size);
-    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+    llend = vhost_vdpa_section_end(section);
 
     trace_vhost_vdpa_listener_region_del(v, iova, int128_get64(llend));
 
-- 
2.27.0




* [PATCH v2 3/3] vdpa: Check for iova range at mappings changes
  2021-10-05 13:48 [PATCH v2 0/3] vdpa: Check iova range on memory regions ops Eugenio Pérez
  2021-10-05 13:48 ` [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings Eugenio Pérez
  2021-10-05 13:48 ` [PATCH v2 2/3] vdpa: Add vhost_vdpa_section_end Eugenio Pérez
@ 2021-10-05 13:48 ` Eugenio Pérez
  2021-10-12  3:50     ` Jason Wang
  2 siblings, 1 reply; 13+ messages in thread
From: Eugenio Pérez @ 2021-10-05 13:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Parav Pandit, Michael S. Tsirkin, Jason Wang, virtualization,
	Stefan Hajnoczi, Eli Cohen, Stefano Garzarella

Check vdpa device range before updating memory regions so we don't add
any outside of it, and report the invalid change if any.
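
For reference, a minimal standalone sketch of the kernel interface this
patch consumes (not part of the patch; the device node name is just an
example and error handling is kept to a minimum):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

int main(void)
{
    struct vhost_vdpa_iova_range range;
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);   /* example node name */

    if (fd < 0 || ioctl(fd, VHOST_VDPA_GET_IOVA_RANGE, &range) < 0) {
        perror("VHOST_VDPA_GET_IOVA_RANGE");
        return 1;
    }

    /* "first" and "last" are both inclusive; mappings outside of this
     * window are what the listener below starts rejecting. */
    printf("usable iova range: [0x%llx, 0x%llx]\n",
           (unsigned long long)range.first, (unsigned long long)range.last);
    return 0;
}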

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  2 +
 hw/virtio/vhost-vdpa.c         | 68 ++++++++++++++++++++++++++--------
 hw/virtio/trace-events         |  1 +
 3 files changed, 55 insertions(+), 16 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index a8963da2d9..c288cf7ecb 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -13,6 +13,7 @@
 #define HW_VIRTIO_VHOST_VDPA_H
 
 #include "hw/virtio/virtio.h"
+#include "standard-headers/linux/vhost_types.h"
 
 typedef struct VhostVDPAHostNotifier {
     MemoryRegion mr;
@@ -24,6 +25,7 @@ typedef struct vhost_vdpa {
     uint32_t msg_type;
     bool iotlb_batch_begin_sent;
     MemoryListener listener;
+    struct vhost_vdpa_iova_range iova_range;
     struct vhost_dev *dev;
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
 } VhostVDPA;
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index be7c63b4ba..6654287050 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -37,20 +37,34 @@ static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
     return llend;
 }
 
-static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
-{
-    return (!memory_region_is_ram(section->mr) &&
-            !memory_region_is_iommu(section->mr)) ||
-            memory_region_is_protected(section->mr) ||
-           /* vhost-vDPA doesn't allow MMIO to be mapped  */
-            memory_region_is_ram_device(section->mr) ||
-           /*
-            * Sizing an enabled 64-bit BAR can cause spurious mappings to
-            * addresses in the upper part of the 64-bit address space.  These
-            * are never accessed by the CPU and beyond the address width of
-            * some IOMMU hardware.  TODO: VDPA should tell us the IOMMU width.
-            */
-           section->offset_within_address_space & (1ULL << 63);
+static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
+                                                uint64_t iova_min,
+                                                uint64_t iova_max)
+{
+    Int128 llend;
+
+    if ((!memory_region_is_ram(section->mr) &&
+         !memory_region_is_iommu(section->mr)) ||
+        memory_region_is_protected(section->mr) ||
+        /* vhost-vDPA doesn't allow MMIO to be mapped  */
+        memory_region_is_ram_device(section->mr)) {
+        return true;
+    }
+
+    if (section->offset_within_address_space < iova_min) {
+        error_report("RAM section out of device range (min=%lu, addr=%lu)",
+                     iova_min, section->offset_within_address_space);
+        return true;
+    }
+
+    llend = vhost_vdpa_section_end(section);
+    if (int128_gt(llend, int128_make64(iova_max))) {
+        error_report("RAM section out of device range (max=%lu, end addr=%lu)",
+                     iova_max, int128_get64(llend));
+        return true;
+    }
+
+    return false;
 }
 
 static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
@@ -162,7 +176,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
     void *vaddr;
     int ret;
 
-    if (vhost_vdpa_listener_skipped_section(section)) {
+    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
+                                            v->iova_range.last)) {
         return;
     }
 
@@ -220,7 +235,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
     Int128 llend, llsize;
     int ret;
 
-    if (vhost_vdpa_listener_skipped_section(section)) {
+    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
+                                            v->iova_range.last)) {
         return;
     }
 
@@ -288,9 +304,24 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
     vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
 }
 
+static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
+{
+    int ret;
+
+    ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE, &v->iova_range);
+    if (ret != 0) {
+        return ret;
+    }
+
+    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
+                                    v->iova_range.last);
+    return ret;
+}
+
 static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
 {
     struct vhost_vdpa *v;
+    int r;
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
     trace_vhost_vdpa_init(dev, opaque);
 
@@ -300,6 +331,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
     v->listener = vhost_vdpa_memory_listener;
     v->msg_type = VHOST_IOTLB_MSG_V2;
 
+    r = vhost_vdpa_get_iova_range(v);
+    if (unlikely(!r)) {
+        return r;
+    }
+
     vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
                                VIRTIO_CONFIG_S_DRIVER);
 
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 8ed19e9d0c..650e521e35 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -52,6 +52,7 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
 vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
 vhost_vdpa_set_owner(void *dev) "dev: %p"
 vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
+vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
 
 # virtio.c
 virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
-- 
2.27.0




* Re: [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings
  2021-10-05 13:48 ` [PATCH v2 1/3] vdpa: Skip protected ram IOMMU mappings Eugenio Pérez
@ 2021-10-12  3:45     ` Jason Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2021-10-12  3:45 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Parav Pandit, Michael S. Tsirkin, qemu-devel, virtualization,
	Stefan Hajnoczi, Eli Cohen

On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Following the logic of commit 56918a126ae ("memory: Add RAM_PROTECTED
> flag to skip IOMMU mappings") with VFIO, skip memory sections
> inaccessible via normal mechanisms, including DMA.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  hw/virtio/vhost-vdpa.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index 47d7a5a23d..ea1aa71ad8 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -28,6 +28,7 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
>  {
>      return (!memory_region_is_ram(section->mr) &&
>              !memory_region_is_iommu(section->mr)) ||
> +            memory_region_is_protected(section->mr) ||
>             /* vhost-vDPA doesn't allow MMIO to be mapped  */
>              memory_region_is_ram_device(section->mr) ||
>             /*
> --
> 2.27.0
>


* Re: [PATCH v2 2/3] vdpa: Add vhost_vdpa_section_end
  2021-10-05 13:48 ` [PATCH v2 2/3] vdpa: Add vhost_vdpa_section_end Eugenio Pérez
@ 2021-10-12  3:46     ` Jason Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2021-10-12  3:46 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Parav Pandit, Michael S. Tsirkin, qemu-devel, virtualization,
	Stefan Hajnoczi, Eli Cohen

On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Abstract this operation, which will be reused when validating the region
> against the iova range that the device supports.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  hw/virtio/vhost-vdpa.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index ea1aa71ad8..be7c63b4ba 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -24,6 +24,19 @@
>  #include "trace.h"
>  #include "qemu-common.h"
>
> +/*
> + * Return one past the end of the section. Be careful with uint64_t
> + * conversions!
> + */
> +static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
> +{
> +    Int128 llend = int128_make64(section->offset_within_address_space);
> +    llend = int128_add(llend, section->size);
> +    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> +
> +    return llend;
> +}
> +
>  static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
>  {
>      return (!memory_region_is_ram(section->mr) &&
> @@ -160,10 +173,7 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>      }
>
>      iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> -    llend = int128_make64(section->offset_within_address_space);
> -    llend = int128_add(llend, section->size);
> -    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> -
> +    llend = vhost_vdpa_section_end(section);
>      if (int128_ge(int128_make64(iova), llend)) {
>          return;
>      }
> @@ -221,9 +231,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>      }
>
>      iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> -    llend = int128_make64(section->offset_within_address_space);
> -    llend = int128_add(llend, section->size);
> -    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> +    llend = vhost_vdpa_section_end(section);
>
>      trace_vhost_vdpa_listener_region_del(v, iova, int128_get64(llend));
>
> --
> 2.27.0
>


* Re: [PATCH v2 3/3] vdpa: Check for iova range at mappings changes
  2021-10-05 13:48 ` [PATCH v2 3/3] vdpa: Check for iova range at mappings changes Eugenio Pérez
@ 2021-10-12  3:50     ` Jason Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2021-10-12  3:50 UTC (permalink / raw)
  To: Eugenio Pérez
  Cc: Parav Pandit, Michael S. Tsirkin, qemu-devel, virtualization,
	Stefan Hajnoczi, Eli Cohen

On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Check vdpa device range before updating memory regions so we don't add
> any outside of it, and report the invalid change if any.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  include/hw/virtio/vhost-vdpa.h |  2 +
>  hw/virtio/vhost-vdpa.c         | 68 ++++++++++++++++++++++++++--------
>  hw/virtio/trace-events         |  1 +
>  3 files changed, 55 insertions(+), 16 deletions(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index a8963da2d9..c288cf7ecb 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -13,6 +13,7 @@
>  #define HW_VIRTIO_VHOST_VDPA_H
>
>  #include "hw/virtio/virtio.h"
> +#include "standard-headers/linux/vhost_types.h"
>
>  typedef struct VhostVDPAHostNotifier {
>      MemoryRegion mr;
> @@ -24,6 +25,7 @@ typedef struct vhost_vdpa {
>      uint32_t msg_type;
>      bool iotlb_batch_begin_sent;
>      MemoryListener listener;
> +    struct vhost_vdpa_iova_range iova_range;
>      struct vhost_dev *dev;
>      VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
>  } VhostVDPA;
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index be7c63b4ba..6654287050 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -37,20 +37,34 @@ static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
>      return llend;
>  }
>
> -static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
> -{
> -    return (!memory_region_is_ram(section->mr) &&
> -            !memory_region_is_iommu(section->mr)) ||
> -            memory_region_is_protected(section->mr) ||
> -           /* vhost-vDPA doesn't allow MMIO to be mapped  */
> -            memory_region_is_ram_device(section->mr) ||
> -           /*
> -            * Sizing an enabled 64-bit BAR can cause spurious mappings to
> -            * addresses in the upper part of the 64-bit address space.  These
> -            * are never accessed by the CPU and beyond the address width of
> -            * some IOMMU hardware.  TODO: VDPA should tell us the IOMMU width.
> -            */
> -           section->offset_within_address_space & (1ULL << 63);
> +static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
> +                                                uint64_t iova_min,
> +                                                uint64_t iova_max)
> +{
> +    Int128 llend;
> +
> +    if ((!memory_region_is_ram(section->mr) &&
> +         !memory_region_is_iommu(section->mr)) ||
> +        memory_region_is_protected(section->mr) ||
> +        /* vhost-vDPA doesn't allow MMIO to be mapped  */
> +        memory_region_is_ram_device(section->mr)) {
> +        return true;
> +    }
> +
> +    if (section->offset_within_address_space < iova_min) {
> +        error_report("RAM section out of device range (min=%lu, addr=%lu)",
> +                     iova_min, section->offset_within_address_space);
> +        return true;
> +    }
> +
> +    llend = vhost_vdpa_section_end(section);
> +    if (int128_gt(llend, int128_make64(iova_max))) {
> +        error_report("RAM section out of device range (max=%lu, end addr=%lu)",
> +                     iova_max, int128_get64(llend));
> +        return true;
> +    }
> +
> +    return false;
>  }
>
>  static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> @@ -162,7 +176,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
>      void *vaddr;
>      int ret;
>
> -    if (vhost_vdpa_listener_skipped_section(section)) {
> +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> +                                            v->iova_range.last)) {
>          return;
>      }
>
> @@ -220,7 +235,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>      Int128 llend, llsize;
>      int ret;
>
> -    if (vhost_vdpa_listener_skipped_section(section)) {
> +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> +                                            v->iova_range.last)) {
>          return;
>      }
>
> @@ -288,9 +304,24 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
>      vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
>  }
>
> +static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> +{
> +    int ret;
> +
> +    ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE, &v->iova_range);
> +    if (ret != 0) {
> +        return ret;
> +    }

I think we need a fallback for the kernel that does not support
VHOST_VDPA_GET_IOVA_RANGE?

Thanks

> +
> +    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> +                                    v->iova_range.last);
> +    return ret;
> +}
> +
>  static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
>  {
>      struct vhost_vdpa *v;
> +    int r;
>      assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
>      trace_vhost_vdpa_init(dev, opaque);
>
> @@ -300,6 +331,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
>      v->listener = vhost_vdpa_memory_listener;
>      v->msg_type = VHOST_IOTLB_MSG_V2;
>
> +    r = vhost_vdpa_get_iova_range(v);
> +    if (unlikely(!r)) {
> +        return r;
> +    }
> +
>      vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
>                                 VIRTIO_CONFIG_S_DRIVER);
>
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 8ed19e9d0c..650e521e35 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -52,6 +52,7 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
>  vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
>  vhost_vdpa_set_owner(void *dev) "dev: %p"
>  vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> +vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
>
>  # virtio.c
>  virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> --
> 2.27.0
>


* Re: [PATCH v2 3/3] vdpa: Check for iova range at mappings changes
  2021-10-12  3:50     ` Jason Wang
@ 2021-10-12  6:27     ` Eugenio Perez Martin
  2021-10-12  6:44         ` Jason Wang
  -1 siblings, 1 reply; 13+ messages in thread
From: Eugenio Perez Martin @ 2021-10-12  6:27 UTC (permalink / raw)
  To: Jason Wang
  Cc: Parav Pandit, Michael S. Tsirkin, qemu-devel, virtualization,
	Stefan Hajnoczi, Eli Cohen, Stefano Garzarella

On Tue, Oct 12, 2021 at 5:50 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Check vdpa device range before updating memory regions so we don't add
> > any outside of it, and report the invalid change if any.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  include/hw/virtio/vhost-vdpa.h |  2 +
> >  hw/virtio/vhost-vdpa.c         | 68 ++++++++++++++++++++++++++--------
> >  hw/virtio/trace-events         |  1 +
> >  3 files changed, 55 insertions(+), 16 deletions(-)
> >
> > diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> > index a8963da2d9..c288cf7ecb 100644
> > --- a/include/hw/virtio/vhost-vdpa.h
> > +++ b/include/hw/virtio/vhost-vdpa.h
> > @@ -13,6 +13,7 @@
> >  #define HW_VIRTIO_VHOST_VDPA_H
> >
> >  #include "hw/virtio/virtio.h"
> > +#include "standard-headers/linux/vhost_types.h"
> >
> >  typedef struct VhostVDPAHostNotifier {
> >      MemoryRegion mr;
> > @@ -24,6 +25,7 @@ typedef struct vhost_vdpa {
> >      uint32_t msg_type;
> >      bool iotlb_batch_begin_sent;
> >      MemoryListener listener;
> > +    struct vhost_vdpa_iova_range iova_range;
> >      struct vhost_dev *dev;
> >      VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
> >  } VhostVDPA;
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index be7c63b4ba..6654287050 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -37,20 +37,34 @@ static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
> >      return llend;
> >  }
> >
> > -static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
> > -{
> > -    return (!memory_region_is_ram(section->mr) &&
> > -            !memory_region_is_iommu(section->mr)) ||
> > -            memory_region_is_protected(section->mr) ||
> > -           /* vhost-vDPA doesn't allow MMIO to be mapped  */
> > -            memory_region_is_ram_device(section->mr) ||
> > -           /*
> > -            * Sizing an enabled 64-bit BAR can cause spurious mappings to
> > -            * addresses in the upper part of the 64-bit address space.  These
> > -            * are never accessed by the CPU and beyond the address width of
> > -            * some IOMMU hardware.  TODO: VDPA should tell us the IOMMU width.
> > -            */
> > -           section->offset_within_address_space & (1ULL << 63);
> > +static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
> > +                                                uint64_t iova_min,
> > +                                                uint64_t iova_max)
> > +{
> > +    Int128 llend;
> > +
> > +    if ((!memory_region_is_ram(section->mr) &&
> > +         !memory_region_is_iommu(section->mr)) ||
> > +        memory_region_is_protected(section->mr) ||
> > +        /* vhost-vDPA doesn't allow MMIO to be mapped  */
> > +        memory_region_is_ram_device(section->mr)) {
> > +        return true;
> > +    }
> > +
> > +    if (section->offset_within_address_space < iova_min) {
> > +        error_report("RAM section out of device range (min=%lu, addr=%lu)",
> > +                     iova_min, section->offset_within_address_space);
> > +        return true;
> > +    }
> > +
> > +    llend = vhost_vdpa_section_end(section);
> > +    if (int128_gt(llend, int128_make64(iova_max))) {
> > +        error_report("RAM section out of device range (max=%lu, end addr=%lu)",
> > +                     iova_max, int128_get64(llend));
> > +        return true;
> > +    }
> > +
> > +    return false;
> >  }
> >
> >  static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> > @@ -162,7 +176,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
> >      void *vaddr;
> >      int ret;
> >
> > -    if (vhost_vdpa_listener_skipped_section(section)) {
> > +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > +                                            v->iova_range.last)) {
> >          return;
> >      }
> >
> > @@ -220,7 +235,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
> >      Int128 llend, llsize;
> >      int ret;
> >
> > -    if (vhost_vdpa_listener_skipped_section(section)) {
> > +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > +                                            v->iova_range.last)) {
> >          return;
> >      }
> >
> > @@ -288,9 +304,24 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> >      vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
> >  }
> >
> > +static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > +{
> > +    int ret;
> > +
> > +    ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE, &v->iova_range);
> > +    if (ret != 0) {
> > +        return ret;
> > +    }
>
> I think we need a fallback for the kernel that does not support
> VHOST_VDPA_GET_IOVA_RANGE?
>

I'm fine with giving a default, but only "old" kernels will not have
the syscall. Future kernels will return success and [0, ULLONG_MAX]
range, won't it?

Taking that into account, what is the good default value? Before this
commit the valid range was [0, 2^63). Although that seems too wide for
every iommu, I would go with that one. The kernel considers fine [0,
ULLONG_MAX] in case the device and iommu domain does not support
them...

Thanks!

> Thanks
>
> > +
> > +    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > +                                    v->iova_range.last);
> > +    return ret;
> > +}
> > +
> >  static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> >  {
> >      struct vhost_vdpa *v;
> > +    int r;
> >      assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
> >      trace_vhost_vdpa_init(dev, opaque);
> >
> > @@ -300,6 +331,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> >      v->listener = vhost_vdpa_memory_listener;
> >      v->msg_type = VHOST_IOTLB_MSG_V2;
> >
> > +    r = vhost_vdpa_get_iova_range(v);
> > +    if (unlikely(!r)) {
> > +        return r;
> > +    }
> > +
> >      vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
> >                                 VIRTIO_CONFIG_S_DRIVER);
> >
> > diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> > index 8ed19e9d0c..650e521e35 100644
> > --- a/hw/virtio/trace-events
> > +++ b/hw/virtio/trace-events
> > @@ -52,6 +52,7 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
> >  vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
> >  vhost_vdpa_set_owner(void *dev) "dev: %p"
> >  vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> > +vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
> >
> >  # virtio.c
> >  virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> > --
> > 2.27.0
> >
>




* Re: [PATCH v2 3/3] vdpa: Check for iova range at mappings changes
  2021-10-12  6:27     ` Eugenio Perez Martin
@ 2021-10-12  6:44         ` Jason Wang
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Wang @ 2021-10-12  6:44 UTC (permalink / raw)
  To: Eugenio Perez Martin
  Cc: Parav Pandit, Michael S. Tsirkin, qemu-devel, virtualization,
	Stefan Hajnoczi, Eli Cohen

On Tue, Oct 12, 2021 at 2:28 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Tue, Oct 12, 2021 at 5:50 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > Check vdpa device range before updating memory regions so we don't add
> > > any outside of it, and report the invalid change if any.
> > >
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  include/hw/virtio/vhost-vdpa.h |  2 +
> > >  hw/virtio/vhost-vdpa.c         | 68 ++++++++++++++++++++++++++--------
> > >  hw/virtio/trace-events         |  1 +
> > >  3 files changed, 55 insertions(+), 16 deletions(-)
> > >
> > > diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> > > index a8963da2d9..c288cf7ecb 100644
> > > --- a/include/hw/virtio/vhost-vdpa.h
> > > +++ b/include/hw/virtio/vhost-vdpa.h
> > > @@ -13,6 +13,7 @@
> > >  #define HW_VIRTIO_VHOST_VDPA_H
> > >
> > >  #include "hw/virtio/virtio.h"
> > > +#include "standard-headers/linux/vhost_types.h"
> > >
> > >  typedef struct VhostVDPAHostNotifier {
> > >      MemoryRegion mr;
> > > @@ -24,6 +25,7 @@ typedef struct vhost_vdpa {
> > >      uint32_t msg_type;
> > >      bool iotlb_batch_begin_sent;
> > >      MemoryListener listener;
> > > +    struct vhost_vdpa_iova_range iova_range;
> > >      struct vhost_dev *dev;
> > >      VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
> > >  } VhostVDPA;
> > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > index be7c63b4ba..6654287050 100644
> > > --- a/hw/virtio/vhost-vdpa.c
> > > +++ b/hw/virtio/vhost-vdpa.c
> > > @@ -37,20 +37,34 @@ static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
> > >      return llend;
> > >  }
> > >
> > > -static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
> > > -{
> > > -    return (!memory_region_is_ram(section->mr) &&
> > > -            !memory_region_is_iommu(section->mr)) ||
> > > -            memory_region_is_protected(section->mr) ||
> > > -           /* vhost-vDPA doesn't allow MMIO to be mapped  */
> > > -            memory_region_is_ram_device(section->mr) ||
> > > -           /*
> > > -            * Sizing an enabled 64-bit BAR can cause spurious mappings to
> > > -            * addresses in the upper part of the 64-bit address space.  These
> > > -            * are never accessed by the CPU and beyond the address width of
> > > -            * some IOMMU hardware.  TODO: VDPA should tell us the IOMMU width.
> > > -            */
> > > -           section->offset_within_address_space & (1ULL << 63);
> > > +static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
> > > +                                                uint64_t iova_min,
> > > +                                                uint64_t iova_max)
> > > +{
> > > +    Int128 llend;
> > > +
> > > +    if ((!memory_region_is_ram(section->mr) &&
> > > +         !memory_region_is_iommu(section->mr)) ||
> > > +        memory_region_is_protected(section->mr) ||
> > > +        /* vhost-vDPA doesn't allow MMIO to be mapped  */
> > > +        memory_region_is_ram_device(section->mr)) {
> > > +        return true;
> > > +    }
> > > +
> > > +    if (section->offset_within_address_space < iova_min) {
> > > +        error_report("RAM section out of device range (min=%lu, addr=%lu)",
> > > +                     iova_min, section->offset_within_address_space);
> > > +        return true;
> > > +    }
> > > +
> > > +    llend = vhost_vdpa_section_end(section);
> > > +    if (int128_gt(llend, int128_make64(iova_max))) {
> > > +        error_report("RAM section out of device range (max=%lu, end addr=%lu)",
> > > +                     iova_max, int128_get64(llend));
> > > +        return true;
> > > +    }
> > > +
> > > +    return false;
> > >  }
> > >
> > >  static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> > > @@ -162,7 +176,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
> > >      void *vaddr;
> > >      int ret;
> > >
> > > -    if (vhost_vdpa_listener_skipped_section(section)) {
> > > +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > > +                                            v->iova_range.last)) {
> > >          return;
> > >      }
> > >
> > > @@ -220,7 +235,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
> > >      Int128 llend, llsize;
> > >      int ret;
> > >
> > > -    if (vhost_vdpa_listener_skipped_section(section)) {
> > > +    if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > > +                                            v->iova_range.last)) {
> > >          return;
> > >      }
> > >
> > > @@ -288,9 +304,24 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> > >      vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
> > >  }
> > >
> > > +static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > > +{
> > > +    int ret;
> > > +
> > > +    ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE, &v->iova_range);
> > > +    if (ret != 0) {
> > > +        return ret;
> > > +    }
> >
> > I think we need a fallback for the kernel that does not support
> > VHOST_VDPA_GET_IOVA_RANGE?
> >
>
> I'm fine with giving a default, but only "old" kernels will not have
> the syscall. Future kernels will return success and [0, ULLONG_MAX]
> range, won't it?

Yes, but we need to make sure the qemu won't break on the kernel
without GET_IOVA_RANGE.

>
> Taking that into account, what is the good default value? Before this
> commit the valid range was [0, 2^63). Although that seems too wide for
> every iommu, I would go with that one. The kernel considers fine [0,
> ULLONG_MAX] in case the device and iommu domain does not support
> them...

I think it's good enough to use the value before the patch.

Thanks
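
A minimal sketch of that fallback, for illustration only: the default
below is an assumption based on the pre-patch behaviour of skipping
anything with bit 63 set, not necessarily what the series will settle on.

static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
{
    int ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE,
                              &v->iova_range);

    if (ret != 0) {
        /* Older kernel without VHOST_VDPA_GET_IOVA_RANGE: fall back to
         * the widest range QEMU assumed before this series. */
        v->iova_range.first = 0;
        v->iova_range.last = (1ULL << 63) - 1;
    }

    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
                                    v->iova_range.last);
    return 0;
}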

>
> Thanks!
>
> > Thanks
> >
> > > +
> > > +    trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > > +                                    v->iova_range.last);
> > > +    return ret;
> > > +}
> > > +
> > >  static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > >  {
> > >      struct vhost_vdpa *v;
> > > +    int r;
> > >      assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
> > >      trace_vhost_vdpa_init(dev, opaque);
> > >
> > > @@ -300,6 +331,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > >      v->listener = vhost_vdpa_memory_listener;
> > >      v->msg_type = VHOST_IOTLB_MSG_V2;
> > >
> > > +    r = vhost_vdpa_get_iova_range(v);
> > > +    if (unlikely(!r)) {
> > > +        return r;
> > > +    }
> > > +
> > >      vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
> > >                                 VIRTIO_CONFIG_S_DRIVER);
> > >
> > > diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> > > index 8ed19e9d0c..650e521e35 100644
> > > --- a/hw/virtio/trace-events
> > > +++ b/hw/virtio/trace-events
> > > @@ -52,6 +52,7 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
> > >  vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
> > >  vhost_vdpa_set_owner(void *dev) "dev: %p"
> > >  vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> > > +vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
> > >
> > >  # virtio.c
> > >  virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> > > --
> > > 2.27.0
> > >
> >
>

