qemu-devel.nongnu.org archive mirror
* [PATCH v10 00/10] virtio-iommu: VFIO integration
@ 2020-10-08 17:15 Jean-Philippe Brucker
  2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
                   ` (11 more replies)
  0 siblings, 12 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

This series adds support for VFIO endpoints to virtio-iommu.

Versions 1 to 9 were posted by Bharat Bhushan, but I am taking over for
now since he doesn't have much time to spend on it. Thanks again Bharat
for the work!

Two major changes since [v9]:

* Don't use per-endpoint page_size_mask properties. Instead keep a global
  page size for the virtio-iommu device, updated when adding a VFIO
  endpoint. Reject hotplug if the page size is incompatible.

* Try to make the MAP/UNMAP paths more efficient, by keeping track of the
  memory region within the endpoint structure.

More testing would be appreciated, since I can only test using a software
model as host at the moment. But it does seem to hold up well with PCIe
hotplug/unplug, and pass-through to guest userspace, which is no mean
feat.

Note that one page size combination is not supported: a 64kB host with a
4kB guest cannot work, because the host IOMMU driver will automatically
pick 64kB pages, which prevents mapping at a smaller granule. Supporting
this case would require introducing a page size negotiation mechanism from
the guest all the way down to the host IOMMU driver. Possible, but not
planned at the moment.

[v9] https://lore.kernel.org/qemu-devel/20200323084617.1782-1-bbhushan2@marvell.com/

Bharat Bhushan (7):
  virtio-iommu: Add memory notifiers for map/unmap
  virtio-iommu: Call memory notifiers in attach/detach
  virtio-iommu: Add replay() memory region callback
  virtio-iommu: Add notify_flag_changed() memory region callback
  memory: Add interface to set iommu page size mask
  vfio: Set IOMMU page size as per host supported page size
  virtio-iommu: Set supported page size mask

Jean-Philippe Brucker (3):
  virtio-iommu: Fix virtio_iommu_mr()
  virtio-iommu: Store memory region in endpoint struct
  vfio: Don't issue full 2^64 unmap

 include/exec/memory.h    |  26 +++++
 hw/vfio/common.c         |  19 ++++
 hw/virtio/virtio-iommu.c | 204 ++++++++++++++++++++++++++++++++++++++-
 softmmu/memory.c         |  13 +++
 hw/virtio/trace-events   |   5 +
 5 files changed, 265 insertions(+), 2 deletions(-)

-- 
2.28.0



^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr()
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  7:36   ` Auger Eric
  2020-10-19 21:36   ` Peter Xu
  2020-10-08 17:15 ` [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct Jean-Philippe Brucker
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

Due to an invalid mask, virtio_iommu_mr() may return the wrong memory
region. It hasn't been too problematic so far because the function was
only used to test the existence of an endpoint, but that is about to
change.

Fixes: cfb42188b24d ("virtio-iommu: Implement attach/detach command")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 hw/virtio/virtio-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index a91fa2f674c..543fbbb24fb 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -101,7 +101,7 @@ static IOMMUMemoryRegion *virtio_iommu_mr(VirtIOIOMMU *s, uint32_t sid)
     bus_n = PCI_BUS_NUM(sid);
     iommu_pci_bus = iommu_find_iommu_pcibus(s, bus_n);
     if (iommu_pci_bus) {
-        devfn = sid & PCI_DEVFN_MAX;
+        devfn = sid & (PCI_DEVFN_MAX - 1);
         dev = iommu_pci_bus->pbdev[devfn];
         if (dev) {
             return &dev->iommu_mr;
-- 
2.28.0




* [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
  2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  7:37   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap Jean-Philippe Brucker
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

Store the memory region associated with each endpoint in the endpoint
structure, to allow efficient memory notification on map/unmap.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
Not super confident about the reconstruct_endpoint() change since I
haven't tested migration yet. Does it make sense?
---
 hw/virtio/virtio-iommu.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 543fbbb24fb..33115e82186 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUDomain {
 typedef struct VirtIOIOMMUEndpoint {
     uint32_t id;
     VirtIOIOMMUDomain *domain;
+    IOMMUMemoryRegion *iommu_mr;
     QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
 } VirtIOIOMMUEndpoint;
 
@@ -137,16 +138,19 @@ static VirtIOIOMMUEndpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
                                                       uint32_t ep_id)
 {
     VirtIOIOMMUEndpoint *ep;
+    IOMMUMemoryRegion *mr;
 
     ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(ep_id));
     if (ep) {
         return ep;
     }
-    if (!virtio_iommu_mr(s, ep_id)) {
+    mr = virtio_iommu_mr(s, ep_id);
+    if (!mr) {
         return NULL;
     }
     ep = g_malloc0(sizeof(*ep));
     ep->id = ep_id;
+    ep->iommu_mr = mr;
     trace_virtio_iommu_get_endpoint(ep_id);
     g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
     return ep;
@@ -927,9 +931,14 @@ static gboolean reconstruct_endpoints(gpointer key, gpointer value,
     VirtIOIOMMU *s = (VirtIOIOMMU *)data;
     VirtIOIOMMUDomain *d = (VirtIOIOMMUDomain *)value;
     VirtIOIOMMUEndpoint *iter;
+    IOMMUMemoryRegion *mr;
 
     QLIST_FOREACH(iter, &d->endpoint_list, next) {
+        mr = virtio_iommu_mr(s, iter->id);
+        assert(mr);
+
         iter->domain = d;
+        iter->iommu_mr = mr;
         g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
     }
     return false; /* continue the domain traversal */
-- 
2.28.0




* [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
  2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
  2020-10-08 17:15 ` [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  7:58   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach Jean-Philippe Brucker
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Extend the VIRTIO_IOMMU_T_MAP/UNMAP requests to notify memory listeners.
These will call the VFIO notifier to map/unmap regions in the physical
IOMMU.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10:
* Use the flags from IOMMUMemoryRegion
* Pass virt_start/virt_end rather than size, to avoid dealing with
  overflow and for consistency.
---
 hw/virtio/virtio-iommu.c | 53 ++++++++++++++++++++++++++++++++++++++++
 hw/virtio/trace-events   |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 33115e82186..fcdf3a819f8 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -125,6 +125,49 @@ static gint interval_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
     }
 }
 
+static void virtio_iommu_notify_map(IOMMUMemoryRegion *mr, hwaddr virt_start,
+                                    hwaddr virt_end, hwaddr paddr)
+{
+    IOMMUTLBEntry entry;
+    IOMMUNotifierFlag flags = mr->iommu_notify_flags;
+
+    if (!(flags & IOMMU_NOTIFIER_MAP)) {
+        return;
+    }
+
+    trace_virtio_iommu_notify_map(mr->parent_obj.name, virt_start, virt_end,
+                                  paddr);
+
+    entry.target_as = &address_space_memory;
+    entry.addr_mask = virt_end - virt_start;
+    entry.iova = virt_start;
+    entry.perm = IOMMU_RW;
+    entry.translated_addr = paddr;
+
+    memory_region_notify_iommu(mr, 0, entry);
+}
+
+static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,
+                                      hwaddr virt_end)
+{
+    IOMMUTLBEntry entry;
+    IOMMUNotifierFlag flags = mr->iommu_notify_flags;
+
+    if (!(flags & IOMMU_NOTIFIER_UNMAP)) {
+        return;
+    }
+
+    trace_virtio_iommu_notify_unmap(mr->parent_obj.name, virt_start, virt_end);
+
+    entry.target_as = &address_space_memory;
+    entry.addr_mask = virt_end - virt_start;
+    entry.iova = virt_start;
+    entry.perm = IOMMU_NONE;
+    entry.translated_addr = 0;
+
+    memory_region_notify_iommu(mr, 0, entry);
+}
+
 static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
 {
     if (!ep->domain) {
@@ -315,6 +358,7 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
     VirtIOIOMMUDomain *domain;
     VirtIOIOMMUInterval *interval;
     VirtIOIOMMUMapping *mapping;
+    VirtIOIOMMUEndpoint *ep;
 
     if (flags & ~VIRTIO_IOMMU_MAP_F_MASK) {
         return VIRTIO_IOMMU_S_INVAL;
@@ -344,6 +388,10 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
 
     g_tree_insert(domain->mappings, interval, mapping);
 
+    QLIST_FOREACH(ep, &domain->endpoint_list, next) {
+        virtio_iommu_notify_map(ep->iommu_mr, virt_start, virt_end, phys_start);
+    }
+
     return VIRTIO_IOMMU_S_OK;
 }
 
@@ -356,6 +404,7 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
     VirtIOIOMMUMapping *iter_val;
     VirtIOIOMMUInterval interval, *iter_key;
     VirtIOIOMMUDomain *domain;
+    VirtIOIOMMUEndpoint *ep;
     int ret = VIRTIO_IOMMU_S_OK;
 
     trace_virtio_iommu_unmap(domain_id, virt_start, virt_end);
@@ -373,6 +422,10 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
         uint64_t current_high = iter_key->high;
 
         if (interval.low <= current_low && interval.high >= current_high) {
+            QLIST_FOREACH(ep, &domain->endpoint_list, next) {
+                virtio_iommu_notify_unmap(ep->iommu_mr, current_low,
+                                          current_high);
+            }
             g_tree_remove(domain->mappings, iter_key);
             trace_virtio_iommu_unmap_done(domain_id, current_low, current_high);
         } else {
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index cf1e59de302..65a48555c78 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -106,6 +106,8 @@ virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
 virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d"
 virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64
 virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start, uint64_t end) "dev= %d, type=%d start=0x%"PRIx64" end=0x%"PRIx64
+virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
+virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
 
 # virtio-mem.c
 virtio_mem_send_response(uint16_t type) "type=%" PRIu16
-- 
2.28.0




* [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (2 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  8:05   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback Jean-Philippe Brucker
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Call the memory notifiers when attaching an endpoint to a domain, to
replay existing mappings, and when detaching the endpoint, to remove all
mappings.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10: Remove notifiers_list, rename callbacks
---
 hw/virtio/virtio-iommu.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index fcdf3a819f8..7e6e3cf5200 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -168,11 +168,39 @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,
     memory_region_notify_iommu(mr, 0, entry);
 }
 
+static gboolean virtio_iommu_notify_unmap_cb(gpointer key, gpointer value,
+                                             gpointer data)
+{
+    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+    virtio_iommu_notify_unmap(mr, interval->low, interval->high);
+
+    return false;
+}
+
+static gboolean virtio_iommu_notify_map_cb(gpointer key, gpointer value,
+                                           gpointer data)
+{
+    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
+    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+    virtio_iommu_notify_map(mr, interval->low, interval->high,
+                            mapping->phys_addr);
+
+    return false;
+}
+
 static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
 {
+    VirtIOIOMMUDomain *domain = ep->domain;
+
     if (!ep->domain) {
         return;
     }
+    g_tree_foreach(domain->mappings, virtio_iommu_notify_unmap_cb,
+                   ep->iommu_mr);
     QLIST_REMOVE(ep, next);
     ep->domain = NULL;
 }
@@ -315,6 +343,10 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
 
     ep->domain = domain;
 
+    /* Replay domain mappings on the associated memory region */
+    g_tree_foreach(domain->mappings, virtio_iommu_notify_map_cb,
+                   ep->iommu_mr);
+
     return VIRTIO_IOMMU_S_OK;
 }
 
-- 
2.28.0




* [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (3 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  9:12   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() " Jean-Philippe Brucker
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Implement the replay callback to set up all mappings for a new memory
region.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10: Homogenize tracepoint arguments
---
 hw/virtio/virtio-iommu.c | 41 ++++++++++++++++++++++++++++++++++++++++
 hw/virtio/trace-events   |  1 +
 2 files changed, 42 insertions(+)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 7e6e3cf5200..d2b96846134 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -861,6 +861,46 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
     return (ua > ub) - (ua < ub);
 }
 
+static gboolean virtio_iommu_remap(gpointer key, gpointer value, gpointer data)
+{
+    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
+    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+    trace_virtio_iommu_remap(mr->parent_obj.name, interval->low, interval->high,
+                             mapping->phys_addr);
+    virtio_iommu_notify_unmap(mr, interval->low, interval->high);
+    virtio_iommu_notify_map(mr, interval->low, interval->high,
+                            mapping->phys_addr);
+    return false;
+}
+
+static void virtio_iommu_replay(IOMMUMemoryRegion *mr, IOMMUNotifier *n)
+{
+    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+    VirtIOIOMMU *s = sdev->viommu;
+    uint32_t sid;
+    VirtIOIOMMUEndpoint *ep;
+
+    sid = virtio_iommu_get_bdf(sdev);
+
+    qemu_mutex_lock(&s->mutex);
+
+    if (!s->endpoints) {
+        goto unlock;
+    }
+
+    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
+    if (!ep || !ep->domain) {
+        goto unlock;
+    }
+
+    g_tree_foreach(ep->domain->mappings, virtio_iommu_remap, mr);
+
+unlock:
+    qemu_mutex_unlock(&s->mutex);
+}
+
 static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -1091,6 +1131,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
     IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
 
     imrc->translate = virtio_iommu_translate;
+    imrc->replay = virtio_iommu_replay;
 }
 
 static const TypeInfo virtio_iommu_info = {
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 65a48555c78..16f4729db4b 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -108,6 +108,7 @@ virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uin
 virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start, uint64_t end) "dev= %d, type=%d start=0x%"PRIx64" end=0x%"PRIx64
 virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
 virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
+virtio_iommu_remap(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
 
 # virtio-mem.c
 virtio_mem_send_response(uint16_t type) "type=%" PRIu16
-- 
2.28.0




* [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() memory region callback
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (4 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  8:18   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 07/10] memory: Add interface to set iommu page size mask Jean-Philippe Brucker
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Add notify_flag_changed() to be notified when memory listeners are added
and removed.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10:
* Use notifier flags instead of notifiers_list
* Homogenize tracepoints
---
 hw/virtio/virtio-iommu.c | 14 ++++++++++++++
 hw/virtio/trace-events   |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index d2b96846134..8823bfc804a 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -901,6 +901,19 @@ unlock:
     qemu_mutex_unlock(&s->mutex);
 }
 
+static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
+                                            IOMMUNotifierFlag old,
+                                            IOMMUNotifierFlag new,
+                                            Error **errp)
+{
+    if (old == IOMMU_NOTIFIER_NONE) {
+        trace_virtio_iommu_notify_flag_add(iommu_mr->parent_obj.name);
+    } else if (new == IOMMU_NOTIFIER_NONE) {
+        trace_virtio_iommu_notify_flag_del(iommu_mr->parent_obj.name);
+    }
+    return 0;
+}
+
 static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -1132,6 +1145,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
 
     imrc->translate = virtio_iommu_translate;
     imrc->replay = virtio_iommu_replay;
+    imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
 }
 
 static const TypeInfo virtio_iommu_info = {
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 16f4729db4b..9108992bcc3 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -109,6 +109,8 @@ virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start,
 virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
 virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
 virtio_iommu_remap(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
+virtio_iommu_notify_flag_add(const char *name) "add notifier mr=%s"
+virtio_iommu_notify_flag_del(const char *name) "del notifier mr=%s"
 
 # virtio-mem.c
 virtio_mem_send_response(uint16_t type) "type=%" PRIu16
-- 
2.28.0




* [PATCH v10 07/10] memory: Add interface to set iommu page size mask
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (5 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() " Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16  9:24   ` Auger Eric
  2020-10-19 21:36   ` Peter Xu
  2020-10-08 17:15 ` [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size Jean-Philippe Brucker
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Allow setting the page size mask supported by an IOMMU memory region.
This enables a vIOMMU to communicate the page granule supported by an
assigned device, on hosts that use page sizes greater than 4kB.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10: Add errp parameter
---
 include/exec/memory.h | 26 ++++++++++++++++++++++++++
 softmmu/memory.c      | 13 +++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index dee09851622..c2da8381bec 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -382,6 +382,20 @@ struct IOMMUMemoryRegionClass {
      * @iommu: the IOMMUMemoryRegion
      */
     int (*num_indexes)(IOMMUMemoryRegion *iommu);
+
+    /*
+     * Set supported IOMMU page size
+     *
+     * If supported, allows to restrict the page size mask that can be supported
+     * with a given IOMMU memory region. For example, to propagate host physical
+     * IOMMU page size mask limitations to the virtual IOMMU.
+     *
+     * Returns 0 on success, or a negative error. In case of failure, the error
+     * object must be created.
+     */
+     int (*iommu_set_page_size_mask)(IOMMUMemoryRegion *iommu,
+                                     uint64_t page_size_mask,
+                                     Error **errp);
 };
 
 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -1389,6 +1403,18 @@ int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
  */
 int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
 
+/**
+ * memory_region_iommu_set_page_size_mask: set the supported page
+ * sizes for a given IOMMU memory region
+ *
+ * @iommu_mr: IOMMU memory region
+ * @page_size_mask: supported page size mask
+ * @errp: pointer to Error*, to store an error if it happens.
+ */
+int memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
+                                           uint64_t page_size_mask,
+                                           Error **errp);
+
 /**
  * memory_region_name: get a memory region's name
  *
diff --git a/softmmu/memory.c b/softmmu/memory.c
index fa280a19f7f..5c855a02704 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1811,6 +1811,19 @@ static int memory_region_update_iommu_notify_flags(IOMMUMemoryRegion *iommu_mr,
     return ret;
 }
 
+int memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
+                                           uint64_t page_size_mask,
+                                           Error **errp)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+    int ret = 0;
+
+    if (imrc->iommu_set_page_size_mask) {
+        ret = imrc->iommu_set_page_size_mask(iommu_mr, page_size_mask, errp);
+    }
+    return ret;
+}
+
 int memory_region_register_iommu_notifier(MemoryRegion *mr,
                                           IOMMUNotifier *n, Error **errp)
 {
-- 
2.28.0




* [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (6 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 07/10] memory: Add interface to set iommu page size mask Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-08 21:22   ` Alex Williamson
  2020-10-16  9:25   ` Auger Eric
  2020-10-08 17:15 ` [PATCH v10 09/10] virtio-iommu: Set supported page size mask Jean-Philippe Brucker
                   ` (3 subsequent siblings)
  11 siblings, 2 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

Set the IOMMU supported page size mask to match the page size mask
supported by the host Linux kernel.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 hw/vfio/common.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 13471ae2943..e66054b02a7 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -636,6 +636,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
                             int128_get64(llend),
                             iommu_idx);
 
+        ret = memory_region_iommu_set_page_size_mask(giommu->iommu,
+                                                     container->pgsizes,
+                                                     &err);
+        if (ret) {
+            g_free(giommu);
+            goto fail;
+        }
+
         ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
                                                     &err);
         if (ret) {
-- 
2.28.0




* [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (7 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-16 13:08   ` Auger Eric
  2020-10-19 21:35   ` Peter Xu
  2020-10-08 17:15 ` [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap Jean-Philippe Brucker
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

From: Bharat Bhushan <bbhushan2@marvell.com>

The virtio-iommu device can deal with arbitrary page sizes for virtual
endpoints, but for endpoints assigned with VFIO it must follow the page
granule used by the host IOMMU driver.

Implement the interface to set the vIOMMU page size mask, called by VFIO
for each endpoint. We assume that all host IOMMU drivers use the same
page granule (the host page granule). Override the page_size_mask field
in the virtio config space.

Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v10: Use global page mask, allowing VFIO to override it until boot.
---
 hw/virtio/virtio-iommu.c | 51 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 8823bfc804a..dd0b3093d1b 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -914,6 +914,56 @@ static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
     return 0;
 }
 
+static int virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
+                                           uint64_t page_size_mask,
+                                           Error **errp)
+{
+    int new_granule, old_granule;
+    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+    VirtIOIOMMU *s = sdev->viommu;
+
+    if (!page_size_mask) {
+        return -1;
+    }
+
+    new_granule = ctz64(page_size_mask);
+    old_granule = ctz64(s->config.page_size_mask);
+
+    /*
+     * Modifying the page size after machine initialization isn't supported.
+     * Having a different mask is possible but the guest will use sub-optimal
+     * block sizes, so warn about it.
+     */
+    if (qdev_hotplug) {
+        if (new_granule != old_granule) {
+            error_setg(errp,
+                       "virtio-iommu page mask 0x%"PRIx64
+                       " is incompatible with mask 0x%"PRIx64,
+                       s->config.page_size_mask, page_size_mask);
+            return -1;
+        } else if (page_size_mask != s->config.page_size_mask) {
+            warn_report("virtio-iommu page mask 0x%"PRIx64
+                        " does not match 0x%"PRIx64,
+                        s->config.page_size_mask, page_size_mask);
+        }
+        return 0;
+    }
+
+    /*
+     * Disallow shrinking the page size. For example if an endpoint only
+     * supports 64kB pages, we can't globally enable 4kB pages. But that
+     * shouldn't happen, the host is unlikely to set up differing page granules.
+     * The other bits are only hints describing optimal block sizes.
+     */
+    if (new_granule < old_granule) {
+        error_setg(errp, "memory region shrinks the virtio-iommu page granule");
+        return -1;
+    }
+
+    s->config.page_size_mask = page_size_mask;
+    return 0;
+}
+
 static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -1146,6 +1196,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
     imrc->translate = virtio_iommu_translate;
     imrc->replay = virtio_iommu_replay;
     imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
+    imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
 }
 
 static const TypeInfo virtio_iommu_info = {
-- 
2.28.0



^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (8 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 09/10] virtio-iommu: Set supported page size mask Jean-Philippe Brucker
@ 2020-10-08 17:15 ` Jean-Philippe Brucker
  2020-10-08 21:22   ` Alex Williamson
  2020-10-16  9:47   ` Auger Eric
  2020-10-16 13:13 ` [PATCH v10 00/10] virtio-iommu: VFIO integration Auger Eric
  2020-10-30 10:27 ` Michael S. Tsirkin
  11 siblings, 2 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-08 17:15 UTC (permalink / raw)
  To: eric.auger, alex.williamson
  Cc: Jean-Philippe Brucker, mst, qemu-devel, peterx, pbonzini, bbhushan2

IOMMUs may declare memory regions spanning from 0 to UINT64_MAX. When
attempting to deal with such a region, vfio_listener_region_del() passes a
size of 2^64 to int128_get64(), which throws an assertion failure. Even
ignoring this, the VFIO_IOMMU_UNMAP_DMA ioctl cannot handle this size,
since 2^64 does not fit in its 64-bit size field. Split the request in two.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
For me this happens when memory_region_iommu_set_page_size_mask()
returns an error because a hotplugged endpoint uses an incompatible page
mask. vfio_connect_container() releases the memory listener which calls
region_del() with the 2^64 IOMMU region. There are probably other ways
to reach this.
---
 hw/vfio/common.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index e66054b02a7..e90a89c389e 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -797,6 +797,17 @@ static void vfio_listener_region_del(MemoryListener *listener,
     }
 
     if (try_unmap) {
+        if (llsize == int128_2_64()) {
+            /* The unmap ioctl doesn't accept a full 64-bit span. */
+            llsize = int128_rshift(llsize, 1);
+            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
+            if (ret) {
+                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
+                             "0x%"HWADDR_PRIx") = %d (%m)",
+                             container, iova, int128_get64(llsize), ret);
+            }
+            iova += int128_get64(llsize);
+        }
         ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
         if (ret) {
             error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
-- 
2.28.0




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-08 17:15 ` [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap Jean-Philippe Brucker
@ 2020-10-08 21:22   ` Alex Williamson
  2020-10-30 10:25     ` Michael S. Tsirkin
  2020-10-16  9:47   ` Auger Eric
  1 sibling, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-10-08 21:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, peterx, eric.auger, pbonzini, bbhushan2

On Thu,  8 Oct 2020 19:15:58 +0200
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:

> IOMMUs may declare memory regions spanning from 0 to UINT64_MAX. When
> attempting to deal with such a region, vfio_listener_region_del() passes a
> size of 2^64 to int128_get64(), which throws an assertion failure. Even
> ignoring this, the VFIO_IOMMU_UNMAP_DMA ioctl cannot handle this size,
> since 2^64 does not fit in its 64-bit size field. Split the request in two.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> For me this happens when memory_region_iommu_set_page_size_mask()
> returns an error because a hotplugged endpoint uses an incompatible page
> mask. vfio_connect_container() releases the memory listener which calls
> region_del() with the 2^64 IOMMU region. There are probably other ways
> to reach this.
> ---
>  hw/vfio/common.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index e66054b02a7..e90a89c389e 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -797,6 +797,17 @@ static void vfio_listener_region_del(MemoryListener *listener,
>      }
>  
>      if (try_unmap) {
> +        if (llsize == int128_2_64()) {
> +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> +            llsize = int128_rshift(llsize, 1);
> +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> +            if (ret) {
> +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> +                             "0x%"HWADDR_PRIx") = %d (%m)",
> +                             container, iova, int128_get64(llsize), ret);
> +            }
> +            iova += int128_get64(llsize);
> +        }
>          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
>          if (ret) {
>              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "

We're still susceptible to the case where splitting the range in two
results in unmap calls that attempt to bisect a mapping spanning both
ranges.  Both unmap calls would fail in that case.  I think we could
solve this more completely with a high water marker, but this is
probably good enough for now.

Acked-by: Alex Williamson <alex.williamson@redhat.com>




* Re: [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size
  2020-10-08 17:15 ` [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size Jean-Philippe Brucker
@ 2020-10-08 21:22   ` Alex Williamson
  2020-10-30 10:26     ` Michael S. Tsirkin
  2020-10-16  9:25   ` Auger Eric
  1 sibling, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-10-08 21:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, peterx, eric.auger, pbonzini, bbhushan2

On Thu,  8 Oct 2020 19:15:56 +0200
Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:

> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Set the IOMMU supported page size mask to match the page size mask
> supported by the host Linux kernel.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
>  hw/vfio/common.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 13471ae2943..e66054b02a7 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -636,6 +636,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
>                              int128_get64(llend),
>                              iommu_idx);
>  
> +        ret = memory_region_iommu_set_page_size_mask(giommu->iommu,
> +                                                     container->pgsizes,
> +                                                     &err);
> +        if (ret) {
> +            g_free(giommu);
> +            goto fail;
> +        }
> +
>          ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
>                                                      &err);
>          if (ret) {

Acked-by: Alex Williamson <alex.williamson@redhat.com>




* Re: [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr()
  2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
@ 2020-10-16  7:36   ` Auger Eric
  2020-10-19 21:36   ` Peter Xu
  1 sibling, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  7:36 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> Due to an invalid mask, virtio_iommu_mr() may return the wrong memory
> region. It hasn't been too problematic so far because the function was
> only used to test the existence of an endpoint, but that is about to change.
> 
> Fixes: cfb42188b24d ("virtio-iommu: Implement attach/detach command")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Maybe add
CC: QEMU Stable <qemu-stable@nongnu.org>

Acked-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> ---
>  hw/virtio/virtio-iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index a91fa2f674c..543fbbb24fb 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -101,7 +101,7 @@ static IOMMUMemoryRegion *virtio_iommu_mr(VirtIOIOMMU *s, uint32_t sid)
>      bus_n = PCI_BUS_NUM(sid);
>      iommu_pci_bus = iommu_find_iommu_pcibus(s, bus_n);
>      if (iommu_pci_bus) {
> -        devfn = sid & PCI_DEVFN_MAX;
> +        devfn = sid & (PCI_DEVFN_MAX - 1);
>          dev = iommu_pci_bus->pbdev[devfn];
>          if (dev) {
>              return &dev->iommu_mr;
> 




* Re: [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct
  2020-10-08 17:15 ` [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct Jean-Philippe Brucker
@ 2020-10-16  7:37   ` Auger Eric
  0 siblings, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  7:37 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> Store the memory region associated to each endpoint into the endpoint
> structure, to allow efficient memory notification on map/unmap.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: Eric Auger <eric.auger@redhat.com>

> ---
> Not super confident about the reconstruct_endpoint() change since I
> haven't tested migration yet. Does it make sense?
It sounds good to me. I tested migration with vhost and this works properly.

Eric
> ---
>  hw/virtio/virtio-iommu.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 543fbbb24fb..33115e82186 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUDomain {
>  typedef struct VirtIOIOMMUEndpoint {
>      uint32_t id;
>      VirtIOIOMMUDomain *domain;
> +    IOMMUMemoryRegion *iommu_mr;
>      QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
>  } VirtIOIOMMUEndpoint;
>  
> @@ -137,16 +138,19 @@ static VirtIOIOMMUEndpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
>                                                        uint32_t ep_id)
>  {
>      VirtIOIOMMUEndpoint *ep;
> +    IOMMUMemoryRegion *mr;
>  
>      ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(ep_id));
>      if (ep) {
>          return ep;
>      }
> -    if (!virtio_iommu_mr(s, ep_id)) {
> +    mr = virtio_iommu_mr(s, ep_id);
> +    if (!mr) {
>          return NULL;
>      }
>      ep = g_malloc0(sizeof(*ep));
>      ep->id = ep_id;
> +    ep->iommu_mr = mr;
>      trace_virtio_iommu_get_endpoint(ep_id);
>      g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
>      return ep;
> @@ -927,9 +931,14 @@ static gboolean reconstruct_endpoints(gpointer key, gpointer value,
>      VirtIOIOMMU *s = (VirtIOIOMMU *)data;
>      VirtIOIOMMUDomain *d = (VirtIOIOMMUDomain *)value;
>      VirtIOIOMMUEndpoint *iter;
> +    IOMMUMemoryRegion *mr;
>  
>      QLIST_FOREACH(iter, &d->endpoint_list, next) {
> +        mr = virtio_iommu_mr(s, iter->id);
> +        assert(mr);
> +
>          iter->domain = d;
> +        iter->iommu_mr = mr;
>          g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
>      }
>      return false; /* continue the domain traversal */
> 




* Re: [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap
  2020-10-08 17:15 ` [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap Jean-Philippe Brucker
@ 2020-10-16  7:58   ` Auger Eric
  2020-10-22 16:41     ` Jean-Philippe Brucker
  0 siblings, 1 reply; 47+ messages in thread
From: Auger Eric @ 2020-10-16  7:58 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Extend the VIRTIO_IOMMU_T_MAP/UNMAP requests to notify memory listeners,
> which will call the VFIO notifier to map/unmap regions in the physical IOMMU.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10:
> * Use the flags from IOMMUMemoryRegion
> * Pass virt_start/virt_end rather than size, to avoid dealing with
>   overflow and for consistency.
> ---
>  hw/virtio/virtio-iommu.c | 53 ++++++++++++++++++++++++++++++++++++++++
>  hw/virtio/trace-events   |  2 ++
>  2 files changed, 55 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 33115e82186..fcdf3a819f8 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -125,6 +125,49 @@ static gint interval_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
>      }
>  }
>  
> +static void virtio_iommu_notify_map(IOMMUMemoryRegion *mr, hwaddr virt_start,
> +                                    hwaddr virt_end, hwaddr paddr)
> +{
> +    IOMMUTLBEntry entry;
> +    IOMMUNotifierFlag flags = mr->iommu_notify_flags;
> +
> +    if (!(flags & IOMMU_NOTIFIER_MAP)) {
> +        return;
> +    }
> +
> +    trace_virtio_iommu_notify_map(mr->parent_obj.name, virt_start, virt_end,
> +                                  paddr);
> +
> +    entry.target_as = &address_space_memory;
> +    entry.addr_mask = virt_end - virt_start;
> +    entry.iova = virt_start;
> +    entry.perm = IOMMU_RW;
Logically you should be able to propagate the flags field of the
struct virtio_iommu_req_map *req instead.
> +    entry.translated_addr = paddr;
> +
> +    memory_region_notify_iommu(mr, 0, entry);
> +}
> +
> +static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,
> +                                      hwaddr virt_end)
> +{
> +    IOMMUTLBEntry entry;
> +    IOMMUNotifierFlag flags = mr->iommu_notify_flags;
> +
> +    if (!(flags & IOMMU_NOTIFIER_UNMAP)) {
> +        return;
> +    }
> +
> +    trace_virtio_iommu_notify_unmap(mr->parent_obj.name, virt_start, virt_end);
> +
> +    entry.target_as = &address_space_memory;
> +    entry.addr_mask = virt_end - virt_start;
> +    entry.iova = virt_start;
> +    entry.perm = IOMMU_NONE;
> +    entry.translated_addr = 0;
> +
> +    memory_region_notify_iommu(mr, 0, entry);
> +}
> +
>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
>  {
>      if (!ep->domain) {
> @@ -315,6 +358,7 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
>      VirtIOIOMMUDomain *domain;
>      VirtIOIOMMUInterval *interval;
>      VirtIOIOMMUMapping *mapping;
> +    VirtIOIOMMUEndpoint *ep;
>  
>      if (flags & ~VIRTIO_IOMMU_MAP_F_MASK) {
>          return VIRTIO_IOMMU_S_INVAL;
> @@ -344,6 +388,10 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
>  
>      g_tree_insert(domain->mappings, interval, mapping);
>  
> +    QLIST_FOREACH(ep, &domain->endpoint_list, next) {
> +        virtio_iommu_notify_map(ep->iommu_mr, virt_start, virt_end, phys_start);
> +    }
> +
>      return VIRTIO_IOMMU_S_OK;
>  }
>  
> @@ -356,6 +404,7 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
>      VirtIOIOMMUMapping *iter_val;
>      VirtIOIOMMUInterval interval, *iter_key;
>      VirtIOIOMMUDomain *domain;
> +    VirtIOIOMMUEndpoint *ep;
>      int ret = VIRTIO_IOMMU_S_OK;
>  
>      trace_virtio_iommu_unmap(domain_id, virt_start, virt_end);
> @@ -373,6 +422,10 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
>          uint64_t current_high = iter_key->high;
>  
>          if (interval.low <= current_low && interval.high >= current_high) {
> +            QLIST_FOREACH(ep, &domain->endpoint_list, next) {
> +                virtio_iommu_notify_unmap(ep->iommu_mr, current_low,
> +                                          current_high);
> +            }
>              g_tree_remove(domain->mappings, iter_key);
>              trace_virtio_iommu_unmap_done(domain_id, current_low, current_high);
>          } else {
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index cf1e59de302..65a48555c78 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -106,6 +106,8 @@ virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
>  virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d"
>  virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64
>  virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start, uint64_t end) "dev= %d, type=%d start=0x%"PRIx64" end=0x%"PRIx64
> +virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
> +virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
>  
>  # virtio-mem.c
>  virtio_mem_send_response(uint16_t type) "type=%" PRIu16
> 
Besides it looks good to me.

Thanks

Eric




* Re: [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach
  2020-10-08 17:15 ` [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach Jean-Philippe Brucker
@ 2020-10-16  8:05   ` Auger Eric
  0 siblings, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  8:05 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Call the memory notifiers when attaching an endpoint to a domain, to
> replay existing mappings, and when detaching the endpoint, to remove all
> mappings.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10: Remove notifiers_list, rename callbacks
> ---
>  hw/virtio/virtio-iommu.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index fcdf3a819f8..7e6e3cf5200 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -168,11 +168,39 @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr virt_start,
>      memory_region_notify_iommu(mr, 0, entry);
>  }
>  
> +static gboolean virtio_iommu_notify_unmap_cb(gpointer key, gpointer value,
> +                                             gpointer data)
> +{
> +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> +
> +    virtio_iommu_notify_unmap(mr, interval->low, interval->high);
> +
> +    return false;
> +}
> +
> +static gboolean virtio_iommu_notify_map_cb(gpointer key, gpointer value,
> +                                           gpointer data)
> +{
> +    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> +
> +    virtio_iommu_notify_map(mr, interval->low, interval->high,
> +                            mapping->phys_addr);
Here also I think we should apply the mapping->flags.
> +
> +    return false;
> +}
> +
>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
>  {
> +    VirtIOIOMMUDomain *domain = ep->domain;
> +
>      if (!ep->domain) {
>          return;
>      }
> +    g_tree_foreach(domain->mappings, virtio_iommu_notify_unmap_cb,
> +                   ep->iommu_mr);
>      QLIST_REMOVE(ep, next);
>      ep->domain = NULL;
>  }
> @@ -315,6 +343,10 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
>  
>      ep->domain = domain;
>  
> +    /* Replay domain mappings on the associated memory region */
> +    g_tree_foreach(domain->mappings, virtio_iommu_notify_map_cb,
> +                   ep->iommu_mr);
> +
>      return VIRTIO_IOMMU_S_OK;
>  }
>  
> 
Thanks

Eric




* Re: [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() memory region callback
  2020-10-08 17:15 ` [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() " Jean-Philippe Brucker
@ 2020-10-16  8:18   ` Auger Eric
  0 siblings, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  8:18 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Add notify_flag_changed() to notice when memory listeners are added and
> removed.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10:
> * Use notifier flags instead of notifiers_list
> * Homogenize tracepoints
> ---
>  hw/virtio/virtio-iommu.c | 14 ++++++++++++++
>  hw/virtio/trace-events   |  2 ++
>  2 files changed, 16 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index d2b96846134..8823bfc804a 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -901,6 +901,19 @@ unlock:
>      qemu_mutex_unlock(&s->mutex);
>  }
>  
> +static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
> +                                            IOMMUNotifierFlag old,
> +                                            IOMMUNotifierFlag new,
> +                                            Error **errp)
> +{
> +    if (old == IOMMU_NOTIFIER_NONE) {
> +        trace_virtio_iommu_notify_flag_add(iommu_mr->parent_obj.name);
> +    } else if (new == IOMMU_NOTIFIER_NONE) {
> +        trace_virtio_iommu_notify_flag_del(iommu_mr->parent_obj.name);
> +    }
> +    return 0;
> +}
> +
>  static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> @@ -1132,6 +1145,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
>  
>      imrc->translate = virtio_iommu_translate;
>      imrc->replay = virtio_iommu_replay;
> +    imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
>  }
>  
>  static const TypeInfo virtio_iommu_info = {
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 16f4729db4b..9108992bcc3 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -109,6 +109,8 @@ virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start,
>  virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
>  virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
>  virtio_iommu_remap(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
> +virtio_iommu_notify_flag_add(const char *name) "add notifier mr=%s"
Maybe "add notifier %d to mr=%s"
> +virtio_iommu_notify_flag_del(const char *name) "del notifier mr=%s"
from?
>  
>  # virtio-mem.c
>  virtio_mem_send_response(uint16_t type) "type=%" PRIu16
> 

Besides
Acked-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric




* Re: [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback
  2020-10-08 17:15 ` [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback Jean-Philippe Brucker
@ 2020-10-16  9:12   ` Auger Eric
  2020-10-22 16:42     ` Jean-Philippe Brucker
  0 siblings, 1 reply; 47+ messages in thread
From: Auger Eric @ 2020-10-16  9:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Implement the replay callback to setup all mappings for a new memory
> region.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10: Homogenize tracepoint arguments
> ---
>  hw/virtio/virtio-iommu.c | 41 ++++++++++++++++++++++++++++++++++++++++
>  hw/virtio/trace-events   |  1 +
>  2 files changed, 42 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 7e6e3cf5200..d2b96846134 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -861,6 +861,46 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
>      return (ua > ub) - (ua < ub);
>  }
>  
> +static gboolean virtio_iommu_remap(gpointer key, gpointer value, gpointer data)
> +{
> +    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> +
> +    trace_virtio_iommu_remap(mr->parent_obj.name, interval->low, interval->high,
> +                             mapping->phys_addr);
> +    virtio_iommu_notify_unmap(mr, interval->low, interval->high);
> +    virtio_iommu_notify_map(mr, interval->low, interval->high,
> +                            mapping->phys_addr);
I don't get the preliminary unmap with the same data. Why isn't the map
sufficient to replay?

The default implementation only notifies for valid entries.
> +    return false;
> +}
> +
> +static void virtio_iommu_replay(IOMMUMemoryRegion *mr, IOMMUNotifier *n)
> +{
> +    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> +    VirtIOIOMMU *s = sdev->viommu;
> +    uint32_t sid;
> +    VirtIOIOMMUEndpoint *ep;
> +
> +    sid = virtio_iommu_get_bdf(sdev);
> +
> +    qemu_mutex_lock(&s->mutex);
> +
> +    if (!s->endpoints) {
> +        goto unlock;
> +    }
> +
> +    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
> +    if (!ep || !ep->domain) {
> +        goto unlock;
> +    }
> +
> +    g_tree_foreach(ep->domain->mappings, virtio_iommu_remap, mr);
> +
> +unlock:
> +    qemu_mutex_unlock(&s->mutex);
> +}
> +
>  static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> @@ -1091,6 +1131,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
>      IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
>  
>      imrc->translate = virtio_iommu_translate;
> +    imrc->replay = virtio_iommu_replay;
>  }
>  
>  static const TypeInfo virtio_iommu_info = {
> diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> index 65a48555c78..16f4729db4b 100644
> --- a/hw/virtio/trace-events
> +++ b/hw/virtio/trace-events
> @@ -108,6 +108,7 @@ virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uin
>  virtio_iommu_fill_resv_property(uint32_t devid, uint8_t subtype, uint64_t start, uint64_t end) "dev= %d, type=%d start=0x%"PRIx64" end=0x%"PRIx64
>  virtio_iommu_notify_map(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
>  virtio_iommu_notify_unmap(const char *name, uint64_t virt_start, uint64_t virt_end) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
> +virtio_iommu_remap(const char *name, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start) "mr=%s virt_start=0x%"PRIx64" virt_end=0x%"PRIx64" phys_start=0x%"PRIx64
>  
>  # virtio-mem.c
>  virtio_mem_send_response(uint16_t type) "type=%" PRIu16
> 
Thanks

Eric




* Re: [PATCH v10 07/10] memory: Add interface to set iommu page size mask
  2020-10-08 17:15 ` [PATCH v10 07/10] memory: Add interface to set iommu page size mask Jean-Philippe Brucker
@ 2020-10-16  9:24   ` Auger Eric
  2020-10-22 16:43     ` Jean-Philippe Brucker
  2020-10-19 21:36   ` Peter Xu
  1 sibling, 1 reply; 47+ messages in thread
From: Auger Eric @ 2020-10-16  9:24 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Allow setting the page size mask supported by an IOMMU memory region.
> This enables a vIOMMU to communicate the page size granule supported by
> an assigned device, on hosts that use page sizes greater than 4kB.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10: Add errp parameter
> ---
>  include/exec/memory.h | 26 ++++++++++++++++++++++++++
>  softmmu/memory.c      | 13 +++++++++++++
>  2 files changed, 39 insertions(+)
> 
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index dee09851622..c2da8381bec 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -382,6 +382,20 @@ struct IOMMUMemoryRegionClass {
>       * @iommu: the IOMMUMemoryRegion
>       */
>      int (*num_indexes)(IOMMUMemoryRegion *iommu);
> +
> +    /*
> +     * Set supported IOMMU page size
> +     *
> +     * If supported, allows to restrict the page size mask that can be supported
To match other docs: Optional method:
> +     * with a given IOMMU memory region. For example, to propagate host physical
> +     * IOMMU page size mask limitations to the virtual IOMMU.
> +     *
> +     * Returns 0 on success, or a negative error. In case of failure, the error
> +     * object must be created.
document args as done for other functions?
> +     */
> +     int (*iommu_set_page_size_mask)(IOMMUMemoryRegion *iommu,
> +                                     uint64_t page_size_mask,
> +                                     Error **errp);
>  };
>  
>  typedef struct CoalescedMemoryRange CoalescedMemoryRange;
> @@ -1389,6 +1403,18 @@ int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
>   */
>  int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
>  
> +/**
> + * memory_region_iommu_set_page_size_mask: set the supported page
> + * sizes for a given IOMMU memory region
> + *
> + * @iommu_mr: IOMMU memory region
> + * @page_size_mask: supported page size mask
> + * @errp: pointer to Error*, to store an error if it happens.
> + */
> +int memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
> +                                           uint64_t page_size_mask,
> +                                           Error **errp);
> +
>  /**
>   * memory_region_name: get a memory region's name
>   *
> diff --git a/softmmu/memory.c b/softmmu/memory.c
> index fa280a19f7f..5c855a02704 100644
> --- a/softmmu/memory.c
> +++ b/softmmu/memory.c
> @@ -1811,6 +1811,19 @@ static int memory_region_update_iommu_notify_flags(IOMMUMemoryRegion *iommu_mr,
>      return ret;
>  }
>  
> +int memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
> +                                           uint64_t page_size_mask,
> +                                           Error **errp)
> +{
> +    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
> +    int ret = 0;
> +
> +    if (imrc->iommu_set_page_size_mask) {
> +        ret = imrc->iommu_set_page_size_mask(iommu_mr, page_size_mask, errp);
> +    }
> +    return ret;
> +}
> +
>  int memory_region_register_iommu_notifier(MemoryRegion *mr,
>                                            IOMMUNotifier *n, Error **errp)
>  {
> 
Besides

Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
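The wrapper added in softmmu/memory.c treats the new class method as optional: a region whose class does not provide it reports success without doing anything. That dispatch behaviour can be sketched as follows (hypothetical Python, not QEMU code; `set_page_size_mask` here stands in for the C wrapper):

```python
def set_page_size_mask(imrc_hook, page_size_mask):
    """Mimic memory_region_iommu_set_page_size_mask(): call the class
    hook if the vIOMMU implements it, otherwise succeed as a no-op."""
    if imrc_hook is not None:
        return imrc_hook(page_size_mask)
    return 0  # no hook registered: nothing to restrict, report success

# A vIOMMU without the hook accepts any mask; one with a hook decides itself.
assert set_page_size_mask(None, 0xfffffffffffff000) == 0
assert set_page_size_mask(lambda mask: -1, 0xfffffffffffff000) == -1
```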



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size
  2020-10-08 17:15 ` [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size Jean-Philippe Brucker
  2020-10-08 21:22   ` Alex Williamson
@ 2020-10-16  9:25   ` Auger Eric
  1 sibling, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,
On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Set IOMMU supported page size mask same as host Linux supported page
> size mask.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
> ---
>  hw/vfio/common.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 13471ae2943..e66054b02a7 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -636,6 +636,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
>                              int128_get64(llend),
>                              iommu_idx);
>  
> +        ret = memory_region_iommu_set_page_size_mask(giommu->iommu,
> +                                                     container->pgsizes,
> +                                                     &err);
> +        if (ret) {
> +            g_free(giommu);
> +            goto fail;
> +        }
> +
>          ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
>                                                      &err);
>          if (ret) {
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-08 17:15 ` [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap Jean-Philippe Brucker
  2020-10-08 21:22   ` Alex Williamson
@ 2020-10-16  9:47   ` Auger Eric
  1 sibling, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16  9:47 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> IOMMUs may declare memory regions spanning from 0 to UINT64_MAX. When
> attempting to deal with such a region, vfio_listener_region_del() passes a
> size of 2^64 to int128_get64(), which triggers an assertion failure.  Even
> ignoring this, the VFIO_IOMMU_DMA_UNMAP ioctl cannot handle this size,
> since its size field is only 64 bits wide. Split the request in two.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric
> ---
> For me this happens when memory_region_iommu_set_page_size_mask()
> returns an error because a hotplugged endpoint uses an incompatible page
> mask. vfio_connect_container() releases the memory listener which calls
> region_del() with the 2^64 IOMMU region. There are probably other ways
> to reach this.
> ---
>  hw/vfio/common.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index e66054b02a7..e90a89c389e 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -797,6 +797,17 @@ static void vfio_listener_region_del(MemoryListener *listener,
>      }
>  
>      if (try_unmap) {
> +        if (llsize == int128_2_64()) {
> +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> +            llsize = int128_rshift(llsize, 1);
> +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> +            if (ret) {
> +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> +                             "0x%"HWADDR_PRIx") = %d (%m)",
> +                             container, iova, int128_get64(llsize), ret);
> +            }
> +            iova += int128_get64(llsize);
> +        }
>          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
>          if (ret) {
>              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> 
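The splitting logic in the hunk above can be mirrored in a short sketch (hypothetical Python, not QEMU code): a 2^64-byte span, which a 64-bit size field cannot represent, is halved and unmapped in two calls, while ordinary regions pass through unchanged.

```python
FULL_SPAN = 1 << 64  # 2^64 bytes: unrepresentable in a 64-bit size field

def unmap_calls(iova, size):
    """Return the (iova, size) unmap requests issued for one region,
    splitting a full 64-bit span into two half-span requests."""
    calls = []
    if size == FULL_SPAN:
        half = size >> 1
        calls.append((iova, half))
        iova += half
        size = half
    calls.append((iova, size))
    return calls

# The full span becomes two 2^63-byte unmaps; a normal region is untouched.
assert unmap_calls(0, FULL_SPAN) == [(0, 1 << 63), (1 << 63, 1 << 63)]
assert unmap_calls(0x1000, 0x2000) == [(0x1000, 0x2000)]
```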



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-08 17:15 ` [PATCH v10 09/10] virtio-iommu: Set supported page size mask Jean-Philippe Brucker
@ 2020-10-16 13:08   ` Auger Eric
  2020-10-22 16:43     ` Jean-Philippe Brucker
  2020-10-19 21:35   ` Peter Xu
  1 sibling, 1 reply; 47+ messages in thread
From: Auger Eric @ 2020-10-16 13:08 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> The virtio-iommu device can deal with arbitrary page sizes for virtual
> endpoints, but for endpoints assigned with VFIO it must follow the page
> granule used by the host IOMMU driver.
> 
> Implement the interface to set the vIOMMU page size mask, called by VFIO
> for each endpoint. We assume that all host IOMMU drivers use the same
> page granule (the host page granule). Override the page_size_mask field
> in the virtio config space.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

> ---
> v10: Use global page mask, allowing VFIO to override it until boot.
> ---
>  hw/virtio/virtio-iommu.c | 51 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 8823bfc804a..dd0b3093d1b 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -914,6 +914,56 @@ static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
>      return 0;
>  }
>  
> +static int virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> +                                           uint64_t page_size_mask,
> +                                           Error **errp)
> +{
> +    int new_granule, old_granule;
> +    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> +    VirtIOIOMMU *s = sdev->viommu;
> +
> +    if (!page_size_mask) {
set errp
> +        return -1;
> +    }
> +
> +    new_granule = ctz64(page_size_mask);
> +    old_granule = ctz64(s->config.page_size_mask);

I think it would be interesting to add a trace point here
> +
> +    /*
> +     * Modifying the page size after machine initialization isn't supported.
> +     * Having a different mask is possible but the guest will use sub-optimal
> +     * block sizes, so warn about it.
> +     */
> +    if (qdev_hotplug) {
> +        if (new_granule != old_granule) {
> +            error_setg(errp,
> +                       "virtio-iommu page mask 0x%"PRIx64
> +                       " is incompatible with mask 0x%"PRIx64,
> +                       s->config.page_size_mask, page_size_mask);
> +            return -1;
> +        } else if (page_size_mask != s->config.page_size_mask) {
> +            warn_report("virtio-iommu page mask 0x%"PRIx64
> +                        " does not match 0x%"PRIx64,
> +                        s->config.page_size_mask, page_size_mask);
> +        }
> +        return 0;
> +    }
> +
> +    /*
> +     * Disallow shrinking the page size. For example if an endpoint only
> +     * supports 64kB pages, we can't globally enable 4kB pages. But that
> +     * shouldn't happen, the host is unlikely to set up differing page granules.
> +     * The other bits are only hints describing optimal block sizes.
> +     */
> +    if (new_granule < old_granule) {
> +        error_setg(errp, "memory region shrinks the virtio-iommu page granule");
> +        return -1;
> +    }
> +
> +    s->config.page_size_mask = page_size_mask;
> +    return 0;
> +}
> +
>  static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> @@ -1146,6 +1196,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
>      imrc->translate = virtio_iommu_translate;
>      imrc->replay = virtio_iommu_replay;
>      imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
> +    imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
>  }
>  
>  static const TypeInfo virtio_iommu_info = {
> 
Thanks

Eric
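The granule arithmetic under review derives the page granule as the lowest set bit of the mask via ctz64(), and rejects masks that would shrink it. A sketch of that arithmetic (hypothetical Python; `ctz64` stands in for QEMU's helper):

```python
def ctz64(x):
    """Count trailing zeros of a non-zero 64-bit value, like QEMU's ctz64()."""
    assert x != 0
    return (x & -x).bit_length() - 1

def shrinks_granule(old_mask, new_mask):
    """True when the new mask's smallest page size is below the current one,
    which the patch above rejects outside of hotplug."""
    return ctz64(new_mask) < ctz64(old_mask)

assert ctz64(0xfffffffffffff000) == 12   # 4kB granule
assert ctz64(0xffffffffffff0000) == 16   # 64kB granule
assert shrinks_granule(0xffffffffffff0000, 0xfffffffffffff000)       # 64k -> 4k: rejected
assert not shrinks_granule(0xfffffffffffff000, 0xffffffffffff0000)   # 4k -> 64k: allowed
```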



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 00/10] virtio-iommu: VFIO integration
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (9 preceding siblings ...)
  2020-10-08 17:15 ` [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap Jean-Philippe Brucker
@ 2020-10-16 13:13 ` Auger Eric
  2020-10-30 10:27 ` Michael S. Tsirkin
  11 siblings, 0 replies; 47+ messages in thread
From: Auger Eric @ 2020-10-16 13:13 UTC (permalink / raw)
  To: Jean-Philippe Brucker, alex.williamson
  Cc: pbonzini, bbhushan2, qemu-devel, peterx, mst

Hi Jean,

On 10/8/20 7:15 PM, Jean-Philippe Brucker wrote:
> This series adds support for VFIO endpoints to virtio-iommu.
> 
> Versions 1 to 9 were posted by Bharat Bhushan, but I am taking over for
> now since he doesn't have much time to spend on it. Thanks again Bharat
> for the work!
> 
> Two major changes since [v9]:
> 
> * Don't use per-endpoint page_size_mask properties. Instead keep a global
>   page size for the virtio-iommu device, updated when adding a VFIO
>   endpoint. Reject hotplug if the page size is incompatible.
> 
> * Try to make the MAP/UNMAP paths more efficient, by keeping track of
>   the memory region within the endpoint structure.
> 
> More testing would be appreciated, since I can only test using a software
> model as host at the moment. But it does seem to hold well with PCIe
> hotplug/unplug, and pass-through to guest userspace, which are no mean
> feats.

I tested vhost migration and the following configurations:
host 4K- guest 4K: vhost, vfio, vfio hotplug
host 64K - guest 64K: vhost, vfio, vfio hotplug
host 4K - guest 64K: vhost, vfio, vfio hotplug

All those configs worked for me. I haven't noticed any issues with them.

Thanks

Eric
> 
> Note that one page size combination is not supported: host 64kB guest
> 4kB cannot work, because the host IOMMU driver will automatically pick
> 64kB pages which prevents mapping at a smaller granule. Supporting this
> case would require introducing a page size negotiation mechanism from
> the guest all the way to the host IOMMU driver. Possible, but not
> planned at the moment.
> 
> [v9] https://lore.kernel.org/qemu-devel/20200323084617.1782-1-bbhushan2@marvell.com/
> 
> Bharat Bhushan (7):
>   virtio-iommu: Add memory notifiers for map/unmap
>   virtio-iommu: Call memory notifiers in attach/detach
>   virtio-iommu: Add replay() memory region callback
>   virtio-iommu: Add notify_flag_changed() memory region callback
>   memory: Add interface to set iommu page size mask
>   vfio: Set IOMMU page size as per host supported page size
>   virtio-iommu: Set supported page size mask
> 
> Jean-Philippe Brucker (3):
>   virtio-iommu: Fix virtio_iommu_mr()
>   virtio-iommu: Store memory region in endpoint struct
>   vfio: Don't issue full 2^64 unmap
> 
>  include/exec/memory.h    |  26 +++++
>  hw/vfio/common.c         |  19 ++++
>  hw/virtio/virtio-iommu.c | 204 ++++++++++++++++++++++++++++++++++++++-
>  softmmu/memory.c         |  13 +++
>  hw/virtio/trace-events   |   5 +
>  5 files changed, 265 insertions(+), 2 deletions(-)
> 



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-08 17:15 ` [PATCH v10 09/10] virtio-iommu: Set supported page size mask Jean-Philippe Brucker
  2020-10-16 13:08   ` Auger Eric
@ 2020-10-19 21:35   ` Peter Xu
  2020-10-22 16:39     ` Jean-Philippe Brucker
  1 sibling, 1 reply; 47+ messages in thread
From: Peter Xu @ 2020-10-19 21:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 08, 2020 at 07:15:57PM +0200, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> The virtio-iommu device can deal with arbitrary page sizes for virtual
> endpoints, but for endpoints assigned with VFIO it must follow the page
> granule used by the host IOMMU driver.
> 
> Implement the interface to set the vIOMMU page size mask, called by VFIO
> for each endpoint. We assume that all host IOMMU drivers use the same
> page granule (the host page granule). Override the page_size_mask field
> in the virtio config space.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v10: Use global page mask, allowing VFIO to override it until boot.
> ---
>  hw/virtio/virtio-iommu.c | 51 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 8823bfc804a..dd0b3093d1b 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -914,6 +914,56 @@ static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
>      return 0;
>  }
>  
> +static int virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> +                                           uint64_t page_size_mask,
> +                                           Error **errp)
> +{
> +    int new_granule, old_granule;
> +    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> +    VirtIOIOMMU *s = sdev->viommu;
> +
> +    if (!page_size_mask) {
> +        return -1;
> +    }
> +
> +    new_granule = ctz64(page_size_mask);
> +    old_granule = ctz64(s->config.page_size_mask);
> +
> +    /*
> +     * Modifying the page size after machine initialization isn't supported.
> +     * Having a different mask is possible but the guest will use sub-optimal
> +     * block sizes, so warn about it.
> +     */
> +    if (qdev_hotplug) {
> +        if (new_granule != old_granule) {
> +            error_setg(errp,
> +                       "virtio-iommu page mask 0x%"PRIx64
> +                       " is incompatible with mask 0x%"PRIx64,
> +                       s->config.page_size_mask, page_size_mask);
> +            return -1;
> +        } else if (page_size_mask != s->config.page_size_mask) {
> +            warn_report("virtio-iommu page mask 0x%"PRIx64
> +                        " does not match 0x%"PRIx64,
> +                        s->config.page_size_mask, page_size_mask);
> +        }
> +        return 0;
> +    }
> +
> +    /*
> +     * Disallow shrinking the page size. For example if an endpoint only
> +     * supports 64kB pages, we can't globally enable 4kB pages. But that
> > +     * shouldn't happen, the host is unlikely to set up differing page granules.
> +     * The other bits are only hints describing optimal block sizes.
> +     */
> +    if (new_granule < old_granule) {
> +        error_setg(errp, "memory region shrinks the virtio-iommu page granule");
> +        return -1;
> +    }

My understanding is that shrink is actually allowed; instead we should forbid
growing the mask?  For example, initially old_granule will always point
to the guest page size.  Then as long as the host page size (which new_granule
represents) is smaller than old_granule, it seems fine... Or am I wrong?

Another thing, IIUC this function will be majorly called in vfio code when the
container page mask will be passed into it.  If there're multiple vfio
containers that support different host IOMMU page sizes, then IIUC the order of
the call to virtio_iommu_set_page_size_mask() is undefined.  It's probably
related to which "-device vfio-pci,..." parameter appears earlier on the command line.

To make this simpler, I'm thinking whether we should just forbid the case where
devices have different iommu page sizes.  So when assigned devices are used, we
make sure all host iommu page sizes are the same, and the value should be
smaller than the guest page size.  Otherwise we'll simply fall back to the guest page size.

Thanks,

> +
> +    s->config.page_size_mask = page_size_mask;
> +    return 0;
> +}
> +
>  static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> @@ -1146,6 +1196,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
>      imrc->translate = virtio_iommu_translate;
>      imrc->replay = virtio_iommu_replay;
>      imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
> +    imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
>  }
>  
>  static const TypeInfo virtio_iommu_info = {
> -- 
> 2.28.0
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr()
  2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
  2020-10-16  7:36   ` Auger Eric
@ 2020-10-19 21:36   ` Peter Xu
  1 sibling, 0 replies; 47+ messages in thread
From: Peter Xu @ 2020-10-19 21:36 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 08, 2020 at 07:15:49PM +0200, Jean-Philippe Brucker wrote:
> Due to an invalid mask, virtio_iommu_mr() may return the wrong memory
> region. It hasn't been too problematic so far because the function was
> only used to test existence of an endpoint, but that is about to change.
> 
> Fixes: cfb42188b24d ("virtio-iommu: Implement attach/detach command")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

Reviewed-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 07/10] memory: Add interface to set iommu page size mask
  2020-10-08 17:15 ` [PATCH v10 07/10] memory: Add interface to set iommu page size mask Jean-Philippe Brucker
  2020-10-16  9:24   ` Auger Eric
@ 2020-10-19 21:36   ` Peter Xu
  1 sibling, 0 replies; 47+ messages in thread
From: Peter Xu @ 2020-10-19 21:36 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 08, 2020 at 07:15:55PM +0200, Jean-Philippe Brucker wrote:
> From: Bharat Bhushan <bbhushan2@marvell.com>
> 
> Allow setting the page size mask supported by an IOMMU memory region.
> This enables a vIOMMU to communicate the page size granule supported by
> an assigned device, on hosts that use page sizes greater than 4kB.
> 
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

Acked-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-19 21:35   ` Peter Xu
@ 2020-10-22 16:39     ` Jean-Philippe Brucker
  2020-10-22 20:56       ` Peter Xu
  0 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-22 16:39 UTC (permalink / raw)
  To: Peter Xu
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Mon, Oct 19, 2020 at 05:35:39PM -0400, Peter Xu wrote:
> > +    /*
> > +     * Disallow shrinking the page size. For example if an endpoint only
> > +     * supports 64kB pages, we can't globally enable 4kB pages. But that
> > +     * shouldn't happen, the host is unlikely to set up differing page granules.
> > +     * The other bits are only hints describing optimal block sizes.
> > +     */
> > +    if (new_granule < old_granule) {
> > +        error_setg(errp, "memory region shrinks the virtio-iommu page granule");
> > +        return -1;
> > +    }
> 
> My understanding is that shrink is actually allowed; instead we should forbid
> growing the mask?  For example, initially old_granule will always point
> to the guest page size.  Then as long as the host page size (which new_granule
> represents) is smaller than old_granule, it seems fine... Or am I wrong?

The case I was checking against is two assigned devices with different
page sizes. First one sets a 64kB page size, then the second one shouldn't
be able to shrink it back to 4kB, because the guest would create mappings
not aligned on 64kB, which can't be applied by the pIOMMU of the first
device.

But let's forget this case for now, in practice all assigned devices use
the host page size.
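The alignment hazard described above can be made concrete with a small sketch (hypothetical Python, not QEMU code): a physical IOMMU whose granule was fixed at 64kB can only apply mappings whose IOVA and size are 64kB multiples, so 4kB mappings created by a guest that was later told the granule shrank would be unmappable for the first device.

```python
GRANULE_64K = 64 * 1024

def mappable_by_64k_iommu(iova, size):
    """A pIOMMU with a 64kB granule can only honour 64kB-aligned mappings."""
    return iova % GRANULE_64K == 0 and size % GRANULE_64K == 0

assert mappable_by_64k_iommu(0x10000, 0x20000)       # 64kB-aligned: fine
assert not mappable_by_64k_iommu(0x1000, 0x1000)     # 4kB mapping: rejected
```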

> 
> Another thing, IIUC this function will be majorly called in vfio code when the
> container page mask will be passed into it.  If there're multiple vfio
> containers that support different host IOMMU page sizes, then IIUC the order of
> the call to virtio_iommu_set_page_size_mask() is undefined.  It's probably
> related to which "-device vfio-pci,..." parameter is earlier.
> 
> To make this simpler, I'm thinking whether we should just forbid the case where
> devices have different iommu page sizes.  So when assigned devices are used, we
> make sure all host iommu page sizes are the same, and the value should be
> smaller than guest page size.  Otherwise we'll simply fall back to guest psize.

Mostly agree, I need to simplify this function.

I don't think we care about guest page size, though. Currently our default
mask is TARGET_PAGE_MASK, which is the smallest size supported by vCPUs
(generally 4kB), but it doesn't really mean guest page size, since the
guest can choose a larger granule at runtime. Besides, virtio-iommu can in
theory map at byte granule if there isn't any assigned device, so our
default mask could just as well be ~0ULL (but that doesn't work at the moment,
I've tried).

So what I'd like to do for next version:

* Set qemu_real_host_page_mask as the default page mask, instead of the
  rather arbitrary TARGET_PAGE_MASK. Otherwise we cannot hotplug assigned
  devices on a 64kB host, since TARGET_PAGE_MASK is pretty much always
  4kB.

* Disallow changing the page size. It's simpler and works in
  practice if we default to qemu_real_host_page_mask.

* For non-hotplug devices, allow changing the rest of the mask. For
  hotplug devices, only warn about it.

Thanks,
Jean
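The default masks Jean compares (TARGET_PAGE_MASK vs. qemu_real_host_page_mask) are both derived from a granule as ~(granule - 1): the granule bit plus every larger block size. A sketch of that derivation (illustrative Python, truncating to 64 bits as the C masks are):

```python
def default_page_size_mask(granule):
    """~(granule - 1) truncated to 64 bits: the granule and all larger blocks."""
    return ~(granule - 1) & ((1 << 64) - 1)

# A 4kB granule gives the TARGET_PAGE_MASK-style default; a 64kB host
# page size gives the mask needed to accept 64kB-granule assigned devices.
assert default_page_size_mask(4096) == 0xfffffffffffff000
assert default_page_size_mask(64 * 1024) == 0xffffffffffff0000
```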


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap
  2020-10-16  7:58   ` Auger Eric
@ 2020-10-22 16:41     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-22 16:41 UTC (permalink / raw)
  To: Auger Eric; +Cc: mst, qemu-devel, peterx, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 16, 2020 at 09:58:28AM +0200, Auger Eric wrote:
> > +static void virtio_iommu_notify_map(IOMMUMemoryRegion *mr, hwaddr virt_start,
> > +                                    hwaddr virt_end, hwaddr paddr)
> > +{
> > +    IOMMUTLBEntry entry;
> > +    IOMMUNotifierFlag flags = mr->iommu_notify_flags;
> > +
> > +    if (!(flags & IOMMU_NOTIFIER_MAP)) {
> > +        return;
> > +    }
> > +
> > +    trace_virtio_iommu_notify_map(mr->parent_obj.name, virt_start, virt_end,
> > +                                  paddr);
> > +
> > +    entry.target_as = &address_space_memory;
> > +    entry.addr_mask = virt_end - virt_start;
> > +    entry.iova = virt_start;
> > +    entry.perm = IOMMU_RW;
> logically you should be able to cascade the struct virtio_iommu_req_map
> *req flags field instead.

Agreed.

I'm also thinking of adding a check for VIRTIO_IOMMU_MAP_F_MMIO, to avoid
going further into the notifier and maybe do the same for unmap.

Thanks,
Jean



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback
  2020-10-16  9:12   ` Auger Eric
@ 2020-10-22 16:42     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-22 16:42 UTC (permalink / raw)
  To: Auger Eric; +Cc: mst, qemu-devel, peterx, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 16, 2020 at 11:12:35AM +0200, Auger Eric wrote:
> > +static gboolean virtio_iommu_remap(gpointer key, gpointer value, gpointer data)
> > +{
> > +    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> > +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> > +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> > +
> > +    trace_virtio_iommu_remap(mr->parent_obj.name, interval->low, interval->high,
> > +                             mapping->phys_addr);
> > +    virtio_iommu_notify_unmap(mr, interval->low, interval->high);
> > +    virtio_iommu_notify_map(mr, interval->low, interval->high,
> > +                            mapping->phys_addr);
> I don't get the preliminary unmap with the same data. Why isn't the map
> sufficient to replay?
> 
> The default implementation only notifies for valid entries.

Yes it should be enough, I'll remove the unmap

Thanks,
Jean


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 07/10] memory: Add interface to set iommu page size mask
  2020-10-16  9:24   ` Auger Eric
@ 2020-10-22 16:43     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-22 16:43 UTC (permalink / raw)
  To: Auger Eric; +Cc: mst, qemu-devel, peterx, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 16, 2020 at 11:24:08AM +0200, Auger Eric wrote:
> > +    /*
> > +     * Set supported IOMMU page size
> > +     *
> > +     * If supported, allows to restrict the page size mask that can be supported
> To match other docs: Optional method:
> > +     * with a given IOMMU memory region. For example, to propagate host physical
> > +     * IOMMU page size mask limitations to the virtual IOMMU.
> > +     *
> > +     * Returns 0 on success, or a negative error. In case of failure, the error
> > +     * object must be created.
> document args as done for other functions?

I'll change this comment to:

    /**
     * @iommu_set_page_size_mask:
     *
     * Restrict the page size mask that can be supported with a given IOMMU
     * memory region. Used for example to propagate host physical IOMMU page
     * size mask limitations to the virtual IOMMU.
     *
     * Optional method: if this method is not provided, then the default global
     * page mask is used.
     *
     * @iommu: the IOMMUMemoryRegion
     *
     * @page_size_mask: a bitmask of supported page sizes. At least one bit,
     * representing the smallest page size, must be set. Additional set bits
     * represent supported block sizes. For example a host physical IOMMU that
     * uses page tables with a page size of 4kB, and supports 2MB and 1GB
     * blocks, will set mask 0x40201000. A granule of 4kB with indiscriminate
     * block sizes is specified with mask 0xfffffffffffff000.
     *
     * Returns 0 on success, or a negative error. In case of failure, the error
     * object must be created.
     */

Thanks,
Jean
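The mask encoding in the comment above can be checked with a short sketch (hypothetical Python helper, not QEMU code): the lowest set bit is the granule, and each higher set bit is a supported block size, so 0x40201000 (bits 12, 21, 30) decodes to a 4kB granule with 2MB and 1GB (2^30) blocks.

```python
def decode_page_size_mask(mask):
    """Split a page size mask into (granule, block_sizes) per the
    semantics described in the doc comment above."""
    sizes = [1 << i for i in range(64) if mask & (1 << i)]
    return sizes[0], sizes[1:]

granule, blocks = decode_page_size_mask(0x40201000)
assert granule == 0x1000                     # 4kB granule
assert blocks == [0x200000, 0x40000000]      # 2MB and 1GB blocks
# Indiscriminate block sizes: 4kB granule, every larger size allowed.
assert decode_page_size_mask(0xfffffffffffff000)[0] == 4096
```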


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-16 13:08   ` Auger Eric
@ 2020-10-22 16:43     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-22 16:43 UTC (permalink / raw)
  To: Auger Eric; +Cc: mst, qemu-devel, peterx, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 16, 2020 at 03:08:03PM +0200, Auger Eric wrote:
> > +static int virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> > +                                           uint64_t page_size_mask,
> > +                                           Error **errp)
> > +{
> > +    int new_granule, old_granule;
> > +    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> > +    VirtIOIOMMU *s = sdev->viommu;
> > +
> > +    if (!page_size_mask) {
> set errp

Woops, fixed

> > +        return -1;
> > +    }
> > +
> > +    new_granule = ctz64(page_size_mask);
> > +    old_granule = ctz64(s->config.page_size_mask);
> 
> I think it would be interesting to add a trace point here

Agreed

Thanks,
Jean


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-22 16:39     ` Jean-Philippe Brucker
@ 2020-10-22 20:56       ` Peter Xu
  2020-10-23  7:48         ` Jean-Philippe Brucker
  0 siblings, 1 reply; 47+ messages in thread
From: Peter Xu @ 2020-10-22 20:56 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 22, 2020 at 06:39:37PM +0200, Jean-Philippe Brucker wrote:
> So what I'd like to do for next version:
> 
> * Set qemu_real_host_page_mask as the default page mask, instead of the
>   rather arbitrary TARGET_PAGE_MASK.

Oh, I thought TARGET_PAGE_MASK was intended - kernel commit 39b3b3c9cac1
("iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE", 2020-03-27)
explicitly introduced a check that virtio-iommu kernel driver will fail
directly if this psize is bigger than PAGE_SIZE in the guest.  So it sounds
reasonable to have the default value as PAGE_SIZE (if it's the same as
TARGET_PAGE_SIZE in QEMU, which seems true?).

For example, I'm thinking whether qemu_real_host_page_mask could be bigger than
PAGE_SIZE in the guest in some environments, then it seems virtio-iommu won't
boot anymore without assigned devices, because that extra check above will
always fail.

>   Otherwise we cannot hotplug assigned
>   devices on a 64kB host, since TARGET_PAGE_MASK is pretty much always
>   4kB.
> 
> * Disallow changing the page size. It's simpler and works in
>   practice if we default to qemu_real_host_page_mask.
> 
> * For non-hotplug devices, allow changing the rest of the mask. For
>   hotplug devices, only warn about it.

Could I ask what "the rest of the mask" means?  On the driver side, I see that
viommu_domain_finalise() will pick the largest supported page size to use; if
so, then we seem to be quite restricted in which page sizes we can use.

I'm also a bit curious about what scenario we plan to support in this initial
version, especially for ARM.  For x86, I think it's probably always 4k
everywhere so it's fairly simple.  Know little on ARM side...

Thanks,

-- 
Peter Xu




* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-22 20:56       ` Peter Xu
@ 2020-10-23  7:48         ` Jean-Philippe Brucker
  2020-10-23 16:47           ` Peter Xu
  0 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-23  7:48 UTC (permalink / raw)
  To: Peter Xu
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 22, 2020 at 04:56:16PM -0400, Peter Xu wrote:
> On Thu, Oct 22, 2020 at 06:39:37PM +0200, Jean-Philippe Brucker wrote:
> > So what I'd like to do for next version:
> > 
> > * Set qemu_real_host_page_mask as the default page mask, instead of the
> >   rather arbitrary TARGET_PAGE_MASK.
> 
> Oh, I thought TARGET_PAGE_MASK was intended - kernel commit 39b3b3c9cac1
> ("iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE", 2020-03-27)
> explicitly introduced a check that virtio-iommu kernel driver will fail
> directly if this psize is bigger than PAGE_SIZE in the guest.  So it sounds
> reasonable to have the default value as PAGE_SIZE (if it's the same as
> TARGET_PAGE_SIZE in QEMU, which seems true?).
> 
> For example, I'm thinking whether qemu_real_host_page_mask could be bigger than
> PAGE_SIZE in the guest in some environments, then it seems virtio-iommu won't
> boot anymore without assigned devices, because that extra check above will
> always fail.

Right, I missed this problem again. Switching to qemu_real_host_page_mask
is probably not the best idea until we solve the host64k-guest4k problem.

> 
> >   Otherwise we cannot hotplug assigned
> >   devices on a 64kB host, since TARGET_PAGE_MASK is pretty much always
> >   4kB.
> > 
> > * Disallow changing the page size. It's simpler and works in
> >   practice if we default to qemu_real_host_page_mask.
> > 
> > * For non-hotplug devices, allow changing the rest of the mask. For
> >   hotplug devices, only warn about it.
> 
> Could I ask what's "the rest of the mask"?

The LSB in the mask defines the page size. The other bits define which
block sizes are supported, for example 2MB and 1GB blocks with a 4k page
size. These are only an optimization; the upper bits of the mask could
also be all 1s. If the guest aligns its mappings on those block sizes,
then the host can use intermediate levels in the page tables resulting in
fewer IOTLB entries.

> On the driver side, I see that
> viommu_domain_finalise() will pick the largest supported page size to use, if
> so then we seem to be quite restricted on what page size we can use.

In Linux, iommu_dma_alloc_remap() tries to allocate blocks based on the
page mask (copied by viommu_domain_finalise() into domain->pgsize_bitmap).

> I'm also a bit curious about what scenario we plan to support in this initial
> version, especially for ARM.  For x86, I think it's probably always 4k
> everywhere so it's fairly simple.  Know little on ARM side...

Arm CPUs and SMMU support 4k, 16k and 64k page sizes. I don't think 16k is
used anywhere but some distributions chose 64k (RHEL, I think?), others
4k, so we need to support both.

Unfortunately as noted above host64k-guest4k is not possible without
adding a negotiation mechanism to virtio-iommu, host VFIO and IOMMU
driver.

Thanks,
Jean



* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-23  7:48         ` Jean-Philippe Brucker
@ 2020-10-23 16:47           ` Peter Xu
  2020-10-27 17:38             ` Jean-Philippe Brucker
  0 siblings, 1 reply; 47+ messages in thread
From: Peter Xu @ 2020-10-23 16:47 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 23, 2020 at 09:48:58AM +0200, Jean-Philippe Brucker wrote:
> Arm CPUs and SMMU support 4k, 16k and 64k page sizes. I don't think 16k is
> used anywhere but some distributions chose 64k (RHEL, I think?), others
> 4k, so we need to support both.
> 
> Unfortunately as noted above host64k-guest4k is not possible without
> adding a negotiation mechanism to virtio-iommu, host VFIO and IOMMU
> driver.

I see.  Then it seems we would still need to support host4k-guest64k.

Maybe for the assigned case, we can simply AND the psize_masks of all the
supported vfio containers to replace the default psize mask (TARGET_PAGE_SIZE),
without caring about whether it's shrinking or not?  Note that the current patch
only updates config.psize_mask to the new one, but I think we need to calculate
the intersection of all the containers' masks rather than a simple update.  Then
with the help of 39b3b3c9cac1, imho we'll gracefully fail the probe if the psize
is not suitable anyway, e.g., host64k-guest4k.

Thanks,

-- 
Peter Xu




* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-23 16:47           ` Peter Xu
@ 2020-10-27 17:38             ` Jean-Philippe Brucker
  2020-10-30 10:24               ` Michael S. Tsirkin
  0 siblings, 1 reply; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-27 17:38 UTC (permalink / raw)
  To: Peter Xu
  Cc: mst, qemu-devel, eric.auger, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 23, 2020 at 12:47:02PM -0400, Peter Xu wrote:
> On Fri, Oct 23, 2020 at 09:48:58AM +0200, Jean-Philippe Brucker wrote:
> > Arm CPUs and SMMU support 4k, 16k and 64k page sizes. I don't think 16k is
> > used anywhere but some distributions chose 64k (RHEL, I think?), others
> > 4k, so we need to support both.
> > 
> > Unfortunately as noted above host64k-guest4k is not possible without
> > adding a negotiation mechanism to virtio-iommu, host VFIO and IOMMU
> > driver.
> 
> I see.  Then it seems we would still need to support host4k-guest64k.
> 
> Maybe for the assigned case, we can simply AND the psize_masks of all the
> supported vfio containers to replace the default psize mask (TARGET_PAGE_SIZE),
> without caring about whether it's shrinking or not?  Note that the current patch
> only updates config.psize_mask to the new one, but I think we need to calculate
> the intersection of all the containers' masks rather than a simple update.

Yes, I think an AND is the right operation. We'll return an error if the
resulting mask is 0. Then for hotplug, I think I'll keep the current "best
effort" code from this patch. If necessary we could later add a parameter
to set a default mask and guarantee hotplug success.

Thanks,
Jean

> Then with the help
> of 39b3b3c9cac1 imho we'll gracefully fail the probe if the psize is not
> suitable anyway, e.g., host64k-guest4k.
> 
> Thanks,
> 
> -- 
> Peter Xu
> 



* Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
  2020-10-27 17:38             ` Jean-Philippe Brucker
@ 2020-10-30 10:24               ` Michael S. Tsirkin
  0 siblings, 0 replies; 47+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 10:24 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: qemu-devel, Peter Xu, eric.auger, alex.williamson, pbonzini, bbhushan2

On Tue, Oct 27, 2020 at 06:38:40PM +0100, Jean-Philippe Brucker wrote:
> On Fri, Oct 23, 2020 at 12:47:02PM -0400, Peter Xu wrote:
> > On Fri, Oct 23, 2020 at 09:48:58AM +0200, Jean-Philippe Brucker wrote:
> > > Arm CPUs and SMMU support 4k, 16k and 64k page sizes. I don't think 16k is
> > > used anywhere but some distributions chose 64k (RHEL, I think?), others
> > > 4k, so we need to support both.
> > > 
> > > Unfortunately as noted above host64k-guest4k is not possible without
> > > adding a negotiation mechanism to virtio-iommu, host VFIO and IOMMU
> > > driver.
> > 
> > I see.  Then it seems we would still need to support host4k-guest64k.
> > 
> > Maybe for assigned case, we can simply AND all the psize_masks of all the vfio
> > containers that supported to replace the default psize mask (TARGET_PAGE_SIZE)
> > without caring about whether it's shrinking or not?  Note that current patch
> > only update config.psize_mask to the new one, but I think we need to calculate
> > the subset of all containers rather than a simply update.
> 
> Yes I think an AND is the right operation. We'll return an error if the
> resulting mask is 0. Then for hotplug, I think I'll keep the current "best
> effort" code from this patch. If necessary we could later add a parameter
> to set a default mask and guarantee hotplug success.
> 
> Thanks,
> Jean


So I should expect a new version with that?

> > Then with the help
> > of 39b3b3c9cac1 imho we'll gracefully fail the probe if the psize is not
> > suitable anyway, e.g., host64k-guest4k.
> > 
> > Thanks,
> > 
> > -- 
> > Peter Xu
> > 




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-08 21:22   ` Alex Williamson
@ 2020-10-30 10:25     ` Michael S. Tsirkin
  2020-10-30 17:26       ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 10:25 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Jean-Philippe Brucker, qemu-devel, peterx, eric.auger, pbonzini,
	bbhushan2

On Thu, Oct 08, 2020 at 03:22:14PM -0600, Alex Williamson wrote:
> On Thu,  8 Oct 2020 19:15:58 +0200
> Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:
> 
> > IOMMUs may declare memory regions spanning from 0 to UINT64_MAX. When
> > attempting to deal with such region, vfio_listener_region_del() passes a
> > size of 2^64 to int128_get64() which throws an assertion failure.  Even
> > ignoring this, the VFIO_IOMMU_DMA_MAP ioctl cannot handle this size
> > since the size field is 64-bit. Split the request in two.
> > 
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > ---
> > For me this happens when memory_region_iommu_set_page_size_mask()
> > returns an error because a hotplugged endpoint uses an incompatible page
> > mask. vfio_connect_container() releases the memory listener which calls
> > region_del() with the 2^64 IOMMU region. There are probably other ways
> > to reach this.
> > ---
> >  hw/vfio/common.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index e66054b02a7..e90a89c389e 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -797,6 +797,17 @@ static void vfio_listener_region_del(MemoryListener *listener,
> >      }
> >  
> >      if (try_unmap) {
> > +        if (llsize == int128_2_64()) {
> > +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> > +            llsize = int128_rshift(llsize, 1);
> > +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> > +            if (ret) {
> > +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> > +                             "0x%"HWADDR_PRIx") = %d (%m)",
> > +                             container, iova, int128_get64(llsize), ret);
> > +            }
> > +            iova += int128_get64(llsize);
> > +        }
> >          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> >          if (ret) {
> >              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> 
> We're still susceptible that splitting the range in two could result in
> unmap calls that attempt to bisect a mapping that spans both ranges.
> Both unmap calls would fail in that case.  I think we could solve this
> more completely with a high water marker, but this is probably good enough
> for now.
> 
> Acked-by: Alex Williamson <alex.williamson@redhat.com>


Are you merging this then?
If yes

Acked-by: Michael S. Tsirkin <mst@redhat.com>




* Re: [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size
  2020-10-08 21:22   ` Alex Williamson
@ 2020-10-30 10:26     ` Michael S. Tsirkin
  2020-10-30 15:19       ` Jean-Philippe Brucker
  0 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 10:26 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Jean-Philippe Brucker, qemu-devel, peterx, eric.auger, pbonzini,
	bbhushan2

On Thu, Oct 08, 2020 at 03:22:54PM -0600, Alex Williamson wrote:
> On Thu,  8 Oct 2020 19:15:56 +0200
> Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:
> 
> > From: Bharat Bhushan <bbhushan2@marvell.com>
> > 
> > Set IOMMU supported page size mask same as host Linux supported page
> > size mask.
> > 
> > Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > ---
> >  hw/vfio/common.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index 13471ae2943..e66054b02a7 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -636,6 +636,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >                              int128_get64(llend),
> >                              iommu_idx);
> >  
> > +        ret = memory_region_iommu_set_page_size_mask(giommu->iommu,
> > +                                                     container->pgsizes,
> > +                                                     &err);
> > +        if (ret) {
> > +            g_free(giommu);
> > +            goto fail;
> > +        }
> > +
> >          ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
> >                                                      &err);
> >          if (ret) {
> 
> Acked-by: Alex Williamson <alex.williamson@redhat.com>

This one, too, seems independent of the rest of the
patchset and can be merged separately, right?
If so

Acked-by: Michael S. Tsirkin <mst@redhat.com>




* Re: [PATCH v10 00/10] virtio-iommu: VFIO integration
  2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
                   ` (10 preceding siblings ...)
  2020-10-16 13:13 ` [PATCH v10 00/10] virtio-iommu: VFIO integration Auger Eric
@ 2020-10-30 10:27 ` Michael S. Tsirkin
  2020-10-30 10:48   ` Jean-Philippe Brucker
  11 siblings, 1 reply; 47+ messages in thread
From: Michael S. Tsirkin @ 2020-10-30 10:27 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: qemu-devel, peterx, eric.auger, alex.williamson, pbonzini, bbhushan2

On Thu, Oct 08, 2020 at 07:15:48PM +0200, Jean-Philippe Brucker wrote:
> This series adds support for VFIO endpoints to virtio-iommu.
> 
> Versions 1 to 9 were posted by Bharat Bhushan, but I am taking over for
> now since he doesn't have much time to spend on it. Thanks again Bharat
> for the work!

ok so just minor things left, correct? Do you plan to post v11?

> Two major changes since [v9]:
> 
> * Don't use per-endoint page_size_mask properties. Instead keep a global
>   page size for the virtio-iommu device, updated when adding a VFIO
>   endpoint. Reject hotplug if the page size is incompatible.
> 
> * Try to make the MAP/UNMAP paths more efficient, by keeping track of
>   memory region within the endpoint structure.
> 
> More testing would be appreciated, since I can only test using a software
> model as host at the moment. But it does seem to hold well with PCIe
> hotplug/unplug, and pass-through to guest userspace, which are no mean
> feat.
> 
> Note that one page size combination is not supported: host 64kB guest
> 4kB cannot work, because the host IOMMU driver will automatically pick
> 64kB pages which prevents mapping at a smaller granule. Supporting this
> case would require introducing a page size negotiation mechanism from
> the guest all the way to the host IOMMU driver. Possible, but not
> planned at the moment.
> 
> [v9] https://lore.kernel.org/qemu-devel/20200323084617.1782-1-bbhushan2@marvell.com/
> 
> Bharat Bhushan (7):
>   virtio-iommu: Add memory notifiers for map/unmap
>   virtio-iommu: Call memory notifiers in attach/detach
>   virtio-iommu: Add replay() memory region callback
>   virtio-iommu: Add notify_flag_changed() memory region callback
>   memory: Add interface to set iommu page size mask
>   vfio: Set IOMMU page size as per host supported page size
>   virtio-iommu: Set supported page size mask
> 
> Jean-Philippe Brucker (3):
>   virtio-iommu: Fix virtio_iommu_mr()
>   virtio-iommu: Store memory region in endpoint struct
>   vfio: Don't issue full 2^64 unmap
> 
>  include/exec/memory.h    |  26 +++++
>  hw/vfio/common.c         |  19 ++++
>  hw/virtio/virtio-iommu.c | 204 ++++++++++++++++++++++++++++++++++++++-
>  softmmu/memory.c         |  13 +++
>  hw/virtio/trace-events   |   5 +
>  5 files changed, 265 insertions(+), 2 deletions(-)
> 
> -- 
> 2.28.0




* Re: [PATCH v10 00/10] virtio-iommu: VFIO integration
  2020-10-30 10:27 ` Michael S. Tsirkin
@ 2020-10-30 10:48   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-30 10:48 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, peterx, eric.auger, alex.williamson, pbonzini, bbhushan2

On Fri, Oct 30, 2020 at 06:27:35AM -0400, Michael S. Tsirkin wrote:
> On Thu, Oct 08, 2020 at 07:15:48PM +0200, Jean-Philippe Brucker wrote:
> > This series adds support for VFIO endpoints to virtio-iommu.
> > 
> > Versions 1 to 9 were posted by Bharat Bhushan, but I am taking over for
> > now since he doesn't have much time to spend on it. Thanks again Bharat
> > for the work!
> 
> ok so just minor things left, correct? Do you plan to post v11?

Yes, today or early next week

Thanks,
Jean



* Re: [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size
  2020-10-30 10:26     ` Michael S. Tsirkin
@ 2020-10-30 15:19       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 47+ messages in thread
From: Jean-Philippe Brucker @ 2020-10-30 15:19 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, peterx, eric.auger, Alex Williamson, pbonzini, bbhushan2

On Fri, Oct 30, 2020 at 06:26:31AM -0400, Michael S. Tsirkin wrote:
> On Thu, Oct 08, 2020 at 03:22:54PM -0600, Alex Williamson wrote:
> > On Thu,  8 Oct 2020 19:15:56 +0200
> > Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:
> > 
> > > From: Bharat Bhushan <bbhushan2@marvell.com>
> > > 
> > > Set IOMMU supported page size mask same as host Linux supported page
> > > size mask.
> > > 
> > > Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > ---
> > >  hw/vfio/common.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > > index 13471ae2943..e66054b02a7 100644
> > > --- a/hw/vfio/common.c
> > > +++ b/hw/vfio/common.c
> > > @@ -636,6 +636,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
> > >                              int128_get64(llend),
> > >                              iommu_idx);
> > >  
> > > +        ret = memory_region_iommu_set_page_size_mask(giommu->iommu,
> > > +                                                     container->pgsizes,
> > > +                                                     &err);
> > > +        if (ret) {
> > > +            g_free(giommu);
> > > +            goto fail;
> > > +        }
> > > +
> > >          ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
> > >                                                      &err);
> > >          if (ret) {
> > 
> > Acked-by: Alex Williamson <alex.williamson@redhat.com>
> 
> This one too, seems independent of the rest of the
> patchset and can be merged separately, right?

This depends on patch 7 which introduces the function

Thanks,
Jean

> If so
> 
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> 



* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-30 10:25     ` Michael S. Tsirkin
@ 2020-10-30 17:26       ` Alex Williamson
  2020-10-30 18:19         ` Paolo Bonzini
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-10-30 17:26 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jean-Philippe Brucker, qemu-devel, peterx, eric.auger, pbonzini,
	bbhushan2

On Fri, 30 Oct 2020 06:25:34 -0400
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Thu, Oct 08, 2020 at 03:22:14PM -0600, Alex Williamson wrote:
> > On Thu,  8 Oct 2020 19:15:58 +0200
> > Jean-Philippe Brucker <jean-philippe@linaro.org> wrote:
> >   
> > > IOMMUs may declare memory regions spanning from 0 to UINT64_MAX. When
> > > attempting to deal with such region, vfio_listener_region_del() passes a
> > > size of 2^64 to int128_get64() which throws an assertion failure.  Even
> > > ignoring this, the VFIO_IOMMU_DMA_MAP ioctl cannot handle this size
> > > since the size field is 64-bit. Split the request in two.
> > > 
> > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > ---
> > > For me this happens when memory_region_iommu_set_page_size_mask()
> > > returns an error because a hotplugged endpoint uses an incompatible page
> > > mask. vfio_connect_container() releases the memory listener which calls
> > > region_del() with the 2^64 IOMMU region. There are probably other ways
> > > to reach this.
> > > ---
> > >  hw/vfio/common.c | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > > 
> > > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > > index e66054b02a7..e90a89c389e 100644
> > > --- a/hw/vfio/common.c
> > > +++ b/hw/vfio/common.c
> > > @@ -797,6 +797,17 @@ static void vfio_listener_region_del(MemoryListener *listener,
> > >      }
> > >  
> > >      if (try_unmap) {
> > > +        if (llsize == int128_2_64()) {
> > > +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> > > +            llsize = int128_rshift(llsize, 1);
> > > +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> > > +            if (ret) {
> > > +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> > > +                             "0x%"HWADDR_PRIx") = %d (%m)",
> > > +                             container, iova, int128_get64(llsize), ret);
> > > +            }
> > > +            iova += int128_get64(llsize);
> > > +        }
> > >          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> > >          if (ret) {
> > >              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "  
> > 
> > We're still susceptible that splitting the range in two could result in
> > unmap calls that attempt to bisect a mapping that spans both ranges.
> > Both unmap calls would fail in that case.  I think we could solve this
> > more completely with a high water marker, but this is probably good enough
> > for now.
> > 
> > Acked-by: Alex Williamson <alex.williamson@redhat.com>  
> 
> 
> Are you merging this then?
> If yes
> 
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> 

No, the series is focused on virtio-iommu, so I assumed you or
Eric would merge it; thus I provided an Ack.  Thanks,

Alex




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-30 17:26       ` Alex Williamson
@ 2020-10-30 18:19         ` Paolo Bonzini
  2020-11-02 17:37           ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Paolo Bonzini @ 2020-10-30 18:19 UTC (permalink / raw)
  To: Alex Williamson, Michael S. Tsirkin
  Cc: Jean-Philippe Brucker, bbhushan2, qemu-devel, peterx, eric.auger

On 30/10/20 18:26, Alex Williamson wrote:
>>  
>>      if (try_unmap) {
>> +        if (llsize == int128_2_64()) {
>> +            /* The unmap ioctl doesn't accept a full 64-bit span. */
>> +            llsize = int128_rshift(llsize, 1);
>> +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
>> +            if (ret) {
>> +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
>> +                             "0x%"HWADDR_PRIx") = %d (%m)",
>> +                             container, iova, int128_get64(llsize), ret);
>> +            }
>> +            iova += int128_get64(llsize);
>> +        }
>>          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
>>          if (ret) {
>>              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "  
> We're still susceptible that splitting the range in two could result in
> unmap calls that attempt to bisect a mapping that spans both ranges.
> Both unmap calls would fail in that case.  I think we could solve this
> more completely with a high water marker, but this is probably good enough
> for now.
> 
> Acked-by: Alex Williamson <alex.williamson@redhat.com>  

Could it also be fixed by passing an Int128 to vfio_dma_unmap?

Paolo




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-10-30 18:19         ` Paolo Bonzini
@ 2020-11-02 17:37           ` Alex Williamson
  2020-11-02 17:44             ` Paolo Bonzini
  0 siblings, 1 reply; 47+ messages in thread
From: Alex Williamson @ 2020-11-02 17:37 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jean-Philippe Brucker, Michael S. Tsirkin, qemu-devel, peterx,
	eric.auger, bbhushan2

On Fri, 30 Oct 2020 19:19:14 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 30/10/20 18:26, Alex Williamson wrote:
> >>  
> >>      if (try_unmap) {
> >> +        if (llsize == int128_2_64()) {
> >> +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> >> +            llsize = int128_rshift(llsize, 1);
> >> +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> >> +            if (ret) {
> >> +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> >> +                             "0x%"HWADDR_PRIx") = %d (%m)",
> >> +                             container, iova, int128_get64(llsize), ret);
> >> +            }
> >> +            iova += int128_get64(llsize);
> >> +        }
> >>          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> >>          if (ret) {
> >>              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "    
> > We're still susceptible that splitting the range in two could result in
> > unmap calls that attempt to bisect a mapping that spans both ranges.
> > Both unmap calls would fail in that case.  I think we could solve this
> > more completely with a high water marker, but this is probably good enough
> > for now.
> > 
> > Acked-by: Alex Williamson <alex.williamson@redhat.com>    
> 
> Could it also be fixed by passing an Int128 to vfio_dma_unmap?

I think we still have the issue at the vfio ioctl which takes __u64 iova
and size parameters, in bytes.  Therefore we cannot unmap an entire
64-bit address space with a single ioctl call.  We'd need to make use
of a flag to modify the ioctl behavior to work in terms of some page size
to achieve this, for example if iova and size were in terms of 4K
pages, we wouldn't have this issue.  Thanks,

Alex




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-11-02 17:37           ` Alex Williamson
@ 2020-11-02 17:44             ` Paolo Bonzini
  2020-11-02 18:00               ` Alex Williamson
  0 siblings, 1 reply; 47+ messages in thread
From: Paolo Bonzini @ 2020-11-02 17:44 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Jean-Philippe Brucker, Michael S. Tsirkin, qemu-devel, peterx,
	eric.auger, bbhushan2

On 02/11/20 18:37, Alex Williamson wrote:
> I think we still have the issue at the vfio ioctl which takes __u64 iova
> and size parameters, in bytes.  Therefore we cannot unmap an entire
> 64-bit address space with a single ioctl call.  We'd need to make use
> of a flag to modify the ioctl behavior to work in terms of some page size
> to achieve this, for example if iova and size were in terms of 4K
> pages, we wouldn't have this issue.  Thanks,

What happens to the last page if size is unaligned (e.g. iova==0, size==
2^64-1)?

Paolo




* Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
  2020-11-02 17:44             ` Paolo Bonzini
@ 2020-11-02 18:00               ` Alex Williamson
  0 siblings, 0 replies; 47+ messages in thread
From: Alex Williamson @ 2020-11-02 18:00 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Jean-Philippe Brucker, Michael S. Tsirkin, qemu-devel, peterx,
	eric.auger, bbhushan2

On Mon, 2 Nov 2020 18:44:11 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 02/11/20 18:37, Alex Williamson wrote:
> > I think we still have the issue at the vfio ioctl which takes __u64 iova
> > and size parameters, in bytes.  Therefore we cannot unmap an entire
> > 64-bit address space with a single ioctl call.  We'd need to make use
> > of a flag to modify the ioctl behavior to work terms of some page size
> > to achieve this, for example if iova and size were in terms of 4K
> > pages, we wouldn't have this issue.  Thanks,  
> 
> What happens to the last page if size is unaligned (e.g. iova==0, size==
> 2^64-1)?

Both args are currently required to be aligned to the minimum IOMMU
page size.  Thanks,

Alex




end of thread, other threads:[~2020-11-02 18:02 UTC | newest]

Thread overview: 47+ messages
2020-10-08 17:15 [PATCH v10 00/10] virtio-iommu: VFIO integration Jean-Philippe Brucker
2020-10-08 17:15 ` [PATCH v10 01/10] virtio-iommu: Fix virtio_iommu_mr() Jean-Philippe Brucker
2020-10-16  7:36   ` Auger Eric
2020-10-19 21:36   ` Peter Xu
2020-10-08 17:15 ` [PATCH v10 02/10] virtio-iommu: Store memory region in endpoint struct Jean-Philippe Brucker
2020-10-16  7:37   ` Auger Eric
2020-10-08 17:15 ` [PATCH v10 03/10] virtio-iommu: Add memory notifiers for map/unmap Jean-Philippe Brucker
2020-10-16  7:58   ` Auger Eric
2020-10-22 16:41     ` Jean-Philippe Brucker
2020-10-08 17:15 ` [PATCH v10 04/10] virtio-iommu: Call memory notifiers in attach/detach Jean-Philippe Brucker
2020-10-16  8:05   ` Auger Eric
2020-10-08 17:15 ` [PATCH v10 05/10] virtio-iommu: Add replay() memory region callback Jean-Philippe Brucker
2020-10-16  9:12   ` Auger Eric
2020-10-22 16:42     ` Jean-Philippe Brucker
2020-10-08 17:15 ` [PATCH v10 06/10] virtio-iommu: Add notify_flag_changed() " Jean-Philippe Brucker
2020-10-16  8:18   ` Auger Eric
2020-10-08 17:15 ` [PATCH v10 07/10] memory: Add interface to set iommu page size mask Jean-Philippe Brucker
2020-10-16  9:24   ` Auger Eric
2020-10-22 16:43     ` Jean-Philippe Brucker
2020-10-19 21:36   ` Peter Xu
2020-10-08 17:15 ` [PATCH v10 08/10] vfio: Set IOMMU page size as per host supported page size Jean-Philippe Brucker
2020-10-08 21:22   ` Alex Williamson
2020-10-30 10:26     ` Michael S. Tsirkin
2020-10-30 15:19       ` Jean-Philippe Brucker
2020-10-16  9:25   ` Auger Eric
2020-10-08 17:15 ` [PATCH v10 09/10] virtio-iommu: Set supported page size mask Jean-Philippe Brucker
2020-10-16 13:08   ` Auger Eric
2020-10-22 16:43     ` Jean-Philippe Brucker
2020-10-19 21:35   ` Peter Xu
2020-10-22 16:39     ` Jean-Philippe Brucker
2020-10-22 20:56       ` Peter Xu
2020-10-23  7:48         ` Jean-Philippe Brucker
2020-10-23 16:47           ` Peter Xu
2020-10-27 17:38             ` Jean-Philippe Brucker
2020-10-30 10:24               ` Michael S. Tsirkin
2020-10-08 17:15 ` [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap Jean-Philippe Brucker
2020-10-08 21:22   ` Alex Williamson
2020-10-30 10:25     ` Michael S. Tsirkin
2020-10-30 17:26       ` Alex Williamson
2020-10-30 18:19         ` Paolo Bonzini
2020-11-02 17:37           ` Alex Williamson
2020-11-02 17:44             ` Paolo Bonzini
2020-11-02 18:00               ` Alex Williamson
2020-10-16  9:47   ` Auger Eric
2020-10-16 13:13 ` [PATCH v10 00/10] virtio-iommu: VFIO integration Auger Eric
2020-10-30 10:27 ` Michael S. Tsirkin
2020-10-30 10:48   ` Jean-Philippe Brucker
