* [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge
@ 2015-09-30  2:13 David Gibson
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer David Gibson
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

Hi Alex,

Here are the parts of my recent series to allow VFIO devices on the
spapr-pci-host-bridge device which affect the core VFIO code.  They've
been revised according to the comments from yourself and others.

There's also one patch for the memory subsystem.

Note that while these are motivated by the needs of the sPAPR code,
the changes should all be generally correct, and will allow safer and
more flexible use of VFIO devices in other situations as well.

Please apply.

Changes since v2:
  * Reworked IOMMU notifier replay mechanism according to Paolo's
    suggestions.
Changes since v1:
  * Assorted minor cleanups based on comments.

David Gibson (7):
  vfio: Remove unneeded union from VFIOContainer
  vfio: Generalize vfio_listener_region_add failure path
  vfio: Check guest IOVA ranges against host IOMMU capabilities
  vfio: Record host IOMMU's available IO page sizes
  memory: Allow replay of IOMMU mapping notifications
  vfio: Allow hotplug of containers onto existing guest IOMMU mappings
  vfio: Expose a VFIO PCI device's group for EEH

 hw/vfio/common.c              | 140 ++++++++++++++++++++++++++----------------
 hw/vfio/pci.c                 |  14 +++++
 include/exec/memory.h         |  13 ++++
 include/hw/vfio/vfio-common.h |  23 +++----
 include/hw/vfio/vfio-pci.h    |  11 ++++
 memory.c                      |  20 ++++++
 6 files changed, 154 insertions(+), 67 deletions(-)
 create mode 100644 include/hw/vfio/vfio-pci.h

-- 
2.4.3

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  8:19   ` Laurent Vivier
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 2/7] vfio: Generalize vfio_listener_region_add failure path David Gibson
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

Currently the VFIOContainer iommu_data field contains a union with
different information for different host iommu types.  However:
   * It only actually contains information for the x86-like "Type1" iommu
   * Because we have a common listener the Type1 fields are actually used
on all IOMMU types, including the SPAPR TCE type as well

In fact we now have a general structure for the listener which is unlikely
to ever need per-iommu-type information, so this patch removes the union.

In a similar way we can unify the setup of the vfio memory listener in
vfio_connect_container() that is currently split across a switch on iommu
type, but is effectively the same in both cases.

The iommu_data.release pointer was only needed as a cleanup function
which would handle potentially different data in the union.  With the
union gone, it too can be removed.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/vfio/common.c              | 52 ++++++++++++++++---------------------------
 include/hw/vfio/vfio-common.h | 16 +++----------
 2 files changed, 22 insertions(+), 46 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 0d341a3..1545f62 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -315,8 +315,7 @@ out:
 static void vfio_listener_region_add(MemoryListener *listener,
                                      MemoryRegionSection *section)
 {
-    VFIOContainer *container = container_of(listener, VFIOContainer,
-                                            iommu_data.type1.listener);
+    VFIOContainer *container = container_of(listener, VFIOContainer, listener);
     hwaddr iova, end;
     Int128 llend;
     void *vaddr;
@@ -406,9 +405,9 @@ static void vfio_listener_region_add(MemoryListener *listener,
          * can gracefully fail.  Runtime, there's not much we can do other
          * than throw a hardware error.
          */
-        if (!container->iommu_data.type1.initialized) {
-            if (!container->iommu_data.type1.error) {
-                container->iommu_data.type1.error = ret;
+        if (!container->initialized) {
+            if (!container->error) {
+                container->error = ret;
             }
         } else {
             hw_error("vfio: DMA mapping failed, unable to continue");
@@ -419,8 +418,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
 static void vfio_listener_region_del(MemoryListener *listener,
                                      MemoryRegionSection *section)
 {
-    VFIOContainer *container = container_of(listener, VFIOContainer,
-                                            iommu_data.type1.listener);
+    VFIOContainer *container = container_of(listener, VFIOContainer, listener);
     hwaddr iova, end;
     int ret;
 
@@ -485,7 +483,7 @@ static const MemoryListener vfio_memory_listener = {
 
 static void vfio_listener_release(VFIOContainer *container)
 {
-    memory_listener_unregister(&container->iommu_data.type1.listener);
+    memory_listener_unregister(&container->listener);
 }
 
 int vfio_mmap_region(Object *obj, VFIORegion *region,
@@ -683,21 +681,6 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             ret = -errno;
             goto free_container_exit;
         }
-
-        container->iommu_data.type1.listener = vfio_memory_listener;
-        container->iommu_data.release = vfio_listener_release;
-
-        memory_listener_register(&container->iommu_data.type1.listener,
-                                 container->space->as);
-
-        if (container->iommu_data.type1.error) {
-            ret = container->iommu_data.type1.error;
-            error_report("vfio: memory listener initialization failed for container");
-            goto listener_release_exit;
-        }
-
-        container->iommu_data.type1.initialized = true;
-
     } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
@@ -723,19 +706,24 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             ret = -errno;
             goto free_container_exit;
         }
-
-        container->iommu_data.type1.listener = vfio_memory_listener;
-        container->iommu_data.release = vfio_listener_release;
-
-        memory_listener_register(&container->iommu_data.type1.listener,
-                                 container->space->as);
-
     } else {
         error_report("vfio: No available IOMMU models");
         ret = -EINVAL;
         goto free_container_exit;
     }
 
+    container->listener = vfio_memory_listener;
+
+    memory_listener_register(&container->listener, container->space->as);
+
+    if (container->error) {
+        ret = container->error;
+        error_report("vfio: memory listener initialization failed for container");
+        goto listener_release_exit;
+    }
+
+    container->initialized = true;
+
     QLIST_INIT(&container->group_list);
     QLIST_INSERT_HEAD(&space->containers, container, next);
 
@@ -774,9 +762,7 @@ static void vfio_disconnect_container(VFIOGroup *group)
         VFIOAddressSpace *space = container->space;
         VFIOGuestIOMMU *giommu, *tmp;
 
-        if (container->iommu_data.release) {
-            container->iommu_data.release(container);
-        }
+        vfio_listener_release(container);
         QLIST_REMOVE(container, next);
 
         QLIST_FOREACH_SAFE(giommu, &container->giommu_list, giommu_next, tmp) {
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 9b9901f..fbbe6de 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -59,22 +59,12 @@ typedef struct VFIOAddressSpace {
 
 struct VFIOGroup;
 
-typedef struct VFIOType1 {
-    MemoryListener listener;
-    int error;
-    bool initialized;
-} VFIOType1;
-
 typedef struct VFIOContainer {
     VFIOAddressSpace *space;
     int fd; /* /dev/vfio/vfio, empowered by the attached groups */
-    struct {
-        /* enable abstraction to support various iommu backends */
-        union {
-            VFIOType1 type1;
-        };
-        void (*release)(struct VFIOContainer *);
-    } iommu_data;
+    MemoryListener listener;
+    int error;
+    bool initialized;
     QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
     QLIST_HEAD(, VFIOGroup) group_list;
     QLIST_ENTRY(VFIOContainer) next;
-- 
2.4.3


* [Qemu-devel] [PATCHv3 2/7] vfio: Generalize vfio_listener_region_add failure path
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  8:20   ` Laurent Vivier
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 3/7] vfio: Check guest IOVA ranges against host IOMMU capabilities David Gibson
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

If a DMA mapping operation fails in vfio_listener_region_add(), it
checks whether we've already completed initial setup of the container.
If setup is still in progress, it stores the error so the setup code
can fail gracefully; otherwise it throws a hw_error().

There are other potential failure cases in vfio_listener_region_add()
which could benefit from the same logic, so move it to its own
fail: block.  Later patches can use this to make other failure cases
fail as gracefully as possible under the circumstances.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
---
 hw/vfio/common.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 1545f62..95a4850 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -399,19 +399,23 @@ static void vfio_listener_region_add(MemoryListener *listener,
         error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
                      "0x%"HWADDR_PRIx", %p) = %d (%m)",
                      container, iova, end - iova, vaddr, ret);
+        goto fail;
+    }
 
-        /*
-         * On the initfn path, store the first error in the container so we
-         * can gracefully fail.  Runtime, there's not much we can do other
-         * than throw a hardware error.
-         */
-        if (!container->initialized) {
-            if (!container->error) {
-                container->error = ret;
-            }
-        } else {
-            hw_error("vfio: DMA mapping failed, unable to continue");
+    return;
+
+fail:
+    /*
+     * On the initfn path, store the first error in the container so we
+     * can gracefully fail.  Runtime, there's not much we can do other
+     * than throw a hardware error.
+     */
+    if (!container->initialized) {
+        if (!container->error) {
+            container->error = ret;
         }
+    } else {
+        hw_error("vfio: DMA mapping failed, unable to continue");
     }
 }
 
-- 
2.4.3


* [Qemu-devel] [PATCHv3 3/7] vfio: Check guest IOVA ranges against host IOMMU capabilities
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer David Gibson
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 2/7] vfio: Generalize vfio_listener_region_add failure path David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  8:25   ` Laurent Vivier
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 4/7] vfio: Record host IOMMU's available IO page sizes David Gibson
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

The current vfio core code assumes that the host IOMMU is capable of
mapping any IOVA the guest wants to use to wherever we need it.  However,
real IOMMUs generally only support translating a certain range of IOVAs
(the "DMA window"), not a full 64-bit address space.

The common x86 IOMMUs support a wide enough range that guests are very
unlikely to go beyond it in practice.  However, the IOMMU used on IBM Power
machines - in the default configuration - supports only a much more limited
IOVA range, usually 0..2GiB.

If the guest attempts to set up an IOVA range that the host IOMMU can't
map, qemu won't report an error until it actually attempts to map a bad
IOVA.  If guest RAM is being mapped directly into the IOMMU (i.e. no guest
visible IOMMU) then this will show up very quickly.  If there is a guest
visible IOMMU, however, the problem might not show up until much later when
the guest actually attempts to DMA with an IOVA the host can't handle.

This patch adds a test so that we will detect earlier if the guest is
attempting to use IOVA ranges that the host IOMMU won't be able to deal
with.

For now, we assume that "Type1" (x86) IOMMUs can support any IOVA.  This
is incorrect, but no worse than what we have already.  We can't do better
for now because the Type1 kernel interface doesn't tell us what IOVA range
the IOMMU actually supports.

For the Power "sPAPR TCE" IOMMU, however, we can retrieve the supported
IOVA range and validate guest IOVA ranges against it, and this patch does
so.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
---
 hw/vfio/common.c              | 40 +++++++++++++++++++++++++++++++++++++---
 include/hw/vfio/vfio-common.h |  6 ++++++
 2 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 95a4850..2faf492 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -343,14 +343,22 @@ static void vfio_listener_region_add(MemoryListener *listener,
     if (int128_ge(int128_make64(iova), llend)) {
         return;
     }
+    end = int128_get64(llend);
+
+    if ((iova < container->min_iova) || ((end - 1) > container->max_iova)) {
+        error_report("vfio: IOMMU container %p can't map guest IOVA region"
+                     " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx,
+                     container, iova, end - 1);
+        ret = -EFAULT;
+        goto fail;
+    }
 
     memory_region_ref(section->mr);
 
     if (memory_region_is_iommu(section->mr)) {
         VFIOGuestIOMMU *giommu;
 
-        trace_vfio_listener_region_add_iommu(iova,
-                    int128_get64(int128_sub(llend, int128_one())));
+        trace_vfio_listener_region_add_iommu(iova, end - 1);
         /*
          * FIXME: We should do some checking to see if the
          * capabilities of the host VFIO IOMMU are adequate to model
@@ -387,7 +395,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
 
     /* Here we assume that memory_region_is_ram(section->mr)==true */
 
-    end = int128_get64(llend);
     vaddr = memory_region_get_ram_ptr(section->mr) +
             section->offset_within_region +
             (iova - section->offset_within_address_space);
@@ -685,7 +692,19 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             ret = -errno;
             goto free_container_exit;
         }
+
+        /*
+         * FIXME: This assumes that a Type1 IOMMU can map any 64-bit
+         * IOVA whatsoever.  That's not actually true, but the current
+         * kernel interface doesn't tell us what it can map, and the
+         * existing Type1 IOMMUs generally support any IOVA we're
+         * going to actually try in practice.
+         */
+        container->min_iova = 0;
+        container->max_iova = (hwaddr)-1;
     } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+        struct vfio_iommu_spapr_tce_info info;
+
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
             error_report("vfio: failed to set group container: %m");
@@ -710,6 +729,21 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             ret = -errno;
             goto free_container_exit;
         }
+
+        /*
+         * This only considers the host IOMMU's 32-bit window.  At
+         * some point we need to add support for the optional 64-bit
+         * window and dynamic windows
+         */
+        info.argsz = sizeof(info);
+        ret = ioctl(fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
+        if (ret) {
+            error_report("vfio: VFIO_IOMMU_SPAPR_TCE_GET_INFO failed: %m");
+            ret = -errno;
+            goto free_container_exit;
+        }
+        container->min_iova = info.dma32_window_start;
+        container->max_iova = container->min_iova + info.dma32_window_size - 1;
     } else {
         error_report("vfio: No available IOMMU models");
         ret = -EINVAL;
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index fbbe6de..27a14c0 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -65,6 +65,12 @@ typedef struct VFIOContainer {
     MemoryListener listener;
     int error;
     bool initialized;
+    /*
+     * This assumes the host IOMMU can support only a single
+     * contiguous IOVA window.  We may need to generalize that in
+     * future
+     */
+    hwaddr min_iova, max_iova;
     QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
     QLIST_HEAD(, VFIOGroup) group_list;
     QLIST_ENTRY(VFIOContainer) next;
-- 
2.4.3


* [Qemu-devel] [PATCHv3 4/7] vfio: Record host IOMMU's available IO page sizes
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
                   ` (2 preceding siblings ...)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 3/7] vfio: Check guest IOVA ranges against host IOMMU capabilities David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  8:27   ` Laurent Vivier
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications David Gibson
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

Depending on the host IOMMU type, we determine and record the available
page sizes for IOMMU translation.  We'll need this for other validation in
future patches.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
---
 hw/vfio/common.c              | 13 +++++++++++++
 include/hw/vfio/vfio-common.h |  1 +
 2 files changed, 14 insertions(+)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 2faf492..f666de2 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -677,6 +677,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
     if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU) ||
         ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU)) {
         bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU);
+        struct vfio_iommu_type1_info info;
 
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
@@ -702,6 +703,15 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
          */
         container->min_iova = 0;
         container->max_iova = (hwaddr)-1;
+
+        /* Assume just 4K IOVA page size */
+        container->iova_pgsizes = 0x1000;
+        info.argsz = sizeof(info);
+        ret = ioctl(fd, VFIO_IOMMU_GET_INFO, &info);
+        /* Ignore errors */
+        if ((ret == 0) && (info.flags & VFIO_IOMMU_INFO_PGSIZES)) {
+            container->iova_pgsizes = info.iova_pgsizes;
+        }
     } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
         struct vfio_iommu_spapr_tce_info info;
 
@@ -744,6 +754,9 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
         }
         container->min_iova = info.dma32_window_start;
         container->max_iova = container->min_iova + info.dma32_window_size - 1;
+
+        /* Assume just 4K IOVA pages for now */
+        container->iova_pgsizes = 0x1000;
     } else {
         error_report("vfio: No available IOMMU models");
         ret = -EINVAL;
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 27a14c0..f037f3c 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -71,6 +71,7 @@ typedef struct VFIOContainer {
      * future
      */
     hwaddr min_iova, max_iova;
+    uint64_t iova_pgsizes;
     QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
     QLIST_HEAD(, VFIOGroup) group_list;
     QLIST_ENTRY(VFIOContainer) next;
-- 
2.4.3


* [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
                   ` (3 preceding siblings ...)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 4/7] vfio: Record host IOMMU's available IO page sizes David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  8:32   ` Laurent Vivier
                     ` (2 more replies)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings David Gibson
                   ` (2 subsequent siblings)
  7 siblings, 3 replies; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

When we have guest visible IOMMUs, we allow notifiers to be registered
which will be informed of all changes to IOMMU mappings.  This is used by
vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings.

However, unlike with a memory region listener, an iommu notifier won't be
told about any mappings which already exist in the (guest) IOMMU at the
time it is registered.  This can cause problems when hotplugging a VFIO
device onto a guest bus which had existing guest IOMMU mappings, but didn't
previously have any VFIO devices (and hence no host IOMMU mappings).

This adds a memory_region_iommu_replay() function to handle this case.  It
replays any existing mappings in an IOMMU memory region to a specified
notifier.  Because the IOMMU memory region doesn't internally remember the
granularity of the guest IOMMU, it has a small hack: the caller must
specify the granularity at which to replay mappings.

If there are finer mappings in the guest IOMMU, these will be reported in
the iotlb structures passed to the notifier, which it must handle (probably
by flagging an error).  This isn't new - the VFIO iommu notifier must
already handle notifications about guest IOMMU mappings too small for it
to represent in the host IOMMU.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 include/exec/memory.h | 13 +++++++++++++
 memory.c              | 20 ++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 5baaf48..0f07159 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -583,6 +583,19 @@ void memory_region_notify_iommu(MemoryRegion *mr,
 void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n);
 
 /**
+ * memory_region_iommu_replay: replay existing IOMMU translations to
+ * a notifier
+ *
+ * @mr: the memory region to observe
+ * @n: the notifier to which to replay iommu mappings
+ * @granularity: Minimum page granularity to replay notifications for
+ * @is_write: Whether to treat the replay as a translate "write"
+ *     through the iommu
+ */
+void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
+                                hwaddr granularity, bool is_write);
+
+/**
  * memory_region_unregister_iommu_notifier: unregister a notifier for
  * changes to IOMMU translation entries.
  *
diff --git a/memory.c b/memory.c
index ef87363..1b03d22 100644
--- a/memory.c
+++ b/memory.c
@@ -1403,6 +1403,26 @@ void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
     notifier_list_add(&mr->iommu_notify, n);
 }
 
+void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
+                                hwaddr granularity, bool is_write)
+{
+    hwaddr addr;
+    IOMMUTLBEntry iotlb;
+
+    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
+        iotlb = mr->iommu_ops->translate(mr, addr, is_write);
+        if (iotlb.perm != IOMMU_NONE) {
+            n->notify(n, &iotlb);
+        }
+
+        /* if (2^64 - MR size) < granularity, it's possible to get an
+         * infinite loop here.  This should catch such a wraparound */
+        if ((addr + granularity) < addr) {
+            break;
+        }
+    }
+}
+
 void memory_region_unregister_iommu_notifier(Notifier *n)
 {
     notifier_remove(n);
-- 
2.4.3


* [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
                   ` (4 preceding siblings ...)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  9:09   ` Laurent Vivier
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 7/7] vfio: Expose a VFIO PCI device's group for EEH David Gibson
  2015-10-02 18:12 ` [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge Alex Williamson
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

At present the memory listener used by vfio to keep host IOMMU mappings
in sync with the guest memory image assumes that if a guest IOMMU
appears, then it has no existing mappings.

This may not be true if a VFIO device is hotplugged onto a guest bus
which didn't previously include a VFIO device, and which has existing
guest IOMMU mappings.

Therefore, use the memory_region_iommu_replay() function added in the
previous patch to fix this case, replaying existing guest IOMMU mappings
and bringing the host IOMMU into sync with the guest IOMMU.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/vfio/common.c | 23 +++++++++--------------
 1 file changed, 9 insertions(+), 14 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index f666de2..6797208 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -312,6 +312,11 @@ out:
     rcu_read_unlock();
 }
 
+static hwaddr vfio_container_granularity(VFIOContainer *container)
+{
+    return (hwaddr)1 << ctz64(container->iova_pgsizes);
+}
+
 static void vfio_listener_region_add(MemoryListener *listener,
                                      MemoryRegionSection *section)
 {
@@ -369,26 +374,16 @@ static void vfio_listener_region_add(MemoryListener *listener,
          * would be the right place to wire that up (tell the KVM
          * device emulation the VFIO iommu handles to use).
          */
-        /*
-         * This assumes that the guest IOMMU is empty of
-         * mappings at this point.
-         *
-         * One way of doing this is:
-         * 1. Avoid sharing IOMMUs between emulated devices or different
-         * IOMMU groups.
-         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
-         * there are some mappings in IOMMU.
-         *
-         * VFIO on SPAPR does that. Other IOMMU models may do that different,
-         * they must make sure there are no existing mappings or
-         * loop through existing mappings to map them into VFIO.
-         */
         giommu = g_malloc0(sizeof(*giommu));
         giommu->iommu = section->mr;
         giommu->container = container;
         giommu->n.notify = vfio_iommu_map_notify;
         QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
+
         memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
+        memory_region_iommu_replay(giommu->iommu, &giommu->n,
+                                   vfio_container_granularity(container),
+                                   false);
 
         return;
     }
-- 
2.4.3


* [Qemu-devel] [PATCHv3 7/7] vfio: Expose a VFIO PCI device's group for EEH
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
                   ` (5 preceding siblings ...)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings David Gibson
@ 2015-09-30  2:13 ` David Gibson
  2015-09-30  9:12   ` Laurent Vivier
  2015-10-02 18:12 ` [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge Alex Williamson
  7 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2015-09-30  2:13 UTC (permalink / raw)
  To: alex.williamson, pbonzini
  Cc: lvivier, thuth, qemu-devel, mdroth, aik, abologna, gwshan,
	qemu-ppc, David Gibson

The Enhanced Error Handling (EEH) interface in PAPR operates on units of a
Partitionable Endpoint (PE).  For VFIO devices, the PE boundaries the guest
sees must match the PE (i.e. IOMMU group) boundaries on the host.  To
implement this, the sPAPR PCI code will need to discover from VFIO which
group a given device belongs to.

This exposes a new vfio_pci_device_group() function for this purpose.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/vfio/pci.c              | 14 ++++++++++++++
 include/hw/vfio/vfio-pci.h | 11 +++++++++++
 2 files changed, 25 insertions(+)
 create mode 100644 include/hw/vfio/vfio-pci.h

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index dcabb6d..49ae834 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -35,6 +35,8 @@
 #include "pci.h"
 #include "trace.h"
 
+#include "hw/vfio/vfio-pci.h"
+
 #define MSIX_CAP_LENGTH 12
 
 static void vfio_disable_interrupts(VFIOPCIDevice *vdev);
@@ -2312,6 +2314,18 @@ static void vfio_unregister_req_notifier(VFIOPCIDevice *vdev)
     vdev->req_enabled = false;
 }
 
+VFIOGroup *vfio_pci_device_group(PCIDevice *pdev)
+{
+    VFIOPCIDevice *vdev;
+
+    if (!object_dynamic_cast(OBJECT(pdev), "vfio-pci")) {
+        return NULL;
+    }
+
+    vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
+    return vdev->vbasedev.group;
+}
+
 static int vfio_initfn(PCIDevice *pdev)
 {
     VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
diff --git a/include/hw/vfio/vfio-pci.h b/include/hw/vfio/vfio-pci.h
new file mode 100644
index 0000000..32105f7
--- /dev/null
+++ b/include/hw/vfio/vfio-pci.h
@@ -0,0 +1,11 @@
+#ifndef VFIO_PCI_H
+#define VFIO_PCI_H
+
+#include "qemu/typedefs.h"
+
+/* We expose the concept of a VFIOGroup, though not its internals */
+typedef struct VFIOGroup VFIOGroup;
+
+extern VFIOGroup *vfio_pci_device_group(PCIDevice *pdev);
+
+#endif /* VFIO_PCI_H */
-- 
2.4.3

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 1/7] vfio: Remove unneeded union from VFIOContainer David Gibson
@ 2015-09-30  8:19   ` Laurent Vivier
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:19 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> Currently the VFIOContainer iommu_data field contains a union with
> different information for different host iommu types.  However:
>    * It only actually contains information for the x86-like "Type1" iommu
>    * Because we have a common listener the Type1 fields are actually used
> on all IOMMU types, including the SPAPR TCE type as well
> 
> In fact we now have a general structure for the listener which is unlikely
> to ever need per-iommu-type information, so this patch removes the union.
> 
> In a similar way we can unify the setup of the vfio memory listener in
> vfio_connect_container() that is currently split across a switch on iommu
> type, but is effectively the same in both cases.
> 
> The iommu_data.release pointer was only needed as a cleanup function
> which would handle potentially different data in the union.  With the
> union gone, it too can be removed.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/vfio/common.c              | 52 ++++++++++++++++---------------------------
>  include/hw/vfio/vfio-common.h | 16 +++----------
>  2 files changed, 22 insertions(+), 46 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 0d341a3..1545f62 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -315,8 +315,7 @@ out:
>  static void vfio_listener_region_add(MemoryListener *listener,
>                                       MemoryRegionSection *section)
>  {
> -    VFIOContainer *container = container_of(listener, VFIOContainer,
> -                                            iommu_data.type1.listener);
> +    VFIOContainer *container = container_of(listener, VFIOContainer, listener);
>      hwaddr iova, end;
>      Int128 llend;
>      void *vaddr;
> @@ -406,9 +405,9 @@ static void vfio_listener_region_add(MemoryListener *listener,
>           * can gracefully fail.  Runtime, there's not much we can do other
>           * than throw a hardware error.
>           */
> -        if (!container->iommu_data.type1.initialized) {
> -            if (!container->iommu_data.type1.error) {
> -                container->iommu_data.type1.error = ret;
> +        if (!container->initialized) {
> +            if (!container->error) {
> +                container->error = ret;
>              }
>          } else {
>              hw_error("vfio: DMA mapping failed, unable to continue");
> @@ -419,8 +418,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
>  static void vfio_listener_region_del(MemoryListener *listener,
>                                       MemoryRegionSection *section)
>  {
> -    VFIOContainer *container = container_of(listener, VFIOContainer,
> -                                            iommu_data.type1.listener);
> +    VFIOContainer *container = container_of(listener, VFIOContainer, listener);
>      hwaddr iova, end;
>      int ret;
>  
> @@ -485,7 +483,7 @@ static const MemoryListener vfio_memory_listener = {
>  
>  static void vfio_listener_release(VFIOContainer *container)
>  {
> -    memory_listener_unregister(&container->iommu_data.type1.listener);
> +    memory_listener_unregister(&container->listener);
>  }
>  
>  int vfio_mmap_region(Object *obj, VFIORegion *region,
> @@ -683,21 +681,6 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>              ret = -errno;
>              goto free_container_exit;
>          }
> -
> -        container->iommu_data.type1.listener = vfio_memory_listener;
> -        container->iommu_data.release = vfio_listener_release;
> -
> -        memory_listener_register(&container->iommu_data.type1.listener,
> -                                 container->space->as);
> -
> -        if (container->iommu_data.type1.error) {
> -            ret = container->iommu_data.type1.error;
> -            error_report("vfio: memory listener initialization failed for container");
> -            goto listener_release_exit;
> -        }
> -
> -        container->iommu_data.type1.initialized = true;
> -
>      } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
>          ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
>          if (ret) {
> @@ -723,19 +706,24 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>              ret = -errno;
>              goto free_container_exit;
>          }
> -
> -        container->iommu_data.type1.listener = vfio_memory_listener;
> -        container->iommu_data.release = vfio_listener_release;
> -
> -        memory_listener_register(&container->iommu_data.type1.listener,
> -                                 container->space->as);
> -
>      } else {
>          error_report("vfio: No available IOMMU models");
>          ret = -EINVAL;
>          goto free_container_exit;
>      }
>  
> +    container->listener = vfio_memory_listener;
> +
> +    memory_listener_register(&container->listener, container->space->as);
> +
> +    if (container->error) {
> +        ret = container->error;
> +        error_report("vfio: memory listener initialization failed for container");
> +        goto listener_release_exit;
> +    }
> +
> +    container->initialized = true;
> +
>      QLIST_INIT(&container->group_list);
>      QLIST_INSERT_HEAD(&space->containers, container, next);
>  
> @@ -774,9 +762,7 @@ static void vfio_disconnect_container(VFIOGroup *group)
>          VFIOAddressSpace *space = container->space;
>          VFIOGuestIOMMU *giommu, *tmp;
>  
> -        if (container->iommu_data.release) {
> -            container->iommu_data.release(container);
> -        }
> +        vfio_listener_release(container);
>          QLIST_REMOVE(container, next);
>  
>          QLIST_FOREACH_SAFE(giommu, &container->giommu_list, giommu_next, tmp) {
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 9b9901f..fbbe6de 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -59,22 +59,12 @@ typedef struct VFIOAddressSpace {
>  
>  struct VFIOGroup;
>  
> -typedef struct VFIOType1 {
> -    MemoryListener listener;
> -    int error;
> -    bool initialized;
> -} VFIOType1;
> -
>  typedef struct VFIOContainer {
>      VFIOAddressSpace *space;
>      int fd; /* /dev/vfio/vfio, empowered by the attached groups */
> -    struct {
> -        /* enable abstraction to support various iommu backends */
> -        union {
> -            VFIOType1 type1;
> -        };
> -        void (*release)(struct VFIOContainer *);
> -    } iommu_data;
> +    MemoryListener listener;
> +    int error;
> +    bool initialized;
>      QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
>      QLIST_HEAD(, VFIOGroup) group_list;
>      QLIST_ENTRY(VFIOContainer) next;
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 2/7] vfio: Generalize vfio_listener_region_add failure path
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 2/7] vfio: Generalize vfio_listener_region_add failure path David Gibson
@ 2015-09-30  8:20   ` Laurent Vivier
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:20 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> If a DMA mapping operation fails in vfio_listener_region_add() it
> checks to see if we've already completed initial setup of the
> container.  If so it reports an error so the setup code can fail
> gracefully, otherwise throws a hw_error().
> 
> There are other potential failure cases in vfio_listener_region_add()
> which could benefit from the same logic, so move it to its own
> fail: block.  Later patches can use this to extend other failure cases
> to fail as gracefully as possible under the circumstances.
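The single fail: block described here is the classic C goto-based error path; a self-contained sketch of the shape (hypothetical container struct, with hw_error() reduced to a counter so it can run standalone) looks like this:

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical container state, mirroring the fields used in the patch */
struct container {
    bool initialized;
    int error;       /* first error seen during initial setup */
    int hw_errors;   /* stands in for hw_error(); just counted here */
};

/* Several checks funnel into one fail: block, as in vfio_listener_region_add */
void region_add(struct container *c, long iova, long max_iova)
{
    int ret = 0;

    if (iova > max_iova) {        /* any new check can reuse the path */
        ret = -EFAULT;
        goto fail;
    }
    /* ... mapping work that may also 'goto fail' with ret set ... */
    return;

fail:
    if (!c->initialized) {
        if (!c->error) {
            c->error = ret;       /* remember only the first error */
        }
    } else {
        c->hw_errors++;           /* runtime: would be a fatal hw_error() */
    }
}
```

Keeping one exit point means later patches (such as the IOVA range check) only need a `goto fail` rather than duplicating the initialized/error logic.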
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  hw/vfio/common.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 1545f62..95a4850 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -399,19 +399,23 @@ static void vfio_listener_region_add(MemoryListener *listener,
>          error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
>                       "0x%"HWADDR_PRIx", %p) = %d (%m)",
>                       container, iova, end - iova, vaddr, ret);
> +        goto fail;
> +    }
>  
> -        /*
> -         * On the initfn path, store the first error in the container so we
> -         * can gracefully fail.  Runtime, there's not much we can do other
> -         * than throw a hardware error.
> -         */
> -        if (!container->initialized) {
> -            if (!container->error) {
> -                container->error = ret;
> -            }
> -        } else {
> -            hw_error("vfio: DMA mapping failed, unable to continue");
> +    return;
> +
> +fail:
> +    /*
> +     * On the initfn path, store the first error in the container so we
> +     * can gracefully fail.  Runtime, there's not much we can do other
> +     * than throw a hardware error.
> +     */
> +    if (!container->initialized) {
> +        if (!container->error) {
> +            container->error = ret;
>          }
> +    } else {
> +        hw_error("vfio: DMA mapping failed, unable to continue");
>      }
>  }
>  
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 3/7] vfio: Check guest IOVA ranges against host IOMMU capabilities
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 3/7] vfio: Check guest IOVA ranges against host IOMMU capabilities David Gibson
@ 2015-09-30  8:25   ` Laurent Vivier
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:25 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> The current vfio core code assumes that the host IOMMU is capable of
> mapping any IOVA the guest wants to use to where we need.  However, real
> IOMMUs generally only support translating a certain range of IOVAs (the
> "DMA window") not a full 64-bit address space.
> 
> The common x86 IOMMUs support a wide enough range that guests are very
> unlikely to go beyond it in practice.  However, the IOMMU used on IBM Power
> machines - in the default configuration - supports only a much more limited
> IOVA range, usually 0..2 GiB.
> 
> If the guest attempts to set up an IOVA range that the host IOMMU can't
> map, qemu won't report an error until it actually attempts to map a bad
> IOVA.  If guest RAM is being mapped directly into the IOMMU (i.e. no guest
> visible IOMMU) then this will show up very quickly.  If there is a guest
> visible IOMMU, however, the problem might not show up until much later when
> the guest actually attempts to DMA with an IOVA the host can't handle.
> 
> This patch adds a test so that we will detect earlier if the guest is
> attempting to use IOVA ranges that the host IOMMU won't be able to deal
> with.
> 
> For now, we assume that "Type1" (x86) IOMMUs can support any IOVA; this is
> incorrect, but no worse than what we have already.  We can't do better for
> now because the Type1 kernel interface doesn't tell us what IOVA range the
> IOMMU actually supports.
> 
> For the Power "sPAPR TCE" IOMMU, however, we can retrieve the supported
> IOVA range and validate guest IOVA ranges against it, and this patch does
> so.
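The window check the patch adds is a simple interval-containment test on a half-open mapped range; a stand-alone sketch (hypothetical helper name, with hwaddr modeled as uint64_t):

```c
#include <stdbool.h>
#include <stdint.h>

/* Does [iova, end) fit inside the container's window [min_iova, max_iova]?
 * 'end' is exclusive, so the last byte mapped is end - 1, matching the
 * (end - 1) > max_iova test in vfio_listener_region_add(). */
bool iova_range_ok(uint64_t iova, uint64_t end,
                   uint64_t min_iova, uint64_t max_iova)
{
    return iova >= min_iova && (end - 1) <= max_iova;
}
```

With the default 2 GiB sPAPR window (min_iova 0, max_iova 0x7fffffff), a region ending exactly at 0x80000000 still fits, but one byte beyond it fails.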
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  hw/vfio/common.c              | 40 +++++++++++++++++++++++++++++++++++++---
>  include/hw/vfio/vfio-common.h |  6 ++++++
>  2 files changed, 43 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 95a4850..2faf492 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -343,14 +343,22 @@ static void vfio_listener_region_add(MemoryListener *listener,
>      if (int128_ge(int128_make64(iova), llend)) {
>          return;
>      }
> +    end = int128_get64(llend);
> +
> +    if ((iova < container->min_iova) || ((end - 1) > container->max_iova)) {
> +        error_report("vfio: IOMMU container %p can't map guest IOVA region"
> +                     " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx,
> +                     container, iova, end - 1);
> +        ret = -EFAULT;
> +        goto fail;
> +    }
>  
>      memory_region_ref(section->mr);
>  
>      if (memory_region_is_iommu(section->mr)) {
>          VFIOGuestIOMMU *giommu;
>  
> -        trace_vfio_listener_region_add_iommu(iova,
> -                    int128_get64(int128_sub(llend, int128_one())));
> +        trace_vfio_listener_region_add_iommu(iova, end - 1);
>          /*
>           * FIXME: We should do some checking to see if the
>           * capabilities of the host VFIO IOMMU are adequate to model
> @@ -387,7 +395,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
>  
>      /* Here we assume that memory_region_is_ram(section->mr)==true */
>  
> -    end = int128_get64(llend);
>      vaddr = memory_region_get_ram_ptr(section->mr) +
>              section->offset_within_region +
>              (iova - section->offset_within_address_space);
> @@ -685,7 +692,19 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>              ret = -errno;
>              goto free_container_exit;
>          }
> +
> +        /*
> +         * FIXME: This assumes that a Type1 IOMMU can map any 64-bit
> +         * IOVA whatsoever.  That's not actually true, but the current
> +         * kernel interface doesn't tell us what it can map, and the
> +         * existing Type1 IOMMUs generally support any IOVA we're
> +         * going to actually try in practice.
> +         */
> +        container->min_iova = 0;
> +        container->max_iova = (hwaddr)-1;
>      } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
> +        struct vfio_iommu_spapr_tce_info info;
> +
>          ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
>          if (ret) {
>              error_report("vfio: failed to set group container: %m");
> @@ -710,6 +729,21 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>              ret = -errno;
>              goto free_container_exit;
>          }
> +
> +        /*
> +         * This only considers the host IOMMU's 32-bit window.  At
> +         * some point we need to add support for the optional 64-bit
> +         * window and dynamic windows
> +         */
> +        info.argsz = sizeof(info);
> +        ret = ioctl(fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
> +        if (ret) {
> +            error_report("vfio: VFIO_IOMMU_SPAPR_TCE_GET_INFO failed: %m");
> +            ret = -errno;
> +            goto free_container_exit;
> +        }
> +        container->min_iova = info.dma32_window_start;
> +        container->max_iova = container->min_iova + info.dma32_window_size - 1;
>      } else {
>          error_report("vfio: No available IOMMU models");
>          ret = -EINVAL;
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index fbbe6de..27a14c0 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -65,6 +65,12 @@ typedef struct VFIOContainer {
>      MemoryListener listener;
>      int error;
>      bool initialized;
> +    /*
> +     * This assumes the host IOMMU can support only a single
> +     * contiguous IOVA window.  We may need to generalize that in
> +     * future
> +     */
> +    hwaddr min_iova, max_iova;
>      QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
>      QLIST_HEAD(, VFIOGroup) group_list;
>      QLIST_ENTRY(VFIOContainer) next;
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 4/7] vfio: Record host IOMMU's available IO page sizes
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 4/7] vfio: Record host IOMMU's available IO page sizes David Gibson
@ 2015-09-30  8:27   ` Laurent Vivier
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:27 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> Depending on the host IOMMU type we determine and record the available page
> sizes for IOMMU translation.  We'll need this for other validation in
> future patches.
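iova_pgsizes is a bitmap of supported IO page sizes (bit n set means pages of 2^n bytes are supported); patch 6 later derives a replay granularity from it as (hwaddr)1 << ctz64(iova_pgsizes).  That derivation can be sketched without the ctz64 helper, using the lowest-set-bit identity:

```c
#include <stdint.h>

/* Smallest supported IO page size = lowest set bit of the pgsizes bitmap.
 * For nonzero x, x & -x isolates the lowest set bit, which equals
 * 1 << ctz64(x), mirroring vfio_container_granularity(). */
uint64_t min_page_size(uint64_t iova_pgsizes)
{
    return iova_pgsizes & -iova_pgsizes;
}
```

So a container advertising both 4K and 64K pages (0x1000 | 0x10000) replays at 4K granularity, the safest choice for catching every mapping.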
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Thomas Huth <thuth@redhat.com>
> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  hw/vfio/common.c              | 13 +++++++++++++
>  include/hw/vfio/vfio-common.h |  1 +
>  2 files changed, 14 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 2faf492..f666de2 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -677,6 +677,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>      if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU) ||
>          ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU)) {
>          bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU);
> +        struct vfio_iommu_type1_info info;
>  
>          ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
>          if (ret) {
> @@ -702,6 +703,15 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>           */
>          container->min_iova = 0;
>          container->max_iova = (hwaddr)-1;
> +
> +        /* Assume just 4K IOVA page size */
> +        container->iova_pgsizes = 0x1000;
> +        info.argsz = sizeof(info);
> +        ret = ioctl(fd, VFIO_IOMMU_GET_INFO, &info);
> +        /* Ignore errors */
> +        if ((ret == 0) && (info.flags & VFIO_IOMMU_INFO_PGSIZES)) {
> +            container->iova_pgsizes = info.iova_pgsizes;
> +        }
>      } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
>          struct vfio_iommu_spapr_tce_info info;
>  
> @@ -744,6 +754,9 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>          }
>          container->min_iova = info.dma32_window_start;
>          container->max_iova = container->min_iova + info.dma32_window_size - 1;
> +
> +        /* Assume just 4K IOVA pages for now */
> +        container->iova_pgsizes = 0x1000;
>      } else {
>          error_report("vfio: No available IOMMU models");
>          ret = -EINVAL;
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 27a14c0..f037f3c 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -71,6 +71,7 @@ typedef struct VFIOContainer {
>       * future
>       */
>      hwaddr min_iova, max_iova;
> +    uint64_t iova_pgsizes;
>      QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
>      QLIST_HEAD(, VFIOGroup) group_list;
>      QLIST_ENTRY(VFIOContainer) next;
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications David Gibson
@ 2015-09-30  8:32   ` Laurent Vivier
  2015-09-30  8:59   ` Laurent Vivier
  2015-10-05 13:21   ` Paolo Bonzini
  2 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:32 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> When we have guest visible IOMMUs, we allow notifiers to be registered
> which will be informed of all changes to IOMMU mappings.  This is used by
> vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings.
> 
> However, unlike with a memory region listener, an iommu notifier won't be
> told about any mappings which already exist in the (guest) IOMMU at the
> time it is registered.  This can cause problems if hotplugging a VFIO
> device onto a guest bus which had existing guest IOMMU mappings, but didn't
> previously have any VFIO devices (and hence no host IOMMU mappings).
> 
> This adds a memory_region_iommu_replay() function to handle this case.  It
> replays any existing mappings in an IOMMU memory region to a specified
> notifier.  Because the IOMMU memory region doesn't internally remember the
> granularity of the guest IOMMU it has a small hack where the caller must
> specify a granularity at which to replay mappings.
> 
> If there are finer mappings in the guest IOMMU these will be reported in
> the iotlb structures passed to the notifier which it must handle (probably
> causing it to flag an error).  This isn't new - the VFIO iommu notifier
> must already handle notifications about guest IOMMU mappings too short
> for it to represent in the host IOMMU.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  include/exec/memory.h | 13 +++++++++++++
>  memory.c              | 20 ++++++++++++++++++++
>  2 files changed, 33 insertions(+)
> 
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 5baaf48..0f07159 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -583,6 +583,19 @@ void memory_region_notify_iommu(MemoryRegion *mr,
>  void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n);
>  
>  /**
> + * memory_region_iommu_replay: replay existing IOMMU translations to
> + * a notifier
> + *
> + * @mr: the memory region to observe
> + * @n: the notifier to which to replay iommu mappings
> + * @granularity: Minimum page granularity to replay notifications for
> + * @is_write: Whether to treat the replay as a translate "write"
> + *     through the iommu
> + */
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write);
> +
> +/**
>   * memory_region_unregister_iommu_notifier: unregister a notifier for
>   * changes to IOMMU translation entries.
>   *
> diff --git a/memory.c b/memory.c
> index ef87363..1b03d22 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -1403,6 +1403,26 @@ void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
>      notifier_list_add(&mr->iommu_notify, n);
>  }
>  
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write)
> +{
> +    hwaddr addr;
> +    IOMMUTLBEntry iotlb;
> +
> +    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
> +        iotlb = mr->iommu_ops->translate(mr, addr, is_write);
> +        if (iotlb.perm != IOMMU_NONE) {
> +            n->notify(n, &iotlb);
> +        }
> +
> +        /* if (2^64 - MR size) < granularity, it's possible to get an
> +         * infinite loop here.  This should catch such a wraparound */
> +        if ((addr + granularity) < addr) {
> +            break;
> +        }
> +    }
> +}
> +
>  void memory_region_unregister_iommu_notifier(Notifier *n)
>  {
>      notifier_remove(n);
> 
Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications David Gibson
  2015-09-30  8:32   ` Laurent Vivier
@ 2015-09-30  8:59   ` Laurent Vivier
  2015-09-30 23:51     ` David Gibson
  2015-10-05 13:21   ` Paolo Bonzini
  2 siblings, 1 reply; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  8:59 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> When we have guest visible IOMMUs, we allow notifiers to be registered
> which will be informed of all changes to IOMMU mappings.  This is used by
> vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings.
> 
> However, unlike with a memory region listener, an iommu notifier won't be
> told about any mappings which already exist in the (guest) IOMMU at the
> time it is registered.  This can cause problems if hotplugging a VFIO
> device onto a guest bus which had existing guest IOMMU mappings, but didn't
> previously have any VFIO devices (and hence no host IOMMU mappings).
> 
> This adds a memory_region_iommu_replay() function to handle this case.  It
> replays any existing mappings in an IOMMU memory region to a specified
> notifier.  Because the IOMMU memory region doesn't internally remember the
> granularity of the guest IOMMU it has a small hack where the caller must
> specify a granularity at which to replay mappings.
> 
> If there are finer mappings in the guest IOMMU these will be reported in
> the iotlb structures passed to the notifier which it must handle (probably
> causing it to flag an error).  This isn't new - the VFIO iommu notifier
> must already handle notifications about guest IOMMU mappings at a finer
> granularity than it can represent in the host IOMMU.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  include/exec/memory.h | 13 +++++++++++++
>  memory.c              | 20 ++++++++++++++++++++
>  2 files changed, 33 insertions(+)
> 
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 5baaf48..0f07159 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -583,6 +583,19 @@ void memory_region_notify_iommu(MemoryRegion *mr,
>  void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n);
>  
>  /**
> + * memory_region_iommu_replay: replay existing IOMMU translations to
> + * a notifier
> + *
> + * @mr: the memory region to observe
> + * @n: the notifier to which to replay iommu mappings
> + * @granularity: Minimum page granularity to replay notifications for
> + * @is_write: Whether to treat the replay as a translate "write"
> + *     through the iommu
> + */
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write);
> +
> +/**
>   * memory_region_unregister_iommu_notifier: unregister a notifier for
>   * changes to IOMMU translation entries.
>   *
> diff --git a/memory.c b/memory.c
> index ef87363..1b03d22 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -1403,6 +1403,26 @@ void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
>      notifier_list_add(&mr->iommu_notify, n);
>  }
>  
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write)
> +{
> +    hwaddr addr;
> +    IOMMUTLBEntry iotlb;
> +
> +    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
> +        iotlb = mr->iommu_ops->translate(mr, addr, is_write);

In the iotlb there is an "address_mask"; on sPAPR it is copied from
"page_shift", which is SPAPR_TCE_PAGE_SHIFT (12 -> 4k).

At first glance we would want to use that to scan the memory region, but
since the granularity could be a larger value, I think granularity is the
better choice.

But the question is: why is the iotlb page size not equal to the
granularity given by VFIO_IOMMU_GET_INFO?

> +        if (iotlb.perm != IOMMU_NONE) {
> +            n->notify(n, &iotlb);
> +        }
> +
> +        /* if (2^64 - MR size) < granularity, it's possible to get an
> +         * infinite loop here.  This should catch such a wraparound */
> +        if ((addr + granularity) < addr) {
> +            break;
> +        }
> +    }
> +}
> +
>  void memory_region_unregister_iommu_notifier(Notifier *n)
>  {
>      notifier_remove(n);
> 

As my question is not about this particular patch but about another
existing part, I can say:

Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings David Gibson
@ 2015-09-30  9:09   ` Laurent Vivier
  2015-09-30 23:56     ` David Gibson
  0 siblings, 1 reply; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  9:09 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> At present the memory listener used by vfio to keep host IOMMU mappings
> in sync with the guest memory image assumes that if a guest IOMMU
> appears, then it has no existing mappings.
> 
> This may not be true if a VFIO device is hotplugged onto a guest bus
> which didn't previously include a VFIO device, and which has existing
> guest IOMMU mappings.
> 
> Therefore, use the new memory_region_iommu_replay() function in order to
> fix this case, replaying existing guest IOMMU mappings and bringing the
> host IOMMU into sync with the guest IOMMU.
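The ordering is the point of the fix: the notifier is registered first, then existing mappings are replayed into it so the hotplugged container catches up.  A toy model of that sequence (hypothetical structs, not QEMU code):

```c
#include <stdint.h>

#define MAX_MAPS 16

/* Toy guest IOMMU: a list of already-established IOVA mappings */
struct guest_iommu {
    uint64_t mappings[MAX_MAPS];
    int nmaps;
};

/* Toy host container: counts mappings it has mirrored from the guest */
struct host_container {
    int synced;
};

static struct host_container *registered;   /* the registered "notifier" */

static void notify(uint64_t iova)
{
    (void)iova;            /* a real notifier would program the host IOMMU */
    registered->synced++;
}

/* Hotplug path: register first so future updates are seen, then replay
 * what already exists, as patch 6 does with memory_region_iommu_replay() */
void hotplug_container(struct guest_iommu *gi, struct host_container *hc)
{
    registered = hc;
    for (int i = 0; i < gi->nmaps; i++) {
        notify(gi->mappings[i]);   /* replay pre-existing mappings */
    }
}
```

Registering before replaying closes the window in which a guest mapping update could be missed by the new container.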
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/vfio/common.c | 23 +++++++++--------------
>  1 file changed, 9 insertions(+), 14 deletions(-)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index f666de2..6797208 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -312,6 +312,11 @@ out:
>      rcu_read_unlock();
>  }
>  
> +static hwaddr vfio_container_granularity(VFIOContainer *container)
> +{
> +    return (hwaddr)1 << ctz64(container->iova_pgsizes);
> +}
> +
>  static void vfio_listener_region_add(MemoryListener *listener,
>                                       MemoryRegionSection *section)
>  {
> @@ -369,26 +374,16 @@ static void vfio_listener_region_add(MemoryListener *listener,
>           * would be the right place to wire that up (tell the KVM
>           * device emulation the VFIO iommu handles to use).
>           */
> -        /*
> -         * This assumes that the guest IOMMU is empty of
> -         * mappings at this point.
> -         *
> -         * One way of doing this is:
> -         * 1. Avoid sharing IOMMUs between emulated devices or different
> -         * IOMMU groups.
> -         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
> -         * there are some mappings in IOMMU.
> -         *
> -         * VFIO on SPAPR does that. Other IOMMU models may do that different,
> -         * they must make sure there are no existing mappings or
> -         * loop through existing mappings to map them into VFIO.
> -         */
>          giommu = g_malloc0(sizeof(*giommu));
>          giommu->iommu = section->mr;
>          giommu->container = container;
>          giommu->n.notify = vfio_iommu_map_notify;
>          QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> +
>          memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> +        memory_region_iommu_replay(giommu->iommu, &giommu->n,
> +                                   vfio_container_granularity(container),
> +                                   false);

I'm wondering whether it makes sense to provide the "is_write" information
at this level of the API: I don't think we have access to this
information when we call this function (so it will always be used with
false, or called twice, once with false and once with true). I think it
would be better to manage this internally.

>  
>          return;
>      }
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCHv3 7/7] vfio: Expose a VFIO PCI device's group for EEH
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 7/7] vfio: Expose a VFIO PCI device's group for EEH David Gibson
@ 2015-09-30  9:12   ` Laurent Vivier
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Vivier @ 2015-09-30  9:12 UTC (permalink / raw)
  To: David Gibson, alex.williamson, pbonzini
  Cc: thuth, qemu-devel, mdroth, aik, abologna, gwshan, qemu-ppc



On 30/09/2015 04:13, David Gibson wrote:
> The Enhanced Error Handling (EEH) interface in PAPR operates on units of a
> Partitionable Endpoint (PE).  For VFIO devices, the PE boundaries the guest
> sees must match the PE (i.e. IOMMU group) boundaries on the host.  To
> implement this it will need to discover from VFIO which group a given
> device belongs to.
> 
> This exposes a new vfio_pci_device_group() function for this purpose.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/vfio/pci.c              | 14 ++++++++++++++
>  include/hw/vfio/vfio-pci.h | 11 +++++++++++
>  2 files changed, 25 insertions(+)
>  create mode 100644 include/hw/vfio/vfio-pci.h
> 
> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> index dcabb6d..49ae834 100644
> --- a/hw/vfio/pci.c
> +++ b/hw/vfio/pci.c
> @@ -35,6 +35,8 @@
>  #include "pci.h"
>  #include "trace.h"
>  
> +#include "hw/vfio/vfio-pci.h"
> +
>  #define MSIX_CAP_LENGTH 12
>  
>  static void vfio_disable_interrupts(VFIOPCIDevice *vdev);
> @@ -2312,6 +2314,18 @@ static void vfio_unregister_req_notifier(VFIOPCIDevice *vdev)
>      vdev->req_enabled = false;
>  }
>  
> +VFIOGroup *vfio_pci_device_group(PCIDevice *pdev)
> +{
> +    VFIOPCIDevice *vdev;
> +
> +    if (!object_dynamic_cast(OBJECT(pdev), "vfio-pci")) {
> +        return NULL;
> +    }
> +
> +    vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
> +    return vdev->vbasedev.group;
> +}
> +
>  static int vfio_initfn(PCIDevice *pdev)
>  {
>      VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
> diff --git a/include/hw/vfio/vfio-pci.h b/include/hw/vfio/vfio-pci.h
> new file mode 100644
> index 0000000..32105f7
> --- /dev/null
> +++ b/include/hw/vfio/vfio-pci.h
> @@ -0,0 +1,11 @@
> +#ifndef VFIO_PCI_H
> +#define VFIO_PCI_H
> +
> +#include "qemu/typedefs.h"
> +
> +/* We expose the concept of a VFIOGroup, though not its internals */
> +typedef struct VFIOGroup VFIOGroup;
> +
> +extern VFIOGroup *vfio_pci_device_group(PCIDevice *pdev);
> +
> +#endif /* VFIO_PCI_H */
> 

Reviewed-by: Laurent Vivier <lvivier@redhat.com>


* Re: [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications
  2015-09-30  8:59   ` Laurent Vivier
@ 2015-09-30 23:51     ` David Gibson
  0 siblings, 0 replies; 20+ messages in thread
From: David Gibson @ 2015-09-30 23:51 UTC (permalink / raw)
  To: Laurent Vivier
  Cc: thuth, qemu-devel, aik, mdroth, abologna, alex.williamson,
	qemu-ppc, pbonzini, gwshan


On Wed, Sep 30, 2015 at 10:59:52AM +0200, Laurent Vivier wrote:
> 
> 
> On 30/09/2015 04:13, David Gibson wrote:
> > When we have guest visible IOMMUs, we allow notifiers to be registered
> > which will be informed of all changes to IOMMU mappings.  This is used by
> > vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings.
> > 
> > However, unlike with a memory region listener, an iommu notifier won't be
> > told about any mappings which already exist in the (guest) IOMMU at the
> > time it is registered.  This can cause problems if hotplugging a VFIO
> > device onto a guest bus which had existing guest IOMMU mappings, but didn't
> > previously have a VFIO device (and hence no host IOMMU mappings).
> > 
> > This adds a memory_region_iommu_replay() function to handle this case.  It
> > replays any existing mappings in an IOMMU memory region to a specified
> > notifier.  Because the IOMMU memory region doesn't internally remember the
> > granularity of the guest IOMMU it has a small hack where the caller must
> > specify a granularity at which to replay mappings.
> > 
> > If there are finer mappings in the guest IOMMU these will be reported in
> > the iotlb structures passed to the notifier which it must handle (probably
> > causing it to flag an error).  This isn't new - the VFIO iommu notifier
> > must already handle notifications about guest IOMMU mappings too short
> > for it to represent in the host IOMMU.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  include/exec/memory.h | 13 +++++++++++++
> >  memory.c              | 20 ++++++++++++++++++++
> >  2 files changed, 33 insertions(+)
> > 
> > diff --git a/include/exec/memory.h b/include/exec/memory.h
> > index 5baaf48..0f07159 100644
> > --- a/include/exec/memory.h
> > +++ b/include/exec/memory.h
> > @@ -583,6 +583,19 @@ void memory_region_notify_iommu(MemoryRegion *mr,
> >  void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n);
> >  
> >  /**
> > + * memory_region_iommu_replay: replay existing IOMMU translations to
> > + * a notifier
> > + *
> > + * @mr: the memory region to observe
> > + * @n: the notifier to which to replay iommu mappings
> > + * @granularity: Minimum page granularity to replay notifications for
> > + * @is_write: Whether to treat the replay as a translate "write"
> > + *     through the iommu
> > + */
> > +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> > +                                hwaddr granularity, bool is_write);
> > +
> > +/**
> >   * memory_region_unregister_iommu_notifier: unregister a notifier for
> >   * changes to IOMMU translation entries.
> >   *
> > diff --git a/memory.c b/memory.c
> > index ef87363..1b03d22 100644
> > --- a/memory.c
> > +++ b/memory.c
> > @@ -1403,6 +1403,26 @@ void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
> >      notifier_list_add(&mr->iommu_notify, n);
> >  }
> >  
> > +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> > +                                hwaddr granularity, bool is_write)
> > +{
> > +    hwaddr addr;
> > +    IOMMUTLBEntry iotlb;
> > +
> > +    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
> > +        iotlb = mr->iommu_ops->translate(mr, addr, is_write);
> 
> In iotlb, there is an "address_mask"; on spapr it is copied from
> "page_shift", which is SPAPR_TCE_PAGE_SHIFT (12 -> 4k).
> 
> At first glance, we would like to use address_mask to scan the memory
> region, but as granularity could be a greater value, I think granularity
> is the better choice.

Using address_mask doesn't quite work.  *If* you start with an
existing, valid translation, then you can use address mask to skip to
the end of it - that might be a useful optimization in future,
particularly if the guest IOMMU has variable page sizes.

But if you start on an address that doesn't have a current valid
translation in the IOMMU, then address_mask gets set to ~0, so it
doesn't give you any information on where to try next for a valid
mapping.  That's what the granularity parameter is needed for.

> But the question is: why is the iotlb page_size not equal to the
> granularity given by the VFIO_IOMMU_GET_INFO ioctl?

Well, the iotlb page size is the page size from the *guest* iommu,
whereas VFIO_IOMMU_GET_INFO tells you the page size of the *host*
iommu.  In practice, they'll probably be the same, at least on setups
likely to work well with VFIO, but in theory they could be different.

> > +        if (iotlb.perm != IOMMU_NONE) {
> > +            n->notify(n, &iotlb);
> > +        }
> > +
> > +        /* if (2^64 - MR size) < granularity, it's possible to get an
> > +         * infinite loop here.  This should catch such a wraparound */
> > +        if ((addr + granularity) < addr) {
> > +            break;
> > +        }
> > +    }
> > +}
> > +
> >  void memory_region_unregister_iommu_notifier(Notifier *n)
> >  {
> >      notifier_remove(n);
> > 
> 
> As my question is not a bout this particular patch but on another
> existing part, I can say:
> 
> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCHv3 6/7] vfio: Allow hotplug of containers onto existing guest IOMMU mappings
  2015-09-30  9:09   ` Laurent Vivier
@ 2015-09-30 23:56     ` David Gibson
  0 siblings, 0 replies; 20+ messages in thread
From: David Gibson @ 2015-09-30 23:56 UTC (permalink / raw)
  To: Laurent Vivier
  Cc: thuth, qemu-devel, aik, mdroth, abologna, alex.williamson,
	qemu-ppc, pbonzini, gwshan


On Wed, Sep 30, 2015 at 11:09:17AM +0200, Laurent Vivier wrote:
> 
> 
> On 30/09/2015 04:13, David Gibson wrote:
> > At present the memory listener used by vfio to keep host IOMMU mappings
> > in sync with the guest memory image assumes that if a guest IOMMU
> > appears, then it has no existing mappings.
> > 
> > This may not be true if a VFIO device is hotplugged onto a guest bus
> > which didn't previously include a VFIO device, and which has existing
> > guest IOMMU mappings.
> > 
> > Therefore, use the memory_region_iommu_replay() function to fix this
> > case, replaying existing guest IOMMU mappings and bringing the host
> > IOMMU into sync with the guest IOMMU.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  hw/vfio/common.c | 23 +++++++++--------------
> >  1 file changed, 9 insertions(+), 14 deletions(-)
> > 
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index f666de2..6797208 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -312,6 +312,11 @@ out:
> >      rcu_read_unlock();
> >  }
> >  
> > +static hwaddr vfio_container_granularity(VFIOContainer *container)
> > +{
> > +    return (hwaddr)1 << ctz64(container->iova_pgsizes);
> > +}
> > +
> >  static void vfio_listener_region_add(MemoryListener *listener,
> >                                       MemoryRegionSection *section)
> >  {
> > @@ -369,26 +374,16 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >           * would be the right place to wire that up (tell the KVM
> >           * device emulation the VFIO iommu handles to use).
> >           */
> > -        /*
> > -         * This assumes that the guest IOMMU is empty of
> > -         * mappings at this point.
> > -         *
> > -         * One way of doing this is:
> > -         * 1. Avoid sharing IOMMUs between emulated devices or different
> > -         * IOMMU groups.
> > -         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
> > -         * there are some mappings in IOMMU.
> > -         *
> > -         * VFIO on SPAPR does that. Other IOMMU models may do that different,
> > -         * they must make sure there are no existing mappings or
> > -         * loop through existing mappings to map them into VFIO.
> > -         */
> >          giommu = g_malloc0(sizeof(*giommu));
> >          giommu->iommu = section->mr;
> >          giommu->container = container;
> >          giommu->n.notify = vfio_iommu_map_notify;
> >          QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> > +
> >          memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> > +        memory_region_iommu_replay(giommu->iommu, &giommu->n,
> > +                                   vfio_container_granularity(container),
> > +                                   false);
> 
> I'm wondering whether it makes sense to provide the "is_write" information
> at this level of the API: I don't think we have access to this
> information when we call this function (so it will always be used with
> false, or called twice, once with false and once with true). I think it
> would be better to manage this internally.

I agree it's pretty ugly, but I'm not really sure how to handle it
better.  The translate function itself wants is_write; I'm pretty sure
"false" is the right thing here, but I'm not sure it would be right
for all potential replay cases.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge
  2015-09-30  2:13 [Qemu-devel] [PATCHv3 0/7] VFIO extensions to allow VFIO devices on spapr-pci-host-bridge David Gibson
                   ` (6 preceding siblings ...)
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 7/7] vfio: Expose a VFIO PCI device's group for EEH David Gibson
@ 2015-10-02 18:12 ` Alex Williamson
  7 siblings, 0 replies; 20+ messages in thread
From: Alex Williamson @ 2015-10-02 18:12 UTC (permalink / raw)
  To: David Gibson
  Cc: lvivier, thuth, qemu-devel, aik, abologna, mdroth, qemu-ppc,
	pbonzini, gwshan

On Wed, 2015-09-30 at 12:13 +1000, David Gibson wrote:
> Hi Alex,
> 
> Here are the parts of my recent series to allow VFIO devices on the
> spapr-pci-host-bridge device which affect the core VFIO code.  They've
> been revised according to the comments from yourself and others.
> 
> There's also one patch for the memory subsystem.
> 
> Note that while these are motivated by the needs of the sPAPR code,
> the changes should all be generally correct, and will allow safer and
> more flexible use of VFIO devices in other potential situations as
> well.
> 
> Please apply.
> 
> Changes since v2:
>   * Reworked IOMMU notifier replay mechanism according to Paolo's
>     suggestions.
> Changes since v1:
>   * Assorted minor cleanups based on comments.
>   
> 
> *** BLURB HERE ***
> 
> David Gibson (7):
>   vfio: Remove unneeded union from VFIOContainer
>   vfio: Generalize vfio_listener_region_add failure path
>   vfio: Check guest IOVA ranges against host IOMMU capabilities
>   vfio: Record host IOMMU's available IO page sizes
>   memory: Allow replay of IOMMU mapping notifications

AFAICT, we just need an ack of the memory API addition and then this
series should be ready to go through my tree.  Paolo, can you ack 5/7?
Thanks,

Alex

>   vfio: Allow hotplug of containers onto existing guest IOMMU mappings
>   vfio: Expose a VFIO PCI device's group for EEH
> 
>  hw/vfio/common.c              | 140 ++++++++++++++++++++++++++----------------
>  hw/vfio/pci.c                 |  14 +++++
>  include/exec/memory.h         |  13 ++++
>  include/hw/vfio/vfio-common.h |  23 +++----
>  include/hw/vfio/vfio-pci.h    |  11 ++++
>  memory.c                      |  20 ++++++
>  6 files changed, 154 insertions(+), 67 deletions(-)
>  create mode 100644 include/hw/vfio/vfio-pci.h
> 


* Re: [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications
  2015-09-30  2:13 ` [Qemu-devel] [PATCHv3 5/7] memory: Allow replay of IOMMU mapping notifications David Gibson
  2015-09-30  8:32   ` Laurent Vivier
  2015-09-30  8:59   ` Laurent Vivier
@ 2015-10-05 13:21   ` Paolo Bonzini
  2 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2015-10-05 13:21 UTC (permalink / raw)
  To: David Gibson, alex.williamson
  Cc: lvivier, thuth, mdroth, aik, qemu-devel, abologna, qemu-ppc, gwshan



On 30/09/2015 04:13, David Gibson wrote:
> When we have guest visible IOMMUs, we allow notifiers to be registered
> which will be informed of all changes to IOMMU mappings.  This is used by
> vfio to keep the host IOMMU mappings in sync with guest IOMMU mappings.
> 
> However, unlike with a memory region listener, an iommu notifier won't be
> told about any mappings which already exist in the (guest) IOMMU at the
> time it is registered.  This can cause problems if hotplugging a VFIO
> device onto a guest bus which had existing guest IOMMU mappings, but didn't
> previously have a VFIO device (and hence no host IOMMU mappings).
> 
> This adds a memory_region_iommu_replay() function to handle this case.  It
> replays any existing mappings in an IOMMU memory region to a specified
> notifier.  Because the IOMMU memory region doesn't internally remember the
> granularity of the guest IOMMU it has a small hack where the caller must
> specify a granularity at which to replay mappings.
> 
> If there are finer mappings in the guest IOMMU these will be reported in
> the iotlb structures passed to the notifier which it must handle (probably
> causing it to flag an error).  This isn't new - the VFIO iommu notifier
> must already handle notifications about guest IOMMU mappings too short
> for it to represent in the host IOMMU.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  include/exec/memory.h | 13 +++++++++++++
>  memory.c              | 20 ++++++++++++++++++++
>  2 files changed, 33 insertions(+)
> 
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 5baaf48..0f07159 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -583,6 +583,19 @@ void memory_region_notify_iommu(MemoryRegion *mr,
>  void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n);
>  
>  /**
> + * memory_region_iommu_replay: replay existing IOMMU translations to
> + * a notifier
> + *
> + * @mr: the memory region to observe
> + * @n: the notifier to which to replay iommu mappings
> + * @granularity: Minimum page granularity to replay notifications for
> + * @is_write: Whether to treat the replay as a translate "write"
> + *     through the iommu
> + */
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write);
> +
> +/**
>   * memory_region_unregister_iommu_notifier: unregister a notifier for
>   * changes to IOMMU translation entries.
>   *
> diff --git a/memory.c b/memory.c
> index ef87363..1b03d22 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -1403,6 +1403,26 @@ void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
>      notifier_list_add(&mr->iommu_notify, n);
>  }
>  
> +void memory_region_iommu_replay(MemoryRegion *mr, Notifier *n,
> +                                hwaddr granularity, bool is_write)
> +{
> +    hwaddr addr;
> +    IOMMUTLBEntry iotlb;
> +
> +    for (addr = 0; addr < memory_region_size(mr); addr += granularity) {
> +        iotlb = mr->iommu_ops->translate(mr, addr, is_write);
> +        if (iotlb.perm != IOMMU_NONE) {
> +            n->notify(n, &iotlb);
> +        }
> +
> +        /* if (2^64 - MR size) < granularity, it's possible to get an
> +         * infinite loop here.  This should catch such a wraparound */
> +        if ((addr + granularity) < addr) {
> +            break;
> +        }
> +    }
> +}
> +
>  void memory_region_unregister_iommu_notifier(Notifier *n)
>  {
>      notifier_remove(n);
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

