* [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2)
@ 2013-04-26  6:02 David Gibson
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes David Gibson
                   ` (4 more replies)
  0 siblings, 5 replies; 30+ messages in thread
From: David Gibson @ 2013-04-26  6:02 UTC (permalink / raw)
  To: alex.williamson, aik; +Cc: pbonzini, qemu-devel

This patch series represents a second attempt at better integration of
the vfio code with qemu's handling of guest IOMMUs.  It is based on
Paolo Bonzini's tree at git://github.com/bonzini/qemu.git (iommu
branch).

This series should open the way for using VFIO with a guest system
containing an IOMMU by passing guest IOMMU operations through to the
host IOMMU via VFIO.  That's opposed to the present model of having no
IOMMU in the guest, and simply mapping all guest RAM into the host
IOMMU.
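
The contrast between the two models can be sketched in a few lines of toy
C.  The types and helper names below are purely illustrative, not QEMU's
actual API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in type; not QEMU's real hwaddr definition. */
typedef uint64_t hwaddr;

/* No-guest-IOMMU model: device DMA addresses are guest-physical, so the
 * host IOMMU is simply loaded with an identity map of all guest RAM. */
static hwaddr dma_translate_no_guest_iommu(hwaddr iova)
{
    return iova; /* identity: iova == guest physical address */
}

/* Guest-IOMMU model: each DMA address is first translated by the guest
 * IOMMU, and the resulting mapping is pushed into the host IOMMU via
 * VFIO.  A toy linear window stands in for the guest's page tables. */
static hwaddr dma_translate_guest_iommu(hwaddr iova, hwaddr window_base,
                                        hwaddr target_base)
{
    return target_base + (iova - window_base);
}
```

In the second model the host IOMMU only ever sees mappings the guest
IOMMU has actually established, rather than all of guest RAM up front.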

Patch 1 is just a trivial fix for a build problem introduced in the
iommu tree.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes
  2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
@ 2013-04-26  6:02 ` David Gibson
  2013-04-26  8:19   ` Paolo Bonzini
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 2/4] vfio: Associate VFIO groups with (guest) IOMMU address spaces David Gibson
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-26  6:02 UTC (permalink / raw)
  To: alex.williamson, aik; +Cc: pbonzini, qemu-devel, David Gibson

vmw_pvscsi.c directly calls qemu_sglist_init() instead of using the
pci_dma_sglist_init() helper for PCI devices, which means it was broken
by Paolo Bonzini's recent addition of iommu support to the memory API.
This fixes it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/scsi/vmw_pvscsi.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
index 4b4a58f..68c8d58 100644
--- a/hw/scsi/vmw_pvscsi.c
+++ b/hw/scsi/vmw_pvscsi.c
@@ -617,7 +617,7 @@ pvscsi_build_sglist(PVSCSIState *s, PVSCSIRequest *r)
 {
     PCIDevice *d = PCI_DEVICE(s);
 
-    qemu_sglist_init(&r->sgl, 1, pci_dma_context(d));
+    pci_dma_sglist_init(&r->sgl, d, 1);
     if (r->req.flags & PVSCSI_FLAG_CMD_WITH_SG_LIST) {
         pvscsi_convert_sglist(r);
     } else {
-- 
1.7.10.4

* [Qemu-devel] [PATCH 2/4] vfio: Associate VFIO groups with (guest) IOMMU address spaces
  2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes David Gibson
@ 2013-04-26  6:02 ` David Gibson
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion David Gibson
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-26  6:02 UTC (permalink / raw)
  To: alex.williamson, aik; +Cc: pbonzini, qemu-devel, David Gibson

The only model so far supported for VFIO passthrough devices is the model
usually used on x86, where all of the guest's RAM is mapped into the
(host) IOMMU and there is no IOMMU visible in the guest.  Later, however,
we also want to support guest-visible IOMMUs.

In order to do that, the vfio subsystem needs to know which address space
its devices are supposed to be in.  In other words, the PCI device's
iommu MemoryRegion needs to be passed through to vfio.  This patch updates
the internal interfaces to do that.  So far it doesn't do much with it,
except to verify/enforce that a group is never added to multiple address
spaces.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/misc/vfio.c |   38 +++++++++++++++++++++++++++-----------
 1 file changed, 27 insertions(+), 11 deletions(-)

diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index 178dd11..9057964 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -116,6 +116,7 @@ enum {
 struct VFIOGroup;
 
 typedef struct VFIOContainer {
+    MemoryRegion *iommu;
     int fd; /* /dev/vfio/vfio, empowered by the attached groups */
     struct {
         /* enable abstraction to support various iommu backends */
@@ -2614,13 +2615,19 @@ static int vfio_load_rom(VFIODevice *vdev)
     return 0;
 }
 
-static int vfio_connect_container(VFIOGroup *group)
+static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
 {
     VFIOContainer *container;
     int ret, fd;
 
     if (group->container) {
-        return 0;
+        if (group->container->iommu == iommu) {
+            return 0;
+        } else {
+            error_report("vfio: group %d used in multiple DMA contexts",
+                         group->groupid);
+            return -EBUSY;
+        }
     }
 
     QLIST_FOREACH(container, &container_list, next) {
@@ -2646,6 +2653,7 @@ static int vfio_connect_container(VFIOGroup *group)
     }
 
     container = g_malloc0(sizeof(*container));
+    container->iommu = iommu;
     container->fd = fd;
 
     if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU)) {
@@ -2685,12 +2693,12 @@ static int vfio_connect_container(VFIOGroup *group)
     return 0;
 }
 
-static void vfio_disconnect_container(VFIOGroup *group)
+static void vfio_disconnect_context(VFIOGroup *group)
 {
     VFIOContainer *container = group->container;
 
     if (ioctl(group->fd, VFIO_GROUP_UNSET_CONTAINER, &container->fd)) {
-        error_report("vfio: error disconnecting group %d from container",
+        error_report("vfio: error disconnecting group %d from context",
                      group->groupid);
     }
 
@@ -2702,13 +2710,13 @@ static void vfio_disconnect_container(VFIOGroup *group)
             container->iommu_data.release(container);
         }
         QLIST_REMOVE(container, next);
-        DPRINTF("vfio_disconnect_container: close container->fd\n");
+        DPRINTF("vfio_disconnect_context: close container->fd\n");
         close(container->fd);
         g_free(container);
     }
 }
 
-static VFIOGroup *vfio_get_group(int groupid)
+static VFIOGroup *vfio_get_group(int groupid, MemoryRegion *iommu)
 {
     VFIOGroup *group;
     char path[32];
@@ -2716,7 +2724,15 @@ static VFIOGroup *vfio_get_group(int groupid)
 
     QLIST_FOREACH(group, &group_list, next) {
         if (group->groupid == groupid) {
-            return group;
+            /* Found it.  Now is it already in the right context? */
+            assert(group->container);
+            if (group->container->iommu == iommu) {
+                return group;
+            } else {
+                error_report("vfio: group %d used in multiple DMA contexts",
+                             group->groupid);
+                return NULL;
+            }
         }
     }
 
@@ -2749,8 +2765,8 @@ static VFIOGroup *vfio_get_group(int groupid)
     group->groupid = groupid;
     QLIST_INIT(&group->device_list);
 
-    if (vfio_connect_container(group)) {
-        error_report("vfio: failed to setup container for group %d", groupid);
+    if (vfio_connect_iommu(group, iommu)) {
+        error_report("vfio: failed to setup context for group %d", groupid);
         close(group->fd);
         g_free(group);
         return NULL;
@@ -2767,7 +2783,7 @@ static void vfio_put_group(VFIOGroup *group)
         return;
     }
 
-    vfio_disconnect_container(group);
+    vfio_disconnect_context(group);
     QLIST_REMOVE(group, next);
     DPRINTF("vfio_put_group: close group->fd\n");
     close(group->fd);
@@ -2982,7 +2998,7 @@ static int vfio_initfn(PCIDevice *pdev)
     DPRINTF("%s(%04x:%02x:%02x.%x) group %d\n", __func__, vdev->host.domain,
             vdev->host.bus, vdev->host.slot, vdev->host.function, groupid);
 
-    group = vfio_get_group(groupid);
+    group = vfio_get_group(groupid, pdev->iommu);
     if (!group) {
         error_report("vfio: failed to get group %d", groupid);
         return -ENOENT;
-- 
1.7.10.4

* [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes David Gibson
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 2/4] vfio: Associate VFIO groups with (guest) IOMMU address spaces David Gibson
@ 2013-04-26  6:02 ` David Gibson
  2013-04-26  8:23   ` Paolo Bonzini
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 4/4] vfio: Only use memory listeners when appropriate David Gibson
  2013-04-26 15:42 ` [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) Alex Williamson
  4 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-26  6:02 UTC (permalink / raw)
  To: alex.williamson, aik; +Cc: pbonzini, qemu-devel, David Gibson

At the moment, vfio maintains a global list of containers that are assumed
to be more or less interchangeable, since they are all set up with a
MemoryListener to have all of system memory mapped.  However, that only
makes sense if all the containers are used on devices which really do
expect a dma address space identical to system memory.

This patch moves towards fixing that by making the list of containers
per-MemoryRegion (which corresponds to a dma address space) instead of
global.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/misc/vfio.c         |   48 +++++++++++++++++++++++++++++++++++++++++-------
 include/exec/memory.h  |    4 ++++
 include/hw/misc/vfio.h |   20 ++++++++++++++++++++
 memory.c               |    2 ++
 stubs/Makefile.objs    |    1 +
 stubs/vfio.c           |    5 +++++
 6 files changed, 73 insertions(+), 7 deletions(-)
 create mode 100644 include/hw/misc/vfio.h
 create mode 100644 stubs/vfio.c

diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index 9057964..707a63c 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -39,6 +39,7 @@
 #include "qemu/range.h"
 #include "sysemu/kvm.h"
 #include "sysemu/sysemu.h"
+#include "hw/misc/vfio.h"
 
 /* #define DEBUG_VFIO */
 #ifdef DEBUG_VFIO
@@ -177,10 +178,11 @@ typedef struct VFIOGroup {
     QLIST_ENTRY(VFIOGroup) container_next;
 } VFIOGroup;
 
-#define MSIX_CAP_LENGTH 12
+typedef struct VFIOIommu {
+    QLIST_HEAD(, VFIOContainer) containers;
+} VFIOIommu;
 
-static QLIST_HEAD(, VFIOContainer)
-    container_list = QLIST_HEAD_INITIALIZER(container_list);
+#define MSIX_CAP_LENGTH 12
 
 static QLIST_HEAD(, VFIOGroup)
     group_list = QLIST_HEAD_INITIALIZER(group_list);
@@ -2615,8 +2617,40 @@ static int vfio_load_rom(VFIODevice *vdev)
     return 0;
 }
 
+static VFIOIommu *iommu_get_vfio(MemoryRegion *iommu)
+{
+    VFIOIommu *vfio_iommu;
+
+    if (iommu->vfio) {
+        return iommu->vfio;
+    }
+
+    vfio_iommu = g_malloc0(sizeof(*vfio_iommu));
+    QLIST_INIT(&vfio_iommu->containers);
+    iommu->vfio = vfio_iommu;
+    return vfio_iommu;
+}
+
+void vfio_iommu_destroy(MemoryRegion *iommu)
+{
+    VFIOIommu *vfio_iommu = iommu->vfio;
+
+    if (!vfio_iommu) {
+        return; /* Nothing to do */
+    }
+
+    iommu->vfio = NULL;
+
+    /* Devices and the groups associated should already have been
+     * disconnected at this point */
+    assert(QLIST_EMPTY(&vfio_iommu->containers));
+
+    g_free(vfio_iommu);
+}
+
 static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
 {
+    VFIOIommu *vfio_iommu = iommu_get_vfio(iommu);
     VFIOContainer *container;
     int ret, fd;
 
@@ -2630,7 +2664,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
         }
     }
 
-    QLIST_FOREACH(container, &container_list, next) {
+    QLIST_FOREACH(container, &vfio_iommu->containers, next) {
         if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
             group->container = container;
             QLIST_INSERT_HEAD(&container->group_list, group, container_next);
@@ -2685,7 +2719,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
     }
 
     QLIST_INIT(&container->group_list);
-    QLIST_INSERT_HEAD(&container_list, container, next);
+    QLIST_INSERT_HEAD(&vfio_iommu->containers, container, next);
 
     group->container = container;
     QLIST_INSERT_HEAD(&container->group_list, group, container_next);
@@ -2693,7 +2727,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
     return 0;
 }
 
-static void vfio_disconnect_context(VFIOGroup *group)
+static void vfio_disconnect_iommu(VFIOGroup *group)
 {
     VFIOContainer *container = group->container;
 
@@ -2783,7 +2817,7 @@ static void vfio_put_group(VFIOGroup *group)
         return;
     }
 
-    vfio_disconnect_context(group);
+    vfio_disconnect_iommu(group);
     QLIST_REMOVE(group, next);
     DPRINTF("vfio_put_group: close group->fd\n");
     close(group->fd);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 99d51b7..259b8eb 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -129,6 +129,9 @@ struct MemoryRegionIOMMUOps {
 typedef struct CoalescedMemoryRange CoalescedMemoryRange;
 typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd;
 
+typedef struct VFIOIommu VFIOIommu;
+struct VFIOIommu;
+
 struct MemoryRegion {
     /* All fields are private - violators will be prosecuted */
     const MemoryRegionOps *ops;
@@ -160,6 +163,7 @@ struct MemoryRegion {
     unsigned ioeventfd_nb;
     MemoryRegionIoeventfd *ioeventfds;
     struct AddressSpace *iommu_target_as;
+    VFIOIommu *vfio;
 };
 
 struct MemoryRegionPortio {
diff --git a/include/hw/misc/vfio.h b/include/hw/misc/vfio.h
new file mode 100644
index 0000000..602e2f9
--- /dev/null
+++ b/include/hw/misc/vfio.h
@@ -0,0 +1,20 @@
+/*
+ * vfio based device assignment
+ *
+ * Copyright 2013 David Gibson, IBM Corporation.
+ * Copyright Red Hat, Inc. 2012
+ *
+ * This work is licensed under the terms of the GNU GPL, version
+ * 2. See the COPYING file in the top-level directory.
+ */
+#ifndef QEMU_VFIO_H
+#define QEMU_VFIO_H
+
+#include "qemu/queue.h"
+
+typedef struct MemoryRegion MemoryRegion;
+struct MemoryRegion;
+
+void vfio_iommu_destroy(MemoryRegion *iommu);
+
+#endif /* QEMU_VFIO_H */
diff --git a/memory.c b/memory.c
index cee3e59..50889d5 100644
--- a/memory.c
+++ b/memory.c
@@ -21,6 +21,7 @@
 #include <assert.h>
 
 #include "exec/memory-internal.h"
+#include "hw/misc/vfio.h"
 
 static unsigned memory_region_transaction_depth;
 static bool memory_region_update_pending;
@@ -1043,6 +1044,7 @@ void memory_region_init_reservation(MemoryRegion *mr,
 
 void memory_region_destroy(MemoryRegion *mr)
 {
+    vfio_iommu_destroy(mr);
     assert(QTAILQ_EMPTY(&mr->subregions));
     assert(memory_region_transaction_depth == 0);
     mr->destructor(mr);
diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
index 9c55b34..858ca6b 100644
--- a/stubs/Makefile.objs
+++ b/stubs/Makefile.objs
@@ -20,6 +20,7 @@ stub-obj-y += reset.o
 stub-obj-y += set-fd-handler.o
 stub-obj-y += slirp.o
 stub-obj-y += sysbus.o
+stub-obj-y += vfio.o
 stub-obj-y += vm-stop.o
 stub-obj-y += vmstate.o
 stub-obj-$(CONFIG_WIN32) += fd-register.o
diff --git a/stubs/vfio.c b/stubs/vfio.c
new file mode 100644
index 0000000..6aacaf6
--- /dev/null
+++ b/stubs/vfio.c
@@ -0,0 +1,5 @@
+#include "hw/misc/vfio.h"
+
+void vfio_iommu_destroy(MemoryRegion *iommu)
+{
+}
-- 
1.7.10.4

* [Qemu-devel] [PATCH 4/4] vfio: Only use memory listeners when appropriate
  2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
                   ` (2 preceding siblings ...)
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion David Gibson
@ 2013-04-26  6:02 ` David Gibson
  2013-04-26 15:42 ` [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) Alex Williamson
  4 siblings, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-26  6:02 UTC (permalink / raw)
  To: alex.williamson, aik; +Cc: pbonzini, qemu-devel, David Gibson

Currently, vfio registers a MemoryListener for every vfio container we
create, to keep the container's mappings in sync with main system memory.
That's only correct, though, if the context the container is attached to
represents a dma address space which actually matches main system memory -
roughly speaking, that means there is no guest-side IOMMU above the vfio
device in question.

This patch corrects the code by only registering the MemoryListener when
the container belongs to a DMAContext which does not include an IOMMU (i.e.
which has no ->translate function).  In other cases we give an error; that
will change when vfio support for guest-side IOMMUs is added.

In addition, this generalizes the code slightly, by attaching the
MemoryListener to the DMAContext's underlying AddressSpace, rather than
just assuming that it is main system memory.
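
The guard this describes amounts to walking up from the device's dma
MemoryRegion to the terminating region and checking whether it is an
IOMMU.  A self-contained toy version, using a simplified struct rather
than QEMU's real MemoryRegion:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy MemoryRegion with only the fields the walk needs; illustrative. */
typedef struct MR {
    struct MR *parent;
    bool terminates;
    bool is_iommu;   /* stands in for "has a ->translate function" */
} MR;

/* Only regions that do not resolve to a guest-side IOMMU may use the
 * plain MemoryListener path that maps all of system memory. */
static bool can_use_memory_listener(MR *mr)
{
    /* Aliases and containers defer to their parent; walk up until we
     * reach a region that actually terminates accesses. */
    while (!mr->terminates) {
        mr = mr->parent;
    }
    return !mr->is_iommu;
}
```

This mirrors the parent walk in vfio_connect_iommu() below, which falls
back to an error when the terminating region is an IOMMU.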

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/misc/vfio.c |   92 ++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 59 insertions(+), 33 deletions(-)

diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index 707a63c..a4a5310 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -180,6 +180,7 @@ typedef struct VFIOGroup {
 
 typedef struct VFIOIommu {
     QLIST_HEAD(, VFIOContainer) containers;
+    AddressSpace listen_as;
 } VFIOIommu;
 
 #define MSIX_CAP_LENGTH 12
@@ -2648,29 +2649,12 @@ void vfio_iommu_destroy(MemoryRegion *iommu)
     g_free(vfio_iommu);
 }
 
-static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
+static int vfio_create_container_with_listener(VFIOGroup *group,
+                                               MemoryRegion *iommu)
 {
     VFIOIommu *vfio_iommu = iommu_get_vfio(iommu);
     VFIOContainer *container;
-    int ret, fd;
-
-    if (group->container) {
-        if (group->container->iommu == iommu) {
-            return 0;
-        } else {
-            error_report("vfio: group %d used in multiple DMA contexts",
-                         group->groupid);
-            return -EBUSY;
-        }
-    }
-
-    QLIST_FOREACH(container, &vfio_iommu->containers, next) {
-        if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
-            group->container = container;
-            QLIST_INSERT_HEAD(&container->group_list, group, container_next);
-            return 0;
-        }
-    }
+    int fd, ret;
 
     fd = qemu_open("/dev/vfio/vfio", O_RDWR);
     if (fd < 0) {
@@ -2690,15 +2674,15 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
     container->iommu = iommu;
     container->fd = fd;
 
-    if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU)) {
-        ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
-        if (ret) {
-            error_report("vfio: failed to set group container: %m");
-            g_free(container);
-            close(fd);
-            return -errno;
-        }
+    ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
+    if (ret) {
+        error_report("vfio: failed to set group container: %m");
+        g_free(container);
+        close(fd);
+        return -errno;
+    }
 
+    if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU)) {
         ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
         if (ret) {
             error_report("vfio: failed to set iommu for container: %m");
@@ -2706,11 +2690,6 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
             close(fd);
             return -errno;
         }
-
-        container->iommu_data.listener = vfio_memory_listener;
-        container->iommu_data.release = vfio_listener_release;
-
-        memory_listener_register(&container->iommu_data.listener, &address_space_memory);
     } else {
         error_report("vfio: No available IOMMU models");
         g_free(container);
@@ -2718,6 +2697,14 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
         return -EINVAL;
     }
 
+    address_space_init(&vfio_iommu->listen_as, iommu);
+
+    container->iommu_data.listener = vfio_memory_listener;
+    container->iommu_data.release = vfio_listener_release;
+
+    memory_listener_register(&container->iommu_data.listener,
+                             &vfio_iommu->listen_as);
+
     QLIST_INIT(&container->group_list);
     QLIST_INSERT_HEAD(&vfio_iommu->containers, container, next);
 
@@ -2727,6 +2714,45 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
     return 0;
 }
 
+static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
+{
+    VFIOContainer *container;
+    VFIOIommu *vfio_iommu;
+    MemoryRegion *mr;
+
+    if (group->container) {
+        if (group->container->iommu == iommu) {
+            return 0;
+        } else {
+            error_report("vfio: group %d used in multiple DMA contexts",
+                         group->groupid);
+            return -EBUSY;
+        }
+    }
+
+    vfio_iommu = iommu_get_vfio(iommu);
+    QLIST_FOREACH(container, &vfio_iommu->containers, next) {
+        if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
+            group->container = container;
+            QLIST_INSERT_HEAD(&container->group_list, group, container_next);
+            return 0;
+        }
+    }
+
+    mr = iommu;
+    while (!mr->terminates) {
+        mr = mr->parent;
+    }
+
+    if (!memory_region_is_iommu(mr)) {
+        
+        return vfio_create_container_with_listener(group, iommu);
+    }
+
+    error_report("vfio: no support for guest side IOMMU");
+    return -ENODEV;
+}
+
 static void vfio_disconnect_iommu(VFIOGroup *group)
 {
     VFIOContainer *container = group->container;
-- 
1.7.10.4

* Re: [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes David Gibson
@ 2013-04-26  8:19   ` Paolo Bonzini
  2013-04-26 11:04     ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-26  8:19 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel

Il 26/04/2013 08:02, David Gibson ha scritto:
> vmw_pvscsi.c directly calls qemu_sglist_init() instead of using the
> pci_dma_sglist_init() helper for PCI devices, which means it was broken
> by Paolo Bonzini's recent addition of iommu support to the memory API.
> This fixes it.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/scsi/vmw_pvscsi.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
> index 4b4a58f..68c8d58 100644
> --- a/hw/scsi/vmw_pvscsi.c
> +++ b/hw/scsi/vmw_pvscsi.c
> @@ -617,7 +617,7 @@ pvscsi_build_sglist(PVSCSIState *s, PVSCSIRequest *r)
>  {
>      PCIDevice *d = PCI_DEVICE(s);
>  
> -    qemu_sglist_init(&r->sgl, 1, pci_dma_context(d));
> +    pci_dma_sglist_init(&r->sgl, d, 1);
>      if (r->req.flags & PVSCSI_FLAG_CMD_WITH_SG_LIST) {
>          pvscsi_convert_sglist(r);
>      } else {
> 

Thanks for the heads-up.  Though maybe it should just bypass the IOMMU
like virtio; I'll think about it.

Paolo

* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion David Gibson
@ 2013-04-26  8:23   ` Paolo Bonzini
  2013-04-26 11:31     ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-26  8:23 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel

Il 26/04/2013 08:02, David Gibson ha scritto:
> At the moment, vfio maintains a global list of containers that are assumed
> to be more or less interchangeable, since they are all set up with a
> MemoryListener to have all of system memory mapped.  However, that only
> makes sense if all the containers are used on devices which really do
> expect a dma address space identical to system memory.
> 
> This patch moves towards that by making the list of containers per
> MemoryRegion (which corresponds to a dma address space) instead of global.

This is the same violation of encapsulation that Alex pointed out in v1.
You need to add this capability to VFIO's MemoryListener (either the
one that's already there, or a new one), looking for IOMMU regions in
the region_add callback.
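
A toy sketch of that suggestion, using made-up stand-in types rather than
the MemoryListener API of the time: the listener's region_add callback
inspects each section and branches on whether it is backed by an IOMMU
region.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a MemoryRegionSection; illustrative only. */
typedef struct MemSection {
    bool is_iommu; /* would be something like memory_region_is_iommu(mr) */
} MemSection;

static int iommu_sections_seen;
static int ram_sections_seen;

/* Sketch of a VFIO region_add callback: set up per-IOMMU state when an
 * IOMMU region appears in the address space; otherwise DMA-map the RAM
 * section into the container as the existing listener already does. */
static void vfio_region_add(MemSection *section)
{
    if (section->is_iommu) {
        iommu_sections_seen++; /* hook guest-IOMMU handling here */
    } else {
        ram_sections_seen++;   /* plain RAM: map into the container */
    }
}
```

The point of the suggestion is that VFIO discovers IOMMU regions through
its own listener callbacks, instead of storing VFIO state inside
MemoryRegion itself.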

Paolo

> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/misc/vfio.c         |   48 +++++++++++++++++++++++++++++++++++++++++-------
>  include/exec/memory.h  |    4 ++++
>  include/hw/misc/vfio.h |   20 ++++++++++++++++++++
>  memory.c               |    2 ++
>  stubs/Makefile.objs    |    1 +
>  stubs/vfio.c           |    5 +++++
>  6 files changed, 73 insertions(+), 7 deletions(-)
>  create mode 100644 include/hw/misc/vfio.h
>  create mode 100644 stubs/vfio.c
> 
> diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
> index 9057964..707a63c 100644
> --- a/hw/misc/vfio.c
> +++ b/hw/misc/vfio.c
> @@ -39,6 +39,7 @@
>  #include "qemu/range.h"
>  #include "sysemu/kvm.h"
>  #include "sysemu/sysemu.h"
> +#include "hw/misc/vfio.h"
>  
>  /* #define DEBUG_VFIO */
>  #ifdef DEBUG_VFIO
> @@ -177,10 +178,11 @@ typedef struct VFIOGroup {
>      QLIST_ENTRY(VFIOGroup) container_next;
>  } VFIOGroup;
>  
> -#define MSIX_CAP_LENGTH 12
> +typedef struct VFIOIommu {
> +    QLIST_HEAD(, VFIOContainer) containers;
> +} VFIOIommu;
>  
> -static QLIST_HEAD(, VFIOContainer)
> -    container_list = QLIST_HEAD_INITIALIZER(container_list);
> +#define MSIX_CAP_LENGTH 12
>  
>  static QLIST_HEAD(, VFIOGroup)
>      group_list = QLIST_HEAD_INITIALIZER(group_list);
> @@ -2615,8 +2617,40 @@ static int vfio_load_rom(VFIODevice *vdev)
>      return 0;
>  }
>  
> +static VFIOIommu *iommu_get_vfio(MemoryRegion *iommu)
> +{
> +    VFIOIommu *vfio_iommu;
> +
> +    if (iommu->vfio) {
> +        return iommu->vfio;
> +    }
> +
> +    vfio_iommu = g_malloc0(sizeof(*vfio_iommu));
> +    QLIST_INIT(&vfio_iommu->containers);
> +    iommu->vfio = vfio_iommu;
> +    return vfio_iommu;
> +}
> +
> +void vfio_iommu_destroy(MemoryRegion *iommu)
> +{
> +    VFIOIommu *vfio_iommu = iommu->vfio;
> +
> +    if (!vfio_iommu) {
> +        return; /* Nothing to do */
> +    }
> +
> +    iommu->vfio = NULL;
> +
> +    /* Devices and the groups associated should already have been
> +     * disconnected at this point */
> +    assert(QLIST_EMPTY(&vfio_iommu->containers));
> +
> +    g_free(vfio_iommu);
> +}
> +
>  static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
>  {
> +    VFIOIommu *vfio_iommu = iommu_get_vfio(iommu);
>      VFIOContainer *container;
>      int ret, fd;
>  
> @@ -2630,7 +2664,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
>          }
>      }
>  
> -    QLIST_FOREACH(container, &container_list, next) {
> +    QLIST_FOREACH(container, &vfio_iommu->containers, next) {
>          if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
>              group->container = container;
>              QLIST_INSERT_HEAD(&container->group_list, group, container_next);
> @@ -2685,7 +2719,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
>      }
>  
>      QLIST_INIT(&container->group_list);
> -    QLIST_INSERT_HEAD(&container_list, container, next);
> +    QLIST_INSERT_HEAD(&vfio_iommu->containers, container, next);
>  
>      group->container = container;
>      QLIST_INSERT_HEAD(&container->group_list, group, container_next);
> @@ -2693,7 +2727,7 @@ static int vfio_connect_iommu(VFIOGroup *group, MemoryRegion *iommu)
>      return 0;
>  }
>  
> -static void vfio_disconnect_context(VFIOGroup *group)
> +static void vfio_disconnect_iommu(VFIOGroup *group)
>  {
>      VFIOContainer *container = group->container;
>  
> @@ -2783,7 +2817,7 @@ static void vfio_put_group(VFIOGroup *group)
>          return;
>      }
>  
> -    vfio_disconnect_context(group);
> +    vfio_disconnect_iommu(group);
>      QLIST_REMOVE(group, next);
>      DPRINTF("vfio_put_group: close group->fd\n");
>      close(group->fd);
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 99d51b7..259b8eb 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -129,6 +129,9 @@ struct MemoryRegionIOMMUOps {
>  typedef struct CoalescedMemoryRange CoalescedMemoryRange;
>  typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd;
>  
> +typedef struct VFIOIommu VFIOIommu;
> +struct VFIOIommu;
> +
>  struct MemoryRegion {
>      /* All fields are private - violators will be prosecuted */
>      const MemoryRegionOps *ops;
> @@ -160,6 +163,7 @@ struct MemoryRegion {
>      unsigned ioeventfd_nb;
>      MemoryRegionIoeventfd *ioeventfds;
>      struct AddressSpace *iommu_target_as;
> +    VFIOIommu *vfio;
>  };
>  
>  struct MemoryRegionPortio {
> diff --git a/include/hw/misc/vfio.h b/include/hw/misc/vfio.h
> new file mode 100644
> index 0000000..602e2f9
> --- /dev/null
> +++ b/include/hw/misc/vfio.h
> @@ -0,0 +1,20 @@
> +/*
> + * vfio based device assignment
> + *
> + * Copyright 2013 David Gibson, IBM Corporation.
> + * Copyright Red Hat, Inc. 2012
> + *
> + * This work is licensed under the terms of the GNU GPL, version
> + * 2. See the COPYING file in the top-level directory.
> + */
> +#ifndef QEMU_VFIO_H
> +#define QEMU_VFIO_H
> +
> +#include "qemu/queue.h"
> +
> +typedef struct MemoryRegion MemoryRegion;
> +struct MemoryRegion;
> +
> +void vfio_iommu_destroy(MemoryRegion *iommu);
> +
> +#endif /* QEMU_VFIO_H */
> diff --git a/memory.c b/memory.c
> index cee3e59..50889d5 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -21,6 +21,7 @@
>  #include <assert.h>
>  
>  #include "exec/memory-internal.h"
> +#include "hw/misc/vfio.h"
>  
>  static unsigned memory_region_transaction_depth;
>  static bool memory_region_update_pending;
> @@ -1043,6 +1044,7 @@ void memory_region_init_reservation(MemoryRegion *mr,
>  
>  void memory_region_destroy(MemoryRegion *mr)
>  {
> +    vfio_iommu_destroy(mr);
>      assert(QTAILQ_EMPTY(&mr->subregions));
>      assert(memory_region_transaction_depth == 0);
>      mr->destructor(mr);
> diff --git a/stubs/Makefile.objs b/stubs/Makefile.objs
> index 9c55b34..858ca6b 100644
> --- a/stubs/Makefile.objs
> +++ b/stubs/Makefile.objs
> @@ -20,6 +20,7 @@ stub-obj-y += reset.o
>  stub-obj-y += set-fd-handler.o
>  stub-obj-y += slirp.o
>  stub-obj-y += sysbus.o
> +stub-obj-y += vfio.o
>  stub-obj-y += vm-stop.o
>  stub-obj-y += vmstate.o
>  stub-obj-$(CONFIG_WIN32) += fd-register.o
> diff --git a/stubs/vfio.c b/stubs/vfio.c
> new file mode 100644
> index 0000000..6aacaf6
> --- /dev/null
> +++ b/stubs/vfio.c
> @@ -0,0 +1,5 @@
> +#include "hw/misc/vfio.h"
> +
> +void vfio_iommu_destroy(MemoryRegion *iommu)
> +{
> +}
> 

* Re: [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes
  2013-04-26  8:19   ` Paolo Bonzini
@ 2013-04-26 11:04     ` David Gibson
  0 siblings, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-26 11:04 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel

On Fri, Apr 26, 2013 at 10:19:55AM +0200, Paolo Bonzini wrote:
> Il 26/04/2013 08:02, David Gibson ha scritto:
> > vmw_pvscsi.c directly calls qemu_sglist_init() instead of using the
> > pci_dma_sglist_init() helper for PCI devices, which means it was broken
> > by Paolo Bonzini's recent addition of iommu support to the memory API.
> > This fixes it.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  hw/scsi/vmw_pvscsi.c |    2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/hw/scsi/vmw_pvscsi.c b/hw/scsi/vmw_pvscsi.c
> > index 4b4a58f..68c8d58 100644
> > --- a/hw/scsi/vmw_pvscsi.c
> > +++ b/hw/scsi/vmw_pvscsi.c
> > @@ -617,7 +617,7 @@ pvscsi_build_sglist(PVSCSIState *s, PVSCSIRequest *r)
> >  {
> >      PCIDevice *d = PCI_DEVICE(s);
> >  
> > -    qemu_sglist_init(&r->sgl, 1, pci_dma_context(d));
> > +    pci_dma_sglist_init(&r->sgl, d, 1);
> >      if (r->req.flags & PVSCSI_FLAG_CMD_WITH_SG_LIST) {
> >          pvscsi_convert_sglist(r);
> >      } else {
> > 
> 
> Thanks for the heads-up.  Though maybe it should just bypass the IOMMU
> like virtio, I'll think about it.

Well, maybe.  virtio shouldn't really bypass the iommu, that's
basically a historical bug in the specification.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-26  8:23   ` Paolo Bonzini
@ 2013-04-26 11:31     ` David Gibson
  2013-04-26 13:40       ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-26 11:31 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Fri, Apr 26, 2013 at 10:23:40AM +0200, Paolo Bonzini wrote:
> Il 26/04/2013 08:02, David Gibson ha scritto:
> > At the moment, vfio maintains a global list of containers that are assumed
> > to be more or less interchangeable, since they are all set up with a
> > MemoryListener to have all of system memory mapped.  However, that only
> > makes sense if all the containers are used on devices which really do
> > expect a dma address space identical to system memory.
> > 
> > This patch moves towards that by making the list of containers per
> > MemoryRegion (which corresponds to a dma address space) instead of global.
> 
> This is the same violation of encapsulation that Alex pointed out in
> v1.

As I said in response to that, I can't see any way to avoid this that
isn't much uglier.  Fundamentally we need to associate the in-kernel
notion of an address space (the vfio container) with the qemu object.
Adding an opaque pointer to the MemoryRegion seems a lot cleaner than
having some data structure in the vfio code that's indexed by
MemoryRegion.
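The trade-off being argued here, an opaque pointer stored in the MemoryRegion versus a vfio-side structure indexed by MemoryRegion, can be sketched with simplified stand-ins. None of the types or names below are QEMU's real ones; a fixed array stands in for whatever data structure a real side table would use:

```c
#include <stddef.h>

/* Simplified stand-ins -- not QEMU's real structures. */
typedef struct MemoryRegion {
    const char *name;
    void *iommu_private;    /* option 1: opaque pointer stored in the region */
} MemoryRegion;

typedef struct VFIOContainerList {
    int dummy;              /* hypothetical per-address-space vfio state */
} VFIOContainerList;

/*
 * Option 2: a side table in the vfio code, indexed by MemoryRegion.
 * A real implementation would likely use a hash table; a fixed array
 * keeps the sketch self-contained.
 */
#define MAX_REGIONS 16
static struct {
    const MemoryRegion *mr;
    VFIOContainerList *containers;
} side_table[MAX_REGIONS];
static int side_table_len;

void vfio_side_table_insert(const MemoryRegion *mr, VFIOContainerList *c)
{
    side_table[side_table_len].mr = mr;
    side_table[side_table_len].containers = c;
    side_table_len++;
}

VFIOContainerList *vfio_side_table_lookup(const MemoryRegion *mr)
{
    for (int i = 0; i < side_table_len; i++) {
        if (side_table[i].mr == mr) {
            return side_table[i].containers;
        }
    }
    return NULL;
}
```

Both routes end at the same vfio state; the disagreement is only about which side owns the association.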

>  You need to add this capability to VFIO's MemoryListener (either the
> one that's already there, or a new one), looking for IOMMU regions in
> the region_add callback.

Putting it in the MemoryListener doesn't work.  First, there could be
multiple listeners if there are multiple containers attached to the
MemoryRegion - the information we have here needs to be per
MemoryRegion.  More importantly, what I'm working towards here is
vfio support for guest visible IOMMUs that don't have all of guest RAM
mapped into them initially.  In that case there won't (and can't) be
any MemoryListener at all.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-26 11:31     ` David Gibson
@ 2013-04-26 13:40       ` Paolo Bonzini
  2013-04-27  9:49         ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-26 13:40 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel

Il 26/04/2013 13:31, David Gibson ha scritto:
>> You need to add this capability to VFIO's MemoryListener (either
>> the one that's already there, or a new one), looking for IOMMU
>> regions in the region_add callback.
> 
> Putting it in the MemoryListener doesn't work.  First, there could
> be multiple listeners if there are multiple containers attached to
> the MemoryRegion - the information we have here needs to be per 
> MemoryRegion.

There's no fundamental reason for VFIO to use multiple
MemoryListeners.  It could use one for all VFIO instances.

> More importantly, what I'm working towards here is vfio support for
> guest visible IOMMUs that don't have all of guest RAM mapped into
> them initially.  In that case there won't (and can't) be any
> MemoryListener at all.

Why?  All you want here is to look for appearance of an IOMMU
MemoryRegion in the flat representation of the AddressSpace.  That's
exactly what the MemoryListener does---of course that's a different
MemoryListener implementation than VFIO's current one.

The MemoryListener is used for a lot of different things, I find it
hard to believe that this is not a variation on one of them.

Paolo



* Re: [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2)
  2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
                   ` (3 preceding siblings ...)
  2013-04-26  6:02 ` [Qemu-devel] [PATCH 4/4] vfio: Only use memory listeners when appropriate David Gibson
@ 2013-04-26 15:42 ` Alex Williamson
  2013-04-27  9:51   ` David Gibson
  4 siblings, 1 reply; 30+ messages in thread
From: Alex Williamson @ 2013-04-26 15:42 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, pbonzini, qemu-devel

On Fri, 2013-04-26 at 16:02 +1000, David Gibson wrote:
> This patch series represents a second attempt at better integration of
> the vfio code with qemu's handling of guest IOMMUs.  It is based on
> Paolo Bonzini's tree at git://github.com/bonzini/qemu.git (iommu
> branch).
> 
> > This series should open the way for using VFIO with a guest system
> containing an IOMMU by passing guest IOMMU operations through to the
> host IOMMU via VFIO.  That's opposed to the present model of having no
> IOMMU in the guest, and simply mapping all guest RAM into the host
> IOMMU.
> 
> Patch 1 is just a trivial fix for a build problem introduced in the
> iommu tree.

I support what this series is trying to achieve, but not how it does it.
vfio needs to be just another driver, the ugliness and heavy lifting
should be done in vfio or the common code should be extended to
generically support the hooks that vfio needs.  We can't just drop vfio
pointers and callbacks into unrelated code.  Thanks,

Alex


* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-26 13:40       ` Paolo Bonzini
@ 2013-04-27  9:49         ` David Gibson
  2013-04-27 12:17           ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-27  9:49 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Fri, Apr 26, 2013 at 03:40:30PM +0200, Paolo Bonzini wrote:
> 
> Il 26/04/2013 13:31, David Gibson ha scritto:
> >> You need to add this capability to VFIO's MemoryListener (either
> >> the one that's already there, or a new one), looking for IOMMU
> >> regions in the region_add callback.
> > 
> > Putting it in the MemoryListener doesn't work.  First, there could
> > be multiple listeners if there are multiple containers attached to
> > the MemoryRegion - the information we have here needs to be per 
> > MemoryRegion.
> 
> There's no fundamental reason for VFIO to use multiple
> MemoryListeners.  It could use one for all VFIO instances.

Actually, there kind of is.  Using a new listener for each new
container means the listener framework automatically invokes callbacks
for all the existing regions, allowing us to map them into the
container.  With a single listener, we'd have to manually write logic
to map all the existing mappings into each new container.
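The replay behaviour described above can be illustrated with a toy version of the listener pattern: registering a fresh listener walks every already-existing section through its region_add callback, which is what spares vfio from hand-written catch-up logic for each new container. All names here are simplified stand-ins, not QEMU's actual API:

```c
#include <stddef.h>

/* Simplified stand-ins for QEMU's memory listener machinery. */
typedef struct Section { unsigned long start, size; } Section;

typedef struct Listener {
    void (*region_add)(struct Listener *l, const Section *s);
    struct Listener *next;
} Listener;

#define MAX_SECTIONS 8
static Section sections[MAX_SECTIONS];
static int nr_sections;
static Listener *listeners;

/* Adding a section notifies every registered listener. */
void section_add(Section s)
{
    sections[nr_sections++] = s;
    for (Listener *l = listeners; l; l = l->next) {
        l->region_add(l, &sections[nr_sections - 1]);
    }
}

/*
 * Registering a listener replays all *existing* sections into it --
 * this is what lets a per-container listener populate a new vfio
 * container with mappings that predate it.
 */
void listener_register(Listener *l)
{
    l->next = listeners;
    listeners = l;
    for (int i = 0; i < nr_sections; i++) {
        l->region_add(l, &sections[i]);
    }
}

/* Demo subscriber: count region_add callbacks, as a container would map. */
static int mapped;
static void count_add(Listener *l, const Section *s)
{
    (void)l;
    (void)s;
    mapped++;
}
```

With a single shared listener, the second half of listener_register would have to be reimplemented by hand for every new container.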

> > More importantly, what I'm working towards here is vfio support for
> > guest visible IOMMUs that don't have all of guest RAM mapped into
> > them initially.  In that case there won't (and can't) be any
> > MemoryListener at all.
> 
> Why?  All you want here is to look for appearance of an IOMMU
> MemoryRegion in the flat representation of the AddressSpace.  That's
> exactly what the MemoryListener does---of course that's a different
> MemoryListener implementation than VFIO's current one.
> 
> The MemoryListener is used for a lot of different things, I find it
> hard to believe that this is not a variation on one of them.

Uh.. so, yes we might actually want a memorylistener to watch for
appearance of iommu regions, if there are memoryregion layers between
us and the iommu region itself.

But the point remains that from the code which implements the guest
side IOMMU we have to somehow invoke a hook which will make parallel
mappings in the vfio container.  That will need to go from the iommu
code's handle on the address space - a MemoryRegion - to vfio's handle
on the address space, the vfio container.  I don't see a way to do
that without having a vfio private pointer in the memoryregion, that
isn't much uglier than that minor encapsulation breach.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2)
  2013-04-26 15:42 ` [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) Alex Williamson
@ 2013-04-27  9:51   ` David Gibson
  0 siblings, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-27  9:51 UTC (permalink / raw)
  To: Alex Williamson; +Cc: aik, pbonzini, qemu-devel


On Fri, Apr 26, 2013 at 09:42:31AM -0600, Alex Williamson wrote:
> On Fri, 2013-04-26 at 16:02 +1000, David Gibson wrote:
> > This patch series represents a second attempt at better integration of
> > the vfio code with qemu's handling of guest IOMMUs.  It is based on
> > Paolo Bonzini's tree at git://github.com/bonzini/qemu.git (iommu
> > branch).
> > 
> > This series should open the way for using VFIO with a guest system
> > containing an IOMMU by passing guest IOMMU operations through to the
> > host IOMMU via VFIO.  That's opposed to the present model of having no
> > IOMMU in the guest, and simply mapping all guest RAM into the host
> > IOMMU.
> > 
> > Patch 1 is just a trivial fix for a build problem introduced in the
> > iommu tree.
> 
> I support what this series is trying to achieve, but not how it does it.
> vfio needs to be just another driver,

That's a desirable goal, but given vfio has inherently unique
constraints, I'm not sure it's achievable.

> the ugliness and heavy lifting
> should be done in vfio or the common code should be extended to
> generically support the hooks that vfio needs.  We can't just drop vfio
> pointers and callbacks into unrelated code.  Thanks,

Well, suggest a way to do it, then..

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-27  9:49         ` David Gibson
@ 2013-04-27 12:17           ` Paolo Bonzini
  2013-04-28  1:58             ` David Gibson
  2013-04-29  2:11             ` [Qemu-devel] [PATCH] memory: give a name to every AddressSpace Alexey Kardashevskiy
  0 siblings, 2 replies; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-27 12:17 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel

Il 27/04/2013 11:49, David Gibson ha scritto:
>> There's no fundamental reason for VFIO to use multiple 
>> MemoryListeners.  It could use one for all VFIO instances.
> 
> Actually, there kind of is.  Using a new listener for each new 
> container means the listener framework automatically invokes
> callbacks for all the existing regions, allowing us to map them
> into the container.  With a single listener, we'd have to manually
> write logic to map all the existing mappings into each new
> container.

Ah, thanks for teaching me.  It is indeed more convenient.

>>> More importantly, what I'm working towards here is vfio support
>>> for guest visible IOMMUs that don't have all of guest RAM
>>> mapped into them initially.  In that case there won't (and
>>> can't) be any MemoryListener at all.
>> 
>> Why?  All you want here is to look for appearance of an IOMMU 
>> MemoryRegion in the flat representation of the AddressSpace.
>> That's exactly what the MemoryListener does---of course that's a
>> different MemoryListener implementation than VFIO's current one.
>> 
>> The MemoryListener is used for a lot of different things, I find
>> it hard to believe that this is not a variation on one of them.
> 
> Uh.. so, yes we might actually want a memorylistener to watch for 
> appearance of iommu regions, if there are memoryregion layers
> between us and the iommu region itself.
> 
> But the point remains that from the code which implements the
> guest side IOMMU we have to somehow invoke a hook which will make
> parallel mappings in the vfio container.  That will need to go from
> the iommu code's handle on the address space - a MemoryRegion - to
> vfio's handle on the address space, the vfio container.  I don't
> see a way to do that without having a vfio private pointer in the
> memoryregion, that isn't much uglier than that minor encapsulation
> breach.

Ok, knowing about changes that happen in the IOMMU mapping is indeed
out of scope of MemoryListeners.  What about adding a NotifierList?
Then VFIO can register a notifier and use it to learn about "events"
in the IOMMU mapping.  The notifier data can be a MemoryRegionSection
or IOMMUTLBEntry, whatever you find more convenient.
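The NotifierList idea might look roughly like the sketch below: the IOMMU code keeps a list of notifiers and fires them on every map or unmap, and vfio subscribes one notifier per container. All types here are hedged stand-ins, not QEMU's real ones; the payload deliberately carries an iova field, which David notes below that the real IOMMUTLBEntry of the time lacked:

```c
#include <stddef.h>

typedef unsigned long long hwaddr;

/* Stand-in for the suggested payload -- roughly an IOMMUTLBEntry. */
typedef struct IOMMUMapEvent {
    hwaddr iova;             /* guest IO virtual address being (un)mapped */
    hwaddr translated_addr;  /* where it translates to */
    int map;                 /* 1 = map, 0 = unmap */
} IOMMUMapEvent;

typedef struct Notifier {
    void (*notify)(struct Notifier *n, void *data);
    struct Notifier *next;
} Notifier;

typedef struct NotifierList { Notifier *head; } NotifierList;

void notifier_list_add(NotifierList *nl, Notifier *n)
{
    n->next = nl->head;
    nl->head = n;
}

void notifier_list_notify(NotifierList *nl, void *data)
{
    for (Notifier *n = nl->head; n; n = n->next) {
        n->notify(n, data);
    }
}

/*
 * Demo subscriber: record the last event, as a vfio container might
 * before issuing the corresponding host IOMMU map/unmap ioctl.
 */
static IOMMUMapEvent last_event;
static void vfio_iommu_notify(Notifier *n, void *data)
{
    (void)n;
    last_event = *(IOMMUMapEvent *)data;
}
```

The guest IOMMU emulation would call notifier_list_notify from its map/unmap path, keeping vfio out of the generic memory code entirely.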

Paolo



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-27 12:17           ` Paolo Bonzini
@ 2013-04-28  1:58             ` David Gibson
  2013-04-29  8:11               ` Paolo Bonzini
  2013-04-29  2:11             ` [Qemu-devel] [PATCH] memory: give a name to every AddressSpace Alexey Kardashevskiy
  1 sibling, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-28  1:58 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Sat, Apr 27, 2013 at 02:17:54PM +0200, Paolo Bonzini wrote:
> 
> Il 27/04/2013 11:49, David Gibson ha scritto:
> >> There's no fundamental reason for VFIO to use multiple 
> >> MemoryListeners.  It could use one for all VFIO instances.
> > 
> > Actually, there kind of is.  Using a new listener for each new 
> > container means the listener framework automatically invokes
> > callbacks for all the existing regions, allowing us to map them
> > into the container.  With a single listener, we'd have to manually
> > write logic to map all the existing mappings into each new
> > container.
> 
> Ah, thanks for teaching me.  It is indeed more convenient.
> 
> >>> More importantly, what I'm working towards here is vfio support
> >>> for guest visible IOMMUs that don't have all of guest RAM
> >>> mapped into them initially.  In that case there won't (and
> >>> can't) be any MemoryListener at all.
> >> 
> >> Why?  All you want here is to look for appearance of an IOMMU 
> >> MemoryRegion in the flat representation of the AddressSpace.
> >> That's exactly what the MemoryListener does---of course that's a
> >> different MemoryListener implementation than VFIO's current one.
> >> 
> >> The MemoryListener is used for a lot of different things, I find
> >> it hard to believe that this is not a variation on one of them.
> > 
> > Uh.. so, yes we might actually want a memorylistener to watch for 
> > appearance of iommu regions, if there are memoryregion layers
> > between us and the iommu region itself.
> > 
> > But the point remains that from the code which implements the
> > guest side IOMMU we have to somehow invoke a hook which will make
> > parallel mappings in the vfio container.  That will need to go from
> > the iommu code's handle on the address space - a MemoryRegion - to
> > vfio's handle on the address space, the vfio container.  I don't
> > see a way to do that without having a vfio private pointer in the
> > memoryregion, that isn't much uglier than that minor encapsulation
> > breach.
> 
> Ok, knowing about changes that happen in the IOMMU mapping is indeed
> out of scope of MemoryListeners.  What about adding a NotifierList?
> Then VFIO can register a notifier and use it to learn about "events"
> in the IOMMU mapping.  The notifier data can be a MemoryRegionSection
> or IOMMUTLBEntry, whatever you find more convenient.

For the generic case a Notifier could work in principle.  Neither of
those structures is suitable as the data though: constructing a
MemoryRegionSection for every page we map into the IOTLB is far too
heavyweight, and the IOMMUTLBEntry doesn't contain the IOVA.

There still might be complications with setup and teardown - on
pseries, for example, we need to do some bookkeeping when a container
is first attached to an iommu address space.  Likewise we may have
some vfio specific stuff to do to clean up when an iommu address space
is removed.

For pseries, we also have a complication to handle our accelerated
case, where the IOMMU interface (hypercalls, in our case) are handled
in kvm, and directly do the host IOMMU manipulations for VFIO, without
bouncing through qemu.  To do that we need to tell the kernel the
(qemu allocated) ID for the address space so it can associated that
with the VFIO container.

Thinking it over, I think what that mostly amounts to is that if the
VFIO aspects of the address space are already wired up by the host
bridge, then the individual vfio-pci devices need a way of going from
their qemu iommu address space (which they get from pci_dev->iommu) to
the vfio specific information about that address space.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* [Qemu-devel] [PATCH] memory: give a name to every AddressSpace
  2013-04-27 12:17           ` Paolo Bonzini
  2013-04-28  1:58             ` David Gibson
@ 2013-04-29  2:11             ` Alexey Kardashevskiy
  2013-04-29  8:16               ` Paolo Bonzini
  1 sibling, 1 reply; 30+ messages in thread
From: Alexey Kardashevskiy @ 2013-04-29  2:11 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Alexey Kardashevskiy, qemu-trivial, qemu-devel, David Gibson

The "info mtree" command in the QEMU console prints only the "memory" and
"I/O" address spaces, while there are actually many more AddressSpace
structs created by PCI and VIO devices. Those address spaces do not normally
have names and are therefore not present in the "info mtree" output.

The patch fixes this.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---


The number of AddressSpace structs is constantly growing (even without an IOMMU)
and it is getting harder to trace them, which is why I came up with this patch.
Or is there a reason to hide those AddressSpace structs that I am not seeing?


---
 exec.c                |    6 ++----
 hw/pci/pci.c          |    3 ++-
 hw/ppc/spapr_vio.c    |    2 +-
 include/exec/memory.h |    2 +-
 memory.c              |    7 ++++---
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/exec.c b/exec.c
index 180a345..0091272 100644
--- a/exec.c
+++ b/exec.c
@@ -1839,13 +1839,11 @@ static void memory_map_init(void)
 {
     system_memory = g_malloc(sizeof(*system_memory));
     memory_region_init(system_memory, "system", INT64_MAX);
-    address_space_init(&address_space_memory, system_memory);
-    address_space_memory.name = "memory";
+    address_space_init(&address_space_memory, system_memory, "memory");
 
     system_io = g_malloc(sizeof(*system_io));
     memory_region_init(system_io, "io", 65536);
-    address_space_init(&address_space_io, system_io);
-    address_space_io.name = "I/O";
+    address_space_init(&address_space_io, system_io, "I/O");
 
     memory_listener_register(&core_memory_listener, &address_space_memory);
     memory_listener_register(&io_memory_listener, &address_space_io);
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index fe146dc..0a1acd6 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -823,7 +823,8 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     memory_region_init_alias(&pci_dev->bus_master_enable_region, "bus master",
                              pci_dev->iommu, 0, memory_region_size(pci_dev->iommu));
     memory_region_set_enabled(&pci_dev->bus_master_enable_region, false);
-    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region);
+    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region,
+                       name);
 
     pci_dev->devfn = devfn;
     pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
index fbcbd6f..fadde20 100644
--- a/hw/ppc/spapr_vio.c
+++ b/hw/ppc/spapr_vio.c
@@ -449,7 +449,7 @@ static int spapr_vio_busdev_init(DeviceState *qdev)
     if (pc->rtce_window_size) {
         uint32_t liobn = SPAPR_VIO_BASE_LIOBN | dev->reg;
         dev->tcet = spapr_tce_new_table(liobn, pc->rtce_window_size);
-        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet));
+        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet), qdev->id);
     }
 
     return pc->init(dev);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 99d51b7..b4f1182 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -840,7 +840,7 @@ void mtree_info(fprintf_function mon_printf, void *f);
  * @as: an uninitialized #AddressSpace
 * @root: a #MemoryRegion that routes addresses for the address space
  */
-void address_space_init(AddressSpace *as, MemoryRegion *root);
+void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name);
 
 
 /**
diff --git a/memory.c b/memory.c
index cee3e59..b50f5e6 100644
--- a/memory.c
+++ b/memory.c
@@ -1002,7 +1002,7 @@ void memory_region_init_iommu(MemoryRegion *mr,
     mr->terminates = true;  /* then re-forwards */
     mr->destructor = memory_region_destructor_iommu;
     mr->iommu_target_as = g_new(AddressSpace, 1);
-    address_space_init(mr->iommu_target_as, target);
+    address_space_init(mr->iommu_target_as, target, name);
 }
 
 static uint64_t invalid_read(void *opaque, hwaddr addr,
@@ -1599,7 +1599,7 @@ void memory_listener_unregister(MemoryListener *listener)
     QTAILQ_REMOVE(&memory_listeners, listener, link);
 }
 
-void address_space_init(AddressSpace *as, MemoryRegion *root)
+void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
 {
     memory_region_transaction_begin();
     as->root = root;
@@ -1608,7 +1608,7 @@ void address_space_init(AddressSpace *as, MemoryRegion *root)
     as->ioeventfd_nb = 0;
     as->ioeventfds = NULL;
     QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
-    as->name = NULL;
+    as->name = g_strdup(name ? name : "anonymous");
     address_space_init_dispatch(as);
     memory_region_update_pending |= root->enabled;
     memory_region_transaction_commit();
@@ -1623,6 +1623,7 @@ void address_space_destroy(AddressSpace *as)
     QTAILQ_REMOVE(&address_spaces, as, address_spaces_link);
     address_space_destroy_dispatch(as);
     flatview_destroy(as->current_map);
+    g_free((void *)as->name);
     g_free(as->current_map);
     g_free(as->ioeventfds);
 }
-- 
1.7.10.4
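The name handling introduced by this patch (duplicate the given string, fall back to "anonymous" for NULL, free it on destroy) can be exercised in isolation. This sketch uses plain libc instead of glib's g_strdup/g_free, and the type name is hypothetical:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the AddressSpace field touched by the patch. */
typedef struct AddressSpaceNameDemo { char *name; } AddressSpaceNameDemo;

/* Plain-libc substitute for g_strdup (strdup itself is POSIX, not ISO C). */
static char *dup_str(const char *s)
{
    size_t len = strlen(s) + 1;
    char *p = malloc(len);
    memcpy(p, s, len);
    return p;
}

/* Mirrors the patch: NULL names become "anonymous". */
void as_init_name(AddressSpaceNameDemo *as, const char *name)
{
    as->name = dup_str(name ? name : "anonymous");
}

/* Mirrors the destroy path: the duplicated name must be freed. */
void as_destroy_name(AddressSpaceNameDemo *as)
{
    free(as->name);
    as->name = NULL;
}
```

Duplicating the string is what makes the g_free in address_space_destroy both necessary and safe, even for callers that pass string literals.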


* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-28  1:58             ` David Gibson
@ 2013-04-29  8:11               ` Paolo Bonzini
  2013-04-29 11:00                 ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-29  8:11 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel

Il 28/04/2013 03:58, David Gibson ha scritto:
>>> Ok, knowing about changes that happen in the IOMMU mapping is
>>> indeed out of scope of MemoryListeners.  What about adding a
>>> NotifierList? Then VFIO can register a notifier and use it to
>>> learn about "events" in the IOMMU mapping.  The notifier data
>>> can be a MemoryRegionSection or IOMMUTableEntry, whatever you
>>> find more convenient.
> For the generic case a Notifier could work in principle.  Neither
> of those structures is suitable as the data though: constructing a 
> MemoryRegionSection for every page we map into the IOTLB is far
> too heavyweight, and the IOMMUTLBEntry doesn't contain the IOVA.

It did in Avi's patch set.  I removed it because it was unused.  I can
add it back if you need it.

> Thinking it over, I think what that mostly amounts to is that if the
> VFIO aspects of the address space are already wired up by the host 
> bridge, then the individual vfio-pci devices need a way of going
> from their qemu iommu address space (which they get from
> pci_dev->iommu) to the vfio specific information about that address
> space.

That can be done with just a hash table, no?

Paolo
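The hash table suggested here would map an AddressSpace pointer to vfio's per-address-space state. A real implementation would likely use glib's GHashTable with direct pointer hashing; the minimal open-addressed sketch below, with stand-in type names, keeps the idea self-contained:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-ins: the AddressSpace is only ever used as an opaque key. */
typedef struct AddressSpace AddressSpace;
typedef struct VFIOAddressSpace { int container_fd; } VFIOAddressSpace;

#define BUCKETS 64   /* power of two, so masking works as modulo */
static struct {
    const AddressSpace *key;
    VFIOAddressSpace *val;
} buckets[BUCKETS];

static size_t hash_ptr(const AddressSpace *as)
{
    /* Drop low alignment bits, then mask into the table. */
    return ((uintptr_t)as >> 4) & (BUCKETS - 1);
}

/* Open addressing with linear probing; no deletion in this sketch. */
void vfio_as_insert(const AddressSpace *as, VFIOAddressSpace *vas)
{
    size_t i = hash_ptr(as);
    while (buckets[i].key != NULL) {
        i = (i + 1) & (BUCKETS - 1);
    }
    buckets[i].key = as;
    buckets[i].val = vas;
}

VFIOAddressSpace *vfio_as_lookup(const AddressSpace *as)
{
    size_t i = hash_ptr(as);
    while (buckets[i].key != NULL) {
        if (buckets[i].key == as) {
            return buckets[i].val;
        }
        i = (i + 1) & (BUCKETS - 1);
    }
    return NULL;
}
```

A vfio-pci device holding only the AddressSpace pointer from pci_dev->iommu could then reach its container through vfio_as_lookup, without any vfio field living inside the memory API's structures.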


* Re: [Qemu-devel] [PATCH] memory: give a name to every AddressSpace
  2013-04-29  2:11             ` [Qemu-devel] [PATCH] memory: give a name to every AddressSpace Alexey Kardashevskiy
@ 2013-04-29  8:16               ` Paolo Bonzini
  2013-04-29  8:21                 ` Alexey Kardashevskiy
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-29  8:16 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: qemu-trivial, qemu-devel, David Gibson

Il 29/04/2013 04:11, Alexey Kardashevskiy ha scritto:
> The "info mtree" command in the QEMU console prints only the "memory" and
> "I/O" address spaces, while there are actually many more AddressSpace
> structs created by PCI and VIO devices. Those address spaces do not normally
> have names and are therefore not present in the "info mtree" output.
> 
> The patch fixes this.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> 
> 
> The number of AddressSpace structs is constantly growing (even without an IOMMU)
> and it is getting harder to trace them, which is why I came up with this patch.
> Or is there a reason to hide those AddressSpace structs that I am not seeing?
> 
> 
> ---
>  exec.c                |    6 ++----
>  hw/pci/pci.c          |    3 ++-
>  hw/ppc/spapr_vio.c    |    2 +-
>  include/exec/memory.h |    2 +-
>  memory.c              |    7 ++++---
>  5 files changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/exec.c b/exec.c
> index 180a345..0091272 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -1839,13 +1839,11 @@ static void memory_map_init(void)
>  {
>      system_memory = g_malloc(sizeof(*system_memory));
>      memory_region_init(system_memory, "system", INT64_MAX);
> -    address_space_init(&address_space_memory, system_memory);
> -    address_space_memory.name = "memory";
> +    address_space_init(&address_space_memory, system_memory, "memory");
>  
>      system_io = g_malloc(sizeof(*system_io));
>      memory_region_init(system_io, "io", 65536);
> -    address_space_init(&address_space_io, system_io);
> -    address_space_io.name = "I/O";
> +    address_space_init(&address_space_io, system_io, "I/O");
>  
>      memory_listener_register(&core_memory_listener, &address_space_memory);
>      memory_listener_register(&io_memory_listener, &address_space_io);
> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
> index fe146dc..0a1acd6 100644
> --- a/hw/pci/pci.c
> +++ b/hw/pci/pci.c
> @@ -823,7 +823,8 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
>      memory_region_init_alias(&pci_dev->bus_master_enable_region, "bus master",
>                               pci_dev->iommu, 0, memory_region_size(pci_dev->iommu));
>      memory_region_set_enabled(&pci_dev->bus_master_enable_region, false);
> -    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region);
> +    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region,
> +                       name);
>  
>      pci_dev->devfn = devfn;
>      pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
> diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
> index fbcbd6f..fadde20 100644
> --- a/hw/ppc/spapr_vio.c
> +++ b/hw/ppc/spapr_vio.c
> @@ -449,7 +449,7 @@ static int spapr_vio_busdev_init(DeviceState *qdev)
>      if (pc->rtce_window_size) {
>          uint32_t liobn = SPAPR_VIO_BASE_LIOBN | dev->reg;
>          dev->tcet = spapr_tce_new_table(liobn, pc->rtce_window_size);
> -        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet));
> +        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet), qdev->id);
>      }
>  
>      return pc->init(dev);
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 99d51b7..b4f1182 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -840,7 +840,7 @@ void mtree_info(fprintf_function mon_printf, void *f);
>   * @as: an uninitialized #AddressSpace
>   * @root: a #MemoryRegion that routes addresses for the address space
>   */
> -void address_space_init(AddressSpace *as, MemoryRegion *root);
> +void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name);
>  
>  

Please update the doc comment.

>  /**
> diff --git a/memory.c b/memory.c
> index cee3e59..b50f5e6 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -1002,7 +1002,7 @@ void memory_region_init_iommu(MemoryRegion *mr,
>      mr->terminates = true;  /* then re-forwards */
>      mr->destructor = memory_region_destructor_iommu;
>      mr->iommu_target_as = g_new(AddressSpace, 1);
> -    address_space_init(mr->iommu_target_as, target);
> +    address_space_init(mr->iommu_target_as, target, name);
>  }
>  
>  static uint64_t invalid_read(void *opaque, hwaddr addr,
> @@ -1599,7 +1599,7 @@ void memory_listener_unregister(MemoryListener *listener)
>      QTAILQ_REMOVE(&memory_listeners, listener, link);
>  }
>  
> -void address_space_init(AddressSpace *as, MemoryRegion *root)
> +void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
>  {
>      memory_region_transaction_begin();
>      as->root = root;
> @@ -1608,7 +1608,7 @@ void address_space_init(AddressSpace *as, MemoryRegion *root)
>      as->ioeventfd_nb = 0;
>      as->ioeventfds = NULL;
>      QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
> -    as->name = NULL;
> +    as->name = g_strdup(name?name:"anonymous");
>      address_space_init_dispatch(as);
>      memory_region_update_pending |= root->enabled;
>      memory_region_transaction_commit();
> @@ -1623,6 +1623,7 @@ void address_space_destroy(AddressSpace *as)
>      QTAILQ_REMOVE(&address_spaces, as, address_spaces_link);
>      address_space_destroy_dispatch(as);
>      flatview_destroy(as->current_map);
> +    g_free((void *)as->name);

No cast here.

Paolo

>      g_free(as->current_map);
>      g_free(as->ioeventfds);
>  }
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [Qemu-devel] [PATCH] memory: give name every AddressSpace
  2013-04-29  8:16               ` Paolo Bonzini
@ 2013-04-29  8:21                 ` Alexey Kardashevskiy
  2013-04-29  9:25                   ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: Alexey Kardashevskiy @ 2013-04-29  8:21 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-trivial, qemu-devel, David Gibson

On 04/29/2013 06:16 PM, Paolo Bonzini wrote:
> Il 29/04/2013 04:11, Alexey Kardashevskiy ha scritto:
>> The "info mtree" command in the QEMU console prints only the "memory" and
>> "I/O" address spaces, while there are actually many more AddressSpace
>> structs created by PCI and VIO devices.  Those devices do not normally
>> have names and are therefore not present in "info mtree" output.
>>
>> This patch fixes that.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>>
>>
>> The number of AddressSpace structs is constantly growing (even without an
>> IOMMU), and it is getting harder to trace them, which is why I came up with
>> this patch.  Or is there a reason to hide those AddressSpace structs that I
>> am not seeing?
>>
>>
>> ---
>>  exec.c                |    6 ++----
>>  hw/pci/pci.c          |    3 ++-
>>  hw/ppc/spapr_vio.c    |    2 +-
>>  include/exec/memory.h |    2 +-
>>  memory.c              |    7 ++++---
>>  5 files changed, 10 insertions(+), 10 deletions(-)
>>
>> diff --git a/exec.c b/exec.c
>> index 180a345..0091272 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -1839,13 +1839,11 @@ static void memory_map_init(void)
>>  {
>>      system_memory = g_malloc(sizeof(*system_memory));
>>      memory_region_init(system_memory, "system", INT64_MAX);
>> -    address_space_init(&address_space_memory, system_memory);
>> -    address_space_memory.name = "memory";
>> +    address_space_init(&address_space_memory, system_memory, "memory");
>>  
>>      system_io = g_malloc(sizeof(*system_io));
>>      memory_region_init(system_io, "io", 65536);
>> -    address_space_init(&address_space_io, system_io);
>> -    address_space_io.name = "I/O";
>> +    address_space_init(&address_space_io, system_io, "I/O");
>>  
>>      memory_listener_register(&core_memory_listener, &address_space_memory);
>>      memory_listener_register(&io_memory_listener, &address_space_io);
>> diff --git a/hw/pci/pci.c b/hw/pci/pci.c
>> index fe146dc..0a1acd6 100644
>> --- a/hw/pci/pci.c
>> +++ b/hw/pci/pci.c
>> @@ -823,7 +823,8 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
>>      memory_region_init_alias(&pci_dev->bus_master_enable_region, "bus master",
>>                               pci_dev->iommu, 0, memory_region_size(pci_dev->iommu));
>>      memory_region_set_enabled(&pci_dev->bus_master_enable_region, false);
>> -    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region);
>> +    address_space_init(&pci_dev->bus_master_as, &pci_dev->bus_master_enable_region,
>> +                       name);
>>  
>>      pci_dev->devfn = devfn;
>>      pstrcpy(pci_dev->name, sizeof(pci_dev->name), name);
>> diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
>> index fbcbd6f..fadde20 100644
>> --- a/hw/ppc/spapr_vio.c
>> +++ b/hw/ppc/spapr_vio.c
>> @@ -449,7 +449,7 @@ static int spapr_vio_busdev_init(DeviceState *qdev)
>>      if (pc->rtce_window_size) {
>>          uint32_t liobn = SPAPR_VIO_BASE_LIOBN | dev->reg;
>>          dev->tcet = spapr_tce_new_table(liobn, pc->rtce_window_size);
>> -        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet));
>> +        address_space_init(&dev->as, spapr_tce_get_iommu(dev->tcet), qdev->id);
>>      }
>>  
>>      return pc->init(dev);
>> diff --git a/include/exec/memory.h b/include/exec/memory.h
>> index 99d51b7..b4f1182 100644
>> --- a/include/exec/memory.h
>> +++ b/include/exec/memory.h
>> @@ -840,7 +840,7 @@ void mtree_info(fprintf_function mon_printf, void *f);
>>   * @as: an uninitialized #AddressSpace
>>   * @root: a #MemoryRegion that routes addresses for the address space
>>   */
>> -void address_space_init(AddressSpace *as, MemoryRegion *root);
>> +void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name);
>>  
>>  
> 
> Please update the doc comment.
> 
>>  /**
>> diff --git a/memory.c b/memory.c
>> index cee3e59..b50f5e6 100644
>> --- a/memory.c
>> +++ b/memory.c
>> @@ -1002,7 +1002,7 @@ void memory_region_init_iommu(MemoryRegion *mr,
>>      mr->terminates = true;  /* then re-forwards */
>>      mr->destructor = memory_region_destructor_iommu;
>>      mr->iommu_target_as = g_new(AddressSpace, 1);
>> -    address_space_init(mr->iommu_target_as, target);
>> +    address_space_init(mr->iommu_target_as, target, name);
>>  }
>>  
>>  static uint64_t invalid_read(void *opaque, hwaddr addr,
>> @@ -1599,7 +1599,7 @@ void memory_listener_unregister(MemoryListener *listener)
>>      QTAILQ_REMOVE(&memory_listeners, listener, link);
>>  }
>>  
>> -void address_space_init(AddressSpace *as, MemoryRegion *root)
>> +void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
>>  {
>>      memory_region_transaction_begin();
>>      as->root = root;
>> @@ -1608,7 +1608,7 @@ void address_space_init(AddressSpace *as, MemoryRegion *root)
>>      as->ioeventfd_nb = 0;
>>      as->ioeventfds = NULL;
>>      QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
>> -    as->name = NULL;
>> +    as->name = g_strdup(name?name:"anonymous");
>>      address_space_init_dispatch(as);
>>      memory_region_update_pending |= root->enabled;
>>      memory_region_transaction_commit();
>> @@ -1623,6 +1623,7 @@ void address_space_destroy(AddressSpace *as)
>>      QTAILQ_REMOVE(&address_spaces, as, address_spaces_link);
>>      address_space_destroy_dispatch(as);
>>      flatview_destroy(as->current_map);
>> +    g_free((void *)as->name);
> 
> No cast here.

?

  CC    ppc64-softmmu/memory.o
/home/alexey/pcipassthru/qemu-impreza/memory.c: In function
'address_space_destroy':
/home/alexey/pcipassthru/qemu-impreza/memory.c:1626:5: warning: passing
argument 1 of 'g_free' discards 'const' qualifier from pointer target type
[enabled by default]
     g_free(/*(void *)*/as->name);
     ^
In file included from
/home/alexey/pcipassthru/lib4qemu/usr/include/glib-2.0/glib/glist.h:34:0,
                 from
/home/alexey/pcipassthru/lib4qemu/usr/include/glib-2.0/glib/ghash.h:35,
                 from
/home/alexey/pcipassthru/lib4qemu/usr/include/glib-2.0/glib.h:52,
                 from
/home/alexey/pcipassthru/qemu-impreza/include/glib-compat.h:17,
                 from
/home/alexey/pcipassthru/qemu-impreza/include/qemu-common.h:43,
                 from
/home/alexey/pcipassthru/qemu-impreza/include/exec/memory.h:21,
                 from /home/alexey/pcipassthru/qemu-impreza/memory.c:16:
/home/alexey/pcipassthru/lib4qemu/usr/include/glib-2.0/glib/gmem.h:70:7:
note: expected 'gpointer' but argument is of type 'const char *'
 void  g_free           (gpointer  mem);
       ^


> Paolo
> 
>>      g_free(as->current_map);
>>      g_free(as->ioeventfds);
>>  }
>>
> 


-- 
Alexey


* Re: [Qemu-devel] [PATCH] memory: give name every AddressSpace
  2013-04-29  8:21                 ` Alexey Kardashevskiy
@ 2013-04-29  9:25                   ` Paolo Bonzini
  2013-04-29 11:09                     ` David Gibson
  2013-04-30  2:14                     ` Alexey Kardashevskiy
  0 siblings, 2 replies; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-29  9:25 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: qemu-trivial, qemu-devel, David Gibson

Il 29/04/2013 10:21, Alexey Kardashevskiy ha scritto:
>>> >> +    g_free((void *)as->name);
>> > 
>> > No cast here.
> ?
> 
>   CC    ppc64-softmmu/memory.o
> /home/alexey/pcipassthru/qemu-impreza/memory.c: In function
> 'address_space_destroy':
> /home/alexey/pcipassthru/qemu-impreza/memory.c:1626:5: warning: passing
> argument 1 of 'g_free' discards 'const' qualifier from pointer target type
> [enabled by default]
>      g_free(/*(void *)*/as->name);
>      ^

Please remove the const from as->name instead.  Since you are strdup-ing
it, and the field is meant to be private to memory.c anyway, you do not
need protection against changing it.

Paolo


* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-29  8:11               ` Paolo Bonzini
@ 2013-04-29 11:00                 ` David Gibson
  2013-04-29 11:38                   ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-29 11:00 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Mon, Apr 29, 2013 at 10:11:06AM +0200, Paolo Bonzini wrote:
> 
> Il 28/04/2013 03:58, David Gibson ha scritto:
> >>> Ok, knowing about changes that happen in the IOMMU mapping is
> >>> indeed out of scope of MemoryListeners.  What about adding a
> >>> NotifierList? Then VFIO can register a notifier and use it to
> >>> learn about "events" in the IOMMU mapping.  The notifier data
> >>> can be a MemoryRegionSection or IOMMUTableEntry, whatever you
> >>> find more convenient.
> > For the generic case a Notifier could work in principle.  Neither
> > of those structures is suitable as the data though: constructing a 
> > MemoryRegionSection for every page we map into the IOTLB is far
> > too heavyweight, and the IOMMUTLBEntry doesn't contain the IOVA.
> 
> It did in Avi's patch set.  I removed it because it was unused.  I can
> add it back if you need it.

Ok, I think I will need it.

> > Thinking over, I think what that mostly amounts to, is that if the 
> > VFIO aspects of the address space are already wired up by the host 
> > bridge, then the individual vfio-pci devices need a way of going
> > from their qemu iommu address space (which they get from
> > pci_dev->iommu) to the vfio specific information about that address
> > space.
> 
> That can be done with just a hash table, no?

Well, yes it can.  But I think that having a whole parallel lookup
structure within vfio is a worse ugliness than having a single opaque
vfio pointer in the MemoryRegion structure.

It does also require making sure the lifetime handling is correct.
The entry from the hash table must be removed before the corresponding
MemoryRegion is free()ed; otherwise we could end up using the same
pointer for a newly constructed MemoryRegion, and get a false lookup
in the hash.  Whether that happens essentially never, or almost
immediately in practice depends on the malloc() implementation, of
course.
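The stale-pointer hazard described above can be sketched in a few lines.
A tiny open-coded array stands in for a real hash table, and the
MemoryRegion/VFIOState types and function names are illustrative
stand-ins, not QEMU's actual structures:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct MemoryRegion { int dummy; } MemoryRegion;
typedef struct VFIOState { int container_fd; } VFIOState;

/* Side table mapping a MemoryRegion pointer to VFIO-private state. */
#define MAX_ENTRIES 16
static struct { MemoryRegion *key; VFIOState *val; } map[MAX_ENTRIES];

static void map_insert(MemoryRegion *mr, VFIOState *vs)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (!map[i].key) { map[i].key = mr; map[i].val = vs; return; }
    }
}

static VFIOState *map_lookup(MemoryRegion *mr)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (map[i].key == mr) return map[i].val;
    }
    return NULL;
}

static void map_remove(MemoryRegion *mr)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (map[i].key == mr) { map[i].key = NULL; map[i].val = NULL; }
    }
}

/* Correct teardown order: drop the map entry *before* freeing the
 * region, so a later malloc() that happens to reuse the same address
 * cannot produce a false lookup. */
static void region_destroy(MemoryRegion *mr)
{
    map_remove(mr);
    free(mr);
}
```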

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH] memory: give name every AddressSpace
  2013-04-29  9:25                   ` Paolo Bonzini
@ 2013-04-29 11:09                     ` David Gibson
  2013-04-30  2:14                     ` Alexey Kardashevskiy
  1 sibling, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-29 11:09 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Alexey Kardashevskiy, qemu-trivial, qemu-devel


On Mon, Apr 29, 2013 at 11:25:16AM +0200, Paolo Bonzini wrote:
> Il 29/04/2013 10:21, Alexey Kardashevskiy ha scritto:
> >>> >> +    g_free((void *)as->name);
> >> > 
> >> > No cast here.
> > ?
> > 
> >   CC    ppc64-softmmu/memory.o
> > /home/alexey/pcipassthru/qemu-impreza/memory.c: In function
> > 'address_space_destroy':
> > /home/alexey/pcipassthru/qemu-impreza/memory.c:1626:5: warning: passing
> > argument 1 of 'g_free' discards 'const' qualifier from pointer target type
> > [enabled by default]
> >      g_free(/*(void *)*/as->name);
> >      ^
> 
> Please remove the const from as->name instead.  Since you are strdup-ing
> it, and the field is meant to be private to memory.c anyway, you do not
> need protection against changing it.

As a general rule, pointer variables that you malloc()ate should not
be const.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-29 11:00                 ` David Gibson
@ 2013-04-29 11:38                   ` Paolo Bonzini
  2013-04-29 11:56                     ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-29 11:38 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel


Il 29/04/2013 13:00, David Gibson ha scritto:
> On Mon, Apr 29, 2013 at 10:11:06AM +0200, Paolo Bonzini wrote:
>> 
>> Il 28/04/2013 03:58, David Gibson ha scritto:
>>>>> Ok, knowing about changes that happen in the IOMMU mapping
>>>>> is indeed out of scope of MemoryListeners.  What about
>>>>> adding a NotifierList? Then VFIO can register a notifier
>>>>> and use it to learn about "events" in the IOMMU mapping.
>>>>> The notifier data can be a MemoryRegionSection or
>>>>> IOMMUTableEntry, whatever you find more convenient.
>>> For the generic case a Notifier could work in principle.
>>> Neither of those structures is suitable as the data though:
>>> constructing a MemoryRegionSection for every page we map into
>>> the IOTLB is far too heavyweight, and the IOMMUTLBEntry doesn't
>>> contain the IOVA.
>> 
>> It did in Avi's patch set.  I removed it because it was unused.
>> I can add it back if you need it.
> 
> Ok, I think I will need it.

Let me know. :)

>>> Thinking over, I think what that mostly amounts to, is that if
>>> the VFIO aspects of the address space are already wired up by
>>> the host bridge, then the individual vfio-pci devices need a
>>> way of going from their qemu iommu address space (which they
>>> get from pci_dev->iommu) to the vfio specific information about
>>> that address space.
>> 
>> That can be done with just a hash table, no?
> 
> Well, yes it can.  But I think that having a whole parallel lookup 
> structure within vfio is a worse ugliness than having a single
> opaque vfio pointer in the MemoryRegion structure.

Why should VFIO be any special in this?  It is reassuring to me that
the VFIO maintainer thinks the same. :)

> It does also require making sure the lifetime handling is correct. 
> The entry from the hash table must be removed before the
> corresponding MemoryRegion is free()ed; otherwise we could end up
> using the same pointer for a newly constructed MemoryRegion, and
> get a false lookup in the hash.  Whether that happens essentially
> never, or almost immediately in practice depends on the malloc()
> implementation, of course.

There is only one MemoryRegion per PCI host bridge, and the PCI host
bridge cannot disappear before the VFIO devices on it are torn down.
So the lifetime should not be a problem.

Paolo



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-29 11:38                   ` Paolo Bonzini
@ 2013-04-29 11:56                     ` David Gibson
  2013-04-29 13:44                       ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-29 11:56 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Mon, Apr 29, 2013 at 01:38:06PM +0200, Paolo Bonzini wrote:
> 
> Il 29/04/2013 13:00, David Gibson ha scritto:
> > On Mon, Apr 29, 2013 at 10:11:06AM +0200, Paolo Bonzini wrote:
> >> 
> >> Il 28/04/2013 03:58, David Gibson ha scritto:
> >>>>> Ok, knowing about changes that happen in the IOMMU mapping
> >>>>> is indeed out of scope of MemoryListeners.  What about
> >>>>> adding a NotifierList? Then VFIO can register a notifier
> >>>>> and use it to learn about "events" in the IOMMU mapping.
> >>>>> The notifier data can be a MemoryRegionSection or
> >>>>> IOMMUTableEntry, whatever you find more convenient.
> >>> For the generic case a Notifier could work in principle.
> >>> Neither of those structures is suitable as the data though:
> >>> constructing a MemoryRegionSection for every page we map into
> >>> the IOTLB is far too heavyweight, and the IOMMUTLBEntry doesn't
> >>> contain the IOVA.
> >> 
> >> It did in Avi's patch set.  I removed it because it was unused.
> >> I can add it back if you need it.
> > 
> > Ok, I think I will need it.
> 
> Let me know. :)
> 
> >>> Thinking over, I think what that mostly amounts to, is that if
> >>> the VFIO aspects of the address space are already wired up by
> >>> the host bridge, then the individual vfio-pci devices need a
> >>> way of going from their qemu iommu address space (which they
> >>> get from pci_dev->iommu) to the vfio specific information about
> >>> that address space.
> >> 
> >> That can be done with just a hash table, no?
> > 
> > Well, yes it can.  But I think that having a whole parallel lookup 
> > structure within vfio is a worse ugliness than having a single
> > opaque vfio pointer in the MemoryRegion structure.
> 
> Why should VFIO be any special in this?  It is reassuring to me that
> the VFIO maintainer thinks the same. :)

Because device passthrough is a sufficiently special case, IMO.  It
introduces requirements that are fundamentally different from any
emulated device.

> > It does also require making sure the lifetime handling is correct. 
> > The entry from the hash table must be removed before the
> > corresponding MemoryRegion is free()ed; otherwise we could end up
> > using the same pointer for a newly constructed MemoryRegion, and
> > get a false lookup in the hash.  Whether that happens essentially
> > never, or almost immediately in practice depends on the malloc()
> > implementation, of course.
> 
> There is only one MemoryRegion per PCI host bridge, and the PCI host

That's true for existing examples, but it need not be true.  For
example the Intel IOMMU supports multiple "iommu domains" which are
different address spaces on the same host bridge.  When one of those
domains is reconfigured, we will again need to call into vfio by some
mechanism to readjust the host side iommu configuration accordingly.

> bridge cannot disappear before the VFIO devices on it are torn down.
> So the lifetime should not be a problem.

I think it is very risky to assume that existing constraints control
the lifetime for us, since we don't know what variety of iommus we may
have in future.  I really think we should have an explicit hook which
allows us to call vfio side cleanup code when a guest side iommu
region is destroyed.  Personally, I still think a special-purpose vfio
hook is the simplest way to do that, but a more general use Notifier
list or something in the right place could do the job too.

I do realise why we don't really want vfio specific hooks in the
generic code.  But we absolutely do need some kind of hooks which vfio
can use, and until we have a second potential user of those hooks, I
think general hooks are overengineering.
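For illustration, a NotifierList-style hook of the kind discussed might
look like the following sketch. All names here (IOMMURegion,
iommu_register_notifier, and the iova field that Paolo offered to add
back to IOMMUTLBEntry) are hypothetical, not QEMU's actual API:

```c
#include <assert.h>

/* Hypothetical notifier data: a TLB entry carrying the IOVA. */
typedef struct IOMMUTLBEntry {
    unsigned long iova;          /* guest IO virtual address */
    unsigned long translated;    /* target address */
    unsigned long len;
    int perm;
} IOMMUTLBEntry;

typedef void (*iommu_map_notify)(void *opaque, IOMMUTLBEntry *entry);

/* A fixed-size list stands in for a real NotifierList. */
#define MAX_NOTIFIERS 4
typedef struct IOMMURegion {
    iommu_map_notify notifiers[MAX_NOTIFIERS];
    void *opaques[MAX_NOTIFIERS];
    int n_notifiers;
} IOMMURegion;

static void iommu_register_notifier(IOMMURegion *mr,
                                    iommu_map_notify fn, void *opaque)
{
    mr->notifiers[mr->n_notifiers] = fn;
    mr->opaques[mr->n_notifiers] = opaque;
    mr->n_notifiers++;
}

/* Called by the guest-IOMMU model on each mapping change; a VFIO
 * listener would translate this into host-side map/unmap calls. */
static void iommu_notify_map(IOMMURegion *mr, IOMMUTLBEntry *entry)
{
    for (int i = 0; i < mr->n_notifiers; i++) {
        mr->notifiers[i](mr->opaques[i], entry);
    }
}

/* Example listener: just counts invocations. */
static void count_notify(void *opaque, IOMMUTLBEntry *entry)
{
    (void)entry;
    (*(int *)opaque)++;
}
```

This keeps the generic memory code free of VFIO-specific hooks while
still letting VFIO learn about every IOMMU mapping event.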

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-29 11:56                     ` David Gibson
@ 2013-04-29 13:44                       ` Paolo Bonzini
  2013-04-30  2:05                         ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-29 13:44 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel


Il 29/04/2013 13:56, David Gibson ha scritto:
>> Why should VFIO be any special in this?  It is reassuring to me
>> that the VFIO maintainer thinks the same. :)
> 
> Because device passthrough is a sufficiently special case, IMO.
> It introduces requirements that are fundamentally different from
> any emulated device.
> 
>>> It does also require making sure the lifetime handling is
>>> correct. The entry from the hash table must be removed before
>>> the corresponding MemoryRegion is free()ed; otherwise we could
>>> end up using the same pointer for a newly constructed
>>> MemoryRegion, and get a false lookup in the hash.  Whether that
>>> happens essentially never, or almost immediately in practice
>>> depends on the malloc() implementation, of course.
>> 
>> There is only one MemoryRegion per PCI host bridge, and the PCI
>> host
> 
> That's true for existing examples, but it need not be true.  For 
> example the Intel IOMMU supports multiple "iommu domains" which
> are different address spaces on the same host bridge.  When one of
> those domains is reconfigured, we will again need to call into vfio
> by some mechanism to readjust the host side iommu configuration
> accordingly.
> 
>> bridge cannot disappear before the VFIO devices on it are torn
>> down. So the lifetime should not be a problem.
> 
> I think it is very risky to assume that existing constraints
> control the lifetime for us, since we don't know what variety of
> iommus we may have in future.  I really think we should have an
> explicit hook which allows us to call vfio side cleanup code when a
> guest side iommu region is destroyed.  Personally, I still think a
> special-purpose vfio hook is the simplest way to do that, but a
> more general use Notifier list or something in the right place
> could do the job too.

I think this is a different problem.  Basically the question is "what
happens if a MemoryRegion 'disappears' while an AddressSpace is still
referring to it", and the answer right now is "badness".

We should look at generic fixes before dropping hooks in the code.  At
the very least an "assert(mr->parent == NULL);" is missing in
memory_region_destroy.
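As a rough sketch of the proposed check (field names illustrative, not
QEMU's exact layout): refuse to destroy a MemoryRegion that is still
attached somewhere, so a dangling reference fails loudly instead of
silently causing "badness".

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct MemoryRegion {
    struct MemoryRegion *parent;    /* non-NULL while still mapped */
} MemoryRegion;

static int memory_region_can_destroy(const MemoryRegion *mr)
{
    return mr->parent == NULL;
}

static void memory_region_destroy(MemoryRegion *mr)
{
    /* The assert Paolo suggests: catch destruction of a region that
     * something still refers to. */
    assert(memory_region_can_destroy(mr));
    free(mr);
}
```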

Paolo

> I do realise why we don't really want vfio specific hooks in the 
> generic code.  But we absolutely do need some kind of hooks which
> vfio can use, and until we have a second potential user of those
> hooks, I think general hooks are overengineering.



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-29 13:44                       ` Paolo Bonzini
@ 2013-04-30  2:05                         ` David Gibson
  2013-04-30  2:23                           ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-30  2:05 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Mon, Apr 29, 2013 at 03:44:43PM +0200, Paolo Bonzini wrote:
> 
> Il 29/04/2013 13:56, David Gibson ha scritto:
> >> Why should VFIO be any special in this?  It is reassuring to me
> >> that the VFIO maintainer thinks the same. :)
> > 
> > Because device passthrough is a sufficiently special case, IMO.
> > It introduces requirements that are fundamentally different from
> > any emulated device.
> > 
> >>> It does also require making sure the lifetime handling is
> >>> correct. The entry from the hash table must be removed before
> >>> the corresponding MemoryRegion is free()ed; otherwise we could
> >>> end up using the same pointer for a newly constructed
> >>> MemoryRegion, and get a false lookup in the hash.  Whether that
> >>> happens essentially never, or almost immediately in practice
> >>> depends on the malloc() implementation, of course.
> >> 
> >> There is only one MemoryRegion per PCI host bridge, and the PCI
> >> host
> > 
> > That's true for existing examples, but it need not be true.  For 
> > example the Intel IOMMU supports multiple "iommu domains" which
> > are different address spaces on the same host bridge.  When one of
> > those domains is reconfigured, we will again need to call into vfio
> > by some mechanism to readjust the host side iommu configuration
> > accordingly.
> > 
> >> bridge cannot disappear before the VFIO devices on it are torn
> >> down. So the lifetime should not be a problem.
> > 
> > I think it is very risky to assume that existing constraints
> > control the lifetime for us, since we don't know what variety of
> > iommus we may have in future.  I really think we should have an
> > explicit hook which allows us to call vfio side cleanup code when a
> > guest side iommu region is destroyed.  Personally, I still think a
> > special-purpose vfio hook is the simplest way to do that, but a
> > more general use Notifier list or something in the right place
> > could do the job too.
> 
> I think this is a different problem.  Basically the question is "what
> happens if a MemoryRegion 'disappears' while an AddressSpace is still
> referring to it", and the answer right now is "badness".

Well... no.  The same problem may well exist for AddressSpace objects,
but in this case it's for the VFIO private per-address-space object.

> We should look at generic fixes before dropping hooks in the code.  At
> the very least an "assert(mr->parent == NULL);" is missing in
> memory_region_destroy.

Well, sure that's probably also a good idea.  But the whole point here
is you're insisting that the MemoryRegion code doesn't know about the
vfio private data, even as an opaque handle, and so there's no
possible assert we can put there to check it has been destroyed.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH] memory: give name every AddressSpace
  2013-04-29  9:25                   ` Paolo Bonzini
  2013-04-29 11:09                     ` David Gibson
@ 2013-04-30  2:14                     ` Alexey Kardashevskiy
  1 sibling, 0 replies; 30+ messages in thread
From: Alexey Kardashevskiy @ 2013-04-30  2:14 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, David Gibson

On 04/29/2013 07:25 PM, Paolo Bonzini wrote:
> Il 29/04/2013 10:21, Alexey Kardashevskiy ha scritto:
>>>>>> +    g_free((void *)as->name);
>>>>
>>>> No cast here.
>> ?
>>
>>   CC    ppc64-softmmu/memory.o
>> /home/alexey/pcipassthru/qemu-impreza/memory.c: In function
>> 'address_space_destroy':
>> /home/alexey/pcipassthru/qemu-impreza/memory.c:1626:5: warning: passing
>> argument 1 of 'g_free' discards 'const' qualifier from pointer target type
>> [enabled by default]
>>      g_free(/*(void *)*/as->name);
>>      ^
> 
> Please remove the const from as->name instead.  Since you are strdup-ing
> it, and the field is meant to be private to memory.c anyway, you do not
> need protection against changing it.

ok, no prob, I'll repost the patch.

In the meantime, could you please update docs/memory.txt to cover
AddressSpace and MemoryListener? It is not at all clear from the current
doc what each one is and what it is for; the text is simply too old.
Thanks.



-- 
Alexey


* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-30  2:05                         ` David Gibson
@ 2013-04-30  2:23                           ` David Gibson
  2013-04-30  7:30                             ` Paolo Bonzini
  0 siblings, 1 reply; 30+ messages in thread
From: David Gibson @ 2013-04-30  2:23 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Tue, Apr 30, 2013 at 12:05:39PM +1000, David Gibson wrote:
> On Mon, Apr 29, 2013 at 03:44:43PM +0200, Paolo Bonzini wrote:
> > 
> > Il 29/04/2013 13:56, David Gibson ha scritto:
> > >> Why should VFIO be any special in this?  It is reassuring to me
> > >> that the VFIO maintainer thinks the same. :)
> > > 
> > > Because device passthrough is a sufficiently special case, IMO.
> > > It introduces requirements that are fundamentally different from
> > > any emulated device.
> > > 
> > >>> It does also require making sure the lifetime handling is
> > >>> correct. The entry from the hash table must be removed before
> > >>> the corresponding MemoryRegion is free()ed; otherwise we could
> > >>> end up using the same pointer for a newly constructed
> > >>> MemoryRegion, and get a false lookup in the hash.  Whether that
> > >>> happens essentially never, or almost immediately in practice
> > >>> depends on the malloc() implementation, of course.
> > >> 
> > >> There is only one MemoryRegion per PCI host bridge, and the PCI
> > >> host
> > > 
> > > That's true for existing examples, but it need not be true.  For 
> > > example the Intel IOMMU supports multiple "iommu domains" which
> > > are different address spaces on the same host bridge.  When one of
> > > those domains is reconfigured, we will again need to call into vfio
> > > by some mechanism to readjust the host side iommu configuration
> > > accordingly.
> > > 
> > >> bridge cannot disappear before the VFIO devices on it are torn
> > >> down. So the lifetime should not be a problem.
> > > 
> > > I think it is very risky to assume that existing constraints
> > > control the lifetime for us, since we don't know what variety of
> > > iommus we may have in future.  I really think we should have an
> > > explicit hook which allows us to call vfio side cleanup code when a
> > > guest side iommu region is destroyed.  Personally, I still think a
> > > special-purpose vfio hook is the simplest way to do that, but a
> > > more general use Notifier list or something in the right place
> > > could do the job too.
> > 
> > I think this is a different problem.  Basically the question is "what
> > happens if a MemoryRegion 'disappears' while an AddressSpace is still
> > referring to it", and the answer right now is "badness".
> 
> Well.. no.  The same problem may well exist for AddressSpace objects,
> but in this case it's for the VFIO private per-address-space object.
> 
> > We should look at generic fixes before dropping hooks in the code.  At
> > the very least an "assert(mr->parent == NULL);" is missing in
> > memory_region_destroy.
> 
> Well, sure that's probably also a good idea.  But the whole point here
> is you're insisting that the MemoryRegion code doesn't know about the
> vfio private data, even as an opaque handle, and so there's no
> possible assert we can put there to check it has been destroyed.

Oh, yes, forgot to ask.  I'm still unclear on what the conceptual
difference is supposed to be between a MemoryRegion and an
AddressSpace.  AFAICT an AddressSpace seems to be roughly just a
wrapper around a MemoryRegion that gives it some more features.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-30  2:23                           ` David Gibson
@ 2013-04-30  7:30                             ` Paolo Bonzini
  2013-04-30  7:54                               ` David Gibson
  0 siblings, 1 reply; 30+ messages in thread
From: Paolo Bonzini @ 2013-04-30  7:30 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, alex.williamson, qemu-devel


Il 30/04/2013 04:23, David Gibson ha scritto:
>>> I think this is a different problem.  Basically the question is
>>> "what happens if a MemoryRegion 'disappears' while an
>>> AddressSpace is still referring to it", and the answer right
>>> now is "badness".
>> 
>> Well.. no.  The same problem may well exist for AddressSpace
>> objects, but in this case it's for the VFIO private
>> per-address-space object.

Well, once we clarify the lifetime of AddressSpaces, we can clarify
the lifetime of the VFIO private object.

>>> We should look at generic fixes before dropping hooks in the
>>> code.  At the very least an "assert(mr->parent == NULL);" is
>>> missing in memory_region_destroy.
>> 
>> Well, sure that's probably also a good idea.  But the whole point
>> here is you're insisting that the MemoryRegion code doesn't know
>> about the vfio private data, even as an opaque handle, and so
>> there's no possible assert we can put there to check it has been
>> destroyed.
> 
> Oh, yes, forgot to ask.  I'm still unclear on what the conceptual
> difference is supposed to be between a MemoryRegion and an
> AddressSpace.  AFAICT an AddressSpace seems to be roughly just a
> wrapper around a MemoryRegion that gives it some more features.

Yes, an AddressSpace is a wrapper for a MemoryRegion that has a NULL
->parent, with two extra features:

- QEMU computes the "flat" view of the MemoryRegion and a compressed
radix tree representation of that view

- MemoryListeners can ask to be called only for some AddressSpaces

Paolo



* Re: [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion
  2013-04-30  7:30                             ` Paolo Bonzini
@ 2013-04-30  7:54                               ` David Gibson
  0 siblings, 0 replies; 30+ messages in thread
From: David Gibson @ 2013-04-30  7:54 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: aik, alex.williamson, qemu-devel


On Tue, Apr 30, 2013 at 09:30:12AM +0200, Paolo Bonzini wrote:
> 
> Il 30/04/2013 04:23, David Gibson ha scritto:
> >>> I think this is a different problem.  Basically the question is
> >>> "what happens if a MemoryRegion 'disappears' while an
> >>> AddressSpace is still referring to it", and the answer right
> >>> now is "badness".
> >> 
> >> Well.. no.  The same problem may well exist for AddressSpace
> >> objects, but in this case it's for the VFIO private
> >> per-address-space object.
> 
> Well, once we clarify the lifetime of AddressSpaces, we can clarify
> the lifetime of the VFIO private object.

Hrm.  I don't see why.  The VFIO private data is tied to the
MemoryRegion, not an AddressSpace, since the MemoryRegion is what
represents the iommu.

> >>> We should look at generic fixes before dropping hooks in the
> >>> code.  At the very least an "assert(mr->parent == NULL);" is
> >>> missing in memory_region_destroy.
> >> 
> >> Well, sure that's probably also a good idea.  But the whole point
> >> here is you're insisting that the MemoryRegion code doesn't know
> >> about the vfio private data, even as an opaque handle, and so
> >> there's no possible assert we can put there to check it has been
> >> destroyed.
> > 
> > Oh, yes, forgot to ask.  I'm still unclear on what the conceptual
> > difference is supposed to be between a MemoryRegion and an
> > AddressSpace.  AFAICT an AddressSpace seems to be roughly just a
> > wrapper around a MemoryRegion that gives it some more features.
> 
> Yes, an AddressSpace is a wrapper for a MemoryRegion that has a NULL
> ->parent, with two extra features:
> 
> - QEMU computes the "flat" view of the MemoryRegion and a compressed
> radix tree representation of that view

Ok.  It's not clear to me what that means in practice.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



end of thread, other threads:[~2013-04-30  8:03 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-26  6:02 [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) David Gibson
2013-04-26  6:02 ` [Qemu-devel] [PATCH 1/4] Fix vmw_pvscsi.c for iommu support changes David Gibson
2013-04-26  8:19   ` Paolo Bonzini
2013-04-26 11:04     ` David Gibson
2013-04-26  6:02 ` [Qemu-devel] [PATCH 2/4] vfio: Associate VFIO groups with (guest) IOMMU address spaces David Gibson
2013-04-26  6:02 ` [Qemu-devel] [PATCH 3/4] vfio: Move container list to iommu MemoryRegion David Gibson
2013-04-26  8:23   ` Paolo Bonzini
2013-04-26 11:31     ` David Gibson
2013-04-26 13:40       ` Paolo Bonzini
2013-04-27  9:49         ` David Gibson
2013-04-27 12:17           ` Paolo Bonzini
2013-04-28  1:58             ` David Gibson
2013-04-29  8:11               ` Paolo Bonzini
2013-04-29 11:00                 ` David Gibson
2013-04-29 11:38                   ` Paolo Bonzini
2013-04-29 11:56                     ` David Gibson
2013-04-29 13:44                       ` Paolo Bonzini
2013-04-30  2:05                         ` David Gibson
2013-04-30  2:23                           ` David Gibson
2013-04-30  7:30                             ` Paolo Bonzini
2013-04-30  7:54                               ` David Gibson
2013-04-29  2:11             ` [Qemu-devel] [PATCH] memory: give name every AddressSpace Alexey Kardashevskiy
2013-04-29  8:16               ` Paolo Bonzini
2013-04-29  8:21                 ` Alexey Kardashevskiy
2013-04-29  9:25                   ` Paolo Bonzini
2013-04-29 11:09                     ` David Gibson
2013-04-30  2:14                     ` Alexey Kardashevskiy
2013-04-26  6:02 ` [Qemu-devel] [PATCH 4/4] vfio: Only use memory listeners when appropriate David Gibson
2013-04-26 15:42 ` [Qemu-devel] [0/4] RFC: Preparations for VFIO and guest IOMMUs (v2) Alex Williamson
2013-04-27  9:51   ` David Gibson
