* [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW)
@ 2015-06-18 11:37 Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 01/14] vmstate: Define VARRAY with VMS_ALLOC Alexey Kardashevskiy
                   ` (14 more replies)
  0 siblings, 15 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson


(cut-n-paste from kernel patchset)

Each Partitionable Endpoint (IOMMU group) has an address range on a PCI bus
where devices are allowed to do DMA. These ranges are called DMA windows.
By default, there is a single DMA window, 1 or 2 GB in size, mapped at
address zero on the PCI bus.

PAPR defines a DDW RTAS API which allows pseries guests to query
the hypervisor about DDW support and capabilities (just the page size mask
for now). A pseries guest may use this RTAS API to request DMA windows
in addition to the default one.
Existing pseries Linux guests request an additional window as big as
the guest RAM and map the entire guest RAM into it, which effectively
creates a direct mapping of guest memory to the PCI bus.

This patchset reworks PPC64 IOMMU code and adds necessary structures
to support big windows.

Once a Linux guest discovers the presence of DDW, it does:
1. query hypervisor about number of available windows and page size masks;
2. create a window with the biggest possible page size (today 4K/64K/16M);
3. map the entire guest RAM via H_PUT_TCE* hypercalls;
4. switch dma_ops to direct_dma_ops on the selected PE.

Once this is done, H_PUT_TCE is not called anymore for 64bit devices and
the guest does not waste time on DMA map/unmap operations.
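The four steps above boil down to simple arithmetic: step 2 picks the largest
page size from the queried mask, and step 3 needs one TCE per IOMMU page of
the window. A back-of-envelope sketch (the helper names ddw_best_page_shift
and ddw_tce_count are illustrative, not from the patchset):

```c
#include <stdint.h>

/* Step 2: pick the largest IOMMU page shift advertised in the query
 * mask (4K/64K/16M correspond to shifts 12/16/24). */
static inline unsigned ddw_best_page_shift(uint32_t pgsize_mask)
{
    unsigned shift = 0, i;

    for (i = 0; i < 32; i++) {
        if (pgsize_mask & (1u << i)) {
            shift = i;
        }
    }
    return shift;
}

/* Step 3: one TCE is needed per IOMMU page of the window, rounded up. */
static inline uint64_t ddw_tce_count(uint64_t window_size, uint64_t page_size)
{
    return (window_size + page_size - 1) / page_size;
}
```

E.g. mapping 1 GB of guest RAM with 16 MB pages takes only 64 TCEs, which is
why the guest prefers the largest page size the hypervisor offers.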

Note that 32bit devices won't use DDW and will keep using the default
DMA window so KVM optimizations will be required (to be posted later).

This patchset adds DDW support for pseries. It requires host kernel
changes, posted as:

[PATCH kernel v11 00/34] powerpc/iommu/vfio: Enable Dynamic DMA windows

This patchset is based on git://github.com/dgibson/qemu.git spapr-next branch.

Please comment. Thanks!

Changes:
v8:
* reworked unreferencing in "spapr_iommu: Introduce "enabled" state for TCE table"
* added clean-up patch "spapr_iommu: Remove vfio_accel flag from sPAPRTCETable"
* rebased on latest spapr-next

v7:
* bunch of cleanups, renames after David+Thomas+Michael review
* patches reorganized; those which do not need the host kernel headers
update come first and can be pulled if they are good enough :)

v6:
* spapr-pci-vfio-host-bridge is now a synonym of spapr-pci-host-bridge -
the same PHB can host emulated and VFIO devices
* changed patches order
* lot of small changes

v5:
* TCE tables got an "enabled" state and are persistent, i.e. not recreated
on every reboot
* added v2 of SPAPR_TCE_IOMMU
* fixed migration for emulated PHB with enabled DDW
* huge pile of other changes

v4:
* reimplemented the whole thing
* machine reset and ddw-reset RTAS call both remove all TCE tables and
create the default one
* IOMMU group id is not needed to use VFIO PHB anymore, multiple groups
are supported on the same VFIO container and virtual PHB

v3:
* removed "reset" from the API
* reworked machine versions
* applied multiple comments
* includes David's machine QOM rework as this patchset adds a new machine type

v2:
* tested on emulated PHB
* removed "ddw" machine property, now it is PHB property
* disabled by default
* defined "pseries-2.2" machine which enables DDW by default
* fixed reset() and reference counting




Alexey Kardashevskiy (14):
  vmstate: Define VARRAY with VMS_ALLOC
  vfio: spapr: Move SPAPR-related code to a separate file
  spapr_pci_vfio: Enable multiple groups per container
  spapr_pci: Convert finish_realize() to
    dma_capabilities_update()+dma_init_window()
  spapr_iommu: Move table allocation to helpers
  spapr_iommu: Introduce "enabled" state for TCE table
  spapr_iommu: Remove vfio_accel flag from sPAPRTCETable
  spapr_iommu: Add root memory region
  spapr_pci: Do complete reset of DMA config when resetting PHB
  spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge
  spapr_pci: Enable vfio-pci hotplug
  linux headers update for DDW on SPAPR
  vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
  spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)

 hw/ppc/Makefile.objs          |   3 +
 hw/ppc/spapr.c                |   5 +
 hw/ppc/spapr_iommu.c          | 210 ++++++++++++++++++++++------
 hw/ppc/spapr_pci.c            | 263 +++++++++++++++++++++++++----------
 hw/ppc/spapr_pci_vfio.c       | 167 ++++++++++++----------
 hw/ppc/spapr_rtas_ddw.c       | 300 ++++++++++++++++++++++++++++++++++++++++
 hw/ppc/spapr_vio.c            |   9 +-
 hw/vfio/Makefile.objs         |   1 +
 hw/vfio/common.c              | 183 +++++-------------------
 hw/vfio/spapr.c               | 315 ++++++++++++++++++++++++++++++++++++++++++
 include/hw/pci-host/spapr.h   |  49 +++++--
 include/hw/ppc/spapr.h        |  33 +++--
 include/hw/vfio/vfio-common.h |  16 +++
 include/hw/vfio/vfio.h        |   2 +-
 include/migration/vmstate.h   |  10 ++
 linux-headers/linux/vfio.h    |  88 +++++++++++-
 trace-events                  |   9 +-
 17 files changed, 1299 insertions(+), 364 deletions(-)
 create mode 100644 hw/ppc/spapr_rtas_ddw.c
 create mode 100644 hw/vfio/spapr.c

-- 
2.4.0.rc3.8.gfb3e7d5

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Qemu-devel] [PATCH qemu v8 01/14] vmstate: Define VARRAY with VMS_ALLOC
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file Alexey Kardashevskiy
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This allows dynamic allocation for migrating arrays.

The already existing VMSTATE_VARRAY_UINT32 requires the array to be
pre-allocated; however, there are cases when the size is not known in
advance and there is no real need to enforce pre-allocation.

This defines another variant of VMSTATE_VARRAY_UINT32 with the VMS_ALLOC
flag which tells the receiving side to allocate memory for the array
before receiving the data.

The first user of it is a dynamic DMA window whose existence and size
are totally dynamic.
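Conceptually, VMS_ALLOC moves the allocation into the load path: the element
count arrives first, the destination array is allocated to match, and only
then is it filled. A toy model of that order of operations (ToyVarray and
toy_varray_load are illustrative names, not QEMU API):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of a VMS_VARRAY_UINT32 | VMS_POINTER | VMS_ALLOC field:
 * nb_entries drives how big the receive-side allocation must be. */
typedef struct {
    uint32_t nb_entries;  /* the VMSTATE _field_num counterpart */
    uint64_t *entries;    /* allocated on load, not pre-allocated */
} ToyVarray;

static int toy_varray_load(ToyVarray *dst, const uint64_t *src, uint32_t n)
{
    dst->nb_entries = n;
    /* Allocate before receiving the data, as VMS_ALLOC instructs. */
    dst->entries = calloc(n, sizeof(uint64_t));
    if (!dst->entries) {
        return -1;
    }
    memcpy(dst->entries, src, n * sizeof(uint64_t));
    return 0;
}
```

Without the alloc-on-load step, migrating a window that may not even exist
on the destination would force a worst-case pre-allocation.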

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 include/migration/vmstate.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 7153b1e..1aeb690 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -295,6 +295,16 @@ extern const VMStateInfo vmstate_info_bitmap;
     .offset     = vmstate_offset_pointer(_state, _field, _type),     \
 }
 
+#define VMSTATE_VARRAY_UINT32_ALLOC(_field, _state, _field_num, _version, _info, _type) {\
+    .name       = (stringify(_field)),                               \
+    .version_id = (_version),                                        \
+    .num_offset = vmstate_offset_value(_state, _field_num, uint32_t),\
+    .info       = &(_info),                                          \
+    .size       = sizeof(_type),                                     \
+    .flags      = VMS_VARRAY_UINT32|VMS_POINTER|VMS_ALLOC,           \
+    .offset     = vmstate_offset_pointer(_state, _field, _type),     \
+}
+
 #define VMSTATE_VARRAY_UINT16_UNSAFE(_field, _state, _field_num, _version, _info, _type) {\
     .name       = (stringify(_field)),                               \
     .version_id = (_version),                                        \
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 01/14] vmstate: Define VARRAY with VMS_ALLOC Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 21:10   ` Alex Williamson
  2015-06-23  5:49   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container Alexey Kardashevskiy
                   ` (12 subsequent siblings)
  14 siblings, 2 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This moves the SPAPR bits to a separate file to avoid polluting x86 code.

This enables spapr-vfio on CONFIG_SOFTMMU (not CONFIG_PSERIES) as
the config options are only visible in makefiles and not in the source code,
so there is no obvious way of implementing stubs if hw/vfio/spapr.c is
not compiled.

This is a mechanical patch.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/vfio/Makefile.objs         |   1 +
 hw/vfio/common.c              | 137 ++-----------------------
 hw/vfio/spapr.c               | 229 ++++++++++++++++++++++++++++++++++++++++++
 include/hw/vfio/vfio-common.h |  13 +++
 4 files changed, 249 insertions(+), 131 deletions(-)
 create mode 100644 hw/vfio/spapr.c

diff --git a/hw/vfio/Makefile.objs b/hw/vfio/Makefile.objs
index d540c9d..0dd99ed 100644
--- a/hw/vfio/Makefile.objs
+++ b/hw/vfio/Makefile.objs
@@ -3,4 +3,5 @@ obj-$(CONFIG_SOFTMMU) += common.o
 obj-$(CONFIG_PCI) += pci.o
 obj-$(CONFIG_SOFTMMU) += platform.o
 obj-$(CONFIG_SOFTMMU) += calxeda-xgmac.o
+obj-$(CONFIG_SOFTMMU) += spapr.o
 endif
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index b1045da..3e4c685 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -190,8 +190,8 @@ const MemoryRegionOps vfio_region_ops = {
 /*
  * DMA - Mapping and unmapping for the "type1" IOMMU interface used on x86
  */
-static int vfio_dma_unmap(VFIOContainer *container,
-                          hwaddr iova, ram_addr_t size)
+int vfio_dma_unmap(VFIOContainer *container,
+                   hwaddr iova, ram_addr_t size)
 {
     struct vfio_iommu_type1_dma_unmap unmap = {
         .argsz = sizeof(unmap),
@@ -208,8 +208,8 @@ static int vfio_dma_unmap(VFIOContainer *container,
     return 0;
 }
 
-static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
-                        ram_addr_t size, void *vaddr, bool readonly)
+int vfio_dma_map(VFIOContainer *container, hwaddr iova,
+                 ram_addr_t size, void *vaddr, bool readonly)
 {
     struct vfio_iommu_type1_dma_map map = {
         .argsz = sizeof(map),
@@ -238,7 +238,7 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
     return -errno;
 }
 
-static bool vfio_listener_skipped_section(MemoryRegionSection *section)
+bool vfio_listener_skipped_section(MemoryRegionSection *section)
 {
     return (!memory_region_is_ram(section->mr) &&
             !memory_region_is_iommu(section->mr)) ||
@@ -251,67 +251,6 @@ static bool vfio_listener_skipped_section(MemoryRegionSection *section)
            section->offset_within_address_space & (1ULL << 63);
 }
 
-static void vfio_iommu_map_notify(Notifier *n, void *data)
-{
-    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
-    VFIOContainer *container = giommu->container;
-    IOMMUTLBEntry *iotlb = data;
-    MemoryRegion *mr;
-    hwaddr xlat;
-    hwaddr len = iotlb->addr_mask + 1;
-    void *vaddr;
-    int ret;
-
-    trace_vfio_iommu_map_notify(iotlb->iova,
-                                iotlb->iova + iotlb->addr_mask);
-
-    /*
-     * The IOMMU TLB entry we have just covers translation through
-     * this IOMMU to its immediate target.  We need to translate
-     * it the rest of the way through to memory.
-     */
-    rcu_read_lock();
-    mr = address_space_translate(&address_space_memory,
-                                 iotlb->translated_addr,
-                                 &xlat, &len, iotlb->perm & IOMMU_WO);
-    if (!memory_region_is_ram(mr)) {
-        error_report("iommu map to non memory area %"HWADDR_PRIx"",
-                     xlat);
-        goto out;
-    }
-    /*
-     * Translation truncates length to the IOMMU page size,
-     * check that it did not truncate too much.
-     */
-    if (len & iotlb->addr_mask) {
-        error_report("iommu has granularity incompatible with target AS");
-        goto out;
-    }
-
-    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
-        vaddr = memory_region_get_ram_ptr(mr) + xlat;
-        ret = vfio_dma_map(container, iotlb->iova,
-                           iotlb->addr_mask + 1, vaddr,
-                           !(iotlb->perm & IOMMU_WO) || mr->readonly);
-        if (ret) {
-            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
-                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
-                         container, iotlb->iova,
-                         iotlb->addr_mask + 1, vaddr, ret);
-        }
-    } else {
-        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
-        if (ret) {
-            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
-                         "0x%"HWADDR_PRIx") = %d (%m)",
-                         container, iotlb->iova,
-                         iotlb->addr_mask + 1, ret);
-        }
-    }
-out:
-    rcu_read_unlock();
-}
-
 static void vfio_listener_region_add(MemoryListener *listener,
                                      MemoryRegionSection *section)
 {
@@ -347,45 +286,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
 
     memory_region_ref(section->mr);
 
-    if (memory_region_is_iommu(section->mr)) {
-        VFIOGuestIOMMU *giommu;
-
-        trace_vfio_listener_region_add_iommu(iova,
-                    int128_get64(int128_sub(llend, int128_one())));
-        /*
-         * FIXME: We should do some checking to see if the
-         * capabilities of the host VFIO IOMMU are adequate to model
-         * the guest IOMMU
-         *
-         * FIXME: For VFIO iommu types which have KVM acceleration to
-         * avoid bouncing all map/unmaps through qemu this way, this
-         * would be the right place to wire that up (tell the KVM
-         * device emulation the VFIO iommu handles to use).
-         */
-        /*
-         * This assumes that the guest IOMMU is empty of
-         * mappings at this point.
-         *
-         * One way of doing this is:
-         * 1. Avoid sharing IOMMUs between emulated devices or different
-         * IOMMU groups.
-         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
-         * there are some mappings in IOMMU.
-         *
-         * VFIO on SPAPR does that. Other IOMMU models may do that different,
-         * they must make sure there are no existing mappings or
-         * loop through existing mappings to map them into VFIO.
-         */
-        giommu = g_malloc0(sizeof(*giommu));
-        giommu->iommu = section->mr;
-        giommu->container = container;
-        giommu->n.notify = vfio_iommu_map_notify;
-        QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
-        memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
-
-        return;
-    }
-
     /* Here we assume that memory_region_is_ram(section->mr)==true */
 
     end = int128_get64(llend);
@@ -438,27 +338,6 @@ static void vfio_listener_region_del(MemoryListener *listener,
         return;
     }
 
-    if (memory_region_is_iommu(section->mr)) {
-        VFIOGuestIOMMU *giommu;
-
-        QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
-            if (giommu->iommu == section->mr) {
-                memory_region_unregister_iommu_notifier(&giommu->n);
-                QLIST_REMOVE(giommu, giommu_next);
-                g_free(giommu);
-                break;
-            }
-        }
-
-        /*
-         * FIXME: We assume the one big unmap below is adequate to
-         * remove any individual page mappings in the IOMMU which
-         * might have been copied into VFIO. This works for a page table
-         * based IOMMU where a big unmap flattens a large range of IO-PTEs.
-         * That may not be true for all IOMMU types.
-         */
-    }
-
     iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
     end = (section->offset_within_address_space + int128_get64(section->size)) &
           TARGET_PAGE_MASK;
@@ -724,11 +603,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
             goto free_container_exit;
         }
 
-        container->iommu_data.type1.listener = vfio_memory_listener;
-        container->iommu_data.release = vfio_listener_release;
-
-        memory_listener_register(&container->iommu_data.type1.listener,
-                                 container->space->as);
+        spapr_memory_listener_register(container);
 
     } else {
         error_report("vfio: No available IOMMU models");
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
new file mode 100644
index 0000000..f03fab1
--- /dev/null
+++ b/hw/vfio/spapr.c
@@ -0,0 +1,229 @@
+/*
+ * QEMU sPAPR VFIO IOMMU
+ *
+ * Copyright (c) 2015 Alexey Kardashevskiy, IBM Corporation.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License,
+ *  or (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "hw/vfio/vfio-common.h"
+#include "qemu/error-report.h"
+#include "trace.h"
+
+static void vfio_iommu_map_notify(Notifier *n, void *data)
+{
+    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
+    VFIOContainer *container = giommu->container;
+    IOMMUTLBEntry *iotlb = data;
+    MemoryRegion *mr;
+    hwaddr xlat;
+    hwaddr len = iotlb->addr_mask + 1;
+    void *vaddr;
+    int ret;
+
+    trace_vfio_iommu_map_notify(iotlb->iova,
+                                iotlb->iova + iotlb->addr_mask);
+
+    /*
+     * The IOMMU TLB entry we have just covers translation through
+     * this IOMMU to its immediate target.  We need to translate
+     * it the rest of the way through to memory.
+     */
+    rcu_read_lock();
+    mr = address_space_translate(&address_space_memory,
+                                 iotlb->translated_addr,
+                                 &xlat, &len, iotlb->perm & IOMMU_WO);
+    if (!memory_region_is_ram(mr)) {
+        error_report("iommu map to non memory area %"HWADDR_PRIx"",
+                     xlat);
+        goto out;
+    }
+    /*
+     * Translation truncates length to the IOMMU page size,
+     * check that it did not truncate too much.
+     */
+    if (len & iotlb->addr_mask) {
+        error_report("iommu has granularity incompatible with target AS");
+        goto out;
+    }
+
+    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
+        vaddr = memory_region_get_ram_ptr(mr) + xlat;
+        ret = vfio_dma_map(container, iotlb->iova,
+                           iotlb->addr_mask + 1, vaddr,
+                           !(iotlb->perm & IOMMU_WO) || mr->readonly);
+        if (ret) {
+            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
+                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
+                         container, iotlb->iova,
+                         iotlb->addr_mask + 1, vaddr, ret);
+        }
+    } else {
+        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
+        if (ret) {
+            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
+                         "0x%"HWADDR_PRIx") = %d (%m)",
+                         container, iotlb->iova,
+                         iotlb->addr_mask + 1, ret);
+        }
+    }
+out:
+    rcu_read_unlock();
+}
+
+static void vfio_spapr_listener_region_add(MemoryListener *listener,
+                                     MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.listener);
+    hwaddr iova;
+    Int128 llend;
+    VFIOGuestIOMMU *giommu;
+
+    if (vfio_listener_skipped_section(section)) {
+        trace_vfio_listener_region_add_skip(
+            section->offset_within_address_space,
+            section->offset_within_address_space +
+            int128_get64(int128_sub(section->size, int128_one())));
+        return;
+    }
+
+    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
+                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+        error_report("%s received unaligned region", __func__);
+        return;
+    }
+
+    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+    llend = int128_make64(section->offset_within_address_space);
+    llend = int128_add(llend, section->size);
+    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+
+    if (int128_ge(int128_make64(iova), llend)) {
+        return;
+    }
+
+    memory_region_ref(section->mr);
+
+    trace_vfio_listener_region_add_iommu(iova,
+         int128_get64(int128_sub(llend, int128_one())));
+    /*
+     * FIXME: We should do some checking to see if the
+     * capabilities of the host VFIO IOMMU are adequate to model
+     * the guest IOMMU
+     *
+     * FIXME: For VFIO iommu types which have KVM acceleration to
+     * avoid bouncing all map/unmaps through qemu this way, this
+     * would be the right place to wire that up (tell the KVM
+     * device emulation the VFIO iommu handles to use).
+     */
+    /*
+     * This assumes that the guest IOMMU is empty of
+     * mappings at this point.
+     *
+     * One way of doing this is:
+     * 1. Avoid sharing IOMMUs between emulated devices or different
+     * IOMMU groups.
+     * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
+     * there are some mappings in IOMMU.
+     *
+     * VFIO on SPAPR does that. Other IOMMU models may do that different,
+     * they must make sure there are no existing mappings or
+     * loop through existing mappings to map them into VFIO.
+     */
+    giommu = g_malloc0(sizeof(*giommu));
+    giommu->iommu = section->mr;
+    giommu->container = container;
+    giommu->n.notify = vfio_iommu_map_notify;
+    QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
+    memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
+}
+
+static void vfio_spapr_listener_region_del(MemoryListener *listener,
+                                     MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.listener);
+    hwaddr iova, end;
+    int ret;
+    VFIOGuestIOMMU *giommu;
+
+    if (vfio_listener_skipped_section(section)) {
+        trace_vfio_listener_region_del_skip(
+            section->offset_within_address_space,
+            section->offset_within_address_space +
+            int128_get64(int128_sub(section->size, int128_one())));
+        return;
+    }
+
+    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
+                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+        error_report("%s received unaligned region", __func__);
+        return;
+    }
+
+    QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
+        if (giommu->iommu == section->mr) {
+            memory_region_unregister_iommu_notifier(&giommu->n);
+            QLIST_REMOVE(giommu, giommu_next);
+            g_free(giommu);
+            break;
+        }
+    }
+
+    /*
+     * FIXME: We assume the one big unmap below is adequate to
+     * remove any individual page mappings in the IOMMU which
+     * might have been copied into VFIO. This works for a page table
+     * based IOMMU where a big unmap flattens a large range of IO-PTEs.
+     * That may not be true for all IOMMU types.
+     */
+
+    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+    end = (section->offset_within_address_space + int128_get64(section->size)) &
+        TARGET_PAGE_MASK;
+
+    if (iova >= end) {
+        return;
+    }
+
+    trace_vfio_listener_region_del(iova, end - 1);
+
+    ret = vfio_dma_unmap(container, iova, end - iova);
+    memory_region_unref(section->mr);
+    if (ret) {
+        error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
+                     "0x%"HWADDR_PRIx") = %d (%m)",
+                     container, iova, end - iova, ret);
+    }
+}
+
+static const MemoryListener vfio_spapr_memory_listener = {
+    .region_add = vfio_spapr_listener_region_add,
+    .region_del = vfio_spapr_listener_region_del,
+};
+
+static void vfio_spapr_listener_release(VFIOContainer *container)
+{
+    memory_listener_unregister(&container->iommu_data.spapr.listener);
+}
+
+void spapr_memory_listener_register(VFIOContainer *container)
+{
+    container->iommu_data.spapr.listener = vfio_spapr_memory_listener;
+    container->iommu_data.release = vfio_spapr_listener_release;
+
+    memory_listener_register(&container->iommu_data.spapr.listener,
+                             container->space->as);
+}
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 59a321d..d513a2f 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -70,6 +70,10 @@ typedef struct VFIOType1 {
     bool initialized;
 } VFIOType1;
 
+typedef struct VFIOSPAPR {
+    MemoryListener listener;
+} VFIOSPAPR;
+
 typedef struct VFIOContainer {
     VFIOAddressSpace *space;
     int fd; /* /dev/vfio/vfio, empowered by the attached groups */
@@ -77,6 +81,7 @@ typedef struct VFIOContainer {
         /* enable abstraction to support various iommu backends */
         union {
             VFIOType1 type1;
+            VFIOSPAPR spapr;
         };
         void (*release)(struct VFIOContainer *);
     } iommu_data;
@@ -146,4 +151,12 @@ extern const MemoryRegionOps vfio_region_ops;
 extern QLIST_HEAD(vfio_group_head, VFIOGroup) vfio_group_list;
 extern QLIST_HEAD(vfio_as_head, VFIOAddressSpace) vfio_address_spaces;
 
+extern int vfio_dma_map(VFIOContainer *container, hwaddr iova,
+                        ram_addr_t size, void *vaddr, bool readonly);
+extern int vfio_dma_unmap(VFIOContainer *container,
+                          hwaddr iova, ram_addr_t size);
+bool vfio_listener_skipped_section(MemoryRegionSection *section);
+
+extern void spapr_memory_listener_register(VFIOContainer *container);
+
 #endif /* !HW_VFIO_VFIO_COMMON_H */
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 01/14] vmstate: Define VARRAY with VMS_ALLOC Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-25 19:59   ` Alex Williamson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 04/14] spapr_pci: Convert finish_realize() to dma_capabilities_update()+dma_init_window() Alexey Kardashevskiy
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This enables multiple IOMMU groups in one VFIO container, which means
that multiple devices from different groups can share the same IOMMU
table (or tables, if DDW is in use).

This removes the group id from vfio_container_ioctl(). Kernel support
is required for this; if the host kernel lacks it, only one group per
container is allowed. The PHB's "iommuid" property is ignored. The ioctl
is called for every container attached to the address space; at the
moment there is just one container anyway.

If there is no container attached to the address space,
vfio_container_do_ioctl() returns -1.

This removes the casts to sPAPRPHBVFIOState as none of its members
are accessed here.
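The reworked vfio_container_do_ioctl() can be pictured as a walk over the
address space's container list, keeping -1 when the list is empty. A
stubbed-out sketch (ToyContainer and toy_ioctl are stand-ins for the real
VFIO types and ioctl(), not the actual QEMU code):

```c
#include <stddef.h>

/* Stand-in for VFIOContainer: just a container fd in a singly linked
 * list, mimicking the QLIST of containers per VFIOAddressSpace. */
typedef struct ToyContainer {
    int fd;
    struct ToyContainer *next;
} ToyContainer;

/* Stand-in for ioctl(container->fd, req, param). */
static int toy_ioctl(int fd, int req)
{
    (void)req;
    return fd >= 0 ? 0 : -1;
}

/* Issue the request to every attached container; with no containers
 * attached the initial -1 is returned, as the commit message notes. */
static int toy_container_do_ioctl(ToyContainer *head, int req)
{
    int ret = -1;
    ToyContainer *c;

    for (c = head; c != NULL; c = c->next) {
        ret = toy_ioctl(c->fd, req);
        if (ret < 0) {
            break;
        }
    }
    return ret;
}
```

This is why no group id is needed any more: the request fans out to every
container in the address space instead of resolving a single group first.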

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_pci_vfio.c | 21 ++++++---------------
 hw/vfio/common.c        | 20 ++++++--------------
 include/hw/vfio/vfio.h  |  2 +-
 3 files changed, 13 insertions(+), 30 deletions(-)

diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index 99a1be5..e89cbff 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -35,12 +35,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
     sPAPRTCETable *tcet;
     uint32_t liobn = svphb->phb.dma_liobn;
 
-    if (svphb->iommugroupid == -1) {
-        error_setg(errp, "Wrong IOMMU group ID %d", svphb->iommugroupid);
-        return;
-    }
-
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&svphb->phb.iommu_as,
                                VFIO_CHECK_EXTENSION,
                                (void *) VFIO_SPAPR_TCE_IOMMU);
     if (ret != 1) {
@@ -49,7 +44,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
         return;
     }
 
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
     if (ret) {
         error_setg_errno(errp, -ret,
@@ -79,7 +74,6 @@ static void spapr_phb_vfio_reset(DeviceState *qdev)
 static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
                                          unsigned int addr, int option)
 {
-    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
 
@@ -116,7 +110,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
         return RTAS_OUT_PARAM_ERROR;
     }
 
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_EEH_PE_OP, &op);
     if (ret < 0) {
         return RTAS_OUT_HW_ERROR;
@@ -127,12 +121,11 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
 
 static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
 {
-    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
 
     op.op = VFIO_EEH_PE_GET_STATE;
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_EEH_PE_OP, &op);
     if (ret < 0) {
         return RTAS_OUT_PARAM_ERROR;
@@ -144,7 +137,6 @@ static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
 
 static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
 {
-    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
 
@@ -162,7 +154,7 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
         return RTAS_OUT_PARAM_ERROR;
     }
 
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_EEH_PE_OP, &op);
     if (ret < 0) {
         return RTAS_OUT_HW_ERROR;
@@ -173,12 +165,11 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
 
 static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
 {
-    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
 
     op.op = VFIO_EEH_PE_CONFIGURE;
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
+    ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_EEH_PE_OP, &op);
     if (ret < 0) {
         return RTAS_OUT_PARAM_ERROR;
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 3e4c685..369e564 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -793,34 +793,26 @@ void vfio_put_base_device(VFIODevice *vbasedev)
     close(vbasedev->fd);
 }
 
-static int vfio_container_do_ioctl(AddressSpace *as, int32_t groupid,
+static int vfio_container_do_ioctl(AddressSpace *as,
                                    int req, void *param)
 {
-    VFIOGroup *group;
     VFIOContainer *container;
     int ret = -1;
+    VFIOAddressSpace *space = vfio_get_address_space(as);
 
-    group = vfio_get_group(groupid, as);
-    if (!group) {
-        error_report("vfio: group %d not registered", groupid);
-        return ret;
-    }
-
-    container = group->container;
-    if (group->container) {
+    QLIST_FOREACH(container, &space->containers, next) {
         ret = ioctl(container->fd, req, param);
         if (ret < 0) {
             error_report("vfio: failed to ioctl %d to container: ret=%d, %s",
                          _IOC_NR(req) - VFIO_BASE, ret, strerror(errno));
+            return -errno;
         }
     }
 
-    vfio_put_group(group);
-
     return ret;
 }
 
-int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
+int vfio_container_ioctl(AddressSpace *as,
                          int req, void *param)
 {
     /* We allow only certain ioctls to the container */
@@ -835,5 +827,5 @@ int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
         return -1;
     }
 
-    return vfio_container_do_ioctl(as, groupid, req, param);
+    return vfio_container_do_ioctl(as, req, param);
 }
diff --git a/include/hw/vfio/vfio.h b/include/hw/vfio/vfio.h
index 0b26cd8..76b5744 100644
--- a/include/hw/vfio/vfio.h
+++ b/include/hw/vfio/vfio.h
@@ -3,7 +3,7 @@
 
 #include "qemu/typedefs.h"
 
-extern int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
+extern int vfio_container_ioctl(AddressSpace *as,
                                 int req, void *param);
 
 #endif
-- 
2.4.0.rc3.8.gfb3e7d5

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [Qemu-devel] [PATCH qemu v8 04/14] spapr_pci: Convert finish_realize() to dma_capabilities_update()+dma_init_window()
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (2 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers Alexey Kardashevskiy
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This reworks finish_realize() which used to finalize the DMA setup
under the assumption that it would not change later.

The new callbacks support various window parameters such as page and
window sizes, and return an error code rather than Error**.

This is a mechanical change, so no change in behaviour is expected.
It is part of getting rid of the spapr-pci-vfio-host-bridge type.
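The callback split described above can be sketched with a minimal model
(simplified C for illustration only, not the QEMU code; the constants and
field names follow the patch but the types and values are assumptions):

```c
#include <assert.h>
#include <stdint.h>

#define SPAPR_PCI_DMA32_SIZE 0x40000000ULL  /* assumed 1 GiB default window */
#define SPAPR_TCE_PAGE_SHIFT 12             /* 4 KiB IOMMU pages */

typedef struct {
    uint64_t dma32_window_start;
    uint64_t dma32_window_size;
    uint64_t nb_table;          /* TCE entries needed for the window */
} PHBModel;

/* Emulated PHB: the window parameters are fixed, nothing to query */
static int dma_capabilities_update(PHBModel *phb)
{
    phb->dma32_window_start = 0;
    phb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
    return 0;
}

/* Create the default window: one TCE entry per IOMMU page */
static int dma_init_window(PHBModel *phb, uint32_t page_shift,
                           uint64_t window_size)
{
    phb->nb_table = window_size >> page_shift;
    return phb->nb_table ? 0 : -1;  /* error code, not Error** */
}
```

The VFIO variant of the first callback would fill dma32_window_start/size
from the VFIO_IOMMU_SPAPR_TCE_GET_INFO ioctl instead of compile-time
constants, which is exactly why the query and the window creation are now
separate steps.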

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v8:
* moved spapr_phb_dma_capabilities_update() higher to avoid forward
declaration in following patches and keep DMA code together (i.e. next
to spapr_pci_dma_iommu())
---
 hw/ppc/spapr_pci.c          | 59 ++++++++++++++++++++++++++-------------------
 hw/ppc/spapr_pci_vfio.c     | 47 +++++++++++++++---------------------
 include/hw/pci-host/spapr.h |  8 +++++-
 3 files changed, 61 insertions(+), 53 deletions(-)

diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 9a7b270..bbbb699 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -745,6 +745,28 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
     return &phb->iommu_as;
 }
 
+static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
+{
+    sphb->dma32_window_start = 0;
+    sphb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
+
+    return 0;
+}
+
+static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
+                                     uint32_t liobn, uint32_t page_shift,
+                                     uint64_t window_size)
+{
+    uint64_t bus_offset = sphb->dma32_window_start;
+    sPAPRTCETable *tcet;
+
+    tcet = spapr_tce_new_table(DEVICE(sphb), liobn, bus_offset, page_shift,
+                               window_size >> page_shift,
+                               false);
+
+    return tcet ? 0 : -1;
+}
+
 /* Macros to operate with address in OF binding to PCI */
 #define b_x(x, p, l)    (((x) & ((1<<(l))-1)) << (p))
 #define b_n(x)          b_x((x), 31, 1) /* 0 if relocatable */
@@ -1129,6 +1151,7 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
     int i;
     PCIBus *bus;
     uint64_t msi_window_size = 4096;
+    sPAPRTCETable *tcet;
 
     if (sphb->index != (uint32_t)-1) {
         hwaddr windows_base;
@@ -1278,33 +1301,18 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
         }
     }
 
-    if (!info->finish_realize) {
-        error_setg(errp, "finish_realize not defined");
-        return;
-    }
-
-    info->finish_realize(sphb, errp);
-
-    sphb->msi = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, g_free);
-}
-
-static void spapr_phb_finish_realize(sPAPRPHBState *sphb, Error **errp)
-{
-    sPAPRTCETable *tcet;
-    uint32_t nb_table;
-
-    nb_table = SPAPR_PCI_DMA32_SIZE >> SPAPR_TCE_PAGE_SHIFT;
-    tcet = spapr_tce_new_table(DEVICE(sphb), sphb->dma_liobn,
-                               0, SPAPR_TCE_PAGE_SHIFT, nb_table, false);
+    info->dma_capabilities_update(sphb);
+    info->dma_init_window(sphb, sphb->dma_liobn, SPAPR_TCE_PAGE_SHIFT,
+                          sphb->dma32_window_size);
+    tcet = spapr_tce_find_by_liobn(sphb->dma_liobn);
     if (!tcet) {
-        error_setg(errp, "Unable to create TCE table for %s",
-                   sphb->dtbusname);
-        return ;
+        error_setg(errp, "failed to create TCE table");
+        return;
     }
-
-    /* Register default 32bit DMA window */
-    memory_region_add_subregion(&sphb->iommu_root, 0,
+    memory_region_add_subregion(&sphb->iommu_root, tcet->bus_offset,
                                 spapr_tce_get_iommu(tcet));
+
+    sphb->msi = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, g_free);
 }
 
 static int spapr_phb_children_reset(Object *child, void *opaque)
@@ -1452,9 +1460,10 @@ static void spapr_phb_class_init(ObjectClass *klass, void *data)
     dc->vmsd = &vmstate_spapr_pci;
     set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
     dc->cannot_instantiate_with_device_add_yet = false;
-    spc->finish_realize = spapr_phb_finish_realize;
     hp->plug = spapr_phb_hot_plug_child;
     hp->unplug = spapr_phb_hot_unplug_child;
+    spc->dma_capabilities_update = spapr_phb_dma_capabilities_update;
+    spc->dma_init_window = spapr_phb_dma_init_window;
 }
 
 static const TypeInfo spapr_phb_info = {
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index e89cbff..f1dd28c 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -27,43 +27,35 @@ static Property spapr_phb_vfio_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
-static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
+static int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
 {
-    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
     struct vfio_iommu_spapr_tce_info info = { .argsz = sizeof(info) };
     int ret;
-    sPAPRTCETable *tcet;
-    uint32_t liobn = svphb->phb.dma_liobn;
-
-    ret = vfio_container_ioctl(&svphb->phb.iommu_as,
-                               VFIO_CHECK_EXTENSION,
-                               (void *) VFIO_SPAPR_TCE_IOMMU);
-    if (ret != 1) {
-        error_setg_errno(errp, -ret,
-                         "spapr-vfio: SPAPR extension is not supported");
-        return;
-    }
 
     ret = vfio_container_ioctl(&sphb->iommu_as,
                                VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
     if (ret) {
-        error_setg_errno(errp, -ret,
-                         "spapr-vfio: get info from container failed");
-        return;
+        return ret;
     }
 
-    tcet = spapr_tce_new_table(DEVICE(sphb), liobn, info.dma32_window_start,
-                               SPAPR_TCE_PAGE_SHIFT,
-                               info.dma32_window_size >> SPAPR_TCE_PAGE_SHIFT,
+    sphb->dma32_window_start = info.dma32_window_start;
+    sphb->dma32_window_size = info.dma32_window_size;
+
+    return ret;
+}
+
+static int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
+                                          uint32_t liobn, uint32_t page_shift,
+                                          uint64_t window_size)
+{
+    uint64_t bus_offset = sphb->dma32_window_start;
+    sPAPRTCETable *tcet;
+
+    tcet = spapr_tce_new_table(DEVICE(sphb), liobn, bus_offset, page_shift,
+                               window_size >> page_shift,
                                true);
-    if (!tcet) {
-        error_setg(errp, "spapr-vfio: failed to create VFIO TCE table");
-        return;
-    }
 
-    /* Register default 32bit DMA window */
-    memory_region_add_subregion(&sphb->iommu_root, tcet->bus_offset,
-                                spapr_tce_get_iommu(tcet));
+    return tcet ? 0 : -1;
 }
 
 static void spapr_phb_vfio_reset(DeviceState *qdev)
@@ -185,7 +177,8 @@ static void spapr_phb_vfio_class_init(ObjectClass *klass, void *data)
 
     dc->props = spapr_phb_vfio_properties;
     dc->reset = spapr_phb_vfio_reset;
-    spc->finish_realize = spapr_phb_vfio_finish_realize;
+    spc->dma_capabilities_update = spapr_phb_vfio_dma_capabilities_update;
+    spc->dma_init_window = spapr_phb_vfio_dma_init_window;
     spc->eeh_set_option = spapr_phb_vfio_eeh_set_option;
     spc->eeh_get_state = spapr_phb_vfio_eeh_get_state;
     spc->eeh_reset = spapr_phb_vfio_eeh_reset;
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index 5322b56..b6d5719 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -48,7 +48,10 @@ typedef struct sPAPRPHBVFIOState sPAPRPHBVFIOState;
 struct sPAPRPHBClass {
     PCIHostBridgeClass parent_class;
 
-    void (*finish_realize)(sPAPRPHBState *sphb, Error **errp);
+    int (*dma_capabilities_update)(sPAPRPHBState *sphb);
+    int (*dma_init_window)(sPAPRPHBState *sphb,
+                           uint32_t liobn, uint32_t page_shift,
+                           uint64_t window_size);
     int (*eeh_set_option)(sPAPRPHBState *sphb, unsigned int addr, int option);
     int (*eeh_get_state)(sPAPRPHBState *sphb, int *state);
     int (*eeh_reset)(sPAPRPHBState *sphb, int option);
@@ -90,6 +93,9 @@ struct sPAPRPHBState {
     int32_t msi_devs_num;
     spapr_pci_msi_mig *msi_devs;
 
+    uint32_t dma32_window_start;
+    uint32_t dma32_window_size;
+
     QLIST_ENTRY(sPAPRPHBState) list;
 };
 
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (3 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 04/14] spapr_pci: Convert finish_realize() to dma_capabilities_update()+dma_init_window() Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-22  3:28   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table Alexey Kardashevskiy
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

At the moment the presence of vfio-pci devices on a bus affects the way
the guest view table is allocated. If there is no vfio-pci device on a PHB
and the host kernel supports KVM acceleration of H_PUT_TCE, the table
is allocated in KVM. However, if there is a vfio-pci device and we do not
yet have KVM acceleration for it, the table has to be allocated in
userspace. At the moment the table is allocated once at boot time,
but the following patches will reallocate it.

This moves kvmppc_create_spapr_tce/g_malloc0 and their counterparts
to helpers.
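The allocate-with-fallback policy being factored out can be sketched as
follows (illustrative C only; kvm_alloc() is a hypothetical stand-in for
kvmppc_create_spapr_tce(), and kvm_available is a fake probe added so the
sketch is self-contained):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

static int kvm_available;   /* fake probe result, for illustration */

/* Stand-in for kvmppc_create_spapr_tce(): on success returns
 * KVM-backed table memory and a valid fd, on failure returns NULL. */
static uint64_t *kvm_alloc(uint32_t nb_table, int *fd)
{
    if (!kvm_available) {
        return NULL;
    }
    *fd = 42;  /* a real implementation hands back a KVM fd */
    return calloc(nb_table, sizeof(uint64_t));
}

/* Try the accelerated KVM allocation first (only possible for windows
 * below 4 GiB), then fall back to a plain userspace table with fd == -1. */
static uint64_t *alloc_table(uint32_t nb_table, uint32_t page_shift, int *fd)
{
    uint64_t window_size = (uint64_t)nb_table << page_shift;
    uint64_t *table = NULL;

    if (!(window_size >> 32)) {
        table = kvm_alloc(nb_table, fd);
    }
    if (!table) {
        *fd = -1;
        table = calloc(nb_table, sizeof(uint64_t));
    }
    return table;
}
```

Moving this logic into helpers means the reallocate-on-reset patches later
in the series can call it repeatedly instead of relying on a single
realize-time allocation.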

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 hw/ppc/spapr_iommu.c | 58 +++++++++++++++++++++++++++++++++++-----------------
 trace-events         |  2 +-
 2 files changed, 40 insertions(+), 20 deletions(-)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index f61504e..0cf5010 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -74,6 +74,37 @@ static IOMMUAccessFlags spapr_tce_iommu_access_flags(uint64_t tce)
     }
 }
 
+static uint64_t *spapr_tce_alloc_table(uint32_t liobn,
+                                       uint32_t nb_table,
+                                       uint32_t page_shift,
+                                       int *fd,
+                                       bool vfio_accel)
+{
+    uint64_t *table = NULL;
+    uint64_t window_size = (uint64_t)nb_table << page_shift;
+
+    if (kvm_enabled() && !(window_size >> 32)) {
+        table = kvmppc_create_spapr_tce(liobn, window_size, fd, vfio_accel);
+    }
+
+    if (!table) {
+        *fd = -1;
+        table = g_malloc0(nb_table * sizeof(uint64_t));
+    }
+
+    trace_spapr_iommu_alloc_table(liobn, table, *fd);
+
+    return table;
+}
+
+static void spapr_tce_free_table(uint64_t *table, int fd, uint32_t nb_table)
+{
+    if (!kvm_enabled() ||
+        (kvmppc_remove_spapr_tce(table, fd, nb_table) != 0)) {
+        g_free(table);
+    }
+}
+
 /* Called from RCU critical section */
 static IOMMUTLBEntry spapr_tce_translate_iommu(MemoryRegion *iommu, hwaddr addr,
                                                bool is_write)
@@ -140,21 +171,13 @@ static MemoryRegionIOMMUOps spapr_iommu_ops = {
 static int spapr_tce_table_realize(DeviceState *dev)
 {
     sPAPRTCETable *tcet = SPAPR_TCE_TABLE(dev);
-    uint64_t window_size = (uint64_t)tcet->nb_table << tcet->page_shift;
 
-    if (kvm_enabled() && !(window_size >> 32)) {
-        tcet->table = kvmppc_create_spapr_tce(tcet->liobn,
-                                              window_size,
-                                              &tcet->fd,
-                                              tcet->vfio_accel);
-    }
-
-    if (!tcet->table) {
-        size_t table_size = tcet->nb_table * sizeof(uint64_t);
-        tcet->table = g_malloc0(table_size);
-    }
-
-    trace_spapr_iommu_new_table(tcet->liobn, tcet, tcet->table, tcet->fd);
+    tcet->fd = -1;
+    tcet->table = spapr_tce_alloc_table(tcet->liobn,
+                                        tcet->nb_table,
+                                        tcet->page_shift,
+                                        &tcet->fd,
+                                        tcet->vfio_accel);
 
     memory_region_init_iommu(&tcet->iommu, OBJECT(dev), &spapr_iommu_ops,
                              "iommu-spapr",
@@ -208,11 +231,8 @@ static void spapr_tce_table_unrealize(DeviceState *dev, Error **errp)
 
     QLIST_REMOVE(tcet, list);
 
-    if (!kvm_enabled() ||
-        (kvmppc_remove_spapr_tce(tcet->table, tcet->fd,
-                                 tcet->nb_table) != 0)) {
-        g_free(tcet->table);
-    }
+    spapr_tce_free_table(tcet->table, tcet->fd, tcet->nb_table);
+    tcet->fd = -1;
 }
 
 MemoryRegion *spapr_tce_get_iommu(sPAPRTCETable *tcet)
diff --git a/trace-events b/trace-events
index 52b7efa..a93af9a 100644
--- a/trace-events
+++ b/trace-events
@@ -1362,7 +1362,7 @@ spapr_iommu_pci_get(uint64_t liobn, uint64_t ioba, uint64_t ret, uint64_t tce) "
 spapr_iommu_pci_indirect(uint64_t liobn, uint64_t ioba, uint64_t tce, uint64_t iobaN, uint64_t tceN, uint64_t ret) "liobn=%"PRIx64" ioba=0x%"PRIx64" tcelist=0x%"PRIx64" iobaN=0x%"PRIx64" tceN=0x%"PRIx64" ret=%"PRId64
 spapr_iommu_pci_stuff(uint64_t liobn, uint64_t ioba, uint64_t tce_value, uint64_t npages, uint64_t ret) "liobn=%"PRIx64" ioba=0x%"PRIx64" tcevalue=0x%"PRIx64" npages=%"PRId64" ret=%"PRId64
 spapr_iommu_xlate(uint64_t liobn, uint64_t ioba, uint64_t tce, unsigned perm, unsigned pgsize) "liobn=%"PRIx64" 0x%"PRIx64" -> 0x%"PRIx64" perm=%u mask=%x"
-spapr_iommu_new_table(uint64_t liobn, void *tcet, void *table, int fd) "liobn=%"PRIx64" tcet=%p table=%p fd=%d"
+spapr_iommu_alloc_table(uint64_t liobn, void *table, int fd) "liobn=%"PRIx64" table=%p fd=%d"
 
 # hw/ppc/ppc.c
 ppc_tb_adjust(uint64_t offs1, uint64_t offs2, int64_t diff, int64_t seconds) "adjusted from 0x%"PRIx64" to 0x%"PRIx64", diff %"PRId64" (%"PRId64"s)"
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (4 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-22  3:45   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable Alexey Kardashevskiy
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

Currently TCE tables are created once at start and their size never
changes. We are going to change that by introducing a Dynamic DMA windows
support where DMA configuration may change during the guest execution.

This changes spapr_tce_new_table() to create an empty stub object; only
the LIOBN is assigned at creation time. It will still be called once
when the owner object (VIO or PHB) is created.

This introduces an "enabled" state for TCE table objects with two
helper functions - spapr_tce_table_enable()/spapr_tce_table_disable().
spapr_tce_table_enable() receives the TCE table parameters and allocates
a guest view of the TCE table (in userspace or KVM).
spapr_tce_table_disable() disposes of the table.

Follow-up patches will disable+enable tables on reset (system reset
or DDW reset).

No visible change in behaviour is expected, except that the actual table
will be reallocated on every reset. We might optimize this later.

The other way to implement this would be to dynamically create/remove
the TCE table QOM objects, but this would make migration impossible:
migration expects all QOM objects to exist on the receiver, so the TCE
table objects have to exist before migration begins.

spapr_tce_table_do_enable() is separated from spapr_tce_table_enable()
as it will later be called at the sPAPRTCETable post-migration stage,
when all the properties have been set after the migration.
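The enabled/disabled lifecycle described above can be modelled with a
minimal sketch (illustrative C, not the QEMU code; the field names follow
the patch, while the allocation is a plain calloc() stand-in for the real
userspace/KVM table allocation):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    bool enabled;
    uint64_t bus_offset;
    uint32_t page_shift;
    uint32_t nb_table;
    uint64_t *table;
} TCETable;

/* Enabling stores the window parameters and allocates the guest view.
 * An already-enabled table is left untouched, mirroring the patch. */
static void tce_table_enable(TCETable *t, uint64_t bus_offset,
                             uint32_t page_shift, uint32_t nb_table)
{
    if (t->enabled || !nb_table) {
        return;
    }
    t->bus_offset = bus_offset;
    t->page_shift = page_shift;
    t->nb_table = nb_table;
    t->table = calloc(nb_table, sizeof(uint64_t));
    t->enabled = true;
}

/* Disabling frees the table and clears every parameter, so a later
 * enable (e.g. after a DDW reset) starts from a clean stub object. */
static void tce_table_disable(TCETable *t)
{
    if (!t->enabled) {
        return;
    }
    free(t->table);
    t->table = NULL;
    t->bus_offset = 0;
    t->page_shift = 0;
    t->nb_table = 0;
    t->enabled = false;
}
```

Both helpers are idempotent, which is what makes the disable+enable
sequence on reset safe to call regardless of the table's current state.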

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v8:
* add missing unparent_object() to spapr_tce_table_unrealize() (parenting
is made by memory_region_init_iommu)
* tcet->iommu is alive as long as sPAPRTCETable is,
memory_region_set_size() is used to enable/disable MR

v7:
* s'tmp[64]'tmp[32]' as we need fewer than 64 bytes and more than 16 bytes,
and 32 is the closest power of two (it just looks nicer to use
power-of-two values)
* updated commit log about having spapr_tce_table_do_enable() split
from spapr_tce_table_enable()

v6:
* got rid of set_props()
---
 hw/ppc/spapr_iommu.c    | 79 +++++++++++++++++++++++++++++++++++--------------
 hw/ppc/spapr_pci.c      | 17 +++++++----
 hw/ppc/spapr_pci_vfio.c | 10 +++----
 hw/ppc/spapr_vio.c      |  9 +++---
 include/hw/ppc/spapr.h  | 11 +++----
 5 files changed, 82 insertions(+), 44 deletions(-)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 0cf5010..fbca136 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -173,15 +173,9 @@ static int spapr_tce_table_realize(DeviceState *dev)
     sPAPRTCETable *tcet = SPAPR_TCE_TABLE(dev);
 
     tcet->fd = -1;
-    tcet->table = spapr_tce_alloc_table(tcet->liobn,
-                                        tcet->nb_table,
-                                        tcet->page_shift,
-                                        &tcet->fd,
-                                        tcet->vfio_accel);
 
     memory_region_init_iommu(&tcet->iommu, OBJECT(dev), &spapr_iommu_ops,
-                             "iommu-spapr",
-                             (uint64_t)tcet->nb_table << tcet->page_shift);
+                             "iommu-spapr", 0);
 
     QLIST_INSERT_HEAD(&spapr_tce_tables, tcet, list);
 
@@ -191,14 +185,10 @@ static int spapr_tce_table_realize(DeviceState *dev)
     return 0;
 }
 
-sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn,
-                                   uint64_t bus_offset,
-                                   uint32_t page_shift,
-                                   uint32_t nb_table,
-                                   bool vfio_accel)
+sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn)
 {
     sPAPRTCETable *tcet;
-    char tmp[64];
+    char tmp[32];
 
     if (spapr_tce_find_by_liobn(liobn)) {
         fprintf(stderr, "Attempted to create TCE table with duplicate"
@@ -206,16 +196,8 @@ sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn,
         return NULL;
     }
 
-    if (!nb_table) {
-        return NULL;
-    }
-
     tcet = SPAPR_TCE_TABLE(object_new(TYPE_SPAPR_TCE_TABLE));
     tcet->liobn = liobn;
-    tcet->bus_offset = bus_offset;
-    tcet->page_shift = page_shift;
-    tcet->nb_table = nb_table;
-    tcet->vfio_accel = vfio_accel;
 
     snprintf(tmp, sizeof(tmp), "tce-table-%x", liobn);
     object_property_add_child(OBJECT(owner), tmp, OBJECT(tcet), NULL);
@@ -225,14 +207,65 @@ sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn,
     return tcet;
 }
 
+static void spapr_tce_table_do_enable(sPAPRTCETable *tcet)
+{
+    if (!tcet->nb_table) {
+        return;
+    }
+
+    tcet->table = spapr_tce_alloc_table(tcet->liobn,
+                                        tcet->nb_table,
+                                        tcet->page_shift,
+                                        &tcet->fd,
+                                        tcet->vfio_accel);
+
+    memory_region_set_size(&tcet->iommu,
+                           (uint64_t)tcet->nb_table << tcet->page_shift);
+
+    tcet->enabled = true;
+}
+
+void spapr_tce_table_enable(sPAPRTCETable *tcet,
+                            uint64_t bus_offset, uint32_t page_shift,
+                            uint32_t nb_table, bool vfio_accel)
+{
+    if (tcet->enabled) {
+        return;
+    }
+
+    tcet->bus_offset = bus_offset;
+    tcet->page_shift = page_shift;
+    tcet->nb_table = nb_table;
+    tcet->vfio_accel = vfio_accel;
+
+    spapr_tce_table_do_enable(tcet);
+}
+
+void spapr_tce_table_disable(sPAPRTCETable *tcet)
+{
+    if (!tcet->enabled) {
+        return;
+    }
+
+    memory_region_set_size(&tcet->iommu, 0);
+
+    spapr_tce_free_table(tcet->table, tcet->fd, tcet->nb_table);
+    tcet->fd = -1;
+    tcet->table = NULL;
+    tcet->enabled = false;
+    tcet->bus_offset = 0;
+    tcet->page_shift = 0;
+    tcet->nb_table = 0;
+    tcet->vfio_accel = false;
+}
+
 static void spapr_tce_table_unrealize(DeviceState *dev, Error **errp)
 {
     sPAPRTCETable *tcet = SPAPR_TCE_TABLE(dev);
 
     QLIST_REMOVE(tcet, list);
 
-    spapr_tce_free_table(tcet->table, tcet->fd, tcet->nb_table);
-    tcet->fd = -1;
+    spapr_tce_table_disable(tcet);
 }
 
 MemoryRegion *spapr_tce_get_iommu(sPAPRTCETable *tcet)
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index bbbb699..d1f29d8 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -758,13 +758,12 @@ static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
                                      uint64_t window_size)
 {
     uint64_t bus_offset = sphb->dma32_window_start;
-    sPAPRTCETable *tcet;
+    sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
 
-    tcet = spapr_tce_new_table(DEVICE(sphb), liobn, bus_offset, page_shift,
-                               window_size >> page_shift,
-                               false);
-
-    return tcet ? 0 : -1;
+    spapr_tce_table_enable(tcet, bus_offset, page_shift,
+                           window_size >> page_shift,
+                           false);
+    return 0;
 }
 
 /* Macros to operate with address in OF binding to PCI */
@@ -1301,6 +1300,12 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
         }
     }
 
+    tcet = spapr_tce_new_table(DEVICE(sphb), sphb->dma_liobn);
+    if (!tcet) {
+            error_setg(errp, "failed to create TCE table");
+            return;
+    }
+
     info->dma_capabilities_update(sphb);
     info->dma_init_window(sphb, sphb->dma_liobn, SPAPR_TCE_PAGE_SHIFT,
                           sphb->dma32_window_size);
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index f1dd28c..a5b97d0 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -49,13 +49,13 @@ static int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
                                           uint64_t window_size)
 {
     uint64_t bus_offset = sphb->dma32_window_start;
-    sPAPRTCETable *tcet;
+    sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
 
-    tcet = spapr_tce_new_table(DEVICE(sphb), liobn, bus_offset, page_shift,
-                               window_size >> page_shift,
-                               true);
+    spapr_tce_table_enable(tcet, bus_offset, page_shift,
+                           window_size >> page_shift,
+                           true);
 
-    return tcet ? 0 : -1;
+    return 0;
 }
 
 static void spapr_phb_vfio_reset(DeviceState *qdev)
diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
index e380b5f..46c5bb0 100644
--- a/hw/ppc/spapr_vio.c
+++ b/hw/ppc/spapr_vio.c
@@ -480,11 +480,10 @@ static void spapr_vio_busdev_realize(DeviceState *qdev, Error **errp)
         memory_region_add_subregion_overlap(&dev->mrroot, 0, &dev->mrbypass, 1);
         address_space_init(&dev->as, &dev->mrroot, qdev->id);
 
-        dev->tcet = spapr_tce_new_table(qdev, liobn,
-                                        0,
-                                        SPAPR_TCE_PAGE_SHIFT,
-                                        pc->rtce_window_size >>
-                                        SPAPR_TCE_PAGE_SHIFT, false);
+        dev->tcet = spapr_tce_new_table(qdev, liobn);
+        spapr_tce_table_enable(dev->tcet, 0, SPAPR_TCE_PAGE_SHIFT,
+                               pc->rtce_window_size >> SPAPR_TCE_PAGE_SHIFT,
+                               false);
         dev->tcet->vdev = dev;
         memory_region_add_subregion_overlap(&dev->mrroot, 0,
                                             spapr_tce_get_iommu(dev->tcet), 2);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 91a61ab..ed68c95 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -552,6 +552,7 @@ typedef struct sPAPRTCETable sPAPRTCETable;
 
 struct sPAPRTCETable {
     DeviceState parent;
+    bool enabled;
     uint32_t liobn;
     uint32_t nb_table;
     uint64_t bus_offset;
@@ -578,11 +579,11 @@ void spapr_events_init(sPAPRMachineState *sm);
 void spapr_events_fdt_skel(void *fdt, uint32_t epow_irq);
 int spapr_h_cas_compose_response(sPAPRMachineState *sm,
                                  target_ulong addr, target_ulong size);
-sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn,
-                                   uint64_t bus_offset,
-                                   uint32_t page_shift,
-                                   uint32_t nb_table,
-                                   bool vfio_accel);
+sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn);
+void spapr_tce_table_enable(sPAPRTCETable *tcet,
+                            uint64_t bus_offset, uint32_t page_shift,
+                            uint32_t nb_table, bool vfio_accel);
+void spapr_tce_table_disable(sPAPRTCETable *tcet);
 MemoryRegion *spapr_tce_get_iommu(sPAPRTCETable *tcet);
 int spapr_dma_dt(void *fdt, int node_off, const char *propname,
                  uint32_t liobn, uint64_t window, uint32_t size);
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (5 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-22  3:51   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 08/14] spapr_iommu: Add root memory region Alexey Kardashevskiy
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

sPAPRTCETable has a vfio_accel flag which is passed to
kvmppc_create_spapr_tce() and controls whether to create a guest view
table in KVM, as this depends on the host kernel's ability to accelerate
H_PUT_TCE for VFIO devices. The flag used to be set when the
sPAPRTCETable was created in spapr_tce_new_table() and used when
the table was allocated in spapr_tce_table_realize().

Now we explicitly enable/disable DMA windows via spapr_tce_table_enable()
and spapr_tce_table_disable() and can pass this flag directly without
caching it in sPAPRTCETable.

This removes the flag. This should cause no behavioural change.
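The shape of the cleanup can be sketched in a few lines (hypothetical C
for illustration; last_vfio_accel is a fake probe added so the parameter
flow is observable, and do_enable() stands in for the call chain down to
kvmppc_create_spapr_tce()):

```c
#include <assert.h>
#include <stdbool.h>

static bool last_vfio_accel;  /* fake probe, records what was passed down */

/* Before: the callee read a cached tcet->vfio_accel field.
 * After: the value arrives as an ordinary parameter. */
static void do_enable(bool vfio_accel)
{
    last_vfio_accel = vfio_accel;  /* real code would pass it to KVM here */
}

static void table_enable(bool vfio_accel)
{
    /* no "tcet->vfio_accel = vfio_accel" caching step any more */
    do_enable(vfio_accel);
}
```

Threading the flag through the call instead of caching it removes one
piece of state that would otherwise have to be kept consistent across
the new enable/disable cycles.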

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v8:
* new to patchset, this is cleanup
---
 hw/ppc/spapr_iommu.c   | 8 +++-----
 include/hw/ppc/spapr.h | 1 -
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index fbca136..1378a7a 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -207,7 +207,7 @@ sPAPRTCETable *spapr_tce_new_table(DeviceState *owner, uint32_t liobn)
     return tcet;
 }
 
-static void spapr_tce_table_do_enable(sPAPRTCETable *tcet)
+static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel)
 {
     if (!tcet->nb_table) {
         return;
@@ -217,7 +217,7 @@ static void spapr_tce_table_do_enable(sPAPRTCETable *tcet)
                                         tcet->nb_table,
                                         tcet->page_shift,
                                         &tcet->fd,
-                                        tcet->vfio_accel);
+                                        vfio_accel);
 
     memory_region_set_size(&tcet->iommu,
                            (uint64_t)tcet->nb_table << tcet->page_shift);
@@ -236,9 +236,8 @@ void spapr_tce_table_enable(sPAPRTCETable *tcet,
     tcet->bus_offset = bus_offset;
     tcet->page_shift = page_shift;
     tcet->nb_table = nb_table;
-    tcet->vfio_accel = vfio_accel;
 
-    spapr_tce_table_do_enable(tcet);
+    spapr_tce_table_do_enable(tcet, vfio_accel);
 }
 
 void spapr_tce_table_disable(sPAPRTCETable *tcet)
@@ -256,7 +255,6 @@ void spapr_tce_table_disable(sPAPRTCETable *tcet)
     tcet->bus_offset = 0;
     tcet->page_shift = 0;
     tcet->nb_table = 0;
-    tcet->vfio_accel = false;
 }
 
 static void spapr_tce_table_unrealize(DeviceState *dev, Error **errp)
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index ed68c95..1da0ade 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -559,7 +559,6 @@ struct sPAPRTCETable {
     uint32_t page_shift;
     uint64_t *table;
     bool bypass;
-    bool vfio_accel;
     int fd;
     MemoryRegion iommu;
     struct VIOsPAPRDevice *vdev; /* for @bypass migration compatibility only */
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 08/14] spapr_iommu: Add root memory region
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (6 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 09/14] spapr_pci: Do complete reset of DMA config when resetting PHB Alexey Kardashevskiy
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

We are going to have multiple DMA windows at different offsets on
a PCI bus. For the sake of migration, as many TCE table objects will be
pre-created as there are supported windows, so we need a way to map
windows onto a PCI bus dynamically once migration of a table has
completed; at that stage a TCE table object has no access to a PHB
that it could ask to map a DMA window backed by the just-migrated
TCE table.

This adds a "root" memory region (UINT64_MAX long) to the TCE table
object. This new region is mapped on a PCI bus with overlapping enabled,
as there will be one root MR per TCE table, each of them mapped at 0.
The actual IOMMU memory region is a subregion of the root region;
a TCE table enables/disables this subregion and maps it at
the specific offset inside the root MR, which is a 1:1 mapping of
the PCI address space.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_iommu.c   | 13 ++++++++++---
 hw/ppc/spapr_pci.c     |  2 +-
 include/hw/ppc/spapr.h |  2 +-
 3 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 1378a7a..45c00d8 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -171,11 +171,16 @@ static MemoryRegionIOMMUOps spapr_iommu_ops = {
 static int spapr_tce_table_realize(DeviceState *dev)
 {
     sPAPRTCETable *tcet = SPAPR_TCE_TABLE(dev);
+    Object *tcetobj = OBJECT(tcet);
+    char tmp[32];
 
     tcet->fd = -1;
 
-    memory_region_init_iommu(&tcet->iommu, OBJECT(dev), &spapr_iommu_ops,
-                             "iommu-spapr", 0);
+    snprintf(tmp, sizeof(tmp), "tce-root-%x", tcet->liobn);
+    memory_region_init(&tcet->root, tcetobj, tmp, UINT64_MAX);
+
+    snprintf(tmp, sizeof(tmp), "tce-iommu-%x", tcet->liobn);
+    memory_region_init_iommu(&tcet->iommu, tcetobj, &spapr_iommu_ops, tmp, 0);
 
     QLIST_INSERT_HEAD(&spapr_tce_tables, tcet, list);
 
@@ -221,6 +226,7 @@ static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel)
 
     memory_region_set_size(&tcet->iommu,
                            (uint64_t)tcet->nb_table << tcet->page_shift);
+    memory_region_add_subregion(&tcet->root, tcet->bus_offset, &tcet->iommu);
 
     tcet->enabled = true;
 }
@@ -246,6 +252,7 @@ void spapr_tce_table_disable(sPAPRTCETable *tcet)
         return;
     }
 
+    memory_region_del_subregion(&tcet->root, &tcet->iommu);
     memory_region_set_size(&tcet->iommu, 0);
 
     spapr_tce_free_table(tcet->table, tcet->fd, tcet->nb_table);
@@ -268,7 +275,7 @@ static void spapr_tce_table_unrealize(DeviceState *dev, Error **errp)
 
 MemoryRegion *spapr_tce_get_iommu(sPAPRTCETable *tcet)
 {
-    return &tcet->iommu;
+    return &tcet->root;
 }
 
 static void spapr_tce_reset(DeviceState *dev)
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index d1f29d8..8707ac8 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -1314,7 +1314,7 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
         error_setg(errp, "failed to create TCE table");
         return;
     }
-    memory_region_add_subregion(&sphb->iommu_root, tcet->bus_offset,
+    memory_region_add_subregion(&sphb->iommu_root, 0,
                                 spapr_tce_get_iommu(tcet));
 
     sphb->msi = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, g_free);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 1da0ade..e32e787 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -560,7 +560,7 @@ struct sPAPRTCETable {
     uint64_t *table;
     bool bypass;
     int fd;
-    MemoryRegion iommu;
+    MemoryRegion root, iommu;
     struct VIOsPAPRDevice *vdev; /* for @bypass migration compatibility only */
     QLIST_ENTRY(sPAPRTCETable) list;
 };
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 09/14] spapr_pci: Do complete reset of DMA config when resetting PHB
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (7 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 08/14] spapr_iommu: Add root memory region Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge Alexey Kardashevskiy
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

On a system reset, the DMA configuration has to be reset too. At the
moment the reset handler only clears the table contents. This is enough
for the single-table case, but with DDW we will also have to disable all
DMA windows except the default one. Furthermore, according to sPAPR, if
the guest removed the default window and created a huge one at the same
zero offset on a PCI bus, the reset handler has to recreate the default
window with the default properties (2GB big, 4K pages).

This reworks the SPAPR PHB code to disable the existing DMA window on
reset and then configure and enable the default window.
Without DDW this means the same window is disabled and then re-enabled
with no other change in behaviour.

This moves table creation to a single place in the PHB code (the VFIO
PHB just inherits the behaviour). The actual table allocation is done
from the reset handler, which is where dma_init_window() is called.

This disables all DMA windows on a PHB reset. It makes no difference
now as there is just one DMA window, but it will once the DDW patches
are applied.

This makes spapr_phb_dma_reset() and spapr_phb_dma_remove_window() public
as they will be used by the DDW RTAS "ibm,reset-pe-dma-window" and
"ibm,remove-pe-dma-window" handlers later; the handlers will reside in
hw/ppc/spapr_rtas_ddw.c.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v7:
* s'finish_realize'dma_init_window' in the commit log
* added details (initial clause about reuse was there :) )
why exactly spapr_phb_dma_remove_window is public
---
 hw/ppc/spapr_pci.c          | 45 ++++++++++++++++++++++++++++++++++++---------
 hw/ppc/spapr_pci_vfio.c     |  6 ------
 include/hw/pci-host/spapr.h |  3 +++
 3 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 8707ac8..3a37b6c 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -766,6 +766,40 @@ static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
     return 0;
 }
 
+int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
+                                sPAPRTCETable *tcet)
+{
+    spapr_tce_table_disable(tcet);
+
+    return 0;
+}
+
+static int spapr_phb_disable_dma_windows(Object *child, void *opaque)
+{
+    sPAPRPHBState *sphb = SPAPR_PCI_HOST_BRIDGE(opaque);
+    sPAPRTCETable *tcet = (sPAPRTCETable *)
+        object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
+
+    if (tcet) {
+        spapr_phb_dma_remove_window(sphb, tcet);
+    }
+
+    return 0;
+}
+
+int spapr_phb_dma_reset(sPAPRPHBState *sphb)
+{
+    const uint32_t liobn = SPAPR_PCI_LIOBN(sphb->index, 0);
+    sPAPRPHBClass *spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
+
+    spc->dma_capabilities_update(sphb); /* Refresh @has_vfio status */
+    object_child_foreach(OBJECT(sphb), spapr_phb_disable_dma_windows, sphb);
+    spc->dma_init_window(sphb, liobn, SPAPR_TCE_PAGE_SHIFT,
+                         sphb->dma32_window_size);
+
+    return 0;
+}
+
 /* Macros to operate with address in OF binding to PCI */
 #define b_x(x, p, l)    (((x) & ((1<<(l))-1)) << (p))
 #define b_n(x)          b_x((x), 31, 1) /* 0 if relocatable */
@@ -1145,7 +1179,6 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
     SysBusDevice *s = SYS_BUS_DEVICE(dev);
     sPAPRPHBState *sphb = SPAPR_PCI_HOST_BRIDGE(s);
     PCIHostState *phb = PCI_HOST_BRIDGE(s);
-    sPAPRPHBClass *info = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(s);
     char *namebuf;
     int i;
     PCIBus *bus;
@@ -1306,14 +1339,6 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
             return;
     }
 
-    info->dma_capabilities_update(sphb);
-    info->dma_init_window(sphb, sphb->dma_liobn, SPAPR_TCE_PAGE_SHIFT,
-                          sphb->dma32_window_size);
-    tcet = spapr_tce_find_by_liobn(sphb->dma_liobn);
-    if (!tcet) {
-        error_setg(errp, "failed to create TCE table");
-        return;
-    }
     memory_region_add_subregion(&sphb->iommu_root, 0,
                                 spapr_tce_get_iommu(tcet));
 
@@ -1333,6 +1358,8 @@ static int spapr_phb_children_reset(Object *child, void *opaque)
 
 static void spapr_phb_reset(DeviceState *qdev)
 {
+    spapr_phb_dma_reset(SPAPR_PCI_HOST_BRIDGE(qdev));
+
     /* Reset the IOMMU state */
     object_child_foreach(OBJECT(qdev), spapr_phb_children_reset, NULL);
 }
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index a5b97d0..f89e053 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -58,11 +58,6 @@ static int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
     return 0;
 }
 
-static void spapr_phb_vfio_reset(DeviceState *qdev)
-{
-    /* Do nothing */
-}
-
 static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
                                          unsigned int addr, int option)
 {
@@ -176,7 +171,6 @@ static void spapr_phb_vfio_class_init(ObjectClass *klass, void *data)
     sPAPRPHBClass *spc = SPAPR_PCI_HOST_BRIDGE_CLASS(klass);
 
     dc->props = spapr_phb_vfio_properties;
-    dc->reset = spapr_phb_vfio_reset;
     spc->dma_capabilities_update = spapr_phb_vfio_dma_capabilities_update;
     spc->dma_init_window = spapr_phb_vfio_dma_init_window;
     spc->eeh_set_option = spapr_phb_vfio_eeh_set_option;
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index b6d5719..4d63a63 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -143,5 +143,8 @@ void spapr_pci_rtas_init(void);
 sPAPRPHBState *spapr_pci_find_phb(sPAPRMachineState *spapr, uint64_t buid);
 PCIDevice *spapr_pci_find_dev(sPAPRMachineState *spapr, uint64_t buid,
                               uint32_t config_addr);
+int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
+                                sPAPRTCETable *tcet);
+int spapr_phb_dma_reset(sPAPRPHBState *sphb);
 
 #endif /* __HW_SPAPR_PCI_H__ */
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (8 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 09/14] spapr_pci: Do complete reset of DMA config when resetting PHB Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-22  4:41   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug Alexey Kardashevskiy
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

sPAPRTCETable already handles two TCE tables:

1) the guest view of the TCE table - emulated devices use only this table;

2) the hardware IOMMU table - VFIO PCI devices use it for the actual
work, but it does not replace 1) and is not visible to the guest.
Initialization of this table is driven by the vfio-pci device, and
DMA map/unmap requests are handled via a MemoryListener, so there is
very little for spapr-pci-vfio-host-bridge to do.

This moves the VFIO bits to the generic spapr-pci-host-bridge, which
allows putting emulated and VFIO devices on the same PHB. It is still
possible to create multiple PHBs and avoid sharing PHB resources between
emulated and VFIO devices.

If no VFIO-PCI device is attached, no special ioctls will be called.
If some VFIO-PCI devices are attached, the PHB may refuse to attach
another VFIO-PCI device if the VFIO container on the host kernel side
does not support container sharing.

This changes spapr-pci-host-bridge to support the properties of
spapr-pci-vfio-host-bridge. This makes the spapr-pci-vfio-host-bridge
type equal to spapr-pci-host-bridge except that it keeps an additional
"iommu" property for backward compatibility.

This moves the PCI device lookup from spapr_phb_vfio_eeh_set_option() to
rtas_ibm_set_eeh_option() as we need to know whether the device is
"vfio-pci" in order to decide whether to call
spapr_phb_vfio_eeh_set_option().

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v8:
* call spapr_phb_vfio_eeh_set_option() on vfio-pci devices only (reported by Gavin)
---
 hw/ppc/spapr_pci.c          | 80 ++++++++++++++++-----------------------------
 hw/ppc/spapr_pci_vfio.c     | 55 +++++--------------------------
 include/hw/pci-host/spapr.h | 25 ++++++--------
 3 files changed, 48 insertions(+), 112 deletions(-)

diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 3a37b6c..ca3772e 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -426,7 +426,7 @@ static void rtas_ibm_set_eeh_option(PowerPCCPU *cpu,
                                     target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
+    PCIDevice *pdev;
     uint32_t addr, option;
     uint64_t buid;
     int ret;
@@ -440,16 +440,17 @@ static void rtas_ibm_set_eeh_option(PowerPCCPU *cpu,
     option = rtas_ld(args, 3);
 
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_set_option) {
+    pdev = pci_find_device(PCI_HOST_BRIDGE(sphb)->bus,
+                           (addr >> 16) & 0xFF, (addr >> 8) & 0xFF);
+    if (!pdev || !object_dynamic_cast(OBJECT(pdev), "vfio-pci")) {
         goto param_error_exit;
     }
 
-    ret = spc->eeh_set_option(sphb, addr, option);
+    ret = spapr_phb_vfio_eeh_set_option(sphb, pdev, option);
     rtas_st(rets, 0, ret);
     return;
 
@@ -464,7 +465,6 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
                                            target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
     PCIDevice *pdev;
     uint32_t addr, option;
     uint64_t buid;
@@ -475,12 +475,7 @@ static void rtas_ibm_get_config_addr_info2(PowerPCCPU *cpu,
 
     buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
-        goto param_error_exit;
-    }
-
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_set_option) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
@@ -520,7 +515,6 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
                                             target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
     uint64_t buid;
     int state, ret;
 
@@ -530,16 +524,11 @@ static void rtas_ibm_read_slot_reset_state2(PowerPCCPU *cpu,
 
     buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_get_state) {
-        goto param_error_exit;
-    }
-
-    ret = spc->eeh_get_state(sphb, &state);
+    ret = spapr_phb_vfio_eeh_get_state(sphb, &state);
     rtas_st(rets, 0, ret);
     if (ret != RTAS_OUT_SUCCESS) {
         return;
@@ -564,7 +553,6 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
                                     target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
     uint32_t option;
     uint64_t buid;
     int ret;
@@ -576,16 +564,11 @@ static void rtas_ibm_set_slot_reset(PowerPCCPU *cpu,
     buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
     option = rtas_ld(args, 3);
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_reset) {
-        goto param_error_exit;
-    }
-
-    ret = spc->eeh_reset(sphb, option);
+    ret = spapr_phb_vfio_eeh_reset(sphb, option);
     rtas_st(rets, 0, ret);
     return;
 
@@ -600,7 +583,6 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
                                   target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
     uint64_t buid;
     int ret;
 
@@ -610,16 +592,11 @@ static void rtas_ibm_configure_pe(PowerPCCPU *cpu,
 
     buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_configure) {
-        goto param_error_exit;
-    }
-
-    ret = spc->eeh_configure(sphb);
+    ret = spapr_phb_vfio_eeh_configure(sphb);
     rtas_st(rets, 0, ret);
     return;
 
@@ -635,7 +612,6 @@ static void rtas_ibm_slot_error_detail(PowerPCCPU *cpu,
                                        target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    sPAPRPHBClass *spc;
     int option;
     uint64_t buid;
 
@@ -645,12 +621,7 @@ static void rtas_ibm_slot_error_detail(PowerPCCPU *cpu,
 
     buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
     sphb = spapr_pci_find_phb(spapr, buid);
-    if (!sphb) {
-        goto param_error_exit;
-    }
-
-    spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
-    if (!spc->eeh_set_option) {
+    if (!sphb || !sphb->has_vfio) {
         goto param_error_exit;
     }
 
@@ -747,9 +718,14 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
 
 static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
 {
+    int ret;
+
     sphb->dma32_window_start = 0;
     sphb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
 
+    ret = spapr_phb_vfio_dma_capabilities_update(sphb);
+    sphb->has_vfio = (ret == 0);
+
     return 0;
 }
 
@@ -762,7 +738,8 @@ static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
 
     spapr_tce_table_enable(tcet, bus_offset, page_shift,
                            window_size >> page_shift,
-                           false);
+                           sphb->has_vfio);
+
     return 0;
 }
 
@@ -790,12 +767,11 @@ static int spapr_phb_disable_dma_windows(Object *child, void *opaque)
 int spapr_phb_dma_reset(sPAPRPHBState *sphb)
 {
     const uint32_t liobn = SPAPR_PCI_LIOBN(sphb->index, 0);
-    sPAPRPHBClass *spc = SPAPR_PCI_HOST_BRIDGE_GET_CLASS(sphb);
 
-    spc->dma_capabilities_update(sphb); /* Refresh @has_vfio status */
+    spapr_phb_dma_capabilities_update(sphb); /* Refresh @has_vfio status */
     object_child_foreach(OBJECT(sphb), spapr_phb_disable_dma_windows, sphb);
-    spc->dma_init_window(sphb, liobn, SPAPR_TCE_PAGE_SHIFT,
-                         sphb->dma32_window_size);
+    spapr_phb_dma_init_window(sphb, liobn, SPAPR_TCE_PAGE_SHIFT,
+                              sphb->dma32_window_size);
 
     return 0;
 }
@@ -1185,6 +1161,11 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
     uint64_t msi_window_size = 4096;
     sPAPRTCETable *tcet;
 
+    if ((sphb->iommugroupid != -1) &&
+        object_dynamic_cast(OBJECT(sphb), TYPE_SPAPR_PCI_VFIO_HOST_BRIDGE)) {
+        error_report("Warning: iommugroupid shall not be used");
+    }
+
     if (sphb->index != (uint32_t)-1) {
         hwaddr windows_base;
 
@@ -1482,7 +1463,6 @@ static void spapr_phb_class_init(ObjectClass *klass, void *data)
 {
     PCIHostBridgeClass *hc = PCI_HOST_BRIDGE_CLASS(klass);
     DeviceClass *dc = DEVICE_CLASS(klass);
-    sPAPRPHBClass *spc = SPAPR_PCI_HOST_BRIDGE_CLASS(klass);
     HotplugHandlerClass *hp = HOTPLUG_HANDLER_CLASS(klass);
 
     hc->root_bus_path = spapr_phb_root_bus_path;
@@ -1494,8 +1474,6 @@ static void spapr_phb_class_init(ObjectClass *klass, void *data)
     dc->cannot_instantiate_with_device_add_yet = false;
     hp->plug = spapr_phb_hot_plug_child;
     hp->unplug = spapr_phb_hot_unplug_child;
-    spc->dma_capabilities_update = spapr_phb_dma_capabilities_update;
-    spc->dma_init_window = spapr_phb_dma_init_window;
 }
 
 static const TypeInfo spapr_phb_info = {
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index f89e053..6df9a23 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -23,11 +23,11 @@
 #include "hw/vfio/vfio.h"
 
 static Property spapr_phb_vfio_properties[] = {
-    DEFINE_PROP_INT32("iommu", sPAPRPHBVFIOState, iommugroupid, -1),
+    DEFINE_PROP_INT32("iommu", sPAPRPHBState, iommugroupid, -1),
     DEFINE_PROP_END_OF_LIST(),
 };
 
-static int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
+int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
 {
     struct vfio_iommu_spapr_tce_info info = { .argsz = sizeof(info) };
     int ret;
@@ -44,22 +44,8 @@ static int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
     return ret;
 }
 
-static int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
-                                          uint32_t liobn, uint32_t page_shift,
-                                          uint64_t window_size)
-{
-    uint64_t bus_offset = sphb->dma32_window_start;
-    sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
-
-    spapr_tce_table_enable(tcet, bus_offset, page_shift,
-                           window_size >> page_shift,
-                           true);
-
-    return 0;
-}
-
-static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
-                                         unsigned int addr, int option)
+int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
+                                  PCIDevice *pdev, int option)
 {
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
@@ -68,25 +54,9 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
     case RTAS_EEH_DISABLE:
         op.op = VFIO_EEH_PE_DISABLE;
         break;
-    case RTAS_EEH_ENABLE: {
-        PCIHostState *phb;
-        PCIDevice *pdev;
-
-        /*
-         * The EEH functionality is enabled on basis of PCI device,
-         * instead of PE. We need check the validity of the PCI
-         * device address.
-         */
-        phb = PCI_HOST_BRIDGE(sphb);
-        pdev = pci_find_device(phb->bus,
-                               (addr >> 16) & 0xFF, (addr >> 8) & 0xFF);
-        if (!pdev) {
-            return RTAS_OUT_PARAM_ERROR;
-        }
-
+    case RTAS_EEH_ENABLE:
         op.op = VFIO_EEH_PE_ENABLE;
         break;
-    }
     case RTAS_EEH_THAW_IO:
         op.op = VFIO_EEH_PE_UNFREEZE_IO;
         break;
@@ -106,7 +76,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
     return RTAS_OUT_SUCCESS;
 }
 
-static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
+int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
 {
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
@@ -122,7 +92,7 @@ static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
     return RTAS_OUT_SUCCESS;
 }
 
-static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
+int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
 {
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
@@ -150,7 +120,7 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
     return RTAS_OUT_SUCCESS;
 }
 
-static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
+int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
 {
     struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
     int ret;
@@ -168,21 +138,14 @@ static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
 static void spapr_phb_vfio_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
-    sPAPRPHBClass *spc = SPAPR_PCI_HOST_BRIDGE_CLASS(klass);
 
     dc->props = spapr_phb_vfio_properties;
-    spc->dma_capabilities_update = spapr_phb_vfio_dma_capabilities_update;
-    spc->dma_init_window = spapr_phb_vfio_dma_init_window;
-    spc->eeh_set_option = spapr_phb_vfio_eeh_set_option;
-    spc->eeh_get_state = spapr_phb_vfio_eeh_get_state;
-    spc->eeh_reset = spapr_phb_vfio_eeh_reset;
-    spc->eeh_configure = spapr_phb_vfio_eeh_configure;
 }
 
 static const TypeInfo spapr_phb_vfio_info = {
     .name          = TYPE_SPAPR_PCI_VFIO_HOST_BRIDGE,
     .parent        = TYPE_SPAPR_PCI_HOST_BRIDGE,
-    .instance_size = sizeof(sPAPRPHBVFIOState),
+    .instance_size = sizeof(sPAPRPHBState),
     .class_init    = spapr_phb_vfio_class_init,
     .class_size    = sizeof(sPAPRPHBClass),
 };
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index 4d63a63..b2a8fc3 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -47,15 +47,6 @@ typedef struct sPAPRPHBVFIOState sPAPRPHBVFIOState;
 
 struct sPAPRPHBClass {
     PCIHostBridgeClass parent_class;
-
-    int (*dma_capabilities_update)(sPAPRPHBState *sphb);
-    int (*dma_init_window)(sPAPRPHBState *sphb,
-                           uint32_t liobn, uint32_t page_shift,
-                           uint64_t window_size);
-    int (*eeh_set_option)(sPAPRPHBState *sphb, unsigned int addr, int option);
-    int (*eeh_get_state)(sPAPRPHBState *sphb, int *state);
-    int (*eeh_reset)(sPAPRPHBState *sphb, int option);
-    int (*eeh_configure)(sPAPRPHBState *sphb);
 };
 
 typedef struct spapr_pci_msi {
@@ -95,16 +86,12 @@ struct sPAPRPHBState {
 
     uint32_t dma32_window_start;
     uint32_t dma32_window_size;
+    bool has_vfio;
+    int32_t iommugroupid; /* obsolete */
 
     QLIST_ENTRY(sPAPRPHBState) list;
 };
 
-struct sPAPRPHBVFIOState {
-    sPAPRPHBState phb;
-
-    int32_t iommugroupid;
-};
-
 #define SPAPR_PCI_MAX_INDEX          255
 
 #define SPAPR_PCI_BASE_BUID          0x800000020000000ULL
@@ -147,4 +134,12 @@ int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
                                 sPAPRTCETable *tcet);
 int spapr_phb_dma_reset(sPAPRPHBState *sphb);
 
+int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb);
+int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
+                                  PCIDevice *pdev, int option);
+int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state);
+int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option);
+int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb);
+
+
 #endif /* __HW_SPAPR_PCI_H__ */
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (9 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-22  5:14   ` David Gibson
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 12/14] linux headers update for DDW on SPAPR Alexey Kardashevskiy
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

sPAPR IOMMU is managing two copies of an TCE table:
1) a guest view of the table - this is what emulated devices use and
this is where H_GET_TCE reads from;
2) a hardware TCE table - only present if there is at least one vfio-pci
device on a PHB; it is updated via a memory listener on a PHB address
space which forwards map/unmap requests to vfio-pci IOMMU host driver.

At the moment presence of vfio-pci devices on a bus affect the way
the guest view table is allocated. If there is no vfio-pci on a PHB
and the host kernel supports KVM acceleration of H_PUT_TCE, a table
is allocated in KVM. However, if there is vfio-pci and we do yet not
support KVM acceleration for these, the table has to be allocated
by the userspace.

When vfio-pci device is hotplugged and there were no vfio-pci devices
already, the guest view table could have been allocated by KVM which
means that H_PUT_TCE is handled by the host kernel and since we
do not support vfio-pci in KVM, the hardware table will not be updated.

This reallocates the guest view table in QEMU if the first vfio-pci
device has just been plugged. spapr_tce_realloc_userspace() handles this.

This then replays all the mappings to make sure the tables are in sync.
This will not have a visible effect, though, as for a new device
the guest kernel will allocate-and-map new addresses, so
existing mappings from emulated devices will not be used by vfio-pci
devices.
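
The replay itself is a linear walk over the table: each slot's I/O bus
address (IOBA) is derived from the window's bus offset and page shift,
matching the `ioba += pgsz` walk in spapr_tce_do_replay(). A minimal
sketch of that address calculation (the helper name is ours, not QEMU's):

```c
#include <stdint.h>

/* Hypothetical helper: the IOBA that replay would pass to put_tce_emu()
 * for a given table slot. */
static uint64_t tce_slot_ioba(uint64_t bus_offset, uint32_t page_shift,
                              uint64_t slot)
{
    return bus_offset + (slot << page_shift);
}
```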

This adds calls to spapr_phb_dma_capabilities_update() in the PCI hotplug
hooks.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 hw/ppc/spapr_iommu.c   | 50 +++++++++++++++++++++++++++++++++++++++++++++++---
 hw/ppc/spapr_pci.c     | 43 +++++++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h |  2 ++
 trace-events           |  2 ++
 4 files changed, 94 insertions(+), 3 deletions(-)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 45c00d8..5e6bdb4 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -78,12 +78,13 @@ static uint64_t *spapr_tce_alloc_table(uint32_t liobn,
                                        uint32_t nb_table,
                                        uint32_t page_shift,
                                        int *fd,
-                                       bool vfio_accel)
+                                       bool vfio_accel,
+                                       bool force_userspace)
 {
     uint64_t *table = NULL;
     uint64_t window_size = (uint64_t)nb_table << page_shift;
 
-    if (kvm_enabled() && !(window_size >> 32)) {
+    if (kvm_enabled() && !force_userspace && !(window_size >> 32)) {
         table = kvmppc_create_spapr_tce(liobn, window_size, fd, vfio_accel);
     }
 
@@ -222,7 +223,8 @@ static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel)
                                         tcet->nb_table,
                                         tcet->page_shift,
                                         &tcet->fd,
-                                        vfio_accel);
+                                        vfio_accel,
+                                        false);
 
     memory_region_set_size(&tcet->iommu,
                            (uint64_t)tcet->nb_table << tcet->page_shift);
@@ -495,6 +497,48 @@ int spapr_dma_dt(void *fdt, int node_off, const char *propname,
     return 0;
 }
 
+static int spapr_tce_do_replay(sPAPRTCETable *tcet, uint64_t *table)
+{
+    target_ulong ioba = tcet->bus_offset, pgsz = (1ULL << tcet->page_shift);
+    long i, ret = 0;
+
+    for (i = 0; i < tcet->nb_table; ++i, ioba += pgsz) {
+        ret = put_tce_emu(tcet, ioba, table[i]);
+        if (ret)
+            break;
+    }
+
+    return ret;
+}
+
+int spapr_tce_replay(sPAPRTCETable *tcet)
+{
+    return spapr_tce_do_replay(tcet, tcet->table);
+}
+
+int spapr_tce_realloc_userspace(sPAPRTCETable *tcet, bool replay)
+{
+    int ret = 0, oldfd;
+    uint64_t *oldtable;
+
+    oldtable = tcet->table;
+    oldfd = tcet->fd;
+    tcet->table = spapr_tce_alloc_table(tcet->liobn,
+                                        tcet->nb_table,
+                                        tcet->page_shift,
+                                        &tcet->fd,
+                                        false,
+                                        true); /* force_userspace */
+
+    if (replay) {
+        ret = spapr_tce_do_replay(tcet, oldtable);
+    }
+
+    spapr_tce_free_table(oldtable, oldfd, tcet->nb_table);
+
+    return ret;
+}
+
 int spapr_tcet_dma_dt(void *fdt, int node_off, const char *propname,
                       sPAPRTCETable *tcet)
 {
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index ca3772e..1f980fa 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -716,6 +716,33 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
     return &phb->iommu_as;
 }
 
+static int spapr_phb_dma_update(Object *child, void *opaque)
+{
+    int ret = 0;
+    sPAPRTCETable *tcet = (sPAPRTCETable *)
+        object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
+
+    if (!tcet) {
+        return 0;
+    }
+
+    if (tcet->fd >= 0) {
+        /*
+         * We got first vfio-pci device on accelerated table.
+         * VFIO acceleration is not possible.
+         * Reallocate table in userspace and replay mappings.
+         */
+        ret = spapr_tce_realloc_userspace(tcet, true);
+        trace_spapr_pci_dma_realloc_update(tcet->liobn, ret);
+    } else {
+        /* There was no acceleration, so just replay mappings. */
+        ret = spapr_tce_replay(tcet);
+        trace_spapr_pci_dma_update(tcet->liobn, ret);
+    }
+
+    return 0;
+}
+
 static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
 {
     int ret;
@@ -776,6 +803,20 @@ int spapr_phb_dma_reset(sPAPRPHBState *sphb)
     return 0;
 }
 
+static int spapr_phb_hotplug_dma_sync(sPAPRPHBState *sphb)
+{
+    int ret = 0;
+    bool had_vfio = sphb->has_vfio;
+
+    spapr_phb_dma_capabilities_update(sphb);
+
+    if (!had_vfio && sphb->has_vfio) {
+        object_child_foreach(OBJECT(sphb), spapr_phb_dma_update, NULL);
+    }
+
+    return ret;
+}
+
 /* Macros to operate with address in OF binding to PCI */
 #define b_x(x, p, l)    (((x) & ((1<<(l))-1)) << (p))
 #define b_n(x)          b_x((x), 31, 1) /* 0 if relocatable */
@@ -1042,6 +1083,7 @@ static void spapr_phb_add_pci_device(sPAPRDRConnector *drc,
     if (dev->hotplugged) {
         fdt = spapr_create_pci_child_dt(phb, pdev, drc_index, drc_name,
                                         &fdt_start_offset);
+        spapr_phb_hotplug_dma_sync(phb);
     }
 
     drck->attach(drc, DEVICE(pdev),
@@ -1065,6 +1107,7 @@ static void spapr_phb_remove_pci_device_cb(DeviceState *dev, void *opaque)
      */
     pci_device_reset(PCI_DEVICE(dev));
     object_unparent(OBJECT(dev));
+    spapr_phb_hotplug_dma_sync((sPAPRPHBState *)opaque);
 }
 
 static void spapr_phb_remove_pci_device(sPAPRDRConnector *drc,
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index e32e787..4645f16 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -588,6 +588,8 @@ int spapr_dma_dt(void *fdt, int node_off, const char *propname,
                  uint32_t liobn, uint64_t window, uint32_t size);
 int spapr_tcet_dma_dt(void *fdt, int node_off, const char *propname,
                       sPAPRTCETable *tcet);
+int spapr_tce_replay(sPAPRTCETable *tcet);
+int spapr_tce_realloc_userspace(sPAPRTCETable *tcet, bool replay);
 void spapr_pci_switch_vga(bool big_endian);
 void spapr_hotplug_req_add_event(sPAPRDRConnector *drc);
 void spapr_hotplug_req_remove_event(sPAPRDRConnector *drc);
diff --git a/trace-events b/trace-events
index a93af9a..3cd8bf7 100644
--- a/trace-events
+++ b/trace-events
@@ -1300,6 +1300,8 @@ spapr_pci_rtas_ibm_query_interrupt_source_number(unsigned ioa, unsigned intr) "q
 spapr_pci_msi_write(uint64_t addr, uint64_t data, uint32_t dt_irq) "@%"PRIx64"<=%"PRIx64" IRQ %u"
 spapr_pci_lsi_set(const char *busname, int pin, uint32_t irq) "%s PIN%d IRQ %u"
 spapr_pci_msi_retry(unsigned config_addr, unsigned req_num, unsigned max_irqs) "Guest device at %x asked %u, have only %u"
+spapr_pci_dma_update(uint64_t liobn, long ret) "liobn=%"PRIx64" ret=%ld"
+spapr_pci_dma_realloc_update(uint64_t liobn, long ret) "liobn=%"PRIx64" ret=%ld"
 
 # hw/pci/pci.c
 pci_update_mappings_del(void *d, uint32_t bus, uint32_t func, uint32_t slot, int bar, uint64_t addr, uint64_t size) "d=%p %02x:%02x.%x %d,%#"PRIx64"+%#"PRIx64
-- 
2.4.0.rc3.8.gfb3e7d5

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [Qemu-devel] [PATCH qemu v8 12/14] linux headers update for DDW on SPAPR
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (10 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 13/14] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering) Alexey Kardashevskiy
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

Since the changes are not in upstream yet, no tag or branch is specified here.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 linux-headers/linux/vfio.h | 88 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 85 insertions(+), 3 deletions(-)

diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
index 0508d0b..a35418d 100644
--- a/linux-headers/linux/vfio.h
+++ b/linux-headers/linux/vfio.h
@@ -36,6 +36,8 @@
 /* Two-stage IOMMU */
 #define VFIO_TYPE1_NESTING_IOMMU	6	/* Implies v2 */
 
+#define VFIO_SPAPR_TCE_v2_IOMMU		7
+
 /*
  * The IOCTL interface is designed for extensibility by embedding the
  * structure length (argsz) and flags into structures passed between
@@ -443,6 +445,23 @@ struct vfio_iommu_type1_dma_unmap {
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
+ * The SPAPR TCE DDW info struct provides the information about
+ * the details of Dynamic DMA window capability.
+ *
+ * @pgsizes contains a page size bitmask, 4K/64K/16M are supported.
+ * @max_dynamic_windows_supported tells the maximum number of windows
+ * which the platform can create.
+ * @levels tells the maximum number of levels in multi-level IOMMU tables;
+ * this allows splitting a table into smaller chunks which reduces
+ * the amount of physically contiguous memory required for the table.
+ */
+struct vfio_iommu_spapr_tce_ddw_info {
+	__u64 pgsizes;			/* Bitmap of supported page sizes */
+	__u32 max_dynamic_windows_supported;
+	__u32 levels;
+};
+
+/*
  * The SPAPR TCE info struct provides the information about the PCI bus
  * address ranges available for DMA, these values are programmed into
  * the hardware so the guest has to know that information.
@@ -452,14 +471,17 @@ struct vfio_iommu_type1_dma_unmap {
  * addresses too so the window works as a filter rather than an offset
  * for IOVA addresses.
  *
- * A flag will need to be added if other page sizes are supported,
- * so as defined here, it is always 4k.
+ * Flags supported:
+ * - VFIO_IOMMU_SPAPR_INFO_DDW: informs the userspace that dynamic DMA windows
+ *   (DDW) support is present. @ddw is only supported when DDW is present.
  */
 struct vfio_iommu_spapr_tce_info {
 	__u32 argsz;
-	__u32 flags;			/* reserved for future use */
+	__u32 flags;
+#define VFIO_IOMMU_SPAPR_INFO_DDW	(1 << 0)	/* DDW supported */
 	__u32 dma32_window_start;	/* 32 bit window start (bytes) */
 	__u32 dma32_window_size;	/* 32 bit window size (bytes) */
+	struct vfio_iommu_spapr_tce_ddw_info ddw;
 };
 
 #define VFIO_IOMMU_SPAPR_TCE_GET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
@@ -495,6 +517,66 @@ struct vfio_eeh_pe_op {
 
 #define VFIO_EEH_PE_OP			_IO(VFIO_TYPE, VFIO_BASE + 21)
 
+/**
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 17, struct vfio_iommu_spapr_register_memory)
+ *
+ * Registers user space memory where DMA is allowed. It pins
+ * user pages and does the locked memory accounting so
+ * subsequent VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA calls
+ * get faster.
+ */
+struct vfio_iommu_spapr_register_memory {
+	__u32	argsz;
+	__u32	flags;
+	__u64	vaddr;				/* Process virtual address */
+	__u64	size;				/* Size of mapping (bytes) */
+};
+#define VFIO_IOMMU_SPAPR_REGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 17)
+
+/**
+ * VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY - _IOW(VFIO_TYPE, VFIO_BASE + 18, struct vfio_iommu_spapr_register_memory)
+ *
+ * Unregisters user space memory registered with
+ * VFIO_IOMMU_SPAPR_REGISTER_MEMORY.
+ * Uses vfio_iommu_spapr_register_memory for parameters.
+ */
+#define VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY	_IO(VFIO_TYPE, VFIO_BASE + 18)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_CREATE - _IOWR(VFIO_TYPE, VFIO_BASE + 19, struct vfio_iommu_spapr_tce_create)
+ *
+ * Creates an additional TCE table and programs it (sets a new DMA window)
+ * to every IOMMU group in the container. It receives page shift, window
+ * size and number of levels in the TCE table being created.
+ *
+ * It allocates and returns an offset on a PCI bus of the new DMA window.
+ */
+struct vfio_iommu_spapr_tce_create {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u32 page_shift;
+	__u64 window_size;
+	__u32 levels;
+	/* out */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_CREATE	_IO(VFIO_TYPE, VFIO_BASE + 19)
+
+/**
+ * VFIO_IOMMU_SPAPR_TCE_REMOVE - _IOW(VFIO_TYPE, VFIO_BASE + 20, struct vfio_iommu_spapr_tce_remove)
+ *
+ * Unprograms a TCE table from all groups in the container and destroys it.
+ * It receives a PCI bus offset as a window id.
+ */
+struct vfio_iommu_spapr_tce_remove {
+	__u32 argsz;
+	__u32 flags;
+	/* in */
+	__u64 start_addr;
+};
+#define VFIO_IOMMU_SPAPR_TCE_REMOVE	_IO(VFIO_TYPE, VFIO_BASE + 20)
+
 /* ***************************************************************** */
 
 #endif /* VFIO_H */
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 13/14] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (11 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 12/14] linux headers update for DDW on SPAPR Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW) Alexey Kardashevskiy
  2015-06-23  6:44 ` [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) David Gibson
  14 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. With this information, the host
kernel can pin them all once per user process, do locked-page
accounting (once) and not spend time doing that at map/unmap time, with
possible failures which cannot be handled nicely in some cases.

This adds a guest RAM memory listener which notifies a VFIO container
about memory which needs to be pinned/unpinned. VFIO MMIO regions
(i.e. "skip dump" regions) are skipped.
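
Before issuing the (un)register ioctl, the listener checks page alignment
and rounds the section size up to a whole number of pages, as
vfio_ram_do_region() does in the patch. A standalone sketch of that size
computation, with a local stand-in for QEMU's ROUND_UP macro:

```c
#include <stdint.h>

/* Local stand-in for QEMU's ROUND_UP: round n up to a multiple of d. */
#define ROUND_UP_SKETCH(n, d) ((((n) + (d) - 1) / (d)) * (d))

/* Size that would be placed in vfio_iommu_spapr_register_memory.size. */
static uint64_t prereg_size(uint64_t section_size, uint64_t page_size)
{
    return ROUND_UP_SKETCH(section_size, page_size);
}
```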

The feature is only enabled for SPAPR IOMMU v2. The host kernel changes
are required. Since v2 does not need/support VFIO_IOMMU_ENABLE, this does
not call it when v2 is detected and enabled.

This does not change the guest visible interface.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v7:
* in vfio_spapr_ram_listener_region_del(), do unref() after ioctl()
* s'ramlistener'register_listener'

v6:
* fixed commit log (s/guest/userspace/), added note about no guest visible
change
* fixed error checking if ram registration failed
* added alignment check for section->offset_within_region

v5:
* simplified the patch
* added trace points
* added round_up() for the size
* SPAPR IOMMU v2 used
---
 hw/vfio/common.c              | 26 +++++++++----
 hw/vfio/spapr.c               | 88 ++++++++++++++++++++++++++++++++++++++++++-
 include/hw/vfio/vfio-common.h |  5 ++-
 trace-events                  |  1 +
 4 files changed, 110 insertions(+), 10 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 369e564..9e3e0b0 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -577,14 +577,18 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
 
         container->iommu_data.type1.initialized = true;
 
-    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+    } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
+               ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
+        bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU);
+
         ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
         if (ret) {
             error_report("vfio: failed to set group container: %m");
             ret = -errno;
             goto free_container_exit;
         }
-        ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);
+        ret = ioctl(fd, VFIO_SET_IOMMU,
+                v2 ? VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU);
         if (ret) {
             error_report("vfio: failed to set iommu for container: %m");
             ret = -errno;
@@ -596,14 +600,20 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
          * when container fd is closed so we do not call it explicitly
          * in this file.
          */
-        ret = ioctl(fd, VFIO_IOMMU_ENABLE);
-        if (ret) {
-            error_report("vfio: failed to enable container: %m");
-            ret = -errno;
-            goto free_container_exit;
+        if (!v2) {
+            ret = ioctl(fd, VFIO_IOMMU_ENABLE);
+            if (ret) {
+                error_report("vfio: failed to enable container: %m");
+                ret = -errno;
+                goto free_container_exit;
+            }
         }
 
-        spapr_memory_listener_register(container);
+        ret = spapr_memory_listener_register(container, v2 ? 2 : 1);
+        if (ret) {
+            error_report("vfio: RAM memory listener initialization failed for container");
+            goto listener_release_exit;
+        }
 
     } else {
         error_report("vfio: No available IOMMU models");
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
index f03fab1..e79de4a 100644
--- a/hw/vfio/spapr.c
+++ b/hw/vfio/spapr.c
@@ -17,6 +17,9 @@
  *  along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <sys/ioctl.h>
+#include <linux/vfio.h>
+
 #include "hw/vfio/vfio-common.h"
 #include "qemu/error-report.h"
 #include "trace.h"
@@ -214,16 +217,99 @@ static const MemoryListener vfio_spapr_memory_listener = {
     .region_del = vfio_spapr_listener_region_del,
 };
 
+static void vfio_ram_do_region(VFIOContainer *container,
+                              MemoryRegionSection *section, unsigned long req)
+{
+    int ret;
+    struct vfio_iommu_spapr_register_memory reg = { .argsz = sizeof(reg) };
+
+    if (!memory_region_is_ram(section->mr) ||
+        memory_region_is_skip_dump(section->mr)) {
+        return;
+    }
+
+    if (unlikely((section->offset_within_region & (getpagesize() - 1)))) {
+        error_report("%s received unaligned region", __func__);
+        return;
+    }
+
+    reg.vaddr = (__u64) memory_region_get_ram_ptr(section->mr) +
+        section->offset_within_region;
+    reg.size = ROUND_UP(int128_get64(section->size), TARGET_PAGE_SIZE);
+
+    ret = ioctl(container->fd, req, &reg);
+    trace_vfio_ram_register(_IOC_NR(req) - VFIO_BASE, reg.vaddr, reg.size,
+            ret ? -errno : 0);
+    if (!ret) {
+        return;
+    }
+
+    /*
+     * On the initfn path, store the first error in the container so we
+     * can gracefully fail.  Runtime, there's not much we can do other
+     * than throw a hardware error.
+     */
+    if (!container->iommu_data.spapr.ram_reg_initialized) {
+        if (!container->iommu_data.spapr.ram_reg_error) {
+            container->iommu_data.spapr.ram_reg_error = -errno;
+        }
+    } else {
+        hw_error("vfio: RAM registering failed, unable to continue");
+    }
+}
+
+static void vfio_spapr_ram_listener_region_add(MemoryListener *listener,
+                                               MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.register_listener);
+    memory_region_ref(section->mr);
+    vfio_ram_do_region(container, section, VFIO_IOMMU_SPAPR_REGISTER_MEMORY);
+}
+
+static void vfio_spapr_ram_listener_region_del(MemoryListener *listener,
+                                               MemoryRegionSection *section)
+{
+    VFIOContainer *container = container_of(listener, VFIOContainer,
+                                            iommu_data.spapr.register_listener);
+    vfio_ram_do_region(container, section, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY);
+    memory_region_unref(section->mr);
+}
+
+static const MemoryListener vfio_spapr_ram_memory_listener = {
+    .region_add = vfio_spapr_ram_listener_region_add,
+    .region_del = vfio_spapr_ram_listener_region_del,
+};
+
 static void vfio_spapr_listener_release(VFIOContainer *container)
 {
     memory_listener_unregister(&container->iommu_data.spapr.listener);
 }
 
-void spapr_memory_listener_register(VFIOContainer *container)
+static void vfio_spapr_listener_release_v2(VFIOContainer *container)
+{
+    memory_listener_unregister(&container->iommu_data.spapr.register_listener);
+    vfio_spapr_listener_release(container);
+}
+
+int spapr_memory_listener_register(VFIOContainer *container, int ver)
 {
     container->iommu_data.spapr.listener = vfio_spapr_memory_listener;
     container->iommu_data.release = vfio_spapr_listener_release;
 
     memory_listener_register(&container->iommu_data.spapr.listener,
                              container->space->as);
+    if (ver < 2) {
+        return 0;
+    }
+
+    container->iommu_data.spapr.register_listener =
+            vfio_spapr_ram_memory_listener;
+    container->iommu_data.release = vfio_spapr_listener_release_v2;
+    memory_listener_register(&container->iommu_data.spapr.register_listener,
+                             &address_space_memory);
+
+    container->iommu_data.spapr.ram_reg_initialized = true;
+
+    return container->iommu_data.spapr.ram_reg_error;
 }
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index d513a2f..d94a118 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -72,6 +72,9 @@ typedef struct VFIOType1 {
 
 typedef struct VFIOSPAPR {
     MemoryListener listener;
+    MemoryListener register_listener;
+    int ram_reg_error;
+    bool ram_reg_initialized;
 } VFIOSPAPR;
 
 typedef struct VFIOContainer {
@@ -157,6 +160,6 @@ extern int vfio_dma_unmap(VFIOContainer *container,
                           hwaddr iova, ram_addr_t size);
 bool vfio_listener_skipped_section(MemoryRegionSection *section);
 
-extern void spapr_memory_listener_register(VFIOContainer *container);
+extern int spapr_memory_listener_register(VFIOContainer *container, int ver);
 
 #endif /* !HW_VFIO_VFIO_COMMON_H */
diff --git a/trace-events b/trace-events
index 3cd8bf7..3d1aeea 100644
--- a/trace-events
+++ b/trace-events
@@ -1584,6 +1584,7 @@ vfio_disconnect_container(int fd) "close container->fd=%d"
 vfio_put_group(int fd) "close group->fd=%d"
 vfio_get_device(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
 vfio_put_base_device(int fd) "close vdev->fd=%d"
+vfio_ram_register(int req, uint64_t va, uint64_t size, int ret) "req=%d va=%"PRIx64" size=%"PRIx64" ret=%d"
 
 # hw/vfio/platform.c
 vfio_platform_populate_regions(int region_index, unsigned long flag, unsigned long size, int fd, unsigned long offset) "- region %d flags = 0x%lx, size = 0x%lx, fd= %d, offset = 0x%lx"
-- 
2.4.0.rc3.8.gfb3e7d5


* [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (12 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 13/14] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering) Alexey Kardashevskiy
@ 2015-06-18 11:37 ` Alexey Kardashevskiy
  2015-06-23  6:38   ` David Gibson
  2015-06-23  6:44 ` [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) David Gibson
  14 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-18 11:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexey Kardashevskiy, Alexander Graf, Gavin Shan,
	Alex Williamson, qemu-ppc, David Gibson

This adds support for the Dynamic DMA Windows (DDW) option defined by
the SPAPR specification, which allows a guest to have additional DMA window(s).

This implements DDW for emulated and VFIO devices. As all TCE root regions
are mapped at 0 and are 64 bits long (the actual tables are child regions),
this replaces memory_region_add_subregion() with the _overlap() variant
to keep the QEMU memory API happy.

This reserves RTAS token numbers for DDW calls.

This implements helpers to interact with VFIO kernel interface.

This changes the TCE table migration descriptor to support dynamic
tables: from now on, the PHB creates as many stub TCE table objects
as it can possibly support, but not all of them may be initialized at
the time of migration because DDW may or may not have been requested by
the guest.

The "ddw" property is enabled by default on a PHB but for compatibility
the pseries-2.3 machine and older disable it.

This implements DDW for VFIO; host kernel support is required.
This adds a "levels" property to the PHB to control the number of levels
in the actual TCE table allocated by the host kernel; 0 is the default
value, telling QEMU to calculate the correct value. Current hardware
supports up to 5 levels.

Existing Linux guests try to create one additional huge DMA window
with 64K or 16MB pages and map the entire guest RAM into it. If this
succeeds, the guest switches to dma_direct_ops and never calls the TCE
hypercalls (H_PUT_TCE, ...) again. This enables VFIO devices to use the
entire RAM and not waste time on map/unmap later.
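
The huge window can only absorb all guest mappings if it covers every byte
of guest RAM; the patch therefore sizes it as pow2ceil(ram_size). QEMU
provides pow2ceil() itself; the version below is only an illustrative
stand-alone equivalent of that rounding:

```c
#include <stdint.h>

/* Illustrative stand-in for QEMU's pow2ceil(): round a value up to the
 * next power of two so the 64-bit DMA window can cover all guest RAM. */
static uint64_t pow2ceil_sketch(uint64_t value)
{
    uint64_t n = 1;

    while (n < value) {
        n <<= 1;
    }
    return n;
}
```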

This adds 4 RTAS handlers:
* ibm,query-pe-dma-window
* ibm,create-pe-dma-window
* ibm,remove-pe-dma-window
* ibm,reset-pe-dma-window
These are registered from type_init() callback.

These RTAS handlers are implemented in a separate file to avoid polluting
spapr_iommu.c with PCI.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v7:
* fixed uninitialized variables

v6:
* rework as there is no more special device for VFIO PHB

v5:
* total rework
* enabled for machines >2.3
* fixed migration
* merged rtas handlers here

v4:
* reset handler is back in generalized form

v3:
* removed reset
* windows_num is now 1 or bigger rather than 0-based value and it is only
changed in PHB code, not in RTAS
* added page mask check in create()
* added SPAPR_PCI_DDW_MAX_WINDOWS to track how many windows are already
created

v2:
* tested on hacked emulated E1000
* implemented DDW reset on the PHB reset
* spapr_pci_ddw_remove/spapr_pci_ddw_reset are public for reuse by VFIO
---
 hw/ppc/Makefile.objs        |   3 +
 hw/ppc/spapr.c              |   5 +
 hw/ppc/spapr_iommu.c        |  36 +++++-
 hw/ppc/spapr_pci.c          |  83 ++++++++++--
 hw/ppc/spapr_pci_vfio.c     |  80 ++++++++++++
 hw/ppc/spapr_rtas_ddw.c     | 300 ++++++++++++++++++++++++++++++++++++++++++++
 hw/vfio/common.c            |   2 +
 include/hw/pci-host/spapr.h |  21 ++++
 include/hw/ppc/spapr.h      |  17 ++-
 trace-events                |   4 +
 10 files changed, 536 insertions(+), 15 deletions(-)
 create mode 100644 hw/ppc/spapr_rtas_ddw.c

diff --git a/hw/ppc/Makefile.objs b/hw/ppc/Makefile.objs
index c8ab06e..0b2ff6d 100644
--- a/hw/ppc/Makefile.objs
+++ b/hw/ppc/Makefile.objs
@@ -7,6 +7,9 @@ obj-$(CONFIG_PSERIES) += spapr_pci.o spapr_rtc.o spapr_drc.o
 ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES)$(CONFIG_LINUX), yyy)
 obj-y += spapr_pci_vfio.o
 endif
+ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES), yy)
+obj-y += spapr_rtas_ddw.o
+endif
 # PowerPC 4xx boards
 obj-y += ppc405_boards.o ppc4xx_devs.o ppc405_uc.o ppc440_bamboo.o
 obj-y += ppc4xx_pci.o
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 5ca817c..d50d50b 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1860,6 +1860,11 @@ static const TypeInfo spapr_machine_info = {
             .driver   = "spapr-pci-host-bridge",\
             .property = "dynamic-reconfiguration",\
             .value    = "off",\
+        },\
+        {\
+            .driver   = TYPE_SPAPR_PCI_HOST_BRIDGE,\
+            .property = "ddw",\
+            .value    = stringify(off),\
         },
 
 #define SPAPR_COMPAT_2_2 \
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 5e6bdb4..eaa1943 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -136,6 +136,15 @@ static IOMMUTLBEntry spapr_tce_translate_iommu(MemoryRegion *iommu, hwaddr addr,
     return ret;
 }
 
+static void spapr_tce_table_pre_save(void *opaque)
+{
+    sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
+
+    tcet->migtable = tcet->table;
+}
+
+static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel);
+
 static int spapr_tce_table_post_load(void *opaque, int version_id)
 {
     sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
@@ -144,22 +153,43 @@ static int spapr_tce_table_post_load(void *opaque, int version_id)
         spapr_vio_set_bypass(tcet->vdev, tcet->bypass);
     }
 
+    if (!tcet->migtable) {
+        return 0;
+    }
+
+    if (tcet->enabled) {
+        if (!tcet->table) {
+            tcet->enabled = false;
+            /* VFIO does not migrate so pass vfio_accel == false */
+            spapr_tce_table_do_enable(tcet, false);
+        }
+        memcpy(tcet->table, tcet->migtable,
+               tcet->nb_table * sizeof(tcet->table[0]));
+        free(tcet->migtable);
+        tcet->migtable = NULL;
+    }
+
     return 0;
 }
 
 static const VMStateDescription vmstate_spapr_tce_table = {
     .name = "spapr_iommu",
-    .version_id = 2,
+    .version_id = 3,
     .minimum_version_id = 2,
+    .pre_save = spapr_tce_table_pre_save,
     .post_load = spapr_tce_table_post_load,
     .fields      = (VMStateField []) {
         /* Sanity check */
         VMSTATE_UINT32_EQUAL(liobn, sPAPRTCETable),
-        VMSTATE_UINT32_EQUAL(nb_table, sPAPRTCETable),
 
         /* IOMMU state */
+        VMSTATE_BOOL_V(enabled, sPAPRTCETable, 3),
+        VMSTATE_UINT64_V(bus_offset, sPAPRTCETable, 3),
+        VMSTATE_UINT32_V(page_shift, sPAPRTCETable, 3),
+        VMSTATE_UINT32(nb_table, sPAPRTCETable),
         VMSTATE_BOOL(bypass, sPAPRTCETable),
-        VMSTATE_VARRAY_UINT32(table, sPAPRTCETable, nb_table, 0, vmstate_info_uint64, uint64_t),
+        VMSTATE_VARRAY_UINT32_ALLOC(migtable, sPAPRTCETable, nb_table, 0,
+                                    vmstate_info_uint64, uint64_t),
 
         VMSTATE_END_OF_LIST()
     },
diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index 1f980fa..ab2d650 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -719,6 +719,8 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
 static int spapr_phb_dma_update(Object *child, void *opaque)
 {
     int ret = 0;
+    uint64_t bus_offset = 0;
+    sPAPRPHBState *sphb = opaque;
     sPAPRTCETable *tcet = (sPAPRTCETable *)
         object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
 
@@ -726,6 +728,17 @@ static int spapr_phb_dma_update(Object *child, void *opaque)
         return 0;
     }
 
+    ret = spapr_phb_vfio_dma_init_window(sphb,
+                                         tcet->page_shift,
+                                         tcet->nb_table << tcet->page_shift,
+                                         &bus_offset);
+    if (ret) {
+        return ret;
+    }
+    if (bus_offset != tcet->bus_offset) {
+        return -EFAULT;
+    }
+
     if (tcet->fd >= 0) {
         /*
          * We got first vfio-pci device on accelerated table.
@@ -749,6 +762,9 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
 
     sphb->dma32_window_start = 0;
     sphb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
+    sphb->windows_supported = SPAPR_PCI_DMA_MAX_WINDOWS;
+    sphb->page_size_mask = (1ULL << 12) | (1ULL << 16) | (1ULL << 24);
+    sphb->dma64_window_size = pow2ceil(ram_size);
 
     ret = spapr_phb_vfio_dma_capabilities_update(sphb);
     sphb->has_vfio = (ret == 0);
@@ -756,12 +772,31 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
     return 0;
 }
 
-static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
-                                     uint32_t liobn, uint32_t page_shift,
-                                     uint64_t window_size)
+int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
+                              uint32_t liobn, uint32_t page_shift,
+                              uint64_t window_size)
 {
     uint64_t bus_offset = sphb->dma32_window_start;
     sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
+    int ret;
+
+    if (SPAPR_PCI_DMA_WINDOW_NUM(liobn) && !sphb->ddw_enabled) {
+        return -1;
+    }
+
+    if (sphb->ddw_enabled) {
+        if (sphb->has_vfio) {
+            ret = spapr_phb_vfio_dma_init_window(sphb,
+                                                 page_shift, window_size,
+                                                 &bus_offset);
+            if (ret) {
+                return ret;
+            }
+        } else if (SPAPR_PCI_DMA_WINDOW_NUM(liobn)) {
+            /* No VFIO so we choose a huge window address */
+            bus_offset = SPAPR_PCI_DMA64_START;
+        }
+    }
 
     spapr_tce_table_enable(tcet, bus_offset, page_shift,
                            window_size >> page_shift,
@@ -773,9 +808,14 @@ static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
 int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
                                 sPAPRTCETable *tcet)
 {
+    int ret = 0;
+
+    if (sphb->has_vfio && sphb->ddw_enabled) {
+        ret = spapr_phb_vfio_dma_remove_window(sphb, tcet);
+    }
     spapr_tce_table_disable(tcet);
 
-    return 0;
+    return ret;
 }
 
 static int spapr_phb_disable_dma_windows(Object *child, void *opaque)
@@ -811,7 +851,7 @@ static int spapr_phb_hotplug_dma_sync(sPAPRPHBState *sphb)
     spapr_phb_dma_capabilities_update(sphb);
 
     if (!had_vfio && sphb->has_vfio) {
-        object_child_foreach(OBJECT(sphb), spapr_phb_dma_update, NULL);
+        object_child_foreach(OBJECT(sphb), spapr_phb_dma_update, sphb);
     }
 
     return ret;
@@ -1357,15 +1397,17 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
         }
     }
 
-    tcet = spapr_tce_new_table(DEVICE(sphb), sphb->dma_liobn);
-    if (!tcet) {
-            error_setg(errp, "failed to create TCE table");
+    for (i = 0; i < SPAPR_PCI_DMA_MAX_WINDOWS; ++i) {
+        tcet = spapr_tce_new_table(DEVICE(sphb),
+                                   SPAPR_PCI_LIOBN(sphb->index, i));
+        if (!tcet) {
+            error_setg(errp, "spapr_tce_new_table failed");
             return;
+        }
+        memory_region_add_subregion_overlap(&sphb->iommu_root, 0,
+                                            spapr_tce_get_iommu(tcet), 0);
     }
 
-    memory_region_add_subregion(&sphb->iommu_root, 0,
-                                spapr_tce_get_iommu(tcet));
-
     sphb->msi = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, g_free);
 }
 
@@ -1400,6 +1442,8 @@ static Property spapr_phb_properties[] = {
                        SPAPR_PCI_IO_WIN_SIZE),
     DEFINE_PROP_BOOL("dynamic-reconfiguration", sPAPRPHBState, dr_enabled,
                      true),
+    DEFINE_PROP_BOOL("ddw", sPAPRPHBState, ddw_enabled, true),
+    DEFINE_PROP_UINT8("levels", sPAPRPHBState, levels, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
@@ -1580,6 +1624,15 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
     uint32_t interrupt_map_mask[] = {
         cpu_to_be32(b_ddddd(-1)|b_fff(0)), 0x0, 0x0, cpu_to_be32(-1)};
     uint32_t interrupt_map[PCI_SLOT_MAX * PCI_NUM_PINS][7];
+    uint32_t ddw_applicable[] = {
+        cpu_to_be32(RTAS_IBM_QUERY_PE_DMA_WINDOW),
+        cpu_to_be32(RTAS_IBM_CREATE_PE_DMA_WINDOW),
+        cpu_to_be32(RTAS_IBM_REMOVE_PE_DMA_WINDOW)
+    };
+    uint32_t ddw_extensions[] = {
+        cpu_to_be32(1),
+        cpu_to_be32(RTAS_IBM_RESET_PE_DMA_WINDOW)
+    };
     sPAPRTCETable *tcet;
 
     /* Start populating the FDT */
@@ -1602,6 +1655,14 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
     _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pci-config-space-type", 0x1));
     _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pe-total-#msi", XICS_IRQS));
 
+    /* Dynamic DMA window */
+    if (phb->ddw_enabled) {
+        _FDT(fdt_setprop(fdt, bus_off, "ibm,ddw-applicable", &ddw_applicable,
+                         sizeof(ddw_applicable)));
+        _FDT(fdt_setprop(fdt, bus_off, "ibm,ddw-extensions",
+                         &ddw_extensions, sizeof(ddw_extensions)));
+    }
+
     /* Build the interrupt-map, this must match what is done
      * in pci_spapr_map_irq
      */
diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
index 6df9a23..5102c72 100644
--- a/hw/ppc/spapr_pci_vfio.c
+++ b/hw/ppc/spapr_pci_vfio.c
@@ -41,6 +41,86 @@ int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
     sphb->dma32_window_start = info.dma32_window_start;
     sphb->dma32_window_size = info.dma32_window_size;
 
+    if (sphb->ddw_enabled && (info.flags & VFIO_IOMMU_SPAPR_INFO_DDW)) {
+        sphb->windows_supported = info.ddw.max_dynamic_windows_supported;
+        sphb->page_size_mask = info.ddw.pgsizes;
+        sphb->dma64_window_size = pow2ceil(ram_size);
+        sphb->max_levels = info.ddw.levels;
+    } else {
+        /* If VFIO_IOMMU_SPAPR_INFO_DDW is not set, disable DDW */
+        sphb->ddw_enabled = false;
+    }
+
+    return ret;
+}
+
+static int spapr_phb_vfio_levels(uint32_t entries)
+{
+    unsigned pages = (entries * sizeof(uint64_t)) / getpagesize();
+    int levels;
+
+    if (pages <= 64) {
+        levels = 1;
+    } else if (pages <= 64*64) {
+        levels = 2;
+    } else if (pages <= 64*64*64) {
+        levels = 3;
+    } else {
+        levels = 4;
+    }
+
+    return levels;
+}
+
+int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
+                                   uint32_t page_shift,
+                                   uint64_t window_size,
+                                   uint64_t *bus_offset)
+{
+    int ret;
+    struct vfio_iommu_spapr_tce_create create = {
+        .argsz = sizeof(create),
+        .page_shift = page_shift,
+        .window_size = window_size,
+        .levels = sphb->levels,
+        .start_addr = 0,
+    };
+
+    /*
+     * Dynamic windows are supported, which means there is no
+     * pre-created window and we have to create one.
+     */
+    if (!create.levels) {
+        create.levels = spapr_phb_vfio_levels(create.window_size >>
+                                              page_shift);
+    }
+
+    if (create.levels > sphb->max_levels) {
+        return -EINVAL;
+    }
+
+    ret = vfio_container_ioctl(&sphb->iommu_as,
+                               VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
+    if (ret) {
+        return ret;
+    }
+    *bus_offset = create.start_addr;
+
+    return 0;
+}
+
+int spapr_phb_vfio_dma_remove_window(sPAPRPHBState *sphb,
+                                     sPAPRTCETable *tcet)
+{
+    struct vfio_iommu_spapr_tce_remove remove = {
+        .argsz = sizeof(remove),
+        .start_addr = tcet->bus_offset
+    };
+    int ret;
+
+    ret = vfio_container_ioctl(&sphb->iommu_as,
+                               VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
+
     return ret;
 }
 
diff --git a/hw/ppc/spapr_rtas_ddw.c b/hw/ppc/spapr_rtas_ddw.c
new file mode 100644
index 0000000..7539c6a
--- /dev/null
+++ b/hw/ppc/spapr_rtas_ddw.c
@@ -0,0 +1,300 @@
+/*
+ * QEMU sPAPR Dynamic DMA windows support
+ *
+ * Copyright (c) 2014 Alexey Kardashevskiy, IBM Corporation.
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License,
+ *  or (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/error-report.h"
+#include "hw/ppc/spapr.h"
+#include "hw/pci-host/spapr.h"
+#include "trace.h"
+
+static int spapr_phb_get_active_win_num_cb(Object *child, void *opaque)
+{
+    sPAPRTCETable *tcet;
+
+    tcet = (sPAPRTCETable *) object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
+    if (tcet && tcet->enabled) {
+        ++*(unsigned *)opaque;
+    }
+    return 0;
+}
+
+static unsigned spapr_phb_get_active_win_num(sPAPRPHBState *sphb)
+{
+    unsigned ret = 0;
+
+    object_child_foreach(OBJECT(sphb), spapr_phb_get_active_win_num_cb, &ret);
+
+    return ret;
+}
+
+static int spapr_phb_get_free_liobn_cb(Object *child, void *opaque)
+{
+    sPAPRTCETable *tcet;
+
+    tcet = (sPAPRTCETable *) object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
+    if (tcet && !tcet->enabled) {
+        *(uint32_t *)opaque = tcet->liobn;
+        return 1;
+    }
+    return 0;
+}
+
+static unsigned spapr_phb_get_free_liobn(sPAPRPHBState *sphb)
+{
+    uint32_t liobn = 0;
+
+    object_child_foreach(OBJECT(sphb), spapr_phb_get_free_liobn_cb, &liobn);
+
+    return liobn;
+}
+
+static uint32_t spapr_query_mask(struct ppc_one_seg_page_size *sps,
+                                 uint64_t page_mask)
+{
+    int i, j;
+    uint32_t mask = 0;
+    const struct { int shift; uint32_t mask; } masks[] = {
+        { 12, RTAS_DDW_PGSIZE_4K },
+        { 16, RTAS_DDW_PGSIZE_64K },
+        { 24, RTAS_DDW_PGSIZE_16M },
+        { 25, RTAS_DDW_PGSIZE_32M },
+        { 26, RTAS_DDW_PGSIZE_64M },
+        { 27, RTAS_DDW_PGSIZE_128M },
+        { 28, RTAS_DDW_PGSIZE_256M },
+        { 34, RTAS_DDW_PGSIZE_16G },
+    };
+
+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
+        for (j = 0; j < ARRAY_SIZE(masks); ++j) {
+            if ((sps[i].page_shift == masks[j].shift) &&
+                    (page_mask & (1ULL << masks[j].shift))) {
+                mask |= masks[j].mask;
+            }
+        }
+    }
+
+    return mask;
+}
+
+static void rtas_ibm_query_pe_dma_window(PowerPCCPU *cpu,
+                                         sPAPRMachineState *spapr,
+                                         uint32_t token, uint32_t nargs,
+                                         target_ulong args,
+                                         uint32_t nret, target_ulong rets)
+{
+    CPUPPCState *env = &cpu->env;
+    sPAPRPHBState *sphb;
+    uint64_t buid;
+    uint32_t avail, addr, pgmask = 0;
+    unsigned current;
+
+    if ((nargs != 3) || (nret != 5)) {
+        goto param_error_exit;
+    }
+
+    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
+    addr = rtas_ld(args, 0);
+    sphb = spapr_pci_find_phb(spapr, buid);
+    if (!sphb || !sphb->ddw_enabled) {
+        goto param_error_exit;
+    }
+
+    current = spapr_phb_get_active_win_num(sphb);
+    avail = (sphb->windows_supported > current) ?
+            (sphb->windows_supported - current) : 0;
+
+    /* Work out supported page masks */
+    pgmask = spapr_query_mask(env->sps.sps, sphb->page_size_mask);
+
+    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
+    rtas_st(rets, 1, avail);
+
+    /*
+     * This is "Largest contiguous block of TCEs allocated specifically
+     * for (that is, are reserved for) this PE".
+     * Return the maximum number, as if all of RAM were mapped with 4K pages.
+     */
+    rtas_st(rets, 2, sphb->dma64_window_size >> SPAPR_TCE_PAGE_SHIFT);
+    rtas_st(rets, 3, pgmask);
+    rtas_st(rets, 4, 0); /* DMA migration mask, not supported */
+
+    trace_spapr_iommu_ddw_query(buid, addr, avail, sphb->dma64_window_size,
+                                pgmask);
+    return;
+
+param_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
+}
+
+static void rtas_ibm_create_pe_dma_window(PowerPCCPU *cpu,
+                                          sPAPRMachineState *spapr,
+                                          uint32_t token, uint32_t nargs,
+                                          target_ulong args,
+                                          uint32_t nret, target_ulong rets)
+{
+    sPAPRPHBState *sphb;
+    sPAPRTCETable *tcet = NULL;
+    uint32_t addr, page_shift, window_shift, liobn;
+    uint64_t buid;
+    long ret;
+
+    if ((nargs != 5) || (nret != 4)) {
+        goto param_error_exit;
+    }
+
+    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
+    addr = rtas_ld(args, 0);
+    sphb = spapr_pci_find_phb(spapr, buid);
+    if (!sphb || !sphb->ddw_enabled) {
+        goto param_error_exit;
+    }
+
+    page_shift = rtas_ld(args, 3);
+    window_shift = rtas_ld(args, 4);
+    liobn = spapr_phb_get_free_liobn(sphb);
+
+    if (!liobn || !(sphb->page_size_mask & (1ULL << page_shift))) {
+        goto hw_error_exit;
+    }
+
+    ret = spapr_phb_dma_init_window(sphb, liobn, page_shift,
+                                    1ULL << window_shift);
+    tcet = spapr_tce_find_by_liobn(liobn);
+    trace_spapr_iommu_ddw_create(buid, addr, 1ULL << page_shift,
+                                 1ULL << window_shift,
+                                 tcet ? tcet->bus_offset : 0xbaadf00d,
+                                 liobn, ret);
+    if (ret || !tcet) {
+        goto hw_error_exit;
+    }
+
+    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
+    rtas_st(rets, 1, liobn);
+    rtas_st(rets, 2, tcet->bus_offset >> 32);
+    rtas_st(rets, 3, tcet->bus_offset & ((uint32_t) -1));
+
+    return;
+
+hw_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
+    return;
+
+param_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
+}
+
+static void rtas_ibm_remove_pe_dma_window(PowerPCCPU *cpu,
+                                          sPAPRMachineState *spapr,
+                                          uint32_t token, uint32_t nargs,
+                                          target_ulong args,
+                                          uint32_t nret, target_ulong rets)
+{
+    sPAPRPHBState *sphb;
+    sPAPRTCETable *tcet;
+    uint32_t liobn;
+    long ret;
+
+    if ((nargs != 1) || (nret != 1)) {
+        goto param_error_exit;
+    }
+
+    liobn = rtas_ld(args, 0);
+    tcet = spapr_tce_find_by_liobn(liobn);
+    if (!tcet) {
+        goto param_error_exit;
+    }
+
+    sphb = SPAPR_PCI_HOST_BRIDGE(OBJECT(tcet)->parent);
+    if (!sphb || !sphb->ddw_enabled) {
+        goto param_error_exit;
+    }
+
+    ret = spapr_phb_dma_remove_window(sphb, tcet);
+    trace_spapr_iommu_ddw_remove(liobn, ret);
+    if (ret) {
+        goto hw_error_exit;
+    }
+
+    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
+    return;
+
+hw_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
+    return;
+
+param_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
+}
+
+static void rtas_ibm_reset_pe_dma_window(PowerPCCPU *cpu,
+                                         sPAPRMachineState *spapr,
+                                         uint32_t token, uint32_t nargs,
+                                         target_ulong args,
+                                         uint32_t nret, target_ulong rets)
+{
+    sPAPRPHBState *sphb;
+    uint64_t buid;
+    uint32_t addr;
+    long ret;
+
+    if ((nargs != 3) || (nret != 1)) {
+        goto param_error_exit;
+    }
+
+    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
+    addr = rtas_ld(args, 0);
+    sphb = spapr_pci_find_phb(spapr, buid);
+    if (!sphb || !sphb->ddw_enabled) {
+        goto param_error_exit;
+    }
+
+    ret = spapr_phb_dma_reset(sphb);
+    trace_spapr_iommu_ddw_reset(buid, addr, ret);
+    if (ret) {
+        goto hw_error_exit;
+    }
+
+    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
+
+    return;
+
+hw_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
+    return;
+
+param_error_exit:
+    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
+}
+
+static void spapr_rtas_ddw_init(void)
+{
+    spapr_rtas_register(RTAS_IBM_QUERY_PE_DMA_WINDOW,
+                        "ibm,query-pe-dma-window",
+                        rtas_ibm_query_pe_dma_window);
+    spapr_rtas_register(RTAS_IBM_CREATE_PE_DMA_WINDOW,
+                        "ibm,create-pe-dma-window",
+                        rtas_ibm_create_pe_dma_window);
+    spapr_rtas_register(RTAS_IBM_REMOVE_PE_DMA_WINDOW,
+                        "ibm,remove-pe-dma-window",
+                        rtas_ibm_remove_pe_dma_window);
+    spapr_rtas_register(RTAS_IBM_RESET_PE_DMA_WINDOW,
+                        "ibm,reset-pe-dma-window",
+                        rtas_ibm_reset_pe_dma_window);
+}
+
+type_init(spapr_rtas_ddw_init)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 9e3e0b0..f915127 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -830,6 +830,8 @@ int vfio_container_ioctl(AddressSpace *as,
     case VFIO_CHECK_EXTENSION:
     case VFIO_IOMMU_SPAPR_TCE_GET_INFO:
     case VFIO_EEH_PE_OP:
+    case VFIO_IOMMU_SPAPR_TCE_CREATE:
+    case VFIO_IOMMU_SPAPR_TCE_REMOVE:
         break;
     default:
         /* Return an error on unknown requests */
diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index b2a8fc3..1313805 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -88,6 +88,12 @@ struct sPAPRPHBState {
     uint32_t dma32_window_size;
     bool has_vfio;
     int32_t iommugroupid; /* obsolete */
+    bool ddw_enabled;
+    uint32_t windows_supported;
+    uint64_t page_size_mask;
+    uint64_t dma64_window_size;
+    uint8_t max_levels;
+    uint8_t levels;
 
     QLIST_ENTRY(sPAPRPHBState) list;
 };
@@ -110,6 +116,12 @@ struct sPAPRPHBState {
 
 #define SPAPR_PCI_DMA32_SIZE         0x40000000
 
+/* Default 64bit dynamic window offset */
+#define SPAPR_PCI_DMA64_START        0x8000000000000000ULL
+
+/* Maximum allowed number of DMA windows for emulated PHB */
+#define SPAPR_PCI_DMA_MAX_WINDOWS    2
+
 static inline qemu_irq spapr_phb_lsi_qirq(struct sPAPRPHBState *phb, int pin)
 {
     sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
@@ -130,11 +142,20 @@ void spapr_pci_rtas_init(void);
 sPAPRPHBState *spapr_pci_find_phb(sPAPRMachineState *spapr, uint64_t buid);
 PCIDevice *spapr_pci_find_dev(sPAPRMachineState *spapr, uint64_t buid,
                               uint32_t config_addr);
+int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
+                              uint32_t liobn, uint32_t page_shift,
+                              uint64_t window_size);
 int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
                                 sPAPRTCETable *tcet);
 int spapr_phb_dma_reset(sPAPRPHBState *sphb);
 
 int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb);
+int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
+                                   uint32_t page_shift,
+                                   uint64_t window_size,
+                                   uint64_t *bus_offset);
+int spapr_phb_vfio_dma_remove_window(sPAPRPHBState *sphb,
+                                     sPAPRTCETable *tcet);
 int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
                                   PCIDevice *pdev, int option);
 int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 4645f16..5a58785 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -416,6 +416,16 @@ int spapr_allocate_irq_block(int num, bool lsi, bool msi);
 #define RTAS_OUT_NOT_SUPPORTED      -3
 #define RTAS_OUT_NOT_AUTHORIZED     -9002
 
+/* DDW pagesize mask values from ibm,query-pe-dma-window */
+#define RTAS_DDW_PGSIZE_4K       0x01
+#define RTAS_DDW_PGSIZE_64K      0x02
+#define RTAS_DDW_PGSIZE_16M      0x04
+#define RTAS_DDW_PGSIZE_32M      0x08
+#define RTAS_DDW_PGSIZE_64M      0x10
+#define RTAS_DDW_PGSIZE_128M     0x20
+#define RTAS_DDW_PGSIZE_256M     0x40
+#define RTAS_DDW_PGSIZE_16G      0x80
+
 /* RTAS tokens */
 #define RTAS_TOKEN_BASE      0x2000
 
@@ -457,8 +467,12 @@ int spapr_allocate_irq_block(int num, bool lsi, bool msi);
 #define RTAS_IBM_SET_SLOT_RESET                 (RTAS_TOKEN_BASE + 0x23)
 #define RTAS_IBM_CONFIGURE_PE                   (RTAS_TOKEN_BASE + 0x24)
 #define RTAS_IBM_SLOT_ERROR_DETAIL              (RTAS_TOKEN_BASE + 0x25)
+#define RTAS_IBM_QUERY_PE_DMA_WINDOW            (RTAS_TOKEN_BASE + 0x26)
+#define RTAS_IBM_CREATE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x27)
+#define RTAS_IBM_REMOVE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x28)
+#define RTAS_IBM_RESET_PE_DMA_WINDOW            (RTAS_TOKEN_BASE + 0x29)
 
-#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x26)
+#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x2A)
 
 /* RTAS ibm,get-system-parameter token values */
 #define RTAS_SYSPARM_SPLPAR_CHARACTERISTICS      20
@@ -558,6 +572,7 @@ struct sPAPRTCETable {
     uint64_t bus_offset;
     uint32_t page_shift;
     uint64_t *table;
+    uint64_t *migtable;
     bool bypass;
     int fd;
     MemoryRegion root, iommu;
diff --git a/trace-events b/trace-events
index 3d1aeea..edd3164 100644
--- a/trace-events
+++ b/trace-events
@@ -1365,6 +1365,10 @@ spapr_iommu_pci_indirect(uint64_t liobn, uint64_t ioba, uint64_t tce, uint64_t i
 spapr_iommu_pci_stuff(uint64_t liobn, uint64_t ioba, uint64_t tce_value, uint64_t npages, uint64_t ret) "liobn=%"PRIx64" ioba=0x%"PRIx64" tcevalue=0x%"PRIx64" npages=%"PRId64" ret=%"PRId64
 spapr_iommu_xlate(uint64_t liobn, uint64_t ioba, uint64_t tce, unsigned perm, unsigned pgsize) "liobn=%"PRIx64" 0x%"PRIx64" -> 0x%"PRIx64" perm=%u mask=%x"
 spapr_iommu_alloc_table(uint64_t liobn, void *table, int fd) "liobn=%"PRIx64" table=%p fd=%d"
+spapr_iommu_ddw_query(uint64_t buid, uint32_t cfgaddr, unsigned wa, uint64_t win_size, uint32_t pgmask) "buid=%"PRIx64" addr=%"PRIx32", %u windows available, max window size=%"PRIx64", mask=%"PRIx32
+spapr_iommu_ddw_create(uint64_t buid, uint32_t cfgaddr, unsigned long long pg_size, unsigned long long req_size, uint64_t start, uint32_t liobn, long ret) "buid=%"PRIx64" addr=%"PRIx32", page size=0x%llx, requested=0x%llx, start addr=%"PRIx64", liobn=%"PRIx32", ret = %ld"
+spapr_iommu_ddw_remove(uint32_t liobn, long ret) "liobn=%"PRIx32", ret = %ld"
+spapr_iommu_ddw_reset(uint64_t buid, uint32_t cfgaddr, long ret) "buid=%"PRIx64" addr=%"PRIx32", ret = %ld"
 
 # hw/ppc/ppc.c
 ppc_tb_adjust(uint64_t offs1, uint64_t offs2, int64_t diff, int64_t seconds) "adjusted from 0x%"PRIx64" to 0x%"PRIx64", diff %"PRId64" (%"PRId64"s)"
-- 
2.4.0.rc3.8.gfb3e7d5
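
As a side note for readers following the patch: the table sizing rule in
spapr_phb_vfio_levels() above can be tried standalone. This is a minimal
sketch, not the QEMU code itself — the page size is passed explicitly
instead of coming from getpagesize(), and each TCE entry is assumed to be
8 bytes (uint64_t), matching the patch:

```c
#include <assert.h>
#include <stdint.h>

/* How many levels of a multi-level TCE table are needed for `entries`
 * TCEs, given that each level can reference up to 64 pages. Mirrors the
 * thresholds in spapr_phb_vfio_levels() from the patch. */
static int tce_table_levels(uint64_t entries, unsigned page_size)
{
    uint64_t pages = (entries * sizeof(uint64_t)) / page_size;

    if (pages <= 64) {
        return 1;
    } else if (pages <= 64 * 64) {
        return 2;
    } else if (pages <= 64 * 64 * 64) {
        return 3;
    }
    return 4;
}

/* With 64K host pages, one page holds 8192 TCEs, so a single-level table
 * covers up to 64 * 8192 = 524288 entries; one entry more spills into a
 * second level. */
```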

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file Alexey Kardashevskiy
@ 2015-06-18 21:10   ` Alex Williamson
  2015-06-19  0:16     ` Alexey Kardashevskiy
  2015-06-23  5:49   ` David Gibson
  1 sibling, 1 reply; 31+ messages in thread
From: Alex Williamson @ 2015-06-18 21:10 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: David Gibson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, 2015-06-18 at 21:37 +1000, Alexey Kardashevskiy wrote:
> This moves SPAPR bits to a separate file to avoid pollution of x86 code.
> 
> This enables spapr-vfio on CONFIG_SOFTMMU (not CONFIG_PSERIES) as
> the config options are only visible in makefiles and not in the source code
> so there is no obvious way of implementing stubs if hw/vfio/spapr.c is
> not compiled.
> 
> This is a mechanical patch.


Why does spapr code always need to be pulled out of common code and
private interfaces exposed to be called in ad hoc ways?  Doesn't that
say something about a lack of design in the implementation?  Why not
also pull out type1 support and perhaps create an interface between vfio
common code and vfio iommu code?
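
One way to picture the kind of interface being suggested here — a per-IOMMU
ops table that the common code dispatches through, rather than calling
backend-specific helpers directly. This is purely a sketch with invented
names (VFIOIOMMUOps, setup/release callbacks), not an existing QEMU API;
only the iommu type values (1 for type1, 7 for sPAPR TCE) come from the
VFIO uapi:

```c
#include <assert.h>
#include <string.h>

typedef struct VFIOContainer VFIOContainer;

/* Hypothetical backend interface: common code only sees these callbacks. */
typedef struct VFIOIOMMUOps {
    const char *name;
    int (*setup)(VFIOContainer *container);   /* register listeners etc. */
    void (*release)(VFIOContainer *container);
} VFIOIOMMUOps;

struct VFIOContainer {
    const VFIOIOMMUOps *ops;
};

static int type1_setup(VFIOContainer *c) { (void)c; return 0; }
static void type1_release(VFIOContainer *c) { (void)c; }
static const VFIOIOMMUOps type1_ops = { "type1", type1_setup, type1_release };

static int spapr_setup(VFIOContainer *c) { (void)c; return 0; }
static void spapr_release(VFIOContainer *c) { (void)c; }
static const VFIOIOMMUOps spapr_ops = { "spapr-tce", spapr_setup,
                                        spapr_release };

/* Common code picks the backend once at container setup time and stays
 * IOMMU-agnostic afterwards. */
static int vfio_connect_container(VFIOContainer *c, int iommu_type)
{
    c->ops = (iommu_type == 7 /* VFIO_SPAPR_TCE_IOMMU */) ? &spapr_ops
                                                          : &type1_ops;
    return c->ops->setup(c);
}
```

With such a table, moving spapr code into its own file would not require
exporting vfio_dma_map()/vfio_dma_unmap() and friends as public symbols.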


> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/vfio/Makefile.objs         |   1 +
>  hw/vfio/common.c              | 137 ++-----------------------
>  hw/vfio/spapr.c               | 229 ++++++++++++++++++++++++++++++++++++++++++
>  include/hw/vfio/vfio-common.h |  13 +++
>  4 files changed, 249 insertions(+), 131 deletions(-)
>  create mode 100644 hw/vfio/spapr.c
> 
> diff --git a/hw/vfio/Makefile.objs b/hw/vfio/Makefile.objs
> index d540c9d..0dd99ed 100644
> --- a/hw/vfio/Makefile.objs
> +++ b/hw/vfio/Makefile.objs
> @@ -3,4 +3,5 @@ obj-$(CONFIG_SOFTMMU) += common.o
>  obj-$(CONFIG_PCI) += pci.o
>  obj-$(CONFIG_SOFTMMU) += platform.o
>  obj-$(CONFIG_SOFTMMU) += calxeda-xgmac.o
> +obj-$(CONFIG_SOFTMMU) += spapr.o
>  endif
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index b1045da..3e4c685 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -190,8 +190,8 @@ const MemoryRegionOps vfio_region_ops = {
>  /*
>   * DMA - Mapping and unmapping for the "type1" IOMMU interface used on x86
>   */
> -static int vfio_dma_unmap(VFIOContainer *container,
> -                          hwaddr iova, ram_addr_t size)
> +int vfio_dma_unmap(VFIOContainer *container,
> +                   hwaddr iova, ram_addr_t size)
>  {
>      struct vfio_iommu_type1_dma_unmap unmap = {
>          .argsz = sizeof(unmap),
> @@ -208,8 +208,8 @@ static int vfio_dma_unmap(VFIOContainer *container,
>      return 0;
>  }
>  
> -static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> -                        ram_addr_t size, void *vaddr, bool readonly)
> +int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> +                 ram_addr_t size, void *vaddr, bool readonly)
>  {
>      struct vfio_iommu_type1_dma_map map = {
>          .argsz = sizeof(map),
> @@ -238,7 +238,7 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
>      return -errno;
>  }
>  
> -static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> +bool vfio_listener_skipped_section(MemoryRegionSection *section)
>  {
>      return (!memory_region_is_ram(section->mr) &&
>              !memory_region_is_iommu(section->mr)) ||
> @@ -251,67 +251,6 @@ static bool vfio_listener_skipped_section(MemoryRegionSection *section)
>             section->offset_within_address_space & (1ULL << 63);
>  }
>  
> -static void vfio_iommu_map_notify(Notifier *n, void *data)
> -{
> -    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
> -    VFIOContainer *container = giommu->container;
> -    IOMMUTLBEntry *iotlb = data;
> -    MemoryRegion *mr;
> -    hwaddr xlat;
> -    hwaddr len = iotlb->addr_mask + 1;
> -    void *vaddr;
> -    int ret;
> -
> -    trace_vfio_iommu_map_notify(iotlb->iova,
> -                                iotlb->iova + iotlb->addr_mask);
> -
> -    /*
> -     * The IOMMU TLB entry we have just covers translation through
> -     * this IOMMU to its immediate target.  We need to translate
> -     * it the rest of the way through to memory.
> -     */
> -    rcu_read_lock();
> -    mr = address_space_translate(&address_space_memory,
> -                                 iotlb->translated_addr,
> -                                 &xlat, &len, iotlb->perm & IOMMU_WO);
> -    if (!memory_region_is_ram(mr)) {
> -        error_report("iommu map to non memory area %"HWADDR_PRIx"",
> -                     xlat);
> -        goto out;
> -    }
> -    /*
> -     * Translation truncates length to the IOMMU page size,
> -     * check that it did not truncate too much.
> -     */
> -    if (len & iotlb->addr_mask) {
> -        error_report("iommu has granularity incompatible with target AS");
> -        goto out;
> -    }
> -
> -    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
> -        vaddr = memory_region_get_ram_ptr(mr) + xlat;
> -        ret = vfio_dma_map(container, iotlb->iova,
> -                           iotlb->addr_mask + 1, vaddr,
> -                           !(iotlb->perm & IOMMU_WO) || mr->readonly);
> -        if (ret) {
> -            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
> -                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
> -                         container, iotlb->iova,
> -                         iotlb->addr_mask + 1, vaddr, ret);
> -        }
> -    } else {
> -        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
> -        if (ret) {
> -            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> -                         "0x%"HWADDR_PRIx") = %d (%m)",
> -                         container, iotlb->iova,
> -                         iotlb->addr_mask + 1, ret);
> -        }
> -    }
> -out:
> -    rcu_read_unlock();
> -}
> -
>  static void vfio_listener_region_add(MemoryListener *listener,
>                                       MemoryRegionSection *section)
>  {
> @@ -347,45 +286,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
>  
>      memory_region_ref(section->mr);
>  
> -    if (memory_region_is_iommu(section->mr)) {
> -        VFIOGuestIOMMU *giommu;
> -
> -        trace_vfio_listener_region_add_iommu(iova,
> -                    int128_get64(int128_sub(llend, int128_one())));
> -        /*
> -         * FIXME: We should do some checking to see if the
> -         * capabilities of the host VFIO IOMMU are adequate to model
> -         * the guest IOMMU
> -         *
> -         * FIXME: For VFIO iommu types which have KVM acceleration to
> -         * avoid bouncing all map/unmaps through qemu this way, this
> -         * would be the right place to wire that up (tell the KVM
> -         * device emulation the VFIO iommu handles to use).
> -         */
> -        /*
> -         * This assumes that the guest IOMMU is empty of
> -         * mappings at this point.
> -         *
> -         * One way of doing this is:
> -         * 1. Avoid sharing IOMMUs between emulated devices or different
> -         * IOMMU groups.
> -         * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
> -         * there are some mappings in IOMMU.
> -         *
> -         * VFIO on SPAPR does that. Other IOMMU models may do that different,
> -         * they must make sure there are no existing mappings or
> -         * loop through existing mappings to map them into VFIO.
> -         */
> -        giommu = g_malloc0(sizeof(*giommu));
> -        giommu->iommu = section->mr;
> -        giommu->container = container;
> -        giommu->n.notify = vfio_iommu_map_notify;
> -        QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> -        memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> -
> -        return;
> -    }
> -
>      /* Here we assume that memory_region_is_ram(section->mr)==true */
>  
>      end = int128_get64(llend);
> @@ -438,27 +338,6 @@ static void vfio_listener_region_del(MemoryListener *listener,
>          return;
>      }
>  
> -    if (memory_region_is_iommu(section->mr)) {
> -        VFIOGuestIOMMU *giommu;
> -
> -        QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
> -            if (giommu->iommu == section->mr) {
> -                memory_region_unregister_iommu_notifier(&giommu->n);
> -                QLIST_REMOVE(giommu, giommu_next);
> -                g_free(giommu);
> -                break;
> -            }
> -        }
> -
> -        /*
> -         * FIXME: We assume the one big unmap below is adequate to
> -         * remove any individual page mappings in the IOMMU which
> -         * might have been copied into VFIO. This works for a page table
> -         * based IOMMU where a big unmap flattens a large range of IO-PTEs.
> -         * That may not be true for all IOMMU types.
> -         */
> -    }
> -
>      iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
>      end = (section->offset_within_address_space + int128_get64(section->size)) &
>            TARGET_PAGE_MASK;
> @@ -724,11 +603,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
>              goto free_container_exit;
>          }
>  
> -        container->iommu_data.type1.listener = vfio_memory_listener;
> -        container->iommu_data.release = vfio_listener_release;
> -
> -        memory_listener_register(&container->iommu_data.type1.listener,
> -                                 container->space->as);
> +        spapr_memory_listener_register(container);
>  
>      } else {
>          error_report("vfio: No available IOMMU models");
> diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
> new file mode 100644
> index 0000000..f03fab1
> --- /dev/null
> +++ b/hw/vfio/spapr.c
> @@ -0,0 +1,229 @@
> +/*
> + * QEMU sPAPR VFIO IOMMU
> + *
> + * Copyright (c) 2015 Alexey Kardashevskiy, IBM Corporation.
> + *
> + *  This program is free software; you can redistribute it and/or modify
> + *  it under the terms of the GNU General Public License as published by
> + *  the Free Software Foundation; either version 2 of the License,
> + *  or (at your option) any later version.
> + *
> + *  This program is distributed in the hope that it will be useful,
> + *  but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *  GNU General Public License for more details.
> + *
> + *  You should have received a copy of the GNU General Public License
> + *  along with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "hw/vfio/vfio-common.h"
> +#include "qemu/error-report.h"
> +#include "trace.h"
> +
> +static void vfio_iommu_map_notify(Notifier *n, void *data)
> +{
> +    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
> +    VFIOContainer *container = giommu->container;
> +    IOMMUTLBEntry *iotlb = data;
> +    MemoryRegion *mr;
> +    hwaddr xlat;
> +    hwaddr len = iotlb->addr_mask + 1;
> +    void *vaddr;
> +    int ret;
> +
> +    trace_vfio_iommu_map_notify(iotlb->iova,
> +                                iotlb->iova + iotlb->addr_mask);
> +
> +    /*
> +     * The IOMMU TLB entry we have just covers translation through
> +     * this IOMMU to its immediate target.  We need to translate
> +     * it the rest of the way through to memory.
> +     */
> +    rcu_read_lock();
> +    mr = address_space_translate(&address_space_memory,
> +                                 iotlb->translated_addr,
> +                                 &xlat, &len, iotlb->perm & IOMMU_WO);
> +    if (!memory_region_is_ram(mr)) {
> +        error_report("iommu map to non memory area %"HWADDR_PRIx"",
> +                     xlat);
> +        goto out;
> +    }
> +    /*
> +     * Translation truncates length to the IOMMU page size,
> +     * check that it did not truncate too much.
> +     */
> +    if (len & iotlb->addr_mask) {
> +        error_report("iommu has granularity incompatible with target AS");
> +        goto out;
> +    }
> +
> +    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
> +        vaddr = memory_region_get_ram_ptr(mr) + xlat;
> +        ret = vfio_dma_map(container, iotlb->iova,
> +                           iotlb->addr_mask + 1, vaddr,
> +                           !(iotlb->perm & IOMMU_WO) || mr->readonly);
> +        if (ret) {
> +            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
> +                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
> +                         container, iotlb->iova,
> +                         iotlb->addr_mask + 1, vaddr, ret);
> +        }
> +    } else {
> +        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
> +        if (ret) {
> +            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> +                         "0x%"HWADDR_PRIx") = %d (%m)",
> +                         container, iotlb->iova,
> +                         iotlb->addr_mask + 1, ret);
> +        }
> +    }
> +out:
> +    rcu_read_unlock();
> +}
> +
> +static void vfio_spapr_listener_region_add(MemoryListener *listener,
> +                                     MemoryRegionSection *section)
> +{
> +    VFIOContainer *container = container_of(listener, VFIOContainer,
> +                                            iommu_data.spapr.listener);
> +    hwaddr iova;
> +    Int128 llend;
> +    VFIOGuestIOMMU *giommu;
> +
> +    if (vfio_listener_skipped_section(section)) {
> +        trace_vfio_listener_region_add_skip(
> +            section->offset_within_address_space,
> +            section->offset_within_address_space +
> +            int128_get64(int128_sub(section->size, int128_one())));
> +        return;
> +    }
> +
> +    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
> +                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
> +        error_report("%s received unaligned region", __func__);
> +        return;
> +    }
> +
> +    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> +    llend = int128_make64(section->offset_within_address_space);
> +    llend = int128_add(llend, section->size);
> +    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
> +
> +    if (int128_ge(int128_make64(iova), llend)) {
> +        return;
> +    }
> +
> +    memory_region_ref(section->mr);
> +
> +    trace_vfio_listener_region_add_iommu(iova,
> +         int128_get64(int128_sub(llend, int128_one())));
> +    /*
> +     * FIXME: We should do some checking to see if the
> +     * capabilities of the host VFIO IOMMU are adequate to model
> +     * the guest IOMMU
> +     *
> +     * FIXME: For VFIO iommu types which have KVM acceleration to
> +     * avoid bouncing all map/unmaps through qemu this way, this
> +     * would be the right place to wire that up (tell the KVM
> +     * device emulation the VFIO iommu handles to use).
> +     */
> +    /*
> +     * This assumes that the guest IOMMU is empty of
> +     * mappings at this point.
> +     *
> +     * One way of doing this is:
> +     * 1. Avoid sharing IOMMUs between emulated devices or different
> +     * IOMMU groups.
> +     * 2. Implement VFIO_IOMMU_ENABLE in the host kernel to fail if
> +     * there are some mappings in IOMMU.
> +     *
> +     * VFIO on SPAPR does that. Other IOMMU models may do this differently;
> +     * they must either make sure there are no existing mappings or
> +     * loop through existing mappings to map them into VFIO.
> +     */
> +    giommu = g_malloc0(sizeof(*giommu));
> +    giommu->iommu = section->mr;
> +    giommu->container = container;
> +    giommu->n.notify = vfio_iommu_map_notify;
> +    QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> +    memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> +}
> +
> +static void vfio_spapr_listener_region_del(MemoryListener *listener,
> +                                     MemoryRegionSection *section)
> +{
> +    VFIOContainer *container = container_of(listener, VFIOContainer,
> +                                            iommu_data.spapr.listener);
> +    hwaddr iova, end;
> +    int ret;
> +    VFIOGuestIOMMU *giommu;
> +
> +    if (vfio_listener_skipped_section(section)) {
> +        trace_vfio_listener_region_del_skip(
> +            section->offset_within_address_space,
> +            section->offset_within_address_space +
> +            int128_get64(int128_sub(section->size, int128_one())));
> +        return;
> +    }
> +
> +    if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
> +                 (section->offset_within_region & ~TARGET_PAGE_MASK))) {
> +        error_report("%s received unaligned region", __func__);
> +        return;
> +    }
> +
> +    QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
> +        if (giommu->iommu == section->mr) {
> +            memory_region_unregister_iommu_notifier(&giommu->n);
> +            QLIST_REMOVE(giommu, giommu_next);
> +            g_free(giommu);
> +            break;
> +        }
> +    }
> +
> +    /*
> +     * FIXME: We assume the one big unmap below is adequate to
> +     * remove any individual page mappings in the IOMMU which
> +     * might have been copied into VFIO. This works for a page table
> +     * based IOMMU where a big unmap flattens a large range of IO-PTEs.
> +     * That may not be true for all IOMMU types.
> +     */
> +
> +    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
> +    end = (section->offset_within_address_space + int128_get64(section->size)) &
> +        TARGET_PAGE_MASK;
> +
> +    if (iova >= end) {
> +        return;
> +    }
> +
> +    trace_vfio_listener_region_del(iova, end - 1);
> +
> +    ret = vfio_dma_unmap(container, iova, end - iova);
> +    memory_region_unref(section->mr);
> +    if (ret) {
> +        error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> +                     "0x%"HWADDR_PRIx") = %d (%m)",
> +                     container, iova, end - iova, ret);
> +    }
> +}
> +
> +static const MemoryListener vfio_spapr_memory_listener = {
> +    .region_add = vfio_spapr_listener_region_add,
> +    .region_del = vfio_spapr_listener_region_del,
> +};
> +
> +static void vfio_spapr_listener_release(VFIOContainer *container)
> +{
> +    memory_listener_unregister(&container->iommu_data.spapr.listener);
> +}
> +
> +void spapr_memory_listener_register(VFIOContainer *container)
> +{
> +    container->iommu_data.spapr.listener = vfio_spapr_memory_listener;
> +    container->iommu_data.release = vfio_spapr_listener_release;
> +
> +    memory_listener_register(&container->iommu_data.spapr.listener,
> +                             container->space->as);
> +}
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 59a321d..d513a2f 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -70,6 +70,10 @@ typedef struct VFIOType1 {
>      bool initialized;
>  } VFIOType1;
>  
> +typedef struct VFIOSPAPR {
> +    MemoryListener listener;
> +} VFIOSPAPR;
> +
>  typedef struct VFIOContainer {
>      VFIOAddressSpace *space;
>      int fd; /* /dev/vfio/vfio, empowered by the attached groups */
> @@ -77,6 +81,7 @@ typedef struct VFIOContainer {
>          /* enable abstraction to support various iommu backends */
>          union {
>              VFIOType1 type1;
> +            VFIOSPAPR spapr;
>          };
>          void (*release)(struct VFIOContainer *);
>      } iommu_data;
> @@ -146,4 +151,12 @@ extern const MemoryRegionOps vfio_region_ops;
>  extern QLIST_HEAD(vfio_group_head, VFIOGroup) vfio_group_list;
>  extern QLIST_HEAD(vfio_as_head, VFIOAddressSpace) vfio_address_spaces;
>  
> +extern int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> +                        ram_addr_t size, void *vaddr, bool readonly);
> +extern int vfio_dma_unmap(VFIOContainer *container,
> +                          hwaddr iova, ram_addr_t size);
> +bool vfio_listener_skipped_section(MemoryRegionSection *section);
> +
> +extern void spapr_memory_listener_register(VFIOContainer *container);
> +
>  #endif /* !HW_VFIO_VFIO_COMMON_H */

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file
  2015-06-18 21:10   ` Alex Williamson
@ 2015-06-19  0:16     ` Alexey Kardashevskiy
  0 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-19  0:16 UTC (permalink / raw)
  To: Alex Williamson
  Cc: David Gibson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On 06/19/2015 07:10 AM, Alex Williamson wrote:
> On Thu, 2015-06-18 at 21:37 +1000, Alexey Kardashevskiy wrote:
>> This moves SPAPR bits to a separate file to avoid pollution of x86 code.
>>
>> This enables spapr-vfio on CONFIG_SOFTMMU (not CONFIG_PSERIES) as
>> the config options are only visible in makefiles and not in the source code
>> so there is no obvious way of implementing stubs if hw/vfio/spapr.c is
>> not compiled.
>>
>> This is a mechanical patch.
>
>
> Why does spapr code always need to be pulled out of common code and
> private interfaces exposed to be called in ad-hoc ways?  Doesn't that
> say something about a lack of design in the implementation?  Why not
> also pull out type1 support and perhaps create an interface between vfio
> common code and vfio iommu code?

But how exactly? A container_ops struct with 2 callbacks: 
init_listener()/ioctl(), per IOMMU type? vfio_container_ioctl() does not do 
anything spapr-specific (its existence is spapr-specific though). And I 
will still have to expose vfio_dma_map()/vfio_dma_unmap() to these new 
spapr.c and type1.c (move these map/unmap helpers to common_api.c?).

And then I'll be told to make a container a QOM object, with class and 
state. btw why are they not all QOM-ed already? :)

And I also need to expose ioctl(vfio_kvm_device_fd,...) but it does not 
belong to the container.

Most likely other IOMMUs will look pretty much the same as TYPE1, ours is 
just really weird (is there any other arch exposing an IOMMU to the guest?).



-- 
Alexey

* Re: [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers Alexey Kardashevskiy
@ 2015-06-22  3:28   ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-22  3:28 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:27PM +1000, Alexey Kardashevskiy wrote:
> At the moment presence of vfio-pci devices on a bus affect the way
> the guest view table is allocated. If there is no vfio-pci on a PHB
> and the host kernel supports KVM acceleration of H_PUT_TCE, a table
> is allocated in KVM. However, if there is vfio-pci and we do not yet
> have KVM acceleration for these, the table has to be allocated by
> the userspace. At the moment the table is allocated once at boot time
> but next patches will reallocate it.
> 
> This moves kvmppc_create_spapr_tce/g_malloc0 and their counterparts
> to helpers.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table Alexey Kardashevskiy
@ 2015-06-22  3:45   ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-22  3:45 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:28PM +1000, Alexey Kardashevskiy wrote:
> Currently TCE tables are created once at start and their size never
> changes. We are going to change that by introducing a Dynamic DMA windows
> support where DMA configuration may change during the guest execution.
> 
> This changes spapr_tce_new_table() to create an empty stub object. Only
> LIOBN is assigned by the time of creation. It still will be called once
> at the owner object (VIO or PHB) creation.
> 
> This introduces an "enabled" state for TCE table objects with two
> helper functions - spapr_tce_table_enable()/spapr_tce_table_disable().
> spapr_tce_table_enable() receives TCE table parameters and allocates
> a guest view of the TCE table (in the user space or KVM).
> spapr_tce_table_disable() disposes the table.
> 
> Follow up patches will disable+enable tables on reset (system reset
> or DDW reset).
> 
> No visible change in behaviour is expected except the actual table
> will be reallocated every reset. We might optimize this later.
> 
> The other way to implement this would be to dynamically create/remove
> the TCE table QOM objects, but this would make migration impossible
> as migration expects all QOM objects to exist at the receiver
> so we have to have TCE table objects created when migration begins.
> 
> spapr_tce_table_do_enable() is separated from spapr_tce_table_enable()
> as later it will be called at the sPAPRTCETable post-migration stage when
> it has all the properties set after the migration.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v8:
> * add missing unparent_object() to spapr_tce_table_unrealize() (parenting
> is made by memory_region_init_iommu)

Um.. I don't see an unparent_object() in spapr_tce_table_unrealize().
Or anywhere else.  I don't actually know where it's necessary, but
this seems to contradict your changelog regardless.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable Alexey Kardashevskiy
@ 2015-06-22  3:51   ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-22  3:51 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:29PM +1000, Alexey Kardashevskiy wrote:
> sPAPRTCETable has a vfio_accel flag which is passed to
> kvmppc_create_spapr_tce() and controls whether to create a guest view
> table in KVM as this depends on the host kernel ability to accelerate
> H_PUT_TCE for VFIO devices. We would set this flag at the moment
> when sPAPRTCETable is created in spapr_tce_new_table() and
> use when the table is allocated in spapr_tce_table_realize().
> 
> Now we explicitly enable/disable DMA windows via spapr_tce_table_enable()
> and spapr_tce_table_disable() and can pass this flag directly without
> caching it in sPAPRTCETable.
> 
> This removes the flag. This should cause no behavioural change.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge Alexey Kardashevskiy
@ 2015-06-22  4:41   ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-22  4:41 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:32PM +1000, Alexey Kardashevskiy wrote:
> sPAPRTCETable is handling 2 TCE tables already:
> 
> 1) guest view of the TCE table - emulated devices use only this table;
> 
> 2) hardware IOMMU table - VFIO PCI devices use it for actual work but
> it does not replace 1) and it is not visible to the guest.
> The initialization of this table is driven by vfio-pci device,
> DMA map/unmap requests are handled via MemoryListener so there is very
> little to do in spapr-pci-vfio-host-bridge.
> 
> This moves VFIO bits to the generic spapr-pci-host-bridge which allows
> putting emulated and VFIO devices on the same PHB. It is still possible
> to create multiple PHBs and avoid sharing PHB resources for emulated and
> VFIO devices.
> 
> If no VFIO-PCI device is attached, no special ioctls will be called.
> If there are some VFIO-PCI devices attached, the PHB may refuse to attach
> another VFIO-PCI device if a VFIO container on the host kernel side
> does not support container sharing.
> 
> This changes spapr-pci-host-bridge to support properties of
> spapr-pci-vfio-host-bridge. This makes spapr-pci-vfio-host-bridge type
> equal to spapr-pci-host-bridge except it has an additional "iommu"
> property for backward compatibility reasons.
> 
> This moves PCI device lookup from spapr_phb_vfio_eeh_set_option() to
> rtas_ibm_set_eeh_option() as we need to know if the device is "vfio-pci"
> and decide whether to call spapr_phb_vfio_eeh_set_option() or not.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

I like the idea of merging the two PHB classes.

But.. what if you hotplug a VFIO device onto a PHB that previously
didn't have one?  I don't see anything here that will update has_vfio
and copy the existing TCE tables into VFIO.

[snip]
> @@ -1185,6 +1161,11 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
>      uint64_t msi_window_size = 4096;
>      sPAPRTCETable *tcet;
>  
> +    if ((sphb->iommugroupid != -1) &&
> +        object_dynamic_cast(OBJECT(sphb), TYPE_SPAPR_PCI_VFIO_HOST_BRIDGE)) {
> +        error_report("Warning: iommugroupid shall not be used");

That's a rather cryptic error message.  How about
   "Warning: iommugroupid is deprecated and will be ignored"

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug Alexey Kardashevskiy
@ 2015-06-22  5:14   ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-22  5:14 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:33PM +1000, Alexey Kardashevskiy wrote:
> sPAPR IOMMU is managing two copies of a TCE table:
> 1) a guest view of the table - this is what emulated devices use and
> this is where H_GET_TCE reads from;
> 2) a hardware TCE table - only present if there is at least one vfio-pci
> device on a PHB; it is updated via a memory listener on a PHB address
> space which forwards map/unmap requests to vfio-pci IOMMU host driver.
> 
> At the moment presence of vfio-pci devices on a bus affect the way
> the guest view table is allocated. If there is no vfio-pci on a PHB
> and the host kernel supports KVM acceleration of H_PUT_TCE, a table
> is allocated in KVM. However, if there is vfio-pci and we do not yet
> support KVM acceleration for these, the table has to be allocated
> by the userspace.
> 
> When vfio-pci device is hotplugged and there were no vfio-pci devices
> already, the guest view table could have been allocated by KVM which
> means that H_PUT_TCE is handled by the host kernel and since we
> do not support vfio-pci in KVM, the hardware table will not be updated.
> 
> This reallocates the guest view table in QEMU if the first vfio-pci
> device has just been plugged. spapr_tce_realloc_userspace() handles this.
> 
> This replays all the mappings to make sure that the tables are in sync.
> This will not have a visible effect though as for a new device
> the guest kernel will allocate-and-map new addresses and therefore
> existing mappings from emulated devices will not be used by vfio-pci
> devices.
> 
> This adds calls to spapr_phb_dma_capabilities_update() in PCI hotplug
> hooks.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

Please ignore the comment about hotplug I made on the previous patch, not
having read this one.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file Alexey Kardashevskiy
  2015-06-18 21:10   ` Alex Williamson
@ 2015-06-23  5:49   ` David Gibson
  1 sibling, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-23  5:49 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:24PM +1000, Alexey Kardashevskiy wrote:
> This moves SPAPR bits to a separate file to avoid pollution of x86 code.
> 
> This enables spapr-vfio on CONFIG_SOFTMMU (not CONFIG_PSERIES) as
> the config options are only visible in makefiles and not in the source code
> so there is no obvious way of implementing stubs if hw/vfio/spapr.c is
> not compiled.
> 
> This is a mechanical patch.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

So, I sent a R-b for this earlier, and I think the patch won't break
anything.

But on consideration I don't really see the point of this.  The parts
that are split out into vfio/spapr.c aren't really spapr specific,
just used a bit more heavily with the spapr model.  That may not even
remain the case if guest-visible x86 IOMMUs become more common.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW) Alexey Kardashevskiy
@ 2015-06-23  6:38   ` David Gibson
  2015-06-24 10:37     ` Alexey Kardashevskiy
  0 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2015-06-23  6:38 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, Jun 18, 2015 at 09:37:36PM +1000, Alexey Kardashevskiy wrote:
> This adds support for the Dynamic DMA Windows (DDW) option defined by
> the SPAPR specification, which allows having additional DMA window(s).
> 
> This implements DDW for emulated and VFIO devices. As all TCE root regions
> are mapped at 0 and 64bit long (and actual tables are child regions),
> this replaces memory_region_add_subregion() with _overlap() to make
> QEMU memory API happy.
> 
> This reserves RTAS token numbers for DDW calls.
> 
> This implements helpers to interact with VFIO kernel interface.
> 
> This changes the TCE table migration descriptor to support dynamic
> tables: from now on, the PHB will create as many stub TCE table objects
> as it can possibly support, but not all of them might be initialized at
> the time of migration because DDW might or might not have been requested
> by the guest.
> 
> The "ddw" property is enabled by default on a PHB but for compatibility
> the pseries-2.3 machine and older disable it.
> 
> This implements DDW for VFIO. The host kernel support is required.
> This adds a "levels" property to PHB to control the number of levels
> in the actual TCE table allocated by the host kernel, 0 is the default
> value to tell QEMU to calculate the correct value. Current hardware
> supports up to 5 levels.
> 
> The existing Linux guests try creating one additional huge DMA window
> with 64K or 16MB pages and map the entire guest RAM into it. If this succeeds,
> the guest switches to dma_direct_ops and never calls TCE hypercalls
> (H_PUT_TCE,...) again. This enables VFIO devices to use the entire RAM
> and not waste time on map/unmap later.
> 
> This adds 4 RTAS handlers:
> * ibm,query-pe-dma-window
> * ibm,create-pe-dma-window
> * ibm,remove-pe-dma-window
> * ibm,reset-pe-dma-window
> These are registered from type_init() callback.
> 
> These RTAS handlers are implemented in a separate file to avoid polluting
> spapr_iommu.c with PCI.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

> diff --git a/hw/ppc/Makefile.objs b/hw/ppc/Makefile.objs
> index c8ab06e..0b2ff6d 100644
> --- a/hw/ppc/Makefile.objs
> +++ b/hw/ppc/Makefile.objs
> @@ -7,6 +7,9 @@ obj-$(CONFIG_PSERIES) += spapr_pci.o spapr_rtc.o spapr_drc.o
>  ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES)$(CONFIG_LINUX), yyy)
>  obj-y += spapr_pci_vfio.o
>  endif
> +ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES), yy)
> +obj-y += spapr_rtas_ddw.o
> +endif
>  # PowerPC 4xx boards
>  obj-y += ppc405_boards.o ppc4xx_devs.o ppc405_uc.o ppc440_bamboo.o
>  obj-y += ppc4xx_pci.o
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 5ca817c..d50d50b 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1860,6 +1860,11 @@ static const TypeInfo spapr_machine_info = {
>              .driver   = "spapr-pci-host-bridge",\
>              .property = "dynamic-reconfiguration",\
>              .value    = "off",\
> +        },\
> +        {\
> +            .driver   = TYPE_SPAPR_PCI_HOST_BRIDGE,\
> +            .property = "ddw",\
> +            .value    = stringify(off),\
>          },
>  
>  #define SPAPR_COMPAT_2_2 \
> diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
> index 5e6bdb4..eaa1943 100644
> --- a/hw/ppc/spapr_iommu.c
> +++ b/hw/ppc/spapr_iommu.c
> @@ -136,6 +136,15 @@ static IOMMUTLBEntry spapr_tce_translate_iommu(MemoryRegion *iommu, hwaddr addr,
>      return ret;
>  }
>  
> +static void spapr_tce_table_pre_save(void *opaque)
> +{
> +    sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
> +
> +    tcet->migtable = tcet->table;
> +}
> +
> +static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel);
> +
>  static int spapr_tce_table_post_load(void *opaque, int version_id)
>  {
>      sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
> @@ -144,22 +153,43 @@ static int spapr_tce_table_post_load(void *opaque, int version_id)
>          spapr_vio_set_bypass(tcet->vdev, tcet->bypass);
>      }
>  
> +    if (!tcet->migtable) {
> +        return 0;
> +    }
> +
> +    if (tcet->enabled) {
> +        if (!tcet->table) {
> +            tcet->enabled = false;
> +            /* VFIO does not migrate so pass vfio_accel == false */
> +            spapr_tce_table_do_enable(tcet, false);
> +        }
> +        memcpy(tcet->table, tcet->migtable,
> +               tcet->nb_table * sizeof(tcet->table[0]));
> +        free(tcet->migtable);
> +        tcet->migtable = NULL;
> +    }
> +
>      return 0;
>  }
>  
>  static const VMStateDescription vmstate_spapr_tce_table = {
>      .name = "spapr_iommu",
> -    .version_id = 2,
> +    .version_id = 3,
>      .minimum_version_id = 2,
> +    .pre_save = spapr_tce_table_pre_save,
>      .post_load = spapr_tce_table_post_load,
>      .fields      = (VMStateField []) {
>          /* Sanity check */
>          VMSTATE_UINT32_EQUAL(liobn, sPAPRTCETable),
> -        VMSTATE_UINT32_EQUAL(nb_table, sPAPRTCETable),
>  
>          /* IOMMU state */
> +        VMSTATE_BOOL_V(enabled, sPAPRTCETable, 3),
> +        VMSTATE_UINT64_V(bus_offset, sPAPRTCETable, 3),
> +        VMSTATE_UINT32_V(page_shift, sPAPRTCETable, 3),
> +        VMSTATE_UINT32(nb_table, sPAPRTCETable),
>          VMSTATE_BOOL(bypass, sPAPRTCETable),
> -        VMSTATE_VARRAY_UINT32(table, sPAPRTCETable, nb_table, 0, vmstate_info_uint64, uint64_t),
> +        VMSTATE_VARRAY_UINT32_ALLOC(migtable, sPAPRTCETable, nb_table, 0,
> +                                    vmstate_info_uint64, uint64_t),
>  
>          VMSTATE_END_OF_LIST()
>      },
> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
> index 1f980fa..ab2d650 100644
> --- a/hw/ppc/spapr_pci.c
> +++ b/hw/ppc/spapr_pci.c
> @@ -719,6 +719,8 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
>  static int spapr_phb_dma_update(Object *child, void *opaque)
>  {
>      int ret = 0;
> +    uint64_t bus_offset = 0;
> +    sPAPRPHBState *sphb = opaque;
>      sPAPRTCETable *tcet = (sPAPRTCETable *)
>          object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
>  
> @@ -726,6 +728,17 @@ static int spapr_phb_dma_update(Object *child, void *opaque)
>          return 0;
>      }
>  
> +    ret = spapr_phb_vfio_dma_init_window(sphb,
> +                                         tcet->page_shift,
> +                                         tcet->nb_table << tcet->page_shift,
> +                                         &bus_offset);
> +    if (ret) {
> +        return ret;
> +    }
> +    if (bus_offset != tcet->bus_offset) {
> +        return -EFAULT;
> +    }
> +
>      if (tcet->fd >= 0) {
>          /*
>           * We got first vfio-pci device on accelerated table.
> @@ -749,6 +762,9 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
>  
>      sphb->dma32_window_start = 0;
>      sphb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
> +    sphb->windows_supported = SPAPR_PCI_DMA_MAX_WINDOWS;
> +    sphb->page_size_mask = (1ULL << 12) | (1ULL << 16) | (1ULL << 24);
> +    sphb->dma64_window_size = pow2ceil(ram_size);

This should probably be maxram_size so we're ready for hotplug memory
- and in some other places too.

>  
>      ret = spapr_phb_vfio_dma_capabilities_update(sphb);
>      sphb->has_vfio = (ret == 0);
> @@ -756,12 +772,31 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
>      return 0;
>  }
>  
> -static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
> -                                     uint32_t liobn, uint32_t page_shift,
> -                                     uint64_t window_size)
> +int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
> +                              uint32_t liobn, uint32_t page_shift,
> +                              uint64_t window_size)
>  {
>      uint64_t bus_offset = sphb->dma32_window_start;
>      sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
> +    int ret;
> +
> +    if (SPAPR_PCI_DMA_WINDOW_NUM(liobn) && !sphb->ddw_enabled) {
> +        return -1;
> +    }
> +
> +    if (sphb->ddw_enabled) {
> +        if (sphb->has_vfio) {
> +            ret = spapr_phb_vfio_dma_init_window(sphb,
> +                                                 page_shift, window_size,
> +                                                 &bus_offset);
> +            if (ret) {
> +                return ret;
> +            }
> +        } else if (SPAPR_PCI_DMA_WINDOW_NUM(liobn)) {
> +            /* No VFIO so we choose a huge window address */
> +            bus_offset = SPAPR_PCI_DMA64_START;

Won't this logic break if you hotplug a VFIO device onto a PHB that
previously didn't have any?

> +        }
> +    }
>  
>      spapr_tce_table_enable(tcet, bus_offset, page_shift,
>                             window_size >> page_shift,
> @@ -773,9 +808,14 @@ static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
>  int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
>                                  sPAPRTCETable *tcet)
>  {
> +    int ret = 0;
> +
> +    if (sphb->has_vfio && sphb->ddw_enabled) {
> +        ret = spapr_phb_vfio_dma_remove_window(sphb, tcet);
> +    }
>      spapr_tce_table_disable(tcet);
>  
> -    return 0;
> +    return ret;
>  }
>  
>  static int spapr_phb_disable_dma_windows(Object *child, void *opaque)
> @@ -811,7 +851,7 @@ static int spapr_phb_hotplug_dma_sync(sPAPRPHBState *sphb)
>      spapr_phb_dma_capabilities_update(sphb);
>  
>      if (!had_vfio && sphb->has_vfio) {
> -        object_child_foreach(OBJECT(sphb), spapr_phb_dma_update, NULL);
> +        object_child_foreach(OBJECT(sphb), spapr_phb_dma_update, sphb);
>      }
>  
>      return ret;
> @@ -1357,15 +1397,17 @@ static void spapr_phb_realize(DeviceState *dev, Error **errp)
>          }
>      }
>  
> -    tcet = spapr_tce_new_table(DEVICE(sphb), sphb->dma_liobn);
> -    if (!tcet) {
> -            error_setg(errp, "failed to create TCE table");
> +    for (i = 0; i < SPAPR_PCI_DMA_MAX_WINDOWS; ++i) {
> +        tcet = spapr_tce_new_table(DEVICE(sphb),
> +                                   SPAPR_PCI_LIOBN(sphb->index, i));
> +        if (!tcet) {
> +            error_setg(errp, "spapr_tce_new_table failed");
>              return;
> +        }
> +        memory_region_add_subregion_overlap(&sphb->iommu_root, 0,
> +                                            spapr_tce_get_iommu(tcet), 0);
>      }
>  
> -    memory_region_add_subregion(&sphb->iommu_root, 0,
> -                                spapr_tce_get_iommu(tcet));
> -
>      sphb->msi = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, g_free);
>  }
>  
> @@ -1400,6 +1442,8 @@ static Property spapr_phb_properties[] = {
>                         SPAPR_PCI_IO_WIN_SIZE),
>      DEFINE_PROP_BOOL("dynamic-reconfiguration", sPAPRPHBState, dr_enabled,
>                       true),
> +    DEFINE_PROP_BOOL("ddw", sPAPRPHBState, ddw_enabled, true),
> +    DEFINE_PROP_UINT8("levels", sPAPRPHBState, levels, 0),
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> @@ -1580,6 +1624,15 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
>      uint32_t interrupt_map_mask[] = {
>          cpu_to_be32(b_ddddd(-1)|b_fff(0)), 0x0, 0x0, cpu_to_be32(-1)};
>      uint32_t interrupt_map[PCI_SLOT_MAX * PCI_NUM_PINS][7];
> +    uint32_t ddw_applicable[] = {
> +        cpu_to_be32(RTAS_IBM_QUERY_PE_DMA_WINDOW),
> +        cpu_to_be32(RTAS_IBM_CREATE_PE_DMA_WINDOW),
> +        cpu_to_be32(RTAS_IBM_REMOVE_PE_DMA_WINDOW)
> +    };
> +    uint32_t ddw_extensions[] = {
> +        cpu_to_be32(1),
> +        cpu_to_be32(RTAS_IBM_RESET_PE_DMA_WINDOW)
> +    };
>      sPAPRTCETable *tcet;
>  
>      /* Start populating the FDT */
> @@ -1602,6 +1655,14 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
>      _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pci-config-space-type", 0x1));
>      _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pe-total-#msi", XICS_IRQS));
>  
> +    /* Dynamic DMA window */
> +    if (phb->ddw_enabled) {
> +        _FDT(fdt_setprop(fdt, bus_off, "ibm,ddw-applicable", &ddw_applicable,
> +                         sizeof(ddw_applicable)));
> +        _FDT(fdt_setprop(fdt, bus_off, "ibm,ddw-extensions",
> +                         &ddw_extensions, sizeof(ddw_extensions)));
> +    }
> +
>      /* Build the interrupt-map, this must matches what is done
>       * in pci_spapr_map_irq
>       */
> diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
> index 6df9a23..5102c72 100644
> --- a/hw/ppc/spapr_pci_vfio.c
> +++ b/hw/ppc/spapr_pci_vfio.c
> @@ -41,6 +41,86 @@ int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb)
>      sphb->dma32_window_start = info.dma32_window_start;
>      sphb->dma32_window_size = info.dma32_window_size;
>  
> +    if (sphb->ddw_enabled && (info.flags & VFIO_IOMMU_SPAPR_INFO_DDW)) {
> +        sphb->windows_supported = info.ddw.max_dynamic_windows_supported;
> +        sphb->page_size_mask = info.ddw.pgsizes;
> +        sphb->dma64_window_size = pow2ceil(ram_size);
> +        sphb->max_levels = info.ddw.levels;
> +    } else {
> +        /* If VFIO_IOMMU_SPAPR_INFO_DDW is not set, disable DDW */
> +        sphb->ddw_enabled = false;
> +    }
> +
> +    return ret;
> +}
> +
> +static int spapr_phb_vfio_levels(uint32_t entries)
> +{
> +    unsigned pages = (entries * sizeof(uint64_t)) / getpagesize();
> +    int levels;
> +
> +    if (pages <= 64) {
> +        levels = 1;
> +    } else if (pages <= 64*64) {
> +        levels = 2;
> +    } else if (pages <= 64*64*64) {
> +        levels = 3;
> +    } else {
> +        levels = 4;
> +    }
> +
> +    return levels;
> +}
> +
> +int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
> +                                   uint32_t page_shift,
> +                                   uint64_t window_size,
> +                                   uint64_t *bus_offset)
> +{
> +    int ret;
> +    struct vfio_iommu_spapr_tce_create create = {
> +        .argsz = sizeof(create),
> +        .page_shift = page_shift,
> +        .window_size = window_size,
> +        .levels = sphb->levels,
> +        .start_addr = 0,
> +    };
> +
> +    /*
> +     * Dynamic windows are supported, which means that there is no
> +     * pre-created window and we have to create one.
> +     */
> +    if (!create.levels) {
> +        create.levels = spapr_phb_vfio_levels(create.window_size >>
> +                                              page_shift);
> +    }
> +
> +    if (create.levels > sphb->max_levels) {
> +        return -EINVAL;
> +    }
> +
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
> +                               VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
> +    if (ret) {
> +        return ret;
> +    }
> +    *bus_offset = create.start_addr;
> +
> +    return 0;
> +}
> +
> +int spapr_phb_vfio_dma_remove_window(sPAPRPHBState *sphb,
> +                                     sPAPRTCETable *tcet)
> +{
> +    struct vfio_iommu_spapr_tce_remove remove = {
> +        .argsz = sizeof(remove),
> +        .start_addr = tcet->bus_offset
> +    };
> +    int ret;
> +
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
> +                               VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
> +
>      return ret;
>  }
>  
> diff --git a/hw/ppc/spapr_rtas_ddw.c b/hw/ppc/spapr_rtas_ddw.c
> new file mode 100644
> index 0000000..7539c6a
> --- /dev/null
> +++ b/hw/ppc/spapr_rtas_ddw.c
> @@ -0,0 +1,300 @@
> +/*
> + * QEMU sPAPR Dynamic DMA windows support
> + *
> + * Copyright (c) 2014 Alexey Kardashevskiy, IBM Corporation.
> + *
> + *  This program is free software; you can redistribute it and/or modify
> + *  it under the terms of the GNU General Public License as published by
> + *  the Free Software Foundation; either version 2 of the License,
> + *  or (at your option) any later version.
> + *
> + *  This program is distributed in the hope that it will be useful,
> + *  but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *  GNU General Public License for more details.
> + *
> + *  You should have received a copy of the GNU General Public License
> + *  along with this program; if not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "qemu/error-report.h"
> +#include "hw/ppc/spapr.h"
> +#include "hw/pci-host/spapr.h"
> +#include "trace.h"
> +
> +static int spapr_phb_get_active_win_num_cb(Object *child, void *opaque)
> +{
> +    sPAPRTCETable *tcet;
> +
> +    tcet = (sPAPRTCETable *) object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
> +    if (tcet && tcet->enabled) {
> +        ++*(unsigned *)opaque;
> +    }
> +    return 0;
> +}
> +
> +static unsigned spapr_phb_get_active_win_num(sPAPRPHBState *sphb)
> +{
> +    unsigned ret = 0;
> +
> +    object_child_foreach(OBJECT(sphb), spapr_phb_get_active_win_num_cb, &ret);
> +
> +    return ret;
> +}
> +
> +static int spapr_phb_get_free_liobn_cb(Object *child, void *opaque)
> +{
> +    sPAPRTCETable *tcet;
> +
> +    tcet = (sPAPRTCETable *) object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
> +    if (tcet && !tcet->enabled) {
> +        *(uint32_t *)opaque = tcet->liobn;
> +        return 1;
> +    }
> +    return 0;
> +}
> +
> +static unsigned spapr_phb_get_free_liobn(sPAPRPHBState *sphb)
> +{
> +    uint32_t liobn = 0;
> +
> +    object_child_foreach(OBJECT(sphb), spapr_phb_get_free_liobn_cb, &liobn);
> +
> +    return liobn;
> +}
> +
> +static uint32_t spapr_query_mask(struct ppc_one_seg_page_size *sps,
> +                                 uint64_t page_mask)
> +{
> +    int i, j;
> +    uint32_t mask = 0;
> +    const struct { int shift; uint32_t mask; } masks[] = {
> +        { 12, RTAS_DDW_PGSIZE_4K },
> +        { 16, RTAS_DDW_PGSIZE_64K },
> +        { 24, RTAS_DDW_PGSIZE_16M },
> +        { 25, RTAS_DDW_PGSIZE_32M },
> +        { 26, RTAS_DDW_PGSIZE_64M },
> +        { 27, RTAS_DDW_PGSIZE_128M },
> +        { 28, RTAS_DDW_PGSIZE_256M },
> +        { 34, RTAS_DDW_PGSIZE_16G },
> +    };
> +
> +    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> +        for (j = 0; j < ARRAY_SIZE(masks); ++j) {
> +            if ((sps[i].page_shift == masks[j].shift) &&
> +                    (page_mask & (1ULL << masks[j].shift))) {
> +                mask |= masks[j].mask;
> +            }
> +        }
> +    }
> +
> +    return mask;
> +}
> +
> +static void rtas_ibm_query_pe_dma_window(PowerPCCPU *cpu,
> +                                         sPAPRMachineState *spapr,
> +                                         uint32_t token, uint32_t nargs,
> +                                         target_ulong args,
> +                                         uint32_t nret, target_ulong rets)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    sPAPRPHBState *sphb;
> +    uint64_t buid;
> +    uint32_t avail, addr, pgmask = 0;
> +    unsigned current;
> +
> +    if ((nargs != 3) || (nret != 5)) {
> +        goto param_error_exit;
> +    }
> +
> +    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
> +    addr = rtas_ld(args, 0);
> +    sphb = spapr_pci_find_phb(spapr, buid);
> +    if (!sphb || !sphb->ddw_enabled) {
> +        goto param_error_exit;
> +    }
> +
> +    current = spapr_phb_get_active_win_num(sphb);
> +    avail = (sphb->windows_supported > current) ?
> +            (sphb->windows_supported - current) : 0;
> +
> +    /* Work out supported page masks */
> +    pgmask = spapr_query_mask(env->sps.sps, sphb->page_size_mask);
> +
> +    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
> +    rtas_st(rets, 1, avail);
> +
> +    /*
> +     * This is "Largest contiguous block of TCEs allocated specifically
> +     * for (that is, are reserved for) this PE".
> +     * Return the maximum number as if all of RAM were mapped in 4K pages.
> +     */
> +    rtas_st(rets, 2, sphb->dma64_window_size >> SPAPR_TCE_PAGE_SHIFT);
> +    rtas_st(rets, 3, pgmask);
> +    rtas_st(rets, 4, 0); /* DMA migration mask, not supported */
> +
> +    trace_spapr_iommu_ddw_query(buid, addr, avail, sphb->dma64_window_size,
> +                                pgmask);
> +    return;
> +
> +param_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
> +}
> +
> +static void rtas_ibm_create_pe_dma_window(PowerPCCPU *cpu,
> +                                          sPAPRMachineState *spapr,
> +                                          uint32_t token, uint32_t nargs,
> +                                          target_ulong args,
> +                                          uint32_t nret, target_ulong rets)
> +{
> +    sPAPRPHBState *sphb;
> +    sPAPRTCETable *tcet = NULL;
> +    uint32_t addr, page_shift, window_shift, liobn;
> +    uint64_t buid;
> +    long ret;
> +
> +    if ((nargs != 5) || (nret != 4)) {
> +        goto param_error_exit;
> +    }
> +
> +    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
> +    addr = rtas_ld(args, 0);
> +    sphb = spapr_pci_find_phb(spapr, buid);
> +    if (!sphb || !sphb->ddw_enabled) {
> +        goto param_error_exit;
> +    }
> +
> +    page_shift = rtas_ld(args, 3);
> +    window_shift = rtas_ld(args, 4);
> +    liobn = spapr_phb_get_free_liobn(sphb);
> +
> +    if (!liobn || !(sphb->page_size_mask & (1ULL << page_shift))) {
> +        goto hw_error_exit;
> +    }
> +
> +    ret = spapr_phb_dma_init_window(sphb, liobn, page_shift,
> +                                    1ULL << window_shift);
> +    tcet = spapr_tce_find_by_liobn(liobn);
> +    trace_spapr_iommu_ddw_create(buid, addr, 1ULL << page_shift,
> +                                 1ULL << window_shift,
> +                                 tcet ? tcet->bus_offset : 0xbaadf00d,
> +                                 liobn, ret);
> +    if (ret || !tcet) {
> +        goto hw_error_exit;
> +    }
> +
> +    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
> +    rtas_st(rets, 1, liobn);
> +    rtas_st(rets, 2, tcet->bus_offset >> 32);
> +    rtas_st(rets, 3, tcet->bus_offset & ((uint32_t) -1));
> +
> +    return;
> +
> +hw_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
> +    return;
> +
> +param_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
> +}
> +
> +static void rtas_ibm_remove_pe_dma_window(PowerPCCPU *cpu,
> +                                          sPAPRMachineState *spapr,
> +                                          uint32_t token, uint32_t nargs,
> +                                          target_ulong args,
> +                                          uint32_t nret, target_ulong rets)
> +{
> +    sPAPRPHBState *sphb;
> +    sPAPRTCETable *tcet;
> +    uint32_t liobn;
> +    long ret;
> +
> +    if ((nargs != 1) || (nret != 1)) {
> +        goto param_error_exit;
> +    }
> +
> +    liobn = rtas_ld(args, 0);
> +    tcet = spapr_tce_find_by_liobn(liobn);
> +    if (!tcet) {
> +        goto param_error_exit;
> +    }
> +
> +    sphb = SPAPR_PCI_HOST_BRIDGE(OBJECT(tcet)->parent);
> +    if (!sphb || !sphb->ddw_enabled) {
> +        goto param_error_exit;
> +    }
> +
> +    ret = spapr_phb_dma_remove_window(sphb, tcet);
> +    trace_spapr_iommu_ddw_remove(liobn, ret);
> +    if (ret) {
> +        goto hw_error_exit;
> +    }
> +
> +    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
> +    return;
> +
> +hw_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
> +    return;
> +
> +param_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
> +}
> +
> +static void rtas_ibm_reset_pe_dma_window(PowerPCCPU *cpu,
> +                                         sPAPRMachineState *spapr,
> +                                         uint32_t token, uint32_t nargs,
> +                                         target_ulong args,
> +                                         uint32_t nret, target_ulong rets)
> +{
> +    sPAPRPHBState *sphb;
> +    uint64_t buid;
> +    uint32_t addr;
> +    long ret;
> +
> +    if ((nargs != 3) || (nret != 1)) {
> +        goto param_error_exit;
> +    }
> +
> +    buid = ((uint64_t)rtas_ld(args, 1) << 32) | rtas_ld(args, 2);
> +    addr = rtas_ld(args, 0);
> +    sphb = spapr_pci_find_phb(spapr, buid);
> +    if (!sphb || !sphb->ddw_enabled) {
> +        goto param_error_exit;
> +    }
> +
> +    ret = spapr_phb_dma_reset(sphb);
> +    trace_spapr_iommu_ddw_reset(buid, addr, ret);
> +    if (ret) {
> +        goto hw_error_exit;
> +    }
> +
> +    rtas_st(rets, 0, RTAS_OUT_SUCCESS);
> +
> +    return;
> +
> +hw_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_HW_ERROR);
> +    return;
> +
> +param_error_exit:
> +    rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
> +}
> +
> +static void spapr_rtas_ddw_init(void)
> +{
> +    spapr_rtas_register(RTAS_IBM_QUERY_PE_DMA_WINDOW,
> +                        "ibm,query-pe-dma-window",
> +                        rtas_ibm_query_pe_dma_window);
> +    spapr_rtas_register(RTAS_IBM_CREATE_PE_DMA_WINDOW,
> +                        "ibm,create-pe-dma-window",
> +                        rtas_ibm_create_pe_dma_window);
> +    spapr_rtas_register(RTAS_IBM_REMOVE_PE_DMA_WINDOW,
> +                        "ibm,remove-pe-dma-window",
> +                        rtas_ibm_remove_pe_dma_window);
> +    spapr_rtas_register(RTAS_IBM_RESET_PE_DMA_WINDOW,
> +                        "ibm,reset-pe-dma-window",
> +                        rtas_ibm_reset_pe_dma_window);
> +}
> +
> +type_init(spapr_rtas_ddw_init)
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 9e3e0b0..f915127 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -830,6 +830,8 @@ int vfio_container_ioctl(AddressSpace *as,
>      case VFIO_CHECK_EXTENSION:
>      case VFIO_IOMMU_SPAPR_TCE_GET_INFO:
>      case VFIO_EEH_PE_OP:
> +    case VFIO_IOMMU_SPAPR_TCE_CREATE:
> +    case VFIO_IOMMU_SPAPR_TCE_REMOVE:
>          break;
>      default:
>          /* Return an error on unknown requests */
> diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
> index b2a8fc3..1313805 100644
> --- a/include/hw/pci-host/spapr.h
> +++ b/include/hw/pci-host/spapr.h
> @@ -88,6 +88,12 @@ struct sPAPRPHBState {
>      uint32_t dma32_window_size;
>      bool has_vfio;
>      int32_t iommugroupid; /* obsolete */
> +    bool ddw_enabled;
> +    uint32_t windows_supported;
> +    uint64_t page_size_mask;
> +    uint64_t dma64_window_size;
> +    uint8_t max_levels;
> +    uint8_t levels;
>  
>      QLIST_ENTRY(sPAPRPHBState) list;
>  };
> @@ -110,6 +116,12 @@ struct sPAPRPHBState {
>  
>  #define SPAPR_PCI_DMA32_SIZE         0x40000000
>  
> +/* Default 64bit dynamic window offset */
> +#define SPAPR_PCI_DMA64_START        0x8000000000000000ULL
> +
> +/* Maximum allowed number of DMA windows for emulated PHB */
> +#define SPAPR_PCI_DMA_MAX_WINDOWS    2
> +
>  static inline qemu_irq spapr_phb_lsi_qirq(struct sPAPRPHBState *phb, int pin)
>  {
>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
> @@ -130,11 +142,20 @@ void spapr_pci_rtas_init(void);
>  sPAPRPHBState *spapr_pci_find_phb(sPAPRMachineState *spapr, uint64_t buid);
>  PCIDevice *spapr_pci_find_dev(sPAPRMachineState *spapr, uint64_t buid,
>                                uint32_t config_addr);
> +int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
> +                              uint32_t liobn, uint32_t page_shift,
> +                              uint64_t window_size);
>  int spapr_phb_dma_remove_window(sPAPRPHBState *sphb,
>                                  sPAPRTCETable *tcet);
>  int spapr_phb_dma_reset(sPAPRPHBState *sphb);
>  
>  int spapr_phb_vfio_dma_capabilities_update(sPAPRPHBState *sphb);
> +int spapr_phb_vfio_dma_init_window(sPAPRPHBState *sphb,
> +                                   uint32_t page_shift,
> +                                   uint64_t window_size,
> +                                   uint64_t *bus_offset);
> +int spapr_phb_vfio_dma_remove_window(sPAPRPHBState *sphb,
> +                                     sPAPRTCETable *tcet);
>  int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>                                    PCIDevice *pdev, int option);
>  int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state);
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 4645f16..5a58785 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -416,6 +416,16 @@ int spapr_allocate_irq_block(int num, bool lsi, bool msi);
>  #define RTAS_OUT_NOT_SUPPORTED      -3
>  #define RTAS_OUT_NOT_AUTHORIZED     -9002
>  
> +/* DDW pagesize mask values from ibm,query-pe-dma-window */
> +#define RTAS_DDW_PGSIZE_4K       0x01
> +#define RTAS_DDW_PGSIZE_64K      0x02
> +#define RTAS_DDW_PGSIZE_16M      0x04
> +#define RTAS_DDW_PGSIZE_32M      0x08
> +#define RTAS_DDW_PGSIZE_64M      0x10
> +#define RTAS_DDW_PGSIZE_128M     0x20
> +#define RTAS_DDW_PGSIZE_256M     0x40
> +#define RTAS_DDW_PGSIZE_16G      0x80
> +
>  /* RTAS tokens */
>  #define RTAS_TOKEN_BASE      0x2000
>  
> @@ -457,8 +467,12 @@ int spapr_allocate_irq_block(int num, bool lsi, bool msi);
>  #define RTAS_IBM_SET_SLOT_RESET                 (RTAS_TOKEN_BASE + 0x23)
>  #define RTAS_IBM_CONFIGURE_PE                   (RTAS_TOKEN_BASE + 0x24)
>  #define RTAS_IBM_SLOT_ERROR_DETAIL              (RTAS_TOKEN_BASE + 0x25)
> +#define RTAS_IBM_QUERY_PE_DMA_WINDOW            (RTAS_TOKEN_BASE + 0x26)
> +#define RTAS_IBM_CREATE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x27)
> +#define RTAS_IBM_REMOVE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x28)
> +#define RTAS_IBM_RESET_PE_DMA_WINDOW            (RTAS_TOKEN_BASE + 0x29)
>  
> -#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x26)
> +#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x2A)
>  
>  /* RTAS ibm,get-system-parameter token values */
>  #define RTAS_SYSPARM_SPLPAR_CHARACTERISTICS      20
> @@ -558,6 +572,7 @@ struct sPAPRTCETable {
>      uint64_t bus_offset;
>      uint32_t page_shift;
>      uint64_t *table;
> +    uint64_t *migtable;
>      bool bypass;
>      int fd;
>      MemoryRegion root, iommu;
> diff --git a/trace-events b/trace-events
> index 3d1aeea..edd3164 100644
> --- a/trace-events
> +++ b/trace-events
> @@ -1365,6 +1365,10 @@ spapr_iommu_pci_indirect(uint64_t liobn, uint64_t ioba, uint64_t tce, uint64_t i
>  spapr_iommu_pci_stuff(uint64_t liobn, uint64_t ioba, uint64_t tce_value, uint64_t npages, uint64_t ret) "liobn=%"PRIx64" ioba=0x%"PRIx64" tcevalue=0x%"PRIx64" npages=%"PRId64" ret=%"PRId64
>  spapr_iommu_xlate(uint64_t liobn, uint64_t ioba, uint64_t tce, unsigned perm, unsigned pgsize) "liobn=%"PRIx64" 0x%"PRIx64" -> 0x%"PRIx64" perm=%u mask=%x"
>  spapr_iommu_alloc_table(uint64_t liobn, void *table, int fd) "liobn=%"PRIx64" table=%p fd=%d"
> +spapr_iommu_ddw_query(uint64_t buid, uint32_t cfgaddr, unsigned wa, uint64_t win_size, uint32_t pgmask) "buid=%"PRIx64" addr=%"PRIx32", %u windows available, max window size=%"PRIx64", mask=%"PRIx32
> +spapr_iommu_ddw_create(uint64_t buid, uint32_t cfgaddr, unsigned long long pg_size, unsigned long long req_size, uint64_t start, uint32_t liobn, long ret) "buid=%"PRIx64" addr=%"PRIx32", page size=0x%llx, requested=0x%llx, start addr=%"PRIx64", liobn=%"PRIx32", ret = %ld"
> +spapr_iommu_ddw_remove(uint32_t liobn, long ret) "liobn=%"PRIx32", ret = %ld"
> +spapr_iommu_ddw_reset(uint64_t buid, uint32_t cfgaddr, long ret) "buid=%"PRIx64" addr=%"PRIx32", ret = %ld"
>  
>  # hw/ppc/ppc.c
>  ppc_tb_adjust(uint64_t offs1, uint64_t offs2, int64_t diff, int64_t seconds) "adjusted from 0x%"PRIx64" to 0x%"PRIx64", diff %"PRId64" (%"PRId64"s)"

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


* Re: [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW)
  2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
                   ` (13 preceding siblings ...)
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW) Alexey Kardashevskiy
@ 2015-06-23  6:44 ` David Gibson
  2015-06-24 10:52   ` Alexey Kardashevskiy
  14 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2015-06-23  6:44 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf


On Thu, Jun 18, 2015 at 09:37:22PM +1000, Alexey Kardashevskiy wrote:
> 
> (cut-n-paste from kernel patchset)
> 
> Each Partitionable Endpoint (IOMMU group) has an address range on a PCI bus
> where devices are allowed to do DMA. These ranges are called DMA windows.
> By default, there is a single DMA window, 1 or 2GB big, mapped at zero
> on a PCI bus.
> 
> PAPR defines a DDW RTAS API which allows pseries guests to query
> the hypervisor about DDW support and capabilities (page size mask
> for now). A pseries guest may request DMA windows additional to the
> default one using this RTAS API.
> The existing pseries Linux guests request an additional window as big as
> the guest RAM and map the entire guest RAM into it, which effectively
> creates a direct mapping of the guest memory to the PCI bus.
> 
> This patchset reworks PPC64 IOMMU code and adds necessary structures
> to support big windows.
> 
> Once a Linux guest discovers the presence of DDW, it does:
> 1. query the hypervisor about the number of available windows and page size masks;
> 2. create a window with the biggest possible page size (today 4K/64K/16M);
> 3. map the entire guest RAM via H_PUT_TCE* hypercalls;
> 4. switch dma_ops to direct_dma_ops on the selected PE.
> 
> Once this is done, H_PUT_TCE is not called anymore for 64bit devices and
> the guest does not waste time on DMA map/unmap operations.
> 
> Note that 32bit devices won't use DDW and will keep using the default
> DMA window so KVM optimizations will be required (to be posted later).
> 
> This patchset adds DDW support for pseries. The host kernel changes are
> required, posted as:
> 
> [PATCH kernel v11 00/34] powerpc/iommu/vfio: Enable Dynamic DMA windows
> 
> This patchset is based on git://github.com/dgibson/qemu.git spapr-next branch.

A couple of general queries - this touches on the kernel part as well
as the QEMU part:

 * Am I correct in thinking that the point of doing the
   pre-registration stuff is to allow the kernel to handle PUT_TCE
   in real mode?  i.e. that the advantage of doing preregistration
   rather than accounting on the DMA_MAP and DMA_UNMAP itself only
   appears once you have kernel KVM+VFIO acceleration?

 * Do you have test numbers to show that it's still worthwhile to have
   kernel acceleration once you have a guest using DDW?  With DDW in
   play, even if PUT_TCE is slow, it should be called a lot less
   often.

The reason I ask is that the preregistration handling is a pretty big
chunk of code that inserts itself into some pretty core kernel data
structures, all for one pretty specific use case.  We only want to do
that if there's a strong justification for it.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


* Re: [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)
  2015-06-23  6:38   ` David Gibson
@ 2015-06-24 10:37     ` Alexey Kardashevskiy
  0 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-24 10:37 UTC (permalink / raw)
  To: David Gibson
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On 06/23/2015 04:38 PM, David Gibson wrote:
> On Thu, Jun 18, 2015 at 09:37:36PM +1000, Alexey Kardashevskiy wrote:
>> This adds support for the Dynamic DMA Windows (DDW) option defined by
>> the SPAPR specification, which allows a guest to have additional DMA window(s).
>>
>> This implements DDW for emulated and VFIO devices. As all TCE root regions
>> are mapped at 0 and are 64bit long (and the actual tables are child regions),
>> this replaces memory_region_add_subregion() with _overlap() to make
>> the QEMU memory API happy.
>>
>> This reserves RTAS token numbers for DDW calls.
>>
>> This implements helpers to interact with VFIO kernel interface.
>>
>> This changes the TCE table migration descriptor to support dynamic
>> tables since, from now on, the PHB will create as many stub TCE table objects
>> as it can possibly support, but not all of them might be initialized at
>> the time of migration because DDW might or might not be requested by
>> the guest.
>>
>> The "ddw" property is enabled by default on a PHB but for compatibility
>> the pseries-2.3 machine and older disable it.
>>
>> This implements DDW for VFIO. The host kernel support is required.
>> This adds a "levels" property to PHB to control the number of levels
>> in the actual TCE table allocated by the host kernel, 0 is the default
>> value to tell QEMU to calculate the correct value. Current hardware
>> supports up to 5 levels.
>>
>> The existing linux guests try creating one additional huge DMA window
>> with 64K or 16MB pages and map the entire guest RAM to. If succeeded,
>> the guest switches to dma_direct_ops and never calls TCE hypercalls
>> (H_PUT_TCE,...) again. This enables VFIO devices to use the entire RAM
>> and not waste time on map/unmap later.
>>
>> This adds 4 RTAS handlers:
>> * ibm,query-pe-dma-window
>> * ibm,create-pe-dma-window
>> * ibm,remove-pe-dma-window
>> * ibm,reset-pe-dma-window
>> These are registered from type_init() callback.
>>
>> These RTAS handlers are implemented in a separate file to avoid polluting
>> spapr_iommu.c with PCI.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>
>> diff --git a/hw/ppc/Makefile.objs b/hw/ppc/Makefile.objs
>> index c8ab06e..0b2ff6d 100644
>> --- a/hw/ppc/Makefile.objs
>> +++ b/hw/ppc/Makefile.objs
>> @@ -7,6 +7,9 @@ obj-$(CONFIG_PSERIES) += spapr_pci.o spapr_rtc.o spapr_drc.o
>>   ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES)$(CONFIG_LINUX), yyy)
>>   obj-y += spapr_pci_vfio.o
>>   endif
>> +ifeq ($(CONFIG_PCI)$(CONFIG_PSERIES), yy)
>> +obj-y += spapr_rtas_ddw.o
>> +endif
>>   # PowerPC 4xx boards
>>   obj-y += ppc405_boards.o ppc4xx_devs.o ppc405_uc.o ppc440_bamboo.o
>>   obj-y += ppc4xx_pci.o
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index 5ca817c..d50d50b 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -1860,6 +1860,11 @@ static const TypeInfo spapr_machine_info = {
>>               .driver   = "spapr-pci-host-bridge",\
>>               .property = "dynamic-reconfiguration",\
>>               .value    = "off",\
>> +        },\
>> +        {\
>> +            .driver   = TYPE_SPAPR_PCI_HOST_BRIDGE,\
>> +            .property = "ddw",\
>> +            .value    = stringify(off),\
>>           },
>>
>>   #define SPAPR_COMPAT_2_2 \
>> diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
>> index 5e6bdb4..eaa1943 100644
>> --- a/hw/ppc/spapr_iommu.c
>> +++ b/hw/ppc/spapr_iommu.c
>> @@ -136,6 +136,15 @@ static IOMMUTLBEntry spapr_tce_translate_iommu(MemoryRegion *iommu, hwaddr addr,
>>       return ret;
>>   }
>>
>> +static void spapr_tce_table_pre_save(void *opaque)
>> +{
>> +    sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
>> +
>> +    tcet->migtable = tcet->table;
>> +}
>> +
>> +static void spapr_tce_table_do_enable(sPAPRTCETable *tcet, bool vfio_accel);
>> +
>>   static int spapr_tce_table_post_load(void *opaque, int version_id)
>>   {
>>       sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
>> @@ -144,22 +153,43 @@ static int spapr_tce_table_post_load(void *opaque, int version_id)
>>           spapr_vio_set_bypass(tcet->vdev, tcet->bypass);
>>       }
>>
>> +    if (!tcet->migtable) {
>> +        return 0;
>> +    }
>> +
>> +    if (tcet->enabled) {
>> +        if (!tcet->table) {
>> +            tcet->enabled = false;
>> +            /* VFIO does not migrate so pass vfio_accel == false */
>> +            spapr_tce_table_do_enable(tcet, false);
>> +        }
>> +        memcpy(tcet->table, tcet->migtable,
>> +               tcet->nb_table * sizeof(tcet->table[0]));
>> +        free(tcet->migtable);
>> +        tcet->migtable = NULL;
>> +    }
>> +
>>       return 0;
>>   }
>>
>>   static const VMStateDescription vmstate_spapr_tce_table = {
>>       .name = "spapr_iommu",
>> -    .version_id = 2,
>> +    .version_id = 3,
>>       .minimum_version_id = 2,
>> +    .pre_save = spapr_tce_table_pre_save,
>>       .post_load = spapr_tce_table_post_load,
>>       .fields      = (VMStateField []) {
>>           /* Sanity check */
>>           VMSTATE_UINT32_EQUAL(liobn, sPAPRTCETable),
>> -        VMSTATE_UINT32_EQUAL(nb_table, sPAPRTCETable),
>>
>>           /* IOMMU state */
>> +        VMSTATE_BOOL_V(enabled, sPAPRTCETable, 3),
>> +        VMSTATE_UINT64_V(bus_offset, sPAPRTCETable, 3),
>> +        VMSTATE_UINT32_V(page_shift, sPAPRTCETable, 3),
>> +        VMSTATE_UINT32(nb_table, sPAPRTCETable),
>>           VMSTATE_BOOL(bypass, sPAPRTCETable),
>> -        VMSTATE_VARRAY_UINT32(table, sPAPRTCETable, nb_table, 0, vmstate_info_uint64, uint64_t),
>> +        VMSTATE_VARRAY_UINT32_ALLOC(migtable, sPAPRTCETable, nb_table, 0,
>> +                                    vmstate_info_uint64, uint64_t),
>>
>>           VMSTATE_END_OF_LIST()
>>       },
>> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
>> index 1f980fa..ab2d650 100644
>> --- a/hw/ppc/spapr_pci.c
>> +++ b/hw/ppc/spapr_pci.c
>> @@ -719,6 +719,8 @@ static AddressSpace *spapr_pci_dma_iommu(PCIBus *bus, void *opaque, int devfn)
>>   static int spapr_phb_dma_update(Object *child, void *opaque)
>>   {
>>       int ret = 0;
>> +    uint64_t bus_offset = 0;
>> +    sPAPRPHBState *sphb = opaque;
>>       sPAPRTCETable *tcet = (sPAPRTCETable *)
>>           object_dynamic_cast(child, TYPE_SPAPR_TCE_TABLE);
>>
>> @@ -726,6 +728,17 @@ static int spapr_phb_dma_update(Object *child, void *opaque)
>>           return 0;
>>       }
>>
>> +    ret = spapr_phb_vfio_dma_init_window(sphb,
>> +                                         tcet->page_shift,
>> +                                         tcet->nb_table << tcet->page_shift,
>> +                                         &bus_offset);
>> +    if (ret) {
>> +        return ret;
>> +    }
>> +    if (bus_offset != tcet->bus_offset) {
>> +        return -EFAULT;
>> +    }
>> +
>>       if (tcet->fd >= 0) {
>>           /*
>>            * We got first vfio-pci device on accelerated table.
>> @@ -749,6 +762,9 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
>>
>>       sphb->dma32_window_start = 0;
>>       sphb->dma32_window_size = SPAPR_PCI_DMA32_SIZE;
>> +    sphb->windows_supported = SPAPR_PCI_DMA_MAX_WINDOWS;
>> +    sphb->page_size_mask = (1ULL << 12) | (1ULL << 16) | (1ULL << 24);
>> +    sphb->dma64_window_size = pow2ceil(ram_size);
>
> This should probably be maxram_size so we're ready for hotplug memory
> - and in some other places too.


Correct. I'll fix it.


>
>>
>>       ret = spapr_phb_vfio_dma_capabilities_update(sphb);
>>       sphb->has_vfio = (ret == 0);
>> @@ -756,12 +772,31 @@ static int spapr_phb_dma_capabilities_update(sPAPRPHBState *sphb)
>>       return 0;
>>   }
>>
>> -static int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
>> -                                     uint32_t liobn, uint32_t page_shift,
>> -                                     uint64_t window_size)
>> +int spapr_phb_dma_init_window(sPAPRPHBState *sphb,
>> +                              uint32_t liobn, uint32_t page_shift,
>> +                              uint64_t window_size)
>>   {
>>       uint64_t bus_offset = sphb->dma32_window_start;
>>       sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
>> +    int ret;
>> +
>> +    if (SPAPR_PCI_DMA_WINDOW_NUM(liobn) && !sphb->ddw_enabled) {
>> +        return -1;
>> +    }
>> +
>> +    if (sphb->ddw_enabled) {
>> +        if (sphb->has_vfio) {
>> +            ret = spapr_phb_vfio_dma_init_window(sphb,
>> +                                                 page_shift, window_size,
>> +                                                 &bus_offset);
>> +            if (ret) {
>> +                return ret;
>> +            }
>> +        } else if (SPAPR_PCI_DMA_WINDOW_NUM(liobn)) {
>> +            /* No VFIO so we choose a huge window address */
>> +            bus_offset = SPAPR_PCI_DMA64_START;
>
> Won't this logic break if you hotplug a VFIO device onto a PHB that
> previously didn't have any?


This branch executes when sphb->has_vfio==false. If we already had a huge 
window and are now adding a new VFIO device, and the host kernel returns 
a wrong bus address, we check that the bus address did not change; that is 
the "if (bus_offset != tcet->bus_offset)" check a few chunks above.





-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW)
  2015-06-23  6:44 ` [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) David Gibson
@ 2015-06-24 10:52   ` Alexey Kardashevskiy
  2015-06-25 19:59     ` Alex Williamson
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-24 10:52 UTC (permalink / raw)
  To: David Gibson
  Cc: Alex Williamson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On 06/23/2015 04:44 PM, David Gibson wrote:
> On Thu, Jun 18, 2015 at 09:37:22PM +1000, Alexey Kardashevskiy wrote:
>>
>> (cut-n-paste from kernel patchset)
>>
>> Each Partitionable Endpoint (IOMMU group) has an address range on a PCI bus
>> where devices are allowed to do DMA. These ranges are called DMA windows.
>> By default, there is a single DMA window, 1 or 2GB big, mapped at zero
>> on a PCI bus.
>>
>> PAPR defines a DDW RTAS API which allows pseries guests
>> querying the hypervisor about DDW support and capabilities (page size mask
>> for now). A pseries guest may request an additional (to the default)
>> DMA windows using this RTAS API.
>> The existing pseries Linux guests request an additional window as big as
>> the guest RAM and map the entire guest window which effectively creates
>> direct mapping of the guest memory to a PCI bus.
>>
>> This patchset reworks PPC64 IOMMU code and adds necessary structures
>> to support big windows.
>>
>> Once a Linux guest discovers the presence of DDW, it does:
>> 1. query hypervisor about number of available windows and page size masks;
>> 2. create a window with the biggest possible page size (today 4K/64K/16M);
>> 3. map the entire guest RAM via H_PUT_TCE* hypercalls;
>> 4. switche dma_ops to direct_dma_ops on the selected PE.
>>
>> Once this is done, H_PUT_TCE is not called anymore for 64bit devices and
>> the guest does not waste time on DMA map/unmap operations.
>>
>> Note that 32bit devices won't use DDW and will keep using the default
>> DMA window so KVM optimizations will be required (to be posted later).
>>
>> This patchset adds DDW support for pseries. The host kernel changes are
>> required, posted as:
>>
>> [PATCH kernel v11 00/34] powerpc/iommu/vfio: Enable Dynamic DMA windows
>>
>> This patchset is based on git://github.com/dgibson/qemu.git spapr-next branch.
>
> A couple of general queries - this touchs on the kernel part as well
> as the qemu part:
>
>   * Am I correct in thinking that the point in doing the
>     pre-registration stuff is to allow the kernel to handle PUT_TCE
>     in real mode?  i.e. that the advatage of doing preregistration
>     rather than accounting on the DMA_MAP and DMA_UNMAP itself only
>     appears once you have kernel KVM+VFIO acceleration?


Handling PUT_TCE includes 2 things:
1. get_user_pages_fast() and put_page()
2. update locked_vm

Both are tricky in real mode, but 2) is also tricky in virtual mode as I 
have to deal with multiple unrelated 32bit and 64bit windows (VFIO does not 
care whether they belong to one process or many) with an IOMMU page size of 
4K while gup/put_page work with 64K pages (our host kernel's default page size).

But yes, without keeping real mode handlers in mind, this thing could have 
been made simpler.


>   * Do you have test numbers to show that it's still worthwhile to have
>     kernel acceleration once you have a guest using DDW?  With DDW in
>     play, even if PUT_TCE is slow, it should be called a lot less
>     often.

With DDW, the whole of RAM is mapped once, at the first set_dma_mask(64bit) 
call made by the guest; it takes just a few PUT_TCE_INDIRECT calls.

If the guest uses DDW, real mode handlers cannot possibly beat it, and I 
have reports that real mode handlers are noticeably slower than direct DMA 
mapping (i.e. DDW) for 40Gb devices (10Gb seems to be fine but I have not 
tried a dozen guests yet).


> The reason I ask is that the preregistration handling is a pretty big
> chunk of code that inserts itself into some pretty core kernel data
> structures, all for one pretty specific use case.  We only want to do
> that if there's a strong justification for it.

Exactly. I keep asking Ben and Paul periodically if we want to keep it and 
the answer is always yes :)


About "vfio: spapr: Move SPAPR-related code to a separate file" - I guess I 
am better off removing it for now, right?



-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW)
  2015-06-24 10:52   ` Alexey Kardashevskiy
@ 2015-06-25 19:59     ` Alex Williamson
  2015-06-26  7:01       ` David Gibson
  0 siblings, 1 reply; 31+ messages in thread
From: Alex Williamson @ 2015-06-25 19:59 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: Alexander Graf, qemu-ppc, qemu-devel, Gavin Shan, David Gibson

On Wed, 2015-06-24 at 20:52 +1000, Alexey Kardashevskiy wrote:
> On 06/23/2015 04:44 PM, David Gibson wrote:
> > On Thu, Jun 18, 2015 at 09:37:22PM +1000, Alexey Kardashevskiy wrote:
> >>
> >> (cut-n-paste from kernel patchset)
> >>
> >> Each Partitionable Endpoint (IOMMU group) has an address range on a PCI bus
> >> where devices are allowed to do DMA. These ranges are called DMA windows.
> >> By default, there is a single DMA window, 1 or 2GB big, mapped at zero
> >> on a PCI bus.
> >>
> >> PAPR defines a DDW RTAS API which allows pseries guests
> >> querying the hypervisor about DDW support and capabilities (page size mask
> >> for now). A pseries guest may request an additional (to the default)
> >> DMA windows using this RTAS API.
> >> The existing pseries Linux guests request an additional window as big as
> >> the guest RAM and map the entire guest window which effectively creates
> >> direct mapping of the guest memory to a PCI bus.
> >>
> >> This patchset reworks PPC64 IOMMU code and adds necessary structures
> >> to support big windows.
> >>
> >> Once a Linux guest discovers the presence of DDW, it does:
> >> 1. query hypervisor about number of available windows and page size masks;
> >> 2. create a window with the biggest possible page size (today 4K/64K/16M);
> >> 3. map the entire guest RAM via H_PUT_TCE* hypercalls;
> >> 4. switche dma_ops to direct_dma_ops on the selected PE.
> >>
> >> Once this is done, H_PUT_TCE is not called anymore for 64bit devices and
> >> the guest does not waste time on DMA map/unmap operations.
> >>
> >> Note that 32bit devices won't use DDW and will keep using the default
> >> DMA window so KVM optimizations will be required (to be posted later).
> >>
> >> This patchset adds DDW support for pseries. The host kernel changes are
> >> required, posted as:
> >>
> >> [PATCH kernel v11 00/34] powerpc/iommu/vfio: Enable Dynamic DMA windows
> >>
> >> This patchset is based on git://github.com/dgibson/qemu.git spapr-next branch.
> >
> > A couple of general queries - this touchs on the kernel part as well
> > as the qemu part:
> >
> >   * Am I correct in thinking that the point in doing the
> >     pre-registration stuff is to allow the kernel to handle PUT_TCE
> >     in real mode?  i.e. that the advatage of doing preregistration
> >     rather than accounting on the DMA_MAP and DMA_UNMAP itself only
> >     appears once you have kernel KVM+VFIO acceleration?
> 
> 
> Handling PUT_TCE includes 2 things:
> 1. get_user_pages_fast() and put_page()
> 2. update locked_vm
> 
> Both are tricky in real mode but 2) is also tricky in virtual mode as I 
> have to deal with multiple unrelated 32bit and 64bit windows (VFIO does not 
> care if they belong to one or many processes) with IOMMU page size==4K and 
> gup/put_page working with 64k pages (our default page size for host kernel).
> 
> But yes, without keeping real mode handlers in mind, this thing could have 
> been made simpler.
> 
> 
> >   * Do you have test numbers to show that it's still worthwhile to have
> >     kernel acceleration once you have a guest using DDW?  With DDW in
> >     play, even if PUT_TCE is slow, it should be called a lot less
> >     often.
> 
> With DDW, the whole RAM mapped once at first set_dma_mask(64bit) called by 
> the guest, it is just a few PUT_TCE_INDIRECT calls.
> 
> If the guest uses DDW, real mode handlers cannot possibly beat it and I 
> have reports that real mode handlers are noticibly slower than direct DMA 
> mapping (i.e. DDW) for 40Gb devices (10Gb seems to be fine but I have not 
> tried a dozen of guests yet).
> 
> 
> > The reason I ask is that the preregistration handling is a pretty big
> > chunk of code that inserts itself into some pretty core kernel data
> > structures, all for one pretty specific use case.  We only want to do
> > that if there's a strong justification for it.
> 
> Exactly. I keep asking Ben and Paul periodically if we want to keep it and 
> the answer is always yes :)
> 
> 
> About "vfio: spapr: Move SPAPR-related code to a separate file" - I guess I 
> better off removing it for now, right?

I won't block the series for moving the code out, but it seems like
we're consistently making spapr an exception.  It seemed like we had
settled on some spapr code that could be semi-portable to any
guest-based IOMMU model, but patch 02/14 pulls that off into the spapr
specific wart.  Theoretically we could support a guest-based IOMMU with
type1 code as well, we just don't have much impetus to do so nor does
type1 really have a mapping interface designed for that kind of
performance.  We can always pull the code out of spapr or git history if
we have some need for it though.  Thanks,

Alex

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container
  2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container Alexey Kardashevskiy
@ 2015-06-25 19:59   ` Alex Williamson
  2015-06-30  3:32     ` Alexey Kardashevskiy
  0 siblings, 1 reply; 31+ messages in thread
From: Alex Williamson @ 2015-06-25 19:59 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: David Gibson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On Thu, 2015-06-18 at 21:37 +1000, Alexey Kardashevskiy wrote:
> This enables multiple IOMMU groups in one VFIO container which means
> that multiple devices from different groups can share the same IOMMU
> table (or tables if DDW).
> 
> This removes a group id from vfio_container_ioctl(). The kernel support
> is required for this; if the host kernel does not have the support,
> it will allow only one group per container. The PHB's "iommuid" property
> is ignored. The ioctl is called for every container attached to
> the address space. At the moment there is just one container anyway.
> 
> If there is no container attached to the address space,
> vfio_container_do_ioctl() returns -1.
> 
> This removes casts to sPAPRPHBVFIOState as none of sPAPRPHBVFIOState
> members is accessed here.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/ppc/spapr_pci_vfio.c | 21 ++++++---------------
>  hw/vfio/common.c        | 20 ++++++--------------
>  include/hw/vfio/vfio.h  |  2 +-
>  3 files changed, 13 insertions(+), 30 deletions(-)
> 
> diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
> index 99a1be5..e89cbff 100644
> --- a/hw/ppc/spapr_pci_vfio.c
> +++ b/hw/ppc/spapr_pci_vfio.c
> @@ -35,12 +35,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
>      sPAPRTCETable *tcet;
>      uint32_t liobn = svphb->phb.dma_liobn;
>  
> -    if (svphb->iommugroupid == -1) {
> -        error_setg(errp, "Wrong IOMMU group ID %d", svphb->iommugroupid);
> -        return;
> -    }
> -
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&svphb->phb.iommu_as,
>                                 VFIO_CHECK_EXTENSION,
>                                 (void *) VFIO_SPAPR_TCE_IOMMU);
>      if (ret != 1) {
> @@ -49,7 +44,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
>          return;
>      }
>  
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>                                 VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
>      if (ret) {
>          error_setg_errno(errp, -ret,
> @@ -79,7 +74,6 @@ static void spapr_phb_vfio_reset(DeviceState *qdev)
>  static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>                                           unsigned int addr, int option)
>  {
> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>      struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>      int ret;
>  
> @@ -116,7 +110,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>          return RTAS_OUT_PARAM_ERROR;
>      }
>  
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>                                 VFIO_EEH_PE_OP, &op);
>      if (ret < 0) {
>          return RTAS_OUT_HW_ERROR;
> @@ -127,12 +121,11 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>  
>  static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
>  {
> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>      struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>      int ret;
>  
>      op.op = VFIO_EEH_PE_GET_STATE;
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>                                 VFIO_EEH_PE_OP, &op);
>      if (ret < 0) {
>          return RTAS_OUT_PARAM_ERROR;
> @@ -144,7 +137,6 @@ static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
>  
>  static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>  {
> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>      struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>      int ret;
>  
> @@ -162,7 +154,7 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>          return RTAS_OUT_PARAM_ERROR;
>      }
>  
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>                                 VFIO_EEH_PE_OP, &op);
>      if (ret < 0) {
>          return RTAS_OUT_HW_ERROR;
> @@ -173,12 +165,11 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>  
>  static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
>  {
> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>      struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>      int ret;
>  
>      op.op = VFIO_EEH_PE_CONFIGURE;
> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>                                 VFIO_EEH_PE_OP, &op);
>      if (ret < 0) {
>          return RTAS_OUT_PARAM_ERROR;
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 3e4c685..369e564 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -793,34 +793,26 @@ void vfio_put_base_device(VFIODevice *vbasedev)
>      close(vbasedev->fd);
>  }
>  
> -static int vfio_container_do_ioctl(AddressSpace *as, int32_t groupid,
> +static int vfio_container_do_ioctl(AddressSpace *as,
>                                     int req, void *param)
>  {
> -    VFIOGroup *group;
>      VFIOContainer *container;
>      int ret = -1;
> +    VFIOAddressSpace *space = vfio_get_address_space(as);
>  
> -    group = vfio_get_group(groupid, as);
> -    if (!group) {
> -        error_report("vfio: group %d not registered", groupid);
> -        return ret;
> -    }
> -
> -    container = group->container;
> -    if (group->container) {
> +    QLIST_FOREACH(container, &space->containers, next) {
>          ret = ioctl(container->fd, req, param);
>          if (ret < 0) {
>              error_report("vfio: failed to ioctl %d to container: ret=%d, %s",
>                           _IOC_NR(req) - VFIO_BASE, ret, strerror(errno));
> +            return -errno;
>          }
>      }


How does this work with VFIO_EEH_PE_OP?  Suddenly we're potentially
calling it on multiple containers?  Should all of this container ioctl
code also be pulled out to spapr?

There's clearly an approach here where the patches could be split into
separate ppc and vfio series rather than this horrible practice of
intermixing them.  Thanks,

Alex
>  
> -    vfio_put_group(group);
> -
>      return ret;
>  }
>  
> -int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
> +int vfio_container_ioctl(AddressSpace *as,
>                           int req, void *param)
>  {
>      /* We allow only certain ioctls to the container */
> @@ -835,5 +827,5 @@ int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
>          return -1;
>      }
>  
> -    return vfio_container_do_ioctl(as, groupid, req, param);
> +    return vfio_container_do_ioctl(as, req, param);
>  }
> diff --git a/include/hw/vfio/vfio.h b/include/hw/vfio/vfio.h
> index 0b26cd8..76b5744 100644
> --- a/include/hw/vfio/vfio.h
> +++ b/include/hw/vfio/vfio.h
> @@ -3,7 +3,7 @@
>  
>  #include "qemu/typedefs.h"
>  
> -extern int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
> +extern int vfio_container_ioctl(AddressSpace *as,
>                                  int req, void *param);
>  
>  #endif

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW)
  2015-06-25 19:59     ` Alex Williamson
@ 2015-06-26  7:01       ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2015-06-26  7:01 UTC (permalink / raw)
  To: Alex Williamson
  Cc: Alexey Kardashevskiy, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

[-- Attachment #1: Type: text/plain, Size: 5584 bytes --]

On Thu, Jun 25, 2015 at 01:59:38PM -0600, Alex Williamson wrote:
> On Wed, 2015-06-24 at 20:52 +1000, Alexey Kardashevskiy wrote:
> > On 06/23/2015 04:44 PM, David Gibson wrote:
> > > On Thu, Jun 18, 2015 at 09:37:22PM +1000, Alexey Kardashevskiy wrote:
> > >>
> > >> (cut-n-paste from kernel patchset)
> > >>
> > >> Each Partitionable Endpoint (IOMMU group) has an address range on a PCI bus
> > >> where devices are allowed to do DMA. These ranges are called DMA windows.
> > >> By default, there is a single DMA window, 1 or 2GB big, mapped at zero
> > >> on a PCI bus.
> > >>
> > >> PAPR defines a DDW RTAS API which allows pseries guests
> > >> querying the hypervisor about DDW support and capabilities (page size mask
> > >> for now). A pseries guest may request an additional (to the default)
> > >> DMA windows using this RTAS API.
> > >> The existing pseries Linux guests request an additional window as big as
> > >> the guest RAM and map the entire guest window which effectively creates
> > >> direct mapping of the guest memory to a PCI bus.
> > >>
> > >> This patchset reworks PPC64 IOMMU code and adds necessary structures
> > >> to support big windows.
> > >>
> > >> Once a Linux guest discovers the presence of DDW, it does:
> > >> 1. query hypervisor about number of available windows and page size masks;
> > >> 2. create a window with the biggest possible page size (today 4K/64K/16M);
> > >> 3. map the entire guest RAM via H_PUT_TCE* hypercalls;
> > >> 4. switche dma_ops to direct_dma_ops on the selected PE.
> > >>
> > >> Once this is done, H_PUT_TCE is not called anymore for 64bit devices and
> > >> the guest does not waste time on DMA map/unmap operations.
> > >>
> > >> Note that 32bit devices won't use DDW and will keep using the default
> > >> DMA window so KVM optimizations will be required (to be posted later).
> > >>
> > >> This patchset adds DDW support for pseries. The host kernel changes are
> > >> required, posted as:
> > >>
> > >> [PATCH kernel v11 00/34] powerpc/iommu/vfio: Enable Dynamic DMA windows
> > >>
> > >> This patchset is based on git://github.com/dgibson/qemu.git spapr-next branch.
> > >
> > > A couple of general queries - this touchs on the kernel part as well
> > > as the qemu part:
> > >
> > >   * Am I correct in thinking that the point in doing the
> > >     pre-registration stuff is to allow the kernel to handle PUT_TCE
> > >     in real mode?  i.e. that the advatage of doing preregistration
> > >     rather than accounting on the DMA_MAP and DMA_UNMAP itself only
> > >     appears once you have kernel KVM+VFIO acceleration?
> > 
> > 
> > Handling PUT_TCE includes 2 things:
> > 1. get_user_pages_fast() and put_page()
> > 2. update locked_vm
> > 
> > Both are tricky in real mode but 2) is also tricky in virtual mode as I 
> > have to deal with multiple unrelated 32bit and 64bit windows (VFIO does not 
> > care if they belong to one or many processes) with IOMMU page size==4K and 
> > gup/put_page working with 64k pages (our default page size for host kernel).
> > 
> > But yes, without keeping real mode handlers in mind, this thing could have 
> > been made simpler.
> > 
> > 
> > >   * Do you have test numbers to show that it's still worthwhile to have
> > >     kernel acceleration once you have a guest using DDW?  With DDW in
> > >     play, even if PUT_TCE is slow, it should be called a lot less
> > >     often.
> > 
> > With DDW, the whole RAM mapped once at first set_dma_mask(64bit) called by 
> > the guest, it is just a few PUT_TCE_INDIRECT calls.
> > 
> > If the guest uses DDW, real mode handlers cannot possibly beat it and I 
> > have reports that real mode handlers are noticibly slower than direct DMA 
> > mapping (i.e. DDW) for 40Gb devices (10Gb seems to be fine but I have not 
> > tried a dozen of guests yet).
> > 
> > 
> > > The reason I ask is that the preregistration handling is a pretty big
> > > chunk of code that inserts itself into some pretty core kernel data
> > > structures, all for one pretty specific use case.  We only want to do
> > > that if there's a strong justification for it.
> > 
> > Exactly. I keep asking Ben and Paul periodically if we want to keep it and 
> > the answer is always yes :)
> > 
> > 
> > About "vfio: spapr: Move SPAPR-related code to a separate file" - I guess I 
> > better off removing it for now, right?
> 
> I won't block the series for moving the code out, but it seems like
> we're consistently making spapr an exception.  It seemed like we had
> settled on some spapr code that could be semi-portable to any
> guest-based IOMMU model, but patch 02/14 pulls that off into the spapr
> specific wart.  Theoretically we could support a guest-based IOMMU with
> type1 code as well, we just don't have much impetus to do so nor does
> type1 really have a mapping interface designed for that kind of
> performance.  We can always pull the code out of spapr or git history if
> we have some need for it though.  Thanks,

Yeah, I think 2/14 should just be dropped.  There are certainly places
(particularly on the kernel side) where it makes sense to split out the
code for the different IOMMU types.  But the code that's being moved
in 2/14 isn't really different for spapr, nor is it large enough that
it needs to be in its own file.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container
  2015-06-25 19:59   ` Alex Williamson
@ 2015-06-30  3:32     ` Alexey Kardashevskiy
  0 siblings, 0 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2015-06-30  3:32 UTC (permalink / raw)
  To: Alex Williamson
  Cc: David Gibson, qemu-ppc, qemu-devel, Gavin Shan, Alexander Graf

On 06/26/2015 05:59 AM, Alex Williamson wrote:
> On Thu, 2015-06-18 at 21:37 +1000, Alexey Kardashevskiy wrote:
>> This enables multiple IOMMU groups in one VFIO container which means
>> that multiple devices from different groups can share the same IOMMU
>> table (or tables if DDW).
>>
>> This removes the group id from vfio_container_ioctl(). Kernel support
>> is required for this; if the host kernel lacks it, only one group is
>> allowed per container. The PHB's "iommuid" property
>> is ignored. The ioctl is called for every container attached to
>> the address space. At the moment there is just one container anyway.
>>
>> If there is no container attached to the address space,
>> vfio_container_do_ioctl() returns -1.
>>
>> This removes casts to sPAPRPHBVFIOState as none of sPAPRPHBVFIOState's
>> members are accessed here.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
>> ---
>>   hw/ppc/spapr_pci_vfio.c | 21 ++++++---------------
>>   hw/vfio/common.c        | 20 ++++++--------------
>>   include/hw/vfio/vfio.h  |  2 +-
>>   3 files changed, 13 insertions(+), 30 deletions(-)
>>
>> diff --git a/hw/ppc/spapr_pci_vfio.c b/hw/ppc/spapr_pci_vfio.c
>> index 99a1be5..e89cbff 100644
>> --- a/hw/ppc/spapr_pci_vfio.c
>> +++ b/hw/ppc/spapr_pci_vfio.c
>> @@ -35,12 +35,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
>>       sPAPRTCETable *tcet;
>>       uint32_t liobn = svphb->phb.dma_liobn;
>>
>> -    if (svphb->iommugroupid == -1) {
>> -        error_setg(errp, "Wrong IOMMU group ID %d", svphb->iommugroupid);
>> -        return;
>> -    }
>> -
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&svphb->phb.iommu_as,
>>                                  VFIO_CHECK_EXTENSION,
>>                                  (void *) VFIO_SPAPR_TCE_IOMMU);
>>       if (ret != 1) {
>> @@ -49,7 +44,7 @@ static void spapr_phb_vfio_finish_realize(sPAPRPHBState *sphb, Error **errp)
>>           return;
>>       }
>>
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>>                                  VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
>>       if (ret) {
>>           error_setg_errno(errp, -ret,
>> @@ -79,7 +74,6 @@ static void spapr_phb_vfio_reset(DeviceState *qdev)
>>   static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>>                                            unsigned int addr, int option)
>>   {
>> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>>       struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>>       int ret;
>>
>> @@ -116,7 +110,7 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>>           return RTAS_OUT_PARAM_ERROR;
>>       }
>>
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>>                                  VFIO_EEH_PE_OP, &op);
>>       if (ret < 0) {
>>           return RTAS_OUT_HW_ERROR;
>> @@ -127,12 +121,11 @@ static int spapr_phb_vfio_eeh_set_option(sPAPRPHBState *sphb,
>>
>>   static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
>>   {
>> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>>       struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>>       int ret;
>>
>>       op.op = VFIO_EEH_PE_GET_STATE;
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>>                                  VFIO_EEH_PE_OP, &op);
>>       if (ret < 0) {
>>           return RTAS_OUT_PARAM_ERROR;
>> @@ -144,7 +137,6 @@ static int spapr_phb_vfio_eeh_get_state(sPAPRPHBState *sphb, int *state)
>>
>>   static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>>   {
>> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>>       struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>>       int ret;
>>
>> @@ -162,7 +154,7 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>>           return RTAS_OUT_PARAM_ERROR;
>>       }
>>
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>>                                  VFIO_EEH_PE_OP, &op);
>>       if (ret < 0) {
>>           return RTAS_OUT_HW_ERROR;
>> @@ -173,12 +165,11 @@ static int spapr_phb_vfio_eeh_reset(sPAPRPHBState *sphb, int option)
>>
>>   static int spapr_phb_vfio_eeh_configure(sPAPRPHBState *sphb)
>>   {
>> -    sPAPRPHBVFIOState *svphb = SPAPR_PCI_VFIO_HOST_BRIDGE(sphb);
>>       struct vfio_eeh_pe_op op = { .argsz = sizeof(op) };
>>       int ret;
>>
>>       op.op = VFIO_EEH_PE_CONFIGURE;
>> -    ret = vfio_container_ioctl(&svphb->phb.iommu_as, svphb->iommugroupid,
>> +    ret = vfio_container_ioctl(&sphb->iommu_as,
>>                                  VFIO_EEH_PE_OP, &op);
>>       if (ret < 0) {
>>           return RTAS_OUT_PARAM_ERROR;
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 3e4c685..369e564 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -793,34 +793,26 @@ void vfio_put_base_device(VFIODevice *vbasedev)
>>       close(vbasedev->fd);
>>   }
>>
>> -static int vfio_container_do_ioctl(AddressSpace *as, int32_t groupid,
>> +static int vfio_container_do_ioctl(AddressSpace *as,
>>                                      int req, void *param)
>>   {
>> -    VFIOGroup *group;
>>       VFIOContainer *container;
>>       int ret = -1;
>> +    VFIOAddressSpace *space = vfio_get_address_space(as);
>>
>> -    group = vfio_get_group(groupid, as);
>> -    if (!group) {
>> -        error_report("vfio: group %d not registered", groupid);
>> -        return ret;
>> -    }
>> -
>> -    container = group->container;
>> -    if (group->container) {
>> +    QLIST_FOREACH(container, &space->containers, next) {
>>           ret = ioctl(container->fd, req, param);
>>           if (ret < 0) {
>>               error_report("vfio: failed to ioctl %d to container: ret=%d, %s",
>>                            _IOC_NR(req) - VFIO_BASE, ret, strerror(errno));
>> +            return -errno;
>>           }
>>       }
>
>
> How does this work with VFIO_EEH_PE_OP?  Suddenly we're potentially
> calling it on multiple containers?

Gavin says yes.


> Should all of this container ioctl
> code also be pulled out to spapr?

Well, I could, but I won't, as I am dropping 2/14 entirely.


> There's clearly an approach here where the patches could be split into
> separate ppc and vfio series rather than this horrible practice of
> intermixing them.  Thanks,

Oh. Ok. I'll move 3/14 later in the series. I just wanted to put 2/14 and 
3/14 earlier as it allowed me to avoid changing the same lines twice in the 
same patchset.

>
> Alex
>>
>> -    vfio_put_group(group);
>> -
>>       return ret;
>>   }
>>
>> -int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
>> +int vfio_container_ioctl(AddressSpace *as,
>>                            int req, void *param)
>>   {
>>       /* We allow only certain ioctls to the container */
>> @@ -835,5 +827,5 @@ int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
>>           return -1;
>>       }
>>
>> -    return vfio_container_do_ioctl(as, groupid, req, param);
>> +    return vfio_container_do_ioctl(as, req, param);
>>   }
>> diff --git a/include/hw/vfio/vfio.h b/include/hw/vfio/vfio.h
>> index 0b26cd8..76b5744 100644
>> --- a/include/hw/vfio/vfio.h
>> +++ b/include/hw/vfio/vfio.h
>> @@ -3,7 +3,7 @@
>>
>>   #include "qemu/typedefs.h"
>>
>> -extern int vfio_container_ioctl(AddressSpace *as, int32_t groupid,
>> +extern int vfio_container_ioctl(AddressSpace *as,
>>                                   int req, void *param);
>>
>>   #endif
>
>
>


-- 
Alexey


Thread overview: 31+ messages
2015-06-18 11:37 [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 01/14] vmstate: Define VARRAY with VMS_ALLOC Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 02/14] vfio: spapr: Move SPAPR-related code to a separate file Alexey Kardashevskiy
2015-06-18 21:10   ` Alex Williamson
2015-06-19  0:16     ` Alexey Kardashevskiy
2015-06-23  5:49   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 03/14] spapr_pci_vfio: Enable multiple groups per container Alexey Kardashevskiy
2015-06-25 19:59   ` Alex Williamson
2015-06-30  3:32     ` Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 04/14] spapr_pci: Convert finish_realize() to dma_capabilities_update()+dma_init_window() Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 05/14] spapr_iommu: Move table allocation to helpers Alexey Kardashevskiy
2015-06-22  3:28   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 06/14] spapr_iommu: Introduce "enabled" state for TCE table Alexey Kardashevskiy
2015-06-22  3:45   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 07/14] spapr_iommu: Remove vfio_accel flag from sPAPRTCETable Alexey Kardashevskiy
2015-06-22  3:51   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 08/14] spapr_iommu: Add root memory region Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 09/14] spapr_pci: Do complete reset of DMA config when resetting PHB Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 10/14] spapr_vfio_pci: Remove redundant spapr-pci-vfio-host-bridge Alexey Kardashevskiy
2015-06-22  4:41   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 11/14] spapr_pci: Enable vfio-pci hotplug Alexey Kardashevskiy
2015-06-22  5:14   ` David Gibson
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 12/14] linux headers update for DDW on SPAPR Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 13/14] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering) Alexey Kardashevskiy
2015-06-18 11:37 ` [Qemu-devel] [PATCH qemu v8 14/14] spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW) Alexey Kardashevskiy
2015-06-23  6:38   ` David Gibson
2015-06-24 10:37     ` Alexey Kardashevskiy
2015-06-23  6:44 ` [Qemu-devel] [PATCH qemu v8 00/14] spapr: vfio: Enable Dynamic DMA windows (DDW) David Gibson
2015-06-24 10:52   ` Alexey Kardashevskiy
2015-06-25 19:59     ` Alex Williamson
2015-06-26  7:01       ` David Gibson
