* [PATCH v16 00/10] VIRTIO-IOMMU device
@ 2020-02-14 13:27 Eric Auger
  2020-02-14 13:27 ` [PATCH v16 01/10] virtio-iommu: Add skeleton Eric Auger
                   ` (11 more replies)
  0 siblings, 12 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This series implements the QEMU virtio-iommu device.

This matches the v0.12 spec (voted) and the corresponding
virtio-iommu driver upstreamed in Linux 5.3. All kernel
dependencies are resolved for DT integration. The virtio-iommu
can be instantiated in the ARM virt machine using:

"-device virtio-iommu-pci".

Non-DT mode is not supported yet, as it has unresolved kernel
dependencies [1].

This feature targets QEMU 5.0.

Integration with vhost devices and vfio devices is not part
of this series. Please follow Bharat's respins [2].

Best Regards

Eric

This series can be found at:
https://github.com/eauger/qemu/tree/v4.2-virtio-iommu-v16

References:
[1] [RFC 00/13] virtio-iommu on non-devicetree platforms
[2] [PATCH RFC v5 0/5] virtio-iommu: VFIO integration

Testing:
- tested with a guest using virtio-net-pci
  (,vhost=off,iommu_platform,disable-modern=off,disable-legacy=on)
  and virtio-blk-pci
- migration
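
For reference, a sketch of a command line exercising the device
(accelerator, firmware and disk image arguments are placeholders,
and note that vhost=off is a -netdev tap option):

  qemu-system-aarch64 -M virt -cpu host -enable-kvm -m 1G \
    -device virtio-iommu-pci \
    -netdev tap,id=net0,vhost=off \
    -device virtio-net-pci,netdev=net0,iommu_platform=on,disable-modern=off,disable-legacy=on \
    -drive if=none,id=hd0,file=guest.img \
    -device virtio-blk-pci,drive=hd0,iommu_platform=on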

History:

v15 -> v16:
- Collected Jean, Peter and Michael's R-bs
- the only patches without an R-b are the one related to
  hw/arm/virt.c and the last patch, added in this version
- Made the virtio-iommu-pci device non-hotpluggable (I dared
  to keep the R-b though)
- Renamed create_virtio_iommu into create_virtio_iommu_dt_bindings
- added an entry to the MAINTAINERS file

v14 -> v15:
- removed x-dt-binding and just kept check on hotplug_handler
- removed "tests: Add virtio-iommu test" as the check on
  hotplug_handler fails on PC machine
- destroy mappings in put_domain and remove
  g_tree_destroy(domain->mappings) in virtio_iommu_detach

v13 -> v14:
- added "virtio-iommu-pci: Introduce the x-dt-binding option"
- Removed the mappings gtree ref counting and simply delete
  the gtree when the last EP is detached from the domain


Eric Auger (10):
  virtio-iommu: Add skeleton
  virtio-iommu: Decode the command payload
  virtio-iommu: Implement attach/detach command
  virtio-iommu: Implement map/unmap
  virtio-iommu: Implement translate
  virtio-iommu: Implement fault reporting
  virtio-iommu: Support migration
  virtio-iommu-pci: Add virtio iommu pci support
  hw/arm/virt: Add the virtio-iommu device tree mappings
  MAINTAINERS: add virtio-iommu related files

 MAINTAINERS                      |   6 +
 hw/arm/virt.c                    |  57 +-
 hw/virtio/Kconfig                |   5 +
 hw/virtio/Makefile.objs          |   2 +
 hw/virtio/trace-events           |  20 +
 hw/virtio/virtio-iommu-pci.c     | 104 ++++
 hw/virtio/virtio-iommu.c         | 890 +++++++++++++++++++++++++++++++
 include/hw/arm/virt.h            |   2 +
 include/hw/pci/pci.h             |   1 +
 include/hw/virtio/virtio-iommu.h |  61 +++
 qdev-monitor.c                   |   1 +
 11 files changed, 1142 insertions(+), 7 deletions(-)
 create mode 100644 hw/virtio/virtio-iommu-pci.c
 create mode 100644 hw/virtio/virtio-iommu.c
 create mode 100644 include/hw/virtio/virtio-iommu.h

-- 
2.20.1




* [PATCH v16 01/10] virtio-iommu: Add skeleton
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 02/10] virtio-iommu: Decode the command payload Eric Auger
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch adds the skeleton for the virtio-iommu device.
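
For reference, the device config space layout defined by the
standard Linux header imported here looks roughly as follows
(a sketch from the v0.12 spec; fields are little-endian on the
wire):

    struct virtio_iommu_config {
        uint64_t page_size_mask;   /* bitmap of supported page sizes */
        struct {
            uint64_t start;
            uint64_t end;
        } input_range;             /* supported virtual address range */
        struct {
            uint32_t start;
            uint32_t end;
        } domain_range;            /* supported domain ID range */
        uint32_t probe_size;       /* size of the probe request buffer */
    };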

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---

v13 -> v14:
- use device_class_set_props
- updated copyright's year

v12 -> v13:
- removed IOMMU_PCI_BUS_MAX and IOMMU_PCI_DEVFN_MAX

v11 -> v12:
- remove s_by_bus_num
- drop set_features (rely on default implementation) and
  acked_features

v10 -> v11:
- mutex initialized here
- initialize tail
- included hw/qdev-properties.h
- removed g_memdup
- removed s->config.domain_range.start = 0;

v9 -> v10:
- expose VIRTIO_IOMMU_F_MMIO feature
- s/domain_bits/domain_range struct
- change error codes
- enforce unmigratable
- Kconfig

v7 -> v8:
- expose VIRTIO_IOMMU_F_BYPASS and VIRTIO_F_VERSION_1
  features
- set_config dummy implementation + tracing
- add trace in get_features
- set the features on realize() and store the acked ones
- remove inclusion of linux/virtio_iommu.h

v6 -> v7:
- removed qapi-event.h include
- add primary_bus and associated property

v4 -> v5:
- use the new v0.5 terminology (domain, endpoint)
- add the event virtqueue

v3 -> v4:
- use page_size_mask instead of page_sizes
- added set_features()
- added some traces (reset, set_status, set_features)
- empty virtio_iommu_set_config() as the driver MUST NOT
  write to device configuration fields
- add get_config trace

v2 -> v3:
- rebase on 2.10-rc0, ie. use IOMMUMemoryRegion and remove
  iommu_ops.
- advertise VIRTIO_IOMMU_F_MAP_UNMAP feature
- page_sizes set to TARGET_PAGE_SIZE

Conflicts:
	hw/virtio/trace-events
---
 hw/virtio/Kconfig                |   5 +
 hw/virtio/Makefile.objs          |   1 +
 hw/virtio/trace-events           |   7 +
 hw/virtio/virtio-iommu.c         | 265 +++++++++++++++++++++++++++++++
 include/hw/virtio/virtio-iommu.h |  57 +++++++
 5 files changed, 335 insertions(+)
 create mode 100644 hw/virtio/virtio-iommu.c
 create mode 100644 include/hw/virtio/virtio-iommu.h

diff --git a/hw/virtio/Kconfig b/hw/virtio/Kconfig
index f87def27a6..d29525b36f 100644
--- a/hw/virtio/Kconfig
+++ b/hw/virtio/Kconfig
@@ -9,6 +9,11 @@ config VIRTIO_RNG
     default y
     depends on VIRTIO
 
+config VIRTIO_IOMMU
+    bool
+    default y
+    depends on VIRTIO
+
 config VIRTIO_PCI
     bool
     default y if PCI_DEVICES
diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index de0f5fc39b..2fd9da7410 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -16,6 +16,7 @@ obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-p
 obj-$(CONFIG_VIRTIO_PMEM) += virtio-pmem.o
 common-obj-$(call land,$(CONFIG_VIRTIO_PMEM),$(CONFIG_VIRTIO_PCI)) += virtio-pmem-pci.o
 obj-$(call land,$(CONFIG_VHOST_USER_FS),$(CONFIG_VIRTIO_PCI)) += vhost-user-fs-pci.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
 obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
 
 ifeq ($(CONFIG_VIRTIO_PCI),y)
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index e28ba48da6..02d93d7f63 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -53,3 +53,10 @@ virtio_mmio_write_offset(uint64_t offset, uint64_t value) "virtio_mmio_write off
 virtio_mmio_guest_page(uint64_t size, int shift) "guest page size 0x%" PRIx64 " shift %d"
 virtio_mmio_queue_write(uint64_t value, int max_size) "mmio_queue write 0x%" PRIx64 " max %d"
 virtio_mmio_setting_irq(int level) "virtio_mmio setting IRQ %d"
+
+# hw/virtio/virtio-iommu.c
+virtio_iommu_device_reset(void) "reset!"
+virtio_iommu_get_features(uint64_t features) "device supports features=0x%"PRIx64
+virtio_iommu_device_status(uint8_t status) "driver status = %d"
+virtio_iommu_get_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint32_t domain_range, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" domain_range=%d probe_size=0x%x"
+virtio_iommu_set_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint32_t domain_range, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" domain_range=%d probe_size=0x%x"
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
new file mode 100644
index 0000000000..30579267d5
--- /dev/null
+++ b/hw/virtio/virtio-iommu.c
@@ -0,0 +1,265 @@
+/*
+ * virtio-iommu device
+ *
+ * Copyright (c) 2020 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/iov.h"
+#include "qemu-common.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio.h"
+#include "sysemu/kvm.h"
+#include "trace.h"
+
+#include "standard-headers/linux/virtio_ids.h"
+
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/virtio-iommu.h"
+
+/* Max size */
+#define VIOMMU_DEFAULT_QUEUE_SIZE 256
+
+static int virtio_iommu_handle_attach(VirtIOIOMMU *s,
+                                      struct iovec *iov,
+                                      unsigned int iov_cnt)
+{
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+static int virtio_iommu_handle_detach(VirtIOIOMMU *s,
+                                      struct iovec *iov,
+                                      unsigned int iov_cnt)
+{
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+static int virtio_iommu_handle_map(VirtIOIOMMU *s,
+                                   struct iovec *iov,
+                                   unsigned int iov_cnt)
+{
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+static int virtio_iommu_handle_unmap(VirtIOIOMMU *s,
+                                     struct iovec *iov,
+                                     unsigned int iov_cnt)
+{
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+
+static void virtio_iommu_handle_command(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOIOMMU *s = VIRTIO_IOMMU(vdev);
+    struct virtio_iommu_req_head head;
+    struct virtio_iommu_req_tail tail = {};
+    VirtQueueElement *elem;
+    unsigned int iov_cnt;
+    struct iovec *iov;
+    size_t sz;
+
+    for (;;) {
+        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
+        if (!elem) {
+            return;
+        }
+
+        if (iov_size(elem->in_sg, elem->in_num) < sizeof(tail) ||
+            iov_size(elem->out_sg, elem->out_num) < sizeof(head)) {
+            virtio_error(vdev, "virtio-iommu bad head/tail size");
+            virtqueue_detach_element(vq, elem, 0);
+            g_free(elem);
+            break;
+        }
+
+        iov_cnt = elem->out_num;
+        iov = elem->out_sg;
+        sz = iov_to_buf(iov, iov_cnt, 0, &head, sizeof(head));
+        if (unlikely(sz != sizeof(head))) {
+            tail.status = VIRTIO_IOMMU_S_DEVERR;
+            goto out;
+        }
+        qemu_mutex_lock(&s->mutex);
+        switch (head.type) {
+        case VIRTIO_IOMMU_T_ATTACH:
+            tail.status = virtio_iommu_handle_attach(s, iov, iov_cnt);
+            break;
+        case VIRTIO_IOMMU_T_DETACH:
+            tail.status = virtio_iommu_handle_detach(s, iov, iov_cnt);
+            break;
+        case VIRTIO_IOMMU_T_MAP:
+            tail.status = virtio_iommu_handle_map(s, iov, iov_cnt);
+            break;
+        case VIRTIO_IOMMU_T_UNMAP:
+            tail.status = virtio_iommu_handle_unmap(s, iov, iov_cnt);
+            break;
+        default:
+            tail.status = VIRTIO_IOMMU_S_UNSUPP;
+        }
+        qemu_mutex_unlock(&s->mutex);
+
+out:
+        sz = iov_from_buf(elem->in_sg, elem->in_num, 0,
+                          &tail, sizeof(tail));
+        assert(sz == sizeof(tail));
+
+        virtqueue_push(vq, elem, sizeof(tail));
+        virtio_notify(vdev, vq);
+        g_free(elem);
+    }
+}
+
+static void virtio_iommu_get_config(VirtIODevice *vdev, uint8_t *config_data)
+{
+    VirtIOIOMMU *dev = VIRTIO_IOMMU(vdev);
+    struct virtio_iommu_config *config = &dev->config;
+
+    trace_virtio_iommu_get_config(config->page_size_mask,
+                                  config->input_range.start,
+                                  config->input_range.end,
+                                  config->domain_range.end,
+                                  config->probe_size);
+    memcpy(config_data, &dev->config, sizeof(struct virtio_iommu_config));
+}
+
+static void virtio_iommu_set_config(VirtIODevice *vdev,
+                                      const uint8_t *config_data)
+{
+    struct virtio_iommu_config config;
+
+    memcpy(&config, config_data, sizeof(struct virtio_iommu_config));
+    trace_virtio_iommu_set_config(config.page_size_mask,
+                                  config.input_range.start,
+                                  config.input_range.end,
+                                  config.domain_range.end,
+                                  config.probe_size);
+}
+
+static uint64_t virtio_iommu_get_features(VirtIODevice *vdev, uint64_t f,
+                                          Error **errp)
+{
+    VirtIOIOMMU *dev = VIRTIO_IOMMU(vdev);
+
+    f |= dev->features;
+    trace_virtio_iommu_get_features(f);
+    return f;
+}
+
+/*
+ * Migration is not yet supported: most of the state consists
+ * of balanced binary trees which are not yet ready for getting
+ * migrated
+ */
+static const VMStateDescription vmstate_virtio_iommu_device = {
+    .name = "virtio-iommu-device",
+    .unmigratable = 1,
+};
+
+static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VirtIOIOMMU *s = VIRTIO_IOMMU(dev);
+
+    virtio_init(vdev, "virtio-iommu", VIRTIO_ID_IOMMU,
+                sizeof(struct virtio_iommu_config));
+
+    s->req_vq = virtio_add_queue(vdev, VIOMMU_DEFAULT_QUEUE_SIZE,
+                             virtio_iommu_handle_command);
+    s->event_vq = virtio_add_queue(vdev, VIOMMU_DEFAULT_QUEUE_SIZE, NULL);
+
+    s->config.page_size_mask = TARGET_PAGE_MASK;
+    s->config.input_range.end = -1UL;
+    s->config.domain_range.end = 32;
+
+    virtio_add_feature(&s->features, VIRTIO_RING_F_EVENT_IDX);
+    virtio_add_feature(&s->features, VIRTIO_RING_F_INDIRECT_DESC);
+    virtio_add_feature(&s->features, VIRTIO_F_VERSION_1);
+    virtio_add_feature(&s->features, VIRTIO_IOMMU_F_INPUT_RANGE);
+    virtio_add_feature(&s->features, VIRTIO_IOMMU_F_DOMAIN_RANGE);
+    virtio_add_feature(&s->features, VIRTIO_IOMMU_F_MAP_UNMAP);
+    virtio_add_feature(&s->features, VIRTIO_IOMMU_F_BYPASS);
+    virtio_add_feature(&s->features, VIRTIO_IOMMU_F_MMIO);
+
+    qemu_mutex_init(&s->mutex);
+}
+
+static void virtio_iommu_device_unrealize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+
+    virtio_cleanup(vdev);
+}
+
+static void virtio_iommu_device_reset(VirtIODevice *vdev)
+{
+    trace_virtio_iommu_device_reset();
+}
+
+static void virtio_iommu_set_status(VirtIODevice *vdev, uint8_t status)
+{
+    trace_virtio_iommu_device_status(status);
+}
+
+static void virtio_iommu_instance_init(Object *obj)
+{
+}
+
+static const VMStateDescription vmstate_virtio_iommu = {
+    .name = "virtio-iommu",
+    .minimum_version_id = 1,
+    .version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_VIRTIO_DEVICE,
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static Property virtio_iommu_properties[] = {
+    DEFINE_PROP_LINK("primary-bus", VirtIOIOMMU, primary_bus, "PCI", PCIBus *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_iommu_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+
+    device_class_set_props(dc, virtio_iommu_properties);
+    dc->vmsd = &vmstate_virtio_iommu;
+
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+    vdc->realize = virtio_iommu_device_realize;
+    vdc->unrealize = virtio_iommu_device_unrealize;
+    vdc->reset = virtio_iommu_device_reset;
+    vdc->get_config = virtio_iommu_get_config;
+    vdc->set_config = virtio_iommu_set_config;
+    vdc->get_features = virtio_iommu_get_features;
+    vdc->set_status = virtio_iommu_set_status;
+    vdc->vmsd = &vmstate_virtio_iommu_device;
+}
+
+static const TypeInfo virtio_iommu_info = {
+    .name = TYPE_VIRTIO_IOMMU,
+    .parent = TYPE_VIRTIO_DEVICE,
+    .instance_size = sizeof(VirtIOIOMMU),
+    .instance_init = virtio_iommu_instance_init,
+    .class_init = virtio_iommu_class_init,
+};
+
+static void virtio_register_types(void)
+{
+    type_register_static(&virtio_iommu_info);
+}
+
+type_init(virtio_register_types)
diff --git a/include/hw/virtio/virtio-iommu.h b/include/hw/virtio/virtio-iommu.h
new file mode 100644
index 0000000000..d24ba63305
--- /dev/null
+++ b/include/hw/virtio/virtio-iommu.h
@@ -0,0 +1,57 @@
+/*
+ * virtio-iommu device
+ *
+ * Copyright (c) 2020 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2 or later, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+#ifndef QEMU_VIRTIO_IOMMU_H
+#define QEMU_VIRTIO_IOMMU_H
+
+#include "standard-headers/linux/virtio_iommu.h"
+#include "hw/virtio/virtio.h"
+#include "hw/pci/pci.h"
+
+#define TYPE_VIRTIO_IOMMU "virtio-iommu-device"
+#define VIRTIO_IOMMU(obj) \
+        OBJECT_CHECK(VirtIOIOMMU, (obj), TYPE_VIRTIO_IOMMU)
+
+typedef struct IOMMUDevice {
+    void         *viommu;
+    PCIBus       *bus;
+    int           devfn;
+    IOMMUMemoryRegion  iommu_mr;
+    AddressSpace  as;
+} IOMMUDevice;
+
+typedef struct IOMMUPciBus {
+    PCIBus       *bus;
+    IOMMUDevice  *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
+} IOMMUPciBus;
+
+typedef struct VirtIOIOMMU {
+    VirtIODevice parent_obj;
+    VirtQueue *req_vq;
+    VirtQueue *event_vq;
+    struct virtio_iommu_config config;
+    uint64_t features;
+    GHashTable *as_by_busptr;
+    PCIBus *primary_bus;
+    GTree *domains;
+    QemuMutex mutex;
+    GTree *endpoints;
+} VirtIOIOMMU;
+
+#endif
-- 
2.20.1




* [PATCH v16 02/10] virtio-iommu: Decode the command payload
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
  2020-02-14 13:27 ` [PATCH v16 01/10] virtio-iommu: Add skeleton Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 03/10] virtio-iommu: Implement attach/detach command Eric Auger
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch adds the command payload decoding and
introduces the functions that will do the actual
command handling. Those functions are not yet implemented.
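
For illustration, the virtio_iommu_handle_req(attach)
instantiation below expands to the following handler, which
decodes the payload (the request minus its tail) before
dispatching:

    static int virtio_iommu_handle_attach(VirtIOIOMMU *s,
                                          struct iovec *iov,
                                          unsigned int iov_cnt)
    {
        struct virtio_iommu_req_attach req;
        int ret = virtio_iommu_iov_to_req(iov, iov_cnt, &req, sizeof(req));

        return ret ? ret : virtio_iommu_attach(s, &req);
    }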

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---

v11 -> v12:
- Added Jean's and Peter's R-bs

v10 -> v11:
- use a macro for handle command functions

v9 -> v10:
- make virtio_iommu_handle_* more compact and
  remove get_payload_size

v7 -> v8:
- handle new domain parameter in detach
- remove reserved checks

v5 -> v6:
- change map/unmap semantics (remove size)

v4 -> v5:
- adopt new v0.5 terminology

v3 -> v4:
- no flags field anymore in struct virtio_iommu_req_unmap
- test reserved on attach/detach, change trace proto
- rebase on v2.10.0.
---
 hw/virtio/trace-events   |  4 +++
 hw/virtio/virtio-iommu.c | 76 +++++++++++++++++++++++++++++++++-------
 2 files changed, 68 insertions(+), 12 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 02d93d7f63..f7141aa2f6 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -60,3 +60,7 @@ virtio_iommu_get_features(uint64_t features) "device supports features=0x%"PRIx6
 virtio_iommu_device_status(uint8_t status) "driver status = %d"
 virtio_iommu_get_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint32_t domain_range, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" domain_range=%d probe_size=0x%x"
 virtio_iommu_set_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint32_t domain_range, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" domain_range=%d probe_size=0x%x"
+virtio_iommu_attach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
+virtio_iommu_detach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
+virtio_iommu_map(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start, uint32_t flags) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64 " phys_start=0x%"PRIx64" flags=%d"
+virtio_iommu_unmap(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 30579267d5..86dcdc09a1 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -34,31 +34,83 @@
 /* Max size */
 #define VIOMMU_DEFAULT_QUEUE_SIZE 256
 
-static int virtio_iommu_handle_attach(VirtIOIOMMU *s,
-                                      struct iovec *iov,
-                                      unsigned int iov_cnt)
+static int virtio_iommu_attach(VirtIOIOMMU *s,
+                               struct virtio_iommu_req_attach *req)
 {
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint32_t ep_id = le32_to_cpu(req->endpoint);
+
+    trace_virtio_iommu_attach(domain_id, ep_id);
+
     return VIRTIO_IOMMU_S_UNSUPP;
 }
-static int virtio_iommu_handle_detach(VirtIOIOMMU *s,
-                                      struct iovec *iov,
-                                      unsigned int iov_cnt)
+
+static int virtio_iommu_detach(VirtIOIOMMU *s,
+                               struct virtio_iommu_req_detach *req)
 {
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint32_t ep_id = le32_to_cpu(req->endpoint);
+
+    trace_virtio_iommu_detach(domain_id, ep_id);
+
     return VIRTIO_IOMMU_S_UNSUPP;
 }
-static int virtio_iommu_handle_map(VirtIOIOMMU *s,
-                                   struct iovec *iov,
-                                   unsigned int iov_cnt)
+
+static int virtio_iommu_map(VirtIOIOMMU *s,
+                            struct virtio_iommu_req_map *req)
 {
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint64_t phys_start = le64_to_cpu(req->phys_start);
+    uint64_t virt_start = le64_to_cpu(req->virt_start);
+    uint64_t virt_end = le64_to_cpu(req->virt_end);
+    uint32_t flags = le32_to_cpu(req->flags);
+
+    trace_virtio_iommu_map(domain_id, virt_start, virt_end, phys_start, flags);
+
     return VIRTIO_IOMMU_S_UNSUPP;
 }
-static int virtio_iommu_handle_unmap(VirtIOIOMMU *s,
-                                     struct iovec *iov,
-                                     unsigned int iov_cnt)
+
+static int virtio_iommu_unmap(VirtIOIOMMU *s,
+                              struct virtio_iommu_req_unmap *req)
 {
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint64_t virt_start = le64_to_cpu(req->virt_start);
+    uint64_t virt_end = le64_to_cpu(req->virt_end);
+
+    trace_virtio_iommu_unmap(domain_id, virt_start, virt_end);
+
     return VIRTIO_IOMMU_S_UNSUPP;
 }
 
+static int virtio_iommu_iov_to_req(struct iovec *iov,
+                                   unsigned int iov_cnt,
+                                   void *req, size_t req_sz)
+{
+    size_t sz, payload_sz = req_sz - sizeof(struct virtio_iommu_req_tail);
+
+    sz = iov_to_buf(iov, iov_cnt, 0, req, payload_sz);
+    if (unlikely(sz != payload_sz)) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+    return 0;
+}
+
+#define virtio_iommu_handle_req(__req)                                  \
+static int virtio_iommu_handle_ ## __req(VirtIOIOMMU *s,                \
+                                         struct iovec *iov,             \
+                                         unsigned int iov_cnt)          \
+{                                                                       \
+    struct virtio_iommu_req_ ## __req req;                              \
+    int ret = virtio_iommu_iov_to_req(iov, iov_cnt, &req, sizeof(req)); \
+                                                                        \
+    return ret ? ret : virtio_iommu_ ## __req(s, &req);                 \
+}
+
+virtio_iommu_handle_req(attach)
+virtio_iommu_handle_req(detach)
+virtio_iommu_handle_req(map)
+virtio_iommu_handle_req(unmap)
+
 static void virtio_iommu_handle_command(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOIOMMU *s = VIRTIO_IOMMU(vdev);
-- 
2.20.1




* [PATCH v16 03/10] virtio-iommu: Implement attach/detach command
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
  2020-02-14 13:27 ` [PATCH v16 01/10] virtio-iommu: Add skeleton Eric Auger
  2020-02-14 13:27 ` [PATCH v16 02/10] virtio-iommu: Decode the command payload Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 04/10] virtio-iommu: Implement map/unmap Eric Auger
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch implements the attach/detach of an endpoint
to/from a domain.

Domain and endpoint internal datatypes are introduced.
Both are stored in RB trees. A domain owns a list of the
endpoints attached to it. Helpers to get/put endpoints and
domains are also introduced.

As for the IOMMU memory regions, a callback is invoked on
PCI bus enumeration, initializing an IOMMU memory region for
each device in the bus hierarchy. The PCI bus hierarchy is
stored locally in IOMMUPciBus and IOMMUDevice objects.

At the time of the enumeration, the bus number may not be
computed yet. So operations that need to retrieve the
IOMMUDevice and its IOMMU memory region from the bus number
and devfn, once the bus number is guaranteed to be frozen,
use a lazily populated array of IOMMUPciBus pointers.
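
As an illustration, the endpoint ID carried in attach/detach
requests is the PCI BDF computed by virtio_iommu_get_bdf();
e.g. for a device at 00:02.0:

    /* sid = (bus << 8) | devfn = (0 << 8) | ((2 << 3) | 0) = 0x0010 */
    uint16_t sid = PCI_BUILD_BDF(0, PCI_DEVFN(2, 0));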

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---
v14 -> v15:
- destroy mappings in put_domain and remove
  g_tree_destroy(domain->mappings) in virtio_iommu_detach

v13 -> v14:
- in virtio_iommu_put_endpoint, if the EP is attached to a
  domain, call virtio_iommu_detach_endpoint_from_domain()
- remove domain ref counting and simply delete the mappings
  gtree when the last EP is detached from the domain
- in virtio_iommu_detach_endpoint_from_domain(), return if the
  ep's domain is unset.

v12 -> v13:
- squashed v12 4, 5, 6 into this patch
- rename virtio_iommu_get_sid into virtio_iommu_get_bdf

v11 -> v12:
- check the device is protected by the iommu on attach
- on detach, check the domain id the device is attached to matches
  the one used in the detach command
- fix mapping ref counter and destroy the domain when no endpoints
  are attached anymore
---
 hw/virtio/trace-events           |   6 +
 hw/virtio/virtio-iommu.c         | 311 ++++++++++++++++++++++++++++++-
 include/hw/virtio/virtio-iommu.h |   3 +
 3 files changed, 318 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index f7141aa2f6..15595f8cd7 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -64,3 +64,9 @@ virtio_iommu_attach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
 virtio_iommu_detach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
 virtio_iommu_map(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start, uint32_t flags) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64 " phys_start=0x%"PRIx64" flags=%d"
 virtio_iommu_unmap(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
+virtio_iommu_translate(const char *name, uint32_t rid, uint64_t iova, int flag) "mr=%s rid=%d addr=0x%"PRIx64" flag=%d"
+virtio_iommu_init_iommu_mr(char *iommu_mr) "init %s"
+virtio_iommu_get_endpoint(uint32_t ep_id) "Alloc endpoint=%d"
+virtio_iommu_put_endpoint(uint32_t ep_id) "Free endpoint=%d"
+virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d"
+virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 86dcdc09a1..d9fe83f530 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -23,6 +23,8 @@
 #include "hw/qdev-properties.h"
 #include "hw/virtio/virtio.h"
 #include "sysemu/kvm.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
 #include "trace.h"
 
 #include "standard-headers/linux/virtio_ids.h"
@@ -30,19 +32,236 @@
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
 #include "hw/virtio/virtio-iommu.h"
+#include "hw/pci/pci_bus.h"
+#include "hw/pci/pci.h"
 
 /* Max size */
 #define VIOMMU_DEFAULT_QUEUE_SIZE 256
 
+typedef struct VirtIOIOMMUDomain {
+    uint32_t id;
+    GTree *mappings;
+    QLIST_HEAD(, VirtIOIOMMUEndpoint) endpoint_list;
+} VirtIOIOMMUDomain;
+
+typedef struct VirtIOIOMMUEndpoint {
+    uint32_t id;
+    VirtIOIOMMUDomain *domain;
+    QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
+} VirtIOIOMMUEndpoint;
+
+typedef struct VirtIOIOMMUInterval {
+    uint64_t low;
+    uint64_t high;
+} VirtIOIOMMUInterval;
+
+static inline uint16_t virtio_iommu_get_bdf(IOMMUDevice *dev)
+{
+    return PCI_BUILD_BDF(pci_bus_num(dev->bus), dev->devfn);
+}
+
+/**
+ * The bus number is used for lookup when SID based operations occur.
+ * In that case we lazily populate the IOMMUPciBus array from the bus hash
+ * table. At the time the IOMMUPciBus is created (iommu_find_add_as), the bus
+ * numbers may not always be initialized yet.
+ */
+static IOMMUPciBus *iommu_find_iommu_pcibus(VirtIOIOMMU *s, uint8_t bus_num)
+{
+    IOMMUPciBus *iommu_pci_bus = s->iommu_pcibus_by_bus_num[bus_num];
+
+    if (!iommu_pci_bus) {
+        GHashTableIter iter;
+
+        g_hash_table_iter_init(&iter, s->as_by_busptr);
+        while (g_hash_table_iter_next(&iter, NULL, (void **)&iommu_pci_bus)) {
+            if (pci_bus_num(iommu_pci_bus->bus) == bus_num) {
+                s->iommu_pcibus_by_bus_num[bus_num] = iommu_pci_bus;
+                return iommu_pci_bus;
+            }
+        }
+        return NULL;
+    }
+    return iommu_pci_bus;
+}
+
+static IOMMUMemoryRegion *virtio_iommu_mr(VirtIOIOMMU *s, uint32_t sid)
+{
+    uint8_t bus_n, devfn;
+    IOMMUPciBus *iommu_pci_bus;
+    IOMMUDevice *dev;
+
+    bus_n = PCI_BUS_NUM(sid);
+    iommu_pci_bus = iommu_find_iommu_pcibus(s, bus_n);
+    if (iommu_pci_bus) {
+        devfn = sid & PCI_DEVFN_MAX;
+        dev = iommu_pci_bus->pbdev[devfn];
+        if (dev) {
+            return &dev->iommu_mr;
+        }
+    }
+    return NULL;
+}
+
+static gint interval_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
+{
+    VirtIOIOMMUInterval *inta = (VirtIOIOMMUInterval *)a;
+    VirtIOIOMMUInterval *intb = (VirtIOIOMMUInterval *)b;
+
+    if (inta->high < intb->low) {
+        return -1;
+    } else if (intb->high < inta->low) {
+        return 1;
+    } else {
+        return 0;
+    }
+}
+
+static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
+{
+    if (!ep->domain) {
+        return;
+    }
+    QLIST_REMOVE(ep, next);
+    ep->domain = NULL;
+}
+
+static VirtIOIOMMUEndpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
+                                                      uint32_t ep_id)
+{
+    VirtIOIOMMUEndpoint *ep;
+
+    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(ep_id));
+    if (ep) {
+        return ep;
+    }
+    if (!virtio_iommu_mr(s, ep_id)) {
+        return NULL;
+    }
+    ep = g_malloc0(sizeof(*ep));
+    ep->id = ep_id;
+    trace_virtio_iommu_get_endpoint(ep_id);
+    g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
+    return ep;
+}
+
+static void virtio_iommu_put_endpoint(gpointer data)
+{
+    VirtIOIOMMUEndpoint *ep = (VirtIOIOMMUEndpoint *)data;
+
+    if (ep->domain) {
+        virtio_iommu_detach_endpoint_from_domain(ep);
+    }
+
+    trace_virtio_iommu_put_endpoint(ep->id);
+    g_free(ep);
+}
+
+static VirtIOIOMMUDomain *virtio_iommu_get_domain(VirtIOIOMMU *s,
+                                                  uint32_t domain_id)
+{
+    VirtIOIOMMUDomain *domain;
+
+    domain = g_tree_lookup(s->domains, GUINT_TO_POINTER(domain_id));
+    if (domain) {
+        return domain;
+    }
+    domain = g_malloc0(sizeof(*domain));
+    domain->id = domain_id;
+    domain->mappings = g_tree_new_full((GCompareDataFunc)interval_cmp,
+                                   NULL, (GDestroyNotify)g_free,
+                                   (GDestroyNotify)g_free);
+    g_tree_insert(s->domains, GUINT_TO_POINTER(domain_id), domain);
+    QLIST_INIT(&domain->endpoint_list);
+    trace_virtio_iommu_get_domain(domain_id);
+    return domain;
+}
+
+static void virtio_iommu_put_domain(gpointer data)
+{
+    VirtIOIOMMUDomain *domain = (VirtIOIOMMUDomain *)data;
+    VirtIOIOMMUEndpoint *iter, *tmp;
+
+    QLIST_FOREACH_SAFE(iter, &domain->endpoint_list, next, tmp) {
+        virtio_iommu_detach_endpoint_from_domain(iter);
+    }
+    g_tree_destroy(domain->mappings);
+    trace_virtio_iommu_put_domain(domain->id);
+    g_free(domain);
+}
+
+static AddressSpace *virtio_iommu_find_add_as(PCIBus *bus, void *opaque,
+                                              int devfn)
+{
+    VirtIOIOMMU *s = opaque;
+    IOMMUPciBus *sbus = g_hash_table_lookup(s->as_by_busptr, bus);
+    static uint32_t mr_index;
+    IOMMUDevice *sdev;
+
+    if (!sbus) {
+        sbus = g_malloc0(sizeof(IOMMUPciBus) +
+                         sizeof(IOMMUDevice *) * PCI_DEVFN_MAX);
+        sbus->bus = bus;
+        g_hash_table_insert(s->as_by_busptr, bus, sbus);
+    }
+
+    sdev = sbus->pbdev[devfn];
+    if (!sdev) {
+        char *name = g_strdup_printf("%s-%d-%d",
+                                     TYPE_VIRTIO_IOMMU_MEMORY_REGION,
+                                     mr_index++, devfn);
+        sdev = sbus->pbdev[devfn] = g_malloc0(sizeof(IOMMUDevice));
+
+        sdev->viommu = s;
+        sdev->bus = bus;
+        sdev->devfn = devfn;
+
+        trace_virtio_iommu_init_iommu_mr(name);
+
+        memory_region_init_iommu(&sdev->iommu_mr, sizeof(sdev->iommu_mr),
+                                 TYPE_VIRTIO_IOMMU_MEMORY_REGION,
+                                 OBJECT(s), name,
+                                 UINT64_MAX);
+        address_space_init(&sdev->as,
+                           MEMORY_REGION(&sdev->iommu_mr), TYPE_VIRTIO_IOMMU);
+        g_free(name);
+    }
+    return &sdev->as;
+}
+
 static int virtio_iommu_attach(VirtIOIOMMU *s,
                                struct virtio_iommu_req_attach *req)
 {
     uint32_t domain_id = le32_to_cpu(req->domain);
     uint32_t ep_id = le32_to_cpu(req->endpoint);
+    VirtIOIOMMUDomain *domain;
+    VirtIOIOMMUEndpoint *ep;
 
     trace_virtio_iommu_attach(domain_id, ep_id);
 
-    return VIRTIO_IOMMU_S_UNSUPP;
+    ep = virtio_iommu_get_endpoint(s, ep_id);
+    if (!ep) {
+        return VIRTIO_IOMMU_S_NOENT;
+    }
+
+    if (ep->domain) {
+        VirtIOIOMMUDomain *previous_domain = ep->domain;
+        /*
+         * the device is already attached to a domain,
+         * detach it first
+         */
+        virtio_iommu_detach_endpoint_from_domain(ep);
+        if (QLIST_EMPTY(&previous_domain->endpoint_list)) {
+            g_tree_remove(s->domains, GUINT_TO_POINTER(previous_domain->id));
+        }
+    }
+
+    domain = virtio_iommu_get_domain(s, domain_id);
+    QLIST_INSERT_HEAD(&domain->endpoint_list, ep, next);
+
+    ep->domain = domain;
+
+    return VIRTIO_IOMMU_S_OK;
 }
 
 static int virtio_iommu_detach(VirtIOIOMMU *s,
@@ -50,10 +269,28 @@ static int virtio_iommu_detach(VirtIOIOMMU *s,
 {
     uint32_t domain_id = le32_to_cpu(req->domain);
     uint32_t ep_id = le32_to_cpu(req->endpoint);
+    VirtIOIOMMUDomain *domain;
+    VirtIOIOMMUEndpoint *ep;
 
     trace_virtio_iommu_detach(domain_id, ep_id);
 
-    return VIRTIO_IOMMU_S_UNSUPP;
+    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(ep_id));
+    if (!ep) {
+        return VIRTIO_IOMMU_S_NOENT;
+    }
+
+    domain = ep->domain;
+
+    if (!domain || domain->id != domain_id) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+
+    virtio_iommu_detach_endpoint_from_domain(ep);
+
+    if (QLIST_EMPTY(&domain->endpoint_list)) {
+        g_tree_remove(s->domains, GUINT_TO_POINTER(domain->id));
+    }
+    return VIRTIO_IOMMU_S_OK;
 }
 
 static int virtio_iommu_map(VirtIOIOMMU *s,
@@ -172,6 +409,27 @@ out:
     }
 }
 
+static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
+                                            IOMMUAccessFlags flag,
+                                            int iommu_idx)
+{
+    IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+    uint32_t sid;
+
+    IOMMUTLBEntry entry = {
+        .target_as = &address_space_memory,
+        .iova = addr,
+        .translated_addr = addr,
+        .addr_mask = ~(hwaddr)0,
+        .perm = IOMMU_NONE,
+    };
+
+    sid = virtio_iommu_get_bdf(sdev);
+
+    trace_virtio_iommu_translate(mr->parent_obj.name, sid, addr, flag);
+    return entry;
+}
+
 static void virtio_iommu_get_config(VirtIODevice *vdev, uint8_t *config_data)
 {
     VirtIOIOMMU *dev = VIRTIO_IOMMU(vdev);
@@ -218,6 +476,13 @@ static const VMStateDescription vmstate_virtio_iommu_device = {
     .unmigratable = 1,
 };
 
+static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
+{
+    guint ua = GPOINTER_TO_UINT(a);
+    guint ub = GPOINTER_TO_UINT(b);
+    return (ua > ub) - (ua < ub);
+}
+
 static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -226,6 +491,8 @@ static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
     virtio_init(vdev, "virtio-iommu", VIRTIO_ID_IOMMU,
                 sizeof(struct virtio_iommu_config));
 
+    memset(s->iommu_pcibus_by_bus_num, 0, sizeof(s->iommu_pcibus_by_bus_num));
+
     s->req_vq = virtio_add_queue(vdev, VIOMMU_DEFAULT_QUEUE_SIZE,
                              virtio_iommu_handle_command);
     s->event_vq = virtio_add_queue(vdev, VIOMMU_DEFAULT_QUEUE_SIZE, NULL);
@@ -244,18 +511,43 @@ static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
     virtio_add_feature(&s->features, VIRTIO_IOMMU_F_MMIO);
 
     qemu_mutex_init(&s->mutex);
+
+    s->as_by_busptr = g_hash_table_new_full(NULL, NULL, NULL, g_free);
+
+    if (s->primary_bus) {
+        pci_setup_iommu(s->primary_bus, virtio_iommu_find_add_as, s);
+    } else {
+        error_setg(errp, "VIRTIO-IOMMU is not attached to any PCI bus!");
+    }
 }
 
 static void virtio_iommu_device_unrealize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VirtIOIOMMU *s = VIRTIO_IOMMU(dev);
+
+    g_tree_destroy(s->domains);
+    g_tree_destroy(s->endpoints);
 
     virtio_cleanup(vdev);
 }
 
 static void virtio_iommu_device_reset(VirtIODevice *vdev)
 {
+    VirtIOIOMMU *s = VIRTIO_IOMMU(vdev);
+
     trace_virtio_iommu_device_reset();
+
+    if (s->domains) {
+        g_tree_destroy(s->domains);
+    }
+    if (s->endpoints) {
+        g_tree_destroy(s->endpoints);
+    }
+    s->domains = g_tree_new_full((GCompareDataFunc)int_cmp,
+                                 NULL, NULL, virtio_iommu_put_domain);
+    s->endpoints = g_tree_new_full((GCompareDataFunc)int_cmp,
+                                   NULL, NULL, virtio_iommu_put_endpoint);
 }
 
 static void virtio_iommu_set_status(VirtIODevice *vdev, uint8_t status)
@@ -301,6 +593,14 @@ static void virtio_iommu_class_init(ObjectClass *klass, void *data)
     vdc->vmsd = &vmstate_virtio_iommu_device;
 }
 
+static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
+                                                  void *data)
+{
+    IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
+
+    imrc->translate = virtio_iommu_translate;
+}
+
 static const TypeInfo virtio_iommu_info = {
     .name = TYPE_VIRTIO_IOMMU,
     .parent = TYPE_VIRTIO_DEVICE,
@@ -309,9 +609,16 @@ static const TypeInfo virtio_iommu_info = {
     .class_init = virtio_iommu_class_init,
 };
 
+static const TypeInfo virtio_iommu_memory_region_info = {
+    .parent = TYPE_IOMMU_MEMORY_REGION,
+    .name = TYPE_VIRTIO_IOMMU_MEMORY_REGION,
+    .class_init = virtio_iommu_memory_region_class_init,
+};
+
 static void virtio_register_types(void)
 {
     type_register_static(&virtio_iommu_info);
+    type_register_static(&virtio_iommu_memory_region_info);
 }
 
 type_init(virtio_register_types)
diff --git a/include/hw/virtio/virtio-iommu.h b/include/hw/virtio/virtio-iommu.h
index d24ba63305..ae88f730cf 100644
--- a/include/hw/virtio/virtio-iommu.h
+++ b/include/hw/virtio/virtio-iommu.h
@@ -28,6 +28,8 @@
 #define VIRTIO_IOMMU(obj) \
         OBJECT_CHECK(VirtIOIOMMU, (obj), TYPE_VIRTIO_IOMMU)
 
+#define TYPE_VIRTIO_IOMMU_MEMORY_REGION "virtio-iommu-memory-region"
+
 typedef struct IOMMUDevice {
     void         *viommu;
     PCIBus       *bus;
@@ -48,6 +50,7 @@ typedef struct VirtIOIOMMU {
     struct virtio_iommu_config config;
     uint64_t features;
     GHashTable *as_by_busptr;
+    IOMMUPciBus *iommu_pcibus_by_bus_num[PCI_BUS_MAX];
     PCIBus *primary_bus;
     GTree *domains;
     QemuMutex mutex;
-- 
2.20.1




* [PATCH v16 04/10] virtio-iommu: Implement map/unmap
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (2 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 03/10] virtio-iommu: Implement attach/detach command Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 05/10] virtio-iommu: Implement translate Eric Auger
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch implements virtio_iommu_map/unmap.
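
Note the unmap semantics only remove whole mappings: for instance,
with mappings [0x1000, 0x1fff] and [0x2000, 0x2fff] in a domain, an
UNMAP of [0x1000, 0x2fff] removes both, whereas an UNMAP of
[0x1000, 0x17ff] would split the first mapping and therefore fails
with VIRTIO_IOMMU_S_RANGE.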

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---

v11 -> v12:
- check unmanaged flags on map
- removed 2 qemu_log_mask in unmap()
- fix leak

v10 -> v11:
- revisit the implementation of unmap according to Peter's suggestion
- removed virt_addr and size from viommu_mapping struct
- use g_tree_lookup_extended()
- return VIRTIO_IOMMU_S_RANGE in case a mapping would
  be split on unmap (instead of INVAL)

v5 -> v6:
- use new v0.6 fields
- replace error_report by qemu_log_mask

v3 -> v4:
- implement unmap semantics as specified in v0.4
---
 hw/virtio/trace-events   |  1 +
 hw/virtio/virtio-iommu.c | 63 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 15595f8cd7..22162d6583 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -64,6 +64,7 @@ virtio_iommu_attach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
 virtio_iommu_detach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
 virtio_iommu_map(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start, uint32_t flags) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64 " phys_start=0x%"PRIx64" flags=%d"
 virtio_iommu_unmap(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
+virtio_iommu_unmap_done(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
 virtio_iommu_translate(const char *name, uint32_t rid, uint64_t iova, int flag) "mr=%s rid=%d addr=0x%"PRIx64" flag=%d"
 virtio_iommu_init_iommu_mr(char *iommu_mr) "init %s"
 virtio_iommu_get_endpoint(uint32_t ep_id) "Alloc endpoint=%d"
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index d9fe83f530..844d34c270 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "qemu/iov.h"
 #include "qemu-common.h"
 #include "hw/qdev-properties.h"
@@ -55,6 +56,11 @@ typedef struct VirtIOIOMMUInterval {
     uint64_t high;
 } VirtIOIOMMUInterval;
 
+typedef struct VirtIOIOMMUMapping {
+    uint64_t phys_addr;
+    uint32_t flags;
+} VirtIOIOMMUMapping;
+
 static inline uint16_t virtio_iommu_get_bdf(IOMMUDevice *dev)
 {
     return PCI_BUILD_BDF(pci_bus_num(dev->bus), dev->devfn);
@@ -301,10 +307,39 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
     uint64_t virt_start = le64_to_cpu(req->virt_start);
     uint64_t virt_end = le64_to_cpu(req->virt_end);
     uint32_t flags = le32_to_cpu(req->flags);
+    VirtIOIOMMUDomain *domain;
+    VirtIOIOMMUInterval *interval;
+    VirtIOIOMMUMapping *mapping;
+
+    if (flags & ~VIRTIO_IOMMU_MAP_F_MASK) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+
+    domain = g_tree_lookup(s->domains, GUINT_TO_POINTER(domain_id));
+    if (!domain) {
+        return VIRTIO_IOMMU_S_NOENT;
+    }
+
+    interval = g_malloc0(sizeof(*interval));
+
+    interval->low = virt_start;
+    interval->high = virt_end;
+
+    mapping = g_tree_lookup(domain->mappings, (gpointer)interval);
+    if (mapping) {
+        g_free(interval);
+        return VIRTIO_IOMMU_S_INVAL;
+    }
 
     trace_virtio_iommu_map(domain_id, virt_start, virt_end, phys_start, flags);
 
-    return VIRTIO_IOMMU_S_UNSUPP;
+    mapping = g_malloc0(sizeof(*mapping));
+    mapping->phys_addr = phys_start;
+    mapping->flags = flags;
+
+    g_tree_insert(domain->mappings, interval, mapping);
+
+    return VIRTIO_IOMMU_S_OK;
 }
 
 static int virtio_iommu_unmap(VirtIOIOMMU *s,
@@ -313,10 +348,34 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
     uint32_t domain_id = le32_to_cpu(req->domain);
     uint64_t virt_start = le64_to_cpu(req->virt_start);
     uint64_t virt_end = le64_to_cpu(req->virt_end);
+    VirtIOIOMMUMapping *iter_val;
+    VirtIOIOMMUInterval interval, *iter_key;
+    VirtIOIOMMUDomain *domain;
+    int ret = VIRTIO_IOMMU_S_OK;
 
     trace_virtio_iommu_unmap(domain_id, virt_start, virt_end);
 
-    return VIRTIO_IOMMU_S_UNSUPP;
+    domain = g_tree_lookup(s->domains, GUINT_TO_POINTER(domain_id));
+    if (!domain) {
+        return VIRTIO_IOMMU_S_NOENT;
+    }
+    interval.low = virt_start;
+    interval.high = virt_end;
+
+    while (g_tree_lookup_extended(domain->mappings, &interval,
+                                  (void **)&iter_key, (void**)&iter_val)) {
+        uint64_t current_low = iter_key->low;
+        uint64_t current_high = iter_key->high;
+
+        if (interval.low <= current_low && interval.high >= current_high) {
+            g_tree_remove(domain->mappings, iter_key);
+            trace_virtio_iommu_unmap_done(domain_id, current_low, current_high);
+        } else {
+            ret = VIRTIO_IOMMU_S_RANGE;
+            break;
+        }
+    }
+    return ret;
 }
 
 static int virtio_iommu_iov_to_req(struct iovec *iov,
-- 
2.20.1




* [PATCH v16 05/10] virtio-iommu: Implement translate
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (3 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 04/10] virtio-iommu: Implement map/unmap Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 06/10] virtio-iommu: Implement fault reporting Eric Auger
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch implements the translate callback.
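
The output address is computed from the matching mapping as
translated_addr = addr - interval.low + phys_addr: e.g. with virt
range [0x10000, 0x10fff] mapped to phys 0x40000, an access at
0x10204 translates to 0x40204. The returned addr_mask is derived
from the smallest page size advertised in page_size_mask.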

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---

v11 -> v12:
- Added Jean's R-b
- s/qemu_log_mask/error_report_once

v10 -> v11:
- take into account the new value struct and use
  g_tree_lookup_extended
- switched to error_report_once

v6 -> v7:
- implemented bypass-mode

v5 -> v6:
- replace error_report by qemu_log_mask

v4 -> v5:
- check the device domain is not NULL
- s/printf/error_report
- set flags to IOMMU_NONE in case of all translation faults
---
 hw/virtio/trace-events   |  1 +
 hw/virtio/virtio-iommu.c | 60 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 22162d6583..095aa8b509 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -71,3 +71,4 @@ virtio_iommu_get_endpoint(uint32_t ep_id) "Alloc endpoint=%d"
 virtio_iommu_put_endpoint(uint32_t ep_id) "Free endpoint=%d"
 virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d"
 virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
+virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d"
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 844d34c270..59e9cd3d9a 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -473,19 +473,77 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
                                             int iommu_idx)
 {
     IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+    VirtIOIOMMUInterval interval, *mapping_key;
+    VirtIOIOMMUMapping *mapping_value;
+    VirtIOIOMMU *s = sdev->viommu;
+    VirtIOIOMMUEndpoint *ep;
+    bool bypass_allowed;
     uint32_t sid;
+    bool found;
+
+    interval.low = addr;
+    interval.high = addr + 1;
 
     IOMMUTLBEntry entry = {
         .target_as = &address_space_memory,
         .iova = addr,
         .translated_addr = addr,
-        .addr_mask = ~(hwaddr)0,
+        .addr_mask = (1 << ctz32(s->config.page_size_mask)) - 1,
         .perm = IOMMU_NONE,
     };
 
+    bypass_allowed = virtio_vdev_has_feature(&s->parent_obj,
+                                             VIRTIO_IOMMU_F_BYPASS);
+
     sid = virtio_iommu_get_bdf(sdev);
 
     trace_virtio_iommu_translate(mr->parent_obj.name, sid, addr, flag);
+    qemu_mutex_lock(&s->mutex);
+
+    ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
+    if (!ep) {
+        if (!bypass_allowed) {
+            error_report_once("%s sid=%d is not known!!", __func__, sid);
+        } else {
+            entry.perm = flag;
+        }
+        goto unlock;
+    }
+
+    if (!ep->domain) {
+        if (!bypass_allowed) {
+            error_report_once("%s %02x:%02x.%01x not attached to any domain",
+                              __func__, PCI_BUS_NUM(sid),
+                              PCI_SLOT(sid), PCI_FUNC(sid));
+        } else {
+            entry.perm = flag;
+        }
+        goto unlock;
+    }
+
+    found = g_tree_lookup_extended(ep->domain->mappings, (gpointer)(&interval),
+                                   (void **)&mapping_key,
+                                   (void **)&mapping_value);
+    if (!found) {
+        error_report_once("%s no mapping for 0x%"PRIx64" for sid=%d",
+                          __func__, addr, sid);
+        goto unlock;
+    }
+
+    if (((flag & IOMMU_RO) &&
+            !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_READ)) ||
+        ((flag & IOMMU_WO) &&
+            !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_WRITE))) {
+        error_report_once("%s permission error on 0x%"PRIx64"(%d): allowed=%d",
+                          __func__, addr, flag, mapping_value->flags);
+        goto unlock;
+    }
+    entry.translated_addr = addr - mapping_key->low + mapping_value->phys_addr;
+    entry.perm = flag;
+    trace_virtio_iommu_translate_out(addr, entry.translated_addr, sid);
+
+unlock:
+    qemu_mutex_unlock(&s->mutex);
     return entry;
 }
 
-- 
2.20.1




* [PATCH v16 06/10] virtio-iommu: Implement fault reporting
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (4 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 05/10] virtio-iommu: Implement translate Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 07/10] virtio-iommu: Support migration Eric Auger
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

The event queue allows the device to report asynchronous errors.
The translate function now injects faults when relevant.
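
For example, an access at address 0x10000 by an endpoint (sid 0x10)
that is not attached to any domain now triggers a fault record on
the event queue with reason=VIRTIO_IOMMU_FAULT_R_DOMAIN,
flags=VIRTIO_IOMMU_FAULT_F_ADDRESS, endpoint=0x10 and
address=0x10000.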

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---
v13 -> v14:
- remove loop
- add Peter's R-b

v12 -> v13:
- return on virtio_error()

v11 -> v12:
- reporting the addr associated with the fault and set the
  VIRTIO_IOMMU_FAULT_F_ADDRESS flag.
- added cpu_to_le<n>

v10 -> v11:
- change a virtio_error into an error_report_once
  (no buffer available for output faults)
---
 hw/virtio/trace-events   |  1 +
 hw/virtio/virtio-iommu.c | 70 +++++++++++++++++++++++++++++++++++++---
 2 files changed, 66 insertions(+), 5 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 095aa8b509..e83500bee9 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -72,3 +72,4 @@ virtio_iommu_put_endpoint(uint32_t ep_id) "Free endpoint=%d"
 virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d"
 virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
 virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d"
+virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 59e9cd3d9a..8509f64004 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -468,6 +468,48 @@ out:
     }
 }
 
+static void virtio_iommu_report_fault(VirtIOIOMMU *viommu, uint8_t reason,
+                                      int flags, uint32_t endpoint,
+                                      uint64_t address)
+{
+    VirtIODevice *vdev = &viommu->parent_obj;
+    VirtQueue *vq = viommu->event_vq;
+    struct virtio_iommu_fault fault;
+    VirtQueueElement *elem;
+    size_t sz;
+
+    memset(&fault, 0, sizeof(fault));
+    fault.reason = reason;
+    fault.flags = cpu_to_le32(flags);
+    fault.endpoint = cpu_to_le32(endpoint);
+    fault.address = cpu_to_le64(address);
+
+    elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
+
+    if (!elem) {
+        error_report_once(
+            "no buffer available in event queue to report event");
+        return;
+    }
+
+    if (iov_size(elem->in_sg, elem->in_num) < sizeof(fault)) {
+        virtio_error(vdev, "error buffer of wrong size");
+        virtqueue_detach_element(vq, elem, 0);
+        g_free(elem);
+        return;
+    }
+
+    sz = iov_from_buf(elem->in_sg, elem->in_num, 0,
+                      &fault, sizeof(fault));
+    assert(sz == sizeof(fault));
+
+    trace_virtio_iommu_report_fault(reason, flags, endpoint, address);
+    virtqueue_push(vq, elem, sz);
+    virtio_notify(vdev, vq);
+    g_free(elem);
+
+}
+
 static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
                                             IOMMUAccessFlags flag,
                                             int iommu_idx)
@@ -476,9 +518,10 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     VirtIOIOMMUInterval interval, *mapping_key;
     VirtIOIOMMUMapping *mapping_value;
     VirtIOIOMMU *s = sdev->viommu;
+    bool read_fault, write_fault;
     VirtIOIOMMUEndpoint *ep;
+    uint32_t sid, flags;
     bool bypass_allowed;
-    uint32_t sid;
     bool found;
 
     interval.low = addr;
@@ -504,6 +547,9 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     if (!ep) {
         if (!bypass_allowed) {
             error_report_once("%s sid=%d is not known!!", __func__, sid);
+            virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_UNKNOWN,
+                                      VIRTIO_IOMMU_FAULT_F_ADDRESS,
+                                      sid, addr);
         } else {
             entry.perm = flag;
         }
@@ -515,6 +561,9 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
             error_report_once("%s %02x:%02x.%01x not attached to any domain",
                               __func__, PCI_BUS_NUM(sid),
                               PCI_SLOT(sid), PCI_FUNC(sid));
+            virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_DOMAIN,
+                                      VIRTIO_IOMMU_FAULT_F_ADDRESS,
+                                      sid, addr);
         } else {
             entry.perm = flag;
         }
@@ -527,15 +576,26 @@ static IOMMUTLBEntry virtio_iommu_translate(IOMMUMemoryRegion *mr, hwaddr addr,
     if (!found) {
         error_report_once("%s no mapping for 0x%"PRIx64" for sid=%d",
                           __func__, addr, sid);
+        virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_MAPPING,
+                                  VIRTIO_IOMMU_FAULT_F_ADDRESS,
+                                  sid, addr);
         goto unlock;
     }
 
-    if (((flag & IOMMU_RO) &&
-            !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_READ)) ||
-        ((flag & IOMMU_WO) &&
-            !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_WRITE))) {
+    read_fault = (flag & IOMMU_RO) &&
+                    !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_READ);
+    write_fault = (flag & IOMMU_WO) &&
+                    !(mapping_value->flags & VIRTIO_IOMMU_MAP_F_WRITE);
+
+    flags = read_fault ? VIRTIO_IOMMU_FAULT_F_READ : 0;
+    flags |= write_fault ? VIRTIO_IOMMU_FAULT_F_WRITE : 0;
+    if (flags) {
         error_report_once("%s permission error on 0x%"PRIx64"(%d): allowed=%d",
                           __func__, addr, flag, mapping_value->flags);
+        flags |= VIRTIO_IOMMU_FAULT_F_ADDRESS;
+        virtio_iommu_report_fault(s, VIRTIO_IOMMU_FAULT_R_MAPPING,
+                                  flags,
+                                  sid, addr);
         goto unlock;
     }
     entry.translated_addr = addr - mapping_key->low + mapping_value->phys_addr;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 07/10] virtio-iommu: Support migration
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (5 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 06/10] virtio-iommu: Implement fault reporting Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 08/10] virtio-iommu-pci: Add virtio iommu pci support Eric Auger
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

Add migration support. We rely on the recently added gtree and qlist
migration. We only migrate the domain gtree. The endpoint gtree
is reconstructed in a post-load operation.
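
As a quick way to exercise this (a sketch with made-up paths and a
local TCP port), start the destination with -incoming and trigger the
migration from the source HMP monitor:

    # destination
    qemu-system-aarch64 -M virt ... -device virtio-iommu-pci \
        -incoming tcp:0:4444

    # source monitor
    (qemu) migrate -d tcp:localhost:4444
    (qemu) info migrate

Domains and their mappings recorded before the migration should be
present on the destination, with the endpoint gtree rebuilt at
post-load time.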

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---

v15 -> v16:
- added Juan's R-b

v11 -> v12:
- do not migrate the endpoint gtree but reconstruct it from the
  domain gtree (Peter's suggestion)
- add MIG_PRI_IOMMU
---
 hw/virtio/virtio-iommu.c | 109 +++++++++++++++++++++++++++++++++++----
 1 file changed, 99 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 8509f64004..4cee8083bc 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -643,16 +643,6 @@ static uint64_t virtio_iommu_get_features(VirtIODevice *vdev, uint64_t f,
     return f;
 }
 
-/*
- * Migration is not yet supported: most of the state consists
- * of balanced binary trees which are not yet ready for getting
- * migrated
- */
-static const VMStateDescription vmstate_virtio_iommu_device = {
-    .name = "virtio-iommu-device",
-    .unmigratable = 1,
-};
-
 static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
 {
     guint ua = GPOINTER_TO_UINT(a);
@@ -736,9 +726,108 @@ static void virtio_iommu_instance_init(Object *obj)
 {
 }
 
+#define VMSTATE_INTERVAL                               \
+{                                                      \
+    .name = "interval",                                \
+    .version_id = 1,                                   \
+    .minimum_version_id = 1,                           \
+    .fields = (VMStateField[]) {                       \
+        VMSTATE_UINT64(low, VirtIOIOMMUInterval),      \
+        VMSTATE_UINT64(high, VirtIOIOMMUInterval),     \
+        VMSTATE_END_OF_LIST()                          \
+    }                                                  \
+}
+
+#define VMSTATE_MAPPING                               \
+{                                                     \
+    .name = "mapping",                                \
+    .version_id = 1,                                  \
+    .minimum_version_id = 1,                          \
+    .fields = (VMStateField[]) {                      \
+        VMSTATE_UINT64(phys_addr, VirtIOIOMMUMapping),\
+        VMSTATE_UINT32(flags, VirtIOIOMMUMapping),    \
+        VMSTATE_END_OF_LIST()                         \
+    },                                                \
+}
+
+static const VMStateDescription vmstate_interval_mapping[2] = {
+    VMSTATE_MAPPING,   /* value */
+    VMSTATE_INTERVAL   /* key   */
+};
+
+static int domain_preload(void *opaque)
+{
+    VirtIOIOMMUDomain *domain = opaque;
+
+    domain->mappings = g_tree_new_full((GCompareDataFunc)interval_cmp,
+                                       NULL, g_free, g_free);
+    return 0;
+}
+
+static const VMStateDescription vmstate_endpoint = {
+    .name = "endpoint",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(id, VirtIOIOMMUEndpoint),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_domain = {
+    .name = "domain",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .pre_load = domain_preload,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(id, VirtIOIOMMUDomain),
+        VMSTATE_GTREE_V(mappings, VirtIOIOMMUDomain, 1,
+                        vmstate_interval_mapping,
+                        VirtIOIOMMUInterval, VirtIOIOMMUMapping),
+        VMSTATE_QLIST_V(endpoint_list, VirtIOIOMMUDomain, 1,
+                        vmstate_endpoint, VirtIOIOMMUEndpoint, next),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static gboolean reconstruct_endpoints(gpointer key, gpointer value,
+                                      gpointer data)
+{
+    VirtIOIOMMU *s = (VirtIOIOMMU *)data;
+    VirtIOIOMMUDomain *d = (VirtIOIOMMUDomain *)value;
+    VirtIOIOMMUEndpoint *iter;
+
+    QLIST_FOREACH(iter, &d->endpoint_list, next) {
+        iter->domain = d;
+        g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
+    }
+    return false; /* continue the domain traversal */
+}
+
+static int iommu_post_load(void *opaque, int version_id)
+{
+    VirtIOIOMMU *s = opaque;
+
+    g_tree_foreach(s->domains, reconstruct_endpoints, s);
+    return 0;
+}
+
+static const VMStateDescription vmstate_virtio_iommu_device = {
+    .name = "virtio-iommu-device",
+    .minimum_version_id = 1,
+    .version_id = 1,
+    .post_load = iommu_post_load,
+    .fields = (VMStateField[]) {
+        VMSTATE_GTREE_DIRECT_KEY_V(domains, VirtIOIOMMU, 1,
+                                   &vmstate_domain, VirtIOIOMMUDomain),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static const VMStateDescription vmstate_virtio_iommu = {
     .name = "virtio-iommu",
     .minimum_version_id = 1,
+    .priority = MIG_PRI_IOMMU,
     .version_id = 1,
     .fields = (VMStateField[]) {
         VMSTATE_VIRTIO_DEVICE,
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 08/10] virtio-iommu-pci: Add virtio iommu pci support
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (6 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 07/10] virtio-iommu: Support migration Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-14 13:27 ` [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings Eric Auger
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

This patch adds virtio-iommu-pci, which is the PCI proxy for
the virtio-iommu device.

Non-DT integration is not yet supported by the kernel,
so the machine must implement a hotplug handler for the
virtio-iommu-pci device that creates the device tree iommu-map
bindings as documented in the kernel documentation:

Documentation/devicetree/bindings/virtio/iommu.txt
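
For reference, a minimal invocation sketch on the ARM virt machine
(the kernel image and disk paths are placeholders; a direct kernel
boot, i.e. DT mode without firmware, is assumed):

    qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1G -nographic \
        -kernel Image -append 'console=ttyAMA0 root=/dev/vda' \
        -device virtio-iommu-pci \
        -device virtio-blk-pci,iommu_platform=on,disable-legacy=on,drive=hd0 \
        -drive if=none,id=hd0,file=guest.img,format=raw

iommu_platform=on makes the virtio-blk device honour the IOMMU
instead of bypassing it.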

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

---
v15 -> v16:
- made the device not hotpluggable

v14 -> v15:
- check whether the machine implements a hotplug handler
  for the virtio-iommu-pci

v13 -> v14:
- add device_class_set_props

v11 -> v12:
- added Jean's R-b
- remove the array of intervals; it may be reintroduced later

v10 -> v11:
- add the reserved_regions array property

v9 -> v10:
- include "hw/qdev-properties.h" header

v8 -> v9:
- add the msi-bypass property
- create virtio-iommu-pci.c
---
 hw/virtio/Makefile.objs          |   1 +
 hw/virtio/virtio-iommu-pci.c     | 104 +++++++++++++++++++++++++++++++
 include/hw/pci/pci.h             |   1 +
 include/hw/virtio/virtio-iommu.h |   1 +
 qdev-monitor.c                   |   1 +
 5 files changed, 108 insertions(+)
 create mode 100644 hw/virtio/virtio-iommu-pci.c

diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 2fd9da7410..4e4d39a0a4 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -29,6 +29,7 @@ obj-$(CONFIG_VIRTIO_INPUT_HOST) += virtio-input-host-pci.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio-input-pci.o
 obj-$(CONFIG_VIRTIO_RNG) += virtio-rng-pci.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio-balloon-pci.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu-pci.o
 obj-$(CONFIG_VIRTIO_9P) += virtio-9p-pci.o
 obj-$(CONFIG_VIRTIO_SCSI) += virtio-scsi-pci.o
 obj-$(CONFIG_VIRTIO_BLK) += virtio-blk-pci.o
diff --git a/hw/virtio/virtio-iommu-pci.c b/hw/virtio/virtio-iommu-pci.c
new file mode 100644
index 0000000000..3dfbf55b47
--- /dev/null
+++ b/hw/virtio/virtio-iommu-pci.c
@@ -0,0 +1,104 @@
+/*
+ * Virtio IOMMU PCI Bindings
+ *
+ * Copyright (c) 2019 Red Hat, Inc.
+ * Written by Eric Auger
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2 or
+ *  (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+
+#include "virtio-pci.h"
+#include "hw/virtio/virtio-iommu.h"
+#include "hw/qdev-properties.h"
+#include "qapi/error.h"
+#include "hw/boards.h"
+
+typedef struct VirtIOIOMMUPCI VirtIOIOMMUPCI;
+
+/*
+ * virtio-iommu-pci: This extends VirtioPCIProxy.
+ *
+ */
+#define VIRTIO_IOMMU_PCI(obj) \
+        OBJECT_CHECK(VirtIOIOMMUPCI, (obj), TYPE_VIRTIO_IOMMU_PCI)
+
+struct VirtIOIOMMUPCI {
+    VirtIOPCIProxy parent_obj;
+    VirtIOIOMMU vdev;
+};
+
+static Property virtio_iommu_pci_properties[] = {
+    DEFINE_PROP_UINT32("class", VirtIOPCIProxy, class_code, 0),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_iommu_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+    VirtIOIOMMUPCI *dev = VIRTIO_IOMMU_PCI(vpci_dev);
+    DeviceState *vdev = DEVICE(&dev->vdev);
+
+    if (!qdev_get_machine_hotplug_handler(DEVICE(vpci_dev))) {
+        MachineClass *mc = MACHINE_GET_CLASS(qdev_get_machine());
+
+        error_setg(errp,
+                   "%s machine fails to create iommu-map device tree bindings",
+                   mc->name);
+        error_append_hint(errp,
+                          "Check your machine implements a hotplug handler "
+                          "for the virtio-iommu-pci device\n");
+        error_append_hint(errp, "Check the guest is booted without FW or with "
+                          "-no-acpi\n");
+        return;
+    }
+    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+    object_property_set_link(OBJECT(dev),
+                             OBJECT(pci_get_bus(&vpci_dev->pci_dev)),
+                             "primary-bus", errp);
+    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
+}
+
+static void virtio_iommu_pci_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+    k->realize = virtio_iommu_pci_realize;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+    device_class_set_props(dc, virtio_iommu_pci_properties);
+    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_IOMMU;
+    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
+    pcidev_k->class_id = PCI_CLASS_OTHERS;
+    dc->hotpluggable = false;
+}
+
+static void virtio_iommu_pci_instance_init(Object *obj)
+{
+    VirtIOIOMMUPCI *dev = VIRTIO_IOMMU_PCI(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_IOMMU);
+}
+
+static const VirtioPCIDeviceTypeInfo virtio_iommu_pci_info = {
+    .base_name             = TYPE_VIRTIO_IOMMU_PCI,
+    .generic_name          = "virtio-iommu-pci",
+    .transitional_name     = "virtio-iommu-pci-transitional",
+    .non_transitional_name = "virtio-iommu-pci-non-transitional",
+    .instance_size = sizeof(VirtIOIOMMUPCI),
+    .instance_init = virtio_iommu_pci_instance_init,
+    .class_init    = virtio_iommu_pci_class_init,
+};
+
+static void virtio_iommu_pci_register(void)
+{
+    virtio_pci_types_register(&virtio_iommu_pci_info);
+}
+
+type_init(virtio_iommu_pci_register)
+
+
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 2acd8321af..cfedf5a995 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -86,6 +86,7 @@ extern bool pci_available;
 #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
 #define PCI_DEVICE_ID_VIRTIO_VSOCK       0x1012
 #define PCI_DEVICE_ID_VIRTIO_PMEM        0x1013
+#define PCI_DEVICE_ID_VIRTIO_IOMMU       0x1014
 
 #define PCI_VENDOR_ID_REDHAT             0x1b36
 #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
diff --git a/include/hw/virtio/virtio-iommu.h b/include/hw/virtio/virtio-iommu.h
index ae88f730cf..6f67f1020a 100644
--- a/include/hw/virtio/virtio-iommu.h
+++ b/include/hw/virtio/virtio-iommu.h
@@ -25,6 +25,7 @@
 #include "hw/pci/pci.h"
 
 #define TYPE_VIRTIO_IOMMU "virtio-iommu-device"
+#define TYPE_VIRTIO_IOMMU_PCI "virtio-iommu-device-base"
 #define VIRTIO_IOMMU(obj) \
         OBJECT_CHECK(VirtIOIOMMU, (obj), TYPE_VIRTIO_IOMMU)
 
diff --git a/qdev-monitor.c b/qdev-monitor.c
index 8ce71a206b..dbbe92dfa1 100644
--- a/qdev-monitor.c
+++ b/qdev-monitor.c
@@ -67,6 +67,7 @@ static const QDevAlias qdev_alias_table[] = {
     { "virtio-input-host-ccw", "virtio-input-host", QEMU_ARCH_S390X },
     { "virtio-input-host-pci", "virtio-input-host",
             QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-iommu-pci", "virtio-iommu", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
     { "virtio-keyboard-ccw", "virtio-keyboard", QEMU_ARCH_S390X },
     { "virtio-keyboard-pci", "virtio-keyboard",
             QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (7 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 08/10] virtio-iommu-pci: Add virtio iommu pci support Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-21 14:25   ` Peter Maydell
  2020-02-14 13:27 ` [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files Eric Auger
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

Adds the "virtio,pci-iommu" node in the host bridge node and
the RID mapping, excluding the IOMMU RID.

This is done in the virtio-iommu-pci hotplug handler, which
gets called only if no firmware is loaded or if -no-acpi is
passed on the command line. As non-DT integration is
not yet supported by the kernel, we must make sure we
are in DT mode. This limitation will be removed as soon
as the topology description feature gets supported.
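
As an illustration, assuming the IOMMU ends up at PCI address 00:01.0
(BDF 0x8; both the BDF and the phandle value below are hypothetical),
the generated fragment would look roughly like:

    pcie@10000000 {
        ...
        iommu-map = <0x0 0x8001 0x0 0x8>,
                    <0x9 0x8001 0x9 0xfff7>;

        virtio_iommu@8 {
            compatible = "virtio,pci-iommu";
            reg = <0x800 0x0 0x0 0x0 0x0>;
            #iommu-cells = <0x1>;
            phandle = <0x8001>;
        };
    };

The two iommu-map entries cover RIDs [0x0, 0x8) and [0x9, 0x10000),
leaving out the IOMMU's own RID 0x8.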

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v15 -> v16:
- rename create_virtio_iommu into create_virtio_iommu_dt_bindings

v14 -> v15:
- only support the hotplug handler for virtio-iommu-pci if
  dt can be guaranteed

v11 -> v12:
- Added Jean's R-b

v10 -> v11:
- remove msi_bypass

v8 -> v9:
- disable msi-bypass property
- addition of the subnode is handled in the hotplug handler
  and the IOMMU RID is not imposed anymore

v6 -> v7:
- align to the smmu instantiation code

v4 -> v5:
- VirtMachineClass no_iommu added in this patch
- Use object_resolve_path_type
---
 hw/arm/virt.c         | 57 +++++++++++++++++++++++++++++++++++++------
 include/hw/arm/virt.h |  2 ++
 2 files changed, 52 insertions(+), 7 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index f788fe27d6..cfe317922f 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -32,6 +32,7 @@
 #include "qemu-common.h"
 #include "qemu/units.h"
 #include "qemu/option.h"
+#include "monitor/qdev.h"
 #include "qapi/error.h"
 #include "hw/sysbus.h"
 #include "hw/boards.h"
@@ -54,6 +55,7 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "hw/pci-host/gpex.h"
+#include "hw/virtio/virtio-pci.h"
 #include "hw/arm/sysbus-fdt.h"
 #include "hw/platform-bus.h"
 #include "hw/qdev-properties.h"
@@ -71,6 +73,7 @@
 #include "hw/mem/pc-dimm.h"
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/generic_event_device.h"
+#include "hw/virtio/virtio-iommu.h"
 
 #define DEFINE_VIRT_MACHINE_LATEST(major, minor, latest) \
     static void virt_##major##_##minor##_class_init(ObjectClass *oc, \
@@ -1180,6 +1183,30 @@ static void create_smmu(const VirtMachineState *vms,
     g_free(node);
 }
 
+static void create_virtio_iommu_dt_bindings(VirtMachineState *vms, Error **errp)
+{
+    const char compat[] = "virtio,pci-iommu";
+    uint16_t bdf = vms->virtio_iommu_bdf;
+    char *node;
+
+    vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
+
+    node = g_strdup_printf("%s/virtio_iommu@%d", vms->pciehb_nodename, bdf);
+    qemu_fdt_add_subnode(vms->fdt, node);
+    qemu_fdt_setprop(vms->fdt, node, "compatible", compat, sizeof(compat));
+    qemu_fdt_setprop_sized_cells(vms->fdt, node, "reg",
+                                 1, bdf << 8, 1, 0, 1, 0,
+                                 1, 0, 1, 0);
+
+    qemu_fdt_setprop_cell(vms->fdt, node, "#iommu-cells", 1);
+    qemu_fdt_setprop_cell(vms->fdt, node, "phandle", vms->iommu_phandle);
+    g_free(node);
+
+    qemu_fdt_setprop_cells(vms->fdt, vms->pciehb_nodename, "iommu-map",
+                           0x0, vms->iommu_phandle, 0x0, bdf,
+                           bdf + 1, vms->iommu_phandle, bdf + 1, 0xffff - bdf);
+}
+
 static void create_pcie(VirtMachineState *vms)
 {
     hwaddr base_mmio = vms->memmap[VIRT_PCIE_MMIO].base;
@@ -1258,7 +1285,7 @@ static void create_pcie(VirtMachineState *vms)
         }
     }
 
-    nodename = g_strdup_printf("/pcie@%" PRIx64, base);
+    nodename = vms->pciehb_nodename = g_strdup_printf("/pcie@%" PRIx64, base);
     qemu_fdt_add_subnode(vms->fdt, nodename);
     qemu_fdt_setprop_string(vms->fdt, nodename,
                             "compatible", "pci-host-ecam-generic");
@@ -1301,13 +1328,16 @@ static void create_pcie(VirtMachineState *vms)
     if (vms->iommu) {
         vms->iommu_phandle = qemu_fdt_alloc_phandle(vms->fdt);
 
-        create_smmu(vms, pci->bus);
-
-        qemu_fdt_setprop_cells(vms->fdt, nodename, "iommu-map",
-                               0x0, vms->iommu_phandle, 0x0, 0x10000);
+        switch (vms->iommu) {
+        case VIRT_IOMMU_SMMUV3:
+            create_smmu(vms, pci->bus);
+            qemu_fdt_setprop_cells(vms->fdt, nodename, "iommu-map",
+                                   0x0, vms->iommu_phandle, 0x0, 0x10000);
+            break;
+        default:
+            g_assert_not_reached();
+        }
     }
-
-    g_free(nodename);
 }
 
 static void create_platform_bus(VirtMachineState *vms)
@@ -1976,6 +2006,13 @@ static void virt_machine_device_plug_cb(HotplugHandler *hotplug_dev,
     if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
         virt_memory_plug(hotplug_dev, dev, errp);
     }
+    if (object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_IOMMU_PCI)) {
+        PCIDevice *pdev = PCI_DEVICE(dev);
+
+        vms->iommu = VIRT_IOMMU_VIRTIO;
+        vms->virtio_iommu_bdf = pci_get_bdf(pdev);
+        create_virtio_iommu_dt_bindings(vms, errp);
+    }
 }
 
 static void virt_machine_device_unplug_request_cb(HotplugHandler *hotplug_dev,
@@ -1992,7 +2029,13 @@ static HotplugHandler *virt_machine_get_hotplug_handler(MachineState *machine,
        (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM))) {
         return HOTPLUG_HANDLER(machine);
     }
+    if (object_dynamic_cast(OBJECT(dev), TYPE_VIRTIO_IOMMU_PCI)) {
+        VirtMachineState *vms = VIRT_MACHINE(machine);
 
+        if (!vms->bootinfo.firmware_loaded || !acpi_enabled) {
+            return HOTPLUG_HANDLER(machine);
+        }
+    }
     return NULL;
 }
 
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index 71508bf40c..02f500cb8e 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -125,8 +125,10 @@ typedef struct {
     bool virt;
     int32_t gic_version;
     VirtIOMMUType iommu;
+    uint16_t virtio_iommu_bdf;
     struct arm_boot_info bootinfo;
     MemMapEntry *memmap;
+    char *pciehb_nodename;
     const int *irqmap;
     int smp_cpus;
     void *fdt;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (8 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings Eric Auger
@ 2020-02-14 13:27 ` Eric Auger
  2020-02-21 14:26   ` Peter Maydell
  2020-02-21 14:27 ` [PATCH v16 00/10] VIRTIO-IOMMU device Peter Maydell
  2020-02-27 11:17 ` Daniel P. Berrangé
  11 siblings, 1 reply; 25+ messages in thread
From: Eric Auger @ 2020-02-14 13:27 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, qemu-devel, qemu-arm, peter.maydell,
	jean-philippe, dgilbert, quintela, mst, peterx
  Cc: kevin.tian, bharatb.linux, tnowicki

Add a new "virtio-iommu" section with the new files
related to this device.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index c7717df720..b7a7a18737 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1623,6 +1623,12 @@ F: hw/input/virtio-input*.c
 F: include/hw/virtio/virtio-input.h
 F: contrib/vhost-user-input/*
 
+virtio-iommu
+M: Eric Auger <eric.auger@redhat.com>
+S: Maintained
+F: hw/virtio/virtio-iommu*.c
+F: include/hw/virtio/virtio-iommu.h
+
 virtio-serial
 M: Laurent Vivier <lvivier@redhat.com>
 R: Amit Shah <amit@kernel.org>
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings
  2020-02-14 13:27 ` [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings Eric Auger
@ 2020-02-21 14:25   ` Peter Maydell
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Maydell @ 2020-02-21 14:25 UTC (permalink / raw)
  To: Eric Auger
  Cc: Jean-Philippe Brucker, Tian, Kevin, tnowicki, Juan Quintela,
	Michael S. Tsirkin, QEMU Developers, Peter Xu,
	Dr. David Alan Gilbert, bharatb.linux, qemu-arm, Eric Auger

On Fri, 14 Feb 2020 at 13:29, Eric Auger <eric.auger@redhat.com> wrote:
>
> Adds the "virtio,pci-iommu" node in the host bridge node and
> the RID mapping, excluding the IOMMU RID.
>
> This is done in the virtio-iommu-pci hotplug handler which
> gets called only if no firmware is loaded or if -no-acpi is
> passed on the command line. As non DT integration is
> not yet supported by the kernel we must make sure we
> are in DT mode. This limitation will be removed as soon
> as the topology description feature gets supported.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files
  2020-02-14 13:27 ` [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files Eric Auger
@ 2020-02-21 14:26   ` Peter Maydell
  0 siblings, 0 replies; 25+ messages in thread
From: Peter Maydell @ 2020-02-21 14:26 UTC (permalink / raw)
  To: Eric Auger
  Cc: Jean-Philippe Brucker, Tian, Kevin, tnowicki, Juan Quintela,
	Michael S. Tsirkin, QEMU Developers, Peter Xu,
	Dr. David Alan Gilbert, bharatb.linux, qemu-arm, Eric Auger

On Fri, 14 Feb 2020 at 13:29, Eric Auger <eric.auger@redhat.com> wrote:
>
> Add a new "virtio-iommu" section with the new files
> related to this device.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> ---
>  MAINTAINERS | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c7717df720..b7a7a18737 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1623,6 +1623,12 @@ F: hw/input/virtio-input*.c
>  F: include/hw/virtio/virtio-input.h
>  F: contrib/vhost-user-input/*
>
> +virtio-iommu
> +M: Eric Auger <eric.auger@redhat.com>
> +S: Maintained
> +F: hw/virtio/virtio-iommu*.c
> +F: include/hw/virtio/virtio-iommu.h
> +
>  virtio-serial
>  M: Laurent Vivier <lvivier@redhat.com>
>  R: Amit Shah <amit@kernel.org>
> --

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (9 preceding siblings ...)
  2020-02-14 13:27 ` [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files Eric Auger
@ 2020-02-21 14:27 ` Peter Maydell
  2020-02-23  8:17   ` Michael S. Tsirkin
  2020-02-27 11:17 ` Daniel P. Berrangé
  11 siblings, 1 reply; 25+ messages in thread
From: Peter Maydell @ 2020-02-21 14:27 UTC (permalink / raw)
  To: Eric Auger
  Cc: Jean-Philippe Brucker, Tian, Kevin, tnowicki, Juan Quintela,
	Michael S. Tsirkin, QEMU Developers, Peter Xu,
	Dr. David Alan Gilbert, bharatb.linux, qemu-arm, Eric Auger

On Fri, 14 Feb 2020 at 13:28, Eric Auger <eric.auger@redhat.com> wrote:
>
> This series implements the QEMU virtio-iommu device.
>
> This matches the v0.12 spec (voted) and the corresponding
> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
> are resolved for DT integration. The virtio-iommu can be
> instantiated in ARM virt using:
>
> "-device virtio-iommu-pci".
>
> Non DT mode is not yet supported as it has non resolved kernel
> dependencies [1].
>
> This feature targets 5.0.
>
> Integration with vhost devices and vfio devices is not part
> of this series. Please follow Bharat's respins [2].

I think everything here has reviewed-by tags now -- does
anybody still want more time to review it, and what
is the preference for how it goes into master?

thanks
-- PMM


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-02-21 14:27 ` [PATCH v16 00/10] VIRTIO-IOMMU device Peter Maydell
@ 2020-02-23  8:17   ` Michael S. Tsirkin
  0 siblings, 0 replies; 25+ messages in thread
From: Michael S. Tsirkin @ 2020-02-23  8:17 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Jean-Philippe Brucker, Tian, Kevin, tnowicki, Juan Quintela,
	QEMU Developers, Peter Xu, Dr. David Alan Gilbert, Eric Auger,
	bharatb.linux, qemu-arm, Eric Auger

On Fri, Feb 21, 2020 at 02:27:30PM +0000, Peter Maydell wrote:
> On Fri, 14 Feb 2020 at 13:28, Eric Auger <eric.auger@redhat.com> wrote:
> >
> > This series implements the QEMU virtio-iommu device.
> >
> > This matches the v0.12 spec (voted) and the corresponding
> > virtio-iommu driver upstreamed in 5.3. All kernel dependencies
> > are resolved for DT integration. The virtio-iommu can be
> > instantiated in ARM virt using:
> >
> > "-device virtio-iommu-pci".
> >
> > Non DT mode is not yet supported as it has non resolved kernel
> > dependencies [1].
> >
> > This feature targets 5.0.
> >
> > Integration with vhost devices and vfio devices is not part
> > of this series. Please follow Bharat's respins [2].
> 
> I think everything here has reviewed-by tags now -- does
> anybody still want more time to review it, and what
> is the preference for how it goes into master?
> 
> thanks
> -- PMM

I guess I'll pick it up, most code seems to be virtio related.

Thanks everyone!

-- 
MST



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
                   ` (10 preceding siblings ...)
  2020-02-21 14:27 ` [PATCH v16 00/10] VIRTIO-IOMMU device Peter Maydell
@ 2020-02-27 11:17 ` Daniel P. Berrangé
  2020-02-27 13:49   ` Auger Eric
  11 siblings, 1 reply; 25+ messages in thread
From: Daniel P. Berrangé @ 2020-02-27 11:17 UTC (permalink / raw)
  To: Eric Auger
  Cc: peter.maydell, kevin.tian, tnowicki, jean-philippe, quintela,
	qemu-devel, peterx, dgilbert, bharatb.linux, qemu-arm, mst,
	eric.auger.pro

On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
> This series implements the QEMU virtio-iommu device.
> 
> This matches the v0.12 spec (voted) and the corresponding
> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
> are resolved for DT integration. The virtio-iommu can be
> instantiated in ARM virt using:
> 
> "-device virtio-iommu-pci".

Is there any more documentation besides this?

I'm wondering about the intended usage of this, and its relation
or pros/cons vs other iommu devices

You mention Arm here, but can this virtio-iommu-pci be used on
ppc64, s390x, x86_64 too?  If so, is it a better choice than
using intel-iommu on x86_64?  Anything else that is relevant
for management applications to know about when using this?


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-02-27 11:17 ` Daniel P. Berrangé
@ 2020-02-27 13:49   ` Auger Eric
  2020-03-03  3:23     ` Zhangfei Gao
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2020-02-27 13:49 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: peter.maydell, kevin.tian, tnowicki, quintela, mst, qemu-devel,
	peterx, dgilbert, bharatb.linux, qemu-arm, jean-philippe,
	eric.auger.pro

Hi Daniel,

On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
> On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
>> This series implements the QEMU virtio-iommu device.
>>
>> This matches the v0.12 spec (voted) and the corresponding
>> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
>> are resolved for DT integration. The virtio-iommu can be
>> instantiated in ARM virt using:
>>
>> "-device virtio-iommu-pci".
> 
> Is there any more documentation besides this ?

not yet in qemu.
> 
> I'm wondering about the intended usage of this, and its relation
> or pros/cons vs other iommu devices

Maybe if you want to catch up on the topic, looking at the very first
kernel RFC may be a good starting point. Motivation, pros & cons were
discussed in that thread (hey, April 2017!)
https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html

on ARM we have SMMUv3 emulation. But the VFIO integration is not
possible because SMMU does not have any "caching mode" and my nested
paging kernel series is blocked. So the only solution to integrate with
VFIO is the looming virtio-iommu.

In general the pros that were put forward are: virtio-iommu is
architecture agnostic, removes the burden to accurately model complex
device states, driver can support virtualization specific optimizations
without being constrained by production driver maintenance. Cons are perf
and mem footprint if we do not consider any optimization.

You can have a look at

http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf

> 
> You mention Arm here, but can this virtio-iommu-pci be used on
> ppc64, s390x, x86_64 too?

Not yet. At the moment we are stuck on non-DT integration at the
kernel level. We can instantiate the device in machvirt with DT boot only.

Work is ongoing on the kernel side by Jean-Philippe to support
non-DT integration:

[1] [PATCH 0/3] virtio-iommu on non-devicetree platforms
(https://www.spinics.net/lists/linux-virtualization/msg41391.html)

This does not rely on ACPI anymore.

Originally the plan was to integrate with ACPI (IORT), but Michael pushed
to pass the binding info between the protected devices and the IOMMU
through the PCI cfg space. This could also serve environments where we
do not have ACPI. I think some people are reluctant to expose the
virtio-iommu in the [IORT] ACPI table.

But definitely the end goal is to support the virtio-iommu for other
archs. Integration with x86 is already working based on IORT or [1].


> If so, is it a better choice than
> using intel-iommu on x86_64? Anything else that is relevant
> for management applications to know about when using this?

I think we are still at an early stage and this would be premature even
if feasible.

Hope it helps

Thanks

Eric
> 
> 
> Regards,
> Daniel
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-02-27 13:49   ` Auger Eric
@ 2020-03-03  3:23     ` Zhangfei Gao
  2020-03-03  9:40       ` Auger Eric
  0 siblings, 1 reply; 25+ messages in thread
From: Zhangfei Gao @ 2020-03-03  3:23 UTC (permalink / raw)
  To: Auger Eric
  Cc: Peter Maydell, kevin.tian, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	jean-philippe, eric.auger.pro

Hi Eric

On Thu, Feb 27, 2020 at 9:50 PM Auger Eric <eric.auger@redhat.com> wrote:
>
> Hi Daniel,
>
> On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
> > On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
> >> This series implements the QEMU virtio-iommu device.
> >>
> >> This matches the v0.12 spec (voted) and the corresponding
> >> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
> >> are resolved for DT integration. The virtio-iommu can be
> >> instantiated in ARM virt using:
> >>
> >> "-device virtio-iommu-pci".
> >
> > Is there any more documentation besides this ?
>
> not yet in qemu.
> >
> > I'm wondering about the intended usage of this, and its relation
> > or pros/cons vs other iommu devices
>
> Maybe if you want to catch up on the topic, looking at the very first
> kernel RFC may be a good starting point. Motivation, pros & cons were
> discussed in that thread (hey, April 2017!)
> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html
>
> on ARM we have SMMUv3 emulation. But the VFIO integration is not
> possible because SMMU does not have any "caching mode" and my nested
> paging kernel series is blocked. So the only solution to integrate with
> VFIO is the looming virtio-iommu.
>
> In general the pros that were put forward are: virtio-iommu is
> architecture agnostic, removes the burden to accurately model complex
> device states, driver can support virtualization specific optimizations
> without being constrained by production driver maintenance. Cons are perf
> and mem footprint if we do not consider any optimization.
>
> You can have a look at
>
> http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf
>
Thanks for the patches.

Could I ask one question?
To support vSVA and PASID in the guest, which direction do you recommend:
virtio-iommu or vSMMU (your nested paging)?

Do we still have any obstacles?
Would you mind giving some breakdown?
Jean mentioned PASID is still not supported in QEMU.

Thanks


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-03  3:23     ` Zhangfei Gao
@ 2020-03-03  9:40       ` Auger Eric
  2020-03-04  6:08         ` Zhangfei Gao
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2020-03-03  9:40 UTC (permalink / raw)
  To: Zhangfei Gao
  Cc: Peter Maydell, kevin.tian, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	jean-philippe, eric.auger.pro

Hi Zhangfei,
On 3/3/20 4:23 AM, Zhangfei Gao wrote:
> Hi Eric
> 
> On Thu, Feb 27, 2020 at 9:50 PM Auger Eric <eric.auger@redhat.com> wrote:
>>
>> Hi Daniel,
>>
>> On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
>>> On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
>>>> This series implements the QEMU virtio-iommu device.
>>>>
>>>> This matches the v0.12 spec (voted) and the corresponding
>>>> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
>>>> are resolved for DT integration. The virtio-iommu can be
>>>> instantiated in ARM virt using:
>>>>
>>>> "-device virtio-iommu-pci".
>>>
>>> Is there any more documentation besides this ?
>>
>> not yet in qemu.
>>>
>>> I'm wondering about the intended usage of this, and its relation
>>> or pros/cons vs other iommu devices
>>
>> Maybe if you want to catch up on the topic, looking at the very first
>> kernel RFC may be a good starting point. Motivation, pros & cons were
>> discussed in that thread (hey, April 2017!)
>> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html
>>
>> on ARM we have SMMUv3 emulation. But the VFIO integration is not
>> possible because SMMU does not have any "caching mode" and my nested
>> paging kernel series is blocked. So the only solution to integrate with
>> VFIO is the looming virtio-iommu.
>>
>> In general the pros that were put forward are: virtio-iommu is
>> architecture agnostic, removes the burden to accurately model complex
>> device states, driver can support virtualization specific optimizations
>> without being constrained by production driver maintenance. Cons are perf
>> and mem footprint if we do not consider any optimization.
>>
>> You can have a look at
>>
>> http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf
>>
> Thanks for the patches.
> 
> Could I ask one question?
> To support vSVA and PASID in the guest, which direction do you recommend:
> virtio-iommu or vSMMU (your nested paging)?
> 
> Do we still have any obstacles?
you can ask the question but not sure I can answer ;-)

1) SMMUv3 2-stage implementation is blocked by Will at the kernel level.

Despite this situation I may/can respin as Marvell said they were
interested in this effort. If you are also interested (I know you
tested it several times and I am grateful to you for that), please reply
to:
[PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
(https://patchwork.kernel.org/cover/11039871/) and say you are
interested in that work so that maintainers are aware there are
potential users.

At the moment I have not supported multiple CDs because it introduced
other dependencies.

2) virtio-iommu

So only virtio-iommu DT boot on machvirt is currently supported. For
non-DT, Jean respinned his kernel series
"[PATCH v2 0/3] virtio-iommu on x86 and non-devicetree platforms" as you
may know. However, non-DT integration is still controversial. Michael is
pushing for putting the binding info in the PCI config space. Joerg
yesterday challenged this solution and said he would prefer ACPI
integration. ACPI support depends on an ACPI spec update & vote anyway.

To support PASID at virtio-iommu level you also need virtio-iommu API
extensions to be proposed and written + kernel work. So that's a long
road. I will let Jean-Philippe comment on that.

I would just say that Intel is working on nested paging solution with
their emulated intel-iommu. We can help them getting that upstream and
partly benefit from this work.

> Would you mind giving some breakdown?
> Jean mentioned PASID is still not supported in QEMU.
Do you mean support of multiple CDs in the emulated SMMU? That's a thing
I could implement quite easily. What is more tricky is how to test it.

Thanks

Eric
> 
> Thanks
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-03  9:40       ` Auger Eric
@ 2020-03-04  6:08         ` Zhangfei Gao
  2020-03-04  8:41           ` Auger Eric
  0 siblings, 1 reply; 25+ messages in thread
From: Zhangfei Gao @ 2020-03-04  6:08 UTC (permalink / raw)
  To: Auger Eric
  Cc: Peter Maydell, kevin.tian, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	jean-philippe, eric.auger.pro

On Tue, Mar 3, 2020 at 5:41 PM Auger Eric <eric.auger@redhat.com> wrote:
>
> Hi Zhangfei,
> On 3/3/20 4:23 AM, Zhangfei Gao wrote:
> > Hi Eric
> >
> > On Thu, Feb 27, 2020 at 9:50 PM Auger Eric <eric.auger@redhat.com> wrote:
> >>
> >> Hi Daniel,
> >>
> >> On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
> >>> On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
> >>>> This series implements the QEMU virtio-iommu device.
> >>>>
> >>>> This matches the v0.12 spec (voted) and the corresponding
> >>>> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
> >>>> are resolved for DT integration. The virtio-iommu can be
> >>>> instantiated in ARM virt using:
> >>>>
> >>>> "-device virtio-iommu-pci".
> >>>
> >>> Is there any more documentation besides this ?
> >>
> >> not yet in qemu.
> >>>
> >>> I'm wondering about the intended usage of this, and its relation
> >>> or pros/cons vs other iommu devices
> >>
> >> Maybe if you want to catch up on the topic, looking at the very first
> >> kernel RFC may be a good starting point. Motivation, pros & cons were
> >> discussed in that thread (hey, April 2017!)
> >> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html
> >>
> >> on ARM we have SMMUv3 emulation. But the VFIO integration is not
> >> possible because SMMU does not have any "caching mode" and my nested
> >> paging kernel series is blocked. So the only solution to integrate with
> >> VFIO is the looming virtio-iommu.
> >>
> >> In general the pros that were put forward are: virtio-iommu is
> >> architecture agnostic, removes the burden to accurately model complex
> >> device states, driver can support virtualization specific optimizations
> >> without being constrained by production driver maintenance. Cons are perf
> >> and mem footprint if we do not consider any optimization.
> >>
> >> You can have a look at
> >>
> >> http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf
> >>
> > Thanks for the patches.
> >
> > Could I ask one question?
> > To support vSVA and PASID in the guest, which direction do you recommend:
> > virtio-iommu or vSMMU (your nested paging)?
> >
> > Do we still have any obstacles?
> you can ask the question but not sure I can answer ;-)
>
> 1) SMMUv3 2-stage implementation is blocked by Will at the kernel level.
>
> Despite this situation I may/can respin as Marvell said they were
> interested in this effort. If you are also interested (I know you
> tested it several times and I am grateful to you for that), please reply
> to:
> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> (https://patchwork.kernel.org/cover/11039871/) and say you are
> interested in that work so that maintainers are aware there are
> potential users.
>
> At the moment I have not supported multiple CDs because it introduced
> other dependencies.
>
> 2) virtio-iommu
>
> So only virtio-iommu DT boot on machvirt is currently supported. For
> non-DT, Jean respinned his kernel series
> "[PATCH v2 0/3] virtio-iommu on x86 and non-devicetree platforms" as you
> may know. However, non-DT integration is still controversial. Michael is
> pushing for putting the binding info in the PCI config space. Joerg
> yesterday challenged this solution and said he would prefer ACPI
> integration. ACPI support depends on an ACPI spec update & vote anyway.
>
> To support PASID at virtio-iommu level you also need virtio-iommu API
> extensions to be proposed and written + kernel work. So that's a long
> road. I will let Jean-Philippe comment on that.
>
> I would just say that Intel is working on nested paging solution with
> their emulated intel-iommu. We can help them getting that upstream and
> partly benefit from this work.
>
> > Would you mind giving some breakdown?
> > Jean mentioned PASID is still not supported in QEMU.
> Do you mean support of multiple CDs in the emulated SMMU? That's a thing
> I could implement quite easily. What is more tricky is how to test it.

Thanks Eric

I discussed this with Jean before; here are some obstacles for vSVA via nested paging.
Do you think they are still big issues?

Copy "
* PASID support in QEMU, I don't think there is anything yet
// this is not a big issue as your comments.

* Page response support in VFIO and QEMU. With Eric's series we can
inject recoverable faults into the guest, but there is no channel for
the guest to RESUME the stall after fixing it.

* We can't use DVM in nested mode unless the VMID is shared with the
CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
allocator, just like we do for the ASID allocator. I haven't yet
investigated how to do that. It's possible to do vSVA without DVM
though, by sending all TLB invalidations through the SMMU command queue.
"

Thanks


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-04  6:08         ` Zhangfei Gao
@ 2020-03-04  8:41           ` Auger Eric
  2020-03-04 16:47             ` Jean-Philippe Brucker
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2020-03-04  8:41 UTC (permalink / raw)
  To: Zhangfei Gao
  Cc: Peter Maydell, kevin.tian, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	jean-philippe, eric.auger.pro

Hi Zhangfei,

On 3/4/20 7:08 AM, Zhangfei Gao wrote:
> On Tue, Mar 3, 2020 at 5:41 PM Auger Eric <eric.auger@redhat.com> wrote:
>>
>> Hi Zhangfei,
>> On 3/3/20 4:23 AM, Zhangfei Gao wrote:
>>> Hi Eric
>>>
>>> On Thu, Feb 27, 2020 at 9:50 PM Auger Eric <eric.auger@redhat.com> wrote:
>>>>
>>>> Hi Daniel,
>>>>
>>>> On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
>>>>> On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
>>>>>> This series implements the QEMU virtio-iommu device.
>>>>>>
>>>>>> This matches the v0.12 spec (voted) and the corresponding
>>>>>> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
>>>>>> are resolved for DT integration. The virtio-iommu can be
>>>>>> instantiated in ARM virt using:
>>>>>>
>>>>>> "-device virtio-iommu-pci".
>>>>>
>>>>> Is there any more documentation besides this ?
>>>>
>>>> not yet in qemu.
>>>>>
>>>>> I'm wondering about the intended usage of this, and its relation
>>>>> or pros/cons vs other iommu devices
>>>>
>>>> Maybe if you want to catch up on the topic, looking at the very first
>>>> kernel RFC may be a good starting point. Motivation, pros & cons were
>>>> discussed in that thread (hey, April 2017!)
>>>> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html
>>>>
>>>> on ARM we have SMMUv3 emulation. But the VFIO integration is not
>>>> possible because SMMU does not have any "caching mode" and my nested
>>>> paging kernel series is blocked. So the only solution to integrate with
>>>> VFIO is the looming virtio-iommu.
>>>>
>>>> In general the pros that were put forward are: virtio-iommu is
>>>> architecture agnostic, removes the burden to accurately model complex
>>>> device states, driver can support virtualization specific optimizations
>>>> without being constrained by production driver maintenance. Cons are perf
>>>> and mem footprint if we do not consider any optimization.
>>>>
>>>> You can have a look at
>>>>
>>>> http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf
>>>>
>>> Thanks for the patches.
>>>
>>> Could I ask one question?
>>> To support vSVA and PASID in the guest, which direction do you recommend:
>>> virtio-iommu or vSMMU (your nested paging)?
>>>
>>> Do we still have any obstacles?
>> you can ask the question but not sure I can answer ;-)
>>
>> 1) SMMUv3 2-stage implementation is blocked by Will at the kernel level.
>>
>> Despite this situation I may/can respin as Marvell said they were
>> interested in this effort. If you are also interested (I know you
>> tested it several times and I am grateful to you for that), please reply
>> to:
>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>> (https://patchwork.kernel.org/cover/11039871/) and say you are
>> interested in that work so that maintainers are aware there are
>> potential users.
>>
>> At the moment I have not supported multiple CDs because it introduced
>> other dependencies.
>>
>> 2) virtio-iommu
>>
>> So only virtio-iommu DT boot on machvirt is currently supported. For
>> non-DT, Jean respinned his kernel series
>> "[PATCH v2 0/3] virtio-iommu on x86 and non-devicetree platforms" as you
>> may know. However, non-DT integration is still controversial. Michael is
>> pushing for putting the binding info in the PCI config space. Joerg
>> yesterday challenged this solution and said he would prefer ACPI
>> integration. ACPI support depends on an ACPI spec update & vote anyway.
>>
>> To support PASID at virtio-iommu level you also need virtio-iommu API
>> extensions to be proposed and written + kernel work. So that's a long
>> road. I will let Jean-Philippe comment on that.
>>
>> I would just say that Intel is working on nested paging solution with
>> their emulated intel-iommu. We can help them getting that upstream and
>> partly benefit from this work.
>>
>>> Would you mind giving some breakdown?
>>> Jean mentioned PASID is still not supported in QEMU.
>> Do you mean support of multiple CDs in the emulated SMMU? That's a thing
>> I could implement quite easily. What is more tricky is how to test it.
> 
> Thanks Eric
> 
> I discussed this with Jean before; here are some obstacles for vSVA via nested paging.
> Do you think they are still big issues?
> 
> Copy "
> * PASID support in QEMU, I don't think there is anything yet
> // this is not a big issue as your comments.
> 
> * Page response support in VFIO and QEMU. With Eric's series we can
> inject recoverable faults into the guest, but there is no channel for
> the guest to RESUME the stall after fixing it.
I guess this matches a command sent through the SMMUv3 command queue
(CMD_PRI_RESP) that should be trapped by QEMU and injected into the
physical SMMU, right?

I think everybody misses that injection path and that's not specific to
virtio-iommu. PRS is not currently addressed by Intel's in-flight kernel
series ([PATCH V9 00/10] Nested Shared Virtual Address (SVA) VT-d
support) either.

I think the topic is complex enough to separate the concerns and try to
move forward in incremental steps, hence my efforts to push for the simple
nested use case. Can't you support vSVA without PRS first (I think this
is Intel's strategy too)?
> 
> * We can't use DVM in nested mode unless the VMID is shared with the
> CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> allocator, just like we do for the ASID allocator. I haven't yet
> investigated how to do that. It's possible to do vSVA without DVM
> though, by sending all TLB invalidations through the SMMU command queue.
> "
OK.

From the above arguments I am not sure there are technical blockers with the
nested paging implementation. For sure there are things that are not
supported, because I did not address this topic yet.

If I were to work on this, you did not answer about the testing feasibility.

Thanks

Eric
> 
> Thanks
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-04  8:41           ` Auger Eric
@ 2020-03-04 16:47             ` Jean-Philippe Brucker
  2020-03-05  2:56               ` Tian, Kevin
  0 siblings, 1 reply; 25+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-04 16:47 UTC (permalink / raw)
  To: Auger Eric
  Cc: Peter Maydell, kevin.tian, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	Zhangfei Gao, eric.auger.pro

On Wed, Mar 04, 2020 at 09:41:44AM +0100, Auger Eric wrote:
> >>> Could I ask one question?
> >>> To support vSVA and PASID in the guest, which direction do you recommend:
> >>> virtio-iommu or vSMMU (your nested paging)?
> >>>
> >>> Do we still have any obstacles?
> >> you can ask the question but not sure I can answer ;-)
> >>
> >> 1) SMMUv3 2-stage implementation is blocked by Will at the kernel level.
> >>
> >> Despite this situation I may/can respin as Marvell said they were
> >> interested in this effort.

Do you know if they want vSVA as well or only nested translation?

> >> If you are also interested (I know you
> >> tested it several times and I am grateful to you for that), please reply
> >> to:
> >> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> >> (https://patchwork.kernel.org/cover/11039871/) and say you are
> >> interested in that work so that maintainers are aware there are
> >> potential users.
> >>
> >> At the moment I have not supported multiple CDs because it introduced
> >> other dependencies.
> >>
> >> 2) virtio-iommu
> >>
> >> So only virtio-iommu DT boot on machvirt is currently supported. For
> >> non-DT, Jean respinned his kernel series
> >> "[PATCH v2 0/3] virtio-iommu on x86 and non-devicetree platforms" as you
> >> may know. However, non-DT integration is still controversial. Michael is
> >> pushing for putting the binding info in the PCI config space. Joerg
> >> yesterday challenged this solution and said he would prefer ACPI
> >> integration. ACPI support depends on an ACPI spec update & vote anyway.
> >>
> >> To support PASID at virtio-iommu level you also need virtio-iommu API
> >> extensions to be proposed and written + kernel work. So that's a long
> >> road. I will let Jean-Philippe comment on that.

Yeah, let's put that stuff on hold. vSVA with virtio-iommu requires about
the same amount of work in the host kernel as vSMMU, minus the NESTED_MSI
stuff. The device implementation would be simpler, but the guest driver is
difficult (I'd need to extract the CD table code from the SMMU driver
again). And obtaining better performance than vSMMU would then require
upstreaming vhost-iommu. I do have incomplete drafts and prototypes, but
I'll put them aside until users (other than hardware validation) show up
and actually need performance or things like unpinned stage-2.

> >> I would just say that Intel is working on a nested paging solution with
> >> their emulated intel-iommu. We can help them get that upstream and
> >> partly benefit from this work.
> >>
> >>> Would you mind giving some breakdown?
> >>> Jean mentioned PASID is still not supported in QEMU.
> >> Do you mean support for multiple CDs in the emulated SMMU? That's
> >> something I could implement quite easily. What is trickier is how to
> >> test it.
> > 
> > Thanks Eric
> > 
> > I discussed this with Jean before; here are some obstacles for vSVA via
> > nested paging. Do you think they are still big issues?
> > 
> > Copy "
> > * PASID support in QEMU, I don't think there is anything yet
> > // this is not a big issue, per your comments.
> > 
> > * Page response support in VFIO and QEMU. With Eric's series we can
> > inject recoverable faults into the guest, but there is no channel for
> > the guest to RESUME the stall after fixing it.
> I guess this matches a command sent through the SMMUv3 command queue
> (CMD_PRI_RESP) that should be trapped by QEMU and injected into the
> physical SMMU, right?
> 
> I think everybody misses that injection path, and that's not specific to
> virtio-iommu. PRS is not currently addressed by Intel's in-flight kernel
> series ([PATCH V9 00/10] Nested Shared Virtual Address (SVA) VT-d
> support) either.
> 
> I think the topic is complex enough to separate the concerns and move
> forward in incremental steps, hence my efforts to push for the simple
> nested use case. Can't you support vSVA without PRS first? (I think this
> is Intel's strategy too.)

Not really: for sharing guest process address spaces you need I/O page
faults. You can test PASID alone without PRI by using auxiliary domains in
the guest, so I'd advise starting with that, but it requires modifications
to the device driver.
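
Roughly, the guest-side test could look like this (a sketch against the
aux-domain API in <linux/iommu.h>; the IOVA and buffer arguments are made
up, and the final step of programming the PASID into the device is of
course device specific):

#include <linux/iommu.h>

/*
 * Sketch: give one device queue its own PASID-tagged address space
 * using auxiliary domains, so PASID can be exercised without PRI.
 * Everything mapped here stays pinned, hence no I/O page faults.
 */
static int setup_aux_queue(struct device *dev, dma_addr_t iova,
			   phys_addr_t buf, size_t size)
{
	struct iommu_domain *domain;
	int pasid, ret;

	ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_AUX);
	if (ret)
		return ret;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return -ENOMEM;

	ret = iommu_aux_attach_device(domain, dev);
	if (ret)
		goto err_free;

	/* The PASID the IOMMU driver picked for this aux domain */
	pasid = iommu_aux_get_pasid(domain, dev);

	ret = iommu_map(domain, iova, buf, size, IOMMU_READ | IOMMU_WRITE);
	if (ret)
		goto err_detach;

	/* Device specific: program @pasid into the queue's registers */
	return pasid;

err_detach:
	iommu_aux_detach_device(domain, dev);
err_free:
	iommu_domain_free(domain);
	return ret;
}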

> > 
> > * We can't use DVM in nested mode unless the VMID is shared with the
> > CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > allocator, just like we do for the ASID allocator. I haven't yet
> > investigated how to do that. It's possible to do vSVA without DVM
> > though, by sending all TLB invalidations through the SMMU command queue.
> > "

Hm, we're already mandating DVM for host SVA, so I'd say mandate it for
vSVA as well. We'd avoid a ton of context switches, especially for the zip
accelerator, which doesn't require ATC invalidations. The host needs to pin
the VMID allocated by KVM and write it into the endpoint's STE.
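
Something like this on the host side, mirroring what we do for shared
ASIDs (kvm_pin_vmid() is invented for illustration; only s2_cfg.vmid and
STRTAB_STE_2_S2VMID exist in the driver today):

/*
 * Hypothetical sketch: install the stage-2 VMID owned by KVM into the
 * endpoint's STE, so DVM broadcasts from the vCPUs also invalidate
 * SMMU TLB entries tagged with that VMID.
 */
static int arm_smmu_share_kvm_vmid(struct arm_smmu_domain *smmu_domain,
				   struct kvm *kvm, __le64 *ste)
{
	u16 vmid;
	int ret;

	/* Ask KVM not to recycle the VMID while the STE references it */
	ret = kvm_pin_vmid(kvm, &vmid);
	if (ret)
		return ret;

	smmu_domain->s2_cfg.vmid = vmid;
	ste[2] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_2_S2VMID, vmid));
	return 0;
}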

Thanks,
Jean

> OK.
> 
> From the above arguments I am not sure there are technical blockers to a
> nested paging implementation. For sure there are things that are not
> supported yet, because I have not addressed this topic.
> 
> Also, if I were to work on this, you did not answer my question about
> testing feasibility.
> 
> Thanks
> 
> Eric
> > 
> > Thanks
> > 
> 


^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-04 16:47             ` Jean-Philippe Brucker
@ 2020-03-05  2:56               ` Tian, Kevin
  2020-03-05  7:34                 ` Jean-Philippe Brucker
  0 siblings, 1 reply; 25+ messages in thread
From: Tian, Kevin @ 2020-03-05  2:56 UTC (permalink / raw)
  To: Jean-Philippe Brucker, Auger Eric
  Cc: Peter Maydell, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, bharatb.linux,
	qemu-arm, Wangzhou (B),
	Zhangfei Gao, eric.auger.pro

> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Sent: Thursday, March 5, 2020 12:47 AM
>
[...]
> > >
> > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > > allocator, just like we do for the ASID allocator. I haven't yet
> > > investigated how to do that. It's possible to do vSVA without DVM
> > > though, by sending all TLB invalidations through the SMMU command queue.
> > > "
> 
> Hm, we're already mandating DVM for host SVA, so I'd say mandate it for
> vSVA as well. We'd avoid a ton of context switches, especially for the zip
> accelerator, which doesn't require ATC invalidations. The host needs to pin
> the VMID allocated by KVM and write it into the endpoint's STE.
> 

Curious... what is DVM and how is it related to SVA? Is it SMMU specific?


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-05  2:56               ` Tian, Kevin
@ 2020-03-05  7:34                 ` Jean-Philippe Brucker
  2020-03-05  7:42                   ` Tian, Kevin
  0 siblings, 1 reply; 25+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-05  7:34 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Peter Maydell, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, Auger Eric,
	bharatb.linux, qemu-arm, Wangzhou (B),
	Zhangfei Gao, eric.auger.pro

On Thu, Mar 05, 2020 at 02:56:20AM +0000, Tian, Kevin wrote:
> > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > Sent: Thursday, March 5, 2020 12:47 AM
> >
> [...]
> > > >
> > > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > > CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > > > allocator, just like we do for the ASID allocator. I haven't yet
> > > > investigated how to do that. It's possible to do vSVA without DVM
> > > > though, by sending all TLB invalidations through the SMMU command queue.
> > > > "
> > 
> > Hm, we're already mandating DVM for host SVA, so I'd say mandate it for
> > vSVA as well. We'd avoid a ton of context switches, especially for the zip
> > accelerator, which doesn't require ATC invalidations. The host needs to pin
> > the VMID allocated by KVM and write it into the endpoint's STE.
> > 
> 
> Curious... what is DVM and how is it related to SVA? Is it SMMU specific?

Yes, it stands for "Distributed Virtual Memory", an Arm interconnect
protocol. When sharing a process address space, TLB invalidations from the
CPU are broadcast to the SMMU, so we don't have to send commands through
the SMMU command queue to invalidate IOTLBs. However, the ATCs of PCIe
endpoints do not participate in DVM and still have to be invalidated by
hand.
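
In code terms the SVA invalidation path then only needs the ATC side,
something like this (using the existing helper in
drivers/iommu/arm-smmu-v3.c; the callback itself is a sketch):

/*
 * Sketch of an SVA range-invalidation callback: the CPU TLBI for this
 * range was already broadcast over DVM, so the SMMU TLB is clean; only
 * the PCIe ATCs need explicit CMD_ATC_INV commands.
 */
static void arm_smmu_sva_inv_range(struct arm_smmu_domain *smmu_domain,
				   int pasid, unsigned long iova,
				   size_t size)
{
	/* No CMDQ_OP_TLBI_* needed here thanks to DVM */
	arm_smmu_atc_inv_domain(smmu_domain, pasid, iova, size);
}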

Thanks,
Jean


^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v16 00/10] VIRTIO-IOMMU device
  2020-03-05  7:34                 ` Jean-Philippe Brucker
@ 2020-03-05  7:42                   ` Tian, Kevin
  0 siblings, 0 replies; 25+ messages in thread
From: Tian, Kevin @ 2020-03-05  7:42 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: Peter Maydell, Daniel P. Berrangé,
	kenneth-lee-2012, tnowicki, Michael S. Tsirkin, quintela,
	zhangfei.gao, qemu-devel, peterx, dgilbert, Auger Eric,
	bharatb.linux, qemu-arm, Wangzhou (B),
	Zhangfei Gao, eric.auger.pro

> From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Sent: Thursday, March 5, 2020 3:34 PM
> 
> On Thu, Mar 05, 2020 at 02:56:20AM +0000, Tian, Kevin wrote:
> > > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > Sent: Thursday, March 5, 2020 12:47 AM
> > >
> > [...]
> > > > >
> > > > > * We can't use DVM in nested mode unless the VMID is shared with the
> > > > > CPU. For that we'll need the host SMMU driver to hook into the KVM VMID
> > > > > allocator, just like we do for the ASID allocator. I haven't yet
> > > > > investigated how to do that. It's possible to do vSVA without DVM
> > > > > though, by sending all TLB invalidations through the SMMU command queue.
> > > > > "
> > >
> > > Hm, we're already mandating DVM for host SVA, so I'd say mandate it for
> > > vSVA as well. We'd avoid a ton of context switches, especially for the zip
> > > accelerator, which doesn't require ATC invalidations. The host needs to pin
> > > the VMID allocated by KVM and write it into the endpoint's STE.
> > >
> >
> > Curious... what is DVM and how is it related to SVA? Is it SMMU specific?
> 
> Yes, it stands for "Distributed Virtual Memory", an Arm interconnect
> protocol. When sharing a process address space, TLB invalidations from the
> CPU are broadcast to the SMMU, so we don't have to send commands through
> the SMMU command queue to invalidate IOTLBs. However, the ATCs of PCIe
> endpoints do not participate in DVM and still have to be invalidated by
> hand.
> 

Ah, got it. Thanks for the explanation!


^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2020-03-05  7:43 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-14 13:27 [PATCH v16 00/10] VIRTIO-IOMMU device Eric Auger
2020-02-14 13:27 ` [PATCH v16 01/10] virtio-iommu: Add skeleton Eric Auger
2020-02-14 13:27 ` [PATCH v16 02/10] virtio-iommu: Decode the command payload Eric Auger
2020-02-14 13:27 ` [PATCH v16 03/10] virtio-iommu: Implement attach/detach command Eric Auger
2020-02-14 13:27 ` [PATCH v16 04/10] virtio-iommu: Implement map/unmap Eric Auger
2020-02-14 13:27 ` [PATCH v16 05/10] virtio-iommu: Implement translate Eric Auger
2020-02-14 13:27 ` [PATCH v16 06/10] virtio-iommu: Implement fault reporting Eric Auger
2020-02-14 13:27 ` [PATCH v16 07/10] virtio-iommu: Support migration Eric Auger
2020-02-14 13:27 ` [PATCH v16 08/10] virtio-iommu-pci: Add virtio iommu pci support Eric Auger
2020-02-14 13:27 ` [PATCH v16 09/10] hw/arm/virt: Add the virtio-iommu device tree mappings Eric Auger
2020-02-21 14:25   ` Peter Maydell
2020-02-14 13:27 ` [PATCH v16 10/10] MAINTAINERS: add virtio-iommu related files Eric Auger
2020-02-21 14:26   ` Peter Maydell
2020-02-21 14:27 ` [PATCH v16 00/10] VIRTIO-IOMMU device Peter Maydell
2020-02-23  8:17   ` Michael S. Tsirkin
2020-02-27 11:17 ` Daniel P. Berrangé
2020-02-27 13:49   ` Auger Eric
2020-03-03  3:23     ` Zhangfei Gao
2020-03-03  9:40       ` Auger Eric
2020-03-04  6:08         ` Zhangfei Gao
2020-03-04  8:41           ` Auger Eric
2020-03-04 16:47             ` Jean-Philippe Brucker
2020-03-05  2:56               ` Tian, Kevin
2020-03-05  7:34                 ` Jean-Philippe Brucker
2020-03-05  7:42                   ` Tian, Kevin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).