All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/4] Add virtio-iommu driver
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

Implement the virtio-iommu driver following version 0.6 of the
specification [1]. Previous version, RFCv2, was sent in November [2].
This version addresses Eric's comments and changes the device number.
(Since last week I also tested and fixed the probe/release functions,
they now use devm properly.)

I did not include ACPI support because the next IORT specifications
isn't ready yet (even though the virtio-iommu spec describes the node
format, a new node type has to be allocated). Therefore only device-tree
guests are supported for the moment but the x86 prototype, on branch
virtio-iommu/devel, doesn't add much complexity.

	git://linux-arm.org/linux-jpb.git virtio-iommu/v0.6

Test it with Eric's latest QEMU device [3], for example with the
following command-line:

$ qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic
	-kernel Image -append 'console=ttyAMA0 root=/dev/vda rw'
	-device virtio-iommu-device
	-device virtio-blk-pci,iommu_platform,disable-legacy=on,drive=hd0
	-drive if=none,file=rootfs.bin,id=hd0

You can also try the kvmtool device [4]. For example on AMD Seattle I
use the following commands to boot a guest with all devices behind a
virtio-iommu, then display mapping stats.

$ lkvm run -k Image --irqchip gicv2m --console virtio --vfio-pci 01:00.0
	--viommu vfio --viommu virtio
$ lkvm debug -a -i stats

[1] [RFC] virtio-iommu version 0.6
    https://www.spinics.net/lists/linux-virtualization/msg32628.html
[2] [RFC PATCH v2 0/5] Add virtio-iommu driver
    https://www.spinics.net/lists/kvm/msg159047.html
[3] [RFC v6 00/22] VIRTIO-IOMMU device
    http://lists.gnu.org/archive/html/qemu-arm/2018-02/msg00274.html
[4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.6

Jean-Philippe Brucker (4):
  iommu: Add virtio-iommu driver
  iommu/virtio: Add probe request
  iommu/virtio: Add event queue
  vfio: Allow type-1 IOMMU instantiation with a virtio-iommu

 MAINTAINERS                       |    6 +
 drivers/iommu/Kconfig             |   11 +
 drivers/iommu/Makefile            |    1 +
 drivers/iommu/virtio-iommu.c      | 1220 +++++++++++++++++++++++++++++++++++++
 drivers/vfio/Kconfig              |    2 +-
 include/uapi/linux/virtio_ids.h   |    1 +
 include/uapi/linux/virtio_iommu.h |  171 ++++++
 7 files changed, 1411 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

-- 
2.16.1

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] [PATCH 0/4] Add virtio-iommu driver
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

Implement the virtio-iommu driver following version 0.6 of the
specification [1]. Previous version, RFCv2, was sent in November [2].
This version addresses Eric's comments and changes the device number.
(Since last week I also tested and fixed the probe/release functions,
they now use devm properly.)

I did not include ACPI support because the next IORT specifications
isn't ready yet (even though the virtio-iommu spec describes the node
format, a new node type has to be allocated). Therefore only device-tree
guests are supported for the moment but the x86 prototype, on branch
virtio-iommu/devel, doesn't add much complexity.

	git://linux-arm.org/linux-jpb.git virtio-iommu/v0.6

Test it with Eric's latest QEMU device [3], for example with the
following command-line:

$ qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic
	-kernel Image -append 'console=ttyAMA0 root=/dev/vda rw'
	-device virtio-iommu-device
	-device virtio-blk-pci,iommu_platform,disable-legacy=on,drive=hd0
	-drive if=none,file=rootfs.bin,id=hd0

You can also try the kvmtool device [4]. For example on AMD Seattle I
use the following commands to boot a guest with all devices behind a
virtio-iommu, then display mapping stats.

$ lkvm run -k Image --irqchip gicv2m --console virtio --vfio-pci 01:00.0
	--viommu vfio --viommu virtio
$ lkvm debug -a -i stats

[1] [RFC] virtio-iommu version 0.6
    https://www.spinics.net/lists/linux-virtualization/msg32628.html
[2] [RFC PATCH v2 0/5] Add virtio-iommu driver
    https://www.spinics.net/lists/kvm/msg159047.html
[3] [RFC v6 00/22] VIRTIO-IOMMU device
    http://lists.gnu.org/archive/html/qemu-arm/2018-02/msg00274.html
[4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.6

Jean-Philippe Brucker (4):
  iommu: Add virtio-iommu driver
  iommu/virtio: Add probe request
  iommu/virtio: Add event queue
  vfio: Allow type-1 IOMMU instantiation with a virtio-iommu

 MAINTAINERS                       |    6 +
 drivers/iommu/Kconfig             |   11 +
 drivers/iommu/Makefile            |    1 +
 drivers/iommu/virtio-iommu.c      | 1220 +++++++++++++++++++++++++++++++++++++
 drivers/vfio/Kconfig              |    2 +-
 include/uapi/linux/virtio_ids.h   |    1 +
 include/uapi/linux/virtio_iommu.h |  171 ++++++
 7 files changed, 1411 insertions(+), 1 deletion(-)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

-- 
2.16.1


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code transforms calls coming from the IOMMU API into
corresponding virtio requests. Mappings are kept in an interval tree
instead of page tables.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 MAINTAINERS                       |   6 +
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 116 +++++
 6 files changed, 1095 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 3bdc260e36b7..2a181924d420 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14818,6 +14818,12 @@ S:	Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTIO IOMMU DRIVER
+M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+S:	Maintained
+F:	drivers/iommu/virtio-iommu.c
+F:	include/uapi/linux/virtio_iommu.h
+
 VIRTUAL BOX GUEST DEVICE DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 M:	Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f3a21343e636..1ea0ec74524f 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -381,4 +381,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 1fb695854809..9c68be1365e1 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..a9c9245e8ba2
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,960 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2018 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	unsigned long			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain)	\
+	container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * The timeout is chosen arbitrarily. It's only here to prevent locking
+	 * up the CPU in case of a device bug.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i)
+			cpu_relax();
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+	struct viommu_request req = {
+		.written = 0
+	};
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. When
+ *	the returned size is greater than zero, if a mapping is returned, the
+ *	caller must free it.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all endpoints,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	unsigned long flags;
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node) {
+		spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+		return 0;
+	}
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false;
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * In the virtio-iommu device, when attaching the endpoint to a new
+	 * domain, it is detached from the old one and, if as as a result the
+	 * old domain isn't attached to any endpoint, all mappings are removed
+	 * from the old domain and it is freed.
+	 *
+	 * In the driver the old domain still exists, and its mappings will be
+	 * recreated if it gets reattached to an endpoint. Otherwise it will be
+	 * freed explicitly.
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		vdev->vdomain->endpoints--;
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!vdomain->endpoints) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any (e.g. SW MSI).
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	vdomain->endpoints++;
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.phys_start	= cpu_to_le64(paddr),
+		.virt_end	= cpu_to_le64(iova + size - 1),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!vdomain->endpoints)
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!vdomain->endpoints)
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.virt_end	= cpu_to_le64(iova + unmapped - 1),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		return ret;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_vqs;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_vqs;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+err_free_vqs:
+	vdev->config->del_vqs(vdev);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+
+	/* Stop all virtqueues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio");
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..cfe47c5d9a56 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..0de9b44db14d
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,116 @@
+/*
+ * Virtio-iommu definition v0.6
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+#include <linux/types.h>
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le64					phys_start;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, robin.murphy,
	joro, eric.auger.pro

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code transforms calls coming from the IOMMU API into
corresponding virtio requests. Mappings are kept in an interval tree
instead of page tables.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 MAINTAINERS                       |   6 +
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 116 +++++
 6 files changed, 1095 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 3bdc260e36b7..2a181924d420 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14818,6 +14818,12 @@ S:	Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTIO IOMMU DRIVER
+M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+S:	Maintained
+F:	drivers/iommu/virtio-iommu.c
+F:	include/uapi/linux/virtio_iommu.h
+
 VIRTUAL BOX GUEST DEVICE DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 M:	Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f3a21343e636..1ea0ec74524f 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -381,4 +381,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 1fb695854809..9c68be1365e1 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..a9c9245e8ba2
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,960 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2018 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	unsigned long			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain)	\
+	container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * The timeout is chosen arbitrarily. It's only here to prevent locking
+	 * up the CPU in case of a device bug.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i)
+			cpu_relax();
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+	struct viommu_request req = {
+		.written = 0
+	};
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. When
+ *	the returned size is greater than zero, if a mapping is returned, the
+ *	caller must free it.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all endpoints,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	unsigned long flags;
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node) {
+		spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+		return 0;
+	}
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false;
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * In the virtio-iommu device, when attaching the endpoint to a new
+	 * domain, it is detached from the old one and, if as as a result the
+	 * old domain isn't attached to any endpoint, all mappings are removed
+	 * from the old domain and it is freed.
+	 *
+	 * In the driver the old domain still exists, and its mappings will be
+	 * recreated if it gets reattached to an endpoint. Otherwise it will be
+	 * freed explicitly.
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		vdev->vdomain->endpoints--;
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!vdomain->endpoints) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any (e.g. SW MSI).
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	vdomain->endpoints++;
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.phys_start	= cpu_to_le64(paddr),
+		.virt_end	= cpu_to_le64(iova + size - 1),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!vdomain->endpoints)
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!vdomain->endpoints)
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.virt_end	= cpu_to_le64(iova + unmapped - 1),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		return ret;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_vqs;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_vqs;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+err_free_vqs:
+	vdev->config->del_vqs(vdev);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+
+	/* Stop all virtqueues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio");
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..cfe47c5d9a56 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..0de9b44db14d
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,116 @@
+/*
+ * Virtio-iommu definition v0.6
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+#include <linux/types.h>
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le64					phys_start;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [virtio-dev] [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code transforms calls coming from the IOMMU API into
corresponding virtio requests. Mappings are kept in an interval tree
instead of page tables.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 MAINTAINERS                       |   6 +
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 116 +++++
 6 files changed, 1095 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 3bdc260e36b7..2a181924d420 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -14818,6 +14818,12 @@ S:	Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTIO IOMMU DRIVER
+M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+S:	Maintained
+F:	drivers/iommu/virtio-iommu.c
+F:	include/uapi/linux/virtio_iommu.h
+
 VIRTUAL BOX GUEST DEVICE DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 M:	Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f3a21343e636..1ea0ec74524f 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -381,4 +381,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 1fb695854809..9c68be1365e1 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..a9c9245e8ba2
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,960 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2018 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	unsigned long			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain)	\
+	container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * The timeout is chosen arbitrarily. It's only here to prevent locking
+	 * up the CPU in case of a device bug.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i)
+			cpu_relax();
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+	struct viommu_request req = {
+		.written = 0
+	};
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. When
+ *	the returned size is greater than zero, if a mapping is returned, the
+ *	caller must free it.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all endpoints,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	unsigned long flags;
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node) {
+		spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+		return 0;
+	}
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false;
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * In the virtio-iommu device, when attaching the endpoint to a new
+	 * domain, it is detached from the old one and, if as as a result the
+	 * old domain isn't attached to any endpoint, all mappings are removed
+	 * from the old domain and it is freed.
+	 *
+	 * In the driver the old domain still exists, and its mappings will be
+	 * recreated if it gets reattached to an endpoint. Otherwise it will be
+	 * freed explicitly.
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		vdev->vdomain->endpoints--;
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!vdomain->endpoints) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any (e.g. SW MSI).
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	vdomain->endpoints++;
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.phys_start	= cpu_to_le64(paddr),
+		.virt_end	= cpu_to_le64(iova + size - 1),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!vdomain->endpoints)
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!vdomain->endpoints)
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.virt_end	= cpu_to_le64(iova + unmapped - 1),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		return ret;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_vqs;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_vqs;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+err_free_vqs:
+	vdev->config->del_vqs(vdev);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+
+	/* Stop all virtqueues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio");
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..cfe47c5d9a56 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..0de9b44db14d
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,116 @@
+/*
+ * Virtio-iommu definition v0.6
+ *
+ * Copyright (C) 2018 ARM Ltd.
+ *
+ * SPDX-License-Identifier: BSD-3-Clause
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+#include <linux/types.h>
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le64					phys_start;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.16.1


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 2/4] iommu/virtio: Add probe request
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 163 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 193 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index a9c9245e8ba2..3ac4b38eaf19 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -140,6 +142,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,105 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	/*
+	 * Treat unknown subtype as RESERVED, but urge users to update their
+	 * driver.
+	 */
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret)
+		goto out_free;
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+out_free:
+	kfree(probe);
+	return ret;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -703,6 +808,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -720,8 +826,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -753,15 +879,33 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
+
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
 
-	list_add_tail(&region->list, head);
 	iommu_dma_get_resv_regions(dev, head);
 }
 
@@ -852,6 +996,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -933,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 0de9b44db14d..2335d9ed4676 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -15,6 +15,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -26,6 +27,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -33,6 +37,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -104,6 +109,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -111,6 +147,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 2/4] iommu/virtio: Add probe request
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (3 preceding siblings ...)
  (?)
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, robin.murphy,
	joro, eric.auger.pro

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 163 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 193 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index a9c9245e8ba2..3ac4b38eaf19 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -140,6 +142,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,105 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	/*
+	 * Treat unknown subtype as RESERVED, but urge users to update their
+	 * driver.
+	 */
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret)
+		goto out_free;
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+out_free:
+	kfree(probe);
+	return ret;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -703,6 +808,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -720,8 +826,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -753,15 +879,33 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
+
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
 
-	list_add_tail(&region->list, head);
 	iommu_dma_get_resv_regions(dev, head);
 }
 
@@ -852,6 +996,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -933,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 0de9b44db14d..2335d9ed4676 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -15,6 +15,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -26,6 +27,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -33,6 +37,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -104,6 +109,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -111,6 +147,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [virtio-dev] [PATCH 2/4] iommu/virtio: Add probe request
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 163 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 193 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index a9c9245e8ba2..3ac4b38eaf19 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -140,6 +142,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,105 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	/*
+	 * Treat unknown subtype as RESERVED, but urge users to update their
+	 * driver.
+	 */
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret)
+		goto out_free;
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+out_free:
+	kfree(probe);
+	return ret;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -703,6 +808,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -720,8 +826,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -753,15 +879,33 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
+
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
 
-	list_add_tail(&region->list, head);
 	iommu_dma_get_resv_regions(dev, head);
 }
 
@@ -852,6 +996,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -933,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 0de9b44db14d..2335d9ed4676 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -15,6 +15,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -26,6 +27,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -33,6 +37,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -104,6 +109,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -111,6 +147,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.16.1


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 3/4] iommu/virtio: Add event queue
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

The event queue offers a way for the device to report access faults from
endpoints. It is implemented on virtqueue #1. Whenever the host needs to
signal a fault, it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 139 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 143 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 3ac4b38eaf19..6b96f1b36d5a 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,9 +43,10 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
+	void				*evts;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -84,6 +91,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -161,12 +177,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -201,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * up the CPU in case of a device bug.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -210,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -553,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return ret;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -934,19 +1015,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	viommu->evts = evts = devm_kmalloc_array(viommu->dev, nr_evts,
+						 sizeof(*evts), GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -969,7 +1075,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		return ret;
 
@@ -1010,6 +1116,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_vqs;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2335d9ed4676..d6c0224efe61 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -150,4 +150,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 3/4] iommu/virtio: Add event queue
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (4 preceding siblings ...)
  (?)
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, robin.murphy,
	joro, eric.auger.pro

The event queue offers a way for the device to report access faults from
endpoints. It is implemented on virtqueue #1. Whenever the host needs to
signal a fault, it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 139 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 143 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 3ac4b38eaf19..6b96f1b36d5a 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,9 +43,10 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
+	void				*evts;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -84,6 +91,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -161,12 +177,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -201,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * up the CPU in case of a device bug.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -210,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -553,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return ret;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -934,19 +1015,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	viommu->evts = evts = devm_kmalloc_array(viommu->dev, nr_evts,
+						 sizeof(*evts), GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -969,7 +1075,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		return ret;
 
@@ -1010,6 +1116,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_vqs;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2335d9ed4676..d6c0224efe61 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -150,4 +150,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [virtio-dev] [PATCH 3/4] iommu/virtio: Add event queue
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

The event queue offers a way for the device to report access faults from
endpoints. It is implemented on virtqueue #1. Whenever the host needs to
signal a fault, it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 139 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 143 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 3ac4b38eaf19..6b96f1b36d5a 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,9 +43,10 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
+	void				*evts;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -84,6 +91,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -161,12 +177,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -201,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * up the CPU in case of a device bug.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -210,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -553,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return ret;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -934,19 +1015,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	viommu->evts = evts = devm_kmalloc_array(viommu->dev, nr_evts,
+						 sizeof(*evts), GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -969,7 +1075,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		return ret;
 
@@ -1010,6 +1116,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_vqs;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2335d9ed4676..d6c0224efe61 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -150,4 +150,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.16.1


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
VFIO_IOMMU_TYPE1 as well.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/vfio/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
index c84333eb5eb5..65a1e691110c 100644
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -21,7 +21,7 @@ config VFIO_VIRQFD
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
 	depends on IOMMU_API
-	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
 	select ANON_INODES
 	help
 	  VFIO provides a framework for secure userspace device drivers.
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (7 preceding siblings ...)
  (?)
@ 2018-02-14 14:53 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, robin.murphy,
	joro, eric.auger.pro

When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
VFIO_IOMMU_TYPE1 as well.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/vfio/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
index c84333eb5eb5..65a1e691110c 100644
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -21,7 +21,7 @@ config VFIO_VIRQFD
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
 	depends on IOMMU_API
-	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
 	select ANON_INODES
 	help
 	  VFIO provides a framework for secure userspace device drivers.
-- 
2.16.1

^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [virtio-dev] [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
@ 2018-02-14 14:53   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-14 14:53 UTC (permalink / raw)
  To: iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
VFIO_IOMMU_TYPE1 as well.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/vfio/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
index c84333eb5eb5..65a1e691110c 100644
--- a/drivers/vfio/Kconfig
+++ b/drivers/vfio/Kconfig
@@ -21,7 +21,7 @@ config VFIO_VIRQFD
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
 	depends on IOMMU_API
-	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
 	select ANON_INODES
 	help
 	  VFIO provides a framework for secure userspace device drivers.
-- 
2.16.1


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
       [not found]   ` <20180214145340.1223-5-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
@ 2018-02-14 15:26     ` Alex Williamson
       [not found]       ` <20180214082639.54556efb-DGNDKt5SQtizQB+pC5nmwQ@public.gmane.org>
  2018-02-14 15:35       ` Robin Murphy
  0 siblings, 2 replies; 61+ messages in thread
From: Alex Williamson @ 2018-02-14 15:26 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	kvm-u79uwXL29TY76Z2rM5mHXA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	marc.zyngier-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

On Wed, 14 Feb 2018 14:53:40 +0000
Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org> wrote:

> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
> VFIO_IOMMU_TYPE1 as well.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/vfio/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
> index c84333eb5eb5..65a1e691110c 100644
> --- a/drivers/vfio/Kconfig
> +++ b/drivers/vfio/Kconfig
> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>  menuconfig VFIO
>  	tristate "VFIO Non-Privileged userspace driver framework"
>  	depends on IOMMU_API
> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>  	select ANON_INODES
>  	help
>  	  VFIO provides a framework for secure userspace device drivers.

Why are we basing this on specific IOMMU drivers in the first place?
Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
targets that support it and therefore we can forget about the specific
IOMMU drivers?  Thanks,

Alex

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-02-14 15:26   ` Alex Williamson
  -1 siblings, 0 replies; 61+ messages in thread
From: Alex Williamson @ 2018-02-14 15:26 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: virtio-dev, jayachandran.nair, lorenzo.pieralisi, tnowicki, kvm,
	mst, joro, will.deacon, virtualization, marc.zyngier, iommu,
	jintack, eric.auger, robin.murphy, kvmarm, eric.auger.pro

On Wed, 14 Feb 2018 14:53:40 +0000
Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:

> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
> VFIO_IOMMU_TYPE1 as well.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/vfio/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
> index c84333eb5eb5..65a1e691110c 100644
> --- a/drivers/vfio/Kconfig
> +++ b/drivers/vfio/Kconfig
> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>  menuconfig VFIO
>  	tristate "VFIO Non-Privileged userspace driver framework"
>  	depends on IOMMU_API
> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>  	select ANON_INODES
>  	help
>  	  VFIO provides a framework for secure userspace device drivers.

Why are we basing this on specific IOMMU drivers in the first place?
Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
targets that support it and therefore we can forget about the specific
IOMMU drivers?  Thanks,

Alex

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
       [not found]       ` <20180214082639.54556efb-DGNDKt5SQtizQB+pC5nmwQ@public.gmane.org>
@ 2018-02-14 15:35         ` Robin Murphy
  2018-02-15 13:53           ` Jean-Philippe Brucker
  2018-02-15 13:53             ` Jean-Philippe Brucker
  0 siblings, 2 replies; 61+ messages in thread
From: Robin Murphy @ 2018-02-14 15:35 UTC (permalink / raw)
  To: Alex Williamson, Jean-Philippe Brucker
  Cc: virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	kvm-u79uwXL29TY76Z2rM5mHXA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	marc.zyngier-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

On 14/02/18 15:26, Alex Williamson wrote:
> On Wed, 14 Feb 2018 14:53:40 +0000
> Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org> wrote:
> 
>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>> VFIO_IOMMU_TYPE1 as well.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
>> ---
>>   drivers/vfio/Kconfig | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>> index c84333eb5eb5..65a1e691110c 100644
>> --- a/drivers/vfio/Kconfig
>> +++ b/drivers/vfio/Kconfig
>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>   menuconfig VFIO
>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>   	depends on IOMMU_API
>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>   	select ANON_INODES
>>   	help
>>   	  VFIO provides a framework for secure userspace device drivers.
> 
> Why are we basing this on specific IOMMU drivers in the first place?
> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
> targets that support it and therefore we can forget about the specific
> IOMMU drivers?  Thanks,

Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
ones) making use of IOMMU_API won't actually support VFIO, but it can't 
hurt to allow them to select the type 1 driver regardless. Especially as 
multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Robin.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 15:26     ` Alex Williamson
       [not found]       ` <20180214082639.54556efb-DGNDKt5SQtizQB+pC5nmwQ@public.gmane.org>
@ 2018-02-14 15:35       ` Robin Murphy
  1 sibling, 0 replies; 61+ messages in thread
From: Robin Murphy @ 2018-02-14 15:35 UTC (permalink / raw)
  To: Alex Williamson, Jean-Philippe Brucker
  Cc: virtio-dev, jayachandran.nair, lorenzo.pieralisi, tnowicki, kvm,
	mst, joro, will.deacon, virtualization, marc.zyngier, iommu,
	jintack, eric.auger, kvmarm, eric.auger.pro

On 14/02/18 15:26, Alex Williamson wrote:
> On Wed, 14 Feb 2018 14:53:40 +0000
> Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
> 
>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>> VFIO_IOMMU_TYPE1 as well.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>   drivers/vfio/Kconfig | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>> index c84333eb5eb5..65a1e691110c 100644
>> --- a/drivers/vfio/Kconfig
>> +++ b/drivers/vfio/Kconfig
>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>   menuconfig VFIO
>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>   	depends on IOMMU_API
>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>   	select ANON_INODES
>>   	help
>>   	  VFIO provides a framework for secure userspace device drivers.
> 
> Why are we basing this on specific IOMMU drivers in the first place?
> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
> targets that support it and therefore we can forget about the specific
> IOMMU drivers?  Thanks,

Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
ones) making use of IOMMU_API won't actually support VFIO, but it can't 
hurt to allow them to select the type 1 driver regardless. Especially as 
multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Robin.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 15:35         ` Robin Murphy
  2018-02-15 13:53           ` Jean-Philippe Brucker
@ 2018-02-15 13:53             ` Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-15 13:53 UTC (permalink / raw)
  To: Robin Murphy, Alex Williamson
  Cc: iommu, kvm, virtualization, virtio-dev, kvmarm, joro, mst,
	jasowang, Marc Zyngier, Will Deacon, Lorenzo Pieralisi,
	eric.auger, eric.auger.pro, peterx, bharat.bhushan

On 14/02/18 15:35, Robin Murphy wrote:
> On 14/02/18 15:26, Alex Williamson wrote:
>> On Wed, 14 Feb 2018 14:53:40 +0000
>> Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
>>
>>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>>> VFIO_IOMMU_TYPE1 as well.
>>>
>>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>>> ---
>>>   drivers/vfio/Kconfig | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>>> index c84333eb5eb5..65a1e691110c 100644
>>> --- a/drivers/vfio/Kconfig
>>> +++ b/drivers/vfio/Kconfig
>>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>>   menuconfig VFIO
>>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>>   	depends on IOMMU_API
>>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>>   	select ANON_INODES
>>>   	help
>>>   	  VFIO provides a framework for secure userspace device drivers.
>>
>> Why are we basing this on specific IOMMU drivers in the first place?
>> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
>> targets that support it and therefore we can forget about the specific
>> IOMMU drivers?  Thanks,
> 
> Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
> ones) making use of IOMMU_API won't actually support VFIO, but it can't 
> hurt to allow them to select the type 1 driver regardless. Especially as 
> multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Cool, then I'll change that line to:

+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
@ 2018-02-15 13:53             ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-15 13:53 UTC (permalink / raw)
  To: Robin Murphy, Alex Williamson
  Cc: iommu, kvm, virtualization, virtio-dev, kvmarm, joro, mst,
	jasowang, Marc Zyngier, Will Deacon, Lorenzo Pieralisi,
	eric.auger, eric.auger.pro, peterx, bharat.bhushan

On 14/02/18 15:35, Robin Murphy wrote:
> On 14/02/18 15:26, Alex Williamson wrote:
>> On Wed, 14 Feb 2018 14:53:40 +0000
>> Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
>>
>>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>>> VFIO_IOMMU_TYPE1 as well.
>>>
>>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>>> ---
>>>   drivers/vfio/Kconfig | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>>> index c84333eb5eb5..65a1e691110c 100644
>>> --- a/drivers/vfio/Kconfig
>>> +++ b/drivers/vfio/Kconfig
>>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>>   menuconfig VFIO
>>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>>   	depends on IOMMU_API
>>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>>   	select ANON_INODES
>>>   	help
>>>   	  VFIO provides a framework for secure userspace device drivers.
>>
>> Why are we basing this on specific IOMMU drivers in the first place?
>> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
>> targets that support it and therefore we can forget about the specific
>> IOMMU drivers?  Thanks,
> 
> Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
> ones) making use of IOMMU_API won't actually support VFIO, but it can't 
> hurt to allow them to select the type 1 driver regardless. Especially as 
> multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Cool, then I'll change that line to:

+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
  2018-02-14 15:35         ` Robin Murphy
@ 2018-02-15 13:53           ` Jean-Philippe Brucker
  2018-02-15 13:53             ` Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-15 13:53 UTC (permalink / raw)
  To: Robin Murphy, Alex Williamson
  Cc: virtio-dev, jayachandran.nair, Lorenzo Pieralisi, tnowicki, kvm,
	mst, joro, Will Deacon, virtualization, Marc Zyngier, iommu,
	jintack, eric.auger, kvmarm, eric.auger.pro

On 14/02/18 15:35, Robin Murphy wrote:
> On 14/02/18 15:26, Alex Williamson wrote:
>> On Wed, 14 Feb 2018 14:53:40 +0000
>> Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
>>
>>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>>> VFIO_IOMMU_TYPE1 as well.
>>>
>>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>>> ---
>>>   drivers/vfio/Kconfig | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>>> index c84333eb5eb5..65a1e691110c 100644
>>> --- a/drivers/vfio/Kconfig
>>> +++ b/drivers/vfio/Kconfig
>>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>>   menuconfig VFIO
>>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>>   	depends on IOMMU_API
>>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>>   	select ANON_INODES
>>>   	help
>>>   	  VFIO provides a framework for secure userspace device drivers.
>>
>> Why are we basing this on specific IOMMU drivers in the first place?
>> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
>> targets that support it and therefore we can forget about the specific
>> IOMMU drivers?  Thanks,
> 
> Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
> ones) making use of IOMMU_API won't actually support VFIO, but it can't 
> hurt to allow them to select the type 1 driver regardless. Especially as 
> multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Cool, then I'll change that line to:

+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu
@ 2018-02-15 13:53             ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-15 13:53 UTC (permalink / raw)
  To: Robin Murphy, Alex Williamson
  Cc: iommu, kvm, virtualization, virtio-dev, kvmarm, joro, mst,
	jasowang, Marc Zyngier, Will Deacon, Lorenzo Pieralisi,
	eric.auger, eric.auger.pro, peterx, bharat.bhushan, tnowicki,
	jayachandran.nair, kevin.tian, jintack

On 14/02/18 15:35, Robin Murphy wrote:
> On 14/02/18 15:26, Alex Williamson wrote:
>> On Wed, 14 Feb 2018 14:53:40 +0000
>> Jean-Philippe Brucker <jean-philippe.brucker@arm.com> wrote:
>>
>>> When enabling both VFIO and VIRTIO_IOMMU modules, automatically select
>>> VFIO_IOMMU_TYPE1 as well.
>>>
>>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>>> ---
>>>   drivers/vfio/Kconfig | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig
>>> index c84333eb5eb5..65a1e691110c 100644
>>> --- a/drivers/vfio/Kconfig
>>> +++ b/drivers/vfio/Kconfig
>>> @@ -21,7 +21,7 @@ config VFIO_VIRQFD
>>>   menuconfig VFIO
>>>   	tristate "VFIO Non-Privileged userspace driver framework"
>>>   	depends on IOMMU_API
>>> -	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
>>> +	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3 || VIRTIO_IOMMU)
>>>   	select ANON_INODES
>>>   	help
>>>   	  VFIO provides a framework for secure userspace device drivers.
>>
>> Why are we basing this on specific IOMMU drivers in the first place?
>> Only ARM is doing that.  Shouldn't IOMMU_API only be enabled for ARM
>> targets that support it and therefore we can forget about the specific
>> IOMMU drivers?  Thanks,
> 
> Makes sense - the majority of ARM systems (and mobile/embedded ARM64 
> ones) making use of IOMMU_API won't actually support VFIO, but it can't 
> hurt to allow them to select the type 1 driver regardless. Especially as 
> multiplatform configs are liable to be pulling in the SMMU driver(s) anyway.

Cool, then I'll change that line to:

+	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]   ` <20180214145340.1223-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
@ 2018-02-19 12:23     ` Tomasz Nowicki
  2018-02-20 11:30       ` Jean-Philippe Brucker
  2018-02-20 11:30         ` [virtio-dev] " Jean-Philippe Brucker
  2018-02-21 20:12     ` kbuild test robot
                       ` (3 subsequent siblings)
  4 siblings, 2 replies; 61+ messages in thread
From: Tomasz Nowicki @ 2018-02-19 12:23 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

Hi Jean,

On 14.02.2018 15:53, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>   MAINTAINERS                       |   6 +
>   drivers/iommu/Kconfig             |  11 +
>   drivers/iommu/Makefile            |   1 +
>   drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/virtio_ids.h   |   1 +
>   include/uapi/linux/virtio_iommu.h | 116 +++++
>   6 files changed, 1095 insertions(+)
>   create mode 100644 drivers/iommu/virtio-iommu.c
>   create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 3bdc260e36b7..2a181924d420 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -14818,6 +14818,12 @@ S:	Maintained
>   F:	drivers/virtio/virtio_input.c
>   F:	include/uapi/linux/virtio_input.h
>   
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>   VIRTUAL BOX GUEST DEVICE DRIVER
>   M:	Hans de Goede <hdegoede-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>   M:	Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index f3a21343e636..1ea0ec74524f 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -381,4 +381,15 @@ config QCOM_IOMMU
>   	help
>   	  Support for IOMMU on certain Qualcomm SoCs.
>   
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>   endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1fb695854809..9c68be1365e1 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>   obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>   obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>   obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	unsigned long			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);

We should remove current element from the pending list. There is no 
guarantee we get response for each while loop so when we get back for 
more the _viommu_send_reqs_sync() caller will pass pointer to the out of 
date head next time.

> +	}
> +
> +	return nr_received;
> +}
> +
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * The timeout is chosen arbitrarily. It's only here to prevent locking
> +	 * up the CPU in case of a device bug.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i)
> +			cpu_relax();
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}

Thanks,
Tomasz

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-19 12:23     ` Tomasz Nowicki
@ 2018-02-20 11:30         ` Jean-Philippe Brucker
  2018-02-20 11:30         ` [virtio-dev] " Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-20 11:30 UTC (permalink / raw)
  To: Tomasz Nowicki, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

On 19/02/18 12:23, Tomasz Nowicki wrote:
[...]
>> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>> +			       struct list_head *sent)
>> +{
>> +
>> +	unsigned int len;
>> +	int nr_received = 0;
>> +	struct viommu_request *req, *pending;
>> +
>> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>> +	if (WARN_ON(!pending))
>> +		return 0;
>> +
>> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
>> +		if (req != pending) {
>> +			dev_warn(viommu->dev, "discarding stale request\n");
>> +			continue;
>> +		}
>> +
>> +		pending->written = len;
>> +
>> +		if (++nr_received == nr_sent) {
>> +			WARN_ON(!list_is_last(&pending->list, sent));
>> +			break;
>> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
>> +			break;
>> +		}
>> +
>> +		pending = list_next_entry(pending, list);
> 
> We should remove current element from the pending list. There is no 
> guarantee we get response for each while loop so when we get back for 
> more the _viommu_send_reqs_sync() caller will pass pointer to the out of 
> date head next time.

Right, I'll fix this

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-19 12:23     ` Tomasz Nowicki
@ 2018-02-20 11:30       ` Jean-Philippe Brucker
  2018-02-20 11:30         ` [virtio-dev] " Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-20 11:30 UTC (permalink / raw)
  To: Tomasz Nowicki, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, Will Deacon, jintack, eric.auger, Robin Murphy,
	joro, eric.auger.pro

On 19/02/18 12:23, Tomasz Nowicki wrote:
[...]
>> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>> +			       struct list_head *sent)
>> +{
>> +
>> +	unsigned int len;
>> +	int nr_received = 0;
>> +	struct viommu_request *req, *pending;
>> +
>> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>> +	if (WARN_ON(!pending))
>> +		return 0;
>> +
>> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
>> +		if (req != pending) {
>> +			dev_warn(viommu->dev, "discarding stale request\n");
>> +			continue;
>> +		}
>> +
>> +		pending->written = len;
>> +
>> +		if (++nr_received == nr_sent) {
>> +			WARN_ON(!list_is_last(&pending->list, sent));
>> +			break;
>> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
>> +			break;
>> +		}
>> +
>> +		pending = list_next_entry(pending, list);
> 
> We should remove current element from the pending list. There is no 
> guarantee we get response for each while loop so when we get back for 
> more the _viommu_send_reqs_sync() caller will pass pointer to the out of 
> date head next time.

Right, I'll fix this

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-02-20 11:30         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-20 11:30 UTC (permalink / raw)
  To: Tomasz Nowicki, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

On 19/02/18 12:23, Tomasz Nowicki wrote:
[...]
>> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>> +			       struct list_head *sent)
>> +{
>> +
>> +	unsigned int len;
>> +	int nr_received = 0;
>> +	struct viommu_request *req, *pending;
>> +
>> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>> +	if (WARN_ON(!pending))
>> +		return 0;
>> +
>> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
>> +		if (req != pending) {
>> +			dev_warn(viommu->dev, "discarding stale request\n");
>> +			continue;
>> +		}
>> +
>> +		pending->written = len;
>> +
>> +		if (++nr_received == nr_sent) {
>> +			WARN_ON(!list_is_last(&pending->list, sent));
>> +			break;
>> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
>> +			break;
>> +		}
>> +
>> +		pending = list_next_entry(pending, list);
> 
> We should remove current element from the pending list. There is no 
> guarantee we get response for each while loop so when we get back for 
> more the _viommu_send_reqs_sync() caller will pass pointer to the out of 
> date head next time.

Right, I'll fix this

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]   ` <20180214145340.1223-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
  2018-02-19 12:23     ` Tomasz Nowicki
@ 2018-02-21 20:12     ` kbuild test robot
       [not found]       ` <201802220455.lMEb6LLi%fengguang.wu-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
  2018-02-21 21:08     ` kbuild test robot
                       ` (2 subsequent siblings)
  4 siblings, 1 reply; 61+ messages in thread
From: kbuild test robot @ 2018-02-21 20:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	marc.zyngier-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kbuild-all-JC7UmRfGjtg

[-- Attachment #1: Type: text/plain, Size: 2371 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: arm64-allmodconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
   arch/arm64/kernel/head.o: In function `kimage_vaddr':
   (.idmap.text+0x0): dangerous relocation: unsupported relocation
   arch/arm64/kernel/head.o: In function `__primary_switch':
   (.idmap.text+0x340): dangerous relocation: unsupported relocation
   (.idmap.text+0x348): dangerous relocation: unsupported relocation
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
   virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
   virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
   virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
   virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
   virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 59126 bytes --]

[-- Attachment #3: Type: text/plain, Size: 0 bytes --]



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-02-21 20:12   ` kbuild test robot
  -1 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-02-21 20:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki, mst, will.deacon, virtualization, jintack,
	eric.auger.pro, virtio-dev, jayachandran.nair, lorenzo.pieralisi,
	kvm, joro, kvmarm, marc.zyngier, eric.auger, iommu, kbuild-all,
	robin.murphy

[-- Attachment #1: Type: text/plain, Size: 2371 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: arm64-allmodconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

   aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
   arch/arm64/kernel/head.o: In function `kimage_vaddr':
   (.idmap.text+0x0): dangerous relocation: unsupported relocation
   arch/arm64/kernel/head.o: In function `__primary_switch':
   (.idmap.text+0x340): dangerous relocation: unsupported relocation
   (.idmap.text+0x348): dangerous relocation: unsupported relocation
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
   virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
   virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
   virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
   virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
   virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 59126 bytes --]

[-- Attachment #3: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]   ` <20180214145340.1223-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
  2018-02-19 12:23     ` Tomasz Nowicki
  2018-02-21 20:12     ` kbuild test robot
@ 2018-02-21 21:08     ` kbuild test robot
  2018-03-21  6:43       ` [virtio-dev] " Tian, Kevin
  2018-03-23 14:48     ` Robin Murphy
  4 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-02-21 21:08 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	marc.zyngier-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kbuild-all-JC7UmRfGjtg

[-- Attachment #1: Type: text/plain, Size: 1715 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=ia64 

All errors (new ones prefixed by >>):

   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
>> virtio-iommu.c:(.text+0xa82): undefined reference to `virtqueue_add_sgs'
>> virtio-iommu.c:(.text+0xb52): undefined reference to `virtqueue_kick'
>> virtio-iommu.c:(.text+0xd82): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
>> virtio-iommu.c:(.text+0x23f2): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0x2572): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0x26f2): undefined reference to `virtio_check_driver_offered_feature'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
>> virtio-iommu.c:(.init.text+0x22): undefined reference to `register_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 50250 bytes --]

[-- Attachment #3: Type: text/plain, Size: 0 bytes --]



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
  (?)
@ 2018-02-21 21:08   ` kbuild test robot
  -1 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-02-21 21:08 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki, mst, will.deacon, virtualization, jintack,
	eric.auger.pro, virtio-dev, jayachandran.nair, lorenzo.pieralisi,
	kvm, joro, kvmarm, marc.zyngier, eric.auger, iommu, kbuild-all,
	robin.murphy

[-- Attachment #1: Type: text/plain, Size: 1715 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: ia64-allmodconfig (attached as .config)
compiler: ia64-linux-gcc (GCC) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=ia64 

All errors (new ones prefixed by >>):

   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
>> virtio-iommu.c:(.text+0xa82): undefined reference to `virtqueue_add_sgs'
>> virtio-iommu.c:(.text+0xb52): undefined reference to `virtqueue_kick'
>> virtio-iommu.c:(.text+0xd82): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
>> virtio-iommu.c:(.text+0x23f2): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0x2572): undefined reference to `virtio_check_driver_offered_feature'
   virtio-iommu.c:(.text+0x26f2): undefined reference to `virtio_check_driver_offered_feature'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
>> virtio-iommu.c:(.init.text+0x22): undefined reference to `register_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 50250 bytes --]

[-- Attachment #3: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 3/4] iommu/virtio: Add event queue
       [not found]   ` <20180214145340.1223-4-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
@ 2018-02-22  1:35     ` kbuild test robot
  0 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-02-22  1:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	marc.zyngier-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kbuild-all-JC7UmRfGjtg

[-- Attachment #1: Type: text/plain, Size: 2426 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: parisc-allmodconfig (attached as .config)
compiler: hppa-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=parisc 

All errors (new ones prefixed by >>):

   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
   (.init.text+0x24): undefined reference to `register_virtio_driver'
   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
   (.text.viommu_send_reqs_sync+0xdc): undefined reference to `virtqueue_add_sgs'
   (.text.viommu_send_reqs_sync+0x12c): undefined reference to `virtqueue_kick'
   (.text.viommu_send_reqs_sync+0x29c): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `viommu_event_handler':
>> (.text.viommu_event_handler+0x288): undefined reference to `virtqueue_add_inbuf'
>> (.text.viommu_event_handler+0x2a8): undefined reference to `virtqueue_get_buf'
>> (.text.viommu_event_handler+0x2b8): undefined reference to `virtqueue_kick'
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
   (.text.viommu_probe+0x1a0): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x248): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x2ec): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x328): undefined reference to `virtio_check_driver_offered_feature'
>> (.text.viommu_probe+0x428): undefined reference to `virtqueue_add_inbuf'
>> (.text.viommu_probe+0x440): undefined reference to `virtqueue_kick'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
   (.exit.text+0x18): undefined reference to `unregister_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52870 bytes --]

[-- Attachment #3: Type: text/plain, Size: 0 bytes --]



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 3/4] iommu/virtio: Add event queue
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-02-22  1:35   ` kbuild test robot
  -1 siblings, 0 replies; 61+ messages in thread
From: kbuild test robot @ 2018-02-22  1:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: tnowicki, mst, will.deacon, virtualization, jintack,
	eric.auger.pro, virtio-dev, jayachandran.nair, lorenzo.pieralisi,
	kvm, joro, kvmarm, marc.zyngier, eric.auger, iommu, kbuild-all,
	robin.murphy

[-- Attachment #1: Type: text/plain, Size: 2426 bytes --]

Hi Jean-Philippe,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.16-rc2 next-20180221]
[cannot apply to iommu/next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Jean-Philippe-Brucker/Add-virtio-iommu-driver/20180217-075417
config: parisc-allmodconfig (attached as .config)
compiler: hppa-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=parisc 

All errors (new ones prefixed by >>):

   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
   (.init.text+0x24): undefined reference to `register_virtio_driver'
   drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
   (.text.viommu_send_reqs_sync+0xdc): undefined reference to `virtqueue_add_sgs'
   (.text.viommu_send_reqs_sync+0x12c): undefined reference to `virtqueue_kick'
   (.text.viommu_send_reqs_sync+0x29c): undefined reference to `virtqueue_get_buf'
   drivers/iommu/virtio-iommu.o: In function `viommu_event_handler':
>> (.text.viommu_event_handler+0x288): undefined reference to `virtqueue_add_inbuf'
>> (.text.viommu_event_handler+0x2a8): undefined reference to `virtqueue_get_buf'
>> (.text.viommu_event_handler+0x2b8): undefined reference to `virtqueue_kick'
   drivers/iommu/virtio-iommu.o: In function `viommu_probe':
   (.text.viommu_probe+0x1a0): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x248): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x2ec): undefined reference to `virtio_check_driver_offered_feature'
   (.text.viommu_probe+0x328): undefined reference to `virtio_check_driver_offered_feature'
>> (.text.viommu_probe+0x428): undefined reference to `virtqueue_add_inbuf'
>> (.text.viommu_probe+0x440): undefined reference to `virtqueue_kick'
   drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
   (.exit.text+0x18): undefined reference to `unregister_virtio_driver'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52870 bytes --]

[-- Attachment #3: Type: text/plain, Size: 183 bytes --]

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-21 20:12     ` kbuild test robot
@ 2018-02-22 11:04           ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-22 11:04 UTC (permalink / raw)
  To: kbuild test robot
  Cc: tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	Will Deacon,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg, Marc Zyngier,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kbuild-all-JC7UmRfGjtg

On 21/02/18 20:12, kbuild test robot wrote:
[...]
> config: arm64-allmodconfig (attached as .config)
[...]
>    aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
>    arch/arm64/kernel/head.o: In function `kimage_vaddr':
>    (.idmap.text+0x0): dangerous relocation: unsupported relocation

Is this related?

>    arch/arm64/kernel/head.o: In function `__primary_switch':
>    (.idmap.text+0x340): dangerous relocation: unsupported relocation
>    (.idmap.text+0x348): dangerous relocation: unsupported relocation
>    drivers/iommu/virtio-iommu.o: In function `viommu_probe':
>    virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
>    virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
>    virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
>    drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
>    virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
>    virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
>    virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
>    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
>    virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
>    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
>>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'

Right. At the moment CONFIG_VIRTIO_IOMMU is a bool instead of tristate,
because the IOMMU subsystem isn't entirely ready to have IOMMU drivers
built as modules. In addition to exporting symbols it would also needs to
hold off probing endpoints behind the IOMMU until the IOMMU driver is
loaded. At the moment (I think) it gives up once userspace is reached (see
of_iommu_driver_present).

The above report is due to VIRTIO=m and VIRTIO_IOMMU=y. To solve it we could:

a) Allow VIRTIO_IOMMU to be built as module by exporting a dozen IOMMU
symbols. It would be a lie. The driver wouldn't be usable because of the
probe issue discussed above, but it would build.

b) Actually make any IOMMU driver work as module. Whilst it would
certainly be a nice feature, it's a bigger problem and I don't think it
has its place in this series.

c) Make VIRTIO_IOMMU depend on VIRTIO_MMIO=y instead of VIRTIO_MMIO, which
I think is the sanest for now (and does work), even though distro kernels
probably all have VIRTIO=m.

I prefer doing c) now and experiment with b) once I got some time.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-02-22 11:04           ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-02-22 11:04 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, iommu, kvm, virtualization, virtio-dev, kvmarm, joro,
	alex.williamson, mst, jasowang, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian,
	jintack

On 21/02/18 20:12, kbuild test robot wrote:
[...]
> config: arm64-allmodconfig (attached as .config)
[...]
>    aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
>    arch/arm64/kernel/head.o: In function `kimage_vaddr':
>    (.idmap.text+0x0): dangerous relocation: unsupported relocation

Is this related?

>    arch/arm64/kernel/head.o: In function `__primary_switch':
>    (.idmap.text+0x340): dangerous relocation: unsupported relocation
>    (.idmap.text+0x348): dangerous relocation: unsupported relocation
>    drivers/iommu/virtio-iommu.o: In function `viommu_probe':
>    virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
>    virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
>    virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
>    drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
>    virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
>    virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
>    virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
>    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
>    virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
>    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
>>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'

Right. At the moment CONFIG_VIRTIO_IOMMU is a bool instead of tristate,
because the IOMMU subsystem isn't entirely ready to have IOMMU drivers
built as modules. In addition to exporting symbols it would also needs to
hold off probing endpoints behind the IOMMU until the IOMMU driver is
loaded. At the moment (I think) it gives up once userspace is reached (see
of_iommu_driver_present).

The above report is due to VIRTIO=m and VIRTIO_IOMMU=y. To solve it we could:

a) Allow VIRTIO_IOMMU to be built as module by exporting a dozen IOMMU
symbols. It would be a lie. The driver wouldn't be usable because of the
probe issue discussed above, but it would build.

b) Actually make any IOMMU driver work as module. Whilst it would
certainly be a nice feature, it's a bigger problem and I don't think it
has its place in this series.

c) Make VIRTIO_IOMMU depend on VIRTIO_MMIO=y instead of VIRTIO_MMIO, which
I think is the sanest for now (and does work), even though distro kernels
probably all have VIRTIO=m.

I prefer doing c) now and experiment with b) once I got some time.

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-22 11:04           ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-02-27 14:47               ` Michael S. Tsirkin
  -1 siblings, 0 replies; 61+ messages in thread
From: Michael S. Tsirkin @ 2018-02-27 14:47 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kvm-u79uwXL29TY76Z2rM5mHXA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	Will Deacon,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA, kbuild test robot,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg, Marc Zyngier,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kbuild-all-JC7UmRfGjtg

On Thu, Feb 22, 2018 at 11:04:30AM +0000, Jean-Philippe Brucker wrote:
> On 21/02/18 20:12, kbuild test robot wrote:
> [...]
> > config: arm64-allmodconfig (attached as .config)
> [...]
> >    aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
> >    arch/arm64/kernel/head.o: In function `kimage_vaddr':
> >    (.idmap.text+0x0): dangerous relocation: unsupported relocation
> 
> Is this related?
> 
> >    arch/arm64/kernel/head.o: In function `__primary_switch':
> >    (.idmap.text+0x340): dangerous relocation: unsupported relocation
> >    (.idmap.text+0x348): dangerous relocation: unsupported relocation
> >    drivers/iommu/virtio-iommu.o: In function `viommu_probe':
> >    virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
> >    drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
> >    virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
> >    virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
> >    virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
> >    virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
> >>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'
> 
> Right. At the moment CONFIG_VIRTIO_IOMMU is a bool instead of tristate,
> because the IOMMU subsystem isn't entirely ready to have IOMMU drivers
> built as modules. In addition to exporting symbols it would also needs to
> hold off probing endpoints behind the IOMMU until the IOMMU driver is
> loaded. At the moment (I think) it gives up once userspace is reached (see
> of_iommu_driver_present).
> 
> The above report is due to VIRTIO=m and VIRTIO_IOMMU=y. To solve it we could:
> 
> a) Allow VIRTIO_IOMMU to be built as module by exporting a dozen IOMMU
> symbols. It would be a lie. The driver wouldn't be usable because of the
> probe issue discussed above, but it would build.
> 
> b) Actually make any IOMMU driver work as module. Whilst it would
> certainly be a nice feature, it's a bigger problem and I don't think it
> has its place in this series.
> 
> c) Make VIRTIO_IOMMU depend on VIRTIO_MMIO=y instead of VIRTIO_MMIO, which
> I think is the sanest for now (and does work), even though distro kernels
> probably all have VIRTIO=m.
> 
> I prefer doing c) now and experiment with b) once I got some time.
> 
> Thanks,
> Jean

It kind of means it's a toy for now though. So fine as long
as it's out of tree.

-- 
MST

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-22 11:04           ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-02-27 14:47           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 61+ messages in thread
From: Michael S. Tsirkin @ 2018-02-27 14:47 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kvm, Will Deacon, virtualization, tnowicki, jintack,
	eric.auger.pro, virtio-dev, jayachandran.nair, Lorenzo Pieralisi,
	kbuild test robot, joro, kvmarm, Marc Zyngier, eric.auger, iommu,
	kbuild-all, Robin Murphy

On Thu, Feb 22, 2018 at 11:04:30AM +0000, Jean-Philippe Brucker wrote:
> On 21/02/18 20:12, kbuild test robot wrote:
> [...]
> > config: arm64-allmodconfig (attached as .config)
> [...]
> >    aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
> >    arch/arm64/kernel/head.o: In function `kimage_vaddr':
> >    (.idmap.text+0x0): dangerous relocation: unsupported relocation
> 
> Is this related?
> 
> >    arch/arm64/kernel/head.o: In function `__primary_switch':
> >    (.idmap.text+0x340): dangerous relocation: unsupported relocation
> >    (.idmap.text+0x348): dangerous relocation: unsupported relocation
> >    drivers/iommu/virtio-iommu.o: In function `viommu_probe':
> >    virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
> >    drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
> >    virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
> >    virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
> >    virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
> >    virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
> >>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'
> 
> Right. At the moment CONFIG_VIRTIO_IOMMU is a bool instead of tristate,
> because the IOMMU subsystem isn't entirely ready to have IOMMU drivers
> built as modules. In addition to exporting symbols it would also needs to
> hold off probing endpoints behind the IOMMU until the IOMMU driver is
> loaded. At the moment (I think) it gives up once userspace is reached (see
> of_iommu_driver_present).
> 
> The above report is due to VIRTIO=m and VIRTIO_IOMMU=y. To solve it we could:
> 
> a) Allow VIRTIO_IOMMU to be built as module by exporting a dozen IOMMU
> symbols. It would be a lie. The driver wouldn't be usable because of the
> probe issue discussed above, but it would build.
> 
> b) Actually make any IOMMU driver work as module. Whilst it would
> certainly be a nice feature, it's a bigger problem and I don't think it
> has its place in this series.
> 
> c) Make VIRTIO_IOMMU depend on VIRTIO_MMIO=y instead of VIRTIO_MMIO, which
> I think is the sanest for now (and does work), even though distro kernels
> probably all have VIRTIO=m.
> 
> I prefer doing c) now and experiment with b) once I got some time.
> 
> Thanks,
> Jean

It kind of means it's a toy for now though. So fine as long
as it's out of tree.

-- 
MST

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-02-27 14:47               ` Michael S. Tsirkin
  0 siblings, 0 replies; 61+ messages in thread
From: Michael S. Tsirkin @ 2018-02-27 14:47 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kbuild test robot, kbuild-all, iommu, kvm, virtualization,
	virtio-dev, kvmarm, joro, alex.williamson, jasowang,
	Marc Zyngier, Robin Murphy, Will Deacon, Lorenzo Pieralisi,
	eric.auger, eric.auger.pro, peterx, bharat.bhushan, tnowicki,
	jayachandran.nair, kevin.tian, jintack

On Thu, Feb 22, 2018 at 11:04:30AM +0000, Jean-Philippe Brucker wrote:
> On 21/02/18 20:12, kbuild test robot wrote:
> [...]
> > config: arm64-allmodconfig (attached as .config)
> [...]
> >    aarch64-linux-gnu-ld: arch/arm64/kernel/head.o: relocation R_AARCH64_ABS32 against `_kernel_offset_le_lo32' can not be used when making a shared object
> >    arch/arm64/kernel/head.o: In function `kimage_vaddr':
> >    (.idmap.text+0x0): dangerous relocation: unsupported relocation
> 
> Is this related?
> 
> >    arch/arm64/kernel/head.o: In function `__primary_switch':
> >    (.idmap.text+0x340): dangerous relocation: unsupported relocation
> >    (.idmap.text+0x348): dangerous relocation: unsupported relocation
> >    drivers/iommu/virtio-iommu.o: In function `viommu_probe':
> >    virtio-iommu.c:(.text+0xbdc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xcfc): undefined reference to `virtio_check_driver_offered_feature'
> >    virtio-iommu.c:(.text+0xe10): undefined reference to `virtio_check_driver_offered_feature'
> >    drivers/iommu/virtio-iommu.o: In function `viommu_send_reqs_sync':
> >    virtio-iommu.c:(.text+0x130c): undefined reference to `virtqueue_add_sgs'
> >    virtio-iommu.c:(.text+0x1398): undefined reference to `virtqueue_kick'
> >    virtio-iommu.c:(.text+0x14d4): undefined reference to `virtqueue_get_buf'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_init':
> >    virtio-iommu.c:(.init.text+0x1c): undefined reference to `register_virtio_driver'
> >    drivers/iommu/virtio-iommu.o: In function `virtio_iommu_drv_exit':
> >>> virtio-iommu.c:(.exit.text+0x14): undefined reference to `unregister_virtio_driver'
> 
> Right. At the moment CONFIG_VIRTIO_IOMMU is a bool instead of tristate,
> because the IOMMU subsystem isn't entirely ready to have IOMMU drivers
> built as modules. In addition to exporting symbols it would also needs to
> hold off probing endpoints behind the IOMMU until the IOMMU driver is
> loaded. At the moment (I think) it gives up once userspace is reached (see
> of_iommu_driver_present).
> 
> The above report is due to VIRTIO=m and VIRTIO_IOMMU=y. To solve it we could:
> 
> a) Allow VIRTIO_IOMMU to be built as module by exporting a dozen IOMMU
> symbols. It would be a lie. The driver wouldn't be usable because of the
> probe issue discussed above, but it would build.
> 
> b) Actually make any IOMMU driver work as module. Whilst it would
> certainly be a nice feature, it's a bigger problem and I don't think it
> has its place in this series.
> 
> c) Make VIRTIO_IOMMU depend on VIRTIO_MMIO=y instead of VIRTIO_MMIO, which
> I think is the sanest for now (and does work), even though distro kernels
> probably all have VIRTIO=m.
> 
> I prefer doing c) now and experiment with b) once I got some time.
> 
> Thanks,
> Jean

It kind of means it's a toy for now though. So fine as long
as it's out of tree.

-- 
MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-03-21  6:43       ` Tian, Kevin
  -1 siblings, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-21  6:43 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org]
> Sent: Wednesday, February 14, 2018 10:54 PM
> 
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without
> emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and
> UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>

[...]
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000

this is ARM specific, and according to virtio-iommu spec isn't it
better probed on the endpoint instead of hard-coding here?

Thanks
Kevin

^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
                     ` (3 preceding siblings ...)
  (?)
@ 2018-03-21  6:43   ` Tian, Kevin
  -1 siblings, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-21  6:43 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, robin.murphy,
	joro, eric.auger.pro

> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
> Sent: Wednesday, February 14, 2018 10:54 PM
> 
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without
> emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and
> UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>

[...]
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000

this is ARM specific, and according to virtio-iommu spec isn't it
better probed on the endpoint instead of hard-coding here?

Thanks
Kevin

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] RE: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-03-21  6:43       ` Tian, Kevin
  0 siblings, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-21  6:43 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, jintack

> From: Jean-Philippe Brucker [mailto:jean-philippe.brucker@arm.com]
> Sent: Wednesday, February 14, 2018 10:54 PM
> 
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without
> emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and
> UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>

[...]
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000

this is ARM specific, and according to virtio-iommu spec isn't it
better probed on the endpoint instead of hard-coding here?

Thanks
Kevin

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-21  6:43       ` [virtio-dev] " Tian, Kevin
@ 2018-03-21 13:14           ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-03-21 13:14 UTC (permalink / raw)
  To: Tian, Kevin, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, Marc Zyngier,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, Will Deacon,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

On 21/03/18 06:43, Tian, Kevin wrote:
[...]
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
> 
> this is ARM specific, and according to virtio-iommu spec isn't it
> better probed on the endpoint instead of hard-coding here?

These values are arbitrary, not really ARM-specific even if ARM is the
only user yet: we're just reserving a random IOVA region for mapping MSIs.
It is hard-coded because of the way iommu-dma.c works, but I don't quite
remember why that allocation isn't dynamic.

As said on the v0.6 spec thread, I'm not sure allocating the IOVA range in
the host is preferable. With nested translation the guest has to map it
anyway, and I believe dealing with IOVA allocation should be left to the
guest when possible.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-21  6:43       ` [virtio-dev] " Tian, Kevin
  (?)
@ 2018-03-21 13:14       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-03-21 13:14 UTC (permalink / raw)
  To: Tian, Kevin, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, Will Deacon, jintack, eric.auger, Robin Murphy,
	joro, eric.auger.pro

On 21/03/18 06:43, Tian, Kevin wrote:
[...]
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
> 
> this is ARM specific, and according to virtio-iommu spec isn't it
> better probed on the endpoint instead of hard-coding here?

These values are arbitrary, not really ARM-specific even if ARM is the
only user yet: we're just reserving a random IOVA region for mapping MSIs.
It is hard-coded because of the way iommu-dma.c works, but I don't quite
remember why that allocation isn't dynamic.

As said on the v0.6 spec thread, I'm not sure allocating the IOVA range in
the host is preferable. With nested translation the guest has to map it
anyway, and I believe dealing with IOVA allocation should be left to the
guest when possible.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-03-21 13:14           ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-03-21 13:14 UTC (permalink / raw)
  To: Tian, Kevin, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi, eric.auger, eric.auger.pro,
	peterx, bharat.bhushan, tnowicki, jayachandran.nair, jintack

On 21/03/18 06:43, Tian, Kevin wrote:
[...]
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
> 
> this is ARM specific, and according to virtio-iommu spec isn't it
> better probed on the endpoint instead of hard-coding here?

These values are arbitrary, not really ARM-specific even if ARM is the
only user yet: we're just reserving a random IOVA region for mapping MSIs.
It is hard-coded because of the way iommu-dma.c works, but I don't quite
remember why that allocation isn't dynamic.

As said on the v0.6 spec thread, I'm not sure allocating the IOVA range in
the host is preferable. With nested translation the guest has to map it
anyway, and I believe dealing with IOVA allocation should be left to the
guest when possible.

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-21 13:14           ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-03-21 14:23           ` Robin Murphy
  2018-03-22 10:06               ` [virtio-dev] " Tian, Kevin
       [not found]             ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42@SHSMSX101.ccr.corp.intel.com>
  -1 siblings, 2 replies; 61+ messages in thread
From: Robin Murphy @ 2018-03-21 14:23 UTC (permalink / raw)
  To: Jean-Philippe Brucker, Tian, Kevin, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, Will Deacon,
	jintack, eric.auger.pro

On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> On 21/03/18 06:43, Tian, Kevin wrote:
> [...]
>>> +
>>> +#include <uapi/linux/virtio_iommu.h>
>>> +
>>> +#define MSI_IOVA_BASE			0x8000000
>>> +#define MSI_IOVA_LENGTH			0x100000
>>
>> this is ARM specific, and according to virtio-iommu spec isn't it
>> better probed on the endpoint instead of hard-coding here?
> 
> These values are arbitrary, not really ARM-specific even if ARM is the
> only user yet: we're just reserving a random IOVA region for mapping MSIs.
> It is hard-coded because of the way iommu-dma.c works, but I don't quite
> remember why that allocation isn't dynamic.

The host kernel needs to have *some* MSI region in place before the 
guest can start configuring interrupts, otherwise it won't know what 
address to give to the underlying hardware. However, as soon as the host 
kernel has picked a region, host userspace needs to know that it can no 
longer use addresses in that region for DMA-able guest memory. It's a 
lot easier when the address is fixed in hardware and the host userspace 
will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in the 
more general case where MSI writes undergo IOMMU address translation so 
it's an arbitrary IOVA, this has the potential to conflict with stuff 
like guest memory hotplug.

What we currently have is just the simplest option, with the host kernel 
just picking something up-front and pretending to host userspace that 
it's a fixed hardware address. There's certainly scope for it to be a 
bit more dynamic in the sense of adding an interface to let userspace 
move it around (before attaching any devices, at least), but I don't 
think it's feasible for the host kernel to second-guess userspace enough 
to make it entirely transparent like it is in the DMA API domain case.

Of course, that's all assuming the host itself is using a virtio-iommu 
(e.g. in a nested virt or emulation scenario). When it's purely within a 
guest then an MSI reservation shouldn't matter so much, since the guest 
won't be anywhere near the real hardware configuration anyway.

Robin.

> As said on the v0.6 spec thread, I'm not sure allocating the IOVA range in
> the host is preferable. With nested translation the guest has to map it
> anyway, and I believe dealing with IOVA allocation should be left to the
> guest when possible.
> 
> Thanks,
> Jean
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-21 14:23           ` Robin Murphy
@ 2018-03-22 10:06               ` Tian, Kevin
       [not found]             ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42@SHSMSX101.ccr.corp.intel.com>
  1 sibling, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-22 10:06 UTC (permalink / raw)
  To: Robin Murphy, Jean-Philippe Brucker, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, Will Deacon,
	jintack, eric.auger.pro

> From: Robin Murphy [mailto:robin.murphy@arm.com]
> Sent: Wednesday, March 21, 2018 10:24 PM
> 
> On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> > On 21/03/18 06:43, Tian, Kevin wrote:
> > [...]
> >>> +
> >>> +#include <uapi/linux/virtio_iommu.h>
> >>> +
> >>> +#define MSI_IOVA_BASE			0x8000000
> >>> +#define MSI_IOVA_LENGTH			0x100000
> >>
> >> this is ARM specific, and according to virtio-iommu spec isn't it
> >> better probed on the endpoint instead of hard-coding here?
> >
> > These values are arbitrary, not really ARM-specific even if ARM is the
> > only user yet: we're just reserving a random IOVA region for mapping
> MSIs.
> > It is hard-coded because of the way iommu-dma.c works, but I don't
> quite
> > remember why that allocation isn't dynamic.
> 
> The host kernel needs to have *some* MSI region in place before the
> guest can start configuring interrupts, otherwise it won't know what
> address to give to the underlying hardware. However, as soon as the host
> kernel has picked a region, host userspace needs to know that it can no
> longer use addresses in that region for DMA-able guest memory. It's a
> lot easier when the address is fixed in hardware and the host userspace
> will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
> the
> more general case where MSI writes undergo IOMMU address translation
> so
> it's an arbitrary IOVA, this has the potential to conflict with stuff
> like guest memory hotplug.
> 
> What we currently have is just the simplest option, with the host kernel
> just picking something up-front and pretending to host userspace that
> it's a fixed hardware address. There's certainly scope for it to be a
> bit more dynamic in the sense of adding an interface to let userspace
> move it around (before attaching any devices, at least), but I don't
> think it's feasible for the host kernel to second-guess userspace enough
> to make it entirely transparent like it is in the DMA API domain case.
> 
> Of course, that's all assuming the host itself is using a virtio-iommu
> (e.g. in a nested virt or emulation scenario). When it's purely within a
> guest then an MSI reservation shouldn't matter so much, since the guest
> won't be anywhere near the real hardware configuration anyway.
> 
> Robin.

Curious since anyway we are defining a new iommu architecture
is it possible to avoid those ARM-specific burden completely? 

Thanks
Kevin

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] RE: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-03-22 10:06               ` Tian, Kevin
  0 siblings, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-22 10:06 UTC (permalink / raw)
  To: Robin Murphy, Jean-Philippe Brucker, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, jasowang,
	Will Deacon, jintack, eric.auger.pro

> From: Robin Murphy [mailto:robin.murphy@arm.com]
> Sent: Wednesday, March 21, 2018 10:24 PM
> 
> On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> > On 21/03/18 06:43, Tian, Kevin wrote:
> > [...]
> >>> +
> >>> +#include <uapi/linux/virtio_iommu.h>
> >>> +
> >>> +#define MSI_IOVA_BASE			0x8000000
> >>> +#define MSI_IOVA_LENGTH			0x100000
> >>
> >> this is ARM specific, and according to virtio-iommu spec isn't it
> >> better probed on the endpoint instead of hard-coding here?
> >
> > These values are arbitrary, not really ARM-specific even if ARM is the
> > only user yet: we're just reserving a random IOVA region for mapping
> MSIs.
> > It is hard-coded because of the way iommu-dma.c works, but I don't
> quite
> > remember why that allocation isn't dynamic.
> 
> The host kernel needs to have *some* MSI region in place before the
> guest can start configuring interrupts, otherwise it won't know what
> address to give to the underlying hardware. However, as soon as the host
> kernel has picked a region, host userspace needs to know that it can no
> longer use addresses in that region for DMA-able guest memory. It's a
> lot easier when the address is fixed in hardware and the host userspace
> will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
> the
> more general case where MSI writes undergo IOMMU address translation
> so
> it's an arbitrary IOVA, this has the potential to conflict with stuff
> like guest memory hotplug.
> 
> What we currently have is just the simplest option, with the host kernel
> just picking something up-front and pretending to host userspace that
> it's a fixed hardware address. There's certainly scope for it to be a
> bit more dynamic in the sense of adding an interface to let userspace
> move it around (before attaching any devices, at least), but I don't
> think it's feasible for the host kernel to second-guess userspace enough
> to make it entirely transparent like it is in the DMA API domain case.
> 
> Of course, that's all assuming the host itself is using a virtio-iommu
> (e.g. in a nested virt or emulation scenario). When it's purely within a
> guest then an MSI reservation shouldn't matter so much, since the guest
> won't be anywhere near the real hardware configuration anyway.
> 
> Robin.

Curious since anyway we are defining a new iommu architecture
is it possible to avoid those ARM-specific burden completely? 

Thanks
Kevin


^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]             ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42@SHSMSX101.ccr.corp.intel.com>
@ 2018-03-23  8:27                   ` Tian, Kevin
       [not found]               ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  1 sibling, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-23  8:27 UTC (permalink / raw)
  To: 'Robin Murphy',
	Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, Marc Zyngier,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, Will Deacon,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

> From: Tian, Kevin
> Sent: Thursday, March 22, 2018 6:06 PM
> 
> > From: Robin Murphy [mailto:robin.murphy-5wv7dgnIgG8@public.gmane.org]
> > Sent: Wednesday, March 21, 2018 10:24 PM
> >
> > On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> > > On 21/03/18 06:43, Tian, Kevin wrote:
> > > [...]
> > >>> +
> > >>> +#include <uapi/linux/virtio_iommu.h>
> > >>> +
> > >>> +#define MSI_IOVA_BASE			0x8000000
> > >>> +#define MSI_IOVA_LENGTH			0x100000
> > >>
> > >> this is ARM specific, and according to virtio-iommu spec isn't it
> > >> better probed on the endpoint instead of hard-coding here?
> > >
> > > These values are arbitrary, not really ARM-specific even if ARM is the
> > > only user yet: we're just reserving a random IOVA region for mapping
> > MSIs.
> > > It is hard-coded because of the way iommu-dma.c works, but I don't
> > quite
> > > remember why that allocation isn't dynamic.
> >
> > The host kernel needs to have *some* MSI region in place before the
> > guest can start configuring interrupts, otherwise it won't know what
> > address to give to the underlying hardware. However, as soon as the host
> > kernel has picked a region, host userspace needs to know that it can no
> > longer use addresses in that region for DMA-able guest memory. It's a
> > lot easier when the address is fixed in hardware and the host userspace
> > will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
> > the
> > more general case where MSI writes undergo IOMMU address translation
> > so
> > it's an arbitrary IOVA, this has the potential to conflict with stuff
> > like guest memory hotplug.
> >
> > What we currently have is just the simplest option, with the host kernel
> > just picking something up-front and pretending to host userspace that
> > it's a fixed hardware address. There's certainly scope for it to be a
> > bit more dynamic in the sense of adding an interface to let userspace
> > move it around (before attaching any devices, at least), but I don't
> > think it's feasible for the host kernel to second-guess userspace enough
> > to make it entirely transparent like it is in the DMA API domain case.
> >
> > Of course, that's all assuming the host itself is using a virtio-iommu
> > (e.g. in a nested virt or emulation scenario). When it's purely within a
> > guest then an MSI reservation shouldn't matter so much, since the guest
> > won't be anywhere near the real hardware configuration anyway.
> >
> > Robin.
> 
> Curious since anyway we are defining a new iommu architecture
> is it possible to avoid those ARM-specific burden completely?
> 

OK, after some study around those tricks below is my learning:

- MSI_IOVA window is used only on request (iommu_dma_get
_msi_page), not meant to take effect on all architectures once 
initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
for virtio-iommu driver to implement such capability;

- I thought whether hardware MSI doorbell can be always reported
on virtio-iommu since it's newly defined. Looks there is a problem
if underlying IOMMU is sw-managed MSI style - valid mapping is
expected in all level of translations, meaning guest has to manage
stage-1 mapping in nested configuration since stage-1 is owned
by guest. 

Then virtio-iommu is naturally expected to report the same MSI 
model as supported by underlying hardware. Below are some
further thoughts along this route (use 'IOMMU' to represent the
physical one and 'virtio-iommu' for virtual one):

----

In the scope of current virtio-iommu spec v.6, there is no nested
consideration yet. Guest driver is expected to use MAP/UNMAP
interface on assigned endpoints. In this case the MAP requests
(IOVA->GPA) is caught and maintained within Qemu which then 
further talks to VFIO to map IOVA->HPA in IOMMU.

Qemu can learn the MSI model of IOMMU from sysfs.

For hardware MSI doorbell (x86 and some ARM):
* Host kernel reports to Qemu as IOMMU_RESV_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
* Guest takes the range as IOMMU_RESV_MSI. reserved
* Qemu MAP database has no mapping for the doorbell
* Physical IOMMU page table has no mapping for the doorbell
* MSI from passthrough device bypass IOMMU
* MSI from emulated device bypass virtio-iommu

For software MSI doorbell (most ARM):
* Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
* Guest takes the range as IOMMU_RESV_RESERVED
* vGIC requests to map 'GPA of the virtual doorbell'
* a map request (IOVA->GPA) sent on endpoint
* Qemu maintains the mapping in MAP database
	* but no VFIO_MAP request since it's purely virtual
* GIC requests to map 'HPA of the physical doorbell'
	* e.g. triggered by VFIO enable msi
* IOMMU now includes a valid mapping (IOVA->HPA)
* MSI from emulated device go through Qemu MAP
database (IOVA->'GPA of virtual doorbell') and then hit vGIC
* MSI from passthrough device go through IOMMU
(IOVA->'HPA of physical doorbell') and then hit GIC

In this case, host doorbell is treated as reserved resource in
guest side. Guest has its own sw-management for virtual
doorbell which is only used for emulated device. two paths 
are completely separated.

If above captures the right flow, current v0.6 spec is complete
regarding to required function definition.

----

Then comes nested case, with two level page tables (stage-1
and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
is for GPA->HPA. VFIO map/unmap happens on stage-2, 
while stage-1 is directly managed by guest (and bound to
IOMMU which enables nested translation from IOVA->GPA
->HPA).

For hardware MSI, there is nothing special compared to
previous requirement. Both host/guest treat the doorbell
as reserved and guarantee no mapping in either stage-1 or 
stage-2. 

For software MSI, more consideration is required:

* for emulated device it is just fine as long as guest keeps
IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
to walk stage-1 page table upon MSI request from emulated
device to hit vGIC;

* for passthrough device however there is a problem. We
need valid mapping in both stage-1 and stage-2, while host
kernel is only responsible for stage-2:

	1) if we expect to keep same isolation policy (i.e.
host MSI fully managed by host kernel), then an identity
mapping for host-reported MSI range is expected in stage-1.
In such case we need a new type VIRTIO_IOMMU_RESV_
MEM_T_DIRECT to teach guest setup identity mapping.
it should be the right thing to add since anyway there might
be true IOMMU_RESV_DIRECT range reported from host
which also should be handled.

	2) Alternatively we could instead allow Qemu to
request dynamic change of physical doorbell mapping in 
stage2, e.g. from GPA of virtual doorbell to HPA of physical 
doorbell. But it doesn't like a good design - VFIO doesn't
assign interrupt controller to user space then why should 
VFIO allow user mapping to doorbell...

if 1) is agreed, looks the missing part in spec is just VIRTIO_
IOMMU_RESV_MEM_T_DIRECT, though the whole story 
is lengthy and fully enabling nested require many other
works. :-)

Thanks
Kevin

^ permalink raw reply	[flat|nested] 61+ messages in thread

* RE: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]             ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42@SHSMSX101.ccr.corp.intel.com>
@ 2018-03-23  8:27               ` Tian, Kevin
       [not found]               ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  1 sibling, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-23  8:27 UTC (permalink / raw)
  To: 'Robin Murphy',
	Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev,
	kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, Will Deacon,
	jintack, eric.auger.pro

> From: Tian, Kevin
> Sent: Thursday, March 22, 2018 6:06 PM
> 
> > From: Robin Murphy [mailto:robin.murphy@arm.com]
> > Sent: Wednesday, March 21, 2018 10:24 PM
> >
> > On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> > > On 21/03/18 06:43, Tian, Kevin wrote:
> > > [...]
> > >>> +
> > >>> +#include <uapi/linux/virtio_iommu.h>
> > >>> +
> > >>> +#define MSI_IOVA_BASE			0x8000000
> > >>> +#define MSI_IOVA_LENGTH			0x100000
> > >>
> > >> this is ARM specific, and according to virtio-iommu spec isn't it
> > >> better probed on the endpoint instead of hard-coding here?
> > >
> > > These values are arbitrary, not really ARM-specific even if ARM is the
> > > only user yet: we're just reserving a random IOVA region for mapping
> > MSIs.
> > > It is hard-coded because of the way iommu-dma.c works, but I don't
> > quite
> > > remember why that allocation isn't dynamic.
> >
> > The host kernel needs to have *some* MSI region in place before the
> > guest can start configuring interrupts, otherwise it won't know what
> > address to give to the underlying hardware. However, as soon as the host
> > kernel has picked a region, host userspace needs to know that it can no
> > longer use addresses in that region for DMA-able guest memory. It's a
> > lot easier when the address is fixed in hardware and the host userspace
> > will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
> > the
> > more general case where MSI writes undergo IOMMU address translation
> > so
> > it's an arbitrary IOVA, this has the potential to conflict with stuff
> > like guest memory hotplug.
> >
> > What we currently have is just the simplest option, with the host kernel
> > just picking something up-front and pretending to host userspace that
> > it's a fixed hardware address. There's certainly scope for it to be a
> > bit more dynamic in the sense of adding an interface to let userspace
> > move it around (before attaching any devices, at least), but I don't
> > think it's feasible for the host kernel to second-guess userspace enough
> > to make it entirely transparent like it is in the DMA API domain case.
> >
> > Of course, that's all assuming the host itself is using a virtio-iommu
> > (e.g. in a nested virt or emulation scenario). When it's purely within a
> > guest then an MSI reservation shouldn't matter so much, since the guest
> > won't be anywhere near the real hardware configuration anyway.
> >
> > Robin.
> 
> Curious since anyway we are defining a new iommu architecture
> is it possible to avoid those ARM-specific burden completely?
> 

OK, after some study around those tricks below is my learning:

- MSI_IOVA window is used only on request (iommu_dma_get
_msi_page), not meant to take effect on all architectures once 
initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
for virtio-iommu driver to implement such capability;

- I thought whether hardware MSI doorbell can be always reported
on virtio-iommu since it's newly defined. Looks there is a problem
if underlying IOMMU is sw-managed MSI style - valid mapping is
expected in all level of translations, meaning guest has to manage
stage-1 mapping in nested configuration since stage-1 is owned
by guest. 

Then virtio-iommu is naturally expected to report the same MSI 
model as supported by underlying hardware. Below are some
further thoughts along this route (use 'IOMMU' to represent the
physical one and 'virtio-iommu' for virtual one):

----

In the scope of current virtio-iommu spec v.6, there is no nested
consideration yet. Guest driver is expected to use MAP/UNMAP
interface on assigned endpoints. In this case the MAP requests
(IOVA->GPA) is caught and maintained within Qemu which then 
further talks to VFIO to map IOVA->HPA in IOMMU.

Qemu can learn the MSI model of IOMMU from sysfs.

For hardware MSI doorbell (x86 and some ARM):
* Host kernel reports to Qemu as IOMMU_RESV_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
* Guest takes the range as IOMMU_RESV_MSI. reserved
* Qemu MAP database has no mapping for the doorbell
* Physical IOMMU page table has no mapping for the doorbell
* MSI from passthrough device bypass IOMMU
* MSI from emulated device bypass virtio-iommu

For software MSI doorbell (most ARM):
* Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
* Guest takes the range as IOMMU_RESV_RESERVED
* vGIC requests to map 'GPA of the virtual doorbell'
* a map request (IOVA->GPA) sent on endpoint
* Qemu maintains the mapping in MAP database
	* but no VFIO_MAP request since it's purely virtual
* GIC requests to map 'HPA of the physical doorbell'
	* e.g. triggered by VFIO enable msi
* IOMMU now includes a valid mapping (IOVA->HPA)
* MSI from emulated device go through Qemu MAP
database (IOVA->'GPA of virtual doorbell') and then hit vGIC
* MSI from passthrough device go through IOMMU
(IOVA->'HPA of physical doorbell') and then hit GIC

In this case, host doorbell is treated as reserved resource in
guest side. Guest has its own sw-management for virtual
doorbell which is only used for emulated device. two paths 
are completely separated.

If above captures the right flow, current v0.6 spec is complete
regarding to required function definition.

----

Then comes nested case, with two level page tables (stage-1
and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
is for GPA->HPA. VFIO map/unmap happens on stage-2, 
while stage-1 is directly managed by guest (and bound to
IOMMU which enables nested translation from IOVA->GPA
->HPA).

For hardware MSI, there is nothing special compared to
previous requirement. Both host/guest treat the doorbell
as reserved and guarantee no mapping in either stage-1 or 
stage-2. 

For software MSI, more consideration is required:

* for emulated device it is just fine as long as guest keeps
IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
to walk stage-1 page table upon MSI request from emulated
device to hit vGIC;

* for passthrough device however there is a problem. We
need valid mapping in both stage-1 and stage-2, while host
kernel is only responsible for stage-2:

	1) if we expect to keep same isolation policy (i.e.
host MSI fully managed by host kernel), then an identity
mapping for host-reported MSI range is expected in stage-1.
In such case we need a new type VIRTIO_IOMMU_RESV_
MEM_T_DIRECT to teach guest setup identity mapping.
it should be the right thing to add since anyway there might
be true IOMMU_RESV_DIRECT range reported from host
which also should be handled.

	2) Alternatively we could instead allow Qemu to
request dynamic change of physical doorbell mapping in 
stage2, e.g. from GPA of virtual doorbell to HPA of physical 
doorbell. But it doesn't like a good design - VFIO doesn't
assign interrupt controller to user space then why should 
VFIO allow user mapping to doorbell...

if 1) is agreed, looks the missing part in spec is just VIRTIO_
IOMMU_RESV_MEM_T_DIRECT, though the whole story 
is lengthy and fully enabling nested require many other
works. :-)

Thanks
Kevin

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] RE: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-03-23  8:27                   ` Tian, Kevin
  0 siblings, 0 replies; 61+ messages in thread
From: Tian, Kevin @ 2018-03-23  8:27 UTC (permalink / raw)
  To: 'Robin Murphy',
	Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev,
	kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, jasowang,
	Will Deacon, jintack, eric.auger.pro

> From: Tian, Kevin
> Sent: Thursday, March 22, 2018 6:06 PM
> 
> > From: Robin Murphy [mailto:robin.murphy@arm.com]
> > Sent: Wednesday, March 21, 2018 10:24 PM
> >
> > On 21/03/18 13:14, Jean-Philippe Brucker wrote:
> > > On 21/03/18 06:43, Tian, Kevin wrote:
> > > [...]
> > >>> +
> > >>> +#include <uapi/linux/virtio_iommu.h>
> > >>> +
> > >>> +#define MSI_IOVA_BASE			0x8000000
> > >>> +#define MSI_IOVA_LENGTH			0x100000
> > >>
> > >> this is ARM specific, and according to virtio-iommu spec isn't it
> > >> better probed on the endpoint instead of hard-coding here?
> > >
> > > These values are arbitrary, not really ARM-specific even if ARM is the
> > > only user yet: we're just reserving a random IOVA region for mapping
> > MSIs.
> > > It is hard-coded because of the way iommu-dma.c works, but I don't
> > quite
> > > remember why that allocation isn't dynamic.
> >
> > The host kernel needs to have *some* MSI region in place before the
> > guest can start configuring interrupts, otherwise it won't know what
> > address to give to the underlying hardware. However, as soon as the host
> > kernel has picked a region, host userspace needs to know that it can no
> > longer use addresses in that region for DMA-able guest memory. It's a
> > lot easier when the address is fixed in hardware and the host userspace
> > will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
> > the
> > more general case where MSI writes undergo IOMMU address translation
> > so
> > it's an arbitrary IOVA, this has the potential to conflict with stuff
> > like guest memory hotplug.
> >
> > What we currently have is just the simplest option, with the host kernel
> > just picking something up-front and pretending to host userspace that
> > it's a fixed hardware address. There's certainly scope for it to be a
> > bit more dynamic in the sense of adding an interface to let userspace
> > move it around (before attaching any devices, at least), but I don't
> > think it's feasible for the host kernel to second-guess userspace enough
> > to make it entirely transparent like it is in the DMA API domain case.
> >
> > Of course, that's all assuming the host itself is using a virtio-iommu
> > (e.g. in a nested virt or emulation scenario). When it's purely within a
> > guest then an MSI reservation shouldn't matter so much, since the guest
> > won't be anywhere near the real hardware configuration anyway.
> >
> > Robin.
> 
> Curious since anyway we are defining a new iommu architecture
> is it possible to avoid those ARM-specific burden completely?
> 

OK, after some study around those tricks below is my learning:

- MSI_IOVA window is used only on request (iommu_dma_get
_msi_page), not meant to take effect on all architectures once 
initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
for virtio-iommu driver to implement such capability;

- I thought whether hardware MSI doorbell can be always reported
on virtio-iommu since it's newly defined. Looks there is a problem
if underlying IOMMU is sw-managed MSI style - valid mapping is
expected in all level of translations, meaning guest has to manage
stage-1 mapping in nested configuration since stage-1 is owned
by guest. 

Then virtio-iommu is naturally expected to report the same MSI 
model as supported by underlying hardware. Below are some
further thoughts along this route (use 'IOMMU' to represent the
physical one and 'virtio-iommu' for virtual one):

----

In the scope of current virtio-iommu spec v.6, there is no nested
consideration yet. Guest driver is expected to use MAP/UNMAP
interface on assigned endpoints. In this case the MAP requests
(IOVA->GPA) is caught and maintained within Qemu which then 
further talks to VFIO to map IOVA->HPA in IOMMU.

Qemu can learn the MSI model of IOMMU from sysfs.

For hardware MSI doorbell (x86 and some ARM):
* Host kernel reports to Qemu as IOMMU_RESV_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
* Guest takes the range as IOMMU_RESV_MSI. reserved
* Qemu MAP database has no mapping for the doorbell
* Physical IOMMU page table has no mapping for the doorbell
* MSI from passthrough device bypass IOMMU
* MSI from emulated device bypass virtio-iommu

For software MSI doorbell (most ARM):
* Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
* Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
* Guest takes the range as IOMMU_RESV_RESERVED
* vGIC requests to map 'GPA of the virtual doorbell'
* a map request (IOVA->GPA) sent on endpoint
* Qemu maintains the mapping in MAP database
	* but no VFIO_MAP request since it's purely virtual
* GIC requests to map 'HPA of the physical doorbell'
	* e.g. triggered by VFIO enable msi
* IOMMU now includes a valid mapping (IOVA->HPA)
* MSI from emulated device go through Qemu MAP
database (IOVA->'GPA of virtual doorbell') and then hit vGIC
* MSI from passthrough device go through IOMMU
(IOVA->'HPA of physical doorbell') and then hit GIC

In this case, host doorbell is treated as reserved resource in
guest side. Guest has its own sw-management for virtual
doorbell which is only used for emulated device. two paths 
are completely separated.

If above captures the right flow, current v0.6 spec is complete
regarding to required function definition.

----

Then comes nested case, with two level page tables (stage-1
and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
is for GPA->HPA. VFIO map/unmap happens on stage-2, 
while stage-1 is directly managed by guest (and bound to
IOMMU which enables nested translation from IOVA->GPA
->HPA).

For hardware MSI, there is nothing special compared to
previous requirement. Both host/guest treat the doorbell
as reserved and guarantee no mapping in either stage-1 or 
stage-2. 

For software MSI, more consideration is required:

* for emulated device it is just fine as long as guest keeps
IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
to walk stage-1 page table upon MSI request from emulated
device to hit vGIC;

* for passthrough device however there is a problem. We
need valid mapping in both stage-1 and stage-2, while host
kernel is only responsible for stage-2:

	1) if we expect to keep same isolation policy (i.e.
host MSI fully managed by host kernel), then an identity
mapping for host-reported MSI range is expected in stage-1.
In such case we need a new type VIRTIO_IOMMU_RESV_
MEM_T_DIRECT to teach guest setup identity mapping.
it should be the right thing to add since anyway there might
be true IOMMU_RESV_DIRECT range reported from host
which also should be handled.

	2) Alternatively we could instead allow Qemu to
request dynamic change of physical doorbell mapping in 
stage2, e.g. from GPA of virtual doorbell to HPA of physical 
doorbell. But it doesn't like a good design - VFIO doesn't
assign interrupt controller to user space then why should 
VFIO allow user mapping to doorbell...

if 1) is agreed, looks the missing part in spec is just VIRTIO_
IOMMU_RESV_MEM_T_DIRECT, though the whole story 
is lengthy and fully enabling nested require many other
works. :-)

Thanks
Kevin


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
       [not found]   ` <20180214145340.1223-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
                       ` (3 preceding siblings ...)
  2018-03-21  6:43       ` [virtio-dev] " Tian, Kevin
@ 2018-03-23 14:48     ` Robin Murphy
  2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
  4 siblings, 1 reply; 61+ messages in thread
From: Robin Murphy @ 2018-03-23 14:48 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

On 14/02/18 14:53, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>   MAINTAINERS                       |   6 +
>   drivers/iommu/Kconfig             |  11 +
>   drivers/iommu/Makefile            |   1 +
>   drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/virtio_ids.h   |   1 +
>   include/uapi/linux/virtio_iommu.h | 116 +++++
>   6 files changed, 1095 insertions(+)
>   create mode 100644 drivers/iommu/virtio-iommu.c
>   create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 3bdc260e36b7..2a181924d420 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -14818,6 +14818,12 @@ S:	Maintained
>   F:	drivers/virtio/virtio_input.c
>   F:	include/uapi/linux/virtio_input.h
>   
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>   VIRTUAL BOX GUEST DEVICE DRIVER
>   M:	Hans de Goede <hdegoede-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>   M:	Arnd Bergmann <arnd-r2nGTMty4D4@public.gmane.org>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index f3a21343e636..1ea0ec74524f 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -381,4 +381,15 @@ config QCOM_IOMMU
>   	help
>   	  Support for IOMMU on certain Qualcomm SoCs.
>   
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>   endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1fb695854809..9c68be1365e1 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>   obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>   obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>   obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> + *
> + * SPDX-License-Identifier: GPL-2.0

This wants to be a // comment at the very top of the file (thankfully 
the policy is now properly documented in-tree since 
Documentation/process/license-rules.rst got merged)

> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	unsigned long			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);
> +	}
> +
> +	return nr_received;
> +}
> +
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * The timeout is chosen arbitrarily. It's only here to prevent locking
> +	 * up the CPU in case of a device bug.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i)
> +			cpu_relax();
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
> + *                         them to return
> + *
> + * @req: array of requests
> + * @nr: array length
> + * @nr_sent: on return, contains the number of requests actually sent
> + *
> + * Return 0 on success, or an error if we failed to send some of the requests.
> + */
> +static int viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				 struct viommu_request *req, int nr,
> +				 int *nr_sent)
> +{
> +	int ret;
> +	int sent = 0;
> +	unsigned long flags;
> +
> +	*nr_sent = 0;
> +	do {
> +		spin_lock_irqsave(&viommu->request_lock, flags);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
> +{
> +	int ret;
> +	int nr_sent;
> +	void *bottom;
> +	size_t top_size, bottom_size;
> +	struct virtio_iommu_req_tail *tail;
> +	struct virtio_iommu_req_head *head = top;
> +	struct viommu_request req = {
> +		.written = 0
> +	};
> +
> +	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
> +	if (ret)
> +		return ret;
> +
> +	bottom = top + top_size;
> +	tail = bottom + bottom_size - sizeof(*tail);
> +
> +	sg_init_one(&req.top, top, top_size);
> +	sg_init_one(&req.bottom, bottom, bottom_size);
> +
> +	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
> +	if (ret || !req.written || nr_sent != 1) {
> +		dev_err(viommu->dev, "failed to send request\n");
> +		return -EIO;
> +	}
> +
> +	return viommu_status_to_errno(tail->status);
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size)
> +{
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
> + *	This allows the caller to reuse the buffer for the unmap request. When
> + *	the returned size is greater than zero, if a mapping is returned, the
> + *	caller must free it.

This "free multiple mappings except maybe hand one of them off to the 
caller" interface is really unintuitive. AFAICS it's only used by 
viommu_unmap() to grab mapping->req, but that doesn't seem to care about 
mapping itself, so I wonder whether it wouldn't make more sense to just 
have a global kmem_cache of struct virtio_iommu_req_unmap for that and 
avoid a lot of complexity...

> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				 unsigned long iova, size_t size,
> +				 struct viommu_mapping **out_mapping)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +
> +	if (next) {
> +		mapping = container_of(next, struct viommu_mapping, iova);
> +		/* Trying to split a mapping? */
> +		if (WARN_ON(mapping->iova.start < iova))
> +			next = NULL;
> +	}
> +
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +
> +		if (out_mapping && !(*out_mapping))
> +			*out_mapping = mapping;
> +		else
> +			kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	unsigned long flags;
> +	int i = 1, ret, nr_sent;
> +	struct viommu_request *reqs;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	size_t top_size, bottom_size;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	if (!node) {
> +		spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +		return 0;
> +	}
> +
> +	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
> +		i++;
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
> +	if (!reqs)
> +		return -ENOMEM;
> +
> +	bottom_size = sizeof(struct virtio_iommu_req_tail);
> +	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
> +
> +	i = 0;
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
> +		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
> +			    bottom_size);
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +		i++;
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
> +	kfree(reqs);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> +	return false;
> +}

The .capable callback is optional, so it's only worth implementing once 
you want it to do something beyond the default behaviour.

> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0, NULL);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach *req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->endpoints--;
> +
> +	/* DMA to the stack is forbidden, store request on the heap */
> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	*req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, req);
> +		if (ret)
> +			break;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!vdomain->endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings if any (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
> +
> +	mapping->req.map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size, NULL);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct viommu_mapping *mapping = NULL;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
> +	if (unmapped < size) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->endpoints)
> +		goto out_free;
> +
> +	if (WARN_ON(!mapping))
> +		return 0;
> +
> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};

...In fact, the kmem_cache idea might be moot since it looks like with a 
bit of restructuring you could get away with just a single per-viommu 
virtio_iommu_req_unmap structure; this lot could be passed around on the 
stack until request_lock is taken, at which point it would be copied 
into the 'real' DMA-able structure. The equivalent might apply to 
viommu_map() too - now that I'm looking at it, it seems like it would 
take pretty minimal effort to encapsulate the whole business cleanly in 
viommu_send_req_sync(), which could do something like this instead of 
going through viommu_send_reqs_sync():

	...
	spin_lock_irqsave(&viommu->request_lock, flags);
	viommu_copy_req(viommu->dma_req, req);
	ret = _viommu_send_reqs_sync(viommu, viommu->dma_req, 1, &sent);
	spin_unlock_irqrestore(&viommu->request_lock, flags);
	...

> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +
> +out_free:
> +	kfree(mapping);
> +
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (!IS_ERR(group))
> +		iommu_group_put(group);

Since you create the sysfs IOMMU device, maybe also create the links for 
the masters?

> +
> +	return PTR_ERR_OR_ZERO(group);
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{

You need to remove dev from its group, too (basically, .remove_device 
should always undo everything .add_device did)

It would also be good practice to verify that dev->iommu_fwspec exists 
and is one of yours before touching anything, although having checked 
the core code I see we do currently just about get away with it thanks 
to the horrible per-bus ops.

> +	kfree(dev->iommu_fwspec->iommu_priv);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);

I'm sure a DT binding somewhere needs to document the appropriate value 
and meaning for #iommu-cells - I guess that probably falls under the 
virtio-mmio binding?

> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.capable		= viommu_capable,
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.of_xlate		= viommu_of_xlate,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +};
> +
> +static int viommu_init_vq(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vq = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +
> +	ret = viommu_init_vq(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio");
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>   #define VIRTIO_ID_INPUT        18 /* virtio input */
>   #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>   #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>   
>   #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..0de9b44db14d
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,116 @@
> +/*
> + * Virtio-iommu definition v0.6
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause

Again, at the top, although in /* */ here since it's a header.

Robin.

> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;
> +} __packed;
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};
> +
> +#endif
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
                     ` (4 preceding siblings ...)
  (?)
@ 2018-03-23 14:48   ` Robin Murphy
  -1 siblings, 0 replies; 61+ messages in thread
From: Robin Murphy @ 2018-03-23 14:48 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, joro,
	eric.auger.pro

On 14/02/18 14:53, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>   MAINTAINERS                       |   6 +
>   drivers/iommu/Kconfig             |  11 +
>   drivers/iommu/Makefile            |   1 +
>   drivers/iommu/virtio-iommu.c      | 960 ++++++++++++++++++++++++++++++++++++++
>   include/uapi/linux/virtio_ids.h   |   1 +
>   include/uapi/linux/virtio_iommu.h | 116 +++++
>   6 files changed, 1095 insertions(+)
>   create mode 100644 drivers/iommu/virtio-iommu.c
>   create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 3bdc260e36b7..2a181924d420 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -14818,6 +14818,12 @@ S:	Maintained
>   F:	drivers/virtio/virtio_input.c
>   F:	include/uapi/linux/virtio_input.h
>   
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>   VIRTUAL BOX GUEST DEVICE DRIVER
>   M:	Hans de Goede <hdegoede@redhat.com>
>   M:	Arnd Bergmann <arnd@arndb.de>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index f3a21343e636..1ea0ec74524f 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -381,4 +381,15 @@ config QCOM_IOMMU
>   	help
>   	  Support for IOMMU on certain Qualcomm SoCs.
>   
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>   endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 1fb695854809..9c68be1365e1 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -29,3 +29,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>   obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>   obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>   obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..a9c9245e8ba2
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,960 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0

This wants to be a // comment at the very top of the file (thankfully 
the policy is now properly documented in-tree since 
Documentation/process/license-rules.rst got merged)

> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	unsigned long			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);
> +	}
> +
> +	return nr_received;
> +}
> +
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * The timeout is chosen arbitrarily. It's only here to prevent locking
> +	 * up the CPU in case of a device bug.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i)
> +			cpu_relax();
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
> + *                         them to return
> + *
> + * @req: array of requests
> + * @nr: array length
> + * @nr_sent: on return, contains the number of requests actually sent
> + *
> + * Return 0 on success, or an error if we failed to send some of the requests.
> + */
> +static int viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				 struct viommu_request *req, int nr,
> +				 int *nr_sent)
> +{
> +	int ret;
> +	int sent = 0;
> +	unsigned long flags;
> +
> +	*nr_sent = 0;
> +	do {
> +		spin_lock_irqsave(&viommu->request_lock, flags);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
> +{
> +	int ret;
> +	int nr_sent;
> +	void *bottom;
> +	size_t top_size, bottom_size;
> +	struct virtio_iommu_req_tail *tail;
> +	struct virtio_iommu_req_head *head = top;
> +	struct viommu_request req = {
> +		.written = 0
> +	};
> +
> +	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
> +	if (ret)
> +		return ret;
> +
> +	bottom = top + top_size;
> +	tail = bottom + bottom_size - sizeof(*tail);
> +
> +	sg_init_one(&req.top, top, top_size);
> +	sg_init_one(&req.bottom, bottom, bottom_size);
> +
> +	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
> +	if (ret || !req.written || nr_sent != 1) {
> +		dev_err(viommu->dev, "failed to send request\n");
> +		return -EIO;
> +	}
> +
> +	return viommu_status_to_errno(tail->status);
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size)
> +{
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
> + *	This allows the caller to reuse the buffer for the unmap request. When
> + *	the returned size is greater than zero, if a mapping is returned, the
> + *	caller must free it.

This "free multiple mappings except maybe hand one of them off to the 
caller" interface is really unintuitive. AFAICS it's only used by 
viommu_unmap() to grab mapping->req, but that doesn't seem to care about 
mapping itself, so I wonder whether it wouldn't make more sense to just 
have a global kmem_cache of struct virtio_iommu_req_unmap for that and 
avoid a lot of complexity...

> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				 unsigned long iova, size_t size,
> +				 struct viommu_mapping **out_mapping)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +
> +	if (next) {
> +		mapping = container_of(next, struct viommu_mapping, iova);
> +		/* Trying to split a mapping? */
> +		if (WARN_ON(mapping->iova.start < iova))
> +			next = NULL;
> +	}
> +
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +
> +		if (out_mapping && !(*out_mapping))
> +			*out_mapping = mapping;
> +		else
> +			kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	unsigned long flags;
> +	int i = 1, ret, nr_sent;
> +	struct viommu_request *reqs;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	size_t top_size, bottom_size;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	if (!node) {
> +		spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +		return 0;
> +	}
> +
> +	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
> +		i++;
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
> +	if (!reqs)
> +		return -ENOMEM;
> +
> +	bottom_size = sizeof(struct virtio_iommu_req_tail);
> +	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
> +
> +	i = 0;
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
> +		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
> +			    bottom_size);
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +		i++;
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
> +	kfree(reqs);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> +	return false;
> +}

The .capable callback is optional, so it's only worth implementing once 
you want it to do something beyond the default behaviour.

> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0, NULL);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach *req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->endpoints--;
> +
> +	/* DMA to the stack is forbidden, store request on the heap */
> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	*req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, req);
> +		if (ret)
> +			break;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!vdomain->endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings if any (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
> +
> +	mapping->req.map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size, NULL);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct viommu_mapping *mapping = NULL;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
> +	if (unmapped < size) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->endpoints)
> +		goto out_free;
> +
> +	if (WARN_ON(!mapping))
> +		return 0;
> +
> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};

...In fact, the kmem_cache idea might be moot since it looks like with a 
bit of restructuring you could get away with just a single per-viommu 
virtio_iommu_req_unmap structure; this lot could be passed around on the 
stack until request_lock is taken, at which point it would be copied 
into the 'real' DMA-able structure. The equivalent might apply to 
viommu_map() too - now that I'm looking at it, it seems like it would 
take pretty minimal effort to encapsulate the whole business cleanly in 
viommu_send_req_sync(), which could do something like this instead of 
going through viommu_send_reqs_sync():

	...
	spin_lock_irqsave(&viommu->request_lock, flags);
	viommu_copy_req(viommu->dma_req, req);
	ret = _viommu_send_reqs_sync(viommu, viommu->dma_req, 1, &sent);
	spin_unlock_irqrestore(&viommu->request_lock, flags);
	...

> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +
> +out_free:
> +	kfree(mapping);
> +
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (!IS_ERR(group))
> +		iommu_group_put(group);

Since you create the sysfs IOMMU device, maybe also create the links for 
the masters?

> +
> +	return PTR_ERR_OR_ZERO(group);
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{

You need to remove dev from its group, too (basically, .remove_device 
should always undo everything .add_device did)

It would also be good practice to verify that dev->iommu_fwspec exists 
and is one of yours before touching anything, although having checked 
the core code I see we do currently just about get away with it thanks 
to the horrible per-bus ops.

> +	kfree(dev->iommu_fwspec->iommu_priv);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);

I'm sure a DT binding somewhere needs to document the appropriate value 
and meaning for #iommu-cells - I guess that probably falls under the 
virtio-mmio binding?

> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.capable		= viommu_capable,
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.of_xlate		= viommu_of_xlate,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +};
> +
> +static int viommu_init_vq(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vq = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +
> +	ret = viommu_init_vq(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio");
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>   #define VIRTIO_ID_INPUT        18 /* virtio input */
>   #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>   #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>   
>   #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..0de9b44db14d
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,116 @@
> +/*
> + * Virtio-iommu definition v0.6
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause

Again, at the top, although in /* */ here since it's a header.

Robin.

> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;
> +} __packed;
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};
> +
> +#endif
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/4] iommu/virtio: Add probe request
       [not found]   ` <20180214145340.1223-3-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
@ 2018-03-23 15:00     ` Robin Murphy
  2018-04-11 18:33       ` Jean-Philippe Brucker
  2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
  0 siblings, 2 replies; 61+ messages in thread
From: Robin Murphy @ 2018-03-23 15:00 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	kvm-u79uwXL29TY76Z2rM5mHXA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtio-dev-sDuHXQ4OtrM4h7I2RyI4rWD2FQJk+8+b,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg
  Cc: jayachandran.nair-YGCgFSpz5w/QT0dZR+AlfA,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	jintack-eQaUEPhvms7ENvBUuze7eA,
	eric.auger.pro-Re5JQEeQqe8AvxtiuMwx3w

On 14/02/18 14:53, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>   drivers/iommu/virtio-iommu.c      | 163 ++++++++++++++++++++++++++++++++++++--
>   include/uapi/linux/virtio_iommu.h |  37 +++++++++
>   2 files changed, 193 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index a9c9245e8ba2..3ac4b38eaf19 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>   	struct iommu_domain_geometry	geometry;
>   	u64				pgsize_bitmap;
>   	u8				domain_bits;
> +	u32				probe_size;
>   };
>   
>   struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>   struct viommu_endpoint {
>   	struct viommu_dev		*viommu;
>   	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>   };
>   
>   struct viommu_request {
> @@ -140,6 +142,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>   	case VIRTIO_IOMMU_T_UNMAP:
>   		size = sizeof(r->unmap);
>   		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>   	default:
>   		return -EINVAL;
>   	}
> @@ -448,6 +454,105 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>   	return ret;
>   }
>   
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	/*
> +	 * Treat unknown subtype as RESERVED, but urge users to update their
> +	 * driver.
> +	 */
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);

Might as well avoid the extra comparisons by incorporating this into the 
switch statement, i.e.:

	default:
		dev_warn(vdev->viommu_dev->dev, ...);
		/* Fallthrough */
	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
		...

(dev_warn is generally preferable to pr_warn when feasible)

> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>   /* IOMMU API */
>   
>   static bool viommu_capable(enum iommu_cap cap)
> @@ -703,6 +808,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>   
>   static int viommu_add_device(struct device *dev)
>   {
> +	int ret;
>   	struct iommu_group *group;
>   	struct viommu_endpoint *vdev;
>   	struct viommu_dev *viommu = NULL;
> @@ -720,8 +826,16 @@ static int viommu_add_device(struct device *dev)
>   		return -ENOMEM;
>   
>   	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>   	fwspec->iommu_priv = vdev;
>   
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>   	/*
>   	 * Last step creates a default domain and attaches to it. Everything
>   	 * must be ready.
> @@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
>   
>   static void viommu_remove_device(struct device *dev)
>   {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;

Oh good :) I guess that was just a patch-splitting issue. The group 
thing still applies, though.

Robin.

> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>   }
>   
>   static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -753,15 +879,33 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>   
>   static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>   {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>   	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>   
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>   
> -	list_add_tail(&region->list, head);
>   	iommu_dma_get_resv_regions(dev, head);
>   }
>   
> @@ -852,6 +996,10 @@ static int viommu_probe(struct virtio_device *vdev)
>   			     struct virtio_iommu_config, domain_bits,
>   			     &viommu->domain_bits);
>   
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>   	viommu->geometry = (struct iommu_domain_geometry) {
>   		.aperture_start	= input_start,
>   		.aperture_end	= input_end,
> @@ -933,6 +1081,7 @@ static unsigned int features[] = {
>   	VIRTIO_IOMMU_F_MAP_UNMAP,
>   	VIRTIO_IOMMU_F_DOMAIN_BITS,
>   	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>   };
>   
>   static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 0de9b44db14d..2335d9ed4676 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -15,6 +15,7 @@
>   #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>   #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>   #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>   
>   struct virtio_iommu_config {
>   	/* Supported page sizes */
> @@ -26,6 +27,9 @@ struct virtio_iommu_config {
>   	} input_range;
>   	/* Max domain ID size */
>   	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>   } __packed;
>   
>   /* Request types */
> @@ -33,6 +37,7 @@ struct virtio_iommu_config {
>   #define VIRTIO_IOMMU_T_DETACH			0x02
>   #define VIRTIO_IOMMU_T_MAP			0x03
>   #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>   
>   /* Status types */
>   #define VIRTIO_IOMMU_S_OK			0x00
> @@ -104,6 +109,37 @@ struct virtio_iommu_req_unmap {
>   	struct virtio_iommu_req_tail		tail;
>   } __packed;
>   
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>   union virtio_iommu_req {
>   	struct virtio_iommu_req_head		head;
>   
> @@ -111,6 +147,7 @@ union virtio_iommu_req {
>   	struct virtio_iommu_req_detach		detach;
>   	struct virtio_iommu_req_map		map;
>   	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>   };
>   
>   #endif
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/4] iommu/virtio: Add probe request
  2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-03-23 15:00   ` Robin Murphy
  -1 siblings, 0 replies; 61+ messages in thread
From: Robin Murphy @ 2018-03-23 15:00 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, will.deacon, jintack, eric.auger, joro,
	eric.auger.pro

On 14/02/18 14:53, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>   drivers/iommu/virtio-iommu.c      | 163 ++++++++++++++++++++++++++++++++++++--
>   include/uapi/linux/virtio_iommu.h |  37 +++++++++
>   2 files changed, 193 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index a9c9245e8ba2..3ac4b38eaf19 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>   	struct iommu_domain_geometry	geometry;
>   	u64				pgsize_bitmap;
>   	u8				domain_bits;
> +	u32				probe_size;
>   };
>   
>   struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>   struct viommu_endpoint {
>   	struct viommu_dev		*viommu;
>   	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>   };
>   
>   struct viommu_request {
> @@ -140,6 +142,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>   	case VIRTIO_IOMMU_T_UNMAP:
>   		size = sizeof(r->unmap);
>   		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>   	default:
>   		return -EINVAL;
>   	}
> @@ -448,6 +454,105 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>   	return ret;
>   }
>   
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	/*
> +	 * Treat unknown subtype as RESERVED, but urge users to update their
> +	 * driver.
> +	 */
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);

Might as well avoid the extra comparisons by incorporating this into the 
switch statement, i.e.:

	default:
		dev_warn(vdev->viommu_dev->dev, ...);
		/* Fallthrough */
	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
		...

(dev_warn is generally preferable to pr_warn when feasible)

> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>   /* IOMMU API */
>   
>   static bool viommu_capable(enum iommu_cap cap)
> @@ -703,6 +808,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>   
>   static int viommu_add_device(struct device *dev)
>   {
> +	int ret;
>   	struct iommu_group *group;
>   	struct viommu_endpoint *vdev;
>   	struct viommu_dev *viommu = NULL;
> @@ -720,8 +826,16 @@ static int viommu_add_device(struct device *dev)
>   		return -ENOMEM;
>   
>   	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>   	fwspec->iommu_priv = vdev;
>   
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>   	/*
>   	 * Last step creates a default domain and attaches to it. Everything
>   	 * must be ready.
> @@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
>   
>   static void viommu_remove_device(struct device *dev)
>   {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;

Oh good :) I guess that was just a patch-splitting issue. The group 
thing still applies, though.

Robin.

> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>   }
>   
>   static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -753,15 +879,33 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>   
>   static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>   {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>   	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>   
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>   
> -	list_add_tail(&region->list, head);
>   	iommu_dma_get_resv_regions(dev, head);
>   }
>   
> @@ -852,6 +996,10 @@ static int viommu_probe(struct virtio_device *vdev)
>   			     struct virtio_iommu_config, domain_bits,
>   			     &viommu->domain_bits);
>   
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>   	viommu->geometry = (struct iommu_domain_geometry) {
>   		.aperture_start	= input_start,
>   		.aperture_end	= input_end,
> @@ -933,6 +1081,7 @@ static unsigned int features[] = {
>   	VIRTIO_IOMMU_F_MAP_UNMAP,
>   	VIRTIO_IOMMU_F_DOMAIN_BITS,
>   	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>   };
>   
>   static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 0de9b44db14d..2335d9ed4676 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -15,6 +15,7 @@
>   #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>   #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>   #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>   
>   struct virtio_iommu_config {
>   	/* Supported page sizes */
> @@ -26,6 +27,9 @@ struct virtio_iommu_config {
>   	} input_range;
>   	/* Max domain ID size */
>   	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>   } __packed;
>   
>   /* Request types */
> @@ -33,6 +37,7 @@ struct virtio_iommu_config {
>   #define VIRTIO_IOMMU_T_DETACH			0x02
>   #define VIRTIO_IOMMU_T_MAP			0x03
>   #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>   
>   /* Status types */
>   #define VIRTIO_IOMMU_S_OK			0x00
> @@ -104,6 +109,37 @@ struct virtio_iommu_req_unmap {
>   	struct virtio_iommu_req_tail		tail;
>   } __packed;
>   
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>   union virtio_iommu_req {
>   	struct virtio_iommu_req_head		head;
>   
> @@ -111,6 +147,7 @@ union virtio_iommu_req {
>   	struct virtio_iommu_req_detach		detach;
>   	struct virtio_iommu_req_map		map;
>   	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>   };
>   
>   #endif
> 

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-23 14:48     ` Robin Murphy
@ 2018-04-11 18:33         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:33 UTC (permalink / raw)
  To: Robin Murphy, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, Lorenzo Pieralisi, Tomasz.Nowicki, tnowicki,
	mst, Marc Zyngier, Will Deacon, jintack, eric.auger, joro,
	eric.auger.pro

On 23/03/18 14:48, Robin Murphy wrote:
[..]
>> + * Virtio driver for the paravirtualized IOMMU
>> + *
>> + * Copyright (C) 2018 ARM Limited
>> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> + *
>> + * SPDX-License-Identifier: GPL-2.0
> 
> This wants to be a // comment at the very top of the file (thankfully 
> the policy is now properly documented in-tree since 
> Documentation/process/license-rules.rst got merged)

Ok

[...]
>> +
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
>> + *	This allows the caller to reuse the buffer for the unmap request. When
>> + *	the returned size is greater than zero, if a mapping is returned, the
>> + *	caller must free it.
> 
> This "free multiple mappings except maybe hand one of them off to the 
> caller" interface is really unintuitive. AFAICS it's only used by 
> viommu_unmap() to grab mapping->req, but that doesn't seem to care about 
> mapping itself, so I wonder whether it wouldn't make more sense to just 
> have a global kmem_cache of struct virtio_iommu_req_unmap for that and 
> avoid a lot of complexity...

Well it's a small complication for what I hoped would be a meanignful
performance difference, but more below.

>> +
>> +/* IOMMU API */
>> +
>> +static bool viommu_capable(enum iommu_cap cap)
>> +{
>> +	return false;
>> +}
> 
> The .capable callback is optional, so it's only worth implementing once 
> you want it to do something beyond the default behaviour.
> 

Ah, right

[...]
>> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
>> +			   size_t size)
>> +{
>> +	int ret = 0;
>> +	size_t unmapped;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
>> +	if (unmapped < size) {
>> +		ret = -EINVAL;
>> +		goto out_free;
>> +	}
>> +
>> +	/* Device already removed all mappings after detach. */
>> +	if (!vdomain->endpoints)
>> +		goto out_free;
>> +
>> +	if (WARN_ON(!mapping))
>> +		return 0;
>> +
>> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
>> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
>> +	};
> 
> ...In fact, the kmem_cache idea might be moot since it looks like with a 
> bit of restructuring you could get away with just a single per-viommu 
> virtio_iommu_req_unmap structure; this lot could be passed around on the 
> stack until request_lock is taken, at which point it would be copied 
> into the 'real' DMA-able structure. The equivalent might apply to 
> viommu_map() too - now that I'm looking at it, it seems like it would 
> take pretty minimal effort to encapsulate the whole business cleanly in 
> viommu_send_req_sync(), which could do something like this instead of 
> going through viommu_send_reqs_sync():
> 
> 	...
> 	spin_lock_irqsave(&viommu->request_lock, flags);
> 	viommu_copy_req(viommu->dma_req, req);
> 	ret = _viommu_send_reqs_sync(viommu, viommu->dma_req, 1, &sent);
> 	spin_unlock_irqrestore(&viommu->request_lock, flags);
> 	...

I'll have to come back to that sorry, still conducting some experiments
with map/unmap.

I'd rather avoid introducing a single dma_req per viommu, because I'd
like to move to the iotlb_range_add/iotlb_sync interface as soon as
possible, and the request logic changes a lot when multiple threads are
susceptible to interleave map/unmap requests.

I ran some tests, and adding a kmem_cache (or simply using kmemdup, it
doesn't make a noticeable difference at our scale) reduces netperf
stream/maerts throughput by 1.1%/1.4% (+/- 0.5% over 30 tests). That's
for a virtio-net device (1 tx/rx vq), and with a vfio device the
difference isn't measurable. At this point I'm not fussy about such
small difference, so don't mind simplifying viommu_del_mapping.

[...]
>> +	/*
>> +	 * Last step creates a default domain and attaches to it. Everything
>> +	 * must be ready.
>> +	 */
>> +	group = iommu_group_get_for_dev(dev);
>> +	if (!IS_ERR(group))
>> +		iommu_group_put(group);
> 
> Since you create the sysfs IOMMU device, maybe also create the links for 
> the masters?

Ok

>> +
>> +	return PTR_ERR_OR_ZERO(group);
>> +}
>> +
>> +static void viommu_remove_device(struct device *dev)
>> +{
> 
> You need to remove dev from its group, too (basically, .remove_device 
> should always undo everything .add_device did)
> 
> It would also be good practice to verify that dev->iommu_fwspec exists 
> and is one of yours before touching anything, although having checked 
> the core code I see we do currently just about get away with it thanks 
> to the horrible per-bus ops.

Ok

> 
>> +	kfree(dev->iommu_fwspec->iommu_priv);
>> +}
>> +
>> +static struct iommu_group *viommu_device_group(struct device *dev)
>> +{
>> +	if (dev_is_pci(dev))
>> +		return pci_device_group(dev);
>> +	else
>> +		return generic_device_group(dev);
>> +}
>> +
>> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +{
>> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> 
> I'm sure a DT binding somewhere needs to document the appropriate value 
> and meaning for #iommu-cells - I guess that probably falls under the 
> virtio-mmio binding?

Yes I guess mmio.txt would be the best place for this.

[...]
>> +/*
>> + * Virtio-iommu definition v0.6
>> + *
>> + * Copyright (C) 2018 ARM Ltd.
>> + *
>> + * SPDX-License-Identifier: BSD-3-Clause
> 
> Again, at the top, although in /* */ here since it's a header.

Right

Thanks for the review,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-04-11 18:33         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:33 UTC (permalink / raw)
  To: Robin Murphy, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Will Deacon,
	Lorenzo Pieralisi, eric.auger, eric.auger.pro, peterx,
	bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian, jintack,
	Tomasz.Nowicki

On 23/03/18 14:48, Robin Murphy wrote:
[..]
>> + * Virtio driver for the paravirtualized IOMMU
>> + *
>> + * Copyright (C) 2018 ARM Limited
>> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> + *
>> + * SPDX-License-Identifier: GPL-2.0
> 
> This wants to be a // comment at the very top of the file (thankfully 
> the policy is now properly documented in-tree since 
> Documentation/process/license-rules.rst got merged)

Ok

[...]
>> +
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
>> + *	This allows the caller to reuse the buffer for the unmap request. When
>> + *	the returned size is greater than zero, if a mapping is returned, the
>> + *	caller must free it.
> 
> This "free multiple mappings except maybe hand one of them off to the 
> caller" interface is really unintuitive. AFAICS it's only used by 
> viommu_unmap() to grab mapping->req, but that doesn't seem to care about 
> mapping itself, so I wonder whether it wouldn't make more sense to just 
> have a global kmem_cache of struct virtio_iommu_req_unmap for that and 
> avoid a lot of complexity...

Well it's a small complication for what I hoped would be a meanignful
performance difference, but more below.

>> +
>> +/* IOMMU API */
>> +
>> +static bool viommu_capable(enum iommu_cap cap)
>> +{
>> +	return false;
>> +}
> 
> The .capable callback is optional, so it's only worth implementing once 
> you want it to do something beyond the default behaviour.
> 

Ah, right

[...]
>> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
>> +			   size_t size)
>> +{
>> +	int ret = 0;
>> +	size_t unmapped;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
>> +	if (unmapped < size) {
>> +		ret = -EINVAL;
>> +		goto out_free;
>> +	}
>> +
>> +	/* Device already removed all mappings after detach. */
>> +	if (!vdomain->endpoints)
>> +		goto out_free;
>> +
>> +	if (WARN_ON(!mapping))
>> +		return 0;
>> +
>> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
>> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
>> +	};
> 
> ...In fact, the kmem_cache idea might be moot since it looks like with a 
> bit of restructuring you could get away with just a single per-viommu 
> virtio_iommu_req_unmap structure; this lot could be passed around on the 
> stack until request_lock is taken, at which point it would be copied 
> into the 'real' DMA-able structure. The equivalent might apply to 
> viommu_map() too - now that I'm looking at it, it seems like it would 
> take pretty minimal effort to encapsulate the whole business cleanly in 
> viommu_send_req_sync(), which could do something like this instead of 
> going through viommu_send_reqs_sync():
> 
> 	...
> 	spin_lock_irqsave(&viommu->request_lock, flags);
> 	viommu_copy_req(viommu->dma_req, req);
> 	ret = _viommu_send_reqs_sync(viommu, viommu->dma_req, 1, &sent);
> 	spin_unlock_irqrestore(&viommu->request_lock, flags);
> 	...

I'll have to come back to that sorry, still conducting some experiments
with map/unmap.

I'd rather avoid introducing a single dma_req per viommu, because I'd
like to move to the iotlb_range_add/iotlb_sync interface as soon as
possible, and the request logic changes a lot when multiple threads are
susceptible to interleave map/unmap requests.

I ran some tests, and adding a kmem_cache (or simply using kmemdup, it
doesn't make a noticeable difference at our scale) reduces netperf
stream/maerts throughput by 1.1%/1.4% (+/- 0.5% over 30 tests). That's
for a virtio-net device (1 tx/rx vq), and with a vfio device the
difference isn't measurable. At this point I'm not fussy about such
small difference, so don't mind simplifying viommu_del_mapping.

[...]
>> +	/*
>> +	 * Last step creates a default domain and attaches to it. Everything
>> +	 * must be ready.
>> +	 */
>> +	group = iommu_group_get_for_dev(dev);
>> +	if (!IS_ERR(group))
>> +		iommu_group_put(group);
> 
> Since you create the sysfs IOMMU device, maybe also create the links for 
> the masters?

Ok

>> +
>> +	return PTR_ERR_OR_ZERO(group);
>> +}
>> +
>> +static void viommu_remove_device(struct device *dev)
>> +{
> 
> You need to remove dev from its group, too (basically, .remove_device 
> should always undo everything .add_device did)
> 
> It would also be good practice to verify that dev->iommu_fwspec exists 
> and is one of yours before touching anything, although having checked 
> the core code I see we do currently just about get away with it thanks 
> to the horrible per-bus ops.

Ok

> 
>> +	kfree(dev->iommu_fwspec->iommu_priv);
>> +}
>> +
>> +static struct iommu_group *viommu_device_group(struct device *dev)
>> +{
>> +	if (dev_is_pci(dev))
>> +		return pci_device_group(dev);
>> +	else
>> +		return generic_device_group(dev);
>> +}
>> +
>> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +{
>> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> 
> I'm sure a DT binding somewhere needs to document the appropriate value 
> and meaning for #iommu-cells - I guess that probably falls under the 
> virtio-mmio binding?

Yes I guess mmio.txt would be the best place for this.

[...]
>> +/*
>> + * Virtio-iommu definition v0.6
>> + *
>> + * Copyright (C) 2018 ARM Ltd.
>> + *
>> + * SPDX-License-Identifier: BSD-3-Clause
> 
> Again, at the top, although in /* */ here since it's a header.

Right

Thanks for the review,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/4] iommu/virtio: Add probe request
  2018-03-23 15:00     ` Robin Murphy
@ 2018-04-11 18:33         ` Jean-Philippe Brucker
  2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:33 UTC (permalink / raw)
  To: Robin Murphy, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Will Deacon,
	Lorenzo Pieralisi, eric.auger, eric.auger.pro, peterx,
	bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian, jintack

On 23/03/18 15:00, Robin Murphy wrote:
[...]
>> +	/*
>> +	 * Treat unknown subtype as RESERVED, but urge users to update their
>> +	 * driver.
>> +	 */
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> 
> Might as well avoid the extra comparisons by incorporating this into the 
> switch statement, i.e.:
> 
> 	default:
> 		dev_warn(vdev->viommu_dev->dev, ...);
> 		/* Fallthrough */
> 	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> 		...
> 
> (dev_warn is generally preferable to pr_warn when feasible)

Alright, that's nicer

[...]
>>   	/*
>>   	 * Last step creates a default domain and attaches to it. Everything
>>   	 * must be ready.
>> @@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
>>   
>>   static void viommu_remove_device(struct device *dev)
>>   {
>> -	kfree(dev->iommu_fwspec->iommu_priv);
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_resv_region *entry, *next;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
> 
> Oh good :) I guess that was just a patch-splitting issue. The group 
> thing still applies, though.

Ok

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 2/4] iommu/virtio: Add probe request
  2018-03-23 15:00     ` Robin Murphy
@ 2018-04-11 18:33       ` Jean-Philippe Brucker
  2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
  1 sibling, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:33 UTC (permalink / raw)
  To: Robin Murphy, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: jayachandran.nair, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, Will Deacon, jintack, eric.auger, joro,
	eric.auger.pro

On 23/03/18 15:00, Robin Murphy wrote:
[...]
>> +	/*
>> +	 * Treat unknown subtype as RESERVED, but urge users to update their
>> +	 * driver.
>> +	 */
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> 
> Might as well avoid the extra comparisons by incorporating this into the 
> switch statement, i.e.:
> 
> 	default:
> 		dev_warn(vdev->viommu_dev->dev, ...);
> 		/* Fallthrough */
> 	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> 		...
> 
> (dev_warn is generally preferable to pr_warn when feasible)

Alright, that's nicer

[...]
>>   	/*
>>   	 * Last step creates a default domain and attaches to it. Everything
>>   	 * must be ready.
>> @@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
>>   
>>   static void viommu_remove_device(struct device *dev)
>>   {
>> -	kfree(dev->iommu_fwspec->iommu_priv);
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_resv_region *entry, *next;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
> 
> Oh good :) I guess that was just a patch-splitting issue. The group 
> thing still applies, though.

Ok

Thanks,
Jean

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 2/4] iommu/virtio: Add probe request
@ 2018-04-11 18:33         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:33 UTC (permalink / raw)
  To: Robin Murphy, iommu, kvm, virtualization, virtio-dev, kvmarm
  Cc: joro, alex.williamson, mst, jasowang, Marc Zyngier, Will Deacon,
	Lorenzo Pieralisi, eric.auger, eric.auger.pro, peterx,
	bharat.bhushan, tnowicki, jayachandran.nair, kevin.tian, jintack

On 23/03/18 15:00, Robin Murphy wrote:
[...]
>> +	/*
>> +	 * Treat unknown subtype as RESERVED, but urge users to update their
>> +	 * driver.
>> +	 */
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI)
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> 
> Might as well avoid the extra comparisons by incorporating this into the 
> switch statement, i.e.:
> 
> 	default:
> 		dev_warn(vdev->viommu_dev->dev, ...);
> 		/* Fallthrough */
> 	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> 		...
> 
> (dev_warn is generally preferable to pr_warn when feasible)

Alright, that's nicer

[...]
>>   	/*
>>   	 * Last step creates a default domain and attaches to it. Everything
>>   	 * must be ready.
>> @@ -735,7 +849,19 @@ static int viommu_add_device(struct device *dev)
>>   
>>   static void viommu_remove_device(struct device *dev)
>>   {
>> -	kfree(dev->iommu_fwspec->iommu_priv);
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_resv_region *entry, *next;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
> 
> Oh good :) I guess that was just a patch-splitting issue. The group 
> thing still applies, though.

Ok

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-23  8:27                   ` [virtio-dev] " Tian, Kevin
@ 2018-04-11 18:35                     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:35 UTC (permalink / raw)
  To: Tian, Kevin, Robin Murphy, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, jasowang,
	Will Deacon, jintack, eric.auger.pro

On 23/03/18 08:27, Tian, Kevin wrote:
>>> The host kernel needs to have *some* MSI region in place before the
>>> guest can start configuring interrupts, otherwise it won't know what
>>> address to give to the underlying hardware. However, as soon as the host
>>> kernel has picked a region, host userspace needs to know that it can no
>>> longer use addresses in that region for DMA-able guest memory. It's a
>>> lot easier when the address is fixed in hardware and the host userspace
>>> will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
>>> the
>>> more general case where MSI writes undergo IOMMU address translation
>>> so
>>> it's an arbitrary IOVA, this has the potential to conflict with stuff
>>> like guest memory hotplug.
>>>
>>> What we currently have is just the simplest option, with the host kernel
>>> just picking something up-front and pretending to host userspace that
>>> it's a fixed hardware address. There's certainly scope for it to be a
>>> bit more dynamic in the sense of adding an interface to let userspace
>>> move it around (before attaching any devices, at least), but I don't
>>> think it's feasible for the host kernel to second-guess userspace enough
>>> to make it entirely transparent like it is in the DMA API domain case.
>>>
>>> Of course, that's all assuming the host itself is using a virtio-iommu
>>> (e.g. in a nested virt or emulation scenario). When it's purely within a
>>> guest then an MSI reservation shouldn't matter so much, since the guest
>>> won't be anywhere near the real hardware configuration anyway.
>>>
>>> Robin.
>>
>> Curious since anyway we are defining a new iommu architecture
>> is it possible to avoid those ARM-specific burden completely?
>>
> 
> OK, after some study around those tricks below is my learning:
> 
> - MSI_IOVA window is used only on request (iommu_dma_get
> _msi_page), not meant to take effect on all architectures once 
> initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
> for virtio-iommu driver to implement such capability;
> 
> - I thought whether hardware MSI doorbell can be always reported
> on virtio-iommu since it's newly defined. Looks there is a problem
> if underlying IOMMU is sw-managed MSI style - valid mapping is
> expected in all level of translations, meaning guest has to manage
> stage-1 mapping in nested configuration since stage-1 is owned
> by guest. 
> 
> Then virtio-iommu is naturally expected to report the same MSI 
> model as supported by underlying hardware. Below are some
> further thoughts along this route (use 'IOMMU' to represent the
> physical one and 'virtio-iommu' for virtual one):
> 
> ----
> 
> In the scope of current virtio-iommu spec v.6, there is no nested
> consideration yet. Guest driver is expected to use MAP/UNMAP
> interface on assigned endpoints. In this case the MAP requests
> (IOVA->GPA) is caught and maintained within Qemu which then 
> further talks to VFIO to map IOVA->HPA in IOMMU.
> 
> Qemu can learn the MSI model of IOMMU from sysfs.
> 
> For hardware MSI doorbell (x86 and some ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
> * Guest takes the range as IOMMU_RESV_MSI. reserved
> * Qemu MAP database has no mapping for the doorbell
> * Physical IOMMU page table has no mapping for the doorbell
> * MSI from passthrough device bypass IOMMU
> * MSI from emulated device bypass virtio-iommu
> 
> For software MSI doorbell (most ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
> * Guest takes the range as IOMMU_RESV_RESERVED
> * vGIC requests to map 'GPA of the virtual doorbell'
> * a map request (IOVA->GPA) sent on endpoint
> * Qemu maintains the mapping in MAP database
> 	* but no VFIO_MAP request since it's purely virtual
> * GIC requests to map 'HPA of the physical doorbell'
> 	* e.g. triggered by VFIO enable msi
> * IOMMU now includes a valid mapping (IOVA->HPA)
> * MSI from emulated device go through Qemu MAP
> database (IOVA->'GPA of virtual doorbell') and then hit vGIC
> * MSI from passthrough device go through IOMMU
> (IOVA->'HPA of physical doorbell') and then hit GIC
> 
> In this case, host doorbell is treated as reserved resource in
> guest side. Guest has its own sw-management for virtual
> doorbell which is only used for emulated device. two paths 
> are completely separated.
> 
> If above captures the right flow, current v0.6 spec is complete
> regarding to required function definition.

Yes I think this summarizes well the current state or SW/HW MSI

> Then comes nested case, with two level page tables (stage-1
> and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
> is for GPA->HPA. VFIO map/unmap happens on stage-2, 
> while stage-1 is directly managed by guest (and bound to
> IOMMU which enables nested translation from IOVA->GPA
> ->HPA).
> 
> For hardware MSI, there is nothing special compared to
> previous requirement. Both host/guest treat the doorbell
> as reserved and guarantee no mapping in either stage-1 or 
> stage-2. 
> 
> For software MSI, more consideration is required:
> 
> * for emulated device it is just fine as long as guest keeps
> IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
> to walk stage-1 page table upon MSI request from emulated
> device to hit vGIC;
> 
> * for passthrough device however there is a problem. We
> need valid mapping in both stage-1 and stage-2, while host
> kernel is only responsible for stage-2:
> 
> 	1) if we expect to keep same isolation policy (i.e.
> host MSI fully managed by host kernel), then an identity
> mapping for host-reported MSI range is expected in stage-1.
> In such case we need a new type VIRTIO_IOMMU_RESV_
> MEM_T_DIRECT to teach guest setup identity mapping.
> it should be the right thing to add since anyway there might
> be true IOMMU_RESV_DIRECT range reported from host
> which also should be handled.
> 
> 	2) Alternatively we could instead allow Qemu to
> request dynamic change of physical doorbell mapping in 
> stage2, e.g. from GPA of virtual doorbell to HPA of physical 
> doorbell. But it doesn't like a good design - VFIO doesn't
> assign interrupt controller to user space then why should 
> VFIO allow user mapping to doorbell...
> 
> if 1) is agreed, looks the missing part in spec is just VIRTIO_
> IOMMU_RESV_MEM_T_DIRECT, though the whole story 
> is lengthy and fully enabling nested require many other
> works. :-)

This is a great write-up, thanks. As said on the v0.6 thread [1], I also
prefer 1), because it doesn't require any additional interface in the
host kernel, and it doesn't force host userspace to guess which doorbell
address the guest is writing into the MSI-X table.

Thanks,
Jean

[1] https://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg30104.html

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 1/4] iommu: Add virtio-iommu driver
  2018-03-23  8:27                   ` [virtio-dev] " Tian, Kevin
  (?)
  (?)
@ 2018-04-11 18:35                   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:35 UTC (permalink / raw)
  To: Tian, Kevin, Robin Murphy, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, Will Deacon,
	jintack, eric.auger.pro

On 23/03/18 08:27, Tian, Kevin wrote:
>>> The host kernel needs to have *some* MSI region in place before the
>>> guest can start configuring interrupts, otherwise it won't know what
>>> address to give to the underlying hardware. However, as soon as the host
>>> kernel has picked a region, host userspace needs to know that it can no
>>> longer use addresses in that region for DMA-able guest memory. It's a
>>> lot easier when the address is fixed in hardware and the host userspace
>>> will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
>>> the
>>> more general case where MSI writes undergo IOMMU address translation
>>> so
>>> it's an arbitrary IOVA, this has the potential to conflict with stuff
>>> like guest memory hotplug.
>>>
>>> What we currently have is just the simplest option, with the host kernel
>>> just picking something up-front and pretending to host userspace that
>>> it's a fixed hardware address. There's certainly scope for it to be a
>>> bit more dynamic in the sense of adding an interface to let userspace
>>> move it around (before attaching any devices, at least), but I don't
>>> think it's feasible for the host kernel to second-guess userspace enough
>>> to make it entirely transparent like it is in the DMA API domain case.
>>>
>>> Of course, that's all assuming the host itself is using a virtio-iommu
>>> (e.g. in a nested virt or emulation scenario). When it's purely within a
>>> guest then an MSI reservation shouldn't matter so much, since the guest
>>> won't be anywhere near the real hardware configuration anyway.
>>>
>>> Robin.
>>
>> Curious since anyway we are defining a new iommu architecture
>> is it possible to avoid those ARM-specific burden completely?
>>
> 
> OK, after some study around those tricks below is my learning:
> 
> - MSI_IOVA window is used only on request (iommu_dma_get
> _msi_page), not meant to take effect on all architectures once 
> initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
> for virtio-iommu driver to implement such capability;
> 
> - I thought whether hardware MSI doorbell can be always reported
> on virtio-iommu since it's newly defined. Looks there is a problem
> if underlying IOMMU is sw-managed MSI style - valid mapping is
> expected in all level of translations, meaning guest has to manage
> stage-1 mapping in nested configuration since stage-1 is owned
> by guest. 
> 
> Then virtio-iommu is naturally expected to report the same MSI 
> model as supported by underlying hardware. Below are some
> further thoughts along this route (use 'IOMMU' to represent the
> physical one and 'virtio-iommu' for virtual one):
> 
> ----
> 
> In the scope of current virtio-iommu spec v.6, there is no nested
> consideration yet. Guest driver is expected to use MAP/UNMAP
> interface on assigned endpoints. In this case the MAP requests
> (IOVA->GPA) is caught and maintained within Qemu which then 
> further talks to VFIO to map IOVA->HPA in IOMMU.
> 
> Qemu can learn the MSI model of IOMMU from sysfs.
> 
> For hardware MSI doorbell (x86 and some ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
> * Guest takes the range as IOMMU_RESV_MSI. reserved
> * Qemu MAP database has no mapping for the doorbell
> * Physical IOMMU page table has no mapping for the doorbell
> * MSI from passthrough device bypass IOMMU
> * MSI from emulated device bypass virtio-iommu
> 
> For software MSI doorbell (most ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
> * Guest takes the range as IOMMU_RESV_RESERVED
> * vGIC requests to map 'GPA of the virtual doorbell'
> * a map request (IOVA->GPA) sent on endpoint
> * Qemu maintains the mapping in MAP database
> 	* but no VFIO_MAP request since it's purely virtual
> * GIC requests to map 'HPA of the physical doorbell'
> 	* e.g. triggered by VFIO enable msi
> * IOMMU now includes a valid mapping (IOVA->HPA)
> * MSI from emulated device go through Qemu MAP
> database (IOVA->'GPA of virtual doorbell') and then hit vGIC
> * MSI from passthrough device go through IOMMU
> (IOVA->'HPA of physical doorbell') and then hit GIC
> 
> In this case, host doorbell is treated as reserved resource in
> guest side. Guest has its own sw-management for virtual
> doorbell which is only used for emulated device. two paths 
> are completely separated.
> 
> If above captures the right flow, current v0.6 spec is complete
> regarding to required function definition.

Yes I think this summarizes well the current state or SW/HW MSI

> Then comes nested case, with two level page tables (stage-1
> and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
> is for GPA->HPA. VFIO map/unmap happens on stage-2, 
> while stage-1 is directly managed by guest (and bound to
> IOMMU which enables nested translation from IOVA->GPA
> ->HPA).
> 
> For hardware MSI, there is nothing special compared to
> previous requirement. Both host/guest treat the doorbell
> as reserved and guarantee no mapping in either stage-1 or 
> stage-2. 
> 
> For software MSI, more consideration is required:
> 
> * for emulated device it is just fine as long as guest keeps
> IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
> to walk stage-1 page table upon MSI request from emulated
> device to hit vGIC;
> 
> * for passthrough device however there is a problem. We
> need valid mapping in both stage-1 and stage-2, while host
> kernel is only responsible for stage-2:
> 
> 	1) if we expect to keep same isolation policy (i.e.
> host MSI fully managed by host kernel), then an identity
> mapping for host-reported MSI range is expected in stage-1.
> In such case we need a new type VIRTIO_IOMMU_RESV_
> MEM_T_DIRECT to teach guest setup identity mapping.
> it should be the right thing to add since anyway there might
> be true IOMMU_RESV_DIRECT range reported from host
> which also should be handled.
> 
> 	2) Alternatively we could instead allow Qemu to
> request dynamic change of physical doorbell mapping in 
> stage2, e.g. from GPA of virtual doorbell to HPA of physical 
> doorbell. But it doesn't like a good design - VFIO doesn't
> assign interrupt controller to user space then why should 
> VFIO allow user mapping to doorbell...
> 
> if 1) is agreed, looks the missing part in spec is just VIRTIO_
> IOMMU_RESV_MEM_T_DIRECT, though the whole story 
> is lengthy and fully enabling nested require many other
> works. :-)

This is a great write-up, thanks. As said on the v0.6 thread [1], I also
prefer 1), because it doesn't require any additional interface in the
host kernel, and it doesn't force host userspace to guess which doorbell
address the guest is writing into the MSI-X table.

Thanks,
Jean

[1] https://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg30104.html

^ permalink raw reply	[flat|nested] 61+ messages in thread

* [virtio-dev] Re: [PATCH 1/4] iommu: Add virtio-iommu driver
@ 2018-04-11 18:35                     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 61+ messages in thread
From: Jean-Philippe Brucker @ 2018-04-11 18:35 UTC (permalink / raw)
  To: Tian, Kevin, Robin Murphy, iommu, kvm, virtualization,
	virtio-dev, kvmarm
  Cc: jayachandran.nair, tnowicki, mst, Marc Zyngier, jasowang,
	Will Deacon, jintack, eric.auger.pro

On 23/03/18 08:27, Tian, Kevin wrote:
>>> The host kernel needs to have *some* MSI region in place before the
>>> guest can start configuring interrupts, otherwise it won't know what
>>> address to give to the underlying hardware. However, as soon as the host
>>> kernel has picked a region, host userspace needs to know that it can no
>>> longer use addresses in that region for DMA-able guest memory. It's a
>>> lot easier when the address is fixed in hardware and the host userspace
>>> will never be stupid enough to try and VFIO_IOMMU_DMA_MAP it, but in
>>> the
>>> more general case where MSI writes undergo IOMMU address translation
>>> so
>>> it's an arbitrary IOVA, this has the potential to conflict with stuff
>>> like guest memory hotplug.
>>>
>>> What we currently have is just the simplest option, with the host kernel
>>> just picking something up-front and pretending to host userspace that
>>> it's a fixed hardware address. There's certainly scope for it to be a
>>> bit more dynamic in the sense of adding an interface to let userspace
>>> move it around (before attaching any devices, at least), but I don't
>>> think it's feasible for the host kernel to second-guess userspace enough
>>> to make it entirely transparent like it is in the DMA API domain case.
>>>
>>> Of course, that's all assuming the host itself is using a virtio-iommu
>>> (e.g. in a nested virt or emulation scenario). When it's purely within a
>>> guest then an MSI reservation shouldn't matter so much, since the guest
>>> won't be anywhere near the real hardware configuration anyway.
>>>
>>> Robin.
>>
>> Curious since anyway we are defining a new iommu architecture
>> is it possible to avoid those ARM-specific burden completely?
>>
> 
> OK, after some study around those tricks below is my learning:
> 
> - MSI_IOVA window is used only on request (iommu_dma_get
> _msi_page), not meant to take effect on all architectures once 
> initialized. e.g. ARM GIC does it but not x86. So it is reasonable 
> for virtio-iommu driver to implement such capability;
> 
> - I thought whether hardware MSI doorbell can be always reported
> on virtio-iommu since it's newly defined. Looks there is a problem
> if underlying IOMMU is sw-managed MSI style - valid mapping is
> expected in all level of translations, meaning guest has to manage
> stage-1 mapping in nested configuration since stage-1 is owned
> by guest. 
> 
> Then virtio-iommu is naturally expected to report the same MSI 
> model as supported by underlying hardware. Below are some
> further thoughts along this route (use 'IOMMU' to represent the
> physical one and 'virtio-iommu' for virtual one):
> 
> ----
> 
> In the scope of current virtio-iommu spec v.6, there is no nested
> consideration yet. Guest driver is expected to use MAP/UNMAP
> interface on assigned endpoints. In this case the MAP requests
> (IOVA->GPA) is caught and maintained within Qemu which then 
> further talks to VFIO to map IOVA->HPA in IOMMU.
> 
> Qemu can learn the MSI model of IOMMU from sysfs.
> 
> For hardware MSI doorbell (x86 and some ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_MSI
> * Guest takes the range as IOMMU_RESV_MSI. reserved
> * Qemu MAP database has no mapping for the doorbell
> * Physical IOMMU page table has no mapping for the doorbell
> * MSI from passthrough device bypass IOMMU
> * MSI from emulated device bypass virtio-iommu
> 
> For software MSI doorbell (most ARM):
> * Host kernel reports to Qemu as IOMMU_RESV_SW_MSI
> * Qemu report to guest as VIRTIO_IOMMU_RESV_MEM_T_RESERVED
> * Guest takes the range as IOMMU_RESV_RESERVED
> * vGIC requests to map 'GPA of the virtual doorbell'
> * a map request (IOVA->GPA) sent on endpoint
> * Qemu maintains the mapping in MAP database
> 	* but no VFIO_MAP request since it's purely virtual
> * GIC requests to map 'HPA of the physical doorbell'
> 	* e.g. triggered by VFIO enable msi
> * IOMMU now includes a valid mapping (IOVA->HPA)
> * MSI from emulated device go through Qemu MAP
> database (IOVA->'GPA of virtual doorbell') and then hit vGIC
> * MSI from passthrough device go through IOMMU
> (IOVA->'HPA of physical doorbell') and then hit GIC
> 
> In this case, host doorbell is treated as reserved resource in
> guest side. Guest has its own sw-management for virtual
> doorbell which is only used for emulated device. two paths 
> are completely separated.
> 
> If above captures the right flow, current v0.6 spec is complete
> regarding to required function definition.

Yes I think this summarizes well the current state or SW/HW MSI

> Then comes nested case, with two level page tables (stage-1
> and stage-2) in IOMMU. stage-1 is for IOVA->GPA and stage-2
> is for GPA->HPA. VFIO map/unmap happens on stage-2, 
> while stage-1 is directly managed by guest (and bound to
> IOMMU which enables nested translation from IOVA->GPA
> ->HPA).
> 
> For hardware MSI, there is nothing special compared to
> previous requirement. Both host/guest treat the doorbell
> as reserved and guarantee no mapping in either stage-1 or 
> stage-2. 
> 
> For software MSI, more consideration is required:
> 
> * for emulated device it is just fine as long as guest keeps
> IOVA->'GPA of virtual doorbell' in stage-1. Qemu is expected
> to walk stage-1 page table upon MSI request from emulated
> device to hit vGIC;
> 
> * for passthrough device however there is a problem. We
> need valid mapping in both stage-1 and stage-2, while host
> kernel is only responsible for stage-2:
> 
> 	1) if we expect to keep same isolation policy (i.e.
> host MSI fully managed by host kernel), then an identity
> mapping for host-reported MSI range is expected in stage-1.
> In such case we need a new type VIRTIO_IOMMU_RESV_
> MEM_T_DIRECT to teach guest setup identity mapping.
> it should be the right thing to add since anyway there might
> be true IOMMU_RESV_DIRECT range reported from host
> which also should be handled.
> 
> 	2) Alternatively we could instead allow Qemu to
> request dynamic change of physical doorbell mapping in 
> stage2, e.g. from GPA of virtual doorbell to HPA of physical 
> doorbell. But it doesn't like a good design - VFIO doesn't
> assign interrupt controller to user space then why should 
> VFIO allow user mapping to doorbell...
> 
> if 1) is agreed, looks the missing part in spec is just VIRTIO_
> IOMMU_RESV_MEM_T_DIRECT, though the whole story 
> is lengthy and fully enabling nested require many other
> works. :-)

This is a great write-up, thanks. As said on the v0.6 thread [1], I also
prefer 1), because it doesn't require any additional interface in the
host kernel, and it doesn't force host userspace to guess which doorbell
address the guest is writing into the MSI-X table.

Thanks,
Jean

[1] https://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg30104.html

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2018-04-11 18:35 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-14 14:53 [PATCH 0/4] Add virtio-iommu driver Jean-Philippe Brucker
2018-02-14 14:53 ` [virtio-dev] " Jean-Philippe Brucker
2018-02-14 14:53 ` [PATCH 1/4] iommu: " Jean-Philippe Brucker
2018-02-14 14:53 ` Jean-Philippe Brucker
2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
2018-02-21 20:12   ` kbuild test robot
2018-02-21 21:08   ` kbuild test robot
     [not found]   ` <20180214145340.1223-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-02-19 12:23     ` Tomasz Nowicki
2018-02-20 11:30       ` Jean-Philippe Brucker
2018-02-20 11:30       ` Jean-Philippe Brucker
2018-02-20 11:30         ` [virtio-dev] " Jean-Philippe Brucker
2018-02-21 20:12     ` kbuild test robot
     [not found]       ` <201802220455.lMEb6LLi%fengguang.wu-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
2018-02-22 11:04         ` Jean-Philippe Brucker
2018-02-22 11:04           ` [virtio-dev] " Jean-Philippe Brucker
2018-02-27 14:47           ` Michael S. Tsirkin
     [not found]           ` <e5ffc52f-4510-f757-aa83-2a99af3ae06b-5wv7dgnIgG8@public.gmane.org>
2018-02-27 14:47             ` Michael S. Tsirkin
2018-02-27 14:47               ` [virtio-dev] " Michael S. Tsirkin
2018-02-21 21:08     ` kbuild test robot
2018-03-21  6:43     ` Tian, Kevin
2018-03-21  6:43       ` [virtio-dev] " Tian, Kevin
2018-03-21 13:14       ` Jean-Philippe Brucker
     [not found]       ` <AADFC41AFE54684AB9EE6CBC0274A5D19108B0FE-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2018-03-21 13:14         ` Jean-Philippe Brucker
2018-03-21 13:14           ` [virtio-dev] " Jean-Philippe Brucker
2018-03-21 14:23           ` Robin Murphy
2018-03-22 10:06             ` Tian, Kevin
2018-03-22 10:06               ` [virtio-dev] " Tian, Kevin
     [not found]             ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42@SHSMSX101.ccr.corp.intel.com>
2018-03-23  8:27               ` Tian, Kevin
     [not found]               ` <AADFC41AFE54684AB9EE6CBC0274A5D19108DC42-0J0gbvR4kThpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2018-03-23  8:27                 ` Tian, Kevin
2018-03-23  8:27                   ` [virtio-dev] " Tian, Kevin
2018-04-11 18:35                   ` Jean-Philippe Brucker
2018-04-11 18:35                     ` [virtio-dev] " Jean-Philippe Brucker
2018-04-11 18:35                   ` Jean-Philippe Brucker
2018-03-23 14:48     ` Robin Murphy
2018-04-11 18:33       ` Jean-Philippe Brucker
2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
2018-03-21  6:43   ` Tian, Kevin
2018-03-23 14:48   ` Robin Murphy
2018-02-14 14:53 ` [PATCH 2/4] iommu/virtio: Add probe request Jean-Philippe Brucker
2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
2018-03-23 15:00   ` Robin Murphy
     [not found]   ` <20180214145340.1223-3-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-03-23 15:00     ` Robin Murphy
2018-04-11 18:33       ` Jean-Philippe Brucker
2018-04-11 18:33       ` Jean-Philippe Brucker
2018-04-11 18:33         ` [virtio-dev] " Jean-Philippe Brucker
2018-02-14 14:53 ` Jean-Philippe Brucker
2018-02-14 14:53 ` [PATCH 3/4] iommu/virtio: Add event queue Jean-Philippe Brucker
2018-02-14 14:53 ` Jean-Philippe Brucker
2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
2018-02-22  1:35   ` kbuild test robot
     [not found]   ` <20180214145340.1223-4-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-02-22  1:35     ` kbuild test robot
2018-02-14 14:53 ` [PATCH 4/4] vfio: Allow type-1 IOMMU instantiation with a virtio-iommu Jean-Philippe Brucker
2018-02-14 14:53   ` [virtio-dev] " Jean-Philippe Brucker
2018-02-14 15:26   ` Alex Williamson
     [not found]   ` <20180214145340.1223-5-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-02-14 15:26     ` Alex Williamson
     [not found]       ` <20180214082639.54556efb-DGNDKt5SQtizQB+pC5nmwQ@public.gmane.org>
2018-02-14 15:35         ` Robin Murphy
2018-02-15 13:53           ` Jean-Philippe Brucker
2018-02-15 13:53           ` Jean-Philippe Brucker
2018-02-15 13:53             ` [virtio-dev] " Jean-Philippe Brucker
2018-02-15 13:53             ` Jean-Philippe Brucker
2018-02-14 15:35       ` Robin Murphy
2018-02-14 14:53 ` Jean-Philippe Brucker

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.