All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH v2 0/5] Add virtio-iommu driver
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

Implement the virtio-iommu driver following version 0.5 of the
specification [1]. Previous version of this code was sent back in April
[2], implementing the first public RFC. Since then there has been lots of
progress and discussion on the specification side, and I think the driver
is in a good shape now.

The reason patches 1-3 are only RFC is that I'm waiting on feedback from
the Virtio TC to reserve a device ID.

List of changes since previous RFC:
* Add per-endpoint probe request, for hardware MSI and reserved regions.
* Add a virtqueue for the device to report translation faults. Only
  non-recoverable ones at the moment.
* Removed the iommu_map_sg specialization for now, because none of the
  device drivers I use for testing (virtio, ixgbe and internal DMA
  engines) seem to use map_sg. This kind of feature is a lot more
  interesting when accompanied by benchmark numbers, and can be added back
  during future optimization work.
* Many fixes and cleanup

The driver works out of the box on DT-based systems, but ACPI support
still needs to be tested and discussed. In the specification I proposed
IORT tables as a nice candidate for describing the virtual topology.
Patches 4 and 5 propose small changes to the IORT driver for
instantiating a paravirtualized IOMMU. The IORT node is described in the
specification [1]. x86 support will also require some hacks since the
driver is based on the IOMMU DMA ops, that x86 doesn't use.

Eric's latest QEMU device [3] works with v0.4. For the moment you can use
the kvmtool device [4] to test v0.5 on arm64, and inject arbitrary fault
with the debug tool. The driver can also be pulled from my Linux tree [5].

[1] https://www.spinics.net/lists/kvm/msg157402.html
[2] https://patchwork.kernel.org/patch/9670273/
[3] https://lists.gnu.org/archive/html/qemu-arm/2017-09/msg00413.html
[4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/base
[5] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.5-dev

Jean-Philippe Brucker (5):
  iommu: Add virtio-iommu driver
  iommu/virtio-iommu: Add probe request
  iommu/virtio-iommu: Add event queue
  ACPI/IORT: Support paravirtualized IOMMU
  ACPI/IORT: Move IORT to the ACPI folder

 drivers/acpi/Kconfig              |    3 +
 drivers/acpi/Makefile             |    1 +
 drivers/acpi/arm64/Kconfig        |    3 -
 drivers/acpi/arm64/Makefile       |    1 -
 drivers/acpi/{arm64 => }/iort.c   |   95 ++-
 drivers/iommu/Kconfig             |   12 +
 drivers/iommu/Makefile            |    1 +
 drivers/iommu/virtio-iommu.c      | 1219 +++++++++++++++++++++++++++++++++++++
 include/acpi/actbl2.h             |   18 +-
 include/uapi/linux/virtio_ids.h   |    1 +
 include/uapi/linux/virtio_iommu.h |  195 ++++++
 11 files changed, 1537 insertions(+), 12 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (92%)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

-- 
2.14.3


^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 0/5] Add virtio-iommu driver
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

Implement the virtio-iommu driver following version 0.5 of the
specification [1]. Previous version of this code was sent back in April
[2], implementing the first public RFC. Since then there has been lots of
progress and discussion on the specification side, and I think the driver
is in a good shape now.

The reason patches 1-3 are only RFC is that I'm waiting on feedback from
the Virtio TC to reserve a device ID.

List of changes since previous RFC:
* Add per-endpoint probe request, for hardware MSI and reserved regions.
* Add a virtqueue for the device to report translation faults. Only
  non-recoverable ones at the moment.
* Removed the iommu_map_sg specialization for now, because none of the
  device drivers I use for testing (virtio, ixgbe and internal DMA
  engines) seem to use map_sg. This kind of feature is a lot more
  interesting when accompanied by benchmark numbers, and can be added back
  during future optimization work.
* Many fixes and cleanup

The driver works out of the box on DT-based systems, but ACPI support
still needs to be tested and discussed. In the specification I proposed
IORT tables as a nice candidate for describing the virtual topology.
Patches 4 and 5 propose small changes to the IORT driver for
instantiating a paravirtualized IOMMU. The IORT node is described in the
specification [1]. x86 support will also require some hacks since the
driver is based on the IOMMU DMA ops, that x86 doesn't use.

Eric's latest QEMU device [3] works with v0.4. For the moment you can use
the kvmtool device [4] to test v0.5 on arm64, and inject arbitrary fault
with the debug tool. The driver can also be pulled from my Linux tree [5].

[1] https://www.spinics.net/lists/kvm/msg157402.html
[2] https://patchwork.kernel.org/patch/9670273/
[3] https://lists.gnu.org/archive/html/qemu-arm/2017-09/msg00413.html
[4] git://linux-arm.org/kvmtool-jpb.git virtio-iommu/base
[5] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.5-dev

Jean-Philippe Brucker (5):
  iommu: Add virtio-iommu driver
  iommu/virtio-iommu: Add probe request
  iommu/virtio-iommu: Add event queue
  ACPI/IORT: Support paravirtualized IOMMU
  ACPI/IORT: Move IORT to the ACPI folder

 drivers/acpi/Kconfig              |    3 +
 drivers/acpi/Makefile             |    1 +
 drivers/acpi/arm64/Kconfig        |    3 -
 drivers/acpi/arm64/Makefile       |    1 -
 drivers/acpi/{arm64 => }/iort.c   |   95 ++-
 drivers/iommu/Kconfig             |   12 +
 drivers/iommu/Makefile            |    1 +
 drivers/iommu/virtio-iommu.c      | 1219 +++++++++++++++++++++++++++++++++++++
 include/acpi/actbl2.h             |   18 +-
 include/uapi/linux/virtio_ids.h   |    1 +
 include/uapi/linux/virtio_iommu.h |  195 ++++++
 11 files changed, 1537 insertions(+), 12 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (92%)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code is to create requests and send them through virtio.
Implementing the IOMMU API is fairly straightforward since the
virtio-iommu MAP/UNMAP interface is almost identical.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 140 ++++++
 5 files changed, 1111 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 17b212f56e6a..7271e59e8b23 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -403,4 +403,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index dca71fe1c885..432242f3a328 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..feb8c8925c3a
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,958 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2017 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	refcount_t			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+/* Must be called with request_lock held */
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
+	 * notion of time and this just prevents locking up a CPU if the device
+	 * dies.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i) {
+			/*
+			 * FIXME: what's a good way to yield to host? A second
+			 * virtqueue_kick won't have any effect since we haven't
+			 * added any descriptor.
+			 */
+			udelay(10);
+		}
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	struct viommu_request req = {0};
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. Caller
+ *	must always free the returned mapping, whether the function succeeds or
+ *	not.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all devices,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ *
+ * Caller should hold the mapping lock if necessary.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node)
+		return 0;
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false; /* :( */
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+	refcount_set(&vdomain->endpoints, 0);
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * When attaching the device to a new domain, it will be detached from
+	 * the old one and, if as as a result the old domain isn't attached to
+	 * any device, all mappings are removed from the old domain and it is
+	 * freed. (Note that we can't use get_domain_for_dev here, it returns
+	 * the default domain during initial attach.)
+	 *
+	 * Take note of the device disappearing, so we can ignore unmap request
+	 * on stale domains (that is, between this detach and the upcoming
+	 * free.)
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		refcount_dec(&vdev->vdomain->endpoints);
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!refcount_read(&vdomain->endpoints)) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any.
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	refcount_inc(&vdomain->endpoints);
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.phys_addr	= cpu_to_le64(paddr),
+		.size		= cpu_to_le64(size),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!refcount_read(&vdomain->endpoints))
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!refcount_read(&vdomain->endpoints))
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.size		= cpu_to_le64(unmapped),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	if (mapping)
+		kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_viommu;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_viommu;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_unregister(&viommu->iommu);
+
+err_free_viommu:
+	kfree(viommu);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_unregister(&viommu->iommu);
+	kfree(viommu);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..934ed3d3cd3f 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..2b4cd2042897
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,140 @@
+/*
+ * Virtio-iommu definition v0.5
+ *
+ * Copyright (C) 2017 ARM Ltd.
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible drivers/servers:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of ARM Ltd. nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8 					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					phys_addr;
+	__le64					size;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+__packed
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					size;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
  (?)
  (?)
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, eric.auger,
	lv.zheng, sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code is to create requests and send them through virtio.
Implementing the IOMMU API is fairly straightforward since the
virtio-iommu MAP/UNMAP interface is almost identical.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 140 ++++++
 5 files changed, 1111 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 17b212f56e6a..7271e59e8b23 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -403,4 +403,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index dca71fe1c885..432242f3a328 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..feb8c8925c3a
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,958 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2017 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	refcount_t			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+/* Must be called with request_lock held */
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
+	 * notion of time and this just prevents locking up a CPU if the device
+	 * dies.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i) {
+			/*
+			 * FIXME: what's a good way to yield to host? A second
+			 * virtqueue_kick won't have any effect since we haven't
+			 * added any descriptor.
+			 */
+			udelay(10);
+		}
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	struct viommu_request req = {0};
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. Caller
+ *	must always free the returned mapping, whether the function succeeds or
+ *	not.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all devices,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ *
+ * Caller should hold the mapping lock if necessary.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node)
+		return 0;
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false; /* :( */
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+	refcount_set(&vdomain->endpoints, 0);
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * When attaching the device to a new domain, it will be detached from
+	 * the old one and, if as as a result the old domain isn't attached to
+	 * any device, all mappings are removed from the old domain and it is
+	 * freed. (Note that we can't use get_domain_for_dev here, it returns
+	 * the default domain during initial attach.)
+	 *
+	 * Take note of the device disappearing, so we can ignore unmap request
+	 * on stale domains (that is, between this detach and the upcoming
+	 * free.)
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		refcount_dec(&vdev->vdomain->endpoints);
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!refcount_read(&vdomain->endpoints)) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any.
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	refcount_inc(&vdomain->endpoints);
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.phys_addr	= cpu_to_le64(paddr),
+		.size		= cpu_to_le64(size),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!refcount_read(&vdomain->endpoints))
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!refcount_read(&vdomain->endpoints))
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.size		= cpu_to_le64(unmapped),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	if (mapping)
+		kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_viommu;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_viommu;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_unregister(&viommu->iommu);
+
+err_free_viommu:
+	kfree(viommu);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_unregister(&viommu->iommu);
+	kfree(viommu);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..934ed3d3cd3f 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..2b4cd2042897
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,140 @@
+/*
+ * Virtio-iommu definition v0.5
+ *
+ * Copyright (C) 2017 ARM Ltd.
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible drivers/servers:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of ARM Ltd. nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8 					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					phys_addr;
+	__le64					size;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+__packed
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					size;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio-mmio transport without emulating
page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code is to create requests and send them through virtio.
Implementing the IOMMU API is fairly straightforward since the
virtio-iommu MAP/UNMAP interface is almost identical.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 140 ++++++
 5 files changed, 1111 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 17b212f56e6a..7271e59e8b23 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -403,4 +403,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO_MMIO
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index dca71fe1c885..432242f3a328 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..feb8c8925c3a
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,958 @@
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2017 ARM Limited
+ * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vq;
+	/* Serialize anything touching the request queue */
+	spinlock_t			request_lock;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	union {
+		struct virtio_iommu_req_map map;
+		struct virtio_iommu_req_unmap unmap;
+	} req;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	/* Number of endpoints attached to this domain */
+	refcount_t			endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct scatterlist		top;
+	struct scatterlist		bottom;
+
+	int				written;
+	struct list_head		list;
+};
+
+#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
+
+/* Virtio transport */
+
+static int viommu_status_to_errno(u8 status)
+{
+	switch (status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+/*
+ * viommu_get_req_size - compute request size
+ *
+ * A virtio-iommu request is split into one device-read-only part (top) and one
+ * device-write-only part (bottom). Given a request, return the sizes of the two
+ * parts in @top and @bottom.
+ *
+ * Return 0 on success, or an error when the request seems invalid.
+ */
+static int viommu_get_req_size(struct viommu_dev *viommu,
+			       struct virtio_iommu_req_head *req, size_t *top,
+			       size_t *bottom)
+{
+	size_t size;
+	union virtio_iommu_req *r = (void *)req;
+
+	*bottom = sizeof(struct virtio_iommu_req_tail);
+
+	switch (req->type) {
+	case VIRTIO_IOMMU_T_ATTACH:
+		size = sizeof(r->attach);
+		break;
+	case VIRTIO_IOMMU_T_DETACH:
+		size = sizeof(r->detach);
+		break;
+	case VIRTIO_IOMMU_T_MAP:
+		size = sizeof(r->map);
+		break;
+	case VIRTIO_IOMMU_T_UNMAP:
+		size = sizeof(r->unmap);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	*top = size - *bottom;
+	return 0;
+}
+
+static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
+			       struct list_head *sent)
+{
+
+	unsigned int len;
+	int nr_received = 0;
+	struct viommu_request *req, *pending;
+
+	pending = list_first_entry_or_null(sent, struct viommu_request, list);
+	if (WARN_ON(!pending))
+		return 0;
+
+	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+		if (req != pending) {
+			dev_warn(viommu->dev, "discarding stale request\n");
+			continue;
+		}
+
+		pending->written = len;
+
+		if (++nr_received == nr_sent) {
+			WARN_ON(!list_is_last(&pending->list, sent));
+			break;
+		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
+			break;
+		}
+
+		pending = list_next_entry(pending, list);
+	}
+
+	return nr_received;
+}
+
+/* Must be called with request_lock held */
+static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
+				  struct viommu_request *req, int nr,
+				  int *nr_sent)
+{
+	int i, ret;
+	ktime_t timeout;
+	LIST_HEAD(pending);
+	int nr_received = 0;
+	struct scatterlist *sg[2];
+	/*
+	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
+	 * notion of time and this just prevents locking up a CPU if the device
+	 * dies.
+	 */
+	unsigned long timeout_ms = 1000;
+
+	*nr_sent = 0;
+
+	for (i = 0; i < nr; i++, req++) {
+		req->written = 0;
+
+		sg[0] = &req->top;
+		sg[1] = &req->bottom;
+
+		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
+					GFP_ATOMIC);
+		if (ret)
+			break;
+
+		list_add_tail(&req->list, &pending);
+	}
+
+	if (i && !virtqueue_kick(viommu->vq))
+		return -EPIPE;
+
+	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
+	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
+		nr_received += viommu_receive_resp(viommu, i - nr_received,
+						   &pending);
+		if (nr_received < i) {
+			/*
+			 * FIXME: what's a good way to yield to host? A second
+			 * virtqueue_kick won't have any effect since we haven't
+			 * added any descriptor.
+			 */
+			udelay(10);
+		}
+	}
+
+	if (nr_received != i)
+		ret = -ETIMEDOUT;
+
+	if (ret == -ENOSPC && nr_received)
+		/*
+		 * We've freed some space since virtio told us that the ring is
+		 * full, tell the caller to come back for more.
+		 */
+		ret = -EAGAIN;
+
+	*nr_sent = nr_received;
+
+	return ret;
+}
+
+/*
+ * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
+ *                         them to return
+ *
+ * @req: array of requests
+ * @nr: array length
+ * @nr_sent: on return, contains the number of requests actually sent
+ *
+ * Return 0 on success, or an error if we failed to send some of the requests.
+ */
+static int viommu_send_reqs_sync(struct viommu_dev *viommu,
+				 struct viommu_request *req, int nr,
+				 int *nr_sent)
+{
+	int ret;
+	int sent = 0;
+	unsigned long flags;
+
+	*nr_sent = 0;
+	do {
+		spin_lock_irqsave(&viommu->request_lock, flags);
+		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
+		spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+		*nr_sent += sent;
+		req += sent;
+		nr -= sent;
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+/*
+ * viommu_send_req_sync - send one request and wait for reply
+ *
+ * @top: pointer to a virtio_iommu_req_* structure
+ *
+ * Returns 0 if the request was successful, or an error number otherwise. No
+ * distinction is done between transport and request errors.
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
+{
+	int ret;
+	int nr_sent;
+	void *bottom;
+	struct viommu_request req = {0};
+	size_t top_size, bottom_size;
+	struct virtio_iommu_req_tail *tail;
+	struct virtio_iommu_req_head *head = top;
+
+	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
+	if (ret)
+		return ret;
+
+	bottom = top + top_size;
+	tail = bottom + bottom_size - sizeof(*tail);
+
+	sg_init_one(&req.top, top, top_size);
+	sg_init_one(&req.bottom, bottom, bottom_size);
+
+	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
+	if (ret || !req.written || nr_sent != 1) {
+		dev_err(viommu->dev, "failed to send request\n");
+		return -EIO;
+	}
+
+	return viommu_status_to_errno(tail->status);
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size)
+{
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ * @out_mapping: if not NULL, the first removed mapping is returned in there.
+ *	This allows the caller to reuse the buffer for the unmap request. Caller
+ *	must always free the returned mapping, whether the function succeeds or
+ *	not.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				 unsigned long iova, size_t size,
+				 struct viommu_mapping **out_mapping)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+
+	if (next) {
+		mapping = container_of(next, struct viommu_mapping, iova);
+		/* Trying to split a mapping? */
+		if (WARN_ON(mapping->iova.start < iova))
+			next = NULL;
+	}
+
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+
+		next = interval_tree_iter_next(node, iova, last);
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+
+		if (out_mapping && !(*out_mapping))
+			*out_mapping = mapping;
+		else
+			kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all devices,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ *
+ * Caller should hold the mapping lock if necessary.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	int i = 1, ret, nr_sent;
+	struct viommu_request *reqs;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	size_t top_size, bottom_size;
+
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	if (!node)
+		return 0;
+
+	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
+		i++;
+
+	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
+	if (!reqs)
+		return -ENOMEM;
+
+	bottom_size = sizeof(struct virtio_iommu_req_tail);
+	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
+
+	i = 0;
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
+		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
+			    bottom_size);
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+		i++;
+	}
+
+	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
+	kfree(reqs);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static bool viommu_capable(enum iommu_cap cap)
+{
+	return false; /* :( */
+}
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+	refcount_set(&vdomain->endpoints, 0);
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	/* ida limits size to 31 bits. A value of 0 means "max" */
+	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
+				  1U << viommu->domain_bits;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0, NULL);
+
+	if (vdomain->viommu)
+		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach *req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * When attaching the device to a new domain, it will be detached from
+	 * the old one and, if as as a result the old domain isn't attached to
+	 * any device, all mappings are removed from the old domain and it is
+	 * freed. (Note that we can't use get_domain_for_dev here, it returns
+	 * the default domain during initial attach.)
+	 *
+	 * Take note of the device disappearing, so we can ignore unmap request
+	 * on stale domains (that is, between this detach and the upcoming
+	 * free.)
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		refcount_dec(&vdev->vdomain->endpoints);
+
+	/* DMA to the stack is forbidden, store request on the heap */
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	*req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, req);
+		if (ret)
+			break;
+	}
+
+	kfree(req);
+
+	if (ret)
+		return ret;
+
+	if (!refcount_read(&vdomain->endpoints)) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings if any.
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	refcount_inc(&vdomain->endpoints);
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
+	if (!mapping)
+		return -ENOMEM;
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
+
+	mapping->req.map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.phys_addr	= cpu_to_le64(paddr),
+		.size		= cpu_to_le64(size),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!refcount_read(&vdomain->endpoints))
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size, NULL);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct viommu_mapping *mapping = NULL;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
+	if (unmapped < size) {
+		ret = -EINVAL;
+		goto out_free;
+	}
+
+	/* Device already removed all mappings after detach. */
+	if (!refcount_read(&vdomain->endpoints))
+		goto out_free;
+
+	if (WARN_ON(!mapping))
+		return 0;
+
+	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_addr	= cpu_to_le64(iova),
+		.size		= cpu_to_le64(unmapped),
+	};
+
+	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
+
+out_free:
+	if (mapping)
+		kfree(mapping);
+
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (!IS_ERR(group))
+		iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	kfree(dev->iommu_fwspec->iommu_priv);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops = {
+	.capable		= viommu_capable,
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.map_sg			= default_iommu_map_sg,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.of_xlate		= viommu_of_xlate,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+};
+
+static int viommu_init_vq(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vq = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+
+	ret = viommu_init_vq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_viommu;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_viommu;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_unregister(&viommu->iommu);
+
+err_free_viommu:
+	kfree(viommu);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_unregister(&viommu->iommu);
+	kfree(viommu);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..934ed3d3cd3f 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..2b4cd2042897
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,140 @@
+/*
+ * Virtio-iommu definition v0.5
+ *
+ * Copyright (C) 2017 ARM Ltd.
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible drivers/servers:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of ARM Ltd. nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8 					domain_bits;
+} __packed;
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+} __packed;
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+
+	__le32					endpoint;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					phys_addr;
+	__le64					size;
+	__le32					flags;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+__packed
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+
+	__le32					domain;
+	__le64					virt_addr;
+	__le64					size;
+	__le32					reserved;
+
+	struct virtio_iommu_req_tail		tail;
+} __packed;
+
+union virtio_iommu_req {
+	struct virtio_iommu_req_head		head;
+
+	struct virtio_iommu_req_attach		attach;
+	struct virtio_iommu_req_detach		detach;
+	struct virtio_iommu_req_map		map;
+	struct virtio_iommu_req_unmap		unmap;
+};
+
+#endif
-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 195 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index feb8c8925c3a..79e0add94e05 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
+		/* Please update your driver. */
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret) {
+		kfree(probe);
+		return ret;
+	}
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+	kfree(probe);
+
+	return 0;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
 
-	list_add_tail(&region->list, head);
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
+
+	iommu_dma_get_resv_regions(dev, head);
 }
 
 static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
@@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -931,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2b4cd2042897..eec90a2ab547 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -38,6 +38,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -49,6 +50,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8 					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -56,6 +60,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -135,6 +171,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (3 preceding siblings ...)
  (?)
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, eric.auger,
	lv.zheng, sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 195 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index feb8c8925c3a..79e0add94e05 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
+		/* Please update your driver. */
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret) {
+		kfree(probe);
+		return ret;
+	}
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+	kfree(probe);
+
+	return 0;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
 
-	list_add_tail(&region->list, head);
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
+
+	iommu_dma_get_resv_regions(dev, head);
 }
 
 static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
@@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -931,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2b4cd2042897..eec90a2ab547 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -38,6 +38,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -49,6 +50,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8 					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -56,6 +60,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -135,6 +171,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  37 +++++++++
 2 files changed, 195 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index feb8c8925c3a..79e0add94e05 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -45,6 +45,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -72,6 +73,7 @@ struct viommu_domain {
 struct viommu_endpoint {
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
 	case VIRTIO_IOMMU_T_UNMAP:
 		size = sizeof(r->unmap);
 		break;
+	case VIRTIO_IOMMU_T_PROBE:
+		*bottom += viommu->probe_size;
+		size = sizeof(r->probe) + *bottom;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 addr = le64_to_cpu(mem->addr);
+	u64 size = le64_to_cpu(mem->size);
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(addr, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+	default:
+		region = iommu_alloc_resv_region(addr, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	}
+
+	list_add(&vdev->resv_regions, &region->list);
+
+	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
+	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
+		/* Please update your driver. */
+		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		/* Trouble ahead. */
+		return -EINVAL;
+
+	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
+			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe);
+	if (ret) {
+		kfree(probe);
+		return ret;
+	}
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
+			break;
+		default:
+			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += sizeof(*prop) + len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+	kfree(probe);
+
+	return 0;
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
 
 static int viommu_add_device(struct device *dev)
 {
+	int ret;
 	struct iommu_group *group;
 	struct viommu_endpoint *vdev;
 	struct viommu_dev *viommu = NULL;
@@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
 		return -ENOMEM;
 
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * Last step creates a default domain and attaches to it. Everything
 	 * must be ready.
@@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
 
 static void viommu_remove_device(struct device *dev)
 {
-	kfree(dev->iommu_fwspec->iommu_priv);
+	struct viommu_endpoint *vdev;
+	struct iommu_resv_region *entry, *next;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
+		kfree(entry);
+
+	kfree(vdev);
 }
 
 static struct iommu_group *viommu_device_group(struct device *dev)
@@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI windows, use it.
+		 * Otherwise add a software-mapped region
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
 
-	list_add_tail(&region->list, head);
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
+
+	iommu_dma_get_resv_regions(dev, head);
 }
 
 static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
@@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -931,6 +1081,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 2b4cd2042897..eec90a2ab547 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -38,6 +38,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -49,6 +50,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8 					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 } __packed;
 
 /* Request types */
@@ -56,6 +60,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 } __packed;
 
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					addr;
+	__le64					size;
+} __packed;
+
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+	__u8					value[];
+} __packed;
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/* Tail follows the variable-length properties array (no padding) */
+} __packed;
+
 union virtio_iommu_req {
 	struct virtio_iommu_req_head		head;
 
@@ -135,6 +171,7 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_detach		detach;
 	struct virtio_iommu_req_map		map;
 	struct virtio_iommu_req_unmap		unmap;
+	struct virtio_iommu_req_probe		probe;
 };
 
 #endif
-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

The event queue offers a way for the device to report access faults from
devices. It is implemented on virtqueue #1, whenever the host needs to
signal a fault it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 142 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 79e0add94e05..fe0d449bf489 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,7 +43,7 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
 
@@ -84,6 +90,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
 
 /* Virtio transport */
@@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * dies.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return 0;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
+				  GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		goto err_free_viommu;
 
@@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index eec90a2ab547..b57543b4145b 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -174,4 +174,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (5 preceding siblings ...)
  (?)
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, eric.auger,
	lv.zheng, sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

The event queue offers a way for the device to report access faults from
devices. It is implemented on virtqueue #1, whenever the host needs to
signal a fault it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 142 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 79e0add94e05..fe0d449bf489 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,7 +43,7 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
 
@@ -84,6 +90,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
 
 /* Virtio transport */
@@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * dies.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return 0;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
+				  GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		goto err_free_viommu;
 
@@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index eec90a2ab547..b57543b4145b 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -174,4 +174,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

The event queue offers a way for the device to report access faults from
devices. It is implemented on virtqueue #1, whenever the host needs to
signal a fault it fills one of the buffers offered by the guest and
interrupts it.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_iommu.h |  18 +++++
 2 files changed, 142 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 79e0add94e05..fe0d449bf489 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -30,6 +30,12 @@
 #define MSI_IOVA_BASE			0x8000000
 #define MSI_IOVA_LENGTH			0x100000
 
+enum viommu_vq_idx {
+	VIOMMU_REQUEST_VQ	= 0,
+	VIOMMU_EVENT_VQ		= 1,
+	VIOMMU_NUM_VQS		= 2,
+};
+
 struct viommu_dev {
 	struct iommu_device		iommu;
 	struct device			*dev;
@@ -37,7 +43,7 @@ struct viommu_dev {
 
 	struct ida			domain_ids;
 
-	struct virtqueue		*vq;
+	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
 	/* Serialize anything touching the request queue */
 	spinlock_t			request_lock;
 
@@ -84,6 +90,15 @@ struct viommu_request {
 	struct list_head		list;
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
 
 /* Virtio transport */
@@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
 	unsigned int len;
 	int nr_received = 0;
 	struct viommu_request *req, *pending;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	pending = list_first_entry_or_null(sent, struct viommu_request, list);
 	if (WARN_ON(!pending))
 		return 0;
 
-	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
 		if (req != pending) {
 			dev_warn(viommu->dev, "discarding stale request\n");
 			continue;
@@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 	 * dies.
 	 */
 	unsigned long timeout_ms = 1000;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
 
 	*nr_sent = 0;
 
@@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
 		sg[0] = &req->top;
 		sg[1] = &req->bottom;
 
-		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
-					GFP_ATOMIC);
+		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
 		if (ret)
 			break;
 
 		list_add_tail(&req->list, &pending);
 	}
 
-	if (i && !virtqueue_kick(viommu->vq))
+	if (i && !virtqueue_kick(vq))
 		return -EPIPE;
 
 	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
@@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return 0;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u != %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static bool viommu_capable(enum iommu_cap cap)
@@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
 	.put_resv_regions	= viommu_put_resv_regions,
 };
 
-static int viommu_init_vq(struct viommu_dev *viommu)
+static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
+
+	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
+			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
+
+	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
+				  GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
 	}
 
-	viommu->vq = ret;
+	if (!virtqueue_kick(vq))
+		return -EPIPE;
+
+	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
 
 	return 0;
 }
@@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
 	viommu->dev = dev;
 	viommu->vdev = vdev;
 
-	ret = viommu_init_vq(viommu);
+	ret = viommu_init_vqs(viommu);
 	if (ret)
 		goto err_free_viommu;
 
@@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_viommu;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index eec90a2ab547..b57543b4145b 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -174,4 +174,22 @@ union virtio_iommu_req {
 	struct virtio_iommu_req_probe		probe;
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					padding[3];
+	__le32					flags;
+	__le32					endpoint;
+	__le64					address;
+} __packed;
+
 #endif
-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

To describe the virtual topology in relation to a virtio-iommu device,
ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
for it.

This is a RFC because the IORT specification doesn't describe the
paravirtualized node at the moment, it is only provided as an example in
the virtio-iommu spec. What we need to do first is confirm that x86
kernels are able to use the IORT driver with the virtio-iommu. There isn't
anything specific to arm64 in the driver but there might be other blockers
we're not aware of (I know for example that x86 also requires custom DMA
ops rather than iommu-dma ones, but it's unrelated) so this needs to be
tested on the x86 prototype.

Rationale: virtio-iommu requires an ACPI table to be passed between host
and guest that describes its relation to PCI and platform endpoints in the
virtual system. A table that maps PCI RIDs and integrated devices to IOMMU
device IDs, telling the IOMMU driver which endpoints it manages.

As far as I'm aware, there are three existing tables that solve this
problem: Intel DMAR, AMD IVRS and ARM IORT. The first two are specific to
Intel VT-d and AMD IOMMU respectively, while the third describes multiple
remapping devices -- currently only ARM IOMMUs and MSI controllers, but it
is easy to extend.

IORT table and drivers are easiest to extend and they do the job, so
rather than introducing a fourth solution to solve a generic problem,
reuse what exists.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/arm64/iort.c | 95 +++++++++++++++++++++++++++++++++++++++++++----
 drivers/iommu/Kconfig     |  1 +
 include/acpi/actbl2.h     | 18 ++++++++-
 3 files changed, 106 insertions(+), 8 deletions(-)

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index fde279b0a6d8..c7132e4a0560 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -29,7 +29,8 @@
 #define IORT_TYPE_MASK(type)	(1 << (type))
 #define IORT_MSI_TYPE		(1 << ACPI_IORT_NODE_ITS_GROUP)
 #define IORT_IOMMU_TYPE		((1 << ACPI_IORT_NODE_SMMU) |	\
-				(1 << ACPI_IORT_NODE_SMMU_V3))
+				(1 << ACPI_IORT_NODE_SMMU_V3) | \
+				(1 << ACPI_IORT_NODE_PARAVIRT))
 
 /* Until ACPICA headers cover IORT rev. C */
 #ifndef ACPI_IORT_SMMU_V3_CAVIUM_CN99XX
@@ -616,6 +617,8 @@ static inline bool iort_iommu_driver_enabled(u8 type)
 		return IS_BUILTIN(CONFIG_ARM_SMMU_V3);
 	case ACPI_IORT_NODE_SMMU:
 		return IS_BUILTIN(CONFIG_ARM_SMMU);
+	case ACPI_IORT_NODE_PARAVIRT:
+		return IS_BUILTIN(CONFIG_VIRTIO_IOMMU);
 	default:
 		pr_warn("IORT node type %u does not describe an SMMU\n", type);
 		return false;
@@ -1062,6 +1065,48 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
 	return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
 }
 
+static int __init paravirt_count_resources(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	/* Mem + IRQs */
+	return 1 + pviommu->interrupt_count;
+}
+
+static void __init paravirt_init_resources(struct resource *res,
+					   struct acpi_iort_node *node)
+{
+	int i;
+	int num_res = 0;
+	int hw_irq, trigger;
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	res[num_res].start = pviommu->base_address;
+	res[num_res].end = pviommu->base_address + pviommu->span - 1;
+	res[num_res].flags = IORESOURCE_MEM;
+	num_res++;
+
+	for (i = 0; i < pviommu->interrupt_count; i++) {
+		hw_irq = IORT_IRQ_MASK(pviommu->interrupts[i]);
+		trigger = IORT_IRQ_TRIGGER_MASK(pviommu->interrupts[i]);
+
+		acpi_iort_register_irq(hw_irq, "pviommu", trigger, &res[num_res++]);
+	}
+}
+
+static bool __init paravirt_is_coherent(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	return pviommu->flags & ACPI_IORT_NODE_PV_CACHE_COHERENT;
+}
+
 struct iort_iommu_config {
 	const char *name;
 	int (*iommu_init)(struct acpi_iort_node *node);
@@ -1088,6 +1133,13 @@ static const struct iort_iommu_config iort_arm_smmu_cfg __initconst = {
 	.iommu_init_resources = arm_smmu_init_resources
 };
 
+static const struct iort_iommu_config iort_paravirt_cfg __initconst = {
+	.name = "pviommu",
+	.iommu_is_coherent = paravirt_is_coherent,
+	.iommu_count_resources = paravirt_count_resources,
+	.iommu_init_resources = paravirt_init_resources
+};
+
 static __init
 const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 {
@@ -1096,18 +1148,22 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 		return &iort_arm_smmu_v3_cfg;
 	case ACPI_IORT_NODE_SMMU:
 		return &iort_arm_smmu_cfg;
+	case ACPI_IORT_NODE_PARAVIRT:
+		return &iort_paravirt_cfg;
 	default:
 		return NULL;
 	}
 }
 
 /**
- * iort_add_smmu_platform_device() - Allocate a platform device for SMMU
- * @node: Pointer to SMMU ACPI IORT node
+ * iort_add_iommu_platform_device() - Allocate a platform device for an IOMMU
+ * @node: Pointer to IOMMU ACPI IORT node
+ * @name: Base name of the device
  *
  * Returns: 0 on success, <0 failure
  */
-static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+static int __init iort_add_iommu_platform_device(struct acpi_iort_node *node,
+						 const char *name)
 {
 	struct fwnode_handle *fwnode;
 	struct platform_device *pdev;
@@ -1119,7 +1175,7 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	if (!ops)
 		return -ENODEV;
 
-	pdev = platform_device_alloc(ops->name, PLATFORM_DEVID_AUTO);
+	pdev = platform_device_alloc(name, PLATFORM_DEVID_AUTO);
 	if (!pdev)
 		return -ENOMEM;
 
@@ -1189,6 +1245,28 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	return ret;
 }
 
+static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+{
+	const struct iort_iommu_config *ops = iort_get_iommu_cfg(node);
+
+	if (!ops)
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, ops->name);
+}
+
+static int __init iort_add_paravirt_platform_device(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	if (WARN_ON(pviommu->model != ACPI_IORT_NODE_PV_VIRTIO_IOMMU))
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, "virtio-mmio");
+}
+
 static bool __init iort_enable_acs(struct acpi_iort_node *iort_node)
 {
 	if (iort_node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) {
@@ -1250,7 +1328,8 @@ static void __init iort_init_platform_devices(void)
 			acs_enabled = iort_enable_acs(iort_node);
 
 		if ((iort_node->type == ACPI_IORT_NODE_SMMU) ||
-			(iort_node->type == ACPI_IORT_NODE_SMMU_V3)) {
+			(iort_node->type == ACPI_IORT_NODE_SMMU_V3) ||
+			(iort_node->type == ACPI_IORT_NODE_PARAVIRT)) {
 
 			fwnode = acpi_alloc_fwnode_static();
 			if (!fwnode)
@@ -1258,7 +1337,9 @@ static void __init iort_init_platform_devices(void)
 
 			iort_set_fwnode(iort_node, fwnode);
 
-			ret = iort_add_smmu_platform_device(iort_node);
+			ret = iort_node->type == ACPI_IORT_NODE_PARAVIRT ?
+				iort_add_paravirt_platform_device(iort_node) :
+				iort_add_smmu_platform_device(iort_node);
 			if (ret) {
 				iort_delete_fwnode(iort_node);
 				acpi_free_fwnode_static(fwnode);
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 7271e59e8b23..3e28a5d682c3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -409,6 +409,7 @@ config VIRTIO_IOMMU
 	select IOMMU_API
 	select INTERVAL_TREE
 	select ARM_DMA_USE_IOMMU if ARM
+	select ACPI_IORT
 	help
 	  Para-virtualised IOMMU driver with virtio.
 
diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index 686b6f8c09dc..23ae5bbc36d1 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -696,7 +696,8 @@ enum acpi_iort_node_type {
 	ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
 	ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
 	ACPI_IORT_NODE_SMMU = 0x03,
-	ACPI_IORT_NODE_SMMU_V3 = 0x04
+	ACPI_IORT_NODE_SMMU_V3 = 0x04,
+	ACPI_IORT_NODE_PARAVIRT = 0x05,
 };
 
 struct acpi_iort_id_mapping {
@@ -824,6 +825,21 @@ struct acpi_iort_smmu_v3 {
 #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE     (1<<1)
 #define ACPI_IORT_SMMU_V3_PXM_VALID         (1<<3)
 
+struct acpi_iort_pviommu {
+	u64 base_address;
+	u64 span;
+	u32 model;
+	u32 flags;
+	u32 interrupt_count;
+	u64 interrupts[];
+};
+
+enum acpi_iort_paravirt_node_model {
+	ACPI_IORT_NODE_PV_VIRTIO_IOMMU = 0x00,
+};
+
+#define ACPI_IORT_NODE_PV_CACHE_COHERENT    (1<<0)
+
 /*******************************************************************************
  *
  * IVRS - I/O Virtualization Reporting Structure
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (6 preceding siblings ...)
  (?)
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, eric.auger,
	lv.zheng, sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

To describe the virtual topology in relation to a virtio-iommu device,
ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
for it.

This is a RFC because the IORT specification doesn't describe the
paravirtualized node at the moment, it is only provided as an example in
the virtio-iommu spec. What we need to do first is confirm that x86
kernels are able to use the IORT driver with the virtio-iommu. There isn't
anything specific to arm64 in the driver but there might be other blockers
we're not aware of (I know for example that x86 also requires custom DMA
ops rather than iommu-dma ones, but it's unrelated) so this needs to be
tested on the x86 prototype.

Rationale: virtio-iommu requires an ACPI table to be passed between host
and guest that describes its relation to PCI and platform endpoints in the
virtual system. A table that maps PCI RIDs and integrated devices to IOMMU
device IDs, telling the IOMMU driver which endpoints it manages.

As far as I'm aware, there are three existing tables that solve this
problem: Intel DMAR, AMD IVRS and ARM IORT. The first two are specific to
Intel VT-d and AMD IOMMU respectively, while the third describes multiple
remapping devices -- currently only ARM IOMMUs and MSI controllers, but it
is easy to extend.

IORT table and drivers are easiest to extend and they do the job, so
rather than introducing a fourth solution to solve a generic problem,
reuse what exists.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/arm64/iort.c | 95 +++++++++++++++++++++++++++++++++++++++++++----
 drivers/iommu/Kconfig     |  1 +
 include/acpi/actbl2.h     | 18 ++++++++-
 3 files changed, 106 insertions(+), 8 deletions(-)

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index fde279b0a6d8..c7132e4a0560 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -29,7 +29,8 @@
 #define IORT_TYPE_MASK(type)	(1 << (type))
 #define IORT_MSI_TYPE		(1 << ACPI_IORT_NODE_ITS_GROUP)
 #define IORT_IOMMU_TYPE		((1 << ACPI_IORT_NODE_SMMU) |	\
-				(1 << ACPI_IORT_NODE_SMMU_V3))
+				(1 << ACPI_IORT_NODE_SMMU_V3) | \
+				(1 << ACPI_IORT_NODE_PARAVIRT))
 
 /* Until ACPICA headers cover IORT rev. C */
 #ifndef ACPI_IORT_SMMU_V3_CAVIUM_CN99XX
@@ -616,6 +617,8 @@ static inline bool iort_iommu_driver_enabled(u8 type)
 		return IS_BUILTIN(CONFIG_ARM_SMMU_V3);
 	case ACPI_IORT_NODE_SMMU:
 		return IS_BUILTIN(CONFIG_ARM_SMMU);
+	case ACPI_IORT_NODE_PARAVIRT:
+		return IS_BUILTIN(CONFIG_VIRTIO_IOMMU);
 	default:
 		pr_warn("IORT node type %u does not describe an SMMU\n", type);
 		return false;
@@ -1062,6 +1065,48 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
 	return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
 }
 
+static int __init paravirt_count_resources(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	/* Mem + IRQs */
+	return 1 + pviommu->interrupt_count;
+}
+
+static void __init paravirt_init_resources(struct resource *res,
+					   struct acpi_iort_node *node)
+{
+	int i;
+	int num_res = 0;
+	int hw_irq, trigger;
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	res[num_res].start = pviommu->base_address;
+	res[num_res].end = pviommu->base_address + pviommu->span - 1;
+	res[num_res].flags = IORESOURCE_MEM;
+	num_res++;
+
+	for (i = 0; i < pviommu->interrupt_count; i++) {
+		hw_irq = IORT_IRQ_MASK(pviommu->interrupts[i]);
+		trigger = IORT_IRQ_TRIGGER_MASK(pviommu->interrupts[i]);
+
+		acpi_iort_register_irq(hw_irq, "pviommu", trigger, &res[num_res++]);
+	}
+}
+
+static bool __init paravirt_is_coherent(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	return pviommu->flags & ACPI_IORT_NODE_PV_CACHE_COHERENT;
+}
+
 struct iort_iommu_config {
 	const char *name;
 	int (*iommu_init)(struct acpi_iort_node *node);
@@ -1088,6 +1133,13 @@ static const struct iort_iommu_config iort_arm_smmu_cfg __initconst = {
 	.iommu_init_resources = arm_smmu_init_resources
 };
 
+static const struct iort_iommu_config iort_paravirt_cfg __initconst = {
+	.name = "pviommu",
+	.iommu_is_coherent = paravirt_is_coherent,
+	.iommu_count_resources = paravirt_count_resources,
+	.iommu_init_resources = paravirt_init_resources
+};
+
 static __init
 const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 {
@@ -1096,18 +1148,22 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 		return &iort_arm_smmu_v3_cfg;
 	case ACPI_IORT_NODE_SMMU:
 		return &iort_arm_smmu_cfg;
+	case ACPI_IORT_NODE_PARAVIRT:
+		return &iort_paravirt_cfg;
 	default:
 		return NULL;
 	}
 }
 
 /**
- * iort_add_smmu_platform_device() - Allocate a platform device for SMMU
- * @node: Pointer to SMMU ACPI IORT node
+ * iort_add_iommu_platform_device() - Allocate a platform device for an IOMMU
+ * @node: Pointer to IOMMU ACPI IORT node
+ * @name: Base name of the device
  *
  * Returns: 0 on success, <0 failure
  */
-static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+static int __init iort_add_iommu_platform_device(struct acpi_iort_node *node,
+						 const char *name)
 {
 	struct fwnode_handle *fwnode;
 	struct platform_device *pdev;
@@ -1119,7 +1175,7 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	if (!ops)
 		return -ENODEV;
 
-	pdev = platform_device_alloc(ops->name, PLATFORM_DEVID_AUTO);
+	pdev = platform_device_alloc(name, PLATFORM_DEVID_AUTO);
 	if (!pdev)
 		return -ENOMEM;
 
@@ -1189,6 +1245,28 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	return ret;
 }
 
+static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+{
+	const struct iort_iommu_config *ops = iort_get_iommu_cfg(node);
+
+	if (!ops)
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, ops->name);
+}
+
+static int __init iort_add_paravirt_platform_device(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	if (WARN_ON(pviommu->model != ACPI_IORT_NODE_PV_VIRTIO_IOMMU))
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, "virtio-mmio");
+}
+
 static bool __init iort_enable_acs(struct acpi_iort_node *iort_node)
 {
 	if (iort_node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) {
@@ -1250,7 +1328,8 @@ static void __init iort_init_platform_devices(void)
 			acs_enabled = iort_enable_acs(iort_node);
 
 		if ((iort_node->type == ACPI_IORT_NODE_SMMU) ||
-			(iort_node->type == ACPI_IORT_NODE_SMMU_V3)) {
+			(iort_node->type == ACPI_IORT_NODE_SMMU_V3) ||
+			(iort_node->type == ACPI_IORT_NODE_PARAVIRT)) {
 
 			fwnode = acpi_alloc_fwnode_static();
 			if (!fwnode)
@@ -1258,7 +1337,9 @@ static void __init iort_init_platform_devices(void)
 
 			iort_set_fwnode(iort_node, fwnode);
 
-			ret = iort_add_smmu_platform_device(iort_node);
+			ret = iort_node->type == ACPI_IORT_NODE_PARAVIRT ?
+				iort_add_paravirt_platform_device(iort_node) :
+				iort_add_smmu_platform_device(iort_node);
 			if (ret) {
 				iort_delete_fwnode(iort_node);
 				acpi_free_fwnode_static(fwnode);
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 7271e59e8b23..3e28a5d682c3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -409,6 +409,7 @@ config VIRTIO_IOMMU
 	select IOMMU_API
 	select INTERVAL_TREE
 	select ARM_DMA_USE_IOMMU if ARM
+	select ACPI_IORT
 	help
 	  Para-virtualised IOMMU driver with virtio.
 
diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index 686b6f8c09dc..23ae5bbc36d1 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -696,7 +696,8 @@ enum acpi_iort_node_type {
 	ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
 	ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
 	ACPI_IORT_NODE_SMMU = 0x03,
-	ACPI_IORT_NODE_SMMU_V3 = 0x04
+	ACPI_IORT_NODE_SMMU_V3 = 0x04,
+	ACPI_IORT_NODE_PARAVIRT = 0x05,
 };
 
 struct acpi_iort_id_mapping {
@@ -824,6 +825,21 @@ struct acpi_iort_smmu_v3 {
 #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE     (1<<1)
 #define ACPI_IORT_SMMU_V3_PXM_VALID         (1<<3)
 
+struct acpi_iort_pviommu {
+	u64 base_address;
+	u64 span;
+	u32 model;
+	u32 flags;
+	u32 interrupt_count;
+	u64 interrupts[];
+};
+
+enum acpi_iort_paravirt_node_model {
+	ACPI_IORT_NODE_PV_VIRTIO_IOMMU = 0x00,
+};
+
+#define ACPI_IORT_NODE_PV_CACHE_COHERENT    (1<<0)
+
 /*******************************************************************************
  *
  * IVRS - I/O Virtualization Reporting Structure
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

To describe the virtual topology in relation to a virtio-iommu device,
ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
for it.

This is a RFC because the IORT specification doesn't describe the
paravirtualized node at the moment, it is only provided as an example in
the virtio-iommu spec. What we need to do first is confirm that x86
kernels are able to use the IORT driver with the virtio-iommu. There isn't
anything specific to arm64 in the driver but there might be other blockers
we're not aware of (I know for example that x86 also requires custom DMA
ops rather than iommu-dma ones, but it's unrelated) so this needs to be
tested on the x86 prototype.

Rationale: virtio-iommu requires an ACPI table to be passed between host
and guest that describes its relation to PCI and platform endpoints in the
virtual system. A table that maps PCI RIDs and integrated devices to IOMMU
device IDs, telling the IOMMU driver which endpoints it manages.

As far as I'm aware, there are three existing tables that solve this
problem: Intel DMAR, AMD IVRS and ARM IORT. The first two are specific to
Intel VT-d and AMD IOMMU respectively, while the third describes multiple
remapping devices -- currently only ARM IOMMUs and MSI controllers, but it
is easy to extend.

IORT table and drivers are easiest to extend and they do the job, so
rather than introducing a fourth solution to solve a generic problem,
reuse what exists.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/arm64/iort.c | 95 +++++++++++++++++++++++++++++++++++++++++++----
 drivers/iommu/Kconfig     |  1 +
 include/acpi/actbl2.h     | 18 ++++++++-
 3 files changed, 106 insertions(+), 8 deletions(-)

diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index fde279b0a6d8..c7132e4a0560 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -29,7 +29,8 @@
 #define IORT_TYPE_MASK(type)	(1 << (type))
 #define IORT_MSI_TYPE		(1 << ACPI_IORT_NODE_ITS_GROUP)
 #define IORT_IOMMU_TYPE		((1 << ACPI_IORT_NODE_SMMU) |	\
-				(1 << ACPI_IORT_NODE_SMMU_V3))
+				(1 << ACPI_IORT_NODE_SMMU_V3) | \
+				(1 << ACPI_IORT_NODE_PARAVIRT))
 
 /* Until ACPICA headers cover IORT rev. C */
 #ifndef ACPI_IORT_SMMU_V3_CAVIUM_CN99XX
@@ -616,6 +617,8 @@ static inline bool iort_iommu_driver_enabled(u8 type)
 		return IS_BUILTIN(CONFIG_ARM_SMMU_V3);
 	case ACPI_IORT_NODE_SMMU:
 		return IS_BUILTIN(CONFIG_ARM_SMMU);
+	case ACPI_IORT_NODE_PARAVIRT:
+		return IS_BUILTIN(CONFIG_VIRTIO_IOMMU);
 	default:
 		pr_warn("IORT node type %u does not describe an SMMU\n", type);
 		return false;
@@ -1062,6 +1065,48 @@ static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node)
 	return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK;
 }
 
+static int __init paravirt_count_resources(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	/* Mem + IRQs */
+	return 1 + pviommu->interrupt_count;
+}
+
+static void __init paravirt_init_resources(struct resource *res,
+					   struct acpi_iort_node *node)
+{
+	int i;
+	int num_res = 0;
+	int hw_irq, trigger;
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	res[num_res].start = pviommu->base_address;
+	res[num_res].end = pviommu->base_address + pviommu->span - 1;
+	res[num_res].flags = IORESOURCE_MEM;
+	num_res++;
+
+	for (i = 0; i < pviommu->interrupt_count; i++) {
+		hw_irq = IORT_IRQ_MASK(pviommu->interrupts[i]);
+		trigger = IORT_IRQ_TRIGGER_MASK(pviommu->interrupts[i]);
+
+		acpi_iort_register_irq(hw_irq, "pviommu", trigger, &res[num_res++]);
+	}
+}
+
+static bool __init paravirt_is_coherent(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	return pviommu->flags & ACPI_IORT_NODE_PV_CACHE_COHERENT;
+}
+
 struct iort_iommu_config {
 	const char *name;
 	int (*iommu_init)(struct acpi_iort_node *node);
@@ -1088,6 +1133,13 @@ static const struct iort_iommu_config iort_arm_smmu_cfg __initconst = {
 	.iommu_init_resources = arm_smmu_init_resources
 };
 
+static const struct iort_iommu_config iort_paravirt_cfg __initconst = {
+	.name = "pviommu",
+	.iommu_is_coherent = paravirt_is_coherent,
+	.iommu_count_resources = paravirt_count_resources,
+	.iommu_init_resources = paravirt_init_resources
+};
+
 static __init
 const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 {
@@ -1096,18 +1148,22 @@ const struct iort_iommu_config *iort_get_iommu_cfg(struct acpi_iort_node *node)
 		return &iort_arm_smmu_v3_cfg;
 	case ACPI_IORT_NODE_SMMU:
 		return &iort_arm_smmu_cfg;
+	case ACPI_IORT_NODE_PARAVIRT:
+		return &iort_paravirt_cfg;
 	default:
 		return NULL;
 	}
 }
 
 /**
- * iort_add_smmu_platform_device() - Allocate a platform device for SMMU
- * @node: Pointer to SMMU ACPI IORT node
+ * iort_add_iommu_platform_device() - Allocate a platform device for an IOMMU
+ * @node: Pointer to IOMMU ACPI IORT node
+ * @name: Base name of the device
  *
  * Returns: 0 on success, <0 failure
  */
-static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+static int __init iort_add_iommu_platform_device(struct acpi_iort_node *node,
+						 const char *name)
 {
 	struct fwnode_handle *fwnode;
 	struct platform_device *pdev;
@@ -1119,7 +1175,7 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	if (!ops)
 		return -ENODEV;
 
-	pdev = platform_device_alloc(ops->name, PLATFORM_DEVID_AUTO);
+	pdev = platform_device_alloc(name, PLATFORM_DEVID_AUTO);
 	if (!pdev)
 		return -ENOMEM;
 
@@ -1189,6 +1245,28 @@ static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
 	return ret;
 }
 
+static int __init iort_add_smmu_platform_device(struct acpi_iort_node *node)
+{
+	const struct iort_iommu_config *ops = iort_get_iommu_cfg(node);
+
+	if (!ops)
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, ops->name);
+}
+
+static int __init iort_add_paravirt_platform_device(struct acpi_iort_node *node)
+{
+	struct acpi_iort_pviommu *pviommu;
+
+	pviommu = (struct acpi_iort_pviommu *)node->node_data;
+
+	if (WARN_ON(pviommu->model != ACPI_IORT_NODE_PV_VIRTIO_IOMMU))
+		return -ENODEV;
+
+	return iort_add_iommu_platform_device(node, "virtio-mmio");
+}
+
 static bool __init iort_enable_acs(struct acpi_iort_node *iort_node)
 {
 	if (iort_node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) {
@@ -1250,7 +1328,8 @@ static void __init iort_init_platform_devices(void)
 			acs_enabled = iort_enable_acs(iort_node);
 
 		if ((iort_node->type == ACPI_IORT_NODE_SMMU) ||
-			(iort_node->type == ACPI_IORT_NODE_SMMU_V3)) {
+			(iort_node->type == ACPI_IORT_NODE_SMMU_V3) ||
+			(iort_node->type == ACPI_IORT_NODE_PARAVIRT)) {
 
 			fwnode = acpi_alloc_fwnode_static();
 			if (!fwnode)
@@ -1258,7 +1337,9 @@ static void __init iort_init_platform_devices(void)
 
 			iort_set_fwnode(iort_node, fwnode);
 
-			ret = iort_add_smmu_platform_device(iort_node);
+			ret = iort_node->type == ACPI_IORT_NODE_PARAVIRT ?
+				iort_add_paravirt_platform_device(iort_node) :
+				iort_add_smmu_platform_device(iort_node);
 			if (ret) {
 				iort_delete_fwnode(iort_node);
 				acpi_free_fwnode_static(fwnode);
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 7271e59e8b23..3e28a5d682c3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -409,6 +409,7 @@ config VIRTIO_IOMMU
 	select IOMMU_API
 	select INTERVAL_TREE
 	select ARM_DMA_USE_IOMMU if ARM
+	select ACPI_IORT
 	help
 	  Para-virtualised IOMMU driver with virtio.
 
diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index 686b6f8c09dc..23ae5bbc36d1 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -696,7 +696,8 @@ enum acpi_iort_node_type {
 	ACPI_IORT_NODE_NAMED_COMPONENT = 0x01,
 	ACPI_IORT_NODE_PCI_ROOT_COMPLEX = 0x02,
 	ACPI_IORT_NODE_SMMU = 0x03,
-	ACPI_IORT_NODE_SMMU_V3 = 0x04
+	ACPI_IORT_NODE_SMMU_V3 = 0x04,
+	ACPI_IORT_NODE_PARAVIRT = 0x05,
 };
 
 struct acpi_iort_id_mapping {
@@ -824,6 +825,21 @@ struct acpi_iort_smmu_v3 {
 #define ACPI_IORT_SMMU_V3_HTTU_OVERRIDE     (1<<1)
 #define ACPI_IORT_SMMU_V3_PXM_VALID         (1<<3)
 
+struct acpi_iort_pviommu {
+	u64 base_address;
+	u64 span;
+	u32 model;
+	u32 flags;
+	u32 interrupt_count;
+	u64 interrupts[];
+};
+
+enum acpi_iort_paravirt_node_model {
+	ACPI_IORT_NODE_PV_VIRTIO_IOMMU = 0x00,
+};
+
+#define ACPI_IORT_NODE_PV_CACHE_COHERENT    (1<<0)
+
 /*******************************************************************************
  *
  * IVRS - I/O Virtualization Reporting Structure
-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 5/5] ACPI/IORT: Move IORT to the ACPI folder
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, jasowang, rjw, robert.moore,
	alex.williamson, lv.zheng, sudeep.holla, lenb, robin.murphy,
	joro, hanjun.guo

IORT can be used (by QEMU) to describe a virtual topology containing an
architecture-agnostic paravirtualized device. The rationale behind this
blasphemy is explained in patch 4/5.

In order to build IORT for x86 systems, the driver has to be moved outside
of arm64/. Since there is nothing specific to arm64 in the driver, it
simply requires moving Makefile and Kconfig entries.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/Kconfig            | 3 +++
 drivers/acpi/Makefile           | 1 +
 drivers/acpi/arm64/Kconfig      | 3 ---
 drivers/acpi/arm64/Makefile     | 1 -
 drivers/acpi/{arm64 => }/iort.c | 0
 5 files changed, 4 insertions(+), 4 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (100%)

diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..ce40275646c8 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -536,4 +536,7 @@ if ARM64
 source "drivers/acpi/arm64/Kconfig"
 endif
 
+config ACPI_IORT
+	bool
+
 endif	# ACPI
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..689c470c013b 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -112,3 +112,4 @@ video-objs			+= acpi_video.o video_detect.o
 obj-y				+= dptf/
 
 obj-$(CONFIG_ARM64)		+= arm64/
+obj-$(CONFIG_ACPI_IORT) 	+= iort.o
diff --git a/drivers/acpi/arm64/Kconfig b/drivers/acpi/arm64/Kconfig
index 5a6f80fce0d6..403f917ab274 100644
--- a/drivers/acpi/arm64/Kconfig
+++ b/drivers/acpi/arm64/Kconfig
@@ -2,8 +2,5 @@
 # ACPI Configuration for ARM64
 #
 
-config ACPI_IORT
-	bool
-
 config ACPI_GTDT
 	bool
diff --git a/drivers/acpi/arm64/Makefile b/drivers/acpi/arm64/Makefile
index 1017def2ea12..47925dc6cfc8 100644
--- a/drivers/acpi/arm64/Makefile
+++ b/drivers/acpi/arm64/Makefile
@@ -1,2 +1 @@
-obj-$(CONFIG_ACPI_IORT) 	+= iort.o
 obj-$(CONFIG_ACPI_GTDT) 	+= gtdt.o
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/iort.c
similarity index 100%
rename from drivers/acpi/arm64/iort.c
rename to drivers/acpi/iort.c
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH v2 5/5] ACPI/IORT: Move IORT to the ACPI folder
  2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
                   ` (9 preceding siblings ...)
  (?)
@ 2017-11-17 18:52 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, eric.auger,
	lv.zheng, sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

IORT can be used (by QEMU) to describe a virtual topology containing an
architecture-agnostic paravirtualized device. The rationale behind this
blasphemy is explained in patch 4/5.

In order to build IORT for x86 systems, the driver has to be moved outside
of arm64/. Since there is nothing specific to arm64 in the driver, it
simply requires moving Makefile and Kconfig entries.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/Kconfig            | 3 +++
 drivers/acpi/Makefile           | 1 +
 drivers/acpi/arm64/Kconfig      | 3 ---
 drivers/acpi/arm64/Makefile     | 1 -
 drivers/acpi/{arm64 => }/iort.c | 0
 5 files changed, 4 insertions(+), 4 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (100%)

diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..ce40275646c8 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -536,4 +536,7 @@ if ARM64
 source "drivers/acpi/arm64/Kconfig"
 endif
 
+config ACPI_IORT
+	bool
+
 endif	# ACPI
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..689c470c013b 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -112,3 +112,4 @@ video-objs			+= acpi_video.o video_detect.o
 obj-y				+= dptf/
 
 obj-$(CONFIG_ARM64)		+= arm64/
+obj-$(CONFIG_ACPI_IORT) 	+= iort.o
diff --git a/drivers/acpi/arm64/Kconfig b/drivers/acpi/arm64/Kconfig
index 5a6f80fce0d6..403f917ab274 100644
--- a/drivers/acpi/arm64/Kconfig
+++ b/drivers/acpi/arm64/Kconfig
@@ -2,8 +2,5 @@
 # ACPI Configuration for ARM64
 #
 
-config ACPI_IORT
-	bool
-
 config ACPI_GTDT
 	bool
diff --git a/drivers/acpi/arm64/Makefile b/drivers/acpi/arm64/Makefile
index 1017def2ea12..47925dc6cfc8 100644
--- a/drivers/acpi/arm64/Makefile
+++ b/drivers/acpi/arm64/Makefile
@@ -1,2 +1 @@
-obj-$(CONFIG_ACPI_IORT) 	+= iort.o
 obj-$(CONFIG_ACPI_GTDT) 	+= gtdt.o
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/iort.c
similarity index 100%
rename from drivers/acpi/arm64/iort.c
rename to drivers/acpi/iort.c
-- 
2.14.3

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [virtio-dev] [RFC PATCH v2 5/5] ACPI/IORT: Move IORT to the ACPI folder
@ 2017-11-17 18:52   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-17 18:52 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

IORT can be used (by QEMU) to describe a virtual topology containing an
architecture-agnostic paravirtualized device. The rationale behind this
blasphemy is explained in patch 4/5.

In order to build IORT for x86 systems, the driver has to be moved outside
of arm64/. Since there is nothing specific to arm64 in the driver, it
simply requires moving Makefile and Kconfig entries.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/acpi/Kconfig            | 3 +++
 drivers/acpi/Makefile           | 1 +
 drivers/acpi/arm64/Kconfig      | 3 ---
 drivers/acpi/arm64/Makefile     | 1 -
 drivers/acpi/{arm64 => }/iort.c | 0
 5 files changed, 4 insertions(+), 4 deletions(-)
 rename drivers/acpi/{arm64 => }/iort.c (100%)

diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 5b1938f4b626..ce40275646c8 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -536,4 +536,7 @@ if ARM64
 source "drivers/acpi/arm64/Kconfig"
 endif
 
+config ACPI_IORT
+	bool
+
 endif	# ACPI
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index cd1abc9bc325..689c470c013b 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -112,3 +112,4 @@ video-objs			+= acpi_video.o video_detect.o
 obj-y				+= dptf/
 
 obj-$(CONFIG_ARM64)		+= arm64/
+obj-$(CONFIG_ACPI_IORT) 	+= iort.o
diff --git a/drivers/acpi/arm64/Kconfig b/drivers/acpi/arm64/Kconfig
index 5a6f80fce0d6..403f917ab274 100644
--- a/drivers/acpi/arm64/Kconfig
+++ b/drivers/acpi/arm64/Kconfig
@@ -2,8 +2,5 @@
 # ACPI Configuration for ARM64
 #
 
-config ACPI_IORT
-	bool
-
 config ACPI_GTDT
 	bool
diff --git a/drivers/acpi/arm64/Makefile b/drivers/acpi/arm64/Makefile
index 1017def2ea12..47925dc6cfc8 100644
--- a/drivers/acpi/arm64/Makefile
+++ b/drivers/acpi/arm64/Makefile
@@ -1,2 +1 @@
-obj-$(CONFIG_ACPI_IORT) 	+= iort.o
 obj-$(CONFIG_ACPI_GTDT) 	+= gtdt.o
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/iort.c
similarity index 100%
rename from drivers/acpi/arm64/iort.c
rename to drivers/acpi/iort.c
-- 
2.14.3


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-29 15:17     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
[...]
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;

Refcount_t is the wrong tool for this, I went back to an unsigned int (and
rebased onto v4.15-rc1), on branch virtio-iommu/base.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2017-11-29 15:17   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, eric.auger,
	lv.zheng, Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
[...]
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;

Refcount_t is the wrong tool for this, I went back to an unsigned int (and
rebased onto v4.15-rc1), on branch virtio-iommu/base.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
@ 2017-11-29 15:17     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
[...]
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;

Refcount_t is the wrong tool for this, I went back to an unsigned int (and
rebased onto v4.15-rc1), on branch virtio-iommu/base.

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2017-11-29 15:17     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
> To describe the virtual topology in relation to a virtio-iommu device,
> ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
> for it.
> 
> This is a RFC because the IORT specification doesn't describe the
> paravirtualized node at the moment, it is only provided as an example in
> the virtio-iommu spec. What we need to do first is confirm that x86
> kernels are able to use the IORT driver with the virtio-iommu. There isn't
> anything specific to arm64 in the driver but there might be other blockers
> we're not aware of (I know for example that x86 also requires custom DMA
> ops rather than iommu-dma ones, but it's unrelated) so this needs to be
> tested on the x86 prototype.

I tested IORT with an x86 guest, putting a virtio-iommu on the PCI bus and
it worked fine :)

x86 still requires additional code [1] for using the IOMMU DMA ops,
and I'm not comfortable enough with x86 to write the patch, but for
instantiating virtio-iommus and for enumerating endpoints/buses connected
to them, IORT should suffice.

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=patch;h=e910e224b58712151dda06df595a53ff07edef63;hp=e7f9475480c24c5f973711984b30cf6746ff3ec8

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2017-11-29 15:17   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, eric.auger,
	lv.zheng, Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
> To describe the virtual topology in relation to a virtio-iommu device,
> ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
> for it.
> 
> This is a RFC because the IORT specification doesn't describe the
> paravirtualized node at the moment, it is only provided as an example in
> the virtio-iommu spec. What we need to do first is confirm that x86
> kernels are able to use the IORT driver with the virtio-iommu. There isn't
> anything specific to arm64 in the driver but there might be other blockers
> we're not aware of (I know for example that x86 also requires custom DMA
> ops rather than iommu-dma ones, but it's unrelated) so this needs to be
> tested on the x86 prototype.

I tested IORT with an x86 guest, putting a virtio-iommu on the PCI bus and
it worked fine :)

x86 still requires additional code [1] for using the IOMMU DMA ops,
and I'm not comfortable enough with x86 to write the patch, but for
instantiating virtio-iommus and for enumerating endpoints/buses connected
to them, IORT should suffice.

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=patch;h=e910e224b58712151dda06df595a53ff07edef63;hp=e7f9475480c24c5f973711984b30cf6746ff3ec8

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [virtio-dev] [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU
@ 2017-11-29 15:17     ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2017-11-29 15:17 UTC (permalink / raw)
  To: iommu, devel, linux-acpi, kvm, kvmarm, virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, eric.auger,
	bharat.bhushan, Jayachandran.Nair, ashok.raj, peterx

On 17/11/17 18:52, Jean-Philippe Brucker wrote:
> To describe the virtual topology in relation to a virtio-iommu device,
> ACPI-based systems use a "paravirtualized IOMMU" IORT node. Add support
> for it.
> 
> This is a RFC because the IORT specification doesn't describe the
> paravirtualized node at the moment, it is only provided as an example in
> the virtio-iommu spec. What we need to do first is confirm that x86
> kernels are able to use the IORT driver with the virtio-iommu. There isn't
> anything specific to arm64 in the driver but there might be other blockers
> we're not aware of (I know for example that x86 also requires custom DMA
> ops rather than iommu-dma ones, but it's unrelated) so this needs to be
> tested on the x86 prototype.

I tested IORT with an x86 guest, putting a virtio-iommu on the PCI bus and
it worked fine :)

x86 still requires additional code [1] for using the IOMMU DMA ops,
and I'm not comfortable enough with x86 to write the patch, but for
instantiating virtio-iommus and for enumerating endpoints/buses connected
to them, IORT should suffice.

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=patch;h=e910e224b58712151dda06df595a53ff07edef63;hp=e7f9475480c24c5f973711984b30cf6746ff3ec8

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-01-15 15:12     ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-15 15:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

please find some comments below.

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
handles
> requests.
> 
> The bulk of the code is to create requests and send them through virtio.
> Implementing the IOMMU API is fairly straightforward since the
> virtio-iommu MAP/UNMAP interface is almost identical.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 140 ++++++
>  5 files changed, 1111 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 17b212f56e6a..7271e59e8b23 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -403,4 +403,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index dca71fe1c885..432242f3a328 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..feb8c8925c3a
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,958 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2017 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
for all but virtio_iommu_req_probe, which has a variable bottom size
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);
> +	}
> +
> +	return nr_received;
> +}
> +
> +/* Must be called with request_lock held */
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
> +	 * notion of time and this just prevents locking up a CPU if the device
> +	 * dies.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
I don't really understand how you choose your timeout value: 1s per sent
request.
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i) {
> +			/*
> +			 * FIXME: what's a good way to yield to host? A second
> +			 * virtqueue_kick won't have any effect since we haven't
> +			 * added any descriptor.
> +			 */
> +			udelay(10);
could you explain why udelay gets used here?
> +		}
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
> + *                         them to return
> + *
> + * @req: array of requests
> + * @nr: array length
> + * @nr_sent: on return, contains the number of requests actually sent
> + *
> + * Return 0 on success, or an error if we failed to send some of the requests.
> + */
> +static int viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				 struct viommu_request *req, int nr,
> +				 int *nr_sent)
> +{
> +	int ret;
> +	int sent = 0;
> +	unsigned long flags;
> +
> +	*nr_sent = 0;
> +	do {
> +		spin_lock_irqsave(&viommu->request_lock, flags);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
> +{
> +	int ret;
> +	int nr_sent;
> +	void *bottom;
> +	struct viommu_request req = {0};
         ^
drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
‘req.top’) [-Wmissing-braces]

> +	size_t top_size, bottom_size;
> +	struct virtio_iommu_req_tail *tail;
> +	struct virtio_iommu_req_head *head = top;
> +
> +	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
> +	if (ret)
> +		return ret;
> +
> +	bottom = top + top_size;
> +	tail = bottom + bottom_size - sizeof(*tail);
> +
> +	sg_init_one(&req.top, top, top_size);
> +	sg_init_one(&req.bottom, bottom, bottom_size);
> +
> +	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
> +	if (ret || !req.written || nr_sent != 1) {
> +		dev_err(viommu->dev, "failed to send request\n");
> +		return -EIO;
> +	}
> +
> +	return viommu_status_to_errno(tail->status);
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size)
> +{
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
> + *	This allows the caller to reuse the buffer for the unmap request. Caller
> + *	must always free the returned mapping, whether the function succeeds or
> + *	not.
if unmapped > 0?
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				 unsigned long iova, size_t size,
> +				 struct viommu_mapping **out_mapping)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +
> +	if (next) {
> +		mapping = container_of(next, struct viommu_mapping, iova);
> +		/* Trying to split a mapping? */
> +		if (WARN_ON(mapping->iova.start < iova))
> +			next = NULL;
> +	}
> +
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +
> +		if (out_mapping && !(*out_mapping))
> +			*out_mapping = mapping;
> +		else
> +			kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all devices,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + *
> + * Caller should hold the mapping lock if necessary.
the only caller does not hold the lock. At this point we are attaching
our fisrt ep to the domain. I think it would be worth a comment in the
caller.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int i = 1, ret, nr_sent;
> +	struct viommu_request *reqs;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	size_t top_size, bottom_size;
> +
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	if (!node)
> +		return 0;
> +
> +	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
> +		i++;
> +
> +	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
> +	if (!reqs)
> +		return -ENOMEM;
> +
> +	bottom_size = sizeof(struct virtio_iommu_req_tail);
> +	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
> +
> +	i = 0;
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
> +		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
> +			    bottom_size);
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +		i++;
> +	}
> +
> +	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
> +	kfree(reqs);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> +	return false; /* :( */
> +}
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +	refcount_set(&vdomain->endpoints, 0);
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0, NULL);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach *req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * When attaching the device to a new domain, it will be detached from
> +	 * the old one and, if as as a result the old domain isn't attached to
as as
> +	 * any device, all mappings are removed from the old domain and it is
> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
> +	 * the default domain during initial attach.)
I don't see where the old domain is freed. I see you descrement the
endpoints ref count. Also if you replay the mapping, I guess the
mappings were not destroyed?
> +	 *
> +	 * Take note of the device disappearing, so we can ignore unmap request
> +	 * on stale domains (that is, between this detach and the upcoming
> +	 * free.)
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		refcount_dec(&vdev->vdomain->endpoints);
> +
> +	/* DMA to the stack is forbidden, store request on the heap */
> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	*req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, req);
> +		if (ret)
> +			break;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!refcount_read(&vdomain->endpoints)) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings if any.
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	refcount_inc(&vdomain->endpoints);
This does not work for me as the ref counter is initialized to 0 and
ref_count does not work if the counter is 0. It emits a WARN_ON and
stays at 0. I worked around the issue by explicitly setting
recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
> +
> +	mapping->req.map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.phys_addr	= cpu_to_le64(paddr),
> +		.size		= cpu_to_le64(size),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!refcount_read(&vdomain->endpoints))
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size, NULL);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct viommu_mapping *mapping = NULL;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
> +	if (unmapped < size) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!refcount_read(&vdomain->endpoints))
> +		goto out_free;
> +
> +	if (WARN_ON(!mapping))
> +		return 0;
> +
> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.size		= cpu_to_le64(unmapped),
> +	};
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +
> +out_free:
> +	if (mapping)
> +		kfree(mapping);
> +
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (!IS_ERR(group))
> +		iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	kfree(dev->iommu_fwspec->iommu_priv);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.capable		= viommu_capable,
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.of_xlate		= viommu_of_xlate,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +};
> +
> +static int viommu_init_vq(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vq = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +
> +	ret = viommu_init_vq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_viommu;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_unregister(&viommu->iommu);
> +
> +err_free_viommu:
> +	kfree(viommu);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_unregister(&viommu->iommu);
> +	kfree(viommu);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..934ed3d3cd3f 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..2b4cd2042897
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,140 @@
> +/*
> + * Virtio-iommu definition v0.5
> + *
> + * Copyright (C) 2017 ARM Ltd.
> + *
> + * This header is BSD licensed so anyone can use the definitions
> + * to implement compatible drivers/servers:
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of ARM Ltd. nor the names of its contributors
> + *    may be used to endorse or promote products derived from this software
> + *    without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
> + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
> + * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> + * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
> + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
Get a warning
./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
without #include <linux/types.h>

> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8 					domain_bits;
> +} __packed;
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					phys_addr;
> +	__le64					size;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +__packed
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					size;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};
> +
> +#endif
> 
Thanks

Eric


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
                     ` (3 preceding siblings ...)
  (?)
@ 2018-01-15 15:12   ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-15 15:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, lv.zheng,
	sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

Hi Jean-Philippe,

please find some comments below.

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
handles
> requests.
> 
> The bulk of the code is to create requests and send them through virtio.
> Implementing the IOMMU API is fairly straightforward since the
> virtio-iommu MAP/UNMAP interface is almost identical.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 140 ++++++
>  5 files changed, 1111 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 17b212f56e6a..7271e59e8b23 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -403,4 +403,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index dca71fe1c885..432242f3a328 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..feb8c8925c3a
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,958 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2017 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
for all but virtio_iommu_req_probe, which has a variable bottom size
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);
> +	}
> +
> +	return nr_received;
> +}
> +
> +/* Must be called with request_lock held */
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
> +	 * notion of time and this just prevents locking up a CPU if the device
> +	 * dies.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
I don't really understand how you choose your timeout value: 1s per sent
request.
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i) {
> +			/*
> +			 * FIXME: what's a good way to yield to host? A second
> +			 * virtqueue_kick won't have any effect since we haven't
> +			 * added any descriptor.
> +			 */
> +			udelay(10);
could you explain why udelay gets used here?
> +		}
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
> + *                         them to return
> + *
> + * @req: array of requests
> + * @nr: array length
> + * @nr_sent: on return, contains the number of requests actually sent
> + *
> + * Return 0 on success, or an error if we failed to send some of the requests.
> + */
> +static int viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				 struct viommu_request *req, int nr,
> +				 int *nr_sent)
> +{
> +	int ret;
> +	int sent = 0;
> +	unsigned long flags;
> +
> +	*nr_sent = 0;
> +	do {
> +		spin_lock_irqsave(&viommu->request_lock, flags);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
> +{
> +	int ret;
> +	int nr_sent;
> +	void *bottom;
> +	struct viommu_request req = {0};
         ^
drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
‘req.top’) [-Wmissing-braces]

> +	size_t top_size, bottom_size;
> +	struct virtio_iommu_req_tail *tail;
> +	struct virtio_iommu_req_head *head = top;
> +
> +	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
> +	if (ret)
> +		return ret;
> +
> +	bottom = top + top_size;
> +	tail = bottom + bottom_size - sizeof(*tail);
> +
> +	sg_init_one(&req.top, top, top_size);
> +	sg_init_one(&req.bottom, bottom, bottom_size);
> +
> +	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
> +	if (ret || !req.written || nr_sent != 1) {
> +		dev_err(viommu->dev, "failed to send request\n");
> +		return -EIO;
> +	}
> +
> +	return viommu_status_to_errno(tail->status);
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size)
> +{
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
> + *	This allows the caller to reuse the buffer for the unmap request. Caller
> + *	must always free the returned mapping, whether the function succeeds or
> + *	not.
if unmapped > 0?
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				 unsigned long iova, size_t size,
> +				 struct viommu_mapping **out_mapping)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +
> +	if (next) {
> +		mapping = container_of(next, struct viommu_mapping, iova);
> +		/* Trying to split a mapping? */
> +		if (WARN_ON(mapping->iova.start < iova))
> +			next = NULL;
> +	}
> +
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +
> +		if (out_mapping && !(*out_mapping))
> +			*out_mapping = mapping;
> +		else
> +			kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all devices,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + *
> + * Caller should hold the mapping lock if necessary.
the only caller does not hold the lock. At this point we are attaching
our fisrt ep to the domain. I think it would be worth a comment in the
caller.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int i = 1, ret, nr_sent;
> +	struct viommu_request *reqs;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	size_t top_size, bottom_size;
> +
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	if (!node)
> +		return 0;
> +
> +	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
> +		i++;
> +
> +	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
> +	if (!reqs)
> +		return -ENOMEM;
> +
> +	bottom_size = sizeof(struct virtio_iommu_req_tail);
> +	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
> +
> +	i = 0;
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
> +		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
> +			    bottom_size);
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +		i++;
> +	}
> +
> +	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
> +	kfree(reqs);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> +	return false; /* :( */
> +}
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +	refcount_set(&vdomain->endpoints, 0);
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0, NULL);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach *req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * When attaching the device to a new domain, it will be detached from
> +	 * the old one and, if as as a result the old domain isn't attached to
as as
> +	 * any device, all mappings are removed from the old domain and it is
> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
> +	 * the default domain during initial attach.)
I don't see where the old domain is freed. I see you descrement the
endpoints ref count. Also if you replay the mapping, I guess the
mappings were not destroyed?
> +	 *
> +	 * Take note of the device disappearing, so we can ignore unmap request
> +	 * on stale domains (that is, between this detach and the upcoming
> +	 * free.)
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		refcount_dec(&vdev->vdomain->endpoints);
> +
> +	/* DMA to the stack is forbidden, store request on the heap */
> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	*req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, req);
> +		if (ret)
> +			break;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!refcount_read(&vdomain->endpoints)) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings if any.
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	refcount_inc(&vdomain->endpoints);
This does not work for me as the ref counter is initialized to 0 and
ref_count does not work if the counter is 0. It emits a WARN_ON and
stays at 0. I worked around the issue by explicitly setting
recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
> +
> +	mapping->req.map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.phys_addr	= cpu_to_le64(paddr),
> +		.size		= cpu_to_le64(size),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!refcount_read(&vdomain->endpoints))
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size, NULL);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct viommu_mapping *mapping = NULL;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
> +	if (unmapped < size) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!refcount_read(&vdomain->endpoints))
> +		goto out_free;
> +
> +	if (WARN_ON(!mapping))
> +		return 0;
> +
> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.size		= cpu_to_le64(unmapped),
> +	};
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +
> +out_free:
> +	if (mapping)
> +		kfree(mapping);
> +
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (!IS_ERR(group))
> +		iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	kfree(dev->iommu_fwspec->iommu_priv);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.capable		= viommu_capable,
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.of_xlate		= viommu_of_xlate,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +};
> +
> +static int viommu_init_vq(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vq = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +
> +	ret = viommu_init_vq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_viommu;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_unregister(&viommu->iommu);
> +
> +err_free_viommu:
> +	kfree(viommu);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_unregister(&viommu->iommu);
> +	kfree(viommu);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..934ed3d3cd3f 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..2b4cd2042897
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,140 @@
> +/*
> + * Virtio-iommu definition v0.5
> + *
> + * Copyright (C) 2017 ARM Ltd.
> + *
> + * This header is BSD licensed so anyone can use the definitions
> + * to implement compatible drivers/servers:
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of ARM Ltd. nor the names of its contributors
> + *    may be used to endorse or promote products derived from this software
> + *    without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
> + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
> + * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> + * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
> + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
Get a warning
./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
without #include <linux/types.h>

> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8 					domain_bits;
> +} __packed;
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					phys_addr;
> +	__le64					size;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +__packed
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					size;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};
> +
> +#endif
> 
Thanks

Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
@ 2018-01-15 15:12     ` Auger Eric
  0 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-15 15:12 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

please find some comments below.

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
> requests such as map/unmap over virtio-mmio transport without emulating
> page tables. This implementation handle ATTACH, DETACH, MAP and UNMAP
handles
> requests.
> 
> The bulk of the code is to create requests and send them through virtio.
> Implementing the IOMMU API is fairly straightforward since the
> virtio-iommu MAP/UNMAP interface is almost identical.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 958 ++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 140 ++++++
>  5 files changed, 1111 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 17b212f56e6a..7271e59e8b23 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -403,4 +403,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO_MMIO
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index dca71fe1c885..432242f3a328 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..feb8c8925c3a
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,958 @@
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2017 ARM Limited
> + * Author: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vq;
> +	/* Serialize anything touching the request queue */
> +	spinlock_t			request_lock;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	union {
> +		struct virtio_iommu_req_map map;
> +		struct virtio_iommu_req_unmap unmap;
> +	} req;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	/* Number of endpoints attached to this domain */
> +	refcount_t			endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct scatterlist		top;
> +	struct scatterlist		bottom;
> +
> +	int				written;
> +	struct list_head		list;
> +};
> +
> +#define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
> +
> +/* Virtio transport */
> +
> +static int viommu_status_to_errno(u8 status)
> +{
> +	switch (status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +/*
> + * viommu_get_req_size - compute request size
> + *
> + * A virtio-iommu request is split into one device-read-only part (top) and one
> + * device-write-only part (bottom). Given a request, return the sizes of the two
> + * parts in @top and @bottom.
for all but virtio_iommu_req_probe, which has a variable bottom size
> + *
> + * Return 0 on success, or an error when the request seems invalid.
> + */
> +static int viommu_get_req_size(struct viommu_dev *viommu,
> +			       struct virtio_iommu_req_head *req, size_t *top,
> +			       size_t *bottom)
> +{
> +	size_t size;
> +	union virtio_iommu_req *r = (void *)req;
> +
> +	*bottom = sizeof(struct virtio_iommu_req_tail);
> +
> +	switch (req->type) {
> +	case VIRTIO_IOMMU_T_ATTACH:
> +		size = sizeof(r->attach);
> +		break;
> +	case VIRTIO_IOMMU_T_DETACH:
> +		size = sizeof(r->detach);
> +		break;
> +	case VIRTIO_IOMMU_T_MAP:
> +		size = sizeof(r->map);
> +		break;
> +	case VIRTIO_IOMMU_T_UNMAP:
> +		size = sizeof(r->unmap);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	*top = size - *bottom;
> +	return 0;
> +}
> +
> +static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
> +			       struct list_head *sent)
> +{
> +
> +	unsigned int len;
> +	int nr_received = 0;
> +	struct viommu_request *req, *pending;
> +
> +	pending = list_first_entry_or_null(sent, struct viommu_request, list);
> +	if (WARN_ON(!pending))
> +		return 0;
> +
> +	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +		if (req != pending) {
> +			dev_warn(viommu->dev, "discarding stale request\n");
> +			continue;
> +		}
> +
> +		pending->written = len;
> +
> +		if (++nr_received == nr_sent) {
> +			WARN_ON(!list_is_last(&pending->list, sent));
> +			break;
> +		} else if (WARN_ON(list_is_last(&pending->list, sent))) {
> +			break;
> +		}
> +
> +		pending = list_next_entry(pending, list);
> +	}
> +
> +	return nr_received;
> +}
> +
> +/* Must be called with request_lock held */
> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				  struct viommu_request *req, int nr,
> +				  int *nr_sent)
> +{
> +	int i, ret;
> +	ktime_t timeout;
> +	LIST_HEAD(pending);
> +	int nr_received = 0;
> +	struct scatterlist *sg[2];
> +	/*
> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
> +	 * notion of time and this just prevents locking up a CPU if the device
> +	 * dies.
> +	 */
> +	unsigned long timeout_ms = 1000;
> +
> +	*nr_sent = 0;
> +
> +	for (i = 0; i < nr; i++, req++) {
> +		req->written = 0;
> +
> +		sg[0] = &req->top;
> +		sg[1] = &req->bottom;
> +
> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> +					GFP_ATOMIC);
> +		if (ret)
> +			break;
> +
> +		list_add_tail(&req->list, &pending);
> +	}
> +
> +	if (i && !virtqueue_kick(viommu->vq))
> +		return -EPIPE;
> +
> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
I don't really understand how you choose your timeout value: 1s per sent
request.
> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
> +						   &pending);
> +		if (nr_received < i) {
> +			/*
> +			 * FIXME: what's a good way to yield to host? A second
> +			 * virtqueue_kick won't have any effect since we haven't
> +			 * added any descriptor.
> +			 */
> +			udelay(10);
could you explain why udelay gets used here?
> +		}
> +	}
> +
> +	if (nr_received != i)
> +		ret = -ETIMEDOUT;
> +
> +	if (ret == -ENOSPC && nr_received)
> +		/*
> +		 * We've freed some space since virtio told us that the ring is
> +		 * full, tell the caller to come back for more.
> +		 */
> +		ret = -EAGAIN;
> +
> +	*nr_sent = nr_received;
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_reqs_sync - add a batch of requests, kick the host and wait for
> + *                         them to return
> + *
> + * @req: array of requests
> + * @nr: array length
> + * @nr_sent: on return, contains the number of requests actually sent
> + *
> + * Return 0 on success, or an error if we failed to send some of the requests.
> + */
> +static int viommu_send_reqs_sync(struct viommu_dev *viommu,
> +				 struct viommu_request *req, int nr,
> +				 int *nr_sent)
> +{
> +	int ret;
> +	int sent = 0;
> +	unsigned long flags;
> +
> +	*nr_sent = 0;
> +	do {
> +		spin_lock_irqsave(&viommu->request_lock, flags);
> +		ret = _viommu_send_reqs_sync(viommu, req, nr, &sent);
> +		spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +		*nr_sent += sent;
> +		req += sent;
> +		nr -= sent;
> +	} while (ret == -EAGAIN);
> +
> +	return ret;
> +}
> +
> +/*
> + * viommu_send_req_sync - send one request and wait for reply
> + *
> + * @top: pointer to a virtio_iommu_req_* structure
> + *
> + * Returns 0 if the request was successful, or an error number otherwise. No
> + * distinction is done between transport and request errors.
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
> +{
> +	int ret;
> +	int nr_sent;
> +	void *bottom;
> +	struct viommu_request req = {0};
         ^
drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
‘req.top’) [-Wmissing-braces]

> +	size_t top_size, bottom_size;
> +	struct virtio_iommu_req_tail *tail;
> +	struct virtio_iommu_req_head *head = top;
> +
> +	ret = viommu_get_req_size(viommu, head, &top_size, &bottom_size);
> +	if (ret)
> +		return ret;
> +
> +	bottom = top + top_size;
> +	tail = bottom + bottom_size - sizeof(*tail);
> +
> +	sg_init_one(&req.top, top, top_size);
> +	sg_init_one(&req.bottom, bottom, bottom_size);
> +
> +	ret = viommu_send_reqs_sync(viommu, &req, 1, &nr_sent);
> +	if (ret || !req.written || nr_sent != 1) {
> +		dev_err(viommu->dev, "failed to send request\n");
> +		return -EIO;
> +	}
> +
> +	return viommu_status_to_errno(tail->status);
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size)
> +{
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
> + *	This allows the caller to reuse the buffer for the unmap request. Caller
> + *	must always free the returned mapping, whether the function succeeds or
> + *	not.
if unmapped > 0?
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				 unsigned long iova, size_t size,
> +				 struct viommu_mapping **out_mapping)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +
> +	if (next) {
> +		mapping = container_of(next, struct viommu_mapping, iova);
> +		/* Trying to split a mapping? */
> +		if (WARN_ON(mapping->iova.start < iova))
> +			next = NULL;
> +	}
> +
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +
> +		if (out_mapping && !(*out_mapping))
> +			*out_mapping = mapping;
> +		else
> +			kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all devices,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + *
> + * Caller should hold the mapping lock if necessary.
the only caller does not hold the lock. At this point we are attaching
our fisrt ep to the domain. I think it would be worth a comment in the
caller.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int i = 1, ret, nr_sent;
> +	struct viommu_request *reqs;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	size_t top_size, bottom_size;
> +
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	if (!node)
> +		return 0;
> +
> +	while ((node = interval_tree_iter_next(node, 0, -1UL)) != NULL)
> +		i++;
> +
> +	reqs = kcalloc(i, sizeof(*reqs), GFP_KERNEL);
> +	if (!reqs)
> +		return -ENOMEM;
> +
> +	bottom_size = sizeof(struct virtio_iommu_req_tail);
> +	top_size = sizeof(struct virtio_iommu_req_map) - bottom_size;
> +
> +	i = 0;
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		sg_init_one(&reqs[i].top, &mapping->req.map, top_size);
> +		sg_init_one(&reqs[i].bottom, &mapping->req.map.tail,
> +			    bottom_size);
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +		i++;
> +	}
> +
> +	ret = viommu_send_reqs_sync(vdomain->viommu, reqs, i, &nr_sent);
> +	kfree(reqs);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static bool viommu_capable(enum iommu_cap cap)
> +{
> +	return false; /* :( */
> +}
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +	refcount_set(&vdomain->endpoints, 0);
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	/* ida limits size to 31 bits. A value of 0 means "max" */
> +	unsigned int max_domain = viommu->domain_bits >= 31 ? 0 :
> +				  1U << viommu->domain_bits;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_simple_get(&viommu->domain_ids, 0, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0, NULL);
> +
> +	if (vdomain->viommu)
> +		ida_simple_remove(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach *req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * When attaching the device to a new domain, it will be detached from
> +	 * the old one and, if as as a result the old domain isn't attached to
as as
> +	 * any device, all mappings are removed from the old domain and it is
> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
> +	 * the default domain during initial attach.)
I don't see where the old domain is freed. I see you descrement the
endpoints ref count. Also if you replay the mapping, I guess the
mappings were not destroyed?
> +	 *
> +	 * Take note of the device disappearing, so we can ignore unmap request
> +	 * on stale domains (that is, between this detach and the upcoming
> +	 * free.)
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		refcount_dec(&vdev->vdomain->endpoints);
> +
> +	/* DMA to the stack is forbidden, store request on the heap */
> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	*req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, req);
> +		if (ret)
> +			break;
> +	}
> +
> +	kfree(req);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!refcount_read(&vdomain->endpoints)) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings if any.
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	refcount_inc(&vdomain->endpoints);
This does not work for me as the ref counter is initialized to 0 and
ref_count does not work if the counter is 0. It emits a WARN_ON and
stays at 0. I worked around the issue by explicitly setting
recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0);
> +
> +	mapping->req.map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.phys_addr	= cpu_to_le64(paddr),
> +		.size		= cpu_to_le64(size),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!refcount_read(&vdomain->endpoints))
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size, NULL);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct viommu_mapping *mapping = NULL;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size, &mapping);
> +	if (unmapped < size) {
> +		ret = -EINVAL;
> +		goto out_free;
> +	}
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!refcount_read(&vdomain->endpoints))
> +		goto out_free;
> +
> +	if (WARN_ON(!mapping))
> +		return 0;
> +
> +	mapping->req.unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_addr	= cpu_to_le64(iova),
> +		.size		= cpu_to_le64(unmapped),
> +	};
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &mapping->req);
> +
> +out_free:
> +	if (mapping)
> +		kfree(mapping);
> +
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (!IS_ERR(group))
> +		iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	kfree(dev->iommu_fwspec->iommu_priv);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.capable		= viommu_capable,
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.map_sg			= default_iommu_map_sg,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.of_xlate		= viommu_of_xlate,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +};
> +
> +static int viommu_init_vq(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vq = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	viommu = kzalloc(sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +
> +	ret = viommu_init_vq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_viommu;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_viommu;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_unregister(&viommu->iommu);
> +
> +err_free_viommu:
> +	kfree(viommu);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_unregister(&viommu->iommu);
> +	kfree(viommu);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +IOMMU_OF_DECLARE(viommu, "virtio,mmio", NULL);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..934ed3d3cd3f 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU	    61216 /* virtio IOMMU (temporary) */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..2b4cd2042897
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,140 @@
> +/*
> + * Virtio-iommu definition v0.5
> + *
> + * Copyright (C) 2017 ARM Ltd.
> + *
> + * This header is BSD licensed so anyone can use the definitions
> + * to implement compatible drivers/servers:
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of ARM Ltd. nor the names of its contributors
> + *    may be used to endorse or promote products derived from this software
> + *    without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
> + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
> + * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> + * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
> + * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
Get a warning
./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
without #include <linux/types.h>

> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {
> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8 					domain_bits;
> +} __packed;
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +} __packed;
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					endpoint;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					phys_addr;
> +	__le64					size;
> +	__le32					flags;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +__packed
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +
> +	__le32					domain;
> +	__le64					virt_addr;
> +	__le64					size;
> +	__le32					reserved;
> +
> +	struct virtio_iommu_req_tail		tail;
> +} __packed;
> +
> +union virtio_iommu_req {
> +	struct virtio_iommu_req_head		head;
> +
> +	struct virtio_iommu_req_attach		attach;
> +	struct virtio_iommu_req_detach		detach;
> +	struct virtio_iommu_req_map		map;
> +	struct virtio_iommu_req_unmap		unmap;
> +};
> +
> +#endif
> 
Thanks

Eric


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-01-16  9:25     ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
why not adding this in the switch default case and do not call list_add
in case the subtype region is not recognized?
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
goto out?
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
this change may belong to the 1st patch.
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 
Besides those minor comments, looks good to me.

Thanks

Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
  (?)
@ 2018-01-16  9:25   ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, lv.zheng,
	sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
why not adding this in the switch default case and do not call list_add
in case the subtype region is not recognized?
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
goto out?
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
this change may belong to the 1st patch.
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 
Besides those minor comments, looks good to me.

Thanks

Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2018-01-16  9:25     ` Auger Eric
  0 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
why not adding this in the switch default case and do not call list_add
in case the subtype region is not recognized?
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
goto out?
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
this change may belong to the 1st patch.
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 
Besides those minor comments, looks good to me.

Thanks

Eric

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-01-16 10:10     ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 10:10 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The event queue offers a way for the device to report access faults from
> devices.
end points?
 It is implemented on virtqueue #1, whenever the host needs to
> signal a fault it fills one of the buffers offered by the guest and
> interrupts it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
>  include/uapi/linux/virtio_iommu.h |  18 +++++
>  2 files changed, 142 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 79e0add94e05..fe0d449bf489 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -30,6 +30,12 @@
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
>  
> +enum viommu_vq_idx {
> +	VIOMMU_REQUEST_VQ	= 0,
> +	VIOMMU_EVENT_VQ		= 1,
> +	VIOMMU_NUM_VQS		= 2,
> +};
> +
>  struct viommu_dev {
>  	struct iommu_device		iommu;
>  	struct device			*dev;
> @@ -37,7 +43,7 @@ struct viommu_dev {
>  
>  	struct ida			domain_ids;
>  
> -	struct virtqueue		*vq;
> +	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
>  	/* Serialize anything touching the request queue */
>  	spinlock_t			request_lock;
>  
> @@ -84,6 +90,15 @@ struct viommu_request {
>  	struct list_head		list;
>  };
>  
> +#define VIOMMU_FAULT_RESV_MASK		0xffffff00
> +
> +struct viommu_event {
> +	union {
> +		u32			head;
> +		struct virtio_iommu_fault fault;
> +	};
> +};
> +
>  #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
>  
>  /* Virtio transport */
> @@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>  	unsigned int len;
>  	int nr_received = 0;
>  	struct viommu_request *req, *pending;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>  	if (WARN_ON(!pending))
>  		return 0;
>  
> -	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
>  		if (req != pending) {
>  			dev_warn(viommu->dev, "discarding stale request\n");
>  			continue;
> @@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  	 * dies.
>  	 */
>  	unsigned long timeout_ms = 1000;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	*nr_sent = 0;
>  
> @@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  		sg[0] = &req->top;
>  		sg[1] = &req->bottom;
>  
> -		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> -					GFP_ATOMIC);
> +		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>  		if (ret)
>  			break;
>  
>  		list_add_tail(&req->list, &pending);
>  	}
>  
> -	if (i && !virtqueue_kick(viommu->vq))
> +	if (i && !virtqueue_kick(vq))
>  		return -EPIPE;
>  
>  	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> @@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	return 0;
>  }
>  
> +static int viommu_fault_handler(struct viommu_dev *viommu,
> +				struct virtio_iommu_fault *fault)
> +{
> +	char *reason_str;
> +
> +	u8 reason	= fault->reason;
> +	u32 flags	= le32_to_cpu(fault->flags);
> +	u32 endpoint	= le32_to_cpu(fault->endpoint);
> +	u64 address	= le64_to_cpu(fault->address);
> +
> +	switch (reason) {
> +	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
> +		reason_str = "domain";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_MAPPING:
> +		reason_str = "page";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
> +	default:
> +		reason_str = "unknown";
> +		break;
> +	}
> +
> +	/* TODO: find EP by ID and report_iommu_fault */
> +	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
> +				    reason_str, endpoint, address,
> +				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
> +	else
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
> +				    reason_str, endpoint);
> +
> +	return 0;
> +}
> +
> +static void viommu_event_handler(struct virtqueue *vq)
> +{
> +	int ret;
> +	unsigned int len;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evt;
> +	struct viommu_dev *viommu = vq->vdev->priv;
> +
> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
> +		if (len > sizeof(*evt)) {
> +			dev_err(viommu->dev,
> +				"invalid event buffer (len %u != %zu)\n",
> +				len, sizeof(*evt));
> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
> +			viommu_fault_handler(viommu, &evt->fault);
> +		}
> +
> +		sg_init_one(sg, evt, sizeof(*evt));
in case of above error case, ie. len > sizeof(*evt), is it safe to push
evt again?
> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
> +		if (ret)
> +			dev_err(viommu->dev, "could not add event buffer\n");
> +	}
> +
> +	if (!virtqueue_kick(vq))
> +		dev_err(viommu->dev, "kick failed\n");
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>  	.put_resv_regions	= viommu_put_resv_regions,
>  };
>  
> -static int viommu_init_vq(struct viommu_dev *viommu)
> +static int viommu_init_vqs(struct viommu_dev *viommu)
>  {
>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> -	const char *name = "request";
> -	void *ret;
> +	const char *names[] = { "request", "event" };
> +	vq_callback_t *callbacks[] = {
> +		NULL, /* No async requests */
> +		viommu_event_handler,
> +	};
> +
> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
> +			       names, NULL);
> +}
>  
> -	ret = virtio_find_single_vq(vdev, NULL, name);
> -	if (IS_ERR(ret)) {
> -		dev_err(viommu->dev, "cannot find VQ\n");
> -		return PTR_ERR(ret);
> +static int viommu_fill_evtq(struct viommu_dev *viommu)
> +{
> +	int i, ret;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evts;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
> +
> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
> +				  GFP_KERNEL);
> +	if (!evts)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < nr_evts; i++) {
> +		sg_init_one(sg, &evts[i], sizeof(*evts));
> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
who does the deallocation of evts?

Thanks

Eric
> +		if (ret)
> +			return ret;
>  	}
>  
> -	viommu->vq = ret;
> +	if (!virtqueue_kick(vq))
> +		return -EPIPE;
> +
> +	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
>  
>  	return 0;
>  }
> @@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
>  	viommu->dev = dev;
>  	viommu->vdev = vdev;
>  
> -	ret = viommu_init_vq(viommu);
> +	ret = viommu_init_vqs(viommu);
>  	if (ret)
>  		goto err_free_viommu;
>  
> @@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
>  
>  	virtio_device_ready(vdev);
>  
> +	/* Populate the event queue with buffers */
> +	ret = viommu_fill_evtq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
>  	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>  				     virtio_bus_name(vdev));
>  	if (ret)
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index eec90a2ab547..b57543b4145b 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -174,4 +174,22 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_probe		probe;
>  };
>  
> +/* Fault types */
> +#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
> +#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
> +#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
> +
> +#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
> +#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
> +#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
> +
> +struct virtio_iommu_fault {
> +	__u8					reason;
> +	__u8					padding[3];
> +	__le32					flags;
> +	__le32					endpoint;
> +	__le64					address;
> +} __packed;
> +
>  #endif
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
  (?)
  (?)
@ 2018-01-16 10:10   ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 10:10 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, lv.zheng,
	sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

Hi,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The event queue offers a way for the device to report access faults from
> devices.
end points?
 It is implemented on virtqueue #1, whenever the host needs to
> signal a fault it fills one of the buffers offered by the guest and
> interrupts it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
>  include/uapi/linux/virtio_iommu.h |  18 +++++
>  2 files changed, 142 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 79e0add94e05..fe0d449bf489 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -30,6 +30,12 @@
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
>  
> +enum viommu_vq_idx {
> +	VIOMMU_REQUEST_VQ	= 0,
> +	VIOMMU_EVENT_VQ		= 1,
> +	VIOMMU_NUM_VQS		= 2,
> +};
> +
>  struct viommu_dev {
>  	struct iommu_device		iommu;
>  	struct device			*dev;
> @@ -37,7 +43,7 @@ struct viommu_dev {
>  
>  	struct ida			domain_ids;
>  
> -	struct virtqueue		*vq;
> +	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
>  	/* Serialize anything touching the request queue */
>  	spinlock_t			request_lock;
>  
> @@ -84,6 +90,15 @@ struct viommu_request {
>  	struct list_head		list;
>  };
>  
> +#define VIOMMU_FAULT_RESV_MASK		0xffffff00
> +
> +struct viommu_event {
> +	union {
> +		u32			head;
> +		struct virtio_iommu_fault fault;
> +	};
> +};
> +
>  #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
>  
>  /* Virtio transport */
> @@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>  	unsigned int len;
>  	int nr_received = 0;
>  	struct viommu_request *req, *pending;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>  	if (WARN_ON(!pending))
>  		return 0;
>  
> -	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
>  		if (req != pending) {
>  			dev_warn(viommu->dev, "discarding stale request\n");
>  			continue;
> @@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  	 * dies.
>  	 */
>  	unsigned long timeout_ms = 1000;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	*nr_sent = 0;
>  
> @@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  		sg[0] = &req->top;
>  		sg[1] = &req->bottom;
>  
> -		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> -					GFP_ATOMIC);
> +		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>  		if (ret)
>  			break;
>  
>  		list_add_tail(&req->list, &pending);
>  	}
>  
> -	if (i && !virtqueue_kick(viommu->vq))
> +	if (i && !virtqueue_kick(vq))
>  		return -EPIPE;
>  
>  	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> @@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	return 0;
>  }
>  
> +static int viommu_fault_handler(struct viommu_dev *viommu,
> +				struct virtio_iommu_fault *fault)
> +{
> +	char *reason_str;
> +
> +	u8 reason	= fault->reason;
> +	u32 flags	= le32_to_cpu(fault->flags);
> +	u32 endpoint	= le32_to_cpu(fault->endpoint);
> +	u64 address	= le64_to_cpu(fault->address);
> +
> +	switch (reason) {
> +	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
> +		reason_str = "domain";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_MAPPING:
> +		reason_str = "page";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
> +	default:
> +		reason_str = "unknown";
> +		break;
> +	}
> +
> +	/* TODO: find EP by ID and report_iommu_fault */
> +	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
> +				    reason_str, endpoint, address,
> +				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
> +	else
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
> +				    reason_str, endpoint);
> +
> +	return 0;
> +}
> +
> +static void viommu_event_handler(struct virtqueue *vq)
> +{
> +	int ret;
> +	unsigned int len;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evt;
> +	struct viommu_dev *viommu = vq->vdev->priv;
> +
> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
> +		if (len > sizeof(*evt)) {
> +			dev_err(viommu->dev,
> +				"invalid event buffer (len %u != %zu)\n",
> +				len, sizeof(*evt));
> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
> +			viommu_fault_handler(viommu, &evt->fault);
> +		}
> +
> +		sg_init_one(sg, evt, sizeof(*evt));
in case of above error case, ie. len > sizeof(*evt), is it safe to push
evt again?
> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
> +		if (ret)
> +			dev_err(viommu->dev, "could not add event buffer\n");
> +	}
> +
> +	if (!virtqueue_kick(vq))
> +		dev_err(viommu->dev, "kick failed\n");
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>  	.put_resv_regions	= viommu_put_resv_regions,
>  };
>  
> -static int viommu_init_vq(struct viommu_dev *viommu)
> +static int viommu_init_vqs(struct viommu_dev *viommu)
>  {
>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> -	const char *name = "request";
> -	void *ret;
> +	const char *names[] = { "request", "event" };
> +	vq_callback_t *callbacks[] = {
> +		NULL, /* No async requests */
> +		viommu_event_handler,
> +	};
> +
> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
> +			       names, NULL);
> +}
>  
> -	ret = virtio_find_single_vq(vdev, NULL, name);
> -	if (IS_ERR(ret)) {
> -		dev_err(viommu->dev, "cannot find VQ\n");
> -		return PTR_ERR(ret);
> +static int viommu_fill_evtq(struct viommu_dev *viommu)
> +{
> +	int i, ret;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evts;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
> +
> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
> +				  GFP_KERNEL);
> +	if (!evts)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < nr_evts; i++) {
> +		sg_init_one(sg, &evts[i], sizeof(*evts));
> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
who does the deallocation of evts?

Thanks

Eric
> +		if (ret)
> +			return ret;
>  	}
>  
> -	viommu->vq = ret;
> +	if (!virtqueue_kick(vq))
> +		return -EPIPE;
> +
> +	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
>  
>  	return 0;
>  }
> @@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
>  	viommu->dev = dev;
>  	viommu->vdev = vdev;
>  
> -	ret = viommu_init_vq(viommu);
> +	ret = viommu_init_vqs(viommu);
>  	if (ret)
>  		goto err_free_viommu;
>  
> @@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
>  
>  	virtio_device_ready(vdev);
>  
> +	/* Populate the event queue with buffers */
> +	ret = viommu_fill_evtq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
>  	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>  				     virtio_bus_name(vdev));
>  	if (ret)
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index eec90a2ab547..b57543b4145b 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -174,4 +174,22 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_probe		probe;
>  };
>  
> +/* Fault types */
> +#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
> +#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
> +#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
> +
> +#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
> +#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
> +#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
> +
> +struct virtio_iommu_fault {
> +	__u8					reason;
> +	__u8					padding[3];
> +	__le32					flags;
> +	__le32					endpoint;
> +	__le64					address;
> +} __packed;
> +
>  #endif
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
@ 2018-01-16 10:10     ` Auger Eric
  0 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 10:10 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> The event queue offers a way for the device to report access faults from
> devices.
end points?
 It is implemented on virtqueue #1, whenever the host needs to
> signal a fault it fills one of the buffers offered by the guest and
> interrupts it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 138 ++++++++++++++++++++++++++++++++++----
>  include/uapi/linux/virtio_iommu.h |  18 +++++
>  2 files changed, 142 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 79e0add94e05..fe0d449bf489 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -30,6 +30,12 @@
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
>  
> +enum viommu_vq_idx {
> +	VIOMMU_REQUEST_VQ	= 0,
> +	VIOMMU_EVENT_VQ		= 1,
> +	VIOMMU_NUM_VQS		= 2,
> +};
> +
>  struct viommu_dev {
>  	struct iommu_device		iommu;
>  	struct device			*dev;
> @@ -37,7 +43,7 @@ struct viommu_dev {
>  
>  	struct ida			domain_ids;
>  
> -	struct virtqueue		*vq;
> +	struct virtqueue		*vqs[VIOMMU_NUM_VQS];
>  	/* Serialize anything touching the request queue */
>  	spinlock_t			request_lock;
>  
> @@ -84,6 +90,15 @@ struct viommu_request {
>  	struct list_head		list;
>  };
>  
> +#define VIOMMU_FAULT_RESV_MASK		0xffffff00
> +
> +struct viommu_event {
> +	union {
> +		u32			head;
> +		struct virtio_iommu_fault fault;
> +	};
> +};
> +
>  #define to_viommu_domain(domain) container_of(domain, struct viommu_domain, domain)
>  
>  /* Virtio transport */
> @@ -160,12 +175,13 @@ static int viommu_receive_resp(struct viommu_dev *viommu, int nr_sent,
>  	unsigned int len;
>  	int nr_received = 0;
>  	struct viommu_request *req, *pending;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	pending = list_first_entry_or_null(sent, struct viommu_request, list);
>  	if (WARN_ON(!pending))
>  		return 0;
>  
> -	while ((req = virtqueue_get_buf(viommu->vq, &len)) != NULL) {
> +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
>  		if (req != pending) {
>  			dev_warn(viommu->dev, "discarding stale request\n");
>  			continue;
> @@ -202,6 +218,7 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  	 * dies.
>  	 */
>  	unsigned long timeout_ms = 1000;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>  
>  	*nr_sent = 0;
>  
> @@ -211,15 +228,14 @@ static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>  		sg[0] = &req->top;
>  		sg[1] = &req->bottom;
>  
> -		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
> -					GFP_ATOMIC);
> +		ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>  		if (ret)
>  			break;
>  
>  		list_add_tail(&req->list, &pending);
>  	}
>  
> -	if (i && !virtqueue_kick(viommu->vq))
> +	if (i && !virtqueue_kick(vq))
>  		return -EPIPE;
>  
>  	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> @@ -554,6 +570,70 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>  	return 0;
>  }
>  
> +static int viommu_fault_handler(struct viommu_dev *viommu,
> +				struct virtio_iommu_fault *fault)
> +{
> +	char *reason_str;
> +
> +	u8 reason	= fault->reason;
> +	u32 flags	= le32_to_cpu(fault->flags);
> +	u32 endpoint	= le32_to_cpu(fault->endpoint);
> +	u64 address	= le64_to_cpu(fault->address);
> +
> +	switch (reason) {
> +	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
> +		reason_str = "domain";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_MAPPING:
> +		reason_str = "page";
> +		break;
> +	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
> +	default:
> +		reason_str = "unknown";
> +		break;
> +	}
> +
> +	/* TODO: find EP by ID and report_iommu_fault */
> +	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
> +				    reason_str, endpoint, address,
> +				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
> +				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
> +	else
> +		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
> +				    reason_str, endpoint);
> +
> +	return 0;
> +}
> +
> +static void viommu_event_handler(struct virtqueue *vq)
> +{
> +	int ret;
> +	unsigned int len;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evt;
> +	struct viommu_dev *viommu = vq->vdev->priv;
> +
> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
> +		if (len > sizeof(*evt)) {
> +			dev_err(viommu->dev,
> +				"invalid event buffer (len %u != %zu)\n",
> +				len, sizeof(*evt));
> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
> +			viommu_fault_handler(viommu, &evt->fault);
> +		}
> +
> +		sg_init_one(sg, evt, sizeof(*evt));
in case of above error case, ie. len > sizeof(*evt), is it safe to push
evt again?
> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
> +		if (ret)
> +			dev_err(viommu->dev, "could not add event buffer\n");
> +	}
> +
> +	if (!virtqueue_kick(vq))
> +		dev_err(viommu->dev, "kick failed\n");
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>  	.put_resv_regions	= viommu_put_resv_regions,
>  };
>  
> -static int viommu_init_vq(struct viommu_dev *viommu)
> +static int viommu_init_vqs(struct viommu_dev *viommu)
>  {
>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> -	const char *name = "request";
> -	void *ret;
> +	const char *names[] = { "request", "event" };
> +	vq_callback_t *callbacks[] = {
> +		NULL, /* No async requests */
> +		viommu_event_handler,
> +	};
> +
> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
> +			       names, NULL);
> +}
>  
> -	ret = virtio_find_single_vq(vdev, NULL, name);
> -	if (IS_ERR(ret)) {
> -		dev_err(viommu->dev, "cannot find VQ\n");
> -		return PTR_ERR(ret);
> +static int viommu_fill_evtq(struct viommu_dev *viommu)
> +{
> +	int i, ret;
> +	struct scatterlist sg[1];
> +	struct viommu_event *evts;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
> +
> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
> +				  GFP_KERNEL);
> +	if (!evts)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < nr_evts; i++) {
> +		sg_init_one(sg, &evts[i], sizeof(*evts));
> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
who does the deallocation of evts?

Thanks

Eric
> +		if (ret)
> +			return ret;
>  	}
>  
> -	viommu->vq = ret;
> +	if (!virtqueue_kick(vq))
> +		return -EPIPE;
> +
> +	dev_info(viommu->dev, "%zu event buffers\n", nr_evts);
>  
>  	return 0;
>  }
> @@ -973,7 +1078,7 @@ static int viommu_probe(struct virtio_device *vdev)
>  	viommu->dev = dev;
>  	viommu->vdev = vdev;
>  
> -	ret = viommu_init_vq(viommu);
> +	ret = viommu_init_vqs(viommu);
>  	if (ret)
>  		goto err_free_viommu;
>  
> @@ -1014,6 +1119,11 @@ static int viommu_probe(struct virtio_device *vdev)
>  
>  	virtio_device_ready(vdev);
>  
> +	/* Populate the event queue with buffers */
> +	ret = viommu_fill_evtq(viommu);
> +	if (ret)
> +		goto err_free_viommu;
> +
>  	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>  				     virtio_bus_name(vdev));
>  	if (ret)
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index eec90a2ab547..b57543b4145b 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -174,4 +174,22 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_probe		probe;
>  };
>  
> +/* Fault types */
> +#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
> +#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
> +#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
> +
> +#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
> +#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
> +#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
> +
> +struct virtio_iommu_fault {
> +	__u8					reason;
> +	__u8					padding[3];
> +	__le32					flags;
> +	__le32					endpoint;
> +	__le64					address;
> +} __packed;
> +
>  #endif
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2018-01-15 15:12     ` [virtio-dev] " Auger Eric
@ 2018-01-16 17:45       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:45 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 15/01/18 15:12, Auger Eric wrote:
[...]
>> +/*
>> + * viommu_get_req_size - compute request size
>> + *
>> + * A virtio-iommu request is split into one device-read-only part (top) and one
>> + * device-write-only part (bottom). Given a request, return the sizes of the two
>> + * parts in @top and @bottom.
> for all but virtio_iommu_req_probe, which has a variable bottom size

The comment still stands for the probe request, viommu_get_req_size will
return @bottom depending on viommu->probe_size.

[...]
>> +/* Must be called with request_lock held */
>> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>> +				  struct viommu_request *req, int nr,
>> +				  int *nr_sent)
>> +{
>> +	int i, ret;
>> +	ktime_t timeout;
>> +	LIST_HEAD(pending);
>> +	int nr_received = 0;
>> +	struct scatterlist *sg[2];
>> +	/*
>> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
>> +	 * notion of time and this just prevents locking up a CPU if the device
>> +	 * dies.
>> +	 */
>> +	unsigned long timeout_ms = 1000;
>> +
>> +	*nr_sent = 0;
>> +
>> +	for (i = 0; i < nr; i++, req++) {
>> +		req->written = 0;
>> +
>> +		sg[0] = &req->top;
>> +		sg[1] = &req->bottom;
>> +
>> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
>> +					GFP_ATOMIC);
>> +		if (ret)
>> +			break;
>> +
>> +		list_add_tail(&req->list, &pending);
>> +	}
>> +
>> +	if (i && !virtqueue_kick(viommu->vq))
>> +		return -EPIPE;
>> +
>> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> I don't really understand how you choose your timeout value: 1s per sent
> request.

1s isn't a good timeout value, but I don't know what's good. In a
prototype I have, even 1s isn't enough. The attach request (for nested
mode) requires my device to pin down the whole guest memory, and the guest
ends up timing out on the fastmodel because the request takes too long and
the model timer runs too fast...

I was tempted to simply remove this timeout, but I still think having a
way out when the host device fails is preferable. Otherwise this
completely locks up the CPU.

>> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
>> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
>> +						   &pending);
>> +		if (nr_received < i) {
>> +			/*
>> +			 * FIXME: what's a good way to yield to host? A second
>> +			 * virtqueue_kick won't have any effect since we haven't
>> +			 * added any descriptor.
>> +			 */
>> +			udelay(10);
> could you explain why udelay gets used here?

I was hoping this could switch to the host quicker than cpu_relax(),
allowing it to handle the request faster (on ARM udelay could do WFE
instead of YIELD). The value is completely arbitrary.

Maybe I can replace this with cpu_relax for now. I'd like to refine this
function anyway when working on performance improvements, but I'm not too
hopeful we'll get something nicer here.

[...]
>> +/*
>> + * viommu_send_req_sync - send one request and wait for reply
>> + *
>> + * @top: pointer to a virtio_iommu_req_* structure
>> + *
>> + * Returns 0 if the request was successful, or an error number otherwise. No
>> + * distinction is done between transport and request errors.
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
>> +{
>> +	int ret;
>> +	int nr_sent;
>> +	void *bottom;
>> +	struct viommu_request req = {0};
>          ^
> drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
> ‘req.top’) [-Wmissing-braces]

Ok

[...]
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
>> + *	This allows the caller to reuse the buffer for the unmap request. Caller
>> + *	must always free the returned mapping, whether the function succeeds or
>> + *	not.
> if unmapped > 0?

Ok

>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				 unsigned long iova, size_t size,
>> +				 struct viommu_mapping **out_mapping)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +
>> +	if (next) {
>> +		mapping = container_of(next, struct viommu_mapping, iova);
>> +		/* Trying to split a mapping? */
>> +		if (WARN_ON(mapping->iova.start < iova))
>> +			next = NULL;
>> +	}
>> +
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +
>> +		if (out_mapping && !(*out_mapping))
>> +			*out_mapping = mapping;
>> +		else
>> +			kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all devices,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + *
>> + * Caller should hold the mapping lock if necessary.
> the only caller does not hold the lock. At this point we are attaching
> our fisrt ep to the domain. I think it would be worth a comment in the
> caller.

Right

[...]
>> +	/*
>> +	 * When attaching the device to a new domain, it will be detached from
>> +	 * the old one and, if as as a result the old domain isn't attached to
> as as
>> +	 * any device, all mappings are removed from the old domain and it is
>> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
>> +	 * the default domain during initial attach.)
> I don't see where the old domain is freed. I see you descrement the
> endpoints ref count. Also if you replay the mapping, I guess the
> mappings were not destroyed?

That comment applies to the virtio-iommu domain - the virtio-iommu
device needs to destroy the old domain when attaching to a new domain,
to avoid any mapping leak. I'll change the comment to clarify this.

In the kernel the domain lifetime differs from the virtio-iommu domain,
the detached domain isn't freed until someone calls free_domain()
explicitly:

- Initially each device is attached to the default domain, used for
  kernel DMA.
- Then VFIO attaches the group to a new domain. The IOMMU core calls
  attach_dev(). Default domain is detached and the VFIO domain is
  attached.
- When VFIO removes the device from its container, the IOMMU core
  attaches the default domain again. The default domain may still have
  mappings (SW_MSI for instance) but the device deleted them, so they
  need to be replayed after the attach request.
- VFIO issues iommu_domain_free when no group is attached to the VFIO
  domain anymore.

>> +	 *
>> +	 * Take note of the device disappearing, so we can ignore unmap request
>> +	 * on stale domains (that is, between this detach and the upcoming
>> +	 * free.)
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		refcount_dec(&vdev->vdomain->endpoints);
>> +
>> +	/* DMA to the stack is forbidden, store request on the heap */
>> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	*req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, req);
>> +		if (ret)
>> +			break;
>> +	}
>> +
>> +	kfree(req);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (!refcount_read(&vdomain->endpoints)) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings if any.
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	refcount_inc(&vdomain->endpoints);
> This does not work for me as the ref counter is initialized to 0 and
> ref_count does not work if the counter is 0. It emits a WARN_ON and
> stays at 0. I worked around the issue by explicitly setting
> recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.

Yes, I went back to a simple int instead of refcount

[...]
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
> Get a warning
> ./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
> without #include <linux/types.h>

Right, adding HEADERS_CHECK to my config :)

Thanks a lot the review,
Jean


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
  2018-01-15 15:12     ` [virtio-dev] " Auger Eric
  (?)
@ 2018-01-16 17:45     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:45 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, lv.zheng,
	Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 15/01/18 15:12, Auger Eric wrote:
[...]
>> +/*
>> + * viommu_get_req_size - compute request size
>> + *
>> + * A virtio-iommu request is split into one device-read-only part (top) and one
>> + * device-write-only part (bottom). Given a request, return the sizes of the two
>> + * parts in @top and @bottom.
> for all but virtio_iommu_req_probe, which has a variable bottom size

The comment still stands for the probe request, viommu_get_req_size will
return @bottom depending on viommu->probe_size.

[...]
>> +/* Must be called with request_lock held */
>> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>> +				  struct viommu_request *req, int nr,
>> +				  int *nr_sent)
>> +{
>> +	int i, ret;
>> +	ktime_t timeout;
>> +	LIST_HEAD(pending);
>> +	int nr_received = 0;
>> +	struct scatterlist *sg[2];
>> +	/*
>> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
>> +	 * notion of time and this just prevents locking up a CPU if the device
>> +	 * dies.
>> +	 */
>> +	unsigned long timeout_ms = 1000;
>> +
>> +	*nr_sent = 0;
>> +
>> +	for (i = 0; i < nr; i++, req++) {
>> +		req->written = 0;
>> +
>> +		sg[0] = &req->top;
>> +		sg[1] = &req->bottom;
>> +
>> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
>> +					GFP_ATOMIC);
>> +		if (ret)
>> +			break;
>> +
>> +		list_add_tail(&req->list, &pending);
>> +	}
>> +
>> +	if (i && !virtqueue_kick(viommu->vq))
>> +		return -EPIPE;
>> +
>> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> I don't really understand how you choose your timeout value: 1s per sent
> request.

1s isn't a good timeout value, but I don't know what's good. In a
prototype I have, even 1s isn't enough. The attach request (for nested
mode) requires my device to pin down the whole guest memory, and the guest
ends up timing out on the fastmodel because the request takes too long and
the model timer runs too fast...

I was tempted to simply remove this timeout, but I still think having a
way out when the host device fails is preferable. Otherwise this
completely locks up the CPU.

>> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
>> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
>> +						   &pending);
>> +		if (nr_received < i) {
>> +			/*
>> +			 * FIXME: what's a good way to yield to host? A second
>> +			 * virtqueue_kick won't have any effect since we haven't
>> +			 * added any descriptor.
>> +			 */
>> +			udelay(10);
> could you explain why udelay gets used here?

I was hoping this could switch to the host quicker than cpu_relax(),
allowing it to handle the request faster (on ARM udelay could do WFE
instead of YIELD). The value is completely arbitrary.

Maybe I can replace this with cpu_relax for now. I'd like to refine this
function anyway when working on performance improvements, but I'm not too
hopeful we'll get something nicer here.

[...]
>> +/*
>> + * viommu_send_req_sync - send one request and wait for reply
>> + *
>> + * @top: pointer to a virtio_iommu_req_* structure
>> + *
>> + * Returns 0 if the request was successful, or an error number otherwise. No
>> + * distinction is done between transport and request errors.
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
>> +{
>> +	int ret;
>> +	int nr_sent;
>> +	void *bottom;
>> +	struct viommu_request req = {0};
>          ^
> drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
> ‘req.top’) [-Wmissing-braces]

Ok

[...]
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
>> + *	This allows the caller to reuse the buffer for the unmap request. Caller
>> + *	must always free the returned mapping, whether the function succeeds or
>> + *	not.
> if unmapped > 0?

Ok

>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				 unsigned long iova, size_t size,
>> +				 struct viommu_mapping **out_mapping)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +
>> +	if (next) {
>> +		mapping = container_of(next, struct viommu_mapping, iova);
>> +		/* Trying to split a mapping? */
>> +		if (WARN_ON(mapping->iova.start < iova))
>> +			next = NULL;
>> +	}
>> +
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +
>> +		if (out_mapping && !(*out_mapping))
>> +			*out_mapping = mapping;
>> +		else
>> +			kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all devices,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + *
>> + * Caller should hold the mapping lock if necessary.
> the only caller does not hold the lock. At this point we are attaching
> our fisrt ep to the domain. I think it would be worth a comment in the
> caller.

Right

[...]
>> +	/*
>> +	 * When attaching the device to a new domain, it will be detached from
>> +	 * the old one and, if as as a result the old domain isn't attached to
> as as
>> +	 * any device, all mappings are removed from the old domain and it is
>> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
>> +	 * the default domain during initial attach.)
> I don't see where the old domain is freed. I see you descrement the
> endpoints ref count. Also if you replay the mapping, I guess the
> mappings were not destroyed?

That comment applies to the virtio-iommu domain - the virtio-iommu
device needs to destroy the old domain when attaching to a new domain,
to avoid any mapping leak. I'll change the comment to clarify this.

In the kernel the domain lifetime differs from the virtio-iommu domain,
the detached domain isn't freed until someone calls free_domain()
explicitly:

- Initially each device is attached to the default domain, used for
  kernel DMA.
- Then VFIO attaches the group to a new domain. The IOMMU core calls
  attach_dev(). Default domain is detached and the VFIO domain is
  attached.
- When VFIO removes the device from its container, the IOMMU core
  attaches the default domain again. The default domain may still have
  mappings (SW_MSI for instance) but the device deleted them, so they
  need to be replayed after the attach request.
- VFIO issues iommu_domain_free when no group is attached to the VFIO
  domain anymore.

>> +	 *
>> +	 * Take note of the device disappearing, so we can ignore unmap request
>> +	 * on stale domains (that is, between this detach and the upcoming
>> +	 * free.)
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		refcount_dec(&vdev->vdomain->endpoints);
>> +
>> +	/* DMA to the stack is forbidden, store request on the heap */
>> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	*req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, req);
>> +		if (ret)
>> +			break;
>> +	}
>> +
>> +	kfree(req);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (!refcount_read(&vdomain->endpoints)) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings if any.
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	refcount_inc(&vdomain->endpoints);
> This does not work for me as the ref counter is initialized to 0 and
> ref_count does not work if the counter is 0. It emits a WARN_ON and
> stays at 0. I worked around the issue by explicitly setting
> recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.

Yes, I went back to a simple int instead of refcount

[...]
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
> Get a warning
> ./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
> without #include <linux/types.h>

Right, adding HEADERS_CHECK to my config :)

Thanks a lot the review,
Jean

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 1/5] iommu: Add virtio-iommu driver
@ 2018-01-16 17:45       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:45 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 15/01/18 15:12, Auger Eric wrote:
[...]
>> +/*
>> + * viommu_get_req_size - compute request size
>> + *
>> + * A virtio-iommu request is split into one device-read-only part (top) and one
>> + * device-write-only part (bottom). Given a request, return the sizes of the two
>> + * parts in @top and @bottom.
> for all but virtio_iommu_req_probe, which has a variable bottom size

The comment still stands for the probe request, viommu_get_req_size will
return @bottom depending on viommu->probe_size.

[...]
>> +/* Must be called with request_lock held */
>> +static int _viommu_send_reqs_sync(struct viommu_dev *viommu,
>> +				  struct viommu_request *req, int nr,
>> +				  int *nr_sent)
>> +{
>> +	int i, ret;
>> +	ktime_t timeout;
>> +	LIST_HEAD(pending);
>> +	int nr_received = 0;
>> +	struct scatterlist *sg[2];
>> +	/*
>> +	 * Yes, 1s timeout. As a guest, we don't necessarily have a precise
>> +	 * notion of time and this just prevents locking up a CPU if the device
>> +	 * dies.
>> +	 */
>> +	unsigned long timeout_ms = 1000;
>> +
>> +	*nr_sent = 0;
>> +
>> +	for (i = 0; i < nr; i++, req++) {
>> +		req->written = 0;
>> +
>> +		sg[0] = &req->top;
>> +		sg[1] = &req->bottom;
>> +
>> +		ret = virtqueue_add_sgs(viommu->vq, sg, 1, 1, req,
>> +					GFP_ATOMIC);
>> +		if (ret)
>> +			break;
>> +
>> +		list_add_tail(&req->list, &pending);
>> +	}
>> +
>> +	if (i && !virtqueue_kick(viommu->vq))
>> +		return -EPIPE;
>> +
>> +	timeout = ktime_add_ms(ktime_get(), timeout_ms * i);
> I don't really understand how you choose your timeout value: 1s per sent
> request.

1s isn't a good timeout value, but I don't know what's good. In a
prototype I have, even 1s isn't enough. The attach request (for nested
mode) requires my device to pin down the whole guest memory, and the guest
ends up timing out on the fastmodel because the request takes too long and
the model timer runs too fast...

I was tempted to simply remove this timeout, but I still think having a
way out when the host device fails is preferable. Otherwise this
completely locks up the CPU.

>> +	while (nr_received < i && ktime_before(ktime_get(), timeout)) {
>> +		nr_received += viommu_receive_resp(viommu, i - nr_received,
>> +						   &pending);
>> +		if (nr_received < i) {
>> +			/*
>> +			 * FIXME: what's a good way to yield to host? A second
>> +			 * virtqueue_kick won't have any effect since we haven't
>> +			 * added any descriptor.
>> +			 */
>> +			udelay(10);
> could you explain why udelay gets used here?

I was hoping this could switch to the host quicker than cpu_relax(),
allowing it to handle the request faster (on ARM udelay could do WFE
instead of YIELD). The value is completely arbitrary.

Maybe I can replace this with cpu_relax for now. I'd like to refine this
function anyway when working on performance improvements, but I'm not too
hopeful we'll get something nicer here.

[...]
>> +/*
>> + * viommu_send_req_sync - send one request and wait for reply
>> + *
>> + * @top: pointer to a virtio_iommu_req_* structure
>> + *
>> + * Returns 0 if the request was successful, or an error number otherwise. No
>> + * distinction is done between transport and request errors.
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *top)
>> +{
>> +	int ret;
>> +	int nr_sent;
>> +	void *bottom;
>> +	struct viommu_request req = {0};
>          ^
> drivers/iommu/virtio-iommu.c:326:9: warning: (near initialization for
> ‘req.top’) [-Wmissing-braces]

Ok

[...]
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + * @out_mapping: if not NULL, the first removed mapping is returned in there.
>> + *	This allows the caller to reuse the buffer for the unmap request. Caller
>> + *	must always free the returned mapping, whether the function succeeds or
>> + *	not.
> if unmapped > 0?

Ok

>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				 unsigned long iova, size_t size,
>> +				 struct viommu_mapping **out_mapping)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +
>> +	if (next) {
>> +		mapping = container_of(next, struct viommu_mapping, iova);
>> +		/* Trying to split a mapping? */
>> +		if (WARN_ON(mapping->iova.start < iova))
>> +			next = NULL;
>> +	}
>> +
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +
>> +		if (out_mapping && !(*out_mapping))
>> +			*out_mapping = mapping;
>> +		else
>> +			kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all devices,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + *
>> + * Caller should hold the mapping lock if necessary.
> the only caller does not hold the lock. At this point we are attaching
> our fisrt ep to the domain. I think it would be worth a comment in the
> caller.

Right

[...]
>> +	/*
>> +	 * When attaching the device to a new domain, it will be detached from
>> +	 * the old one and, if as as a result the old domain isn't attached to
> as as
>> +	 * any device, all mappings are removed from the old domain and it is
>> +	 * freed. (Note that we can't use get_domain_for_dev here, it returns
>> +	 * the default domain during initial attach.)
> I don't see where the old domain is freed. I see you descrement the
> endpoints ref count. Also if you replay the mapping, I guess the
> mappings were not destroyed?

That comment applies to the virtio-iommu domain - the virtio-iommu
device needs to destroy the old domain when attaching to a new domain,
to avoid any mapping leak. I'll change the comment to clarify this.

In the kernel the domain lifetime differs from the virtio-iommu domain,
the detached domain isn't freed until someone calls free_domain()
explicitly:

- Initially each device is attached to the default domain, used for
  kernel DMA.
- Then VFIO attaches the group to a new domain. The IOMMU core calls
  attach_dev(). Default domain is detached and the VFIO domain is
  attached.
- When VFIO removes the device from its container, the IOMMU core
  attaches the default domain again. The default domain may still have
  mappings (SW_MSI for instance) but the device deleted them, so they
  need to be replayed after the attach request.
- VFIO issues iommu_domain_free when no group is attached to the VFIO
  domain anymore.

>> +	 *
>> +	 * Take note of the device disappearing, so we can ignore unmap request
>> +	 * on stale domains (that is, between this detach and the upcoming
>> +	 * free.)
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		refcount_dec(&vdev->vdomain->endpoints);
>> +
>> +	/* DMA to the stack is forbidden, store request on the heap */
>> +	req = kzalloc(sizeof(*req), GFP_KERNEL);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	*req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, req);
>> +		if (ret)
>> +			break;
>> +	}
>> +
>> +	kfree(req);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (!refcount_read(&vdomain->endpoints)) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings if any.
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	refcount_inc(&vdomain->endpoints);
> This does not work for me as the ref counter is initialized to 0 and
> ref_count does not work if the counter is 0. It emits a WARN_ON and
> stays at 0. I worked around the issue by explicitly setting
> recount_set(&vdomain->endpoints, 1) if it was 0 and reffount_inc otherwise.

Yes, I went back to a simple int instead of refcount

[...]
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
> Get a warning
> ./usr/include/linux/virtio_iommu.h:45: found __[us]{8,16,32,64} type
> without #include <linux/types.h>

Right, adding HEADERS_CHECK to my config :)

Thanks a lot the review,
Jean


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-16  9:25     ` [virtio-dev] " Auger Eric
@ 2018-01-16 17:46       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:46 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 16/01/18 09:25, Auger Eric wrote:
[...]
>> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
>> +			       struct virtio_iommu_probe_resv_mem *mem,
>> +			       size_t len)
>> +{
>> +	struct iommu_resv_region *region = NULL;
>> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	u64 addr = le64_to_cpu(mem->addr);
>> +	u64 size = le64_to_cpu(mem->size);
>> +
>> +	if (len < sizeof(*mem))
>> +		return -EINVAL;
>> +
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
>> +		break;
>> +	}
>> +
>> +	list_add(&vdev->resv_regions, &region->list);
>> +
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
>> +		/* Please update your driver. */
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
>> +		return -EINVAL;
>> +	}
> why not adding this in the switch default case and do not call list_add
> in case the subtype region is not recognized?

Even if the subtype isn't recognized, I think the range should still be
reserved, so that the guest kernel doesn't map it and break something.
That's why I put the following in the spec, 2.6.8.2.1 Driver
Requirements: Property RESV_MEM:

"""
The driver SHOULD treat any subtype it doesn’t recognize as if it was
VIRTIO_IOMMU_RESV_MEM_T_RESERVED.
"""

>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>> +{
>> +	int ret;
>> +	u16 type, len;
>> +	size_t cur = 0;
>> +	struct virtio_iommu_req_probe *probe;
>> +	struct virtio_iommu_probe_property *prop;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +
>> +	if (!fwspec->num_ids)
>> +		/* Trouble ahead. */
>> +		return -EINVAL;
>> +
>> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
>> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
>> +	if (!probe)
>> +		return -ENOMEM;
>> +
>> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
>> +	/*
>> +	 * For now, assume that properties of an endpoint that outputs multiple
>> +	 * IDs are consistent. Only probe the first one.
>> +	 */
>> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
>> +
>> +	ret = viommu_send_req_sync(viommu, probe);
>> +	if (ret) {
> goto out?

Ok

[...]
>> +
>> +	iommu_dma_get_resv_regions(dev, head);
> this change may belong to the 1st patch.

Indeed

Thanks,
Jean

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-16  9:25     ` [virtio-dev] " Auger Eric
  (?)
@ 2018-01-16 17:46     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:46 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, lv.zheng,
	Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 16/01/18 09:25, Auger Eric wrote:
[...]
>> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
>> +			       struct virtio_iommu_probe_resv_mem *mem,
>> +			       size_t len)
>> +{
>> +	struct iommu_resv_region *region = NULL;
>> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	u64 addr = le64_to_cpu(mem->addr);
>> +	u64 size = le64_to_cpu(mem->size);
>> +
>> +	if (len < sizeof(*mem))
>> +		return -EINVAL;
>> +
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
>> +		break;
>> +	}
>> +
>> +	list_add(&vdev->resv_regions, &region->list);
>> +
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
>> +		/* Please update your driver. */
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
>> +		return -EINVAL;
>> +	}
> why not adding this in the switch default case and do not call list_add
> in case the subtype region is not recognized?

Even if the subtype isn't recognized, I think the range should still be
reserved, so that the guest kernel doesn't map it and break something.
That's why I put the following in the spec, 2.6.8.2.1 Driver
Requirements: Property RESV_MEM:

"""
The driver SHOULD treat any subtype it doesn’t recognize as if it was
VIRTIO_IOMMU_RESV_MEM_T_RESERVED.
"""

>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>> +{
>> +	int ret;
>> +	u16 type, len;
>> +	size_t cur = 0;
>> +	struct virtio_iommu_req_probe *probe;
>> +	struct virtio_iommu_probe_property *prop;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +
>> +	if (!fwspec->num_ids)
>> +		/* Trouble ahead. */
>> +		return -EINVAL;
>> +
>> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
>> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
>> +	if (!probe)
>> +		return -ENOMEM;
>> +
>> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
>> +	/*
>> +	 * For now, assume that properties of an endpoint that outputs multiple
>> +	 * IDs are consistent. Only probe the first one.
>> +	 */
>> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
>> +
>> +	ret = viommu_send_req_sync(viommu, probe);
>> +	if (ret) {
> goto out?

Ok

[...]
>> +
>> +	iommu_dma_get_resv_regions(dev, head);
> this change may belong to the 1st patch.

Indeed

Thanks,
Jean
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2018-01-16 17:46       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:46 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 16/01/18 09:25, Auger Eric wrote:
[...]
>> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
>> +			       struct virtio_iommu_probe_resv_mem *mem,
>> +			       size_t len)
>> +{
>> +	struct iommu_resv_region *region = NULL;
>> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	u64 addr = le64_to_cpu(mem->addr);
>> +	u64 size = le64_to_cpu(mem->size);
>> +
>> +	if (len < sizeof(*mem))
>> +		return -EINVAL;
>> +
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
>> +		break;
>> +	}
>> +
>> +	list_add(&vdev->resv_regions, &region->list);
>> +
>> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
>> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
>> +		/* Please update your driver. */
>> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
>> +		return -EINVAL;
>> +	}
> why not adding this in the switch default case and do not call list_add
> in case the subtype region is not recognized?

Even if the subtype isn't recognized, I think the range should still be
reserved, so that the guest kernel doesn't map it and break something.
That's why I put the following in the spec, 2.6.8.2.1 Driver
Requirements: Property RESV_MEM:

"""
The driver SHOULD treat any subtype it doesn’t recognize as if it was
VIRTIO_IOMMU_RESV_MEM_T_RESERVED.
"""

>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>> +{
>> +	int ret;
>> +	u16 type, len;
>> +	size_t cur = 0;
>> +	struct virtio_iommu_req_probe *probe;
>> +	struct virtio_iommu_probe_property *prop;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +
>> +	if (!fwspec->num_ids)
>> +		/* Trouble ahead. */
>> +		return -EINVAL;
>> +
>> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
>> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
>> +	if (!probe)
>> +		return -ENOMEM;
>> +
>> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
>> +	/*
>> +	 * For now, assume that properties of an endpoint that outputs multiple
>> +	 * IDs are consistent. Only probe the first one.
>> +	 */
>> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
>> +
>> +	ret = viommu_send_req_sync(viommu, probe);
>> +	if (ret) {
> goto out?

Ok

[...]
>> +
>> +	iommu_dma_get_resv_regions(dev, head);
> this change may belong to the 1st patch.

Indeed

Thanks,
Jean

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2018-01-16 10:10     ` [virtio-dev] " Auger Eric
@ 2018-01-16 17:48       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:48 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 16/01/18 10:10, Auger Eric wrote:
> Hi,
> 
> On 17/11/17 19:52, Jean-Philippe Brucker wrote:
>> The event queue offers a way for the device to report access faults from
>> devices.
> end points?

Yes

[...]
>> +static void viommu_event_handler(struct virtqueue *vq)
>> +{
>> +	int ret;
>> +	unsigned int len;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evt;
>> +	struct viommu_dev *viommu = vq->vdev->priv;
>> +
>> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
>> +		if (len > sizeof(*evt)) {
>> +			dev_err(viommu->dev,
>> +				"invalid event buffer (len %u != %zu)\n",
>> +				len, sizeof(*evt));
>> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
>> +			viommu_fault_handler(viommu, &evt->fault);
>> +		}
>> +
>> +		sg_init_one(sg, evt, sizeof(*evt));
> in case of above error case, ie. len > sizeof(*evt), is it safe to push
> evt again?

I think it is, len is just what the device reports as written. The above
error would be a device bug, very unlikely and not worth a special
treatment (freeing the buffer), maybe not even worth the dev_err()
actually.

>> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
>> +		if (ret)
>> +			dev_err(viommu->dev, "could not add event buffer\n");
>> +	}
>> +
>> +	if (!virtqueue_kick(vq))
>> +		dev_err(viommu->dev, "kick failed\n");
>> +}
>> +
>>  /* IOMMU API */
>>  
>>  static bool viommu_capable(enum iommu_cap cap)
>> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>>  	.put_resv_regions	= viommu_put_resv_regions,
>>  };
>>  
>> -static int viommu_init_vq(struct viommu_dev *viommu)
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>>  {
>>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> -	const char *name = "request";
>> -	void *ret;
>> +	const char *names[] = { "request", "event" };
>> +	vq_callback_t *callbacks[] = {
>> +		NULL, /* No async requests */
>> +		viommu_event_handler,
>> +	};
>> +
>> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
>> +			       names, NULL);
>> +}
>>  
>> -	ret = virtio_find_single_vq(vdev, NULL, name);
>> -	if (IS_ERR(ret)) {
>> -		dev_err(viommu->dev, "cannot find VQ\n");
>> -		return PTR_ERR(ret);
>> +static int viommu_fill_evtq(struct viommu_dev *viommu)
>> +{
>> +	int i, ret;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evts;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
>> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
>> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
>> +
>> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
>> +				  GFP_KERNEL);
>> +	if (!evts)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < nr_evts; i++) {
>> +		sg_init_one(sg, &evts[i], sizeof(*evts));
>> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
> who does the deallocation of evts?

I think it should be viommu_remove, which should also remove the
virtqueues. I'll add this.

Thanks,
Jean


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
  2018-01-16 10:10     ` [virtio-dev] " Auger Eric
  (?)
  (?)
@ 2018-01-16 17:48     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:48 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, lv.zheng,
	Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 16/01/18 10:10, Auger Eric wrote:
> Hi,
> 
> On 17/11/17 19:52, Jean-Philippe Brucker wrote:
>> The event queue offers a way for the device to report access faults from
>> devices.
> end points?

Yes

[...]
>> +static void viommu_event_handler(struct virtqueue *vq)
>> +{
>> +	int ret;
>> +	unsigned int len;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evt;
>> +	struct viommu_dev *viommu = vq->vdev->priv;
>> +
>> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
>> +		if (len > sizeof(*evt)) {
>> +			dev_err(viommu->dev,
>> +				"invalid event buffer (len %u != %zu)\n",
>> +				len, sizeof(*evt));
>> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
>> +			viommu_fault_handler(viommu, &evt->fault);
>> +		}
>> +
>> +		sg_init_one(sg, evt, sizeof(*evt));
> in case of above error case, ie. len > sizeof(*evt), is it safe to push
> evt again?

I think it is, len is just what the device reports as written. The above
error would be a device bug, very unlikely and not worth a special
treatment (freeing the buffer), maybe not even worth the dev_err()
actually.

>> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
>> +		if (ret)
>> +			dev_err(viommu->dev, "could not add event buffer\n");
>> +	}
>> +
>> +	if (!virtqueue_kick(vq))
>> +		dev_err(viommu->dev, "kick failed\n");
>> +}
>> +
>>  /* IOMMU API */
>>  
>>  static bool viommu_capable(enum iommu_cap cap)
>> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>>  	.put_resv_regions	= viommu_put_resv_regions,
>>  };
>>  
>> -static int viommu_init_vq(struct viommu_dev *viommu)
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>>  {
>>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> -	const char *name = "request";
>> -	void *ret;
>> +	const char *names[] = { "request", "event" };
>> +	vq_callback_t *callbacks[] = {
>> +		NULL, /* No async requests */
>> +		viommu_event_handler,
>> +	};
>> +
>> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
>> +			       names, NULL);
>> +}
>>  
>> -	ret = virtio_find_single_vq(vdev, NULL, name);
>> -	if (IS_ERR(ret)) {
>> -		dev_err(viommu->dev, "cannot find VQ\n");
>> -		return PTR_ERR(ret);
>> +static int viommu_fill_evtq(struct viommu_dev *viommu)
>> +{
>> +	int i, ret;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evts;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
>> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
>> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
>> +
>> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
>> +				  GFP_KERNEL);
>> +	if (!evts)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < nr_evts; i++) {
>> +		sg_init_one(sg, &evts[i], sizeof(*evts));
>> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
> who does the deallocation of evts?

I think it should be viommu_remove, which should also remove the
virtqueues. I'll add this.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue
@ 2018-01-16 17:48       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-16 17:48 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 16/01/18 10:10, Auger Eric wrote:
> Hi,
> 
> On 17/11/17 19:52, Jean-Philippe Brucker wrote:
>> The event queue offers a way for the device to report access faults from
>> devices.
> end points?

Yes

[...]
>> +static void viommu_event_handler(struct virtqueue *vq)
>> +{
>> +	int ret;
>> +	unsigned int len;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evt;
>> +	struct viommu_dev *viommu = vq->vdev->priv;
>> +
>> +	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
>> +		if (len > sizeof(*evt)) {
>> +			dev_err(viommu->dev,
>> +				"invalid event buffer (len %u != %zu)\n",
>> +				len, sizeof(*evt));
>> +		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
>> +			viommu_fault_handler(viommu, &evt->fault);
>> +		}
>> +
>> +		sg_init_one(sg, evt, sizeof(*evt));
> in case of above error case, ie. len > sizeof(*evt), is it safe to push
> evt again?

I think it is, len is just what the device reports as written. The above
error would be a device bug, very unlikely and not worth a special
treatment (freeing the buffer), maybe not even worth the dev_err()
actually.

>> +		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
>> +		if (ret)
>> +			dev_err(viommu->dev, "could not add event buffer\n");
>> +	}
>> +
>> +	if (!virtqueue_kick(vq))
>> +		dev_err(viommu->dev, "kick failed\n");
>> +}
>> +
>>  /* IOMMU API */
>>  
>>  static bool viommu_capable(enum iommu_cap cap)
>> @@ -938,19 +1018,44 @@ static struct iommu_ops viommu_ops = {
>>  	.put_resv_regions	= viommu_put_resv_regions,
>>  };
>>  
>> -static int viommu_init_vq(struct viommu_dev *viommu)
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>>  {
>>  	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> -	const char *name = "request";
>> -	void *ret;
>> +	const char *names[] = { "request", "event" };
>> +	vq_callback_t *callbacks[] = {
>> +		NULL, /* No async requests */
>> +		viommu_event_handler,
>> +	};
>> +
>> +	return virtio_find_vqs(vdev, VIOMMU_NUM_VQS, viommu->vqs, callbacks,
>> +			       names, NULL);
>> +}
>>  
>> -	ret = virtio_find_single_vq(vdev, NULL, name);
>> -	if (IS_ERR(ret)) {
>> -		dev_err(viommu->dev, "cannot find VQ\n");
>> -		return PTR_ERR(ret);
>> +static int viommu_fill_evtq(struct viommu_dev *viommu)
>> +{
>> +	int i, ret;
>> +	struct scatterlist sg[1];
>> +	struct viommu_event *evts;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
>> +	size_t nr_evts = min_t(size_t, PAGE_SIZE / sizeof(struct viommu_event),
>> +			       viommu->vqs[VIOMMU_EVENT_VQ]->num_free);
>> +
>> +	evts = devm_kmalloc_array(viommu->dev, nr_evts, sizeof(*evts),
>> +				  GFP_KERNEL);
>> +	if (!evts)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < nr_evts; i++) {
>> +		sg_init_one(sg, &evts[i], sizeof(*evts));
>> +		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
> who does the deallocation of evts?

I think it should be viommu_remove, which should also remove the
virtqueues. I'll add this.

Thanks,
Jean


---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-01-16 23:26     ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 23:26 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
if (!region)
	return -ENOMEM;
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
same.

There is another issue related to the exclusion of iovas belonging to
reserved regions. Typically on x86, when attempting to run
virtio-blk-pci with iommu I eventually saw the driver using iova
belonging to the IOAPIC regions to map phys addr and this stalled qemu
with a drown trace:

"virtio: bogus descriptor or out of resources"

those regions need to be excluded from the iova allocator. This was
resolved by adding
if (iommu_dma_init_domain(domain,
			  vdev->viommu->geometry.aperture_start,
			  vdev->viommu->geometry.aperture_end,
			  dev))
in viommu_attach_dev()

Thanks

Eric
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
                     ` (3 preceding siblings ...)
  (?)
@ 2018-01-16 23:26   ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 23:26 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, lorenzo.pieralisi, ashok.raj, mst,
	marc.zyngier, will.deacon, rjw, robert.moore, lv.zheng,
	sudeep.holla, lenb, robin.murphy, joro, hanjun.guo

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
if (!region)
	return -ENOMEM;
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
same.

There is another issue related to the exclusion of iovas belonging to
reserved regions. Typically on x86, when attempting to run
virtio-blk-pci with iommu I eventually saw the driver using iova
belonging to the IOAPIC regions to map phys addr and this stalled qemu
with a drown trace:

"virtio: bogus descriptor or out of resources"

those regions need to be excluded from the iova allocator. This was
resolved by adding
if (iommu_dma_init_domain(domain,
			  vdev->viommu->geometry.aperture_start,
			  vdev->viommu->geometry.aperture_end,
			  dev))
in viommu_attach_dev()

Thanks

Eric
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2018-01-16 23:26     ` Auger Eric
  0 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-16 23:26 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	sudeep.holla, hanjun.guo, lorenzo.pieralisi, lenb, rjw,
	marc.zyngier, robin.murphy, will.deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 17/11/17 19:52, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 165 ++++++++++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  37 +++++++++
>  2 files changed, 195 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index feb8c8925c3a..79e0add94e05 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -45,6 +45,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -72,6 +73,7 @@ struct viommu_domain {
>  struct viommu_endpoint {
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -139,6 +141,10 @@ static int viommu_get_req_size(struct viommu_dev *viommu,
>  	case VIRTIO_IOMMU_T_UNMAP:
>  		size = sizeof(r->unmap);
>  		break;
> +	case VIRTIO_IOMMU_T_PROBE:
> +		*bottom += viommu->probe_size;
> +		size = sizeof(r->probe) + *bottom;
> +		break;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -448,6 +454,106 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 addr = le64_to_cpu(mem->addr);
> +	u64 size = le64_to_cpu(mem->size);
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(addr, size, prot,
> +						 IOMMU_RESV_MSI);
if (!region)
	return -ENOMEM;
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +	default:
> +		region = iommu_alloc_resv_region(addr, size, 0,
> +						 IOMMU_RESV_RESERVED);
same.

There is another issue related to the exclusion of iovas belonging to
reserved regions. Typically on x86, when attempting to run
virtio-blk-pci with iommu I eventually saw the driver using iova
belonging to the IOAPIC regions to map phys addr and this stalled qemu
with a drown trace:

"virtio: bogus descriptor or out of resources"

those regions need to be excluded from the iova allocator. This was
resolved by adding
if (iommu_dma_init_domain(domain,
			  vdev->viommu->geometry.aperture_start,
			  vdev->viommu->geometry.aperture_end,
			  dev))
in viommu_attach_dev()

Thanks

Eric
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +
> +	if (mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_RESERVED &&
> +	    mem->subtype != VIRTIO_IOMMU_RESV_MEM_T_MSI) {
> +		/* Please update your driver. */
> +		pr_warn("unknown resv mem subtype 0x%x\n", mem->subtype);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		/* Trouble ahead. */
> +		return -EINVAL;
> +
> +	probe = kzalloc(sizeof(*probe) + viommu->probe_size +
> +			sizeof(struct virtio_iommu_req_tail), GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe);
> +	if (ret) {
> +		kfree(probe);
> +		return ret;
> +	}
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop->value, len);
> +			break;
> +		default:
> +			dev_dbg(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += sizeof(*prop) + len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +	kfree(probe);
> +
> +	return 0;
> +}
> +
>  /* IOMMU API */
>  
>  static bool viommu_capable(enum iommu_cap cap)
> @@ -706,6 +812,7 @@ static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>  
>  static int viommu_add_device(struct device *dev)
>  {
> +	int ret;
>  	struct iommu_group *group;
>  	struct viommu_endpoint *vdev;
>  	struct viommu_dev *viommu = NULL;
> @@ -723,8 +830,16 @@ static int viommu_add_device(struct device *dev)
>  		return -ENOMEM;
>  
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/*
>  	 * Last step creates a default domain and attaches to it. Everything
>  	 * must be ready.
> @@ -738,7 +853,19 @@ static int viommu_add_device(struct device *dev)
>  
>  static void viommu_remove_device(struct device *dev)
>  {
> -	kfree(dev->iommu_fwspec->iommu_priv);
> +	struct viommu_endpoint *vdev;
> +	struct iommu_resv_region *entry, *next;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	list_for_each_entry_safe(entry, next, &vdev->resv_regions, list)
> +		kfree(entry);
> +
> +	kfree(vdev);
>  }
>  
>  static struct iommu_group *viommu_device_group(struct device *dev)
> @@ -756,15 +883,34 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
> +
> +	iommu_dma_get_resv_regions(dev, head);
>  }
>  
>  static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> @@ -854,6 +1000,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -931,6 +1081,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 2b4cd2042897..eec90a2ab547 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -38,6 +38,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -49,6 +50,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8 					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  } __packed;
>  
>  /* Request types */
> @@ -56,6 +60,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -128,6 +133,37 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  } __packed;
>  
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					addr;
> +	__le64					size;
> +} __packed;
> +
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +	__u8					value[];
> +} __packed;
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/* Tail follows the variable-length properties array (no padding) */
> +} __packed;
> +
>  union virtio_iommu_req {
>  	struct virtio_iommu_req_head		head;
>  
> @@ -135,6 +171,7 @@ union virtio_iommu_req {
>  	struct virtio_iommu_req_detach		detach;
>  	struct virtio_iommu_req_map		map;
>  	struct virtio_iommu_req_unmap		unmap;
> +	struct virtio_iommu_req_probe		probe;
>  };
>  
>  #endif
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-16 23:26     ` [virtio-dev] " Auger Eric
@ 2018-01-19 16:21       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-19 16:21 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, jasowang, rjw, robert.moore,
	alex.williamson, lv.zheng, Sudeep Holla, lenb, Robin Murphy,
	joro, hanjun.guo

On 16/01/18 23:26, Auger Eric wrote:
[...]
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
> if (!region)
> 	return -ENOMEM;
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
> same.

I'll add them, thanks

> There is another issue related to the exclusion of iovas belonging to
> reserved regions. Typically on x86, when attempting to run
> virtio-blk-pci with iommu I eventually saw the driver using iova
> belonging to the IOAPIC regions to map phys addr and this stalled qemu
> with a drown trace:
> 
> "virtio: bogus descriptor or out of resources"
> 
> those regions need to be excluded from the iova allocator. This was
> resolved by adding
> if (iommu_dma_init_domain(domain,
> 			  vdev->viommu->geometry.aperture_start,
> 			  vdev->viommu->geometry.aperture_end,
> 			  dev))
> in viommu_attach_dev()

The most recent hack for x86 [1] does call iommu_dma_init_domain() in
attach_dev(). Is it buggy?

We probably shouldn't call iommu_dma_init_domain() unconditionally
(outside of CONFIG_X86 that is), since it's normally done by the arch
(arch/arm64/mm/dma-mapping.c)

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
on branch virtio-iommu/v0.5-x86

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-16 23:26     ` [virtio-dev] " Auger Eric
  (?)
  (?)
@ 2018-01-19 16:21     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-19 16:21 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, lv.zheng,
	Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

On 16/01/18 23:26, Auger Eric wrote:
[...]
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
> if (!region)
> 	return -ENOMEM;
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
> same.

I'll add them, thanks

> There is another issue related to the exclusion of iovas belonging to
> reserved regions. Typically on x86, when attempting to run
> virtio-blk-pci with iommu I eventually saw the driver using iova
> belonging to the IOAPIC regions to map phys addr and this stalled qemu
> with a drown trace:
> 
> "virtio: bogus descriptor or out of resources"
> 
> those regions need to be excluded from the iova allocator. This was
> resolved by adding
> if (iommu_dma_init_domain(domain,
> 			  vdev->viommu->geometry.aperture_start,
> 			  vdev->viommu->geometry.aperture_end,
> 			  dev))
> in viommu_attach_dev()

The most recent hack for x86 [1] does call iommu_dma_init_domain() in
attach_dev(). Is it buggy?

We probably shouldn't call iommu_dma_init_domain() unconditionally
(outside of CONFIG_X86 that is), since it's normally done by the arch
(arch/arm64/mm/dma-mapping.c)

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
on branch virtio-iommu/v0.5-x86

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2018-01-19 16:21       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 50+ messages in thread
From: Jean-Philippe Brucker @ 2018-01-19 16:21 UTC (permalink / raw)
  To: Auger Eric, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

On 16/01/18 23:26, Auger Eric wrote:
[...]
>> +	switch (mem->subtype) {
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(addr, size, prot,
>> +						 IOMMU_RESV_MSI);
> if (!region)
> 	return -ENOMEM;
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +	default:
>> +		region = iommu_alloc_resv_region(addr, size, 0,
>> +						 IOMMU_RESV_RESERVED);
> same.

I'll add them, thanks

> There is another issue related to the exclusion of iovas belonging to
> reserved regions. Typically on x86, when attempting to run
> virtio-blk-pci with iommu I eventually saw the driver using iova
> belonging to the IOAPIC regions to map phys addr and this stalled qemu
> with a drown trace:
> 
> "virtio: bogus descriptor or out of resources"
> 
> those regions need to be excluded from the iova allocator. This was
> resolved by adding
> if (iommu_dma_init_domain(domain,
> 			  vdev->viommu->geometry.aperture_start,
> 			  vdev->viommu->geometry.aperture_end,
> 			  dev))
> in viommu_attach_dev()

The most recent hack for x86 [1] does call iommu_dma_init_domain() in
attach_dev(). Is it buggy?

We probably shouldn't call iommu_dma_init_domain() unconditionally
(outside of CONFIG_X86 that is), since it's normally done by the arch
(arch/arm64/mm/dma-mapping.c)

Thanks,
Jean

[1]
http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
on branch virtio-iommu/v0.5-x86

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-19 16:21       ` [virtio-dev] " Jean-Philippe Brucker
@ 2018-01-19 17:22         ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-19 17:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 19/01/18 17:21, Jean-Philippe Brucker wrote:
> On 16/01/18 23:26, Auger Eric wrote:
> [...]
>>> +	switch (mem->subtype) {
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>>> +		region = iommu_alloc_resv_region(addr, size, prot,
>>> +						 IOMMU_RESV_MSI);
>> if (!region)
>> 	return -ENOMEM;
>>> +		break;
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>>> +	default:
>>> +		region = iommu_alloc_resv_region(addr, size, 0,
>>> +						 IOMMU_RESV_RESERVED);
>> same.
> 
> I'll add them, thanks
> 
>> There is another issue related to the exclusion of iovas belonging to
>> reserved regions. Typically on x86, when attempting to run
>> virtio-blk-pci with iommu I eventually saw the driver using iova
>> belonging to the IOAPIC regions to map phys addr and this stalled qemu
>> with a drown trace:
>>
>> "virtio: bogus descriptor or out of resources"
>>
>> those regions need to be excluded from the iova allocator. This was
>> resolved by adding
>> if (iommu_dma_init_domain(domain,
>> 			  vdev->viommu->geometry.aperture_start,
>> 			  vdev->viommu->geometry.aperture_end,
>> 			  dev))
>> in viommu_attach_dev()
> 
> The most recent hack for x86 [1] does call iommu_dma_init_domain() in
> attach_dev(). Is it buggy?

Oh OK. Actually I have never used that branch and in my version the last
arg of iommu_dma_init_domain was NULL hence the issue. But I was
thinking more generally that care must be taken to exclude all those
regions.

Thanks

Eric
> 
> We probably shouldn't call iommu_dma_init_domain() unconditionally
> (outside of CONFIG_X86 that is), since it's normally done by the arch
> (arch/arm64/mm/dma-mapping.c)
> 
> Thanks,
> Jean
> 
> [1]
> http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
> on branch virtio-iommu/v0.5-x86
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
  2018-01-19 16:21       ` [virtio-dev] " Jean-Philippe Brucker
  (?)
@ 2018-01-19 17:22       ` Auger Eric
  -1 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-19 17:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: Jayachandran.Nair, Lorenzo Pieralisi, ashok.raj, mst,
	Marc Zyngier, Will Deacon, rjw, robert.moore, lv.zheng,
	Sudeep Holla, lenb, Robin Murphy, joro, hanjun.guo

Hi Jean-Philippe,

On 19/01/18 17:21, Jean-Philippe Brucker wrote:
> On 16/01/18 23:26, Auger Eric wrote:
> [...]
>>> +	switch (mem->subtype) {
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>>> +		region = iommu_alloc_resv_region(addr, size, prot,
>>> +						 IOMMU_RESV_MSI);
>> if (!region)
>> 	return -ENOMEM;
>>> +		break;
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>>> +	default:
>>> +		region = iommu_alloc_resv_region(addr, size, 0,
>>> +						 IOMMU_RESV_RESERVED);
>> same.
> 
> I'll add them, thanks
> 
>> There is another issue related to the exclusion of iovas belonging to
>> reserved regions. Typically on x86, when attempting to run
>> virtio-blk-pci with iommu I eventually saw the driver using iova
>> belonging to the IOAPIC regions to map phys addr and this stalled qemu
>> with a drown trace:
>>
>> "virtio: bogus descriptor or out of resources"
>>
>> those regions need to be excluded from the iova allocator. This was
>> resolved by adding
>> if (iommu_dma_init_domain(domain,
>> 			  vdev->viommu->geometry.aperture_start,
>> 			  vdev->viommu->geometry.aperture_end,
>> 			  dev))
>> in viommu_attach_dev()
> 
> The most recent hack for x86 [1] does call iommu_dma_init_domain() in
> attach_dev(). Is it buggy?

Oh OK. Actually I have never used that branch and in my version the last
arg of iommu_dma_init_domain was NULL hence the issue. But I was
thinking more generally that care must be taken to exclude all those
regions.

Thanks

Eric
> 
> We probably shouldn't call iommu_dma_init_domain() unconditionally
> (outside of CONFIG_X86 that is), since it's normally done by the arch
> (arch/arm64/mm/dma-mapping.c)
> 
> Thanks,
> Jean
> 
> [1]
> http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
> on branch virtio-iommu/v0.5-x86
> 

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [virtio-dev] Re: [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request
@ 2018-01-19 17:22         ` Auger Eric
  0 siblings, 0 replies; 50+ messages in thread
From: Auger Eric @ 2018-01-19 17:22 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, devel, linux-acpi, kvm, kvmarm,
	virtualization, virtio-dev
  Cc: jasowang, mst, lv.zheng, robert.moore, joro, alex.williamson,
	Sudeep Holla, hanjun.guo, Lorenzo Pieralisi, lenb, rjw,
	Marc Zyngier, Robin Murphy, Will Deacon, bharat.bhushan,
	Jayachandran.Nair, ashok.raj, peterx

Hi Jean-Philippe,

On 19/01/18 17:21, Jean-Philippe Brucker wrote:
> On 16/01/18 23:26, Auger Eric wrote:
> [...]
>>> +	switch (mem->subtype) {
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>>> +		region = iommu_alloc_resv_region(addr, size, prot,
>>> +						 IOMMU_RESV_MSI);
>> if (!region)
>> 	return -ENOMEM;
>>> +		break;
>>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>>> +	default:
>>> +		region = iommu_alloc_resv_region(addr, size, 0,
>>> +						 IOMMU_RESV_RESERVED);
>> same.
> 
> I'll add them, thanks
> 
>> There is another issue related to the exclusion of iovas belonging to
>> reserved regions. Typically on x86, when attempting to run
>> virtio-blk-pci with iommu I eventually saw the driver using iova
>> belonging to the IOAPIC regions to map phys addr and this stalled qemu
>> with a drown trace:
>>
>> "virtio: bogus descriptor or out of resources"
>>
>> those regions need to be excluded from the iova allocator. This was
>> resolved by adding
>> if (iommu_dma_init_domain(domain,
>> 			  vdev->viommu->geometry.aperture_start,
>> 			  vdev->viommu->geometry.aperture_end,
>> 			  dev))
>> in viommu_attach_dev()
> 
> The most recent hack for x86 [1] does call iommu_dma_init_domain() in
> attach_dev(). Is it buggy?

Oh OK. Actually I have never used that branch and in my version the last
arg of iommu_dma_init_domain was NULL hence the issue. But I was
thinking more generally that care must be taken to exclude all those
regions.

Thanks

Eric
> 
> We probably shouldn't call iommu_dma_init_domain() unconditionally
> (outside of CONFIG_X86 that is), since it's normally done by the arch
> (arch/arm64/mm/dma-mapping.c)
> 
> Thanks,
> Jean
> 
> [1]
> http://www.linux-arm.org/git?p=linux-jpb.git;a=commitdiff;h=e910e224b58712151dda06df595a53ff07edef63
> on branch virtio-iommu/v0.5-x86
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org


^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2018-01-19 17:23 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-17 18:52 [RFC PATCH v2 0/5] Add virtio-iommu driver Jean-Philippe Brucker
2017-11-17 18:52 ` [virtio-dev] " Jean-Philippe Brucker
2017-11-17 18:52 ` [RFC PATCH v2 1/5] iommu: " Jean-Philippe Brucker
2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
2017-11-29 15:17   ` Jean-Philippe Brucker
2017-11-29 15:17   ` Jean-Philippe Brucker
2017-11-29 15:17     ` Jean-Philippe Brucker
2018-01-15 15:12   ` Auger Eric
2018-01-15 15:12     ` [virtio-dev] " Auger Eric
2018-01-16 17:45     ` Jean-Philippe Brucker
2018-01-16 17:45     ` Jean-Philippe Brucker
2018-01-16 17:45       ` [virtio-dev] " Jean-Philippe Brucker
2018-01-15 15:12   ` Auger Eric
2017-11-17 18:52 ` Jean-Philippe Brucker
2017-11-17 18:52 ` [RFC PATCH v2 2/5] iommu/virtio-iommu: Add probe request Jean-Philippe Brucker
2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
2018-01-16  9:25   ` Auger Eric
2018-01-16  9:25     ` [virtio-dev] " Auger Eric
2018-01-16 17:46     ` Jean-Philippe Brucker
2018-01-16 17:46     ` Jean-Philippe Brucker
2018-01-16 17:46       ` [virtio-dev] " Jean-Philippe Brucker
2018-01-16  9:25   ` Auger Eric
2018-01-16 23:26   ` Auger Eric
2018-01-16 23:26     ` [virtio-dev] " Auger Eric
2018-01-19 16:21     ` Jean-Philippe Brucker
2018-01-19 16:21       ` [virtio-dev] " Jean-Philippe Brucker
2018-01-19 17:22       ` Auger Eric
2018-01-19 17:22       ` Auger Eric
2018-01-19 17:22         ` [virtio-dev] " Auger Eric
2018-01-19 16:21     ` Jean-Philippe Brucker
2018-01-16 23:26   ` Auger Eric
2017-11-17 18:52 ` Jean-Philippe Brucker
2017-11-17 18:52 ` [RFC PATCH v2 3/5] iommu/virtio-iommu: Add event queue Jean-Philippe Brucker
2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
2018-01-16 10:10   ` Auger Eric
2018-01-16 10:10     ` [virtio-dev] " Auger Eric
2018-01-16 17:48     ` Jean-Philippe Brucker
2018-01-16 17:48       ` [virtio-dev] " Jean-Philippe Brucker
2018-01-16 17:48     ` Jean-Philippe Brucker
2018-01-16 10:10   ` Auger Eric
2017-11-17 18:52 ` Jean-Philippe Brucker
2017-11-17 18:52 ` [RFC PATCH v2 4/5] ACPI/IORT: Support paravirtualized IOMMU Jean-Philippe Brucker
2017-11-17 18:52 ` Jean-Philippe Brucker
2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
2017-11-29 15:17   ` Jean-Philippe Brucker
2017-11-29 15:17   ` Jean-Philippe Brucker
2017-11-29 15:17     ` Jean-Philippe Brucker
2017-11-17 18:52 ` [RFC PATCH v2 5/5] ACPI/IORT: Move IORT to the ACPI folder Jean-Philippe Brucker
2017-11-17 18:52   ` [virtio-dev] " Jean-Philippe Brucker
2017-11-17 18:52 ` Jean-Philippe Brucker

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.