* [PATCH v3 0/7] Add virtio-iommu driver
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	mark.rutland, eric.auger, tnowicki, kevin.tian, marc.zyngier,
	robin.murphy, will.deacon, lorenzo.pieralisi

Implement the virtio-iommu driver, following specification v0.8 [1].
Changes since v2 [2]:

* Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
  would like to phase out the MMIO transport. This produces a complex
  topology where the programming interface of the IOMMU could appear
  lower in the topology than the endpoints that it translates. That isn't
  unheard of (the AMD IOMMU works the same way), and the guest copes with
  it easily.

  The "Firmware description" section of the specification has been
  updated to cover all combinations of transport (PCI, MMIO) and
  firmware interface (DT, ACPI).

* Fix structure layouts; they no longer need the "packed" attribute.

* While we're at it, add a domain parameter to the DETACH request and
  leave some padding (see the sketch after this list). This way the next
  version, which adds PASID support, won't have to introduce a "DETACH2"
  request to stay backward compatible.

* Require virtio device 1.0+. Remove legacy transport notes from the
  specification.

* Request timeout is now only enabled with DEBUG.

* The patch for VFIO Kconfig (previously patch 5/5) is in next.
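
As an illustration of the DETACH change above, the padded request looks
roughly like this (a sketch paraphrased from the v0.8 draft, not copied
from the header, so field names and the amount of padding are
approximate):

struct virtio_iommu_req_detach {
	struct virtio_iommu_req_head	head;
	__le32				domain;		/* new in this version */
	__le32				endpoint;
	__u8				reserved[8];	/* room for e.g. a PASID later */
	struct virtio_iommu_req_tail	tail;
};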

You can find the Linux driver and kvmtool device on the virtio-iommu/v0.8
branches [3] (currently based on 4.19-rc7; rebasing onto next only produced
a trivial conflict). Branch virtio-iommu/devel contains a few patches that
I'd like to send once the base is upstream:

* virtio-iommu as a module. It got *much* nicer after Rob's probe
  deferral rework, but I still have a bug to fix when re-loading the
  virtio-iommu module.

* ACPI support requires a minor IORT spec update (reservation of node
  ID). I think it should be easier to obtain once the device and drivers
  are upstream.

[1] Virtio-iommu specification v0.8, diff from v0.7, and sources
    git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.8
    http://jpbrucker.net/virtio-iommu/spec/v0.8/virtio-iommu-v0.8.pdf
    http://jpbrucker.net/virtio-iommu/spec/diffs/virtio-iommu-pdf-diff-v0.7-v0.8.pdf

[2] [PATCH v2 0/5] Add virtio-iommu driver
    https://www.spinics.net/lists/kvm/msg170655.html

[3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.8
    git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.8

Jean-Philippe Brucker (7):
  dt-bindings: virtio-mmio: Add IOMMU description
  dt-bindings: virtio: Add virtio-pci-iommu node
  PCI: OF: allow endpoints to bypass the iommu
  PCI: OF: Initialize dev->fwnode appropriately
  iommu: Add virtio-iommu driver
  iommu/virtio: Add probe request
  iommu/virtio: Add event queue

 .../devicetree/bindings/virtio/iommu.txt      |   66 +
 .../devicetree/bindings/virtio/mmio.txt       |   30 +
 MAINTAINERS                                   |    7 +
 drivers/iommu/Kconfig                         |   11 +
 drivers/iommu/Makefile                        |    1 +
 drivers/iommu/virtio-iommu.c                  | 1171 +++++++++++++++++
 drivers/pci/of.c                              |   14 +-
 include/uapi/linux/virtio_ids.h               |    1 +
 include/uapi/linux/virtio_iommu.h             |  159 +++
 9 files changed, 1457 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

-- 
2.19.1


* [PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

The nature of a virtio-mmio node is discovered by the virtio driver at
probe time. However, the DMA relationship between devices must be described
statically. When a virtio-mmio node is a virtio-iommu device, it needs an
"#iommu-cells" property, as specified by bindings/iommu/iommu.txt.

Otherwise, the virtio-mmio device may perform DMA through an IOMMU, which
requires an "iommus" property. Describe these requirements in the
device-tree bindings documentation.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 .../devicetree/bindings/virtio/mmio.txt       | 30 +++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/Documentation/devicetree/bindings/virtio/mmio.txt b/Documentation/devicetree/bindings/virtio/mmio.txt
index 5069c1b8e193..748595473b36 100644
--- a/Documentation/devicetree/bindings/virtio/mmio.txt
+++ b/Documentation/devicetree/bindings/virtio/mmio.txt
@@ -8,10 +8,40 @@ Required properties:
 - reg:		control registers base address and size including configuration space
 - interrupts:	interrupt generated by the device
 
+Required properties for virtio-iommu:
+
+- #iommu-cells:	When the node corresponds to a virtio-iommu device, it is
+		linked to DMA masters using the "iommus" or "iommu-map"
+		properties [1][2]. #iommu-cells specifies the number of cells
+		following the phandle in each "iommus" entry. For virtio-iommu,
+		#iommu-cells must be 1, each cell describing a single endpoint ID.
+
+Optional properties:
+
+- iommus:	If the device accesses memory through an IOMMU, it should
+		have an "iommus" property [1]. Since virtio-iommu itself
+		does not access memory through an IOMMU, the "virtio,mmio"
+		node cannot have both an "#iommu-cells" and an "iommus"
+		property.
+
 Example:
 
 	virtio_block@3000 {
 		compatible = "virtio,mmio";
 		reg = <0x3000 0x100>;
 		interrupts = <41>;
+
+		/* Device has endpoint ID 23 */
+		iommus = <&viommu 23>;
 	}
+
+	viommu: virtio_iommu@3100 {
+		compatible = "virtio,mmio";
+		reg = <0x3100 0x100>;
+		interrupts = <42>;
+
+		#iommu-cells = <1>;
+	}
+
+[1] Documentation/devicetree/bindings/iommu/iommu.txt
+[2] Documentation/devicetree/bindings/pci/pci-iommu.txt
-- 
2.19.1

* [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

Some systems implement virtio-iommu as a PCI endpoint. The operating
system needs to discover the relationship between the IOMMU and its masters
long before the PCI endpoint gets probed. Add a PCI child node to describe
the virtio-iommu device.

The virtio-pci-iommu is conceptually split between a PCI programming
interface and a translation component on the parent bus. The latter
doesn't have a node in the device tree. The virtio-pci-iommu node
describes both, by linking the PCI endpoint to the "iommus" properties of
DMA master nodes and to the "iommu-map" properties of bus nodes.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 .../devicetree/bindings/virtio/iommu.txt      | 66 +++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt

diff --git a/Documentation/devicetree/bindings/virtio/iommu.txt b/Documentation/devicetree/bindings/virtio/iommu.txt
new file mode 100644
index 000000000000..2407fea0651c
--- /dev/null
+++ b/Documentation/devicetree/bindings/virtio/iommu.txt
@@ -0,0 +1,66 @@
+* virtio IOMMU PCI device
+
+When virtio-iommu uses the PCI transport, its programming interface is
+discovered dynamically by the PCI probing infrastructure. However the
+device tree statically describes the relation between IOMMU and DMA
+masters. Therefore, the PCI root complex that hosts the virtio-iommu
+contains a child node representing the IOMMU device explicitly.
+
+Required properties:
+
+- compatible:	Should be "virtio,pci-iommu"
+- reg:		PCI address of the IOMMU. As defined in the PCI Bus
+		Binding reference [1], the reg property is a five-cell
+		address encoded as (phys.hi phys.mid phys.lo size.hi
+		size.lo). phys.hi should contain the device's BDF as
+		0b00000000 bbbbbbbb dddddfff 00000000. The other cells
+		should be zero.
+- #iommu-cells:	Each platform DMA master managed by the IOMMU is assigned
+		an endpoint ID, described by the "iommus" property [2].
+		For virtio-iommu, #iommu-cells must be 1.
+
+Notes:
+
+- DMA from the IOMMU device isn't managed by another IOMMU. Therefore the
+  virtio-iommu node doesn't have an "iommus" property, and is omitted from
+  the iommu-map property of the root complex.
+
+Example:
+
+pcie@10000000 {
+	compatible = "pci-host-ecam-generic";
+	...
+
+	/* The IOMMU programming interface uses slot 00:01.0 */
+	iommu0: iommu@0008 {
+		compatible = "virtio,pci-iommu";
+		reg = <0x00000800 0 0 0 0>;
+		#iommu-cells = <1>;
+	};
+
+	/*
+	 * The IOMMU manages all functions in this PCI domain except
+	 * itself. Omit BDF 00:01.0.
+	 */
+	iommu-map = <0x0 &iommu0 0x0 0x8>
+		    <0x9 &iommu0 0x9 0xfff7>;
+};
+
+pcie@20000000 {
+	compatible = "pci-host-ecam-generic";
+	...
+	/*
+	 * The IOMMU also manages all functions from this domain,
+	 * with endpoint IDs 0x10000 - 0x1ffff
+	 */
+	iommu-map = <0x0 &iommu0 0x10000 0x10000>;
+};
+
+ethernet@fe001000 {
+	...
+	/* The IOMMU manages this platform device with endpoint ID 0x20000 */
+	iommus = <&iommu0 0x20000>;
+};
+
+[1] Documentation/devicetree/bindings/pci/pci.txt
+[2] Documentation/devicetree/bindings/iommu/iommu.txt
-- 
2.19.1

* [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

Using the iommu-map binding, endpoints in a given PCI domain can be
managed by different IOMMUs. Some virtual machines may allow a subset of
endpoints to bypass the IOMMU. In some cases the IOMMU itself is presented
as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
PCI root complex has an iommu-map property, the driver requires all
endpoints to be described by the property. Allow the iommu-map property to
have gaps.

Relaxing of_pci_map_rid also allows the msi-map property to have gaps,
which is invalid since MSIs always reach an MSI controller. Thankfully
Linux will error out later, when attempting to find an MSI domain for the
device.
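
To illustrate the new behaviour for a requester ID that has no entry in
iommu-map (a caller-side sketch only, not part of this patch; the bridge
node and the RID below are made up):

struct device_node *iommu_np = NULL;
u32 endpoint_id;
int ret;

/* RID 0x08 is deliberately absent from this bridge's iommu-map */
ret = of_pci_map_rid(bridge_np, 0x08, "iommu-map", "iommu-map-mask",
		     &iommu_np, &endpoint_id);

/*
 * Before this patch: ret == -EFAULT, after printing an "Invalid iommu-map
 * translation" error. After: ret == 0, the target node is left untouched
 * and endpoint_id == 0x08, i.e. the endpoint simply bypasses translation.
 */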

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/pci/of.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/of.c b/drivers/pci/of.c
index 1836b8ddf292..2f5015bdb256 100644
--- a/drivers/pci/of.c
+++ b/drivers/pci/of.c
@@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
 		return 0;
 	}
 
-	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
-		np, map_name, rid, target && *target ? *target : NULL);
-	return -EFAULT;
+	/* Bypasses translation */
+	if (id_out)
+		*id_out = rid;
+	return 0;
 }
 
 #if IS_ENABLED(CONFIG_OF_IRQ)
-- 
2.19.1

* [PATCH v3 4/7] PCI: OF: Initialize dev->fwnode appropriately
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

For PCI devices that have an OF node, set the fwnode as well. This way
drivers that rely on fwnode don't need the special case described by
commit f94277af03ea ("of/platform: Initialise dev->fwnode appropriately").
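
As an example of a driver that relies on fwnode, patch 5/7 of this series
looks up the virtio-iommu instance behind an endpoint by comparing fwnode
handles. With dev->fwnode now populated for PCI devices, the same lookup
works whether the virtio-iommu uses the MMIO or the PCI transport (snippet
reproduced from patch 5/7 for context):

static int viommu_match_node(struct device *dev, void *data)
{
	return dev->parent->fwnode == data;
}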

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/pci/of.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/pci/of.c b/drivers/pci/of.c
index 2f5015bdb256..8026417fab38 100644
--- a/drivers/pci/of.c
+++ b/drivers/pci/of.c
@@ -21,12 +21,15 @@ void pci_set_of_node(struct pci_dev *dev)
 		return;
 	dev->dev.of_node = of_pci_find_child_device(dev->bus->dev.of_node,
 						    dev->devfn);
+	if (dev->dev.of_node)
+		dev->dev.fwnode = &dev->dev.of_node->fwnode;
 }
 
 void pci_release_of_node(struct pci_dev *dev)
 {
 	of_node_put(dev->dev.of_node);
 	dev->dev.of_node = NULL;
+	dev->dev.fwnode = NULL;
 }
 
 void pci_set_bus_of_node(struct pci_bus *bus)
@@ -35,12 +38,16 @@ void pci_set_bus_of_node(struct pci_bus *bus)
 		bus->dev.of_node = pcibios_get_phb_of_node(bus);
 	else
 		bus->dev.of_node = of_node_get(bus->self->dev.of_node);
+
+	if (bus->dev.of_node)
+		bus->dev.fwnode = &bus->dev.of_node->fwnode;
 }
 
 void pci_release_bus_of_node(struct pci_bus *bus)
 {
 	of_node_put(bus->dev.of_node);
 	bus->dev.of_node = NULL;
+	bus->dev.fwnode = NULL;
 }
 
 struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus)
-- 
2.19.1

* [PATCH v3 5/7] iommu: Add virtio-iommu driver
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

The virtio IOMMU is a para-virtualized device that allows the guest to send
IOMMU requests such as map/unmap over the virtio transport without emulating
page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code transforms calls coming from the IOMMU API into
corresponding virtio requests. Mappings are kept in an interval tree
instead of page tables.
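
For readers skimming the diff: every request is framed as a head carrying
the request type, a type-specific payload, and a tail that the device
writes back with a status. A rough sketch of the structures involved,
paraphrased from the v0.8 spec rather than copied from the new
virtio_iommu.h (exact field sizes may differ):

struct virtio_iommu_req_head {
	__u8	type;		/* VIRTIO_IOMMU_T_ATTACH/DETACH/MAP/UNMAP */
	__u8	reserved[3];
};

struct virtio_iommu_req_tail {
	__u8	status;		/* VIRTIO_IOMMU_S_*, written by the device */
	__u8	reserved[3];
};

/* e.g. a MAP request: head, parameters, tail */
struct virtio_iommu_req_map {
	struct virtio_iommu_req_head	head;
	__le32				domain;
	__le64				virt_start;
	__le64				virt_end;
	__le64				phys_start;
	__le32				flags;
	struct virtio_iommu_req_tail	tail;
};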

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 MAINTAINERS                       |   7 +
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 101 ++++
 6 files changed, 1059 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 48a65c3a4189..f02fa65f47e2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15599,6 +15599,13 @@ S:	Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTIO IOMMU DRIVER
+M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+L:	virtualization@lists.linux-foundation.org
+S:	Maintained
+F:	drivers/iommu/virtio-iommu.c
+F:	include/uapi/linux/virtio_iommu.h
+
 VIRTUAL BOX GUEST DEVICE DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 M:	Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index c60395b7470f..2dc016dc2b92 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -414,4 +414,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO=y
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index ab5eba6edf82..4cd643408e49 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..9fb38cd3b727
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,938 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2018 Arm Limited
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+#define VIOMMU_REQUEST_VQ		0
+#define VIOMMU_NR_VQS			1
+
+/*
+ * During development, it is convenient to time out rather than wait
+ * indefinitely in atomic context when a device misbehaves and a request doesn't
+ * return. In production however, some requests shouldn't return until they are
+ * successful.
+ */
+#ifdef DEBUG
+#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
+#endif
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vqs[VIOMMU_NR_VQS];
+	spinlock_t			request_lock;
+	struct list_head		requests;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	u32				flags;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	unsigned long			nr_endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct list_head		list;
+	void				*writeback;
+	unsigned int			write_offset;
+	unsigned int			len;
+	char				buf[];
+};
+
+#define to_viommu_domain(domain)	\
+	container_of(domain, struct viommu_domain, domain)
+
+static int viommu_get_req_errno(void *buf, size_t len)
+{
+	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
+
+	switch (tail->status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+static void viommu_set_req_status(void *buf, size_t len, int status)
+{
+	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
+
+	tail->status = status;
+}
+
+static off_t viommu_get_req_offset(struct viommu_dev *viommu,
+				   struct virtio_iommu_req_head *req,
+				   size_t len)
+{
+	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
+
+	return len - tail_size;
+}
+
+/*
+ * __viommu_sync_req - Complete all in-flight requests
+ *
+ * Wait for all added requests to complete. When this function returns, all
+ * requests that were in-flight at the time of the call have completed.
+ */
+static int __viommu_sync_req(struct viommu_dev *viommu)
+{
+	int ret = 0;
+	unsigned int len;
+	size_t write_len;
+	ktime_t timeout = 0;
+	struct viommu_request *req;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
+
+	assert_spin_locked(&viommu->request_lock);
+#ifdef DEBUG
+	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
+#endif
+	virtqueue_kick(vq);
+
+	while (!list_empty(&viommu->requests)) {
+		len = 0;
+		req = virtqueue_get_buf(vq, &len);
+		if (req == NULL) {
+			if (!timeout || ktime_before(ktime_get(), timeout))
+				continue;
+
+			/* After timeout, remove all requests */
+			req = list_first_entry(&viommu->requests,
+					       struct viommu_request, list);
+			ret = -ETIMEDOUT;
+		}
+
+		if (!len)
+			viommu_set_req_status(req->buf, req->len,
+					      VIRTIO_IOMMU_S_IOERR);
+
+		write_len = req->len - req->write_offset;
+		if (req->writeback && len >= write_len)
+			memcpy(req->writeback, req->buf + req->write_offset,
+			       write_len);
+
+		list_del(&req->list);
+		kfree(req);
+	}
+
+	return ret;
+}
+
+static int viommu_sync_req(struct viommu_dev *viommu)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+	ret = __viommu_sync_req(viommu);
+	if (ret)
+		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+	return ret;
+}
+
+/*
+ * __viommu_add_req - Add one request to the queue
+ * @buf: pointer to the request buffer
+ * @len: length of the request buffer
+ * @writeback: copy data back to the buffer when the request completes.
+ *
+ * Add a request to the queue. Only synchronize the queue if it's already full.
+ * Otherwise don't kick the queue nor wait for requests to complete.
+ *
+ * When @writeback is true, data written by the device, including the request
+ * status, is copied into @buf after the request completes. This is unsafe if
+ * the caller allocates @buf on stack and drops the lock between add_req() and
+ * sync_req().
+ *
+ * Return 0 if the request was successfully added to the queue.
+ */
+static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
+			    bool writeback)
+{
+	int ret;
+	off_t write_offset;
+	struct viommu_request *req;
+	struct scatterlist top_sg, bottom_sg;
+	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
+
+	assert_spin_locked(&viommu->request_lock);
+
+	write_offset = viommu_get_req_offset(viommu, buf, len);
+	if (!write_offset)
+		return -EINVAL;
+
+	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
+	if (!req)
+		return -ENOMEM;
+
+	req->len = len;
+	if (writeback) {
+		req->writeback = buf + write_offset;
+		req->write_offset = write_offset;
+	}
+	memcpy(&req->buf, buf, write_offset);
+
+	sg_init_one(&top_sg, req->buf, write_offset);
+	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
+
+	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
+	if (ret == -ENOSPC) {
+		/* If the queue is full, sync and retry */
+		if (!__viommu_sync_req(viommu))
+			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
+	}
+	if (ret)
+		goto err_free;
+
+	list_add_tail(&req->list, &viommu->requests);
+	return 0;
+
+err_free:
+	kfree(req);
+	return ret;
+}
+
+static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+	ret = __viommu_add_req(viommu, buf, len, false);
+	if (ret)
+		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+	return ret;
+}
+
+/*
+ * Send a request and wait for it to complete. Return the request status (as an
+ * errno)
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
+				size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+
+	ret = __viommu_add_req(viommu, buf, len, true);
+	if (ret) {
+		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
+		goto out_unlock;
+	}
+
+	ret = __viommu_sync_req(viommu);
+	if (ret) {
+		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
+		/* Fall-through (get the actual request status) */
+	}
+
+	ret = viommu_get_req_errno(buf, len);
+out_unlock:
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+	return ret;
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size, u32 flags)
+{
+	unsigned long irqflags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+	mapping->flags		= flags;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				  unsigned long iova, size_t size)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+		next = interval_tree_iter_next(node, iova, last);
+
+		/* Trying to split a mapping? */
+		if (mapping->iova.start < iova)
+			break;
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+		kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all endpoints,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	int ret;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct virtio_iommu_req_map map;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		map = (struct virtio_iommu_req_map) {
+			.head.type	= VIRTIO_IOMMU_T_MAP,
+			.domain		= cpu_to_le32(vdomain->id),
+			.virt_start	= cpu_to_le64(mapping->iova.start),
+			.virt_end	= cpu_to_le64(mapping->iova.last),
+			.phys_start	= cpu_to_le64(mapping->paddr),
+			.flags		= cpu_to_le32(mapping->flags),
+		};
+
+		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+		if (ret)
+			break;
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
+				  (1U << viommu->domain_bits) - 1;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
+	if (ret >= 0)
+		vdomain->id = (unsigned int)ret;
+
+	return ret > 0 ? 0 : ret;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0);
+
+	if (vdomain->viommu)
+		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * In the virtio-iommu device, when attaching the endpoint to a new
+	 * domain, it is detached from the old one and, if as a result the
+	 * old domain isn't attached to any endpoint, all mappings are removed
+	 * from the old domain and it is freed.
+	 *
+	 * In the driver the old domain still exists, and its mappings will be
+	 * recreated if it gets reattached to an endpoint. Otherwise it will be
+	 * freed explicitly.
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		vdev->vdomain->nr_endpoints--;
+
+	req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req.endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
+		if (ret)
+			return ret;
+	}
+
+	if (!vdomain->nr_endpoints) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings (e.g. SW MSI).
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	vdomain->nr_endpoints++;
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct virtio_iommu_req_map map;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
+		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
+	if (!mapping)
+		return -ENOMEM;
+
+	map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.phys_start	= cpu_to_le64(paddr),
+		.virt_end	= cpu_to_le64(iova + size - 1),
+		.flags		= cpu_to_le32(flags),
+	};
+
+	if (!vdomain->nr_endpoints)
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct virtio_iommu_req_unmap unmap;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size);
+	if (unmapped < size)
+		return 0;
+
+	/* Device already removed all mappings after detach. */
+	if (!vdomain->nr_endpoints)
+		return unmapped;
+
+	unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.virt_end	= cpu_to_le64(iova + unmapped - 1),
+	};
+
+	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
+	return ret ? 0 : unmapped;
+}
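
A worked example of the no-split behaviour (hypothetical addresses): with a
single 16 KiB mapping at IOVA 0x100000, unmapping only the first 4 KiB makes
viommu_del_mappings() drop the whole mapping and return 0x4000, so the queued
request covers the full range:

	struct virtio_iommu_req_unmap unmap = {
		.head.type	= VIRTIO_IOMMU_T_UNMAP,
		.domain		= cpu_to_le32(1),
		.virt_start	= cpu_to_le64(0x100000),
		.virt_end	= cpu_to_le64(0x103fff),	/* iova + unmapped - 1 */
	};

The request is only added to the queue at this point; it reaches the device
when viommu_iotlb_sync() flushes the request queue.
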
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
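
For example, with a mapping of IOVA 0x100000 to PA 0x80001000 (hypothetical
values), viommu_iova_to_phys(domain, 0x100234) returns 0x80001234, i.e. paddr
plus the offset into the mapping; an IOVA covered by no mapping yields 0.
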
+
+static void viommu_iotlb_sync(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	viommu_sync_req(vdomain->viommu);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+	iommu_dma_get_resv_regions(dev, head);
+}
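
With the constants defined at the top of the file, the software MSI region
reported here spans IOVAs 0x08000000 to 0x080fffff (1 MiB, write-only,
no-exec, MMIO). Its IOMMU_RESV_SW_MSI type tells the DMA/MSI layer where it
may map MSI doorbells for endpoints behind the virtio-iommu.
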
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	int ret;
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	ret = iommu_device_link(&viommu->iommu, dev);
+	if (ret)
+		goto err_free_dev;
+
+	/*
+	 * The last step creates a default domain and attaches the device to
+	 * it, so everything must be ready by this point.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		goto err_unlink_dev;
+	}
+
+	iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+
+err_unlink_dev:
+	iommu_device_unlink(&viommu->iommu, dev);
+
+err_free_dev:
+	kfree(vdev);
+
+	return ret;
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	struct viommu_endpoint *vdev;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	iommu_group_remove_device(dev);
+	iommu_device_unlink(&vdev->viommu->iommu, dev);
+	kfree(vdev);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static struct iommu_ops viommu_ops = {
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.iotlb_sync		= viommu_iotlb_sync,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+	.of_xlate		= viommu_of_xlate,
+};
+
+static int viommu_init_vqs(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
+		return -ENODEV;
+
+	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+	INIT_LIST_HEAD(&viommu->requests);
+
+	ret = viommu_init_vqs(viommu);
+	if (ret)
+		return ret;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_vqs;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_vqs;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+err_free_vqs:
+	vdev->config->del_vqs(vdev);
+
+	return ret;
+}
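
If the device offers neither VIRTIO_IOMMU_F_INPUT_RANGE nor
VIRTIO_IOMMU_F_DOMAIN_BITS, virtio_cread_feature() leaves the defaults in
place, so the resulting configuration is equivalent to:

	viommu->geometry = (struct iommu_domain_geometry) {
		.aperture_start	= 0,
		.aperture_end	= -1UL,
		.force_aperture	= true,
	};
	viommu->domain_bits = 32;	/* domain IDs 0 to UINT_MAX */
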
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+
+	/* Stop all virtqueues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..cfe47c5d9a56 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..e808fc7fbe82
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/*
+ * Virtio-iommu definition v0.8
+ *
+ * Copyright (C) 2018 Arm Ltd.
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+#include <linux/types.h>
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8					domain_bits;
+};
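
All fields here are naturally aligned, so the config space layout read through
virtio_cread() is:

	offset 0x00: page_size_mask
	offset 0x08: input_range.start
	offset 0x10: input_range.end
	offset 0x18: domain_bits
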
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+};
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+};
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le32					endpoint;
+	__u8					reserved[8];
+	struct virtio_iommu_req_tail		tail;
+};
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le32					endpoint;
+	__u8					reserved[8];
+	struct virtio_iommu_req_tail		tail;
+};
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC |	\
+						 VIRTIO_IOMMU_MAP_F_MMIO)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le64					phys_start;
+	__le32					flags;
+	struct virtio_iommu_req_tail		tail;
+};
+
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__u8					reserved[4];
+	struct virtio_iommu_req_tail		tail;
+};
+
+#endif
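
All of these requests share the same framing on the request virtqueue: the
head and the request-specific fields are handed to the device in a readable
descriptor, and the tail is a separate device-writable descriptor that the
device fills in with the status (__viommu_add_req() splits the buffer at
len - sizeof(struct virtio_iommu_req_tail)):

	device-readable: [ head | request-specific fields ]
	device-writable: [ tail ]
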
-- 
2.19.1

* [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-10-12 14:59 ` Jean-Philippe Brucker
                   ` (9 preceding siblings ...)
  (?)
@ 2018-10-12 14:59 ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, linux-pci, will.deacon, kvmarm, eric.auger,
	robh+dt, robin.murphy, joro

The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
requests such as map/unmap over virtio transport without emulating page
tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
requests.

The bulk of the code transforms calls coming from the IOMMU API into
corresponding virtio requests. Mappings are kept in an interval tree
instead of page tables.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 MAINTAINERS                       |   7 +
 drivers/iommu/Kconfig             |  11 +
 drivers/iommu/Makefile            |   1 +
 drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h   |   1 +
 include/uapi/linux/virtio_iommu.h | 101 ++++
 6 files changed, 1059 insertions(+)
 create mode 100644 drivers/iommu/virtio-iommu.c
 create mode 100644 include/uapi/linux/virtio_iommu.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 48a65c3a4189..f02fa65f47e2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -15599,6 +15599,13 @@ S:	Maintained
 F:	drivers/virtio/virtio_input.c
 F:	include/uapi/linux/virtio_input.h
 
+VIRTIO IOMMU DRIVER
+M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
+L:	virtualization@lists.linux-foundation.org
+S:	Maintained
+F:	drivers/iommu/virtio-iommu.c
+F:	include/uapi/linux/virtio_iommu.h
+
 VIRTUAL BOX GUEST DEVICE DRIVER
 M:	Hans de Goede <hdegoede@redhat.com>
 M:	Arnd Bergmann <arnd@arndb.de>
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index c60395b7470f..2dc016dc2b92 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -414,4 +414,15 @@ config QCOM_IOMMU
 	help
 	  Support for IOMMU on certain Qualcomm SoCs.
 
+config VIRTIO_IOMMU
+	bool "Virtio IOMMU driver"
+	depends on VIRTIO=y
+	select IOMMU_API
+	select INTERVAL_TREE
+	select ARM_DMA_USE_IOMMU if ARM
+	help
+	  Para-virtualised IOMMU driver with virtio.
+
+	  Say Y here if you intend to run this kernel as a guest.
+
 endif # IOMMU_SUPPORT
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index ab5eba6edf82..4cd643408e49 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
 obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
 obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
+obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
new file mode 100644
index 000000000000..9fb38cd3b727
--- /dev/null
+++ b/drivers/iommu/virtio-iommu.c
@@ -0,0 +1,938 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Virtio driver for the paravirtualized IOMMU
+ *
+ * Copyright (C) 2018 Arm Limited
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/amba/bus.h>
+#include <linux/delay.h>
+#include <linux/dma-iommu.h>
+#include <linux/freezer.h>
+#include <linux/interval_tree.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/of_iommu.h>
+#include <linux/of_platform.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio_ids.h>
+#include <linux/wait.h>
+
+#include <uapi/linux/virtio_iommu.h>
+
+#define MSI_IOVA_BASE			0x8000000
+#define MSI_IOVA_LENGTH			0x100000
+
+#define VIOMMU_REQUEST_VQ		0
+#define VIOMMU_NR_VQS			1
+
+/*
+ * During development, it is convenient to time out rather than wait
+ * indefinitely in atomic context when a device misbehaves and a request doesn't
+ * return. In production however, some requests shouldn't return until they are
+ * successful.
+ */
+#ifdef DEBUG
+#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
+#endif
+
+struct viommu_dev {
+	struct iommu_device		iommu;
+	struct device			*dev;
+	struct virtio_device		*vdev;
+
+	struct ida			domain_ids;
+
+	struct virtqueue		*vqs[VIOMMU_NR_VQS];
+	spinlock_t			request_lock;
+	struct list_head		requests;
+
+	/* Device configuration */
+	struct iommu_domain_geometry	geometry;
+	u64				pgsize_bitmap;
+	u8				domain_bits;
+};
+
+struct viommu_mapping {
+	phys_addr_t			paddr;
+	struct interval_tree_node	iova;
+	u32				flags;
+};
+
+struct viommu_domain {
+	struct iommu_domain		domain;
+	struct viommu_dev		*viommu;
+	struct mutex			mutex;
+	unsigned int			id;
+
+	spinlock_t			mappings_lock;
+	struct rb_root_cached		mappings;
+
+	unsigned long			nr_endpoints;
+};
+
+struct viommu_endpoint {
+	struct viommu_dev		*viommu;
+	struct viommu_domain		*vdomain;
+};
+
+struct viommu_request {
+	struct list_head		list;
+	void				*writeback;
+	unsigned int			write_offset;
+	unsigned int			len;
+	char				buf[];
+};
+
+#define to_viommu_domain(domain)	\
+	container_of(domain, struct viommu_domain, domain)
+
+static int viommu_get_req_errno(void *buf, size_t len)
+{
+	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
+
+	switch (tail->status) {
+	case VIRTIO_IOMMU_S_OK:
+		return 0;
+	case VIRTIO_IOMMU_S_UNSUPP:
+		return -ENOSYS;
+	case VIRTIO_IOMMU_S_INVAL:
+		return -EINVAL;
+	case VIRTIO_IOMMU_S_RANGE:
+		return -ERANGE;
+	case VIRTIO_IOMMU_S_NOENT:
+		return -ENOENT;
+	case VIRTIO_IOMMU_S_FAULT:
+		return -EFAULT;
+	case VIRTIO_IOMMU_S_IOERR:
+	case VIRTIO_IOMMU_S_DEVERR:
+	default:
+		return -EIO;
+	}
+}
+
+static void viommu_set_req_status(void *buf, size_t len, int status)
+{
+	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
+
+	tail->status = status;
+}
+
+static off_t viommu_get_req_offset(struct viommu_dev *viommu,
+				   struct virtio_iommu_req_head *req,
+				   size_t len)
+{
+	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
+
+	return len - tail_size;
+}
+
+/*
+ * __viommu_sync_req - Complete all in-flight requests
+ *
+ * Wait for all added requests to complete. When this function returns, all
+ * requests that were in-flight at the time of the call have completed.
+ */
+static int __viommu_sync_req(struct viommu_dev *viommu)
+{
+	int ret = 0;
+	unsigned int len;
+	size_t write_len;
+	ktime_t timeout = 0;
+	struct viommu_request *req;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
+
+	assert_spin_locked(&viommu->request_lock);
+#ifdef DEBUG
+	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
+#endif
+	virtqueue_kick(vq);
+
+	while (!list_empty(&viommu->requests)) {
+		len = 0;
+		req = virtqueue_get_buf(vq, &len);
+		if (req == NULL) {
+			if (!timeout || ktime_before(ktime_get(), timeout))
+				continue;
+
+			/* After timeout, remove all requests */
+			req = list_first_entry(&viommu->requests,
+					       struct viommu_request, list);
+			ret = -ETIMEDOUT;
+		}
+
+		if (!len)
+			viommu_set_req_status(req->buf, req->len,
+					      VIRTIO_IOMMU_S_IOERR);
+
+		write_len = req->len - req->write_offset;
+		if (req->writeback && len >= write_len)
+			memcpy(req->writeback, req->buf + req->write_offset,
+			       write_len);
+
+		list_del(&req->list);
+		kfree(req);
+	}
+
+	return ret;
+}
+
+static int viommu_sync_req(struct viommu_dev *viommu)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+	ret = __viommu_sync_req(viommu);
+	if (ret)
+		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+	return ret;
+}
+
+/*
+ * __viommu_add_request - Add one request to the queue
+ * @buf: pointer to the request buffer
+ * @len: length of the request buffer
+ * @writeback: copy data back to the buffer when the request completes.
+ *
+ * Add a request to the queue. Only synchronize the queue if it's already full.
+ * Otherwise don't kick the queue nor wait for requests to complete.
+ *
+ * When @writeback is true, data written by the device, including the request
+ * status, is copied into @buf after the request completes. This is unsafe if
+ * the caller allocates @buf on stack and drops the lock between add_req() and
+ * sync_req().
+ *
+ * Return 0 if the request was successfully added to the queue.
+ */
+static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
+			    bool writeback)
+{
+	int ret;
+	off_t write_offset;
+	struct viommu_request *req;
+	struct scatterlist top_sg, bottom_sg;
+	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
+	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
+
+	assert_spin_locked(&viommu->request_lock);
+
+	write_offset = viommu_get_req_offset(viommu, buf, len);
+	if (!write_offset)
+		return -EINVAL;
+
+	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
+	if (!req)
+		return -ENOMEM;
+
+	req->len = len;
+	if (writeback) {
+		req->writeback = buf + write_offset;
+		req->write_offset = write_offset;
+	}
+	memcpy(&req->buf, buf, write_offset);
+
+	sg_init_one(&top_sg, req->buf, write_offset);
+	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
+
+	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
+	if (ret == -ENOSPC) {
+		/* If the queue is full, sync and retry */
+		if (!__viommu_sync_req(viommu))
+			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
+	}
+	if (ret)
+		goto err_free;
+
+	list_add_tail(&req->list, &viommu->requests);
+	return 0;
+
+err_free:
+	kfree(req);
+	return ret;
+}
+
+static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+	ret = __viommu_add_req(viommu, buf, len, false);
+	if (ret)
+		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+
+	return ret;
+}
+
+/*
+ * Send a request and wait for it to complete. Return the request status (as an
+ * errno)
+ */
+static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
+				size_t len)
+{
+	int ret;
+	unsigned long flags;
+
+	spin_lock_irqsave(&viommu->request_lock, flags);
+
+	ret = __viommu_add_req(viommu, buf, len, true);
+	if (ret) {
+		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
+		goto out_unlock;
+	}
+
+	ret = __viommu_sync_req(viommu);
+	if (ret) {
+		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
+		/* Fall-through (get the actual request status) */
+	}
+
+	ret = viommu_get_req_errno(buf, len);
+out_unlock:
+	spin_unlock_irqrestore(&viommu->request_lock, flags);
+	return ret;
+}
+
+/*
+ * viommu_add_mapping - add a mapping to the internal tree
+ *
+ * On success, return the new mapping. Otherwise return NULL.
+ */
+static struct viommu_mapping *
+viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
+		   phys_addr_t paddr, size_t size, u32 flags)
+{
+	unsigned long irqflags;
+	struct viommu_mapping *mapping;
+
+	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
+	if (!mapping)
+		return NULL;
+
+	mapping->paddr		= paddr;
+	mapping->iova.start	= iova;
+	mapping->iova.last	= iova + size - 1;
+	mapping->flags		= flags;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
+	interval_tree_insert(&mapping->iova, &vdomain->mappings);
+	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
+
+	return mapping;
+}
+
+/*
+ * viommu_del_mappings - remove mappings from the internal tree
+ *
+ * @vdomain: the domain
+ * @iova: start of the range
+ * @size: size of the range. A size of 0 corresponds to the entire address
+ *	space.
+ *
+ * On success, returns the number of unmapped bytes (>= size)
+ */
+static size_t viommu_del_mappings(struct viommu_domain *vdomain,
+				  unsigned long iova, size_t size)
+{
+	size_t unmapped = 0;
+	unsigned long flags;
+	unsigned long last = iova + size - 1;
+	struct viommu_mapping *mapping = NULL;
+	struct interval_tree_node *node, *next;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
+	while (next) {
+		node = next;
+		mapping = container_of(node, struct viommu_mapping, iova);
+		next = interval_tree_iter_next(node, iova, last);
+
+		/* Trying to split a mapping? */
+		if (mapping->iova.start < iova)
+			break;
+
+		/*
+		 * Note that for a partial range, this will return the full
+		 * mapping so we avoid sending split requests to the device.
+		 */
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &vdomain->mappings);
+		kfree(mapping);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/*
+ * viommu_replay_mappings - re-send MAP requests
+ *
+ * When reattaching a domain that was previously detached from all endpoints,
+ * mappings were deleted from the device. Re-create the mappings available in
+ * the internal tree.
+ */
+static int viommu_replay_mappings(struct viommu_domain *vdomain)
+{
+	int ret;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct virtio_iommu_req_map map;
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
+	while (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		map = (struct virtio_iommu_req_map) {
+			.head.type	= VIRTIO_IOMMU_T_MAP,
+			.domain		= cpu_to_le32(vdomain->id),
+			.virt_start	= cpu_to_le64(mapping->iova.start),
+			.virt_end	= cpu_to_le64(mapping->iova.last),
+			.phys_start	= cpu_to_le64(mapping->paddr),
+			.flags		= cpu_to_le32(mapping->flags),
+		};
+
+		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+		if (ret)
+			break;
+
+		node = interval_tree_iter_next(node, 0, -1UL);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return ret;
+}
+
+/* IOMMU API */
+
+static struct iommu_domain *viommu_domain_alloc(unsigned type)
+{
+	struct viommu_domain *vdomain;
+
+	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+		return NULL;
+
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return NULL;
+
+	mutex_init(&vdomain->mutex);
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
+	if (type == IOMMU_DOMAIN_DMA &&
+	    iommu_get_dma_cookie(&vdomain->domain)) {
+		kfree(vdomain);
+		return NULL;
+	}
+
+	return &vdomain->domain;
+}
+
+static int viommu_domain_finalise(struct viommu_dev *viommu,
+				  struct iommu_domain *domain)
+{
+	int ret;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
+				  (1U << viommu->domain_bits) - 1;
+
+	vdomain->viommu		= viommu;
+
+	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+	domain->geometry	= viommu->geometry;
+
+	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	vdomain->id = (unsigned int)ret;
+
+	return 0;
+}
+
+static void viommu_domain_free(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	iommu_put_dma_cookie(domain);
+
+	/* Free all remaining mappings (size 2^64) */
+	viommu_del_mappings(vdomain, 0, 0);
+
+	if (vdomain->viommu)
+		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
+
+	kfree(vdomain);
+}
+
+static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	int i;
+	int ret = 0;
+	struct virtio_iommu_req_attach req;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	mutex_lock(&vdomain->mutex);
+	if (!vdomain->viommu) {
+		/*
+		 * Initialize the domain proper now that we know which viommu
+		 * owns it.
+		 */
+		ret = viommu_domain_finalise(vdev->viommu, domain);
+	} else if (vdomain->viommu != vdev->viommu) {
+		dev_err(dev, "cannot attach to foreign vIOMMU\n");
+		ret = -EXDEV;
+	}
+	mutex_unlock(&vdomain->mutex);
+
+	if (ret)
+		return ret;
+
+	/*
+	 * In the virtio-iommu device, when attaching the endpoint to a new
+	 * domain, it is detached from the old one and, if as a result the
+	 * old domain isn't attached to any endpoint, all mappings are removed
+	 * from the old domain and it is freed.
+	 *
+	 * In the driver the old domain still exists, and its mappings will be
+	 * recreated if it gets reattached to an endpoint. Otherwise it will be
+	 * freed explicitly.
+	 *
+	 * vdev->vdomain is protected by group->mutex
+	 */
+	if (vdev->vdomain)
+		vdev->vdomain->nr_endpoints--;
+
+	req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdomain->id),
+	};
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req.endpoint = cpu_to_le32(fwspec->ids[i]);
+
+		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
+		if (ret)
+			return ret;
+	}
+
+	if (!vdomain->nr_endpoints) {
+		/*
+		 * This endpoint is the first to be attached to the domain.
+		 * Replay existing mappings (e.g. SW MSI).
+		 */
+		ret = viommu_replay_mappings(vdomain);
+		if (ret)
+			return ret;
+	}
+
+	vdomain->nr_endpoints++;
+	vdev->vdomain = vdomain;
+
+	return 0;
+}
+
+static int viommu_map(struct iommu_domain *domain, unsigned long iova,
+		      phys_addr_t paddr, size_t size, int prot)
+{
+	int ret;
+	int flags;
+	struct viommu_mapping *mapping;
+	struct virtio_iommu_req_map map;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
+		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
+		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
+
+	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
+	if (!mapping)
+		return -ENOMEM;
+
+	map = (struct virtio_iommu_req_map) {
+		.head.type	= VIRTIO_IOMMU_T_MAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.phys_start	= cpu_to_le64(paddr),
+		.virt_end	= cpu_to_le64(iova + size - 1),
+		.flags		= cpu_to_le32(flags),
+	};
+
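+	/*
+	 * If the domain isn't attached to any endpoint yet, keep the mapping
+	 * in the internal tree only; viommu_replay_mappings() will send it to
+	 * the device once an endpoint is attached.
+	 */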
+	if (!vdomain->nr_endpoints)
+		return 0;
+
+	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
+	if (ret)
+		viommu_del_mappings(vdomain, iova, size);
+
+	return ret;
+}
+
+static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
+			   size_t size)
+{
+	int ret = 0;
+	size_t unmapped;
+	struct virtio_iommu_req_unmap unmap;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	unmapped = viommu_del_mappings(vdomain, iova, size);
+	if (unmapped < size)
+		return 0;
+
+	/* Device already removed all mappings after detach. */
+	if (!vdomain->nr_endpoints)
+		return unmapped;
+
+	unmap = (struct virtio_iommu_req_unmap) {
+		.head.type	= VIRTIO_IOMMU_T_UNMAP,
+		.domain		= cpu_to_le32(vdomain->id),
+		.virt_start	= cpu_to_le64(iova),
+		.virt_end	= cpu_to_le64(iova + unmapped - 1),
+	};
+
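+	/*
+	 * The UNMAP request is only added to the queue here; it is sent to the
+	 * device when the core calls viommu_iotlb_sync().
+	 */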
+	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
+	return ret ? 0 : unmapped;
+}
+
+static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
+				       dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct viommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	spin_lock_irqsave(&vdomain->mappings_lock, flags);
+	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct viommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
+
+	return paddr;
+}
+
+static void viommu_iotlb_sync(struct iommu_domain *domain)
+{
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	viommu_sync_req(vdomain->viommu);
+}
+
+static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *region;
+	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
+					 IOMMU_RESV_SW_MSI);
+	if (!region)
+		return;
+
+	list_add_tail(&region->list, head);
+	iommu_dma_get_resv_regions(dev, head);
+}
+
+static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *entry, *next;
+
+	list_for_each_entry_safe(entry, next, head, list)
+		kfree(entry);
+}
+
+static struct iommu_ops viommu_ops;
+static struct virtio_driver virtio_iommu_drv;
+
+static int viommu_match_node(struct device *dev, void *data)
+{
+	return dev->parent->fwnode == data;
+}
+
+static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
+{
+	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
+						fwnode, viommu_match_node);
+	put_device(dev);
+
+	return dev ? dev_to_virtio(dev)->priv : NULL;
+}
+
+static int viommu_add_device(struct device *dev)
+{
+	int ret;
+	struct iommu_group *group;
+	struct viommu_endpoint *vdev;
+	struct viommu_dev *viommu = NULL;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return -ENODEV;
+
+	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
+	if (!viommu)
+		return -ENODEV;
+
+	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	vdev->viommu = viommu;
+	fwspec->iommu_priv = vdev;
+
+	ret = iommu_device_link(&viommu->iommu, dev);
+	if (ret)
+		goto err_free_dev;
+
+	/*
+	 * Last step creates a default domain and attaches to it. Everything
+	 * must be ready.
+	 */
+	group = iommu_group_get_for_dev(dev);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		goto err_unlink_dev;
+	}
+
+	iommu_group_put(group);
+
+	return PTR_ERR_OR_ZERO(group);
+
+err_unlink_dev:
+	iommu_device_unlink(&viommu->iommu, dev);
+
+err_free_dev:
+	kfree(vdev);
+
+	return ret;
+}
+
+static void viommu_remove_device(struct device *dev)
+{
+	struct viommu_endpoint *vdev;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+
+	if (!fwspec || fwspec->ops != &viommu_ops)
+		return;
+
+	vdev = fwspec->iommu_priv;
+
+	iommu_group_remove_device(dev);
+	iommu_device_unlink(&vdev->viommu->iommu, dev);
+	kfree(vdev);
+}
+
+static struct iommu_group *viommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
+{
+	return iommu_fwspec_add_ids(dev, args->args, 1);
+}
+
+static struct iommu_ops viommu_ops = {
+	.domain_alloc		= viommu_domain_alloc,
+	.domain_free		= viommu_domain_free,
+	.attach_dev		= viommu_attach_dev,
+	.map			= viommu_map,
+	.unmap			= viommu_unmap,
+	.iova_to_phys		= viommu_iova_to_phys,
+	.iotlb_sync		= viommu_iotlb_sync,
+	.add_device		= viommu_add_device,
+	.remove_device		= viommu_remove_device,
+	.device_group		= viommu_device_group,
+	.get_resv_regions	= viommu_get_resv_regions,
+	.put_resv_regions	= viommu_put_resv_regions,
+	.of_xlate		= viommu_of_xlate,
+};
+
+static int viommu_init_vqs(struct viommu_dev *viommu)
+{
+	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
+	const char *name = "request";
+	void *ret;
+
+	ret = virtio_find_single_vq(vdev, NULL, name);
+	if (IS_ERR(ret)) {
+		dev_err(viommu->dev, "cannot find VQ\n");
+		return PTR_ERR(ret);
+	}
+
+	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
+
+	return 0;
+}
+
+static int viommu_probe(struct virtio_device *vdev)
+{
+	struct device *parent_dev = vdev->dev.parent;
+	struct viommu_dev *viommu = NULL;
+	struct device *dev = &vdev->dev;
+	u64 input_start = 0;
+	u64 input_end = -1UL;
+	int ret;
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
+		return -ENODEV;
+
+	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
+	if (!viommu)
+		return -ENOMEM;
+
+	spin_lock_init(&viommu->request_lock);
+	ida_init(&viommu->domain_ids);
+	viommu->dev = dev;
+	viommu->vdev = vdev;
+	INIT_LIST_HEAD(&viommu->requests);
+
+	ret = viommu_init_vqs(viommu);
+	if (ret)
+		return ret;
+
+	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
+		     &viommu->pgsize_bitmap);
+
+	if (!viommu->pgsize_bitmap) {
+		ret = -EINVAL;
+		goto err_free_vqs;
+	}
+
+	viommu->domain_bits = 32;
+
+	/* Optional features */
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.start,
+			     &input_start);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
+			     struct virtio_iommu_config, input_range.end,
+			     &input_end);
+
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
+			     struct virtio_iommu_config, domain_bits,
+			     &viommu->domain_bits);
+
+	viommu->geometry = (struct iommu_domain_geometry) {
+		.aperture_start	= input_start,
+		.aperture_end	= input_end,
+		.force_aperture	= true,
+	};
+
+	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
+
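+	/*
+	 * Make the device ready before registering with the IOMMU core, which
+	 * may immediately start sending requests on behalf of endpoints.
+	 */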
+	virtio_device_ready(vdev);
+
+	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
+				     virtio_bus_name(vdev));
+	if (ret)
+		goto err_free_vqs;
+
+	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
+	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
+
+	iommu_device_register(&viommu->iommu);
+
+#ifdef CONFIG_PCI
+	if (pci_bus_type.iommu_ops != &viommu_ops) {
+		pci_request_acs();
+		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+#ifdef CONFIG_ARM_AMBA
+	if (amba_bustype.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+#endif
+	if (platform_bus_type.iommu_ops != &viommu_ops) {
+		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
+		if (ret)
+			goto err_unregister;
+	}
+
+	vdev->priv = viommu;
+
+	dev_info(dev, "input address: %u bits\n",
+		 order_base_2(viommu->geometry.aperture_end));
+	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
+
+	return 0;
+
+err_unregister:
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+err_free_vqs:
+	vdev->config->del_vqs(vdev);
+
+	return ret;
+}
+
+static void viommu_remove(struct virtio_device *vdev)
+{
+	struct viommu_dev *viommu = vdev->priv;
+
+	iommu_device_sysfs_remove(&viommu->iommu);
+	iommu_device_unregister(&viommu->iommu);
+
+	/* Stop all virtqueues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	dev_info(&vdev->dev, "device removed\n");
+}
+
+static void viommu_config_changed(struct virtio_device *vdev)
+{
+	dev_warn(&vdev->dev, "config changed\n");
+}
+
+static unsigned int features[] = {
+	VIRTIO_IOMMU_F_MAP_UNMAP,
+	VIRTIO_IOMMU_F_DOMAIN_BITS,
+	VIRTIO_IOMMU_F_INPUT_RANGE,
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_iommu_drv = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.feature_table		= features,
+	.feature_table_size	= ARRAY_SIZE(features),
+	.probe			= viommu_probe,
+	.remove			= viommu_remove,
+	.config_changed		= viommu_config_changed,
+};
+
+module_virtio_driver(virtio_iommu_drv);
+
+MODULE_DESCRIPTION("Virtio IOMMU driver");
+MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2d4f4d..cfe47c5d9a56 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
new file mode 100644
index 000000000000..e808fc7fbe82
--- /dev/null
+++ b/include/uapi/linux/virtio_iommu.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/*
+ * Virtio-iommu definition v0.8
+ *
+ * Copyright (C) 2018 Arm Ltd.
+ */
+#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
+#define _UAPI_LINUX_VIRTIO_IOMMU_H
+
+#include <linux/types.h>
+
+/* Feature bits */
+#define VIRTIO_IOMMU_F_INPUT_RANGE		0
+#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
+#define VIRTIO_IOMMU_F_MAP_UNMAP		2
+#define VIRTIO_IOMMU_F_BYPASS			3
+
+struct virtio_iommu_config {
+	/* Supported page sizes */
+	__u64					page_size_mask;
+	/* Supported IOVA range */
+	struct virtio_iommu_range {
+		__u64				start;
+		__u64				end;
+	} input_range;
+	/* Max domain ID size */
+	__u8					domain_bits;
+};
+
+/* Request types */
+#define VIRTIO_IOMMU_T_ATTACH			0x01
+#define VIRTIO_IOMMU_T_DETACH			0x02
+#define VIRTIO_IOMMU_T_MAP			0x03
+#define VIRTIO_IOMMU_T_UNMAP			0x04
+
+/* Status types */
+#define VIRTIO_IOMMU_S_OK			0x00
+#define VIRTIO_IOMMU_S_IOERR			0x01
+#define VIRTIO_IOMMU_S_UNSUPP			0x02
+#define VIRTIO_IOMMU_S_DEVERR			0x03
+#define VIRTIO_IOMMU_S_INVAL			0x04
+#define VIRTIO_IOMMU_S_RANGE			0x05
+#define VIRTIO_IOMMU_S_NOENT			0x06
+#define VIRTIO_IOMMU_S_FAULT			0x07
+
+struct virtio_iommu_req_head {
+	__u8					type;
+	__u8					reserved[3];
+};
+
+struct virtio_iommu_req_tail {
+	__u8					status;
+	__u8					reserved[3];
+};
+
+struct virtio_iommu_req_attach {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le32					endpoint;
+	__u8					reserved[8];
+	struct virtio_iommu_req_tail		tail;
+};
+
+struct virtio_iommu_req_detach {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le32					endpoint;
+	__u8					reserved[8];
+	struct virtio_iommu_req_tail		tail;
+};
+
+#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
+#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
+#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
+
+#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
+						 VIRTIO_IOMMU_MAP_F_WRITE |	\
+						 VIRTIO_IOMMU_MAP_F_EXEC |	\
+						 VIRTIO_IOMMU_MAP_F_MMIO)
+
+struct virtio_iommu_req_map {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__le64					phys_start;
+	__le32					flags;
+	struct virtio_iommu_req_tail		tail;
+};
+
+struct virtio_iommu_req_unmap {
+	struct virtio_iommu_req_head		head;
+	__le32					domain;
+	__le64					virt_start;
+	__le64					virt_end;
+	__u8					reserved[4];
+	struct virtio_iommu_req_tail		tail;
+};
+
+#endif
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-10-12 14:59 ` Jean-Philippe Brucker
@ 2018-10-12 14:59   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

When the device offers the probe feature, send a probe request for each
device managed by the IOMMU. Extract RESV_MEM information. When we
encounter an MSI doorbell region, set it up as an IOMMU_RESV_MSI region.
This will tell other subsystems that there is no need to map the MSI
doorbell in the virtio-iommu, because MSIs bypass it.
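
As an illustration only (not part of this patch), this is roughly how a
device or VMM implementation might encode a single MSI doorbell region
into the probe buffer that the driver parses below. It assumes the uapi
header added by this patch plus userspace <endian.h>; the doorbell range
0x08000000-0x080fffff is a made-up example:

	#include <endian.h>
	#include <stddef.h>
	#include <string.h>
	#include <linux/virtio_iommu.h>

	/* Write one RESV_MEM property and return the number of bytes used */
	static size_t fill_probe_reply(void *buf)
	{
		struct virtio_iommu_probe_resv_mem mem = {
			.head.type	= htole16(VIRTIO_IOMMU_PROBE_T_RESV_MEM),
			/* the length field doesn't include the property header */
			.head.length	= htole16(sizeof(mem) - sizeof(mem.head)),
			.subtype	= VIRTIO_IOMMU_RESV_MEM_T_MSI,
			.start		= htole64(0x08000000),
			.end		= htole64(0x080fffff),
		};

		memcpy(buf, &mem, sizeof(mem));
		return sizeof(mem);	/* 24 bytes, already 8-byte aligned */
	}

The driver walks such properties one by one until it finds a
PROBE_T_NONE type or reaches probe_size.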

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_iommu.h |  39 ++++++++
 2 files changed, 180 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 9fb38cd3b727..8eaf66770469 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -56,6 +56,7 @@ struct viommu_dev {
 	struct iommu_domain_geometry	geometry;
 	u64				pgsize_bitmap;
 	u8				domain_bits;
+	u32				probe_size;
 };
 
 struct viommu_mapping {
@@ -77,8 +78,10 @@ struct viommu_domain {
 };
 
 struct viommu_endpoint {
+	struct device			*dev;
 	struct viommu_dev		*viommu;
 	struct viommu_domain		*vdomain;
+	struct list_head		resv_regions;
 };
 
 struct viommu_request {
@@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
 {
 	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
 
+	if (req->type == VIRTIO_IOMMU_T_PROBE)
+		return len - viommu->probe_size - tail_size;
+
 	return len - tail_size;
 }
 
@@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
 	return ret;
 }
 
+static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+			       struct virtio_iommu_probe_resv_mem *mem,
+			       size_t len)
+{
+	struct iommu_resv_region *region = NULL;
+	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
+
+	u64 start = le64_to_cpu(mem->start);
+	u64 end = le64_to_cpu(mem->end);
+	size_t size = end - start + 1;
+
+	if (len < sizeof(*mem))
+		return -EINVAL;
+
+	switch (mem->subtype) {
+	default:
+		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
+			 mem->subtype);
+		/* Fall-through */
+	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
+		region = iommu_alloc_resv_region(start, size, 0,
+						 IOMMU_RESV_RESERVED);
+		break;
+	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
+		region = iommu_alloc_resv_region(start, size, prot,
+						 IOMMU_RESV_MSI);
+		break;
+	}
+
+	if (!region)
+		return -ENOMEM;
+
+	list_add(&region->list, &vdev->resv_regions);
+	return 0;
+}
+
+static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
+{
+	int ret;
+	u16 type, len;
+	size_t cur = 0;
+	size_t probe_len;
+	struct virtio_iommu_req_probe *probe;
+	struct virtio_iommu_probe_property *prop;
+	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
+	struct viommu_endpoint *vdev = fwspec->iommu_priv;
+
+	if (!fwspec->num_ids)
+		return -EINVAL;
+
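+	/*
+	 * The reply buffer holds the fixed request header, then probe_size
+	 * bytes of properties written by the device, then the request tail.
+	 */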
+	probe_len = sizeof(*probe) + viommu->probe_size +
+		    sizeof(struct virtio_iommu_req_tail);
+	probe = kzalloc(probe_len, GFP_KERNEL);
+	if (!probe)
+		return -ENOMEM;
+
+	probe->head.type = VIRTIO_IOMMU_T_PROBE;
+	/*
+	 * For now, assume that properties of an endpoint that outputs multiple
+	 * IDs are consistent. Only probe the first one.
+	 */
+	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
+
+	ret = viommu_send_req_sync(viommu, probe, probe_len);
+	if (ret)
+		goto out_free;
+
+	prop = (void *)probe->properties;
+	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+
+	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
+	       cur < viommu->probe_size) {
+		len = le16_to_cpu(prop->length) + sizeof(*prop);
+
+		switch (type) {
+		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
+			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
+			break;
+		default:
+			dev_err(dev, "unknown viommu prop 0x%x\n", type);
+		}
+
+		if (ret)
+			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
+
+		cur += len;
+		if (cur >= viommu->probe_size)
+			break;
+
+		prop = (void *)probe->properties + cur;
+		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
+	}
+
+out_free:
+	kfree(probe);
+	return ret;
+}
+
 /* IOMMU API */
 
 static struct iommu_domain *viommu_domain_alloc(unsigned type)
@@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
 
 static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 {
-	struct iommu_resv_region *region;
+	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
+	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
 	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
 
-	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
-					 IOMMU_RESV_SW_MSI);
-	if (!region)
-		return;
+	list_for_each_entry(entry, &vdev->resv_regions, list) {
+		/*
+		 * If the device registered a bypass MSI window, use it.
+		 * Otherwise add a software-mapped region.
+		 */
+		if (entry->type == IOMMU_RESV_MSI)
+			msi = entry;
+
+		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
+		if (!new_entry)
+			return;
+		list_add_tail(&new_entry->list, head);
+	}
+
+	if (!msi) {
+		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
+					      prot, IOMMU_RESV_SW_MSI);
+		if (!msi)
+			return;
+
+		list_add_tail(&msi->list, head);
+	}
 
-	list_add_tail(&region->list, head);
 	iommu_dma_get_resv_regions(dev, head);
 }
 
@@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
 	if (!vdev)
 		return -ENOMEM;
 
+	vdev->dev = dev;
 	vdev->viommu = viommu;
+	INIT_LIST_HEAD(&vdev->resv_regions);
 	fwspec->iommu_priv = vdev;
 
+	if (viommu->probe_size) {
+		/* Get additional information for this endpoint */
+		ret = viommu_probe_endpoint(viommu, dev);
+		if (ret)
+			return ret;
+	}
+
 	ret = iommu_device_link(&viommu->iommu, dev);
 	if (ret)
 		goto err_free_dev;
@@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
 	iommu_device_unlink(&viommu->iommu, dev);
 
 err_free_dev:
+	viommu_put_resv_regions(dev, &vdev->resv_regions);
 	kfree(vdev);
 
 	return ret;
@@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
 
 	iommu_group_remove_device(dev);
 	iommu_device_unlink(&vdev->viommu->iommu, dev);
+	viommu_put_resv_regions(dev, &vdev->resv_regions);
 	kfree(vdev);
 }
 
@@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
 			     struct virtio_iommu_config, domain_bits,
 			     &viommu->domain_bits);
 
+	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
+			     struct virtio_iommu_config, probe_size,
+			     &viommu->probe_size);
+
 	viommu->geometry = (struct iommu_domain_geometry) {
 		.aperture_start	= input_start,
 		.aperture_end	= input_end,
@@ -913,6 +1047,7 @@ static unsigned int features[] = {
 	VIRTIO_IOMMU_F_MAP_UNMAP,
 	VIRTIO_IOMMU_F_DOMAIN_BITS,
 	VIRTIO_IOMMU_F_INPUT_RANGE,
+	VIRTIO_IOMMU_F_PROBE,
 };
 
 static struct virtio_device_id id_table[] = {
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index e808fc7fbe82..feed74586bb0 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -14,6 +14,7 @@
 #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
 #define VIRTIO_IOMMU_F_MAP_UNMAP		2
 #define VIRTIO_IOMMU_F_BYPASS			3
+#define VIRTIO_IOMMU_F_PROBE			4
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
@@ -25,6 +26,9 @@ struct virtio_iommu_config {
 	} input_range;
 	/* Max domain ID size */
 	__u8					domain_bits;
+	__u8					padding[3];
+	/* Probe buffer size */
+	__u32					probe_size;
 };
 
 /* Request types */
@@ -32,6 +36,7 @@ struct virtio_iommu_config {
 #define VIRTIO_IOMMU_T_DETACH			0x02
 #define VIRTIO_IOMMU_T_MAP			0x03
 #define VIRTIO_IOMMU_T_UNMAP			0x04
+#define VIRTIO_IOMMU_T_PROBE			0x05
 
 /* Status types */
 #define VIRTIO_IOMMU_S_OK			0x00
@@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
 	struct virtio_iommu_req_tail		tail;
 };
 
+#define VIRTIO_IOMMU_PROBE_T_NONE		0
+#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
+
+#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
+
+struct virtio_iommu_probe_property {
+	__le16					type;
+	__le16					length;
+};
+
+#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
+#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
+
+struct virtio_iommu_probe_resv_mem {
+	struct virtio_iommu_probe_property	head;
+	__u8					subtype;
+	__u8					reserved[3];
+	__le64					start;
+	__le64					end;
+};
+
+struct virtio_iommu_req_probe {
+	struct virtio_iommu_req_head		head;
+	__le32					endpoint;
+	__u8					reserved[64];
+
+	__u8					properties[];
+
+	/*
+	 * Tail follows the variable-length properties array. No padding,
+	 * property lengths are all aligned on 8 bytes.
+	 */
+};
+
 #endif
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH v3 7/7] iommu/virtio: Add event queue
  2018-10-12 14:59 ` Jean-Philippe Brucker
@ 2018-10-12 14:59   ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-12 14:59 UTC (permalink / raw)
  To: iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

The event queue offers a way for the device to report access faults from
endpoints. It is implemented on virtqueue #1. Whenever the host needs to
signal a fault, it fills one of the buffers offered by the guest and
interrupts it.
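
For illustration only (not part of this patch), a device-side sketch of
filling one event buffer with a translation fault, using the structures
and flags added to the uapi header below; the endian helpers come from
userspace <endian.h> and the endpoint/address values are whatever the
device observed:

	#include <endian.h>
	#include <stddef.h>
	#include <stdint.h>
	#include <string.h>
	#include <linux/virtio_iommu.h>

	/* Fill a guest-provided event buffer with a MAPPING fault record */
	static size_t fill_fault_event(void *buf, uint32_t endpoint, uint64_t iova)
	{
		struct virtio_iommu_fault fault = {
			.reason		= VIRTIO_IOMMU_FAULT_R_MAPPING,
			.flags		= htole32(VIRTIO_IOMMU_FAULT_F_ADDRESS |
						  VIRTIO_IOMMU_FAULT_F_READ),
			.endpoint	= htole32(endpoint),
			.address	= htole64(iova),
		};

		memcpy(buf, &fault, sizeof(fault));
		return sizeof(fault);
	}

On the guest side this ends up in viommu_fault_handler() below, which
reports it as a "page" fault from the given endpoint.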

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
 drivers/iommu/virtio-iommu.c      | 116 +++++++++++++++++++++++++++---
 include/uapi/linux/virtio_iommu.h |  19 +++++
 2 files changed, 126 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 8eaf66770469..0fe8ed2a290c 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -29,7 +29,8 @@
 #define MSI_IOVA_LENGTH			0x100000
 
 #define VIOMMU_REQUEST_VQ		0
-#define VIOMMU_NR_VQS			1
+#define VIOMMU_EVENT_VQ			1
+#define VIOMMU_NR_VQS			2
 
 /*
  * During development, it is convenient to time out rather than wait
@@ -51,6 +52,7 @@ struct viommu_dev {
 	struct virtqueue		*vqs[VIOMMU_NR_VQS];
 	spinlock_t			request_lock;
 	struct list_head		requests;
+	void				*evts;
 
 	/* Device configuration */
 	struct iommu_domain_geometry	geometry;
@@ -92,6 +94,15 @@ struct viommu_request {
 	char				buf[];
 };
 
+#define VIOMMU_FAULT_RESV_MASK		0xffffff00
+
+struct viommu_event {
+	union {
+		u32			head;
+		struct virtio_iommu_fault fault;
+	};
+};
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -515,6 +526,69 @@ static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
 	return ret;
 }
 
+static int viommu_fault_handler(struct viommu_dev *viommu,
+				struct virtio_iommu_fault *fault)
+{
+	char *reason_str;
+
+	u8 reason	= fault->reason;
+	u32 flags	= le32_to_cpu(fault->flags);
+	u32 endpoint	= le32_to_cpu(fault->endpoint);
+	u64 address	= le64_to_cpu(fault->address);
+
+	switch (reason) {
+	case VIRTIO_IOMMU_FAULT_R_DOMAIN:
+		reason_str = "domain";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_MAPPING:
+		reason_str = "page";
+		break;
+	case VIRTIO_IOMMU_FAULT_R_UNKNOWN:
+	default:
+		reason_str = "unknown";
+		break;
+	}
+
+	/* TODO: find EP by ID and report_iommu_fault */
+	if (flags & VIRTIO_IOMMU_FAULT_F_ADDRESS)
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u at %#llx [%s%s%s]\n",
+				    reason_str, endpoint, address,
+				    flags & VIRTIO_IOMMU_FAULT_F_READ ? "R" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_WRITE ? "W" : "",
+				    flags & VIRTIO_IOMMU_FAULT_F_EXEC ? "X" : "");
+	else
+		dev_err_ratelimited(viommu->dev, "%s fault from EP %u\n",
+				    reason_str, endpoint);
+	return 0;
+}
+
+static void viommu_event_handler(struct virtqueue *vq)
+{
+	int ret;
+	unsigned int len;
+	struct scatterlist sg[1];
+	struct viommu_event *evt;
+	struct viommu_dev *viommu = vq->vdev->priv;
+
+	while ((evt = virtqueue_get_buf(vq, &len)) != NULL) {
+		if (len > sizeof(*evt)) {
+			dev_err(viommu->dev,
+				"invalid event buffer (len %u > %zu)\n",
+				len, sizeof(*evt));
+		} else if (!(evt->head & VIOMMU_FAULT_RESV_MASK)) {
+			viommu_fault_handler(viommu, &evt->fault);
+		}
+
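+		/* Hand the buffer back to the device for the next event */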
+		sg_init_one(sg, evt, sizeof(*evt));
+		ret = virtqueue_add_inbuf(vq, sg, 1, evt, GFP_ATOMIC);
+		if (ret)
+			dev_err(viommu->dev, "could not add event buffer\n");
+	}
+
+	if (!virtqueue_kick(vq))
+		dev_err(viommu->dev, "kick failed\n");
+}
+
 /* IOMMU API */
 
 static struct iommu_domain *viommu_domain_alloc(unsigned type)
@@ -899,16 +973,35 @@ static struct iommu_ops viommu_ops = {
 static int viommu_init_vqs(struct viommu_dev *viommu)
 {
 	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
-	const char *name = "request";
-	void *ret;
+	const char *names[] = { "request", "event" };
+	vq_callback_t *callbacks[] = {
+		NULL, /* No async requests */
+		viommu_event_handler,
+	};
 
-	ret = virtio_find_single_vq(vdev, NULL, name);
-	if (IS_ERR(ret)) {
-		dev_err(viommu->dev, "cannot find VQ\n");
-		return PTR_ERR(ret);
-	}
+	return virtio_find_vqs(vdev, VIOMMU_NR_VQS, viommu->vqs, callbacks,
+			       names, NULL);
+}
 
-	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
+static int viommu_fill_evtq(struct viommu_dev *viommu)
+{
+	int i, ret;
+	struct scatterlist sg[1];
+	struct viommu_event *evts;
+	struct virtqueue *vq = viommu->vqs[VIOMMU_EVENT_VQ];
+	size_t nr_evts = vq->num_free;
+
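+	/* Allocate one buffer per free descriptor so the queue starts full */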
+	viommu->evts = evts = devm_kmalloc_array(viommu->dev, nr_evts,
+						 sizeof(*evts), GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
+	}
 
 	return 0;
 }
@@ -976,6 +1069,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_vqs;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index feed74586bb0..04647c266d6f 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -137,4 +137,23 @@ struct virtio_iommu_req_probe {
 	 */
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					reserved[3];
+	__le32					flags;
+	__le32					endpoint;
+	__u8					reserved2[4];
+	__le64					address;
+};
+
 #endif
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 101+ messages in thread

+	size_t nr_evts = vq->num_free;
+
+	viommu->evts = evts = devm_kmalloc_array(viommu->dev, nr_evts,
+						 sizeof(*evts), GFP_KERNEL);
+	if (!evts)
+		return -ENOMEM;
+
+	for (i = 0; i < nr_evts; i++) {
+		sg_init_one(sg, &evts[i], sizeof(*evts));
+		ret = virtqueue_add_inbuf(vq, sg, 1, &evts[i], GFP_KERNEL);
+		if (ret)
+			return ret;
+	}
 
 	return 0;
 }
@@ -976,6 +1069,11 @@ static int viommu_probe(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	/* Populate the event queue with buffers */
+	ret = viommu_fill_evtq(viommu);
+	if (ret)
+		goto err_free_vqs;
+
 	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
 				     virtio_bus_name(vdev));
 	if (ret)
diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index feed74586bb0..04647c266d6f 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -137,4 +137,23 @@ struct virtio_iommu_req_probe {
 	 */
 };
 
+/* Fault types */
+#define VIRTIO_IOMMU_FAULT_R_UNKNOWN		0
+#define VIRTIO_IOMMU_FAULT_R_DOMAIN		1
+#define VIRTIO_IOMMU_FAULT_R_MAPPING		2
+
+#define VIRTIO_IOMMU_FAULT_F_READ		(1 << 0)
+#define VIRTIO_IOMMU_FAULT_F_WRITE		(1 << 1)
+#define VIRTIO_IOMMU_FAULT_F_EXEC		(1 << 2)
+#define VIRTIO_IOMMU_FAULT_F_ADDRESS		(1 << 8)
+
+struct virtio_iommu_fault {
+	__u8					reason;
+	__u8					reserved[3];
+	__le32					flags;
+	__le32					endpoint;
+	__u8					reserved2[4];
+	__le64					address;
+};
+
 #endif
-- 
2.19.1

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-10-12 16:35     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 16:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, devicetree, jasowang,
	linux-pci, joro, will.deacon, virtualization, iommu, robh+dt,
	marc.zyngier, robin.murphy, kvmarm

On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device that allows the guest to send
> IOMMU requests such as map/unmap over the virtio transport, without emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
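
[ To make that concrete: an iommu_map() of one 4 KiB page at IOVA 0x1000 to
  PA 0x8000 with read/write permission becomes a single MAP request on the
  request queue, roughly (addresses purely illustrative):

	struct virtio_iommu_req_map map = {
		.head.type	= VIRTIO_IOMMU_T_MAP,
		.domain		= cpu_to_le32(vdomain->id),
		.virt_start	= cpu_to_le64(0x1000),
		.virt_end	= cpu_to_le64(0x1fff),
		.phys_start	= cpu_to_le64(0x8000),
		.flags		= cpu_to_le32(VIRTIO_IOMMU_MAP_F_READ |
					      VIRTIO_IOMMU_MAP_F_WRITE),
	};
]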
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  MAINTAINERS                       |   7 +
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 101 ++++
>  6 files changed, 1059 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 48a65c3a4189..f02fa65f47e2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -15599,6 +15599,13 @@ S:	Maintained
>  F:	drivers/virtio/virtio_input.c
>  F:	include/uapi/linux/virtio_input.h
>  
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> +L:	virtualization@lists.linux-foundation.org
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>  VIRTUAL BOX GUEST DEVICE DRIVER
>  M:	Hans de Goede <hdegoede@redhat.com>
>  M:	Arnd Bergmann <arnd@arndb.de>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index c60395b7470f..2dc016dc2b92 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO=y
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index ab5eba6edf82..4cd643408e49 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..9fb38cd3b727
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,938 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 Arm Limited
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +#define VIOMMU_REQUEST_VQ		0
> +#define VIOMMU_NR_VQS			1
> +
> +/*
> + * During development, it is convenient to time out rather than wait
> + * indefinitely in atomic context when a device misbehaves and a request doesn't
> + * return. In production however, some requests shouldn't return until they are
> + * successful.
> + */
> +#ifdef DEBUG
> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
> +#endif
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
> +	spinlock_t			request_lock;
> +	struct list_head		requests;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	u32				flags;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	unsigned long			nr_endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct list_head		list;
> +	void				*writeback;
> +	unsigned int			write_offset;
> +	unsigned int			len;
> +	char				buf[];
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +static int viommu_get_req_errno(void *buf, size_t len)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	switch (tail->status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +static void viommu_set_req_status(void *buf, size_t len, int status)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	tail->status = status;
> +}
> +
> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
> +				   struct virtio_iommu_req_head *req,
> +				   size_t len)
> +{
> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
> +
> +	return len - tail_size;
> +}
> +
> +/*
> + * __viommu_sync_req - Complete all in-flight requests
> + *
> + * Wait for all added requests to complete. When this function returns, all
> + * requests that were in-flight at the time of the call have completed.
> + */
> +static int __viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret = 0;
> +	unsigned int len;
> +	size_t write_len;
> +	ktime_t timeout = 0;
> +	struct viommu_request *req;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +#ifdef DEBUG
> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
> +#endif
> +	virtqueue_kick(vq);
> +
> +	while (!list_empty(&viommu->requests)) {
> +		len = 0;
> +		req = virtqueue_get_buf(vq, &len);
> +		if (req == NULL) {
> +			if (!timeout || ktime_before(ktime_get(), timeout))
> +				continue;
> +
> +			/* After timeout, remove all requests */
> +			req = list_first_entry(&viommu->requests,
> +					       struct viommu_request, list);
> +			ret = -ETIMEDOUT;
> +		}
> +
> +		if (!len)
> +			viommu_set_req_status(req->buf, req->len,
> +					      VIRTIO_IOMMU_S_IOERR);
> +
> +		write_len = req->len - req->write_offset;
> +		if (req->writeback && len >= write_len)
> +			memcpy(req->writeback, req->buf + req->write_offset,
> +			       write_len);
> +
> +		list_del(&req->list);
> +		kfree(req);

So with DEBUG set, this will actually free memory that the device still
DMAs into. Hardly pretty. I think you want to mark the device broken,
keep the request queued and then wait for the device to be reset.
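
Something like this, maybe (just a sketch; the stale requests would then be
reclaimed only after the device has been reset, e.g. on remove):

		if (req == NULL) {
			if (!timeout || ktime_before(ktime_get(), timeout))
				continue;

			/*
			 * Don't reclaim buffers the device may still write
			 * to: mark it broken and leave the requests queued
			 * until the device has been reset.
			 */
			virtio_break_device(viommu->vdev);
			return -ETIMEDOUT;
		}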


> +	}
> +
> +	return ret;
> +}
> +
> +static int viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_sync_req(viommu);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * __viommu_add_request - Add one request to the queue
> + * @buf: pointer to the request buffer
> + * @len: length of the request buffer
> + * @writeback: copy data back to the buffer when the request completes.
> + *
> + * Add a request to the queue. Only synchronize the queue if it's already full.
> + * Otherwise don't kick the queue nor wait for requests to complete.
> + *
> + * When @writeback is true, data written by the device, including the request
> + * status, is copied into @buf after the request completes. This is unsafe if
> + * the caller allocates @buf on stack and drops the lock between add_req() and
> + * sync_req().
> + *
> + * Return 0 if the request was successfully added to the queue.
> + */
> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
> +			    bool writeback)
> +{
> +	int ret;
> +	off_t write_offset;
> +	struct viommu_request *req;
> +	struct scatterlist top_sg, bottom_sg;
> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +
> +	write_offset = viommu_get_req_offset(viommu, buf, len);
> +	if (!write_offset)
> +		return -EINVAL;
> +
> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	req->len = len;
> +	if (writeback) {
> +		req->writeback = buf + write_offset;
> +		req->write_offset = write_offset;
> +	}
> +	memcpy(&req->buf, buf, write_offset);
> +
> +	sg_init_one(&top_sg, req->buf, write_offset);
> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
> +
> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	if (ret == -ENOSPC) {
> +		/* If the queue is full, sync and retry */
> +		if (!__viommu_sync_req(viommu))
> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	}
> +	if (ret)
> +		goto err_free;
> +
> +	list_add_tail(&req->list, &viommu->requests);
> +	return 0;
> +
> +err_free:
> +	kfree(req);
> +	return ret;
> +}
> +
> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_add_req(viommu, buf, len, false);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * Send a request and wait for it to complete. Return the request status (as an
> + * errno)
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
> +				size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +
> +	ret = __viommu_add_req(viommu, buf, len, true);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
> +		goto out_unlock;
> +	}
> +
> +	ret = __viommu_sync_req(viommu);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +		/* Fall-through (get the actual request status) */
> +	}
> +
> +	ret = viommu_get_req_errno(buf, len);
> +out_unlock:
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +	return ret;
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size, u32 flags)
> +{
> +	unsigned long irqflags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +	mapping->flags		= flags;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				  unsigned long iova, size_t size)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/* Trying to split a mapping? */
> +		if (mapping->iova.start < iova)
> +			break;
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +		kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int ret;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct virtio_iommu_req_map map;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		map = (struct virtio_iommu_req_map) {
> +			.head.type	= VIRTIO_IOMMU_T_MAP,
> +			.domain		= cpu_to_le32(vdomain->id),
> +			.virt_start	= cpu_to_le64(mapping->iova.start),
> +			.virt_end	= cpu_to_le64(mapping->iova.last),
> +			.phys_start	= cpu_to_le64(mapping->paddr),
> +			.flags		= cpu_to_le32(mapping->flags),
> +		};
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +		if (ret)
> +			break;
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
> +				  (1U << viommu->domain_bits) - 1;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0);
> +
> +	if (vdomain->viommu)
> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->nr_endpoints--;
> +
> +	req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (!vdomain->nr_endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->nr_endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct virtio_iommu_req_map map;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->nr_endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct virtio_iommu_req_unmap unmap;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size);
> +	if (unmapped < size)
> +		return 0;
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->nr_endpoints)
> +		return unmapped;
> +
> +	unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};
> +
> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static void viommu_iotlb_sync(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	viommu_sync_req(vdomain->viommu);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	int ret;
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	ret = iommu_device_link(&viommu->iommu, dev);
> +	if (ret)
> +		goto err_free_dev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_unlink_dev;
> +	}
> +
> +	iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +
> +err_unlink_dev:
> +	iommu_device_unlink(&viommu->iommu, dev);
> +
> +err_free_dev:
> +	kfree(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	iommu_group_remove_device(dev);
> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	kfree(vdev);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.iotlb_sync		= viommu_iotlb_sync,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +	.of_xlate		= viommu_of_xlate,
> +};
> +
> +static int viommu_init_vqs(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
> +		return -ENODEV;

I'm a bit confused about what will happen if this device
happens to be behind an iommu itself.

If we can't handle that, should we clear PLATFORM_IOMMU
e.g. like the balloon does?
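
(For reference, the balloon clears the bit from its validate callback. A
rough equivalent here, sketch only, would be:

	static int viommu_validate(struct virtio_device *vdev)
	{
		/* Being translated by another IOMMU is not supported */
		__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
		return 0;
	}

plus a .validate = viommu_validate entry in virtio_iommu_drv.)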


> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +	INIT_LIST_HEAD(&viommu->requests);
> +
> +	ret = viommu_init_vqs(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..e808fc7fbe82
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: BSD-3-Clause */
> +/*
> + * Virtio-iommu definition v0.8
> + *
> + * Copyright (C) 2018 Arm Ltd.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {

I'd rather we moved the definition outside even though gcc allows it -
some old userspace compilers might not.
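
i.e. something like:

	struct virtio_iommu_range {
		__u64					start;
		__u64					end;
	};

with a 'struct virtio_iommu_range input_range;' member inside the config.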

> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;

Let's add explicit padding here as well?
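
e.g. (one possibility, making the tail padding explicit):

	__u8					domain_bits;
	__u8					reserved[7];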

> +};
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
> +						 VIRTIO_IOMMU_MAP_F_MMIO)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__u8					reserved[4];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#endif
> -- 
> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
@ 2018-10-12 16:35     ` Michael S. Tsirkin
  0 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 16:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, jasowang, robh+dt, mark.rutland, eric.auger,
	tnowicki, kevin.tian, marc.zyngier, robin.murphy, will.deacon,
	lorenzo.pieralisi

On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device that allows the guest to send
> IOMMU requests such as map/unmap over the virtio transport, without emulating
> page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  MAINTAINERS                       |   7 +
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 101 ++++
>  6 files changed, 1059 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 48a65c3a4189..f02fa65f47e2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -15599,6 +15599,13 @@ S:	Maintained
>  F:	drivers/virtio/virtio_input.c
>  F:	include/uapi/linux/virtio_input.h
>  
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> +L:	virtualization@lists.linux-foundation.org
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>  VIRTUAL BOX GUEST DEVICE DRIVER
>  M:	Hans de Goede <hdegoede@redhat.com>
>  M:	Arnd Bergmann <arnd@arndb.de>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index c60395b7470f..2dc016dc2b92 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO=y
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index ab5eba6edf82..4cd643408e49 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..9fb38cd3b727
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,938 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 Arm Limited
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +#define VIOMMU_REQUEST_VQ		0
> +#define VIOMMU_NR_VQS			1
> +
> +/*
> + * During development, it is convenient to time out rather than wait
> + * indefinitely in atomic context when a device misbehaves and a request doesn't
> + * return. In production however, some requests shouldn't return until they are
> + * successful.
> + */
> +#ifdef DEBUG
> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
> +#endif
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
> +	spinlock_t			request_lock;
> +	struct list_head		requests;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	u32				flags;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	unsigned long			nr_endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct list_head		list;
> +	void				*writeback;
> +	unsigned int			write_offset;
> +	unsigned int			len;
> +	char				buf[];
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +static int viommu_get_req_errno(void *buf, size_t len)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	switch (tail->status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +static void viommu_set_req_status(void *buf, size_t len, int status)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	tail->status = status;
> +}
> +
> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
> +				   struct virtio_iommu_req_head *req,
> +				   size_t len)
> +{
> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
> +
> +	return len - tail_size;
> +}
> +
> +/*
> + * __viommu_sync_req - Complete all in-flight requests
> + *
> + * Wait for all added requests to complete. When this function returns, all
> + * requests that were in-flight at the time of the call have completed.
> + */
> +static int __viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret = 0;
> +	unsigned int len;
> +	size_t write_len;
> +	ktime_t timeout = 0;
> +	struct viommu_request *req;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +#ifdef DEBUG
> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
> +#endif
> +	virtqueue_kick(vq);
> +
> +	while (!list_empty(&viommu->requests)) {
> +		len = 0;
> +		req = virtqueue_get_buf(vq, &len);
> +		if (req == NULL) {
> +			if (!timeout || ktime_before(ktime_get(), timeout))
> +				continue;
> +
> +			/* After timeout, remove all requests */
> +			req = list_first_entry(&viommu->requests,
> +					       struct viommu_request, list);
> +			ret = -ETIMEDOUT;
> +		}
> +
> +		if (!len)
> +			viommu_set_req_status(req->buf, req->len,
> +					      VIRTIO_IOMMU_S_IOERR);
> +
> +		write_len = req->len - req->write_offset;
> +		if (req->writeback && len >= write_len)
> +			memcpy(req->writeback, req->buf + req->write_offset,
> +			       write_len);
> +
> +		list_del(&req->list);
> +		kfree(req);

So with DEBUG set, this will actually free memory that the device still
DMAs into. Hardly pretty. I think you want to mark the device broken,
keep the request queued and then wait for the device to be reset.


> +	}
> +
> +	return ret;
> +}
> +
> +static int viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_sync_req(viommu);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * __viommu_add_request - Add one request to the queue
> + * @buf: pointer to the request buffer
> + * @len: length of the request buffer
> + * @writeback: copy data back to the buffer when the request completes.
> + *
> + * Add a request to the queue. Only synchronize the queue if it's already full.
> + * Otherwise don't kick the queue nor wait for requests to complete.
> + *
> + * When @writeback is true, data written by the device, including the request
> + * status, is copied into @buf after the request completes. This is unsafe if
> + * the caller allocates @buf on stack and drops the lock between add_req() and
> + * sync_req().
> + *
> + * Return 0 if the request was successfully added to the queue.
> + */
> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
> +			    bool writeback)
> +{
> +	int ret;
> +	off_t write_offset;
> +	struct viommu_request *req;
> +	struct scatterlist top_sg, bottom_sg;
> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +
> +	write_offset = viommu_get_req_offset(viommu, buf, len);
> +	if (!write_offset)
> +		return -EINVAL;
> +
> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	req->len = len;
> +	if (writeback) {
> +		req->writeback = buf + write_offset;
> +		req->write_offset = write_offset;
> +	}
> +	memcpy(&req->buf, buf, write_offset);
> +
> +	sg_init_one(&top_sg, req->buf, write_offset);
> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
> +
> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	if (ret == -ENOSPC) {
> +		/* If the queue is full, sync and retry */
> +		if (!__viommu_sync_req(viommu))
> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	}
> +	if (ret)
> +		goto err_free;
> +
> +	list_add_tail(&req->list, &viommu->requests);
> +	return 0;
> +
> +err_free:
> +	kfree(req);
> +	return ret;
> +}
> +
> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_add_req(viommu, buf, len, false);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * Send a request and wait for it to complete. Return the request status (as an
> + * errno)
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
> +				size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +
> +	ret = __viommu_add_req(viommu, buf, len, true);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
> +		goto out_unlock;
> +	}
> +
> +	ret = __viommu_sync_req(viommu);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +		/* Fall-through (get the actual request status) */
> +	}
> +
> +	ret = viommu_get_req_errno(buf, len);
> +out_unlock:
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +	return ret;
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size, u32 flags)
> +{
> +	unsigned long irqflags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +	mapping->flags		= flags;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				  unsigned long iova, size_t size)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/* Trying to split a mapping? */
> +		if (mapping->iova.start < iova)
> +			break;
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +		kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int ret;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct virtio_iommu_req_map map;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		map = (struct virtio_iommu_req_map) {
> +			.head.type	= VIRTIO_IOMMU_T_MAP,
> +			.domain		= cpu_to_le32(vdomain->id),
> +			.virt_start	= cpu_to_le64(mapping->iova.start),
> +			.virt_end	= cpu_to_le64(mapping->iova.last),
> +			.phys_start	= cpu_to_le64(mapping->paddr),
> +			.flags		= cpu_to_le32(mapping->flags),
> +		};
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +		if (ret)
> +			break;
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
> +				  (1U << viommu->domain_bits) - 1;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0);
> +
> +	if (vdomain->viommu)
> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->nr_endpoints--;
> +
> +	req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (!vdomain->nr_endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->nr_endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct virtio_iommu_req_map map;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->nr_endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct virtio_iommu_req_unmap unmap;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size);
> +	if (unmapped < size)
> +		return 0;
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->nr_endpoints)
> +		return unmapped;
> +
> +	unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};
> +
> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static void viommu_iotlb_sync(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	viommu_sync_req(vdomain->viommu);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	int ret;
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	ret = iommu_device_link(&viommu->iommu, dev);
> +	if (ret)
> +		goto err_free_dev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_unlink_dev;
> +	}
> +
> +	iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +
> +err_unlink_dev:
> +	iommu_device_unlink(&viommu->iommu, dev);
> +
> +err_free_dev:
> +	kfree(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	iommu_group_remove_device(dev);
> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	kfree(vdev);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.iotlb_sync		= viommu_iotlb_sync,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +	.of_xlate		= viommu_of_xlate,
> +};
> +
> +static int viommu_init_vqs(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
> +		return -ENODEV;

I'm a bit confused about what will happen if this device
happens to be behind an iommu itself.

If we can't handle that, should we clear PLATFORM_IOMMU
e.g. like the balloon does?
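
For reference, the balloon's opt-out (virtballoon_validate()) is a one-liner
that clears the bit before the features are finalized. A minimal sketch of
the same approach here (viommu_validate is a made-up name, the feature bit in
question is VIRTIO_F_IOMMU_PLATFORM, and whether bypassing a parent IOMMU is
actually the right answer is exactly the open question above):

static int viommu_validate(struct virtio_device *vdev)
{
	/* Assume the IOMMU device itself is not subject to translation */
	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
	return 0;
}

wired up with .validate = viommu_validate in virtio_iommu_drv.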


> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +	INIT_LIST_HEAD(&viommu->requests);
> +
> +	ret = viommu_init_vqs(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..e808fc7fbe82
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: BSD-3-Clause */
> +/*
> + * Virtio-iommu definition v0.8
> + *
> + * Copyright (C) 2018 Arm Ltd.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {

I'd rather we moved the definition outside even though gcc allows it -
some old userspace compilers might not.
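
Something like this, presumably (same layout, just hoisted to the top level):

struct virtio_iommu_range {
	__u64					start;
	__u64					end;
};

struct virtio_iommu_config {
	/* Supported page sizes */
	__u64					page_size_mask;
	/* Supported IOVA range */
	struct virtio_iommu_range		input_range;
	/* Max domain ID size */
	__u8					domain_bits;
};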

> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;

Let's add explicit padding here as well?
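
Something like the padding that patch 6/7 already adds, just introduced here
instead:

	/* Max domain ID size */
	__u8					domain_bits;
	__u8					padding[3];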

> +};
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
> +						 VIRTIO_IOMMU_MAP_F_MMIO)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__u8					reserved[4];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#endif
> -- 
> 2.19.1


* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-10-12 14:59   ` Jean-Philippe Brucker
  (?)
@ 2018-10-12 16:35   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 16:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki,
	devicetree, linux-pci, joro, will.deacon, virtualization,
	eric.auger, iommu, robh+dt, marc.zyngier, robin.murphy, kvmarm

On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device that allows sending IOMMU
> requests such as map/unmap over the virtio transport without emulating page
> tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
> requests.
> 
> The bulk of the code transforms calls coming from the IOMMU API into
> corresponding virtio requests. Mappings are kept in an interval tree
> instead of page tables.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  MAINTAINERS                       |   7 +
>  drivers/iommu/Kconfig             |  11 +
>  drivers/iommu/Makefile            |   1 +
>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>  include/uapi/linux/virtio_ids.h   |   1 +
>  include/uapi/linux/virtio_iommu.h | 101 ++++
>  6 files changed, 1059 insertions(+)
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 48a65c3a4189..f02fa65f47e2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -15599,6 +15599,13 @@ S:	Maintained
>  F:	drivers/virtio/virtio_input.c
>  F:	include/uapi/linux/virtio_input.h
>  
> +VIRTIO IOMMU DRIVER
> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> +L:	virtualization@lists.linux-foundation.org
> +S:	Maintained
> +F:	drivers/iommu/virtio-iommu.c
> +F:	include/uapi/linux/virtio_iommu.h
> +
>  VIRTUAL BOX GUEST DEVICE DRIVER
>  M:	Hans de Goede <hdegoede@redhat.com>
>  M:	Arnd Bergmann <arnd@arndb.de>
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index c60395b7470f..2dc016dc2b92 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>  	help
>  	  Support for IOMMU on certain Qualcomm SoCs.
>  
> +config VIRTIO_IOMMU
> +	bool "Virtio IOMMU driver"
> +	depends on VIRTIO=y
> +	select IOMMU_API
> +	select INTERVAL_TREE
> +	select ARM_DMA_USE_IOMMU if ARM
> +	help
> +	  Para-virtualised IOMMU driver with virtio.
> +
> +	  Say Y here if you intend to run this kernel as a guest.
> +
>  endif # IOMMU_SUPPORT
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index ab5eba6edf82..4cd643408e49 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> new file mode 100644
> index 000000000000..9fb38cd3b727
> --- /dev/null
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -0,0 +1,938 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Virtio driver for the paravirtualized IOMMU
> + *
> + * Copyright (C) 2018 Arm Limited
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/amba/bus.h>
> +#include <linux/delay.h>
> +#include <linux/dma-iommu.h>
> +#include <linux/freezer.h>
> +#include <linux/interval_tree.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/of_iommu.h>
> +#include <linux/of_platform.h>
> +#include <linux/pci.h>
> +#include <linux/platform_device.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_config.h>
> +#include <linux/virtio_ids.h>
> +#include <linux/wait.h>
> +
> +#include <uapi/linux/virtio_iommu.h>
> +
> +#define MSI_IOVA_BASE			0x8000000
> +#define MSI_IOVA_LENGTH			0x100000
> +
> +#define VIOMMU_REQUEST_VQ		0
> +#define VIOMMU_NR_VQS			1
> +
> +/*
> + * During development, it is convenient to time out rather than wait
> + * indefinitely in atomic context when a device misbehaves and a request doesn't
> + * return. In production however, some requests shouldn't return until they are
> + * successful.
> + */
> +#ifdef DEBUG
> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
> +#endif
> +
> +struct viommu_dev {
> +	struct iommu_device		iommu;
> +	struct device			*dev;
> +	struct virtio_device		*vdev;
> +
> +	struct ida			domain_ids;
> +
> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
> +	spinlock_t			request_lock;
> +	struct list_head		requests;
> +
> +	/* Device configuration */
> +	struct iommu_domain_geometry	geometry;
> +	u64				pgsize_bitmap;
> +	u8				domain_bits;
> +};
> +
> +struct viommu_mapping {
> +	phys_addr_t			paddr;
> +	struct interval_tree_node	iova;
> +	u32				flags;
> +};
> +
> +struct viommu_domain {
> +	struct iommu_domain		domain;
> +	struct viommu_dev		*viommu;
> +	struct mutex			mutex;
> +	unsigned int			id;
> +
> +	spinlock_t			mappings_lock;
> +	struct rb_root_cached		mappings;
> +
> +	unsigned long			nr_endpoints;
> +};
> +
> +struct viommu_endpoint {
> +	struct viommu_dev		*viommu;
> +	struct viommu_domain		*vdomain;
> +};
> +
> +struct viommu_request {
> +	struct list_head		list;
> +	void				*writeback;
> +	unsigned int			write_offset;
> +	unsigned int			len;
> +	char				buf[];
> +};
> +
> +#define to_viommu_domain(domain)	\
> +	container_of(domain, struct viommu_domain, domain)
> +
> +static int viommu_get_req_errno(void *buf, size_t len)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	switch (tail->status) {
> +	case VIRTIO_IOMMU_S_OK:
> +		return 0;
> +	case VIRTIO_IOMMU_S_UNSUPP:
> +		return -ENOSYS;
> +	case VIRTIO_IOMMU_S_INVAL:
> +		return -EINVAL;
> +	case VIRTIO_IOMMU_S_RANGE:
> +		return -ERANGE;
> +	case VIRTIO_IOMMU_S_NOENT:
> +		return -ENOENT;
> +	case VIRTIO_IOMMU_S_FAULT:
> +		return -EFAULT;
> +	case VIRTIO_IOMMU_S_IOERR:
> +	case VIRTIO_IOMMU_S_DEVERR:
> +	default:
> +		return -EIO;
> +	}
> +}
> +
> +static void viommu_set_req_status(void *buf, size_t len, int status)
> +{
> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
> +
> +	tail->status = status;
> +}
> +
> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
> +				   struct virtio_iommu_req_head *req,
> +				   size_t len)
> +{
> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
> +
> +	return len - tail_size;
> +}
> +
> +/*
> + * __viommu_sync_req - Complete all in-flight requests
> + *
> + * Wait for all added requests to complete. When this function returns, all
> + * requests that were in-flight at the time of the call have completed.
> + */
> +static int __viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret = 0;
> +	unsigned int len;
> +	size_t write_len;
> +	ktime_t timeout = 0;
> +	struct viommu_request *req;
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +#ifdef DEBUG
> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
> +#endif
> +	virtqueue_kick(vq);
> +
> +	while (!list_empty(&viommu->requests)) {
> +		len = 0;
> +		req = virtqueue_get_buf(vq, &len);
> +		if (req == NULL) {
> +			if (!timeout || ktime_before(ktime_get(), timeout))
> +				continue;
> +
> +			/* After timeout, remove all requests */
> +			req = list_first_entry(&viommu->requests,
> +					       struct viommu_request, list);
> +			ret = -ETIMEDOUT;
> +		}
> +
> +		if (!len)
> +			viommu_set_req_status(req->buf, req->len,
> +					      VIRTIO_IOMMU_S_IOERR);
> +
> +		write_len = req->len - req->write_offset;
> +		if (req->writeback && len >= write_len)
> +			memcpy(req->writeback, req->buf + req->write_offset,
> +			       write_len);
> +
> +		list_del(&req->list);
> +		kfree(req);

So with DEBUG set, this will actually free memory that the device still
DMAs into. Hardly pretty. I think you want to mark the device broken,
queue the request and then wait for the device to be reset.
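
A rough sketch of that, assuming the existing virtio_break_device() helper
(which marks every virtqueue as broken so that further requests fail fast),
in place of the "remove all requests" branch:

		if (req == NULL) {
			if (!timeout || ktime_before(ktime_get(), timeout))
				continue;

			/*
			 * Don't reclaim buffers the device may still write
			 * to: mark the device broken and leave the requests
			 * queued until it is reset.
			 */
			virtio_break_device(viommu->vdev);
			return -ETIMEDOUT;
		}

The leftover request buffers would then have to be reclaimed only once the
device has been reset, e.g. from viommu_remove().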


> +	}
> +
> +	return ret;
> +}
> +
> +static int viommu_sync_req(struct viommu_dev *viommu)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_sync_req(viommu);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * __viommu_add_request - Add one request to the queue
> + * @buf: pointer to the request buffer
> + * @len: length of the request buffer
> + * @writeback: copy data back to the buffer when the request completes.
> + *
> + * Add a request to the queue. Only synchronize the queue if it's already full.
> + * Otherwise don't kick the queue or wait for requests to complete.
> + *
> + * When @writeback is true, data written by the device, including the request
> + * status, is copied into @buf after the request completes. This is unsafe if
> + * the caller allocates @buf on stack and drops the lock between add_req() and
> + * sync_req().
> + *
> + * Return 0 if the request was successfully added to the queue.
> + */
> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
> +			    bool writeback)
> +{
> +	int ret;
> +	off_t write_offset;
> +	struct viommu_request *req;
> +	struct scatterlist top_sg, bottom_sg;
> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
> +
> +	assert_spin_locked(&viommu->request_lock);
> +
> +	write_offset = viommu_get_req_offset(viommu, buf, len);
> +	if (!write_offset)
> +		return -EINVAL;
> +
> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	req->len = len;
> +	if (writeback) {
> +		req->writeback = buf + write_offset;
> +		req->write_offset = write_offset;
> +	}
> +	memcpy(&req->buf, buf, write_offset);
> +
> +	sg_init_one(&top_sg, req->buf, write_offset);
> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
> +
> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	if (ret == -ENOSPC) {
> +		/* If the queue is full, sync and retry */
> +		if (!__viommu_sync_req(viommu))
> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
> +	}
> +	if (ret)
> +		goto err_free;
> +
> +	list_add_tail(&req->list, &viommu->requests);
> +	return 0;
> +
> +err_free:
> +	kfree(req);
> +	return ret;
> +}
> +
> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +	ret = __viommu_add_req(viommu, buf, len, false);
> +	if (ret)
> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +
> +	return ret;
> +}
> +
> +/*
> + * Send a request and wait for it to complete. Return the request status (as an
> + * errno)
> + */
> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
> +				size_t len)
> +{
> +	int ret;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&viommu->request_lock, flags);
> +
> +	ret = __viommu_add_req(viommu, buf, len, true);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
> +		goto out_unlock;
> +	}
> +
> +	ret = __viommu_sync_req(viommu);
> +	if (ret) {
> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
> +		/* Fall-through (get the actual request status) */
> +	}
> +
> +	ret = viommu_get_req_errno(buf, len);
> +out_unlock:
> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
> +	return ret;
> +}
> +
> +/*
> + * viommu_add_mapping - add a mapping to the internal tree
> + *
> + * On success, return the new mapping. Otherwise return NULL.
> + */
> +static struct viommu_mapping *
> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
> +		   phys_addr_t paddr, size_t size, u32 flags)
> +{
> +	unsigned long irqflags;
> +	struct viommu_mapping *mapping;
> +
> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
> +	if (!mapping)
> +		return NULL;
> +
> +	mapping->paddr		= paddr;
> +	mapping->iova.start	= iova;
> +	mapping->iova.last	= iova + size - 1;
> +	mapping->flags		= flags;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
> +
> +	return mapping;
> +}
> +
> +/*
> + * viommu_del_mappings - remove mappings from the internal tree
> + *
> + * @vdomain: the domain
> + * @iova: start of the range
> + * @size: size of the range. A size of 0 corresponds to the entire address
> + *	space.
> + *
> + * On success, returns the number of unmapped bytes (>= size)
> + */
> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
> +				  unsigned long iova, size_t size)
> +{
> +	size_t unmapped = 0;
> +	unsigned long flags;
> +	unsigned long last = iova + size - 1;
> +	struct viommu_mapping *mapping = NULL;
> +	struct interval_tree_node *node, *next;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
> +	while (next) {
> +		node = next;
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		next = interval_tree_iter_next(node, iova, last);
> +
> +		/* Trying to split a mapping? */
> +		if (mapping->iova.start < iova)
> +			break;
> +
> +		/*
> +		 * Note that for a partial range, this will return the full
> +		 * mapping so we avoid sending split requests to the device.
> +		 */
> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
> +
> +		interval_tree_remove(node, &vdomain->mappings);
> +		kfree(mapping);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return unmapped;
> +}
> +
> +/*
> + * viommu_replay_mappings - re-send MAP requests
> + *
> + * When reattaching a domain that was previously detached from all endpoints,
> + * mappings were deleted from the device. Re-create the mappings available in
> + * the internal tree.
> + */
> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
> +{
> +	int ret = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct virtio_iommu_req_map map;
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
> +	while (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		map = (struct virtio_iommu_req_map) {
> +			.head.type	= VIRTIO_IOMMU_T_MAP,
> +			.domain		= cpu_to_le32(vdomain->id),
> +			.virt_start	= cpu_to_le64(mapping->iova.start),
> +			.virt_end	= cpu_to_le64(mapping->iova.last),
> +			.phys_start	= cpu_to_le64(mapping->paddr),
> +			.flags		= cpu_to_le32(mapping->flags),
> +		};
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +		if (ret)
> +			break;
> +
> +		node = interval_tree_iter_next(node, 0, -1UL);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return ret;
> +}
> +
> +/* IOMMU API */
> +
> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +{
> +	struct viommu_domain *vdomain;
> +
> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
> +		return NULL;
> +
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return NULL;
> +
> +	mutex_init(&vdomain->mutex);
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
> +	if (type == IOMMU_DOMAIN_DMA &&
> +	    iommu_get_dma_cookie(&vdomain->domain)) {
> +		kfree(vdomain);
> +		return NULL;
> +	}
> +
> +	return &vdomain->domain;
> +}
> +
> +static int viommu_domain_finalise(struct viommu_dev *viommu,
> +				  struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
> +				  (1U << viommu->domain_bits) - 1;
> +
> +	vdomain->viommu		= viommu;
> +
> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> +	domain->geometry	= viommu->geometry;
> +
> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
> +	if (ret >= 0)
> +		vdomain->id = (unsigned int)ret;
> +
> +	return ret > 0 ? 0 : ret;
> +}
> +
> +static void viommu_domain_free(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	iommu_put_dma_cookie(domain);
> +
> +	/* Free all remaining mappings (size 2^64) */
> +	viommu_del_mappings(vdomain, 0, 0);
> +
> +	if (vdomain->viommu)
> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
> +
> +	kfree(vdomain);
> +}
> +
> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	int i;
> +	int ret = 0;
> +	struct virtio_iommu_req_attach req;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	mutex_lock(&vdomain->mutex);
> +	if (!vdomain->viommu) {
> +		/*
> +		 * Initialize the domain proper now that we know which viommu
> +		 * owns it.
> +		 */
> +		ret = viommu_domain_finalise(vdev->viommu, domain);
> +	} else if (vdomain->viommu != vdev->viommu) {
> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> +		ret = -EXDEV;
> +	}
> +	mutex_unlock(&vdomain->mutex);
> +
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * In the virtio-iommu device, when attaching the endpoint to a new
> +	 * domain, it is detached from the old one and, if as a result the
> +	 * old domain isn't attached to any endpoint, all mappings are removed
> +	 * from the old domain and it is freed.
> +	 *
> +	 * In the driver the old domain still exists, and its mappings will be
> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
> +	 * freed explicitly.
> +	 *
> +	 * vdev->vdomain is protected by group->mutex
> +	 */
> +	if (vdev->vdomain)
> +		vdev->vdomain->nr_endpoints--;
> +
> +	req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdomain->id),
> +	};
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> +
> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
> +		if (ret)
> +			return ret;
> +	}
> +
> +	if (!vdomain->nr_endpoints) {
> +		/*
> +		 * This endpoint is the first to be attached to the domain.
> +		 * Replay existing mappings (e.g. SW MSI).
> +		 */
> +		ret = viommu_replay_mappings(vdomain);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	vdomain->nr_endpoints++;
> +	vdev->vdomain = vdomain;
> +
> +	return 0;
> +}
> +
> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
> +		      phys_addr_t paddr, size_t size, int prot)
> +{
> +	int ret;
> +	int flags;
> +	struct viommu_mapping *mapping;
> +	struct virtio_iommu_req_map map;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
> +
> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
> +	if (!mapping)
> +		return -ENOMEM;
> +
> +	map = (struct virtio_iommu_req_map) {
> +		.head.type	= VIRTIO_IOMMU_T_MAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.phys_start	= cpu_to_le64(paddr),
> +		.virt_end	= cpu_to_le64(iova + size - 1),
> +		.flags		= cpu_to_le32(flags),
> +	};
> +
> +	if (!vdomain->nr_endpoints)
> +		return 0;
> +
> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +	if (ret)
> +		viommu_del_mappings(vdomain, iova, size);
> +
> +	return ret;
> +}
> +
> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +			   size_t size)
> +{
> +	int ret = 0;
> +	size_t unmapped;
> +	struct virtio_iommu_req_unmap unmap;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	unmapped = viommu_del_mappings(vdomain, iova, size);
> +	if (unmapped < size)
> +		return 0;
> +
> +	/* Device already removed all mappings after detach. */
> +	if (!vdomain->nr_endpoints)
> +		return unmapped;
> +
> +	unmap = (struct virtio_iommu_req_unmap) {
> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
> +		.domain		= cpu_to_le32(vdomain->id),
> +		.virt_start	= cpu_to_le64(iova),
> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
> +	};
> +
> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
> +	return ret ? 0 : unmapped;
> +}
> +
> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
> +				       dma_addr_t iova)
> +{
> +	u64 paddr = 0;
> +	unsigned long flags;
> +	struct viommu_mapping *mapping;
> +	struct interval_tree_node *node;
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
> +	if (node) {
> +		mapping = container_of(node, struct viommu_mapping, iova);
> +		paddr = mapping->paddr + (iova - mapping->iova.start);
> +	}
> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
> +
> +	return paddr;
> +}
> +
> +static void viommu_iotlb_sync(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	viommu_sync_req(vdomain->viommu);
> +}
> +
> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *region;
> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> +					 IOMMU_RESV_SW_MSI);
> +	if (!region)
> +		return;
> +
> +	list_add_tail(&region->list, head);
> +	iommu_dma_get_resv_regions(dev, head);
> +}
> +
> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *entry, *next;
> +
> +	list_for_each_entry_safe(entry, next, head, list)
> +		kfree(entry);
> +}
> +
> +static struct iommu_ops viommu_ops;
> +static struct virtio_driver virtio_iommu_drv;
> +
> +static int viommu_match_node(struct device *dev, void *data)
> +{
> +	return dev->parent->fwnode == data;
> +}
> +
> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
> +{
> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
> +						fwnode, viommu_match_node);
> +	put_device(dev);
> +
> +	return dev ? dev_to_virtio(dev)->priv : NULL;
> +}
> +
> +static int viommu_add_device(struct device *dev)
> +{
> +	int ret;
> +	struct iommu_group *group;
> +	struct viommu_endpoint *vdev;
> +	struct viommu_dev *viommu = NULL;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return -ENODEV;
> +
> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
> +	if (!viommu)
> +		return -ENODEV;
> +
> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
> +	if (!vdev)
> +		return -ENOMEM;
> +
> +	vdev->viommu = viommu;
> +	fwspec->iommu_priv = vdev;
> +
> +	ret = iommu_device_link(&viommu->iommu, dev);
> +	if (ret)
> +		goto err_free_dev;
> +
> +	/*
> +	 * Last step creates a default domain and attaches to it. Everything
> +	 * must be ready.
> +	 */
> +	group = iommu_group_get_for_dev(dev);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		goto err_unlink_dev;
> +	}
> +
> +	iommu_group_put(group);
> +
> +	return PTR_ERR_OR_ZERO(group);
> +
> +err_unlink_dev:
> +	iommu_device_unlink(&viommu->iommu, dev);
> +
> +err_free_dev:
> +	kfree(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove_device(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +
> +	if (!fwspec || fwspec->ops != &viommu_ops)
> +		return;
> +
> +	vdev = fwspec->iommu_priv;
> +
> +	iommu_group_remove_device(dev);
> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	kfree(vdev);
> +}
> +
> +static struct iommu_group *viommu_device_group(struct device *dev)
> +{
> +	if (dev_is_pci(dev))
> +		return pci_device_group(dev);
> +	else
> +		return generic_device_group(dev);
> +}
> +
> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
> +{
> +	return iommu_fwspec_add_ids(dev, args->args, 1);
> +}
> +
> +static struct iommu_ops viommu_ops = {
> +	.domain_alloc		= viommu_domain_alloc,
> +	.domain_free		= viommu_domain_free,
> +	.attach_dev		= viommu_attach_dev,
> +	.map			= viommu_map,
> +	.unmap			= viommu_unmap,
> +	.iova_to_phys		= viommu_iova_to_phys,
> +	.iotlb_sync		= viommu_iotlb_sync,
> +	.add_device		= viommu_add_device,
> +	.remove_device		= viommu_remove_device,
> +	.device_group		= viommu_device_group,
> +	.get_resv_regions	= viommu_get_resv_regions,
> +	.put_resv_regions	= viommu_put_resv_regions,
> +	.of_xlate		= viommu_of_xlate,
> +};
> +
> +static int viommu_init_vqs(struct viommu_dev *viommu)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
> +	const char *name = "request";
> +	void *ret;
> +
> +	ret = virtio_find_single_vq(vdev, NULL, name);
> +	if (IS_ERR(ret)) {
> +		dev_err(viommu->dev, "cannot find VQ\n");
> +		return PTR_ERR(ret);
> +	}
> +
> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
> +
> +	return 0;
> +}
> +
> +static int viommu_probe(struct virtio_device *vdev)
> +{
> +	struct device *parent_dev = vdev->dev.parent;
> +	struct viommu_dev *viommu = NULL;
> +	struct device *dev = &vdev->dev;
> +	u64 input_start = 0;
> +	u64 input_end = -1UL;
> +	int ret;
> +
> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
> +		return -ENODEV;

I'm a bit confused about what will happen if this device
happens to be behind an iommu itself.

If we can't handle that, should we clear PLATFORM_IOMMU
e.g. like the balloon does?


> +
> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
> +	if (!viommu)
> +		return -ENOMEM;
> +
> +	spin_lock_init(&viommu->request_lock);
> +	ida_init(&viommu->domain_ids);
> +	viommu->dev = dev;
> +	viommu->vdev = vdev;
> +	INIT_LIST_HEAD(&viommu->requests);
> +
> +	ret = viommu_init_vqs(viommu);
> +	if (ret)
> +		return ret;
> +
> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
> +		     &viommu->pgsize_bitmap);
> +
> +	if (!viommu->pgsize_bitmap) {
> +		ret = -EINVAL;
> +		goto err_free_vqs;
> +	}
> +
> +	viommu->domain_bits = 32;
> +
> +	/* Optional features */
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.start,
> +			     &input_start);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
> +			     struct virtio_iommu_config, input_range.end,
> +			     &input_end);
> +
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
> +			     struct virtio_iommu_config, domain_bits,
> +			     &viommu->domain_bits);
> +
> +	viommu->geometry = (struct iommu_domain_geometry) {
> +		.aperture_start	= input_start,
> +		.aperture_end	= input_end,
> +		.force_aperture	= true,
> +	};
> +
> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
> +
> +	virtio_device_ready(vdev);
> +
> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
> +				     virtio_bus_name(vdev));
> +	if (ret)
> +		goto err_free_vqs;
> +
> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
> +
> +	iommu_device_register(&viommu->iommu);
> +
> +#ifdef CONFIG_PCI
> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
> +		pci_request_acs();
> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +#ifdef CONFIG_ARM_AMBA
> +	if (amba_bustype.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +#endif
> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
> +		if (ret)
> +			goto err_unregister;
> +	}
> +
> +	vdev->priv = viommu;
> +
> +	dev_info(dev, "input address: %u bits\n",
> +		 order_base_2(viommu->geometry.aperture_end));
> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
> +
> +	return 0;
> +
> +err_unregister:
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +err_free_vqs:
> +	vdev->config->del_vqs(vdev);
> +
> +	return ret;
> +}
> +
> +static void viommu_remove(struct virtio_device *vdev)
> +{
> +	struct viommu_dev *viommu = vdev->priv;
> +
> +	iommu_device_sysfs_remove(&viommu->iommu);
> +	iommu_device_unregister(&viommu->iommu);
> +
> +	/* Stop all virtqueues */
> +	vdev->config->reset(vdev);
> +	vdev->config->del_vqs(vdev);
> +
> +	dev_info(&vdev->dev, "device removed\n");
> +}
> +
> +static void viommu_config_changed(struct virtio_device *vdev)
> +{
> +	dev_warn(&vdev->dev, "config changed\n");
> +}
> +
> +static unsigned int features[] = {
> +	VIRTIO_IOMMU_F_MAP_UNMAP,
> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
> +	VIRTIO_IOMMU_F_INPUT_RANGE,
> +};
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> +static struct virtio_driver virtio_iommu_drv = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.feature_table		= features,
> +	.feature_table_size	= ARRAY_SIZE(features),
> +	.probe			= viommu_probe,
> +	.remove			= viommu_remove,
> +	.config_changed		= viommu_config_changed,
> +};
> +
> +module_virtio_driver(virtio_iommu_drv);
> +
> +MODULE_DESCRIPTION("Virtio IOMMU driver");
> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> new file mode 100644
> index 000000000000..e808fc7fbe82
> --- /dev/null
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: BSD-3-Clause */
> +/*
> + * Virtio-iommu definition v0.8
> + *
> + * Copyright (C) 2018 Arm Ltd.
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +/* Feature bits */
> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
> +#define VIRTIO_IOMMU_F_BYPASS			3
> +
> +struct virtio_iommu_config {
> +	/* Supported page sizes */
> +	__u64					page_size_mask;
> +	/* Supported IOVA range */
> +	struct virtio_iommu_range {

I'd rather we moved the definition outside even though gcc allows it -
some old userspace compilers might not.

> +		__u64				start;
> +		__u64				end;
> +	} input_range;
> +	/* Max domain ID size */
> +	__u8					domain_bits;

Let's add explicit padding here as well?

> +};
> +
> +/* Request types */
> +#define VIRTIO_IOMMU_T_ATTACH			0x01
> +#define VIRTIO_IOMMU_T_DETACH			0x02
> +#define VIRTIO_IOMMU_T_MAP			0x03
> +#define VIRTIO_IOMMU_T_UNMAP			0x04
> +
> +/* Status types */
> +#define VIRTIO_IOMMU_S_OK			0x00
> +#define VIRTIO_IOMMU_S_IOERR			0x01
> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
> +#define VIRTIO_IOMMU_S_DEVERR			0x03
> +#define VIRTIO_IOMMU_S_INVAL			0x04
> +#define VIRTIO_IOMMU_S_RANGE			0x05
> +#define VIRTIO_IOMMU_S_NOENT			0x06
> +#define VIRTIO_IOMMU_S_FAULT			0x07
> +
> +struct virtio_iommu_req_head {
> +	__u8					type;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_tail {
> +	__u8					status;
> +	__u8					reserved[3];
> +};
> +
> +struct virtio_iommu_req_attach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_detach {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le32					endpoint;
> +	__u8					reserved[8];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
> +
> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
> +						 VIRTIO_IOMMU_MAP_F_MMIO)
> +
> +struct virtio_iommu_req_map {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__le64					phys_start;
> +	__le32					flags;
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +struct virtio_iommu_req_unmap {
> +	struct virtio_iommu_req_head		head;
> +	__le32					domain;
> +	__le64					virt_start;
> +	__le64					virt_end;
> +	__u8					reserved[4];
> +	struct virtio_iommu_req_tail		tail;
> +};
> +
> +#endif
> -- 
> 2.19.1


* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-10-12 16:42     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 16:42 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki,
	devicetree, linux-pci, joro, will.deacon, virtualization,
	eric.auger, iommu, robh+dt, marc.zyngier, robin.murphy, kvmarm

On Fri, Oct 12, 2018 at 03:59:16PM +0100, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter an MSI doorbell region, set it up as an IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  39 ++++++++
>  2 files changed, 180 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 9fb38cd3b727..8eaf66770469 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -56,6 +56,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -77,8 +78,10 @@ struct viommu_domain {
>  };
>  
>  struct viommu_endpoint {
> +	struct device			*dev;
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>  {
>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>  
> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
> +		return len - viommu->probe_size - tail_size;
> +
>  	return len - tail_size;
>  }
>  
> @@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;


Maybe validate that size, end and start all make sense
and e.g. fit within phys_addr_t and size_t?
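
Something along these lines, perhaps (sketch only, done before computing
size; the casts are just meant to catch truncation on 32-bit):

	if (len < sizeof(*mem))
		return -EINVAL;
	if (start > end || end - start >= SIZE_MAX)
		return -EINVAL;
	/* Make sure the region is addressable with phys_addr_t */
	if (end != (phys_addr_t)end)
		return -EINVAL;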

> +
> +	switch (mem->subtype) {
> +	default:
> +		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +			 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	}
> +
> +	list_add(&region->list, &vdev->resv_regions);
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	size_t probe_len;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		return -EINVAL;
> +
> +	probe_len = sizeof(*probe) + viommu->probe_size +
> +		    sizeof(struct virtio_iommu_req_tail);
> +	probe = kzalloc(probe_len, GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length) + sizeof(*prop);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
> +			break;
> +		default:
> +			dev_err(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>  /* IOMMU API */
>  
>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
> @@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI window, use it.
> +		 * Otherwise add a software-mapped region.
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
>  	iommu_dma_get_resv_regions(dev, head);
>  }
>  
> @@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
>  	if (!vdev)
>  		return -ENOMEM;
>  
> +	vdev->dev = dev;
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	ret = iommu_device_link(&viommu->iommu, dev);
>  	if (ret)
>  		goto err_free_dev;
> @@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
>  	iommu_device_unlink(&viommu->iommu, dev);
>  
>  err_free_dev:
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  
>  	return ret;
> @@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
>  
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  }
>  
> @@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -913,6 +1047,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index e808fc7fbe82..feed74586bb0 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -14,6 +14,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */

Oh so you do add the padding in this patch, it's just a
question of moving this line to the earlier patch then.

> +	__u32					probe_size;
>  };
>  
>  /* Request types */
> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  };
>  
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +};
> +
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	struct virtio_iommu_probe_property	head;
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					start;
> +	__le64					end;
> +};
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/*
> +	 * Tail follows the variable-length properties array. No padding,
> +	 * property lengths are all aligned on 8 bytes.
> +	 */
> +};
> +
>  #endif
> -- 
> 2.19.1
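
For illustration only (this is not part of the patch or the spec text), a
device reporting a single MSI doorbell page would return one such property
in the probe buffer. The length field below follows the way the driver
parses it, counting the payload without the 4-byte property header; the
doorbell address is made up:

        struct virtio_iommu_probe_resv_mem resv = {
                .head.type   = cpu_to_le16(VIRTIO_IOMMU_PROBE_T_RESV_MEM),
                .head.length = cpu_to_le16(sizeof(resv) - sizeof(resv.head)),
                .subtype     = VIRTIO_IOMMU_RESV_MEM_T_MSI,
                .start       = cpu_to_le64(0x08000000),
                .end         = cpu_to_le64(0x08000fff),
        };

The whole property is 24 bytes, satisfying the 8-byte alignment noted in
the comment above.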

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
@ 2018-10-12 16:42     ` Michael S. Tsirkin
  0 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 16:42 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, jasowang, robh+dt, mark.rutland, eric.auger,
	tnowicki, kevin.tian, marc.zyngier, robin.murphy, will.deacon,
	lorenzo.pieralisi

On Fri, Oct 12, 2018 at 03:59:16PM +0100, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  39 ++++++++
>  2 files changed, 180 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 9fb38cd3b727..8eaf66770469 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -56,6 +56,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -77,8 +78,10 @@ struct viommu_domain {
>  };
>  
>  struct viommu_endpoint {
> +	struct device			*dev;
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>  {
>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>  
> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
> +		return len - viommu->probe_size - tail_size;
> +
>  	return len - tail_size;
>  }
>  
> @@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;


Maybe validate that size, end and start all make sense
and e.g. fit within phys_addr_t and size_t?
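
For illustration, the kind of check being suggested could sit right after
the existing length test, something like this (a sketch, not part of the
posted patch; SIZE_MAX is the kernel's ~(size_t)0):

        /* Reject inverted windows and sizes that overflow size_t */
        if (start > end || end - start >= SIZE_MAX)
                return -EINVAL;

        /* Reject windows that a 32-bit phys_addr_t cannot address */
        if (end > ~(phys_addr_t)0)
                return -EINVAL;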

> +
> +	switch (mem->subtype) {
> +	default:
> +		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +			 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	size_t probe_len;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		return -EINVAL;
> +
> +	probe_len = sizeof(*probe) + viommu->probe_size +
> +		    sizeof(struct virtio_iommu_req_tail);
> +	probe = kzalloc(probe_len, GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length) + sizeof(*prop);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
> +			break;
> +		default:
> +			dev_err(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>  /* IOMMU API */
>  
>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
> @@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
>  	iommu_dma_get_resv_regions(dev, head);
>  }
>  
> @@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
>  	if (!vdev)
>  		return -ENOMEM;
>  
> +	vdev->dev = dev;
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	ret = iommu_device_link(&viommu->iommu, dev);
>  	if (ret)
>  		goto err_free_dev;
> @@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
>  	iommu_device_unlink(&viommu->iommu, dev);
>  
>  err_free_dev:
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  
>  	return ret;
> @@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
>  
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  }
>  
> @@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -913,6 +1047,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index e808fc7fbe82..feed74586bb0 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -14,6 +14,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */

Oh so you do add the padding in this patch, it's just a
question of moving this line to the earlier patch then.

> +	__u32					probe_size;
>  };
>  
>  /* Request types */
> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  };
>  
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +};
> +
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	struct virtio_iommu_probe_property	head;
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					start;
> +	__le64					end;
> +};
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/*
> +	 * Tail follows the variable-length properties array. No padding,
> +	 * property lengths are all aligned on 8 bytes.
> +	 */
> +};
> +
>  #endif
> -- 
> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
@ 2018-10-12 17:00     ` Michael S. Tsirkin
  0 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-12 17:00 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, jasowang, robh+dt, mark.rutland, eric.auger,
	tnowicki, kevin.tian, marc.zyngier, robin.murphy, will.deacon,
	lorenzo.pieralisi

On Fri, Oct 12, 2018 at 03:59:10PM +0100, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.8 [1].
> Changes since v2 [2]:
> 
> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>   would like to phase out the MMIO transport. This produces a complex
>   topology where the programming interface of the IOMMU could appear
>   lower than the endpoints that it translates. It's not unheard of (e.g.
>   AMD IOMMU), and the guest easily copes with this.
>   
>   The "Firmware description" section of the specification has been
>   updated with all combinations of PCI, MMIO and DT, ACPI.
> 
> * Fix structures layout, they don't need the "packed" attribute anymore.
> 
> * While we're at it, add domain parameter to DETACH request, and leave
>   some padding. This way the next version, that adds PASID support,
>   won't have to introduce a "DETACH2" request to stay backward
>   compatible.
> 
> * Require virtio device 1.0+. Remove legacy transport notes from the
>   specification.
> 
> * Request timeout is now only enabled with DEBUG.
> 
> * The patch for VFIO Kconfig (previously patch 5/5) is in next.
> 
> You can find Linux driver and kvmtool device on branches
> virtio-iommu/v0.8 [3] (currently based on 4.19-rc7 but rebasing onto
> next only produced a trivial conflict). Branch virtio-iommu/devel
> contains a few patches that I'd like to send once the base is upstream:
> 
> * virtio-iommu as a module. It got *much* nicer after Rob's probe
>   deferral rework, but I still have a bug to fix when re-loading the
>   virtio-iommu module.
> 
> * ACPI support requires a minor IORT spec update (reservation of node
>   ID). I think it should be easier to obtain once the device and drivers
>   are upstream.
> 
> [1] Virtio-iommu specification v0.8, diff from v0.7, and sources
>     git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.8
>     http://jpbrucker.net/virtio-iommu/spec/v0.8/virtio-iommu-v0.8.pdf
>     http://jpbrucker.net/virtio-iommu/spec/diffs/virtio-iommu-pdf-diff-v0.7-v0.8.pdf
> 
> [2] [PATCH v2 0/5] Add virtio-iommu driver
>     https://www.spinics.net/lists/kvm/msg170655.html
> 
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.8
>     git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.8
> 
> Jean-Philippe Brucker (7):
>   dt-bindings: virtio-mmio: Add IOMMU description
>   dt-bindings: virtio: Add virtio-pci-iommu node
>   PCI: OF: allow endpoints to bypass the iommu
>   PCI: OF: Initialize dev->fwnode appropriately
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
> 
>  .../devicetree/bindings/virtio/iommu.txt      |   66 +
>  .../devicetree/bindings/virtio/mmio.txt       |   30 +
>  MAINTAINERS                                   |    7 +
>  drivers/iommu/Kconfig                         |   11 +
>  drivers/iommu/Makefile                        |    1 +
>  drivers/iommu/virtio-iommu.c                  | 1171 +++++++++++++++++
>  drivers/pci/of.c                              |   14 +-
>  include/uapi/linux/virtio_ids.h               |    1 +
>  include/uapi/linux/virtio_iommu.h             |  159 +++
>  9 files changed, 1457 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h

This all looks good to me. Minor nits:
- I think DEBUG mode is best just removed for now
- Slightly wrong patch splitup causing a misaligned structure
  in uapi until all patches are applied.

You should Cc Bjorn on the pci change - I'd like to see his ack on it
being merged through my tree.

And pls Cc the virtio-dev list on any virtio uapi changes.

At a feature level I have some ideas for more features we
could add, but for now I think I'll put this version in -next
while you iron out the above wrinkles. Hope you can make the
merge window.

> -- 
> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
@ 2018-10-12 18:54         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-12 18:54 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: mark.rutland, peter.maydell, kevin.tian, tnowicki, devicetree,
	jasowang, linux-pci, will.deacon, virtualization, iommu, robh+dt,
	marc.zyngier, robin.murphy, kvmarm

On 12/10/2018 17:35, Michael S. Tsirkin wrote:
>> +		list_del(&req->list);
>> +		kfree(req);
> 
> So with DEBUG set, this will actually free memory that device still
> DMA's into. Hardly pretty. I think you want to mark device broken,
> queue the request and then wait for device to be reset.

Ok, let's remove DEBUG for the moment
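
For reference, the alternative suggested above could look roughly like this
in the timeout path, assuming the viommu_dev keeps pointers to its struct
device and virtio_device (field names assumed here); it is only a sketch,
not what the posted patch does:

        /*
         * Don't free a buffer the device may still DMA into: mark the
         * device broken and leave the request queued until the device is
         * reset, then fail the call (e.g. with -EPIPE).
         */
        dev_err(viommu->dev, "request timed out, marking device broken\n");
        virtio_break_device(viommu->vdev);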

>> +static int viommu_probe(struct virtio_device *vdev)
>> +{
>> +	struct device *parent_dev = vdev->dev.parent;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct device *dev = &vdev->dev;
>> +	u64 input_start = 0;
>> +	u64 input_end = -1UL;
>> +	int ret;
>> +
>> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>> +		return -ENODEV;
> 
> I'm a bit confused about what will happen if this device
> happens to be behind an iommu itself.
>
> If we can't handle that, should we clear PLATFORM_IOMMU
> e.g. like the balloon does?

I think the DMA API can handle this device doing DMA through another
IOMMU. I haven't tested this case because it is very unusual (IOMMUs
themselves generally access the physical address space) but I don't see
anything preventing it.

What we can't handle is a device performing DMA through two
daisy-chained IOMMUs. Clearing PLATFORM_IOMMU on the first one
wouldn't make that case work either; we'd need some core changes.

>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
>> +	/* Supported IOVA range */
>> +	struct virtio_iommu_range {
> 
> I'd rather we moved the definition outside even though gcc allows it -
> some old userspace compilers might not.
> 
>> +		__u64				start;
>> +		__u64				end;
>> +	} input_range;
>> +	/* Max domain ID size */
>> +	__u8					domain_bits;
> 
> Let's add explicit padding here as well?

Ok
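
For illustration, the reworked definitions could end up looking something
like this (a sketch of the two suggestions above, with the probe_size field
from the later patch included; not necessarily the next revision):

        struct virtio_iommu_range {
                __u64                                   start;
                __u64                                   end;
        };

        struct virtio_iommu_config {
                /* Supported page sizes */
                __u64                                   page_size_mask;
                /* Supported IOVA range */
                struct virtio_iommu_range               input_range;
                /* Max domain ID size */
                __u8                                    domain_bits;
                __u8                                    padding[3];
                /* Probe buffer size */
                __u32                                   probe_size;
        };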

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
@ 2018-10-12 18:55         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-12 18:55 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: mark.rutland, peter.maydell, kevin.tian, tnowicki, devicetree,
	jasowang, linux-pci, will.deacon, virtualization, iommu, robh+dt,
	marc.zyngier, robin.murphy, kvmarm

On 12/10/2018 18:00, Michael S. Tsirkin wrote:
> This all looks good to me. Minor nits:
> - I think DEBUG mode is best just removed for now
> - Slightly wrong patch splitup causing a misaligned structure
>   in uapi until all patches are applied.

Thanks a lot for the review, I'll fix these up and send a new version

> You should Cc Bjorn on the pci change - I'd like to see his ack on it
> being merged through my tree.

Argh, I don't know how I missed him. However, patches 1-4 are device tree
changes and need acks from Rob or Mark (on Cc).

> And pls Cc the virtio-dev list on any virtio uapi changes.
> 
> At a feature level I have some ideas for more features we
> could add, but for now I think I'll put this version in -next
> while you iron out the above wrinkles. Hope you can make the
> merge window.

Thanks, I also have some work lined up for hardware acceleration and
shared address spaces.

Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
@ 2018-10-12 19:41       ` Bjorn Helgaas
  0 siblings, 0 replies; 101+ messages in thread
From: Bjorn Helgaas @ 2018-10-12 19:41 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, mst, jasowang, robh+dt, mark.rutland,
	eric.auger, tnowicki, kevin.tian, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi

s/iommu/IOMMU/ in subject

On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
> Using the iommu-map binding, endpoints in a given PCI domain can be
> managed by different IOMMUs. Some virtual machines may allow a subset of
> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented

s/case/cases/

> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
> PCI root complex has an iommu-map property, the driver requires all
> endpoints to be described by the property. Allow the iommu-map property to
> have gaps.

I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
safe to allow devices to bypass the IOMMU.  Does this mean a typo in
iommu-map could inadvertently allow devices to bypass it?  Should we
indicate something in dmesg (and/or sysfs) about devices that bypass
it?

> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,

s/of_pci_map_rid/of_pci_map_rid()/

> which is invalid since MSIs always reach an MSI controller. Thankfully
> Linux will error out later, when attempting to find an MSI domain for the
> device.

Not clear to me what "error out" means here.  In a userspace program,
I would infer that the program exits with an error message, but I
doubt you mean that Linux exits.

> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  drivers/pci/of.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
> index 1836b8ddf292..2f5015bdb256 100644
> --- a/drivers/pci/of.c
> +++ b/drivers/pci/of.c
> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>  		return 0;
>  	}
>  
> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
> -		np, map_name, rid, target && *target ? *target : NULL);
> -	return -EFAULT;
> +	/* Bypasses translation */
> +	if (id_out)
> +		*id_out = rid;
> +	return 0;
>  }
>  
>  #if IS_ENABLED(CONFIG_OF_IRQ)
> -- 
> 2.19.1
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 4/7] PCI: OF: Initialize dev->fwnode appropriately
@ 2018-10-12 19:44       ` Bjorn Helgaas
  0 siblings, 0 replies; 101+ messages in thread
From: Bjorn Helgaas @ 2018-10-12 19:44 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, mst, jasowang, robh+dt, mark.rutland,
	eric.auger, tnowicki, kevin.tian, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi

On Fri, Oct 12, 2018 at 03:59:14PM +0100, Jean-Philippe Brucker wrote:
> For PCI devices that have an OF node, set the fwnode as well. This way
> drivers that rely on fwnode don't need the special case described by
> commit f94277af03ea ("of/platform: Initialise dev->fwnode appropriately").
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>

Acked-by: Bjorn Helgaas <bhelgaas@google.com>

> ---
>  drivers/pci/of.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
> index 2f5015bdb256..8026417fab38 100644
> --- a/drivers/pci/of.c
> +++ b/drivers/pci/of.c
> @@ -21,12 +21,15 @@ void pci_set_of_node(struct pci_dev *dev)
>  		return;
>  	dev->dev.of_node = of_pci_find_child_device(dev->bus->dev.of_node,
>  						    dev->devfn);
> +	if (dev->dev.of_node)
> +		dev->dev.fwnode = &dev->dev.of_node->fwnode;
>  }
>  
>  void pci_release_of_node(struct pci_dev *dev)
>  {
>  	of_node_put(dev->dev.of_node);
>  	dev->dev.of_node = NULL;
> +	dev->dev.fwnode = NULL;
>  }
>  
>  void pci_set_bus_of_node(struct pci_bus *bus)
> @@ -35,12 +38,16 @@ void pci_set_bus_of_node(struct pci_bus *bus)
>  		bus->dev.of_node = pcibios_get_phb_of_node(bus);
>  	else
>  		bus->dev.of_node = of_node_get(bus->self->dev.of_node);
> +
> +	if (bus->dev.of_node)
> +		bus->dev.fwnode = &bus->dev.of_node->fwnode;
>  }
>  
>  void pci_release_bus_of_node(struct pci_bus *bus)
>  {
>  	of_node_put(bus->dev.of_node);
>  	bus->dev.of_node = NULL;
> +	bus->dev.fwnode = NULL;
>  }
>  
>  struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus)
> -- 
> 2.19.1
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
@ 2018-10-15 10:52           ` Michael S. Tsirkin
  0 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-15 10:52 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Jean-Philippe Brucker, iommu, virtualization, devicetree,
	linux-pci, kvmarm, peter.maydell, joro, jasowang, robh+dt,
	mark.rutland, eric.auger, tnowicki, kevin.tian, marc.zyngier,
	robin.murphy, will.deacon, lorenzo.pieralisi

On Fri, Oct 12, 2018 at 02:41:59PM -0500, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
> > Using the iommu-map binding, endpoints in a given PCI domain can be
> > managed by different IOMMUs. Some virtual machines may allow a subset of
> > endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
> > as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
> > PCI root complex has an iommu-map property, the driver requires all
> > endpoints to be described by the property. Allow the iommu-map property to
> > have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?


Thinking about this comment, I would like to ask: can't the
virtio device indicate the ranges in a portable way?
This would minimize the dependency on dt bindings and ACPI,
enabling support for systems that have neither but do
have virtio e.g. through pci.
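
Purely as an illustration of the idea (nothing like this exists in the
v0.8 spec, and the names below are invented): the device itself could
enumerate the endpoint IDs it translates, for instance through an array
exposed in config space or a dedicated request, so that firmware only has
to say where the IOMMU is:

        /* Hypothetical: one endpoint ID range covered by this IOMMU */
        struct virtio_iommu_topo_range {
                __le32                                  endpoint_start;
                __le32                                  endpoint_end;
        };

Endpoints outside the reported ranges would then be treated as bypassing
translation, without relying on gaps in iommu-map.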

>  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?
> 
> > Relaxing of_pci_map_rid also allows the msi-map property to have gaps,
> 
> s/of_pci_map_rid/of_pci_map_rid()/
> 
> > which is invalid since MSIs always reach an MSI controller. Thankfully
> > Linux will error out later, when attempting to find an MSI domain for the
> > device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.
> 
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> > ---
> >  drivers/pci/of.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/pci/of.c b/drivers/pci/of.c
> > index 1836b8ddf292..2f5015bdb256 100644
> > --- a/drivers/pci/of.c
> > +++ b/drivers/pci/of.c
> > @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
> >  		return 0;
> >  	}
> >  
> > -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
> > -		np, map_name, rid, target && *target ? *target : NULL);
> > -	return -EFAULT;
> > +	/* Bypasses translation */
> > +	if (id_out)
> > +		*id_out = rid;
> > +	return 0;
> >  }
> >  
> >  #if IS_ENABLED(CONFIG_OF_IRQ)
> > -- 
> > 2.19.1
> > 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-12 19:41       ` Bjorn Helgaas
@ 2018-10-15 11:32           ` Robin Murphy
  -1 siblings, 0 replies; 101+ messages in thread
From: Robin Murphy @ 2018-10-15 11:32 UTC (permalink / raw)
  To: Bjorn Helgaas, Jean-Philippe Brucker
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	devicetree-u79uwXL29TY76Z2rM5mHXA, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A

On 12/10/18 20:41, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>> Using the iommu-map binding, endpoints in a given PCI domain can be
>> managed by different IOMMUs. Some virtual machines may allow a subset of
>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>> PCI root complex has an iommu-map property, the driver requires all
>> endpoints to be described by the property. Allow the iommu-map property to
>> have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?

It's not really "allow devices to bypass the IOMMU" so much as "allow DT 
to describe devices which the IOMMU doesn't translate". It's a bit of an 
edge case for not-really-PCI devices, but FWIW I can certainly think of 
several ways to build real hardware like that. As for inadvertent errors 
leaving out IDs which *should* be in the map, that really depends on the 
IOMMU/driver implementation - e.g. SMMUv2 with arm-smmu.disable_bypass=0 
would treat the device as untranslated, whereas SMMUv3 would always 
generate a fault upon any transaction due to no valid stream table entry 
being programmed (not even a bypass one).

I reckon it's a sufficiently unusual case that keeping some sort of 
message probably is worthwhile (at pr_info rather than pr_err) in case 
someone does hit it by mistake.
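
To make the gap semantics concrete, here is a rough self-contained C model
(made-up types, not the actual of_pci_map_rid() code; the real iommu-map
entry also carries the IOMMU phandle) of a map lookup that falls back to
the untranslated ID when it hits a hole:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One iommu-map entry, reduced to (rid-base, iommu-base, length) */
struct map_entry {
	uint32_t rid_base;
	uint32_t out_base;
	uint32_t length;
};

/*
 * Look up a requester ID in a map that may have holes. Returns true when an
 * entry matches (translated endpoint). When nothing matches, the ID is
 * passed through unchanged, i.e. the endpoint bypasses the IOMMU.
 */
static bool map_rid(const struct map_entry *map, size_t n, uint32_t rid,
		    uint32_t *id_out)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (rid >= map[i].rid_base &&
		    rid < map[i].rid_base + map[i].length) {
			*id_out = map[i].out_base + (rid - map[i].rid_base);
			return true;
		}
	}
	*id_out = rid;	/* hole in the map: bypasses translation */
	return false;
}

int main(void)
{
	/* RIDs 0x0-0x7 and 0x10-0x17 are translated; 0x8-0xf are left out */
	const struct map_entry map[] = {
		{ 0x00, 0x00, 0x8 },
		{ 0x10, 0x10, 0x8 },
	};
	uint32_t id;

	printf("0x08: translated=%d id=0x%x\n",
	       map_rid(map, 2, 0x08, &id), (unsigned)id);
	printf("0x11: translated=%d id=0x%x\n",
	       map_rid(map, 2, 0x11, &id), (unsigned)id);
	return 0;
}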

>> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,

At worst, I suppose we could always add yet another parameter for each 
caller to choose whether a missing entry is considered an error or not.

Robin.

> s/of_pci_map_rid/of_pci_map_rid()/
> 
>> which is invalid since MSIs always reach an MSI controller. Thankfully
>> Linux will error out later, when attempting to find an MSI domain for the
>> device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.
> 
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
>> ---
>>   drivers/pci/of.c | 7 ++++---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
>> index 1836b8ddf292..2f5015bdb256 100644
>> --- a/drivers/pci/of.c
>> +++ b/drivers/pci/of.c
>> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>>   		return 0;
>>   	}
>>   
>> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
>> -		np, map_name, rid, target && *target ? *target : NULL);
>> -	return -EFAULT;
>> +	/* Bypasses translation */
>> +	if (id_out)
>> +		*id_out = rid;
>> +	return 0;
>>   }
>>   
>>   #if IS_ENABLED(CONFIG_OF_IRQ)
>> -- 
>> 2.19.1
>>

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
@ 2018-10-15 11:32           ` Robin Murphy
  0 siblings, 0 replies; 101+ messages in thread
From: Robin Murphy @ 2018-10-15 11:32 UTC (permalink / raw)
  To: Bjorn Helgaas, Jean-Philippe Brucker
  Cc: mark.rutland, peter.maydell, kevin.tian, tnowicki, devicetree,
	jasowang, linux-pci, mst, will.deacon, virtualization, iommu,
	robh+dt, marc.zyngier, kvmarm, nd

On 12/10/18 20:41, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>> Using the iommu-map binding, endpoints in a given PCI domain can be
>> managed by different IOMMUs. Some virtual machines may allow a subset of
>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>> PCI root complex has an iommu-map property, the driver requires all
>> endpoints to be described by the property. Allow the iommu-map property to
>> have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?

It's not really "allow devices to bypass the IOMMU" so much as "allow DT 
to describe devices which the IOMMU doesn't translate". It's a bit of an 
edge case for not-really-PCI devices, but FWIW I can certainly think of 
several ways to build real hardware like that. As for inadvertent errors 
leaving out IDs which *should* be in the map, that really depends on the 
IOMMU/driver implementation - e.g. SMMUv2 with arm-smmu.disable_bypass=0 
would treat the device as untranslated, whereas SMMUv3 would always 
generate a fault upon any transaction due to no valid stream table entry 
being programmed (not even a bypass one).

I reckon it's a sufficiently unusual case that keeping some sort of 
message probably is worthwhile (at pr_info rather than pr_err) in case 
someone does hit it by mistake.

>> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,

At worst, I suppose we could always add yet another parameter for each 
caller to choose whether a missing entry is considered an error or not.

Robin.

> s/of_pci_map_rid/of_pci_map_rid()/
> 
>> which is invalid since MSIs always reach an MSI controller. Thankfully
>> Linux will error out later, when attempting to find an MSI domain for the
>> device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.
> 
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>   drivers/pci/of.c | 7 ++++---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
>> index 1836b8ddf292..2f5015bdb256 100644
>> --- a/drivers/pci/of.c
>> +++ b/drivers/pci/of.c
>> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>>   		return 0;
>>   	}
>>   
>> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
>> -		np, map_name, rid, target && *target ? *target : NULL);
>> -	return -EFAULT;
>> +	/* Bypasses translation */
>> +	if (id_out)
>> +		*id_out = rid;
>> +	return 0;
>>   }
>>   
>>   #if IS_ENABLED(CONFIG_OF_IRQ)
>> -- 
>> 2.19.1
>>

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-12 19:41       ` Bjorn Helgaas
  (?)
  (?)
@ 2018-10-15 11:32       ` Robin Murphy
  -1 siblings, 0 replies; 101+ messages in thread
From: Robin Murphy @ 2018-10-15 11:32 UTC (permalink / raw)
  To: Bjorn Helgaas, Jean-Philippe Brucker
  Cc: mark.rutland, peter.maydell, tnowicki, devicetree, marc.zyngier,
	linux-pci, mst, will.deacon, virtualization, iommu, robh+dt

On 12/10/18 20:41, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>> Using the iommu-map binding, endpoints in a given PCI domain can be
>> managed by different IOMMUs. Some virtual machines may allow a subset of
>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>> PCI root complex has an iommu-map property, the driver requires all
>> endpoints to be described by the property. Allow the iommu-map property to
>> have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?

It's not really "allow devices to bypass the IOMMU" so much as "allow DT 
to describe devices which the IOMMU doesn't translate". It's a bit of an 
edge case for not-really-PCI devices, but FWIW I can certainly think of 
several ways to build real hardware like that. As for inadvertent errors 
leaving out IDs which *should* be in the map, that really depends on the 
IOMMU/driver implementation - e.g. SMMUv2 with arm-smmu.disable_bypass=0 
would treat the device as untranslated, whereas SMMUv3 would always 
generate a fault upon any transaction due to no valid stream table entry 
being programmed (not even a bypass one).

I reckon it's a sufficiently unusual case that keeping some sort of 
message probably is worthwhile (at pr_info rather than pr_err) in case 
someone does hit it by mistake.

>> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,

At worst, I suppose we could always add yet another parameter for each 
caller to choose whether a missing entry is considered an error or not.

Robin.

> s/of_pci_map_rid/of_pci_map_rid()/
> 
>> which is invalid since MSIs always reach an MSI controller. Thankfully
>> Linux will error out later, when attempting to find an MSI domain for the
>> device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.
> 
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>   drivers/pci/of.c | 7 ++++---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
>> index 1836b8ddf292..2f5015bdb256 100644
>> --- a/drivers/pci/of.c
>> +++ b/drivers/pci/of.c
>> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>>   		return 0;
>>   	}
>>   
>> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
>> -		np, map_name, rid, target && *target ? *target : NULL);
>> -	return -EFAULT;
>> +	/* Bypasses translation */
>> +	if (id_out)
>> +		*id_out = rid;
>> +	return 0;
>>   }
>>   
>>   #if IS_ENABLED(CONFIG_OF_IRQ)
>> -- 
>> 2.19.1
>>

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-12 19:41       ` Bjorn Helgaas
@ 2018-10-15 19:45           ` Jean-philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-philippe Brucker @ 2018-10-15 19:45 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Mark Rutland, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	devicetree-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	linux-pci-u79uwXL29TY76Z2rM5mHXA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	Will Deacon,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A,
	jean-philippe.brucker-5wv7dgnIgG8, Marc Zyngier, Robin Murphy,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg

[Replying with my personal address because we're having SMTP issues]

On 12/10/2018 20:41, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>> Using the iommu-map binding, endpoints in a given PCI domain can be
>> managed by different IOMMUs. Some virtual machines may allow a subset of
>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>> PCI root complex has an iommu-map property, the driver requires all
>> endpoints to be described by the property. Allow the iommu-map property to
>> have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?

As Robin said, a device that is absent from iommu-map will be ignored by
the IOMMU layer, so it depends on the specific IOMMU implementation and
driver. By default the SMMU and virtio-iommu drivers disable bypass, but
I'm not sure about the others. I'll try to find a more accurate title for
this patch.

>  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?

Good idea, I'll replace the pr_err() below with a pr_info(), instead of
simply removing it.

>> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,
> 
> s/of_pci_map_rid/of_pci_map_rid()/
> 
>> which is invalid since MSIs always reach an MSI controller. Thankfully
>> Linux will error out later, when attempting to find an MSI domain for the
>> device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.

Right, I'll clarify this. It's pci_msi_setup_msi_irqs() that returns an
error if the device is missing from msi-map, so the device driver won't be
able to request MSIs.

Thanks,
Jean

>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
>> ---
>>  drivers/pci/of.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
>> index 1836b8ddf292..2f5015bdb256 100644
>> --- a/drivers/pci/of.c
>> +++ b/drivers/pci/of.c
>> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>>  		return 0;
>>  	}
>>  
>> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
>> -		np, map_name, rid, target && *target ? *target : NULL);
>> -	return -EFAULT;
>> +	/* Bypasses translation */
>> +	if (id_out)
>> +		*id_out = rid;
>> +	return 0;
>>  }
>>  
>>  #if IS_ENABLED(CONFIG_OF_IRQ)
>> -- 
>> 2.19.1
>>
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
@ 2018-10-15 19:45           ` Jean-philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-philippe Brucker @ 2018-10-15 19:45 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, mst, jasowang, robh+dt, Mark Rutland,
	eric.auger, tnowicki, kevin.tian, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi, jean-philippe.brucker

[Replying with my personal address because we're having SMTP issues]

On 12/10/2018 20:41, Bjorn Helgaas wrote:
> s/iommu/IOMMU/ in subject
> 
> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>> Using the iommu-map binding, endpoints in a given PCI domain can be
>> managed by different IOMMUs. Some virtual machines may allow a subset of
>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> 
> s/case/cases/
> 
>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>> PCI root complex has an iommu-map property, the driver requires all
>> endpoints to be described by the property. Allow the iommu-map property to
>> have gaps.
> 
> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> iommu-map could inadvertently allow devices to bypass it?

As Robin said, a device that is absent from iommu-map will be ignored by
the IOMMU layer, so it depends on the specific IOMMU implementation and
driver. By default the SMMU and virtio-iommu drivers disable bypass, but
I'm not sure about the others. I'll try to find a more accurate title for
this patch.

>  Should we
> indicate something in dmesg (and/or sysfs) about devices that bypass
> it?

Good idea, I'll replace the pr_err() below with a pr_info(), instead of
simply removing it.

>> Relaxing of_pci_map_rid also allows the msi-map property to have gaps,
> 
> s/of_pci_map_rid/of_pci_map_rid()/
> 
>> which is invalid since MSIs always reach an MSI controller. Thankfully
>> Linux will error out later, when attempting to find an MSI domain for the
>> device.
> 
> Not clear to me what "error out" means here.  In a userspace program,
> I would infer that the program exits with an error message, but I
> doubt you mean that Linux exits.

Right, I'll clarify this. It's pci_msi_setup_msi_irqs() that returns an
error if the device is missing from msi-map, so the device driver won't be
able to request MSIs.

Thanks,
Jean

>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  drivers/pci/of.c | 7 ++++---
>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/pci/of.c b/drivers/pci/of.c
>> index 1836b8ddf292..2f5015bdb256 100644
>> --- a/drivers/pci/of.c
>> +++ b/drivers/pci/of.c
>> @@ -451,9 +451,10 @@ int of_pci_map_rid(struct device_node *np, u32 rid,
>>  		return 0;
>>  	}
>>  
>> -	pr_err("%pOF: Invalid %s translation - no match for rid 0x%x on %pOF\n",
>> -		np, map_name, rid, target && *target ? *target : NULL);
>> -	return -EFAULT;
>> +	/* Bypasses translation */
>> +	if (id_out)
>> +		*id_out = rid;
>> +	return 0;
>>  }
>>  
>>  #if IS_ENABLED(CONFIG_OF_IRQ)
>> -- 
>> 2.19.1
>>
> 


^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-15 10:52           ` Michael S. Tsirkin
@ 2018-10-15 19:46               ` Jean-philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-philippe Brucker @ 2018-10-15 19:46 UTC (permalink / raw)
  To: Michael S. Tsirkin, Bjorn Helgaas
  Cc: mark.rutland-5wv7dgnIgG8, devicetree-u79uwXL29TY76Z2rM5mHXA,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	peter.maydell-QSEj5FYQhm4dnm+yROfE0A, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	jean-philippe.brucker-5wv7dgnIgG8,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, robin.murphy-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg

[Replying with my personal address because we're having SMTP issues]

On 15/10/2018 11:52, Michael S. Tsirkin wrote:
> On Fri, Oct 12, 2018 at 02:41:59PM -0500, Bjorn Helgaas wrote:
>> s/iommu/IOMMU/ in subject
>>
>> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>>> Using the iommu-map binding, endpoints in a given PCI domain can be
>>> managed by different IOMMUs. Some virtual machines may allow a subset of
>>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
>>
>> s/case/cases/
>>
>>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>>> PCI root complex has an iommu-map property, the driver requires all
>>> endpoints to be described by the property. Allow the iommu-map property to
>>> have gaps.
>>
>> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
>> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
>> iommu-map could inadvertently allow devices to bypass it?
> 
> 
> Thinking about this comment, I would like to ask: can't the
> virtio device indicate the ranges in a portable way?
> This would minimize the dependency on dt bindings and ACPI,
> enabling support for systems that have neither but do
> have virtio e.g. through pci.

I thought about adding a PROBE request for this in virtio-iommu, but it
wouldn't be usable by a Linux guest because of a bootstrapping problem.

Early on, Linux needs a description of device dependencies, to determine
in which order to probe them. If the device dependency was described by
virtio-iommu itself, the guest could for example initialize a NIC,
allocate buffers and start DMA on the physical address space (which aborts
if the IOMMU implementation disallows DMA by default), only to find out
once the virtio-iommu module is loaded that it needs to cancel all DMA and
reconfigure the NIC. With a static description such as iommu-map in DT or
ACPI remapping tables, the guest can defer probing of the NIC until the
IOMMU is initialized.
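
A minimal sketch of that ordering problem, modelling EPROBE_DEFER-style
probing with made-up types (not the real driver core): the static DT/ACPI
link is what lets the NIC be deferred until the IOMMU is ready.

#include <stdbool.h>
#include <stdio.h>

struct device {
	const char *name;
	int iommu_idx;	/* index of the IOMMU it sits behind, -1 if none */
	bool probed;
};

/*
 * Keep deferring endpoints whose IOMMU (known from the static description)
 * has not probed yet. Without that static link, net0 would probe first and
 * start DMA before the IOMMU driver is ready.
 */
static void probe_all(struct device *devs, int n)
{
	bool progress = true;

	while (progress) {
		int i;

		progress = false;
		for (i = 0; i < n; i++) {
			if (devs[i].probed)
				continue;
			if (devs[i].iommu_idx >= 0 &&
			    !devs[devs[i].iommu_idx].probed) {
				printf("%s: deferred, IOMMU not ready\n",
				       devs[i].name);
				continue;
			}
			printf("%s: probed\n", devs[i].name);
			devs[i].probed = true;
			progress = true;
		}
	}
}

int main(void)
{
	struct device devs[] = {
		{ "net0",    1, false },	/* behind the virtio-iommu */
		{ "viommu0", -1, false },	/* the IOMMU, not translated */
	};

	probe_all(devs, 2);
	return 0;
}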

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
@ 2018-10-15 19:46               ` Jean-philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-philippe Brucker @ 2018-10-15 19:46 UTC (permalink / raw)
  To: Michael S. Tsirkin, Bjorn Helgaas
  Cc: mark.rutland, devicetree, kevin.tian, tnowicki, peter.maydell,
	linux-pci, will.deacon, robin.murphy, virtualization, iommu,
	robh+dt, marc.zyngier, jasowang, kvmarm, jean-philippe.brucker

[Replying with my personal address because we're having SMTP issues]

On 15/10/2018 11:52, Michael S. Tsirkin wrote:
> On Fri, Oct 12, 2018 at 02:41:59PM -0500, Bjorn Helgaas wrote:
>> s/iommu/IOMMU/ in subject
>>
>> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>>> Using the iommu-map binding, endpoints in a given PCI domain can be
>>> managed by different IOMMUs. Some virtual machines may allow a subset of
>>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
>>
>> s/case/cases/
>>
>>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>>> PCI root complex has an iommu-map property, the driver requires all
>>> endpoints to be described by the property. Allow the iommu-map property to
>>> have gaps.
>>
>> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
>> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
>> iommu-map could inadvertently allow devices to bypass it?
> 
> 
> Thinking about this comment, I would like to ask: can't the
> virtio device indicate the ranges in a portable way?
> This would minimize the dependency on dt bindings and ACPI,
> enabling support for systems that have neither but do
> have virtio e.g. through pci.

I thought about adding a PROBE request for this in virtio-iommu, but it
wouldn't be usable by a Linux guest because of a bootstrapping problem.

Early on, Linux needs a description of device dependencies, to determine
in which order to probe them. If the device dependency was described by
virtio-iommu itself, the guest could for example initialize a NIC,
allocate buffers and start DMA on the physical address space (which aborts
if the IOMMU implementation disallows DMA by default), only to find out
once the virtio-iommu module is loaded that it needs to cancel all DMA and
reconfigure the NIC. With a static description such as iommu-map in DT or
ACPI remapping tables, the guest can defer probing of the NIC until the
IOMMU is initialized.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-12 14:59 ` Jean-Philippe Brucker
@ 2018-10-16  9:25     ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-10-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	devicetree-u79uwXL29TY76Z2rM5mHXA
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, robin.murphy-5wv7dgnIgG8

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.8 [1].
> Changes since v2 [2]:
> 
> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>   would like to phase out the MMIO transport. This produces a complex
>   topology where the programming interface of the IOMMU could appear
>   lower than the endpoints that it translates. It's not unheard of (e.g.
>   AMD IOMMU), and the guest easily copes with this.
>   
>   The "Firmware description" section of the specification has been
>   updated with all combinations of PCI, MMIO and DT, ACPI.

I have a question wrt the FW specification. The IOMMU consumes one slot
in the PCI domain, and one needs to leave a RID hole in the iommu-map. It
is not obvious to me that this RID is always predictable, given the PCIe
enumeration mechanism. Generally we have a coarse-grained mapping of RIDs
onto IOMMU phandles/StreamIDs. Here, if I understand correctly, we need
to precisely identify the RID granted to the IOMMU. On QEMU this may
depend on the instantiation order of the virtio-pci device, right? So
building this info does not look trivial. Isn't it possible to do this
exclusion at the kernel level instead?

Thanks

Eric
> 
> * Fix structures layout, they don't need the "packed" attribute anymore.
> 
> * While we're at it, add domain parameter to DETACH request, and leave
>   some padding. This way the next version, that adds PASID support,
>   won't have to introduce a "DETACH2" request to stay backward
>   compatible.
> 
> * Require virtio device 1.0+. Remove legacy transport notes from the
>   specification.
> 
> * Request timeout is now only enabled with DEBUG.
> 
> * The patch for VFIO Kconfig (previously patch 5/5) is in next.
> 
> You can find Linux driver and kvmtool device on branches
> virtio-iommu/v0.8 [3] (currently based on 4.19-rc7 but rebasing onto
> next only produced a trivial conflict). Branch virtio-iommu/devel
> contains a few patches that I'd like to send once the base is upstream:
> 
> * virtio-iommu as a module. It got *much* nicer after Rob's probe
>   deferral rework, but I still have a bug to fix when re-loading the
>   virtio-iommu module.
> 
> * ACPI support requires a minor IORT spec update (reservation of node
>   ID). I think it should be easier to obtain once the device and drivers
>   are upstream.
> 
> [1] Virtio-iommu specification v0.8, diff from v0.7, and sources
>     git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.8
>     http://jpbrucker.net/virtio-iommu/spec/v0.8/virtio-iommu-v0.8.pdf
>     http://jpbrucker.net/virtio-iommu/spec/diffs/virtio-iommu-pdf-diff-v0.7-v0.8.pdf
> 
> [2] [PATCH v2 0/5] Add virtio-iommu driver
>     https://www.spinics.net/lists/kvm/msg170655.html
> 
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.8
>     git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.8
> 
> Jean-Philippe Brucker (7):
>   dt-bindings: virtio-mmio: Add IOMMU description
>   dt-bindings: virtio: Add virtio-pci-iommu node
>   PCI: OF: allow endpoints to bypass the iommu
>   PCI: OF: Initialize dev->fwnode appropriately
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
> 
>  .../devicetree/bindings/virtio/iommu.txt      |   66 +
>  .../devicetree/bindings/virtio/mmio.txt       |   30 +
>  MAINTAINERS                                   |    7 +
>  drivers/iommu/Kconfig                         |   11 +
>  drivers/iommu/Makefile                        |    1 +
>  drivers/iommu/virtio-iommu.c                  | 1171 +++++++++++++++++
>  drivers/pci/of.c                              |   14 +-
>  include/uapi/linux/virtio_ids.h               |    1 +
>  include/uapi/linux/virtio_iommu.h             |  159 +++
>  9 files changed, 1457 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
@ 2018-10-16  9:25     ` Auger Eric
  0 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-10-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	mark.rutland, tnowicki, kevin.tian, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.8 [1].
> Changes since v2 [2]:
> 
> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>   would like to phase out the MMIO transport. This produces a complex
>   topology where the programming interface of the IOMMU could appear
>   lower than the endpoints that it translates. It's not unheard of (e.g.
>   AMD IOMMU), and the guest easily copes with this.
>   
>   The "Firmware description" section of the specification has been
>   updated with all combinations of PCI, MMIO and DT, ACPI.

I have a question wrt the FW specification. The IOMMU consumes one slot
in the PCI domain, and one needs to leave a RID hole in the iommu-map. It
is not obvious to me that this RID is always predictable, given the PCIe
enumeration mechanism. Generally we have a coarse-grained mapping of RIDs
onto IOMMU phandles/StreamIDs. Here, if I understand correctly, we need
to precisely identify the RID granted to the IOMMU. On QEMU this may
depend on the instantiation order of the virtio-pci device, right? So
building this info does not look trivial. Isn't it possible to do this
exclusion at the kernel level instead?

Thanks

Eric
> 
> * Fix structures layout, they don't need the "packed" attribute anymore.
> 
> * While we're at it, add domain parameter to DETACH request, and leave
>   some padding. This way the next version, that adds PASID support,
>   won't have to introduce a "DETACH2" request to stay backward
>   compatible.
> 
> * Require virtio device 1.0+. Remove legacy transport notes from the
>   specification.
> 
> * Request timeout is now only enabled with DEBUG.
> 
> * The patch for VFIO Kconfig (previously patch 5/5) is in next.
> 
> You can find Linux driver and kvmtool device on branches
> virtio-iommu/v0.8 [3] (currently based on 4.19-rc7 but rebasing onto
> next only produced a trivial conflict). Branch virtio-iommu/devel
> contains a few patches that I'd like to send once the base is upstream:
> 
> * virtio-iommu as a module. It got *much* nicer after Rob's probe
>   deferral rework, but I still have a bug to fix when re-loading the
>   virtio-iommu module.
> 
> * ACPI support requires a minor IORT spec update (reservation of node
>   ID). I think it should be easier to obtain once the device and drivers
>   are upstream.
> 
> [1] Virtio-iommu specification v0.8, diff from v0.7, and sources
>     git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.8
>     http://jpbrucker.net/virtio-iommu/spec/v0.8/virtio-iommu-v0.8.pdf
>     http://jpbrucker.net/virtio-iommu/spec/diffs/virtio-iommu-pdf-diff-v0.7-v0.8.pdf
> 
> [2] [PATCH v2 0/5] Add virtio-iommu driver
>     https://www.spinics.net/lists/kvm/msg170655.html
> 
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.8
>     git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.8
> 
> Jean-Philippe Brucker (7):
>   dt-bindings: virtio-mmio: Add IOMMU description
>   dt-bindings: virtio: Add virtio-pci-iommu node
>   PCI: OF: allow endpoints to bypass the iommu
>   PCI: OF: Initialize dev->fwnode appropriately
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
> 
>  .../devicetree/bindings/virtio/iommu.txt      |   66 +
>  .../devicetree/bindings/virtio/mmio.txt       |   30 +
>  MAINTAINERS                                   |    7 +
>  drivers/iommu/Kconfig                         |   11 +
>  drivers/iommu/Makefile                        |    1 +
>  drivers/iommu/virtio-iommu.c                  | 1171 +++++++++++++++++
>  drivers/pci/of.c                              |   14 +-
>  include/uapi/linux/virtio_ids.h               |    1 +
>  include/uapi/linux/virtio_iommu.h             |  159 +++
>  9 files changed, 1457 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-12 14:59 ` Jean-Philippe Brucker
                   ` (16 preceding siblings ...)
  (?)
@ 2018-10-16  9:25 ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-10-16  9:25 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, linux-pci, will.deacon, kvmarm, robh+dt,
	robin.murphy, joro

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.8 [1].
> Changes since v2 [2]:
> 
> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>   would like to phase out the MMIO transport. This produces a complex
>   topology where the programming interface of the IOMMU could appear
>   lower than the endpoints that it translates. It's not unheard of (e.g.
>   AMD IOMMU), and the guest easily copes with this.
>   
>   The "Firmware description" section of the specification has been
>   updated with all combinations of PCI, MMIO and DT, ACPI.

I have a question wrt the FW specification. The IOMMU consumes one slot
in the PCI domain, and one needs to leave a RID hole in the iommu-map. It
is not obvious to me that this RID is always predictable, given the PCIe
enumeration mechanism. Generally we have a coarse-grained mapping of RIDs
onto IOMMU phandles/StreamIDs. Here, if I understand correctly, we need
to precisely identify the RID granted to the IOMMU. On QEMU this may
depend on the instantiation order of the virtio-pci device, right? So
building this info does not look trivial. Isn't it possible to do this
exclusion at the kernel level instead?

Thanks

Eric
> 
> * Fix structures layout, they don't need the "packed" attribute anymore.
> 
> * While we're at it, add domain parameter to DETACH request, and leave
>   some padding. This way the next version, that adds PASID support,
>   won't have to introduce a "DETACH2" request to stay backward
>   compatible.
> 
> * Require virtio device 1.0+. Remove legacy transport notes from the
>   specification.
> 
> * Request timeout is now only enabled with DEBUG.
> 
> * The patch for VFIO Kconfig (previously patch 5/5) is in next.
> 
> You can find Linux driver and kvmtool device on branches
> virtio-iommu/v0.8 [3] (currently based on 4.19-rc7 but rebasing onto
> next only produced a trivial conflict). Branch virtio-iommu/devel
> contains a few patches that I'd like to send once the base is upstream:
> 
> * virtio-iommu as a module. It got *much* nicer after Rob's probe
>   deferral rework, but I still have a bug to fix when re-loading the
>   virtio-iommu module.
> 
> * ACPI support requires a minor IORT spec update (reservation of node
>   ID). I think it should be easier to obtain once the device and drivers
>   are upstream.
> 
> [1] Virtio-iommu specification v0.8, diff from v0.7, and sources
>     git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.8
>     http://jpbrucker.net/virtio-iommu/spec/v0.8/virtio-iommu-v0.8.pdf
>     http://jpbrucker.net/virtio-iommu/spec/diffs/virtio-iommu-pdf-diff-v0.7-v0.8.pdf
> 
> [2] [PATCH v2 0/5] Add virtio-iommu driver
>     https://www.spinics.net/lists/kvm/msg170655.html
> 
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.8
>     git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.8
> 
> Jean-Philippe Brucker (7):
>   dt-bindings: virtio-mmio: Add IOMMU description
>   dt-bindings: virtio: Add virtio-pci-iommu node
>   PCI: OF: allow endpoints to bypass the iommu
>   PCI: OF: Initialize dev->fwnode appropriately
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
> 
>  .../devicetree/bindings/virtio/iommu.txt      |   66 +
>  .../devicetree/bindings/virtio/mmio.txt       |   30 +
>  MAINTAINERS                                   |    7 +
>  drivers/iommu/Kconfig                         |   11 +
>  drivers/iommu/Makefile                        |    1 +
>  drivers/iommu/virtio-iommu.c                  | 1171 +++++++++++++++++
>  drivers/pci/of.c                              |   14 +-
>  include/uapi/linux/virtio_ids.h               |    1 +
>  include/uapi/linux/virtio_iommu.h             |  159 +++
>  9 files changed, 1457 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-16  9:25     ` Auger Eric
@ 2018-10-16 18:44       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-16 18:44 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: kevin.tian, Lorenzo Pieralisi, tnowicki, mst, Marc Zyngier,
	linux-pci, jasowang, Will Deacon, kvmarm, robh+dt, Robin Murphy,
	joro

On 16/10/2018 10:25, Auger Eric wrote:
> Hi Jean,
> 
> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>> Implement the virtio-iommu driver, following specification v0.8 [1].
>> Changes since v2 [2]:
>> 
>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>   would like to phase out the MMIO transport. This produces a complex
>>   topology where the programming interface of the IOMMU could appear
>>   lower than the endpoints that it translates. It's not unheard of (e.g.
>>   AMD IOMMU), and the guest easily copes with this.
>>   
>>   The "Firmware description" section of the specification has been
>>   updated with all combinations of PCI, MMIO and DT, ACPI.
> 
> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
> is not obvious to me that this RID always is predictable given the pcie
> enumeration mechanism. Generally we have a coarse grain mapping of RID
> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
> to precisely identify the RID granted to the iommu. On QEMU this may
> depend on the instantiation order of the virtio-pci device right?

Yes, although it should all happen before you boot the guest, since
there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
and use it for virtio-iommu later? Or generate the iommu-map at the same
time as generating the child node of the PCI RC?
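
As a rough illustration of the second option, a small hypothetical helper
(made-up names, not QEMU code) that emits iommu-map entries with a hole
around the slot reserved for the IOMMU:

#include <stdio.h>

/* requester ID: bus[15:8] device[7:3] function[2:0] */
static unsigned int pci_rid(unsigned int bus, unsigned int slot,
			    unsigned int fn)
{
	return (bus << 8) | (slot << 3) | fn;
}

/*
 * Emit iommu-map entries covering RIDs 0x0-0xffff while leaving a hole for
 * the eight functions of the slot that holds the virtio-iommu.
 * Entry format: <rid-base &phandle iommu-base length>
 */
static void emit_iommu_map(unsigned int iommu_bus, unsigned int iommu_slot)
{
	unsigned int hole_start = pci_rid(iommu_bus, iommu_slot, 0);
	unsigned int hole_end = hole_start + 8;

	if (hole_start)
		printf("<0x0 &viommu 0x0 0x%x>,\n", hole_start);
	if (hole_end < 0x10000)
		printf("<0x%x &viommu 0x%x 0x%x>;\n", hole_end, hole_end,
		       0x10000 - hole_end);
}

int main(void)
{
	emit_iommu_map(0, 1);	/* virtio-iommu at 00:01.0 */
	return 0;
}

For an IOMMU at 00:01.0 this prints <0x0 &viommu 0x0 0x8> and
<0x10 &viommu 0x10 0xfff0>, leaving RIDs 0x8-0xf (the IOMMU's own slot)
out of the map.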

> So
> this does not look trivial to build this info. Isn't it possible to do
> this exclusion at kernel level instead?

So in theory VIRTIO_F_IOMMU_PLATFORM already does that:

VIRTIO_F_IOMMU_PLATFORM(33)
    This feature indicates that the device is behind an IOMMU that
    translates bus addresses from the device into physical addresses in
    memory. If this feature bit is set to 0, then the device emits
    physical addresses which are not translated further, even though an
    IOMMU may be present.

For better or for worse, the guest has to implement it. If this feature
bit is unset for virtio-iommu, it does DMA on the physical address
space, regardless of what the static topology description says.
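
A rough guest-side sketch of that contract, with a made-up device handle
(only the bit number 33 comes from the spec text quoted above):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_F_IOMMU_PLATFORM	33	/* feature bit number from the spec */

/* hypothetical device handle, just enough for the example */
struct virtio_dev {
	uint64_t features;	/* negotiated feature bits */
};

/*
 * When the bit was negotiated, addresses handed to the device must go
 * through the IOMMU (DMA API); otherwise the device expects untranslated
 * physical addresses, whatever the firmware tables describe.
 */
static bool needs_iommu_mapping(const struct virtio_dev *vdev)
{
	return vdev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM);
}

int main(void)
{
	struct virtio_dev net = { .features = 1ULL << VIRTIO_F_IOMMU_PLATFORM };
	struct virtio_dev blk = { .features = 0 };

	printf("net0 translated: %d\n", needs_iommu_mapping(&net));	/* 1 */
	printf("blk0 translated: %d\n", needs_iommu_mapping(&blk));	/* 0 */
	return 0;
}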

In practice it doesn't quite work. If your iommu-map describes the IOMMU
as translating itself, Linux' OF code will wait for the IOMMU to be
probed before probing the IOMMU. Working around this with hacks is
possible, but I don't want to introduce more questionable code to OF and
device tree bindings if there is any other way.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
@ 2018-10-16 18:44       ` Jean-Philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-16 18:44 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	Mark Rutland, tnowicki, kevin.tian, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi

On 16/10/2018 10:25, Auger Eric wrote:
> Hi Jean,
> 
> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>> Implement the virtio-iommu driver, following specification v0.8 [1].
>> Changes since v2 [2]:
>> 
>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>   would like to phase out the MMIO transport. This produces a complex
>>   topology where the programming interface of the IOMMU could appear
>>   lower than the endpoints that it translates. It's not unheard of (e.g.
>>   AMD IOMMU), and the guest easily copes with this.
>>   
>>   The "Firmware description" section of the specification has been
>>   updated with all combinations of PCI, MMIO and DT, ACPI.
> 
> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
> is not obvious to me that this RID always is predictable given the pcie
> enumeration mechanism. Generally we have a coarse grain mapping of RID
> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
> to precisely identify the RID granted to the iommu. On QEMU this may
> depend on the instantiation order of the virtio-pci device right?

Yes, although it should all happen before you boot the guest, since
there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
and use it for virtio-iommu later? Or generate the iommu-map at the same
time as generating the child node of the PCI RC?

> So
> this does not look trivial to build this info. Isn't it possible to do
> this exclusion at kernel level instead?

So in theory VIRTIO_F_IOMMU_PLATFORM already does that:

VIRTIO_F_IOMMU_PLATFORM(33)
    This feature indicates that the device is behind an IOMMU that
    translates bus addresses from the device into physical addresses in
    memory. If this feature bit is set to 0, then the device emits
    physical addresses which are not translated further, even though an
    IOMMU may be present.

For better or for worse, the guest has to implement it. If this feature
bit is unset for virtio-iommu, it does DMA on the physical address
space, regardless of what the static topology description says.

In practice it doesn't quite work. If your iommu-map describes the IOMMU
as translating itself, Linux' OF code will wait for the IOMMU to be
probed before probing the IOMMU. Working around this with hacks is
possible, but I don't want to introduce more questionable code to OF and
device tree bindings if there is any other way.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-16  9:25     ` Auger Eric
  (?)
  (?)
@ 2018-10-16 18:44     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-16 18:44 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: Mark Rutland, peter.maydell, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, linux-pci, Will Deacon, kvmarm, robh+dt,
	Robin Murphy, joro

On 16/10/2018 10:25, Auger Eric wrote:
> Hi Jean,
> 
> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>> Implement the virtio-iommu driver, following specification v0.8 [1].
>> Changes since v2 [2]:
>> 
>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>   would like to phase out the MMIO transport. This produces a complex
>>   topology where the programming interface of the IOMMU could appear
>>   lower than the endpoints that it translates. It's not unheard of (e.g.
>>   AMD IOMMU), and the guest easily copes with this.
>>   
>>   The "Firmware description" section of the specification has been
>>   updated with all combinations of PCI, MMIO and DT, ACPI.
> 
> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
> is not obvious to me that this RID always is predictable given the pcie
> enumeration mechanism. Generally we have a coarse grain mapping of RID
> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
> to precisely identify the RID granted to the iommu. On QEMU this may
> depend on the instantiation order of the virtio-pci device right?

Yes, although it should all happen before you boot the guest, since
there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
and use it for virtio-iommu later? Or generate the iommu-map at the same
time as generating the child node of the PCI RC?

> So
> this does not look trivial to build this info. Isn't it possible to do
> this exclusion at kernel level instead?

So in theory VIRTIO_F_IOMMU_PLATFORM already does that:

VIRTIO_F_IOMMU_PLATFORM(33)
    This feature indicates that the device is behind an IOMMU that
    translates bus addresses from the device into physical addresses in
    memory. If this feature bit is set to 0, then the device emits
    physical addresses which are not translated further, even though an
    IOMMU may be present.

For better or for worse, the guest has to implement it. If this feature
bit is unset for virtio-iommu, it does DMA on the physical address
space, regardless of what the static topology description says.

In practice it doesn't quite work. If your iommu-map describes the IOMMU
as translating itself, Linux' OF code will wait for the IOMMU to be
probed before probing the IOMMU. Working around this with hacks is
possible, but I don't want to introduce more questionable code to OF and
device tree bindings if there is any other way.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-16 18:44       ` Jean-Philippe Brucker
@ 2018-10-16 20:31         ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-10-16 20:31 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: Mark Rutland, peter.maydell, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, linux-pci, Will Deacon, kvmarm, robh+dt,
	Robin Murphy, joro

Hi Jean,

On 10/16/18 8:44 PM, Jean-Philippe Brucker wrote:
> On 16/10/2018 10:25, Auger Eric wrote:
>> Hi Jean,
>>
>> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>>> Implement the virtio-iommu driver, following specification v0.8 [1].
>>> Changes since v2 [2]:
>>>
>>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>>    would like to phase out the MMIO transport. This produces a complex
>>>    topology where the programming interface of the IOMMU could appear
>>>    lower than the endpoints that it translates. It's not unheard of (e.g.
>>>    AMD IOMMU), and the guest easily copes with this.
>>>    
>>>    The "Firmware description" section of the specification has been
>>>    updated with all combinations of PCI, MMIO and DT, ACPI.
>>
>> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
>> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
>> is not obvious to me that this RID always is predictable given the pcie
>> enumeration mechanism. Generally we have a coarse grain mapping of RID
>> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
>> to precisely identify the RID granted to the iommu. On QEMU this may
>> depend on the instantiation order of the virtio-pci device right?
> 
> Yes, although it should all happen before you boot the guest, since
> there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
> and use it for virtio-iommu later? Or generate the iommu-map at the same
> time as generating the child node of the PCI RC?

Even when cold-plugging the PCIe devices through the QEMU CLI, this
depends on the order of the PCIe devices on the command line, I guess. I
need to experiment further.
> 
>> So
>> this does not look trivial to build this info. Isn't it possible to do
>> this exclusion at kernel level instead?
> 
> So in theory VIRTIO_F_IOMMU_PLATFORM already does that:
> 
> VIRTIO_F_IOMMU_PLATFORM(33)
>     This feature indicates that the device is behind an IOMMU that
>     translates bus addresses from the device into physical addresses in
>     memory. If this feature bit is set to 0, then the device emits
>     physical addresses which are not translated further, even though an
>     IOMMU may be present.

This tells the driver to use the DMA API, right? Effectively this
explicitly says whether the device is supposed to sit behind an IOMMU.
> 
> For better or for worse, the guest has to implement it. If this feature
> bit is unset for virtio-iommu, it does DMA on the physical address
> space, regardless of what the static topology description says.
> 
> In practice it doesn't quite work. If your iommu-map describes the IOMMU
> as translating itself, Linux' OF code will wait for the IOMMU to be
> probed before probing the IOMMU. Working around this with hacks is
> possible, but I don't want to introduce more questionable code to OF and
> device tree bindings if there is any other way.
Hum ok. I cannot really comment on this.

I just wanted to raise this concern about RID identification.

Thanks

Eric
> 
> Thanks,
> Jean
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
@ 2018-10-16 20:31         ` Auger Eric
  0 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-10-16 20:31 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	Mark Rutland, tnowicki, kevin.tian, Marc Zyngier, Robin Murphy,
	Will Deacon, Lorenzo Pieralisi

Hi Jean,

On 10/16/18 8:44 PM, Jean-Philippe Brucker wrote:
> On 16/10/2018 10:25, Auger Eric wrote:
>> Hi Jean,
>>
>> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>>> Implement the virtio-iommu driver, following specification v0.8 [1].
>>> Changes since v2 [2]:
>>>
>>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>>    would like to phase out the MMIO transport. This produces a complex
>>>    topology where the programming interface of the IOMMU could appear
>>>    lower than the endpoints that it translates. It's not unheard of (e.g.
>>>    AMD IOMMU), and the guest easily copes with this.
>>>    
>>>    The "Firmware description" section of the specification has been
>>>    updated with all combinations of PCI, MMIO and DT, ACPI.
>>
>> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
>> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
>> is not obvious to me that this RID always is predictable given the pcie
>> enumeration mechanism. Generally we have a coarse grain mapping of RID
>> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
>> to precisely identify the RID granted to the iommu. On QEMU this may
>> depend on the instantiation order of the virtio-pci device right?
> 
> Yes, although it should all happen before you boot the guest, since
> there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
> and use it for virtio-iommu later? Or generate the iommu-map at the same
> time as generating the child node of the PCI RC?

Even when cold-plugging the PCIe devices through the QEMU CLI, this
depends on the order of the PCIe devices on the command line, I guess. I
need to experiment further.
> 
>> So
>> this does not look trivial to build this info. Isn't it possible to do
>> this exclusion at kernel level instead?
> 
> So in theory VIRTIO_F_IOMMU_PLATFORM already does that:
> 
> VIRTIO_F_IOMMU_PLATFORM(33)
>     This feature indicates that the device is behind an IOMMU that
>     translates bus addresses from the device into physical addresses in
>     memory. If this feature bit is set to 0, then the device emits
>     physical addresses which are not translated further, even though an
>     IOMMU may be present.

This tells the driver to use the DMA API, right? Effectively this
explicitly says whether the device is supposed to sit behind an IOMMU.
> 
> For better or for worse, the guest has to implement it. If this feature
> bit is unset for virtio-iommu, it does DMA on the physical address
> space, regardless of what the static topology description says.
> 
> In practice it doesn't quite work. If your iommu-map describes the IOMMU
> as translating itself, Linux' OF code will wait for the IOMMU to be
> probed before probing the IOMMU. Working around this with hacks is
> possible, but I don't want to introduce more questionable code to OF and
> device tree bindings if there is any other way.
Hum ok. I cannot really comment on this.

I just wanted to raise this concern about RID identification.

Thanks

Eric
> 
> Thanks,
> Jean
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-16 20:31         ` Auger Eric
@ 2018-10-17 11:54           ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-17 11:54 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: Mark Rutland, peter.maydell, Lorenzo Pieralisi, tnowicki, mst,
	Marc Zyngier, linux-pci, Will Deacon, kvmarm, robh+dt,
	Robin Murphy, joro

On 16/10/2018 21:31, Auger Eric wrote:
> Hi Jean,
> 
> On 10/16/18 8:44 PM, Jean-Philippe Brucker wrote:
>> On 16/10/2018 10:25, Auger Eric wrote:
>>> Hi Jean,
>>>
>>> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>>>> Implement the virtio-iommu driver, following specification v0.8 [1].
>>>> Changes since v2 [2]:
>>>>
>>>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
>>>>    would like to phase out the MMIO transport. This produces a complex
>>>>    topology where the programming interface of the IOMMU could appear
>>>>    lower than the endpoints that it translates. It's not unheard of (e.g.
>>>>    AMD IOMMU), and the guest easily copes with this.
>>>>    
>>>>    The "Firmware description" section of the specification has been
>>>>    updated with all combinations of PCI, MMIO and DT, ACPI.
>>>
>>> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
>>> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
>>> is not obvious to me that this RID always is predictable given the pcie
>>> enumeration mechanism. Generally we have a coarse grain mapping of RID
>>> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
>>> to precisely identify the RID granted to the iommu. On QEMU this may
>>> depend on the instantiation order of the virtio-pci device right?
>> 
>> Yes, although it should all happen before you boot the guest, since
>> there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
>> and use it for virtio-iommu later? Or generate the iommu-map at the same
>> time as generating the child node of the PCI RC?
> 
> Even when cold-plugging the PCIe devices through qemu CLI, this depends
> on the order of the pcie devices in the list I guess. I need to further
> experiment.

Please let me know how it goes. I guess the problem will be the same for
building IORT tables? You're also going to need a hole in the ID
mappings of the PCI root complex node.
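
To illustrate what I mean, here is a rough DT sketch (the RID, addresses
and node names are made up, just for illustration): if the IOMMU
programming interface ends up at devfn 00:01.0, i.e. RID 0x8, the root
complex node leaves that RID out of iommu-map and describes the endpoint
as a child node:

	pcie@40000000 {
		compatible = "pci-host-ecam-generic";
		/* ... */

		/* RID 0x8 is the IOMMU itself, so it is not translated */
		iommu-map = <0x0 &viommu 0x0 0x8>,
			    <0x9 &viommu 0x9 0xfff7>;

		viommu: iommu@1,0 {
			compatible = "virtio,pci-iommu";
			reg = <0x800 0 0 0 0>;
			#iommu-cells = <1>;
		};
	};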

>>> So
>>> this does not look trivial to build this info. Isn't it possible to do
>>> this exclusion at kernel level instead?
>> 
>> So in theory VIRTIO_F_IOMMU_PLATFORM already does that:
>> 
>> VIRTIO_F_IOMMU_PLATFORM(33)
>>     This feature indicates that the device is behind an IOMMU that
>>     translates bus addresses from the device into physical addresses in
>>     memory. If this feature bit is set to 0, then the device emits
>>     physical addresses which are not translated further, even though an
>>     IOMMU may be present.
> 
> This tells the driver to use the dma api, right? 

That's how Linux implements the bit: it installs custom DMA ops when the
bit is absent. But it doesn't work for everyone and has caused a lot of
debate (https://patchwork.ozlabs.org/cover/946708/).
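
Roughly, the virtio transport does something along these lines (a
simplified sketch, not the exact kernel code):

	static bool use_dma_api(struct virtio_device *vdev)
	{
		/*
		 * Without VIRTIO_F_IOMMU_PLATFORM the device emits physical
		 * addresses, so the transport bypasses the DMA API.
		 */
		if (!virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM))
			return false;

		return true;
	}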

> Effectively this
> explicitly says whether the device is supposed to be behind an IOMMU.

Yes. It's quite strange if you consider hotpluggable hardware, since
those devices shouldn't get to choose whether they are managed by an
IOMMU. For the IOMMU itself, it should be fine.

>> For better or for worse, the guest has to implement it. If this feature
>> bit is unset for virtio-iommu, it does DMA on the physical address
>> space, regardless of what the static topology description says.
>> 
>> In practice it doesn't quite work. If your iommu-map describes the IOMMU
>> as translating itself, Linux' OF code will wait for the IOMMU to be
>> probed before probing the IOMMU. Working around this with hacks is
>> possible, but I don't want to introduce more questionable code to OF and
>> device tree bindings if there is any other way.
> Hum ok. I cannot really comment on this.
> 
> I just wanted to raise this concern about RID identification.

We can always try. Relaxing iommu-map further would be one additional
patch to Documentation/devicetree/bindings/pci/pci-iommu.txt, and one to
drivers/iommu/of-iommu.c. I'd rather make it a separate RFC.

Since we need acks from an OF maintainer and I'd also like Joerg's
approval for adding a new driver to the IOMMU tree, I think it's too
late for this iteration. I wasn't intending for this to go into 4.20,
just have something to discuss at KVM forum next week.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-15 19:46               ` Jean-philippe Brucker
@ 2018-10-17 15:14                 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-17 15:14 UTC (permalink / raw)
  To: Jean-philippe Brucker
  Cc: mark.rutland, devicetree, tnowicki, peter.maydell, marc.zyngier,
	linux-pci, will.deacon, virtualization, jean-philippe.brucker,
	iommu, robh+dt, Bjorn Helgaas, robin.murphy, kvmarm

On Mon, Oct 15, 2018 at 08:46:41PM +0100, Jean-philippe Brucker wrote:
> [Replying with my personal address because we're having SMTP issues]
> 
> On 15/10/2018 11:52, Michael S. Tsirkin wrote:
> > On Fri, Oct 12, 2018 at 02:41:59PM -0500, Bjorn Helgaas wrote:
> >> s/iommu/IOMMU/ in subject
> >>
> >> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
> >>> Using the iommu-map binding, endpoints in a given PCI domain can be
> >>> managed by different IOMMUs. Some virtual machines may allow a subset of
> >>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
> >>
> >> s/case/cases/
> >>
> >>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
> >>> PCI root complex has an iommu-map property, the driver requires all
> >>> endpoints to be described by the property. Allow the iommu-map property to
> >>> have gaps.
> >>
> >> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
> >> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
> >> iommu-map could inadvertently allow devices to bypass it?
> > 
> > 
> > Thinking about this comment, I would like to ask: can't the
> > virtio device indicate the ranges in a portable way?
> > This would minimize the dependency on dt bindings and ACPI,
> > enabling support for systems that have neither but do
> > have virtio e.g. through pci.
> 
> I thought about adding a PROBE request for this in virtio-iommu, but it
> wouldn't be usable by a Linux guest because of a bootstrapping problem.

Hmm. At some level it seems wrong to design hardware interfaces
around how Linux happens to probe things. That can change at any time
...

> Early on, Linux needs a description of device dependencies, to determine
> in which order to probe them. If the device dependency was described by
> virtio-iommu itself, the guest could for example initialize a NIC,
> allocate buffers and start DMA on the physical address space (which aborts
> if the IOMMU implementation disallows DMA by default), only to find out
> once the virtio-iommu module is loaded that it needs to cancel all DMA and
> reconfigure the NIC. With a static description such as iommu-map in DT or
> ACPI remapping tables, the guest can defer probing of the NIC until the
> IOMMU is initialized.
> 
> Thanks,
> Jean

Could you point me at the code you refer to here?

-- 
MST

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 0/7] Add virtio-iommu driver
  2018-10-17 11:54           ` Jean-Philippe Brucker
@ 2018-10-17 15:23             ` Michael S. Tsirkin
  -1 siblings, 0 replies; 101+ messages in thread
From: Michael S. Tsirkin @ 2018-10-17 15:23 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: devicetree, kevin.tian, Lorenzo Pieralisi, tnowicki, jasowang,
	linux-pci, joro, Will Deacon, virtualization, iommu, robh+dt,
	Marc Zyngier, Robin Murphy, kvmarm

On Wed, Oct 17, 2018 at 12:54:28PM +0100, Jean-Philippe Brucker wrote:
> On 16/10/2018 21:31, Auger Eric wrote:
> > Hi Jean,
> > 
> > On 10/16/18 8:44 PM, Jean-Philippe Brucker wrote:
> >> On 16/10/2018 10:25, Auger Eric wrote:
> >>> Hi Jean,
> >>>
> >>> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> >>>> Implement the virtio-iommu driver, following specification v0.8 [1].
> >>>> Changes since v2 [2]:
> >>>>
> >>>> * Patches 2-4 allow virtio-iommu to use the PCI transport, since QEMU
> >>>>    would like to phase out the MMIO transport. This produces a complex
> >>>>    topology where the programming interface of the IOMMU could appear
> >>>>    lower than the endpoints that it translates. It's not unheard of (e.g.
> >>>>    AMD IOMMU), and the guest easily copes with this.
> >>>>    
> >>>>    The "Firmware description" section of the specification has been
> >>>>    updated with all combinations of PCI, MMIO and DT, ACPI.
> >>>
> >>> I have a question wrt the FW specification. The IOMMU consumes 1 slot in
> >>> the PCI domain and one needs to leave a RID hole in the iommu-map.  It
> >>> is not obvious to me that this RID always is predictable given the pcie
> >>> enumeration mechanism. Generally we have a coarse grain mapping of RID
> >>> onto iommu phandles/STREAMIDs. Here, if I understand correctly we need
> >>> to precisely identify the RID granted to the iommu. On QEMU this may
> >>> depend on the instantiation order of the virtio-pci device right?
> >> 
> >> Yes, although it should all happen before you boot the guest, since
> >> there is no hotplugging an IOMMU. Could you reserve a PCI slot upfront
> >> and use it for virtio-iommu later? Or generate the iommu-map at the same
> >> time as generating the child node of the PCI RC?
> > 
> > Even when cold-plugging the PCIe devices through qemu CLI, this depends
> > on the order of the pcie devices in the list I guess. I need to further
> > experiment.
> 
> Please let me know how it goes. I guess the problem will be the same for
> building IORT tables? You're also going to need a hole in the ID
> mappings of the PCI root complex node.
> 
> >>> So
> >>> this does not look trivial to build this info. Isn't it possible to do
> >>> this exclusion at kernel level instead?
> >> 
> >> So in theory VIRTIO_F_IOMMU_PLATFORM already does that:
> >> 
> >> VIRTIO_F_IOMMU_PLATFORM(33)
> >>     This feature indicates that the device is behind an IOMMU that
> >>     translates bus addresses from the device into physical addresses in
> >>     memory. If this feature bit is set to 0, then the device emits
> >>     physical addresses which are not translated further, even though an
> >>     IOMMU may be present.
> > 
> > This tells the driver to use the dma api, right? 
> 
> That's how Linux implements the bit, install custom DMA ops when the bit
> is absent. But it doesn't work for everyone and has caused a lot of
> debate (https://patchwork.ozlabs.org/cover/946708/)
> 
> > Effectively this
> > explicitly says whether the device is supposed to be behind an IOMMU.
> 
> Yes. It's quite strange if you consider hotpluggable hardware, since
> those devices shouldn't get to choose whether they are managed by an
> IOMMU. For the IOMMU itself, it should be fine
> 
> >> For better or for worse, the guest has to implement it. If this feature
> >> bit is unset for virtio-iommu, it does DMA on the physical address
> >> space, regardless of what the static topology description says.
> >> 
> >> In practice it doesn't quite work. If your iommu-map describes the IOMMU
> >> as translating itself, Linux' OF code will wait for the IOMMU to be
> >> probed before probing the IOMMU. Working around this with hacks is
> >> possible, but I don't want to introduce more questionable code to OF and
> >> device tree bindings if there is any other way.
> > Hum ok. I cannot really comment on this.
> > 
> > I just wanted to raise this concern about RID identification.
> 
> We can always try. Relaxing iommu-map further would be one additional
> patch to Documentation/devicetree/bindings/pci/pci-iommu.txt, and one to
> drivers/iommu/of-iommu.c. I'd rather make it a separate RFC.
> 
> Since we need acks from an OF maintainer and I'd also like Joerg's
> approval for adding a new driver to the IOMMU tree, I think it's too
> late for this iteration. I wasn't intending for this to go into 4.20,
> just have something to discuss at KVM forum next week.
> 
> Thanks,
> Jean

OK then. I'd appreciate it if you mark patches that aren't
intended to be merged as RFC in subject line.
Thanks!

-- 
MST

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-10-18  0:30       ` Rob Herring
  -1 siblings, 0 replies; 101+ messages in thread
From: Rob Herring @ 2018-10-18  0:30 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	devicetree-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	linux-pci-u79uwXL29TY76Z2rM5mHXA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	marc.zyngier-5wv7dgnIgG8, robin.murphy-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg

On Fri, Oct 12, 2018 at 03:59:11PM +0100, Jean-Philippe Brucker wrote:
> The nature of a virtio-mmio node is discovered by the virtio driver at
> probe time. However the DMA relation between devices must be described
> statically. When a virtio-mmio node is a virtio-iommu device, it needs an
> "#iommu-cells" property as specified by bindings/iommu/iommu.txt.
> 
> Otherwise, the virtio-mmio device may perform DMA through an IOMMU, which
> requires an "iommus" property. Describe these requirements in the
> device-tree bindings documentation.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>  .../devicetree/bindings/virtio/mmio.txt       | 30 +++++++++++++++++++
>  1 file changed, 30 insertions(+)

One nit, otherwise,

Reviewed-by: Rob Herring <robh-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>

> 
> diff --git a/Documentation/devicetree/bindings/virtio/mmio.txt b/Documentation/devicetree/bindings/virtio/mmio.txt
> index 5069c1b8e193..748595473b36 100644
> --- a/Documentation/devicetree/bindings/virtio/mmio.txt
> +++ b/Documentation/devicetree/bindings/virtio/mmio.txt
> @@ -8,10 +8,40 @@ Required properties:
>  - reg:		control registers base address and size including configuration space
>  - interrupts:	interrupt generated by the device
>  
> +Required properties for virtio-iommu:
> +
> +- #iommu-cells:	When the node corresponds to a virtio-iommu device, it is
> +		linked to DMA masters using the "iommus" or "iommu-map"
> +		properties [1][2]. #iommu-cells specifies the size of the
> +		"iommus" property. For virtio-iommu #iommu-cells must be
> +		1, each cell describing a single endpoint ID.
> +
> +Optional properties:
> +
> +- iommus:	If the device accesses memory through an IOMMU, it should
> +		have an "iommus" property [1]. Since virtio-iommu itself
> +		does not access memory through an IOMMU, the "virtio,mmio"
> +		node cannot have both an "#iommu-cells" and an "iommus"
> +		property.
> +
>  Example:
>  
>  	virtio_block@3000 {
>  		compatible = "virtio,mmio";
>  		reg = <0x3000 0x100>;
>  		interrupts = <41>;
> +
> +		/* Device has endpoint ID 23 */
> +		iommus = <&viommu 23>
>  	}
> +
> +	viommu: virtio_iommu@3100 {

iommu@3100

> +		compatible = "virtio,mmio";
> +		reg = <0x3100 0x100>;
> +		interrupts = <42>;
> +
> +		#iommu-cells = <1>
> +	}
> +
> +[1] Documentation/devicetree/bindings/iommu/iommu.txt
> +[2] Documentation/devicetree/bindings/pci/pci-iommu.txt
> -- 
> 2.19.1
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-10-18  0:35     ` Rob Herring
  -1 siblings, 0 replies; 101+ messages in thread
From: Rob Herring @ 2018-10-18  0:35 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, devicetree, jasowang,
	linux-pci, joro, mst, will.deacon, virtualization, iommu,
	robh+dt, marc.zyngier, robin.murphy, kvmarm

On Fri, 12 Oct 2018 15:59:12 +0100, Jean-Philippe Brucker wrote:
> Some systems implement virtio-iommu as a PCI endpoint. The operating
> systems needs to discover the relationship between IOMMU and masters long
> before the PCI endpoint gets probed. Add a PCI child node to describe the
> virtio-iommu device.
> 
> The virtio-pci-iommu is conceptually split between a PCI programming
> interface and a translation component on the parent bus. The latter
> doesn't have a node in the device tree. The virtio-pci-iommu node
> describes both, by linking the PCI endpoint to "iommus" property of DMA
> master nodes and to "iommu-map" properties of bus nodes.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/iommu.txt      | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
> 

Reviewed-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-17 15:14                 ` Michael S. Tsirkin
@ 2018-10-18 10:47                   ` Robin Murphy
  -1 siblings, 0 replies; 101+ messages in thread
From: Robin Murphy @ 2018-10-18 10:47 UTC (permalink / raw)
  To: Michael S. Tsirkin, Jean-philippe Brucker
  Cc: devicetree, kevin.tian, tnowicki, marc.zyngier, linux-pci,
	jasowang, will.deacon, virtualization, iommu, robh+dt,
	Bjorn Helgaas, kvmarm

On 17/10/18 16:14, Michael S. Tsirkin wrote:
> On Mon, Oct 15, 2018 at 08:46:41PM +0100, Jean-philippe Brucker wrote:
>> [Replying with my personal address because we're having SMTP issues]
>>
>> On 15/10/2018 11:52, Michael S. Tsirkin wrote:
>>> On Fri, Oct 12, 2018 at 02:41:59PM -0500, Bjorn Helgaas wrote:
>>>> s/iommu/IOMMU/ in subject
>>>>
>>>> On Fri, Oct 12, 2018 at 03:59:13PM +0100, Jean-Philippe Brucker wrote:
>>>>> Using the iommu-map binding, endpoints in a given PCI domain can be
>>>>> managed by different IOMMUs. Some virtual machines may allow a subset of
>>>>> endpoints to bypass the IOMMU. In some case the IOMMU itself is presented
>>>>
>>>> s/case/cases/
>>>>
>>>>> as a PCI endpoint (e.g. AMD IOMMU and virtio-iommu). Currently, when a
>>>>> PCI root complex has an iommu-map property, the driver requires all
>>>>> endpoints to be described by the property. Allow the iommu-map property to
>>>>> have gaps.
>>>>
>>>> I'm not an IOMMU or virtio expert, so it's not obvious to me why it is
>>>> safe to allow devices to bypass the IOMMU.  Does this mean a typo in
>>>> iommu-map could inadvertently allow devices to bypass it?
>>>
>>>
>>> Thinking about this comment, I would like to ask: can't the
>>> virtio device indicate the ranges in a portable way?
>>> This would minimize the dependency on dt bindings and ACPI,
>>> enabling support for systems that have neither but do
>>> have virtio e.g. through pci.
>>
>> I thought about adding a PROBE request for this in virtio-iommu, but it
>> wouldn't be usable by a Linux guest because of a bootstrapping problem.
> 
> Hmm. At some level it seems wrong to design hardware interfaces
> around how Linux happens to probe things. That can change at any time
> ...

This isn't Linux-specific though. In general it's somewhere between 
difficult and impossible to pull in an IOMMU underneath a device after 
that device is active, so if any OS wants to use an IOMMU, it's going to 
want to know up-front that it's there and which devices it translates so 
that it can program said IOMMU appropriately *before* potentially 
starting DMA and/or interrupts from the relevant devices. Linux happens 
to do things in that order (either by firmware-driven probe-deferral or 
just perilous initcall ordering) because it is the only reasonable order 
in which to do them. AFAIK the platforms which don't rely on any 
firmware description of their IOMMU tend to have a fairly static system 
architecture (such that the OS simply makes hard-coded assumptions), so 
it's not necessarily entirely clear how they would cope with 
virtio-iommu either way.

Robin.

>> Early on, Linux needs a description of device dependencies, to determine
>> in which order to probe them. If the device dependency was described by
>> virtio-iommu itself, the guest could for example initialize a NIC,
>> allocate buffers and start DMA on the physical address space (which aborts
>> if the IOMMU implementation disallows DMA by default), only to find out
>> once the virtio-iommu module is loaded that it needs to cancel all DMA and
>> reconfigure the NIC. With a static description such as iommu-map in DT or
>> ACPI remapping tables, the guest can defer probing of the NIC until the
>> IOMMU is initialized.
>>
>> Thanks,
>> Jean
> 
> Could you point me at the code you refer to here?
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu
  2018-10-17 15:14                 ` Michael S. Tsirkin
@ 2018-10-22 11:27                     ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-10-22 11:27 UTC (permalink / raw)
  To: Michael S. Tsirkin, Jean-philippe Brucker
  Cc: mark.rutland-5wv7dgnIgG8, devicetree-u79uwXL29TY76Z2rM5mHXA,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	peter.maydell-QSEj5FYQhm4dnm+yROfE0A, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, Bjorn Helgaas,
	robin.murphy-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg

On 17/10/2018 16:14, Michael S. Tsirkin wrote:
>>> Thinking about this comment, I would like to ask: can't the
>>> virtio device indicate the ranges in a portable way?
>>> This would minimize the dependency on dt bindings and ACPI,
>>> enabling support for systems that have neither but do
>>> have virtio e.g. through pci.
>>
>> I thought about adding a PROBE request for this in virtio-iommu, but it
>> wouldn't be usable by a Linux guest because of a bootstrapping problem.
> 
> Hmm. At some level it seems wrong to design hardware interfaces
> around how Linux happens to probe things. That can change at any time
> ...

I suspect that most other OSes will also solve this class of problem using
a standard such as DT or ACPI, because those also provide dependencies for
clocks, interrupts, power management, etc. We can add a self-contained
PROBE method if someone makes a case for it, but it's unlikely to get
used at all, and nearly impossible to implement in Linux. The host would
still need a method to tell the guest which device to probe first, for
example with kernel parameters.

>> Early on, Linux needs a description of device dependencies, to determine
>> in which order to probe them. If the device dependency was described by
>> virtio-iommu itself, the guest could for example initialize a NIC,
>> allocate buffers and start DMA on the physical address space (which aborts
>> if the IOMMU implementation disallows DMA by default), only to find out
>> once the virtio-iommu module is loaded that it needs to cancel all DMA and
>> reconfigure the NIC. With a static description such as iommu-map in DT or
>> ACPI remapping tables, the guest can defer probing of the NIC until the
>> IOMMU is initialized.
>>
>> Thanks,
>> Jean
> 
> Could you point me at the code you refer to here?

In drivers/base/dd.c, really_probe() calls dma_configure() before the
device driver's probe(). dma_configure() ends up calling either
of_dma_configure() or acpi_dma_configure(), which return -EPROBE_DEFER
if the device's IOMMU isn't yet available. In that case the device is
added to the deferred pending list.

After another device is successfully bound to a driver, all devices on
the pending list are retried (driver_deferred_probe_trigger()), and if
the dependency has been resolved, then dma_configure() succeeds.
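
To make this concrete, here is a rough sketch of the path
(hand-simplified from the 4.19-era sources, so the exact shape is
approximate):

/* drivers/base/dd.c, heavily simplified */
static int really_probe(struct device *dev, struct device_driver *drv)
{
	int ret;

	/* Resolve DMA ops and the IOMMU before calling the driver */
	ret = dma_configure(dev);	/* of_dma_configure()/acpi_dma_configure() */
	if (ret)			/* -EPROBE_DEFER if the IOMMU isn't ready */
		goto probe_failed;	/* device lands on the deferred probe list */

	ret = drv->probe(dev);
	/* ... */
	return ret;

probe_failed:
	/* driver_deferred_probe_trigger() will retry this device later */
	return ret;
}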

Another method (used by Intel and AMD IOMMU drivers) is to initialize
the IOMMU as early as possible, after discovering it in the ACPI tables
and before probing other devices. This can't work for virtio-iommu
because the driver might be a module, in which case early init isn't
possible. We have to defer probe of all dependent devices until the
virtio and virtio-iommu modules are loaded.

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-11-08 14:48       ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-08 14:48 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	devicetree-u79uwXL29TY76Z2rM5mHXA
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, robin.murphy-5wv7dgnIgG8

Hi Jean-Philippe,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter an MSI doorbell region, set it up as an IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  39 ++++++++
>  2 files changed, 180 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 9fb38cd3b727..8eaf66770469 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -56,6 +56,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -77,8 +78,10 @@ struct viommu_domain {
>  };
>  
>  struct viommu_endpoint {
> +	struct device			*dev;
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>  {
>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>  
> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
> +		return len - viommu->probe_size - tail_size;
> +
>  	return len - tail_size;
>  }
>  
> @@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	default:
> +		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +			 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +	return 0;
> +}
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	size_t probe_len;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		return -EINVAL;
> +
> +	probe_len = sizeof(*probe) + viommu->probe_size +
> +		    sizeof(struct virtio_iommu_req_tail);
> +	probe = kzalloc(probe_len, GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length) + sizeof(*prop);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
> +			break;
> +		default:
> +			dev_err(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
> +
>  /* IOMMU API */
>  
>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
> @@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI window, use it.
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
>  	iommu_dma_get_resv_regions(dev, head);
>  }
>  
> @@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
>  	if (!vdev)
>  		return -ENOMEM;
>  
> +	vdev->dev = dev;
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	ret = iommu_device_link(&viommu->iommu, dev);
>  	if (ret)
>  		goto err_free_dev;
> @@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
>  	iommu_device_unlink(&viommu->iommu, dev);
>  
>  err_free_dev:
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  
>  	return ret;
> @@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
>  
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  }
>  
> @@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -913,6 +1047,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index e808fc7fbe82..feed74586bb0 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -14,6 +14,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  };
>  
>  /* Request types */
> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  };
>  
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
The value[] field has disappeared but is still documented in the v0.8 spec.
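
For reference, with that field the property header would presumably look
like this (my reading of the v0.8 spec wording; the exact C type of
value[] is a guess):

struct virtio_iommu_probe_property {
	__le16					type;
	__le16					length;
	__u8					value[];	/* 'length' bytes; __u8 is a guess */
};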

Thanks

Eric
> +};
> +
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	struct virtio_iommu_probe_property	head;
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					start;
> +	__le64					end;
> +};
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/*
> +	 * Tail follows the variable-length properties array. No padding,
> +	 * property lengths are all aligned on 8 bytes.
> +	 */
> +};
> +
>  #endif
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-10-12 16:35     ` Michael S. Tsirkin
@ 2018-11-08 14:51       ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-08 14:51 UTC (permalink / raw)
  To: Michael S. Tsirkin, Jean-Philippe Brucker
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, devicetree, jasowang,
	linux-pci, joro, will.deacon, virtualization, marc.zyngier,
	iommu, robh+dt, robin.murphy, kvmarm

Hi Jean-Philippe,

On 10/12/18 6:35 PM, Michael S. Tsirkin wrote:
> On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
>> The virtio IOMMU is a para-virtualized device that allows sending IOMMU
>> requests such as map/unmap over virtio transport without emulating page
>> tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
>> requests.
>>
>> The bulk of the code transforms calls coming from the IOMMU API into
>> corresponding virtio requests. Mappings are kept in an interval tree
>> instead of page tables.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  MAINTAINERS                       |   7 +
>>  drivers/iommu/Kconfig             |  11 +
>>  drivers/iommu/Makefile            |   1 +
>>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>>  include/uapi/linux/virtio_ids.h   |   1 +
>>  include/uapi/linux/virtio_iommu.h | 101 ++++
>>  6 files changed, 1059 insertions(+)
>>  create mode 100644 drivers/iommu/virtio-iommu.c
>>  create mode 100644 include/uapi/linux/virtio_iommu.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 48a65c3a4189..f02fa65f47e2 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -15599,6 +15599,13 @@ S:	Maintained
>>  F:	drivers/virtio/virtio_input.c
>>  F:	include/uapi/linux/virtio_input.h
>>  
>> +VIRTIO IOMMU DRIVER
>> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> +L:	virtualization@lists.linux-foundation.org
>> +S:	Maintained
>> +F:	drivers/iommu/virtio-iommu.c
>> +F:	include/uapi/linux/virtio_iommu.h
>> +
>>  VIRTUAL BOX GUEST DEVICE DRIVER
>>  M:	Hans de Goede <hdegoede@redhat.com>
>>  M:	Arnd Bergmann <arnd@arndb.de>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index c60395b7470f..2dc016dc2b92 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>>  	help
>>  	  Support for IOMMU on certain Qualcomm SoCs.
>>  
>> +config VIRTIO_IOMMU
>> +	bool "Virtio IOMMU driver"
>> +	depends on VIRTIO=y
>> +	select IOMMU_API
>> +	select INTERVAL_TREE
>> +	select ARM_DMA_USE_IOMMU if ARM
>> +	help
>> +	  Para-virtualised IOMMU driver with virtio.
>> +
>> +	  Say Y here if you intend to run this kernel as a guest.
>> +
>>  endif # IOMMU_SUPPORT
>> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
>> index ab5eba6edf82..4cd643408e49 100644
>> --- a/drivers/iommu/Makefile
>> +++ b/drivers/iommu/Makefile
>> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
>> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> new file mode 100644
>> index 000000000000..9fb38cd3b727
>> --- /dev/null
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -0,0 +1,938 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Virtio driver for the paravirtualized IOMMU
>> + *
>> + * Copyright (C) 2018 Arm Limited
>> + */
>> +
>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>> +
>> +#include <linux/amba/bus.h>
>> +#include <linux/delay.h>
>> +#include <linux/dma-iommu.h>
>> +#include <linux/freezer.h>
>> +#include <linux/interval_tree.h>
>> +#include <linux/iommu.h>
>> +#include <linux/module.h>
>> +#include <linux/of_iommu.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/pci.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/virtio.h>
>> +#include <linux/virtio_config.h>
>> +#include <linux/virtio_ids.h>
>> +#include <linux/wait.h>
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
>> +
>> +#define VIOMMU_REQUEST_VQ		0
>> +#define VIOMMU_NR_VQS			1
>> +
>> +/*
>> + * During development, it is convenient to time out rather than wait
>> + * indefinitely in atomic context when a device misbehaves and a request doesn't
>> + * return. In production however, some requests shouldn't return until they are
>> + * successful.
>> + */
>> +#ifdef DEBUG
>> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
>> +#endif
>> +
>> +struct viommu_dev {
>> +	struct iommu_device		iommu;
>> +	struct device			*dev;
>> +	struct virtio_device		*vdev;
>> +
>> +	struct ida			domain_ids;
>> +
>> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
>> +	spinlock_t			request_lock;
>> +	struct list_head		requests;
>> +
>> +	/* Device configuration */
>> +	struct iommu_domain_geometry	geometry;
>> +	u64				pgsize_bitmap;
>> +	u8				domain_bits;
>> +};
>> +
>> +struct viommu_mapping {
>> +	phys_addr_t			paddr;
>> +	struct interval_tree_node	iova;
>> +	u32				flags;
>> +};
>> +
>> +struct viommu_domain {
>> +	struct iommu_domain		domain;
>> +	struct viommu_dev		*viommu;
>> +	struct mutex			mutex;
>> +	unsigned int			id;
>> +
>> +	spinlock_t			mappings_lock;
>> +	struct rb_root_cached		mappings;
>> +
>> +	unsigned long			nr_endpoints;
>> +};
>> +
>> +struct viommu_endpoint {
>> +	struct viommu_dev		*viommu;
>> +	struct viommu_domain		*vdomain;
>> +};
>> +
>> +struct viommu_request {
>> +	struct list_head		list;
>> +	void				*writeback;
>> +	unsigned int			write_offset;
>> +	unsigned int			len;
>> +	char				buf[];
>> +};
>> +
>> +#define to_viommu_domain(domain)	\
>> +	container_of(domain, struct viommu_domain, domain)
>> +
>> +static int viommu_get_req_errno(void *buf, size_t len)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	switch (tail->status) {
>> +	case VIRTIO_IOMMU_S_OK:
>> +		return 0;
>> +	case VIRTIO_IOMMU_S_UNSUPP:
>> +		return -ENOSYS;
>> +	case VIRTIO_IOMMU_S_INVAL:
>> +		return -EINVAL;
>> +	case VIRTIO_IOMMU_S_RANGE:
>> +		return -ERANGE;
>> +	case VIRTIO_IOMMU_S_NOENT:
>> +		return -ENOENT;
>> +	case VIRTIO_IOMMU_S_FAULT:
>> +		return -EFAULT;
>> +	case VIRTIO_IOMMU_S_IOERR:
>> +	case VIRTIO_IOMMU_S_DEVERR:
>> +	default:
>> +		return -EIO;
>> +	}
>> +}
>> +
>> +static void viommu_set_req_status(void *buf, size_t len, int status)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	tail->status = status;
>> +}
>> +
>> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>> +				   struct virtio_iommu_req_head *req,
>> +				   size_t len)
>> +{
>> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>> +
>> +	return len - tail_size;
>> +}
>> +
>> +/*
>> + * __viommu_sync_req - Complete all in-flight requests
>> + *
>> + * Wait for all added requests to complete. When this function returns, all
>> + * requests that were in-flight at the time of the call have completed.
>> + */
>> +static int __viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret = 0;
>> +	unsigned int len;
>> +	size_t write_len;
>> +	ktime_t timeout = 0;
>> +	struct viommu_request *req;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +#ifdef DEBUG
>> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
>> +#endif
>> +	virtqueue_kick(vq);
>> +
>> +	while (!list_empty(&viommu->requests)) {
>> +		len = 0;
>> +		req = virtqueue_get_buf(vq, &len);
>> +		if (req == NULL) {
>> +			if (!timeout || ktime_before(ktime_get(), timeout))
>> +				continue;
>> +
>> +			/* After timeout, remove all requests */
>> +			req = list_first_entry(&viommu->requests,
>> +					       struct viommu_request, list);
>> +			ret = -ETIMEDOUT;
>> +		}
>> +
>> +		if (!len)
>> +			viommu_set_req_status(req->buf, req->len,
>> +					      VIRTIO_IOMMU_S_IOERR);
>> +
>> +		write_len = req->len - req->write_offset;
>> +		if (req->writeback && len >= write_len)
>> +			memcpy(req->writeback, req->buf + req->write_offset,
>> +			       write_len);
>> +
>> +		list_del(&req->list);
>> +		kfree(req);
> 
> So with DEBUG set, this will actually free memory that the device still
> DMAs into. Hardly pretty. I think you want to mark the device broken,
> queue the request and then wait for the device to be reset.
> 
> 
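A rough sketch of that suggestion (assuming virtio_break_device() is the
right way to flag the device, and leaving the actual reset handling open)
could be:

		if (ret == -ETIMEDOUT) {
			/*
			 * Sketch: don't free a buffer the device may still
			 * DMA into; mark the device broken and leave the
			 * request pending until the device is reset.
			 */
			virtio_break_device(viommu->vdev);
			break;
		}
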
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * __viommu_add_request - Add one request to the queue
>> + * @buf: pointer to the request buffer
>> + * @len: length of the request buffer
>> + * @writeback: copy data back to the buffer when the request completes.
>> + *
>> + * Add a request to the queue. Only synchronize the queue if it's already full.
>> + * Otherwise don't kick the queue nor wait for requests to complete.
>> + *
>> + * When @writeback is true, data written by the device, including the request
>> + * status, is copied into @buf after the request completes. This is unsafe if
>> + * the caller allocates @buf on stack and drops the lock between add_req() and
>> + * sync_req().
>> + *
>> + * Return 0 if the request was successfully added to the queue.
>> + */
>> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
>> +			    bool writeback)
>> +{
>> +	int ret;
>> +	off_t write_offset;
>> +	struct viommu_request *req;
>> +	struct scatterlist top_sg, bottom_sg;
>> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +
>> +	write_offset = viommu_get_req_offset(viommu, buf, len);
>> +	if (!write_offset)
>> +		return -EINVAL;
>> +
>> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	req->len = len;
>> +	if (writeback) {
>> +		req->writeback = buf + write_offset;
>> +		req->write_offset = write_offset;
>> +	}
>> +	memcpy(&req->buf, buf, write_offset);
>> +
>> +	sg_init_one(&top_sg, req->buf, write_offset);
>> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
>> +
>> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	if (ret == -ENOSPC) {
>> +		/* If the queue is full, sync and retry */
>> +		if (!__viommu_sync_req(viommu))
>> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	}
>> +	if (ret)
>> +		goto err_free;
>> +
>> +	list_add_tail(&req->list, &viommu->requests);
>> +	return 0;
>> +
>> +err_free:
>> +	kfree(req);
>> +	return ret;
>> +}
>> +
>> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_add_req(viommu, buf, len, false);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * Send a request and wait for it to complete. Return the request status (as an
>> + * errno)
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
>> +				size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +
>> +	ret = __viommu_add_req(viommu, buf, len, true);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
>> +		goto out_unlock;
>> +	}
>> +
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +		/* Fall-through (get the actual request status) */
>> +	}
>> +
>> +	ret = viommu_get_req_errno(buf, len);
>> +out_unlock:
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +	return ret;
>> +}
>> +
>> +/*
>> + * viommu_add_mapping - add a mapping to the internal tree
>> + *
>> + * On success, return the new mapping. Otherwise return NULL.
>> + */
>> +static struct viommu_mapping *
>> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
>> +		   phys_addr_t paddr, size_t size, u32 flags)
>> +{
>> +	unsigned long irqflags;
>> +	struct viommu_mapping *mapping;
>> +
>> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
>> +	if (!mapping)
>> +		return NULL;
>> +
>> +	mapping->paddr		= paddr;
>> +	mapping->iova.start	= iova;
>> +	mapping->iova.last	= iova + size - 1;
>> +	mapping->flags		= flags;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
>> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
>> +
>> +	return mapping;
>> +}
>> +
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				  unsigned long iova, size_t size)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/* Trying to split a mapping? */
>> +		if (mapping->iova.start < iova)
>> +			break;
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +		kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all endpoints,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + */
>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>> +{
>> +	int ret;
ret needs to be initialized here. Otherwise this can lead to a crash in
viommu_add_device.
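
Something like the following would do, I suppose (just a sketch), so that
an empty mapping tree is treated as success:

	int ret = 0;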

Thanks

Eric
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct virtio_iommu_req_map map;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
>> +	while (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		map = (struct virtio_iommu_req_map) {
>> +			.head.type	= VIRTIO_IOMMU_T_MAP,
>> +			.domain		= cpu_to_le32(vdomain->id),
>> +			.virt_start	= cpu_to_le64(mapping->iova.start),
>> +			.virt_end	= cpu_to_le64(mapping->iova.last),
>> +			.phys_start	= cpu_to_le64(mapping->paddr),
>> +			.flags		= cpu_to_le32(mapping->flags),
>> +		};
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +		if (ret)
>> +			break;
>> +
>> +		node = interval_tree_iter_next(node, 0, -1UL);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/* IOMMU API */
>> +
>> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
>> +{
>> +	struct viommu_domain *vdomain;
>> +
>> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
>> +		return NULL;
>> +
>> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
>> +	if (!vdomain)
>> +		return NULL;
>> +
>> +	mutex_init(&vdomain->mutex);
>> +	spin_lock_init(&vdomain->mappings_lock);
>> +	vdomain->mappings = RB_ROOT_CACHED;
>> +
>> +	if (type == IOMMU_DOMAIN_DMA &&
>> +	    iommu_get_dma_cookie(&vdomain->domain)) {
>> +		kfree(vdomain);
>> +		return NULL;
>> +	}
>> +
>> +	return &vdomain->domain;
>> +}
>> +
>> +static int viommu_domain_finalise(struct viommu_dev *viommu,
>> +				  struct iommu_domain *domain)
>> +{
>> +	int ret;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
>> +				  (1U << viommu->domain_bits) - 1;
>> +
>> +	vdomain->viommu		= viommu;
>> +
>> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
>> +	domain->geometry	= viommu->geometry;
>> +
>> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
>> +	if (ret >= 0)
>> +		vdomain->id = (unsigned int)ret;
>> +
>> +	return ret > 0 ? 0 : ret;
>> +}
>> +
>> +static void viommu_domain_free(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	iommu_put_dma_cookie(domain);
>> +
>> +	/* Free all remaining mappings (size 2^64) */
>> +	viommu_del_mappings(vdomain, 0, 0);
>> +
>> +	if (vdomain->viommu)
>> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
>> +
>> +	kfree(vdomain);
>> +}
>> +
>> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> +{
>> +	int i;
>> +	int ret = 0;
>> +	struct virtio_iommu_req_attach req;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	mutex_lock(&vdomain->mutex);
>> +	if (!vdomain->viommu) {
>> +		/*
>> +		 * Initialize the domain proper now that we know which viommu
>> +		 * owns it.
>> +		 */
>> +		ret = viommu_domain_finalise(vdev->viommu, domain);
>> +	} else if (vdomain->viommu != vdev->viommu) {
>> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
>> +		ret = -EXDEV;
>> +	}
>> +	mutex_unlock(&vdomain->mutex);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	/*
>> +	 * In the virtio-iommu device, when attaching the endpoint to a new
>> +	 * domain, it is detached from the old one and, if as a result the
>> +	 * old domain isn't attached to any endpoint, all mappings are removed
>> +	 * from the old domain and it is freed.
>> +	 *
>> +	 * In the driver the old domain still exists, and its mappings will be
>> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
>> +	 * freed explicitly.
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		vdev->vdomain->nr_endpoints--;
>> +
>> +	req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	if (!vdomain->nr_endpoints) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings (e.g. SW MSI).
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	vdomain->nr_endpoints++;
>> +	vdev->vdomain = vdomain;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
>> +		      phys_addr_t paddr, size_t size, int prot)
>> +{
>> +	int ret;
>> +	int flags;
>> +	struct viommu_mapping *mapping;
>> +	struct virtio_iommu_req_map map;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
>> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
>> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
>> +
>> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
>> +	if (!mapping)
>> +		return -ENOMEM;
>> +
>> +	map = (struct virtio_iommu_req_map) {
>> +		.head.type	= VIRTIO_IOMMU_T_MAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.phys_start	= cpu_to_le64(paddr),
>> +		.virt_end	= cpu_to_le64(iova + size - 1),
>> +		.flags		= cpu_to_le32(flags),
>> +	};
>> +
>> +	if (!vdomain->nr_endpoints)
>> +		return 0;
>> +
>> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +	if (ret)
>> +		viommu_del_mappings(vdomain, iova, size);
>> +
>> +	return ret;
>> +}
>> +
>> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
>> +			   size_t size)
>> +{
>> +	int ret = 0;
>> +	size_t unmapped;
>> +	struct virtio_iommu_req_unmap unmap;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	unmapped = viommu_del_mappings(vdomain, iova, size);
>> +	if (unmapped < size)
>> +		return 0;
>> +
>> +	/* Device already removed all mappings after detach. */
>> +	if (!vdomain->nr_endpoints)
>> +		return unmapped;
>> +
>> +	unmap = (struct virtio_iommu_req_unmap) {
>> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
>> +	};
>> +
>> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
>> +	return ret ? 0 : unmapped;
>> +}
>> +
>> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
>> +				       dma_addr_t iova)
>> +{
>> +	u64 paddr = 0;
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
>> +	if (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		paddr = mapping->paddr + (iova - mapping->iova.start);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return paddr;
>> +}
>> +
>> +static void viommu_iotlb_sync(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	viommu_sync_req(vdomain->viommu);
>> +}
>> +
>> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *region;
>> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
>> +					 IOMMU_RESV_SW_MSI);
>> +	if (!region)
>> +		return;
>> +
>> +	list_add_tail(&region->list, head);
>> +	iommu_dma_get_resv_regions(dev, head);
>> +}
>> +
>> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *entry, *next;
>> +
>> +	list_for_each_entry_safe(entry, next, head, list)
>> +		kfree(entry);
>> +}
>> +
>> +static struct iommu_ops viommu_ops;
>> +static struct virtio_driver virtio_iommu_drv;
>> +
>> +static int viommu_match_node(struct device *dev, void *data)
>> +{
>> +	return dev->parent->fwnode == data;
>> +}
>> +
>> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
>> +						fwnode, viommu_match_node);
>> +	put_device(dev);
>> +
>> +	return dev ? dev_to_virtio(dev)->priv : NULL;
>> +}
>> +
>> +static int viommu_add_device(struct device *dev)
>> +{
>> +	int ret;
>> +	struct iommu_group *group;
>> +	struct viommu_endpoint *vdev;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return -ENODEV;
>> +
>> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
>> +	if (!viommu)
>> +		return -ENODEV;
>> +
>> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
>> +	if (!vdev)
>> +		return -ENOMEM;
>> +
>> +	vdev->viommu = viommu;
>> +	fwspec->iommu_priv = vdev;
>> +
>> +	ret = iommu_device_link(&viommu->iommu, dev);
>> +	if (ret)
>> +		goto err_free_dev;
>> +
>> +	/*
>> +	 * Last step creates a default domain and attaches to it. Everything
>> +	 * must be ready.
>> +	 */
>> +	group = iommu_group_get_for_dev(dev);
>> +	if (IS_ERR(group)) {
>> +		ret = PTR_ERR(group);
>> +		goto err_unlink_dev;
>> +	}
>> +
>> +	iommu_group_put(group);
>> +
>> +	return PTR_ERR_OR_ZERO(group);
>> +
>> +err_unlink_dev:
>> +	iommu_device_unlink(&viommu->iommu, dev);
>> +
>> +err_free_dev:
>> +	kfree(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove_device(struct device *dev)
>> +{
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
>> +
>> +	vdev = fwspec->iommu_priv;
>> +
>> +	iommu_group_remove_device(dev);
>> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
>> +	kfree(vdev);
>> +}
>> +
>> +static struct iommu_group *viommu_device_group(struct device *dev)
>> +{
>> +	if (dev_is_pci(dev))
>> +		return pci_device_group(dev);
>> +	else
>> +		return generic_device_group(dev);
>> +}
>> +
>> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +{
>> +	return iommu_fwspec_add_ids(dev, args->args, 1);
>> +}
>> +
>> +static struct iommu_ops viommu_ops = {
>> +	.domain_alloc		= viommu_domain_alloc,
>> +	.domain_free		= viommu_domain_free,
>> +	.attach_dev		= viommu_attach_dev,
>> +	.map			= viommu_map,
>> +	.unmap			= viommu_unmap,
>> +	.iova_to_phys		= viommu_iova_to_phys,
>> +	.iotlb_sync		= viommu_iotlb_sync,
>> +	.add_device		= viommu_add_device,
>> +	.remove_device		= viommu_remove_device,
>> +	.device_group		= viommu_device_group,
>> +	.get_resv_regions	= viommu_get_resv_regions,
>> +	.put_resv_regions	= viommu_put_resv_regions,
>> +	.of_xlate		= viommu_of_xlate,
>> +};
>> +
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>> +{
>> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> +	const char *name = "request";
>> +	void *ret;
>> +
>> +	ret = virtio_find_single_vq(vdev, NULL, name);
>> +	if (IS_ERR(ret)) {
>> +		dev_err(viommu->dev, "cannot find VQ\n");
>> +		return PTR_ERR(ret);
>> +	}
>> +
>> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe(struct virtio_device *vdev)
>> +{
>> +	struct device *parent_dev = vdev->dev.parent;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct device *dev = &vdev->dev;
>> +	u64 input_start = 0;
>> +	u64 input_end = -1UL;
>> +	int ret;
>> +
>> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>> +		return -ENODEV;
> 
> I'm a bit confused about what will happen if this device
> happens to be behind an iommu itself.
> 
> If we can't handle that, should we clear PLATFORM_IOMMU
> e.g. like the balloon does?
> 
> 
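If the answer is that this case can't be handled, one minimal sketch of opting out — assuming the virtio core's .validate hook and the VIRTIO_F_IOMMU_PLATFORM feature bit, and not taken from the posted patch — could look like:

	static int viommu_validate(struct virtio_device *vdev)
	{
		/*
		 * Sketch only: a vIOMMU whose own DMA is translated by
		 * another IOMMU isn't handled, so refuse to negotiate the
		 * feature rather than probe in a broken setup.
		 */
		__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
		return 0;
	}

wired up with .validate = viommu_validate in virtio_iommu_drv; whether silently clearing the feature is acceptable for an IOMMU device is exactly the question raised above.
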
>> +
>> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
>> +	if (!viommu)
>> +		return -ENOMEM;
>> +
>> +	spin_lock_init(&viommu->request_lock);
>> +	ida_init(&viommu->domain_ids);
>> +	viommu->dev = dev;
>> +	viommu->vdev = vdev;
>> +	INIT_LIST_HEAD(&viommu->requests);
>> +
>> +	ret = viommu_init_vqs(viommu);
>> +	if (ret)
>> +		return ret;
>> +
>> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
>> +		     &viommu->pgsize_bitmap);
>> +
>> +	if (!viommu->pgsize_bitmap) {
>> +		ret = -EINVAL;
>> +		goto err_free_vqs;
>> +	}
>> +
>> +	viommu->domain_bits = 32;
>> +
>> +	/* Optional features */
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.start,
>> +			     &input_start);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.end,
>> +			     &input_end);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +			     struct virtio_iommu_config, domain_bits,
>> +			     &viommu->domain_bits);
>> +
>> +	viommu->geometry = (struct iommu_domain_geometry) {
>> +		.aperture_start	= input_start,
>> +		.aperture_end	= input_end,
>> +		.force_aperture	= true,
>> +	};
>> +
>> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
>> +
>> +	virtio_device_ready(vdev);
>> +
>> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>> +				     virtio_bus_name(vdev));
>> +	if (ret)
>> +		goto err_free_vqs;
>> +
>> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
>> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
>> +
>> +	iommu_device_register(&viommu->iommu);
>> +
>> +#ifdef CONFIG_PCI
>> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
>> +		pci_request_acs();
>> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +#ifdef CONFIG_ARM_AMBA
>> +	if (amba_bustype.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +
>> +	vdev->priv = viommu;
>> +
>> +	dev_info(dev, "input address: %u bits\n",
>> +		 order_base_2(viommu->geometry.aperture_end));
>> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
>> +
>> +	return 0;
>> +
>> +err_unregister:
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +err_free_vqs:
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove(struct virtio_device *vdev)
>> +{
>> +	struct viommu_dev *viommu = vdev->priv;
>> +
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +
>> +	/* Stop all virtqueues */
>> +	vdev->config->reset(vdev);
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	dev_info(&vdev->dev, "device removed\n");
>> +}
>> +
>> +static void viommu_config_changed(struct virtio_device *vdev)
>> +{
>> +	dev_warn(&vdev->dev, "config changed\n");
>> +}
>> +
>> +static unsigned int features[] = {
>> +	VIRTIO_IOMMU_F_MAP_UNMAP,
>> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +	VIRTIO_IOMMU_F_INPUT_RANGE,
>> +};
>> +
>> +static struct virtio_device_id id_table[] = {
>> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
>> +	{ 0 },
>> +};
>> +
>> +static struct virtio_driver virtio_iommu_drv = {
>> +	.driver.name		= KBUILD_MODNAME,
>> +	.driver.owner		= THIS_MODULE,
>> +	.id_table		= id_table,
>> +	.feature_table		= features,
>> +	.feature_table_size	= ARRAY_SIZE(features),
>> +	.probe			= viommu_probe,
>> +	.remove			= viommu_remove,
>> +	.config_changed		= viommu_config_changed,
>> +};
>> +
>> +module_virtio_driver(virtio_iommu_drv);
>> +
>> +MODULE_DESCRIPTION("Virtio IOMMU driver");
>> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
>> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
>> --- a/include/uapi/linux/virtio_ids.h
>> +++ b/include/uapi/linux/virtio_ids.h
>> @@ -43,5 +43,6 @@
>>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
>> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>>  
>>  #endif /* _LINUX_VIRTIO_IDS_H */
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> new file mode 100644
>> index 000000000000..e808fc7fbe82
>> --- /dev/null
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -0,0 +1,101 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause */
>> +/*
>> + * Virtio-iommu definition v0.8
>> + *
>> + * Copyright (C) 2018 Arm Ltd.
>> + */
>> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
>> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
>> +
>> +#include <linux/types.h>
>> +
>> +/* Feature bits */
>> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
>> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
>> +#define VIRTIO_IOMMU_F_BYPASS			3
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
>> +	/* Supported IOVA range */
>> +	struct virtio_iommu_range {
> 
> I'd rather we moved the definition outside even though gcc allows it -
> some old userspace compilers might not.
> 
>> +		__u64				start;
>> +		__u64				end;
>> +	} input_range;
>> +	/* Max domain ID size */
>> +	__u8					domain_bits;
> 
> Let's add explicit padding here as well?
> 
>> +};
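Putting the two comments above together, a sketch of the config structure with virtio_iommu_range pulled out to the top level and explicit padding after domain_bits (the 3-byte padding size is an assumption, not something the posted header defines):

	struct virtio_iommu_range {
		__u64					start;
		__u64					end;
	};

	struct virtio_iommu_config {
		/* Supported page sizes */
		__u64					page_size_mask;
		/* Supported IOVA range */
		struct virtio_iommu_range		input_range;
		/* Max domain ID size */
		__u8					domain_bits;
		__u8					padding[3];
	};
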
>> +
>> +/* Request types */
>> +#define VIRTIO_IOMMU_T_ATTACH			0x01
>> +#define VIRTIO_IOMMU_T_DETACH			0x02
>> +#define VIRTIO_IOMMU_T_MAP			0x03
>> +#define VIRTIO_IOMMU_T_UNMAP			0x04
>> +
>> +/* Status types */
>> +#define VIRTIO_IOMMU_S_OK			0x00
>> +#define VIRTIO_IOMMU_S_IOERR			0x01
>> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
>> +#define VIRTIO_IOMMU_S_DEVERR			0x03
>> +#define VIRTIO_IOMMU_S_INVAL			0x04
>> +#define VIRTIO_IOMMU_S_RANGE			0x05
>> +#define VIRTIO_IOMMU_S_NOENT			0x06
>> +#define VIRTIO_IOMMU_S_FAULT			0x07
>> +
>> +struct virtio_iommu_req_head {
>> +	__u8					type;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_tail {
>> +	__u8					status;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_attach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_detach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
>> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
>> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
>> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
>> +
>> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
>> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
>> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
>> +						 VIRTIO_IOMMU_MAP_F_MMIO)
>> +
>> +struct virtio_iommu_req_map {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__le64					phys_start;
>> +	__le32					flags;
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_unmap {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__u8					reserved[4];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#endif
>> -- 
>> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
@ 2018-11-08 14:51       ` Auger Eric
  0 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-08 14:51 UTC (permalink / raw)
  To: Michael S. Tsirkin, Jean-Philippe Brucker
  Cc: iommu, virtualization, devicetree, linux-pci, kvmarm,
	peter.maydell, joro, jasowang, robh+dt, mark.rutland, tnowicki,
	kevin.tian, marc.zyngier, robin.murphy, will.deacon,
	lorenzo.pieralisi

Hi Jean-Philippe,

On 10/12/18 6:35 PM, Michael S. Tsirkin wrote:
> On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
>> The virtio IOMMU is a para-virtualized device that allows sending IOMMU
>> requests such as map/unmap over the virtio transport without emulating page
>> tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
>> requests.
>>
>> The bulk of the code transforms calls coming from the IOMMU API into
>> corresponding virtio requests. Mappings are kept in an interval tree
>> instead of page tables.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  MAINTAINERS                       |   7 +
>>  drivers/iommu/Kconfig             |  11 +
>>  drivers/iommu/Makefile            |   1 +
>>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>>  include/uapi/linux/virtio_ids.h   |   1 +
>>  include/uapi/linux/virtio_iommu.h | 101 ++++
>>  6 files changed, 1059 insertions(+)
>>  create mode 100644 drivers/iommu/virtio-iommu.c
>>  create mode 100644 include/uapi/linux/virtio_iommu.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 48a65c3a4189..f02fa65f47e2 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -15599,6 +15599,13 @@ S:	Maintained
>>  F:	drivers/virtio/virtio_input.c
>>  F:	include/uapi/linux/virtio_input.h
>>  
>> +VIRTIO IOMMU DRIVER
>> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> +L:	virtualization@lists.linux-foundation.org
>> +S:	Maintained
>> +F:	drivers/iommu/virtio-iommu.c
>> +F:	include/uapi/linux/virtio_iommu.h
>> +
>>  VIRTUAL BOX GUEST DEVICE DRIVER
>>  M:	Hans de Goede <hdegoede@redhat.com>
>>  M:	Arnd Bergmann <arnd@arndb.de>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index c60395b7470f..2dc016dc2b92 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>>  	help
>>  	  Support for IOMMU on certain Qualcomm SoCs.
>>  
>> +config VIRTIO_IOMMU
>> +	bool "Virtio IOMMU driver"
>> +	depends on VIRTIO=y
>> +	select IOMMU_API
>> +	select INTERVAL_TREE
>> +	select ARM_DMA_USE_IOMMU if ARM
>> +	help
>> +	  Para-virtualised IOMMU driver with virtio.
>> +
>> +	  Say Y here if you intend to run this kernel as a guest.
>> +
>>  endif # IOMMU_SUPPORT
>> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
>> index ab5eba6edf82..4cd643408e49 100644
>> --- a/drivers/iommu/Makefile
>> +++ b/drivers/iommu/Makefile
>> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
>> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> new file mode 100644
>> index 000000000000..9fb38cd3b727
>> --- /dev/null
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -0,0 +1,938 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Virtio driver for the paravirtualized IOMMU
>> + *
>> + * Copyright (C) 2018 Arm Limited
>> + */
>> +
>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>> +
>> +#include <linux/amba/bus.h>
>> +#include <linux/delay.h>
>> +#include <linux/dma-iommu.h>
>> +#include <linux/freezer.h>
>> +#include <linux/interval_tree.h>
>> +#include <linux/iommu.h>
>> +#include <linux/module.h>
>> +#include <linux/of_iommu.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/pci.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/virtio.h>
>> +#include <linux/virtio_config.h>
>> +#include <linux/virtio_ids.h>
>> +#include <linux/wait.h>
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
>> +
>> +#define VIOMMU_REQUEST_VQ		0
>> +#define VIOMMU_NR_VQS			1
>> +
>> +/*
>> + * During development, it is convenient to time out rather than wait
>> + * indefinitely in atomic context when a device misbehaves and a request doesn't
>> + * return. In production however, some requests shouldn't return until they are
>> + * successful.
>> + */
>> +#ifdef DEBUG
>> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
>> +#endif
>> +
>> +struct viommu_dev {
>> +	struct iommu_device		iommu;
>> +	struct device			*dev;
>> +	struct virtio_device		*vdev;
>> +
>> +	struct ida			domain_ids;
>> +
>> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
>> +	spinlock_t			request_lock;
>> +	struct list_head		requests;
>> +
>> +	/* Device configuration */
>> +	struct iommu_domain_geometry	geometry;
>> +	u64				pgsize_bitmap;
>> +	u8				domain_bits;
>> +};
>> +
>> +struct viommu_mapping {
>> +	phys_addr_t			paddr;
>> +	struct interval_tree_node	iova;
>> +	u32				flags;
>> +};
>> +
>> +struct viommu_domain {
>> +	struct iommu_domain		domain;
>> +	struct viommu_dev		*viommu;
>> +	struct mutex			mutex;
>> +	unsigned int			id;
>> +
>> +	spinlock_t			mappings_lock;
>> +	struct rb_root_cached		mappings;
>> +
>> +	unsigned long			nr_endpoints;
>> +};
>> +
>> +struct viommu_endpoint {
>> +	struct viommu_dev		*viommu;
>> +	struct viommu_domain		*vdomain;
>> +};
>> +
>> +struct viommu_request {
>> +	struct list_head		list;
>> +	void				*writeback;
>> +	unsigned int			write_offset;
>> +	unsigned int			len;
>> +	char				buf[];
>> +};
>> +
>> +#define to_viommu_domain(domain)	\
>> +	container_of(domain, struct viommu_domain, domain)
>> +
>> +static int viommu_get_req_errno(void *buf, size_t len)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	switch (tail->status) {
>> +	case VIRTIO_IOMMU_S_OK:
>> +		return 0;
>> +	case VIRTIO_IOMMU_S_UNSUPP:
>> +		return -ENOSYS;
>> +	case VIRTIO_IOMMU_S_INVAL:
>> +		return -EINVAL;
>> +	case VIRTIO_IOMMU_S_RANGE:
>> +		return -ERANGE;
>> +	case VIRTIO_IOMMU_S_NOENT:
>> +		return -ENOENT;
>> +	case VIRTIO_IOMMU_S_FAULT:
>> +		return -EFAULT;
>> +	case VIRTIO_IOMMU_S_IOERR:
>> +	case VIRTIO_IOMMU_S_DEVERR:
>> +	default:
>> +		return -EIO;
>> +	}
>> +}
>> +
>> +static void viommu_set_req_status(void *buf, size_t len, int status)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	tail->status = status;
>> +}
>> +
>> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>> +				   struct virtio_iommu_req_head *req,
>> +				   size_t len)
>> +{
>> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>> +
>> +	return len - tail_size;
>> +}
>> +
>> +/*
>> + * __viommu_sync_req - Complete all in-flight requests
>> + *
>> + * Wait for all added requests to complete. When this function returns, all
>> + * requests that were in-flight at the time of the call have completed.
>> + */
>> +static int __viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret = 0;
>> +	unsigned int len;
>> +	size_t write_len;
>> +	ktime_t timeout = 0;
>> +	struct viommu_request *req;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +#ifdef DEBUG
>> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
>> +#endif
>> +	virtqueue_kick(vq);
>> +
>> +	while (!list_empty(&viommu->requests)) {
>> +		len = 0;
>> +		req = virtqueue_get_buf(vq, &len);
>> +		if (req == NULL) {
>> +			if (!timeout || ktime_before(ktime_get(), timeout))
>> +				continue;
>> +
>> +			/* After timeout, remove all requests */
>> +			req = list_first_entry(&viommu->requests,
>> +					       struct viommu_request, list);
>> +			ret = -ETIMEDOUT;
>> +		}
>> +
>> +		if (!len)
>> +			viommu_set_req_status(req->buf, req->len,
>> +					      VIRTIO_IOMMU_S_IOERR);
>> +
>> +		write_len = req->len - req->write_offset;
>> +		if (req->writeback && len >= write_len)
>> +			memcpy(req->writeback, req->buf + req->write_offset,
>> +			       write_len);
>> +
>> +		list_del(&req->list);
>> +		kfree(req);
> 
> So with DEBUG set, this will actually free memory that device still
> DMA's into. Hardly pretty. I think you want to mark device broken,
> queue the request and then wait for device to be reset.
> 
> 
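One possible shape for that, sketched against the timeout branch above — virtio_break_device() is an existing helper, but leaving the requests queued until the device is reset is an assumption about how reclaim would then be done:

	if (req == NULL) {
		if (!timeout || ktime_before(ktime_get(), timeout))
			continue;

		/*
		 * Sketch only: rather than freeing buffers the device may
		 * still DMA into, mark the device broken and leave the
		 * requests on the list until the device is reset.
		 */
		dev_err(viommu->dev, "request timed out\n");
		virtio_break_device(viommu->vdev);
		ret = -ETIMEDOUT;
		break;
	}
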
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * __viommu_add_request - Add one request to the queue
>> + * @buf: pointer to the request buffer
>> + * @len: length of the request buffer
>> + * @writeback: copy data back to the buffer when the request completes.
>> + *
>> + * Add a request to the queue. Only synchronize the queue if it's already full.
>> + * Otherwise don't kick the queue nor wait for requests to complete.
>> + *
>> + * When @writeback is true, data written by the device, including the request
>> + * status, is copied into @buf after the request completes. This is unsafe if
>> + * the caller allocates @buf on stack and drops the lock between add_req() and
>> + * sync_req().
>> + *
>> + * Return 0 if the request was successfully added to the queue.
>> + */
>> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
>> +			    bool writeback)
>> +{
>> +	int ret;
>> +	off_t write_offset;
>> +	struct viommu_request *req;
>> +	struct scatterlist top_sg, bottom_sg;
>> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +
>> +	write_offset = viommu_get_req_offset(viommu, buf, len);
>> +	if (!write_offset)
>> +		return -EINVAL;
>> +
>> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	req->len = len;
>> +	if (writeback) {
>> +		req->writeback = buf + write_offset;
>> +		req->write_offset = write_offset;
>> +	}
>> +	memcpy(&req->buf, buf, write_offset);
>> +
>> +	sg_init_one(&top_sg, req->buf, write_offset);
>> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
>> +
>> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	if (ret == -ENOSPC) {
>> +		/* If the queue is full, sync and retry */
>> +		if (!__viommu_sync_req(viommu))
>> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	}
>> +	if (ret)
>> +		goto err_free;
>> +
>> +	list_add_tail(&req->list, &viommu->requests);
>> +	return 0;
>> +
>> +err_free:
>> +	kfree(req);
>> +	return ret;
>> +}
>> +
>> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_add_req(viommu, buf, len, false);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * Send a request and wait for it to complete. Return the request status (as an
>> + * errno)
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
>> +				size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +
>> +	ret = __viommu_add_req(viommu, buf, len, true);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
>> +		goto out_unlock;
>> +	}
>> +
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +		/* Fall-through (get the actual request status) */
>> +	}
>> +
>> +	ret = viommu_get_req_errno(buf, len);
>> +out_unlock:
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +	return ret;
>> +}
>> +
>> +/*
>> + * viommu_add_mapping - add a mapping to the internal tree
>> + *
>> + * On success, return the new mapping. Otherwise return NULL.
>> + */
>> +static struct viommu_mapping *
>> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
>> +		   phys_addr_t paddr, size_t size, u32 flags)
>> +{
>> +	unsigned long irqflags;
>> +	struct viommu_mapping *mapping;
>> +
>> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
>> +	if (!mapping)
>> +		return NULL;
>> +
>> +	mapping->paddr		= paddr;
>> +	mapping->iova.start	= iova;
>> +	mapping->iova.last	= iova + size - 1;
>> +	mapping->flags		= flags;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
>> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
>> +
>> +	return mapping;
>> +}
>> +
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				  unsigned long iova, size_t size)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/* Trying to split a mapping? */
>> +		if (mapping->iova.start < iova)
>> +			break;
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +		kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all endpoints,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + */
>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>> +{
>> +	int ret;
ret needs to be initialized here. Otherwise this can lead to a crash in
viommu_add_device.

Thanks

Eric
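
The change Eric is asking for is a one-liner; a sketch of the corrected declaration:

	int ret = 0;	/* a domain with nothing to replay must report success */

With ret initialised, an empty mapping tree returns 0 instead of stack garbage to viommu_attach_dev().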
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct virtio_iommu_req_map map;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
>> +	while (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		map = (struct virtio_iommu_req_map) {
>> +			.head.type	= VIRTIO_IOMMU_T_MAP,
>> +			.domain		= cpu_to_le32(vdomain->id),
>> +			.virt_start	= cpu_to_le64(mapping->iova.start),
>> +			.virt_end	= cpu_to_le64(mapping->iova.last),
>> +			.phys_start	= cpu_to_le64(mapping->paddr),
>> +			.flags		= cpu_to_le32(mapping->flags),
>> +		};
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +		if (ret)
>> +			break;
>> +
>> +		node = interval_tree_iter_next(node, 0, -1UL);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/* IOMMU API */
>> +
>> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
>> +{
>> +	struct viommu_domain *vdomain;
>> +
>> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
>> +		return NULL;
>> +
>> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
>> +	if (!vdomain)
>> +		return NULL;
>> +
>> +	mutex_init(&vdomain->mutex);
>> +	spin_lock_init(&vdomain->mappings_lock);
>> +	vdomain->mappings = RB_ROOT_CACHED;
>> +
>> +	if (type == IOMMU_DOMAIN_DMA &&
>> +	    iommu_get_dma_cookie(&vdomain->domain)) {
>> +		kfree(vdomain);
>> +		return NULL;
>> +	}
>> +
>> +	return &vdomain->domain;
>> +}
>> +
>> +static int viommu_domain_finalise(struct viommu_dev *viommu,
>> +				  struct iommu_domain *domain)
>> +{
>> +	int ret;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
>> +				  (1U << viommu->domain_bits) - 1;
>> +
>> +	vdomain->viommu		= viommu;
>> +
>> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
>> +	domain->geometry	= viommu->geometry;
>> +
>> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
>> +	if (ret >= 0)
>> +		vdomain->id = (unsigned int)ret;
>> +
>> +	return ret > 0 ? 0 : ret;
>> +}
>> +
>> +static void viommu_domain_free(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	iommu_put_dma_cookie(domain);
>> +
>> +	/* Free all remaining mappings (size 2^64) */
>> +	viommu_del_mappings(vdomain, 0, 0);
>> +
>> +	if (vdomain->viommu)
>> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
>> +
>> +	kfree(vdomain);
>> +}
>> +
>> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> +{
>> +	int i;
>> +	int ret = 0;
>> +	struct virtio_iommu_req_attach req;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	mutex_lock(&vdomain->mutex);
>> +	if (!vdomain->viommu) {
>> +		/*
>> +		 * Initialize the domain proper now that we know which viommu
>> +		 * owns it.
>> +		 */
>> +		ret = viommu_domain_finalise(vdev->viommu, domain);
>> +	} else if (vdomain->viommu != vdev->viommu) {
>> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
>> +		ret = -EXDEV;
>> +	}
>> +	mutex_unlock(&vdomain->mutex);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	/*
>> +	 * In the virtio-iommu device, when attaching the endpoint to a new
>> +	 * domain, it is detached from the old one and, if as a result the
>> +	 * old domain isn't attached to any endpoint, all mappings are removed
>> +	 * from the old domain and it is freed.
>> +	 *
>> +	 * In the driver the old domain still exists, and its mappings will be
>> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
>> +	 * freed explicitly.
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		vdev->vdomain->nr_endpoints--;
>> +
>> +	req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	if (!vdomain->nr_endpoints) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings (e.g. SW MSI).
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	vdomain->nr_endpoints++;
>> +	vdev->vdomain = vdomain;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
>> +		      phys_addr_t paddr, size_t size, int prot)
>> +{
>> +	int ret;
>> +	int flags;
>> +	struct viommu_mapping *mapping;
>> +	struct virtio_iommu_req_map map;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
>> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
>> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
>> +
>> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
>> +	if (!mapping)
>> +		return -ENOMEM;
>> +
>> +	map = (struct virtio_iommu_req_map) {
>> +		.head.type	= VIRTIO_IOMMU_T_MAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.phys_start	= cpu_to_le64(paddr),
>> +		.virt_end	= cpu_to_le64(iova + size - 1),
>> +		.flags		= cpu_to_le32(flags),
>> +	};
>> +
>> +	if (!vdomain->nr_endpoints)
>> +		return 0;
>> +
>> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +	if (ret)
>> +		viommu_del_mappings(vdomain, iova, size);
>> +
>> +	return ret;
>> +}
>> +
>> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
>> +			   size_t size)
>> +{
>> +	int ret = 0;
>> +	size_t unmapped;
>> +	struct virtio_iommu_req_unmap unmap;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	unmapped = viommu_del_mappings(vdomain, iova, size);
>> +	if (unmapped < size)
>> +		return 0;
>> +
>> +	/* Device already removed all mappings after detach. */
>> +	if (!vdomain->nr_endpoints)
>> +		return unmapped;
>> +
>> +	unmap = (struct virtio_iommu_req_unmap) {
>> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
>> +	};
>> +
>> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
>> +	return ret ? 0 : unmapped;
>> +}
>> +
>> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
>> +				       dma_addr_t iova)
>> +{
>> +	u64 paddr = 0;
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
>> +	if (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		paddr = mapping->paddr + (iova - mapping->iova.start);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return paddr;
>> +}
>> +
>> +static void viommu_iotlb_sync(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	viommu_sync_req(vdomain->viommu);
>> +}
>> +
>> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *region;
>> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
>> +					 IOMMU_RESV_SW_MSI);
>> +	if (!region)
>> +		return;
>> +
>> +	list_add_tail(&region->list, head);
>> +	iommu_dma_get_resv_regions(dev, head);
>> +}
>> +
>> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *entry, *next;
>> +
>> +	list_for_each_entry_safe(entry, next, head, list)
>> +		kfree(entry);
>> +}
>> +
>> +static struct iommu_ops viommu_ops;
>> +static struct virtio_driver virtio_iommu_drv;
>> +
>> +static int viommu_match_node(struct device *dev, void *data)
>> +{
>> +	return dev->parent->fwnode == data;
>> +}
>> +
>> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
>> +						fwnode, viommu_match_node);
>> +	put_device(dev);
>> +
>> +	return dev ? dev_to_virtio(dev)->priv : NULL;
>> +}
>> +
>> +static int viommu_add_device(struct device *dev)
>> +{
>> +	int ret;
>> +	struct iommu_group *group;
>> +	struct viommu_endpoint *vdev;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return -ENODEV;
>> +
>> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
>> +	if (!viommu)
>> +		return -ENODEV;
>> +
>> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
>> +	if (!vdev)
>> +		return -ENOMEM;
>> +
>> +	vdev->viommu = viommu;
>> +	fwspec->iommu_priv = vdev;
>> +
>> +	ret = iommu_device_link(&viommu->iommu, dev);
>> +	if (ret)
>> +		goto err_free_dev;
>> +
>> +	/*
>> +	 * Last step creates a default domain and attaches to it. Everything
>> +	 * must be ready.
>> +	 */
>> +	group = iommu_group_get_for_dev(dev);
>> +	if (IS_ERR(group)) {
>> +		ret = PTR_ERR(group);
>> +		goto err_unlink_dev;
>> +	}
>> +
>> +	iommu_group_put(group);
>> +
>> +	return PTR_ERR_OR_ZERO(group);
>> +
>> +err_unlink_dev:
>> +	iommu_device_unlink(&viommu->iommu, dev);
>> +
>> +err_free_dev:
>> +	kfree(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove_device(struct device *dev)
>> +{
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
>> +
>> +	vdev = fwspec->iommu_priv;
>> +
>> +	iommu_group_remove_device(dev);
>> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
>> +	kfree(vdev);
>> +}
>> +
>> +static struct iommu_group *viommu_device_group(struct device *dev)
>> +{
>> +	if (dev_is_pci(dev))
>> +		return pci_device_group(dev);
>> +	else
>> +		return generic_device_group(dev);
>> +}
>> +
>> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +{
>> +	return iommu_fwspec_add_ids(dev, args->args, 1);
>> +}
>> +
>> +static struct iommu_ops viommu_ops = {
>> +	.domain_alloc		= viommu_domain_alloc,
>> +	.domain_free		= viommu_domain_free,
>> +	.attach_dev		= viommu_attach_dev,
>> +	.map			= viommu_map,
>> +	.unmap			= viommu_unmap,
>> +	.iova_to_phys		= viommu_iova_to_phys,
>> +	.iotlb_sync		= viommu_iotlb_sync,
>> +	.add_device		= viommu_add_device,
>> +	.remove_device		= viommu_remove_device,
>> +	.device_group		= viommu_device_group,
>> +	.get_resv_regions	= viommu_get_resv_regions,
>> +	.put_resv_regions	= viommu_put_resv_regions,
>> +	.of_xlate		= viommu_of_xlate,
>> +};
>> +
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>> +{
>> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> +	const char *name = "request";
>> +	void *ret;
>> +
>> +	ret = virtio_find_single_vq(vdev, NULL, name);
>> +	if (IS_ERR(ret)) {
>> +		dev_err(viommu->dev, "cannot find VQ\n");
>> +		return PTR_ERR(ret);
>> +	}
>> +
>> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe(struct virtio_device *vdev)
>> +{
>> +	struct device *parent_dev = vdev->dev.parent;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct device *dev = &vdev->dev;
>> +	u64 input_start = 0;
>> +	u64 input_end = -1UL;
>> +	int ret;
>> +
>> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>> +		return -ENODEV;
> 
> I'm a bit confused about what will happen if this device
> happens to be behind an iommu itself.
> 
> If we can't handle that, should we clear PLATFORM_IOMMU
> e.g. like the balloon does?
> 
> 
>> +
>> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
>> +	if (!viommu)
>> +		return -ENOMEM;
>> +
>> +	spin_lock_init(&viommu->request_lock);
>> +	ida_init(&viommu->domain_ids);
>> +	viommu->dev = dev;
>> +	viommu->vdev = vdev;
>> +	INIT_LIST_HEAD(&viommu->requests);
>> +
>> +	ret = viommu_init_vqs(viommu);
>> +	if (ret)
>> +		return ret;
>> +
>> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
>> +		     &viommu->pgsize_bitmap);
>> +
>> +	if (!viommu->pgsize_bitmap) {
>> +		ret = -EINVAL;
>> +		goto err_free_vqs;
>> +	}
>> +
>> +	viommu->domain_bits = 32;
>> +
>> +	/* Optional features */
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.start,
>> +			     &input_start);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.end,
>> +			     &input_end);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +			     struct virtio_iommu_config, domain_bits,
>> +			     &viommu->domain_bits);
>> +
>> +	viommu->geometry = (struct iommu_domain_geometry) {
>> +		.aperture_start	= input_start,
>> +		.aperture_end	= input_end,
>> +		.force_aperture	= true,
>> +	};
>> +
>> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
>> +
>> +	virtio_device_ready(vdev);
>> +
>> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>> +				     virtio_bus_name(vdev));
>> +	if (ret)
>> +		goto err_free_vqs;
>> +
>> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
>> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
>> +
>> +	iommu_device_register(&viommu->iommu);
>> +
>> +#ifdef CONFIG_PCI
>> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
>> +		pci_request_acs();
>> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +#ifdef CONFIG_ARM_AMBA
>> +	if (amba_bustype.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +
>> +	vdev->priv = viommu;
>> +
>> +	dev_info(dev, "input address: %u bits\n",
>> +		 order_base_2(viommu->geometry.aperture_end));
>> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
>> +
>> +	return 0;
>> +
>> +err_unregister:
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +err_free_vqs:
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove(struct virtio_device *vdev)
>> +{
>> +	struct viommu_dev *viommu = vdev->priv;
>> +
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +
>> +	/* Stop all virtqueues */
>> +	vdev->config->reset(vdev);
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	dev_info(&vdev->dev, "device removed\n");
>> +}
>> +
>> +static void viommu_config_changed(struct virtio_device *vdev)
>> +{
>> +	dev_warn(&vdev->dev, "config changed\n");
>> +}
>> +
>> +static unsigned int features[] = {
>> +	VIRTIO_IOMMU_F_MAP_UNMAP,
>> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +	VIRTIO_IOMMU_F_INPUT_RANGE,
>> +};
>> +
>> +static struct virtio_device_id id_table[] = {
>> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
>> +	{ 0 },
>> +};
>> +
>> +static struct virtio_driver virtio_iommu_drv = {
>> +	.driver.name		= KBUILD_MODNAME,
>> +	.driver.owner		= THIS_MODULE,
>> +	.id_table		= id_table,
>> +	.feature_table		= features,
>> +	.feature_table_size	= ARRAY_SIZE(features),
>> +	.probe			= viommu_probe,
>> +	.remove			= viommu_remove,
>> +	.config_changed		= viommu_config_changed,
>> +};
>> +
>> +module_virtio_driver(virtio_iommu_drv);
>> +
>> +MODULE_DESCRIPTION("Virtio IOMMU driver");
>> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
>> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
>> --- a/include/uapi/linux/virtio_ids.h
>> +++ b/include/uapi/linux/virtio_ids.h
>> @@ -43,5 +43,6 @@
>>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
>> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>>  
>>  #endif /* _LINUX_VIRTIO_IDS_H */
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> new file mode 100644
>> index 000000000000..e808fc7fbe82
>> --- /dev/null
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -0,0 +1,101 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause */
>> +/*
>> + * Virtio-iommu definition v0.8
>> + *
>> + * Copyright (C) 2018 Arm Ltd.
>> + */
>> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
>> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
>> +
>> +#include <linux/types.h>
>> +
>> +/* Feature bits */
>> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
>> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
>> +#define VIRTIO_IOMMU_F_BYPASS			3
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
>> +	/* Supported IOVA range */
>> +	struct virtio_iommu_range {
> 
> I'd rather we moved the definition outside even though gcc allows it -
> some old userspace compilers might not.
> 
>> +		__u64				start;
>> +		__u64				end;
>> +	} input_range;
>> +	/* Max domain ID size */
>> +	__u8					domain_bits;
> 
> Let's add explicit padding here as well?
> 
>> +};
>> +
>> +/* Request types */
>> +#define VIRTIO_IOMMU_T_ATTACH			0x01
>> +#define VIRTIO_IOMMU_T_DETACH			0x02
>> +#define VIRTIO_IOMMU_T_MAP			0x03
>> +#define VIRTIO_IOMMU_T_UNMAP			0x04
>> +
>> +/* Status types */
>> +#define VIRTIO_IOMMU_S_OK			0x00
>> +#define VIRTIO_IOMMU_S_IOERR			0x01
>> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
>> +#define VIRTIO_IOMMU_S_DEVERR			0x03
>> +#define VIRTIO_IOMMU_S_INVAL			0x04
>> +#define VIRTIO_IOMMU_S_RANGE			0x05
>> +#define VIRTIO_IOMMU_S_NOENT			0x06
>> +#define VIRTIO_IOMMU_S_FAULT			0x07
>> +
>> +struct virtio_iommu_req_head {
>> +	__u8					type;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_tail {
>> +	__u8					status;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_attach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_detach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
>> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
>> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
>> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
>> +
>> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
>> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
>> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
>> +						 VIRTIO_IOMMU_MAP_F_MMIO)
>> +
>> +struct virtio_iommu_req_map {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__le64					phys_start;
>> +	__le32					flags;
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_unmap {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__u8					reserved[4];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#endif
>> -- 
>> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-10-12 16:35     ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2018-11-08 14:51     ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-08 14:51 UTC (permalink / raw)
  To: Michael S. Tsirkin, Jean-Philippe Brucker
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki,
	devicetree, linux-pci, joro, will.deacon, virtualization,
	marc.zyngier, iommu, robh+dt, robin.murphy, kvmarm

Hi Jean-Philippe,

On 10/12/18 6:35 PM, Michael S. Tsirkin wrote:
> On Fri, Oct 12, 2018 at 03:59:15PM +0100, Jean-Philippe Brucker wrote:
>> The virtio IOMMU is a para-virtualized device, allowing to send IOMMU
>> requests such as map/unmap over virtio transport without emulating page
>> tables. This implementation handles ATTACH, DETACH, MAP and UNMAP
>> requests.
>>
>> The bulk of the code transforms calls coming from the IOMMU API into
>> corresponding virtio requests. Mappings are kept in an interval tree
>> instead of page tables.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  MAINTAINERS                       |   7 +
>>  drivers/iommu/Kconfig             |  11 +
>>  drivers/iommu/Makefile            |   1 +
>>  drivers/iommu/virtio-iommu.c      | 938 ++++++++++++++++++++++++++++++
>>  include/uapi/linux/virtio_ids.h   |   1 +
>>  include/uapi/linux/virtio_iommu.h | 101 ++++
>>  6 files changed, 1059 insertions(+)
>>  create mode 100644 drivers/iommu/virtio-iommu.c
>>  create mode 100644 include/uapi/linux/virtio_iommu.h
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index 48a65c3a4189..f02fa65f47e2 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -15599,6 +15599,13 @@ S:	Maintained
>>  F:	drivers/virtio/virtio_input.c
>>  F:	include/uapi/linux/virtio_input.h
>>  
>> +VIRTIO IOMMU DRIVER
>> +M:	Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> +L:	virtualization@lists.linux-foundation.org
>> +S:	Maintained
>> +F:	drivers/iommu/virtio-iommu.c
>> +F:	include/uapi/linux/virtio_iommu.h
>> +
>>  VIRTUAL BOX GUEST DEVICE DRIVER
>>  M:	Hans de Goede <hdegoede@redhat.com>
>>  M:	Arnd Bergmann <arnd@arndb.de>
>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index c60395b7470f..2dc016dc2b92 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -414,4 +414,15 @@ config QCOM_IOMMU
>>  	help
>>  	  Support for IOMMU on certain Qualcomm SoCs.
>>  
>> +config VIRTIO_IOMMU
>> +	bool "Virtio IOMMU driver"
>> +	depends on VIRTIO=y
>> +	select IOMMU_API
>> +	select INTERVAL_TREE
>> +	select ARM_DMA_USE_IOMMU if ARM
>> +	help
>> +	  Para-virtualised IOMMU driver with virtio.
>> +
>> +	  Say Y here if you intend to run this kernel as a guest.
>> +
>>  endif # IOMMU_SUPPORT
>> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
>> index ab5eba6edf82..4cd643408e49 100644
>> --- a/drivers/iommu/Makefile
>> +++ b/drivers/iommu/Makefile
>> @@ -31,3 +31,4 @@ obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
>>  obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o
>>  obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
>> +obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> new file mode 100644
>> index 000000000000..9fb38cd3b727
>> --- /dev/null
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -0,0 +1,938 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Virtio driver for the paravirtualized IOMMU
>> + *
>> + * Copyright (C) 2018 Arm Limited
>> + */
>> +
>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>> +
>> +#include <linux/amba/bus.h>
>> +#include <linux/delay.h>
>> +#include <linux/dma-iommu.h>
>> +#include <linux/freezer.h>
>> +#include <linux/interval_tree.h>
>> +#include <linux/iommu.h>
>> +#include <linux/module.h>
>> +#include <linux/of_iommu.h>
>> +#include <linux/of_platform.h>
>> +#include <linux/pci.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/virtio.h>
>> +#include <linux/virtio_config.h>
>> +#include <linux/virtio_ids.h>
>> +#include <linux/wait.h>
>> +
>> +#include <uapi/linux/virtio_iommu.h>
>> +
>> +#define MSI_IOVA_BASE			0x8000000
>> +#define MSI_IOVA_LENGTH			0x100000
>> +
>> +#define VIOMMU_REQUEST_VQ		0
>> +#define VIOMMU_NR_VQS			1
>> +
>> +/*
>> + * During development, it is convenient to time out rather than wait
>> + * indefinitely in atomic context when a device misbehaves and a request doesn't
>> + * return. In production however, some requests shouldn't return until they are
>> + * successful.
>> + */
>> +#ifdef DEBUG
>> +#define VIOMMU_REQUEST_TIMEOUT		10000 /* 10s */
>> +#endif
>> +
>> +struct viommu_dev {
>> +	struct iommu_device		iommu;
>> +	struct device			*dev;
>> +	struct virtio_device		*vdev;
>> +
>> +	struct ida			domain_ids;
>> +
>> +	struct virtqueue		*vqs[VIOMMU_NR_VQS];
>> +	spinlock_t			request_lock;
>> +	struct list_head		requests;
>> +
>> +	/* Device configuration */
>> +	struct iommu_domain_geometry	geometry;
>> +	u64				pgsize_bitmap;
>> +	u8				domain_bits;
>> +};
>> +
>> +struct viommu_mapping {
>> +	phys_addr_t			paddr;
>> +	struct interval_tree_node	iova;
>> +	u32				flags;
>> +};
>> +
>> +struct viommu_domain {
>> +	struct iommu_domain		domain;
>> +	struct viommu_dev		*viommu;
>> +	struct mutex			mutex;
>> +	unsigned int			id;
>> +
>> +	spinlock_t			mappings_lock;
>> +	struct rb_root_cached		mappings;
>> +
>> +	unsigned long			nr_endpoints;
>> +};
>> +
>> +struct viommu_endpoint {
>> +	struct viommu_dev		*viommu;
>> +	struct viommu_domain		*vdomain;
>> +};
>> +
>> +struct viommu_request {
>> +	struct list_head		list;
>> +	void				*writeback;
>> +	unsigned int			write_offset;
>> +	unsigned int			len;
>> +	char				buf[];
>> +};
>> +
>> +#define to_viommu_domain(domain)	\
>> +	container_of(domain, struct viommu_domain, domain)
>> +
>> +static int viommu_get_req_errno(void *buf, size_t len)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	switch (tail->status) {
>> +	case VIRTIO_IOMMU_S_OK:
>> +		return 0;
>> +	case VIRTIO_IOMMU_S_UNSUPP:
>> +		return -ENOSYS;
>> +	case VIRTIO_IOMMU_S_INVAL:
>> +		return -EINVAL;
>> +	case VIRTIO_IOMMU_S_RANGE:
>> +		return -ERANGE;
>> +	case VIRTIO_IOMMU_S_NOENT:
>> +		return -ENOENT;
>> +	case VIRTIO_IOMMU_S_FAULT:
>> +		return -EFAULT;
>> +	case VIRTIO_IOMMU_S_IOERR:
>> +	case VIRTIO_IOMMU_S_DEVERR:
>> +	default:
>> +		return -EIO;
>> +	}
>> +}
>> +
>> +static void viommu_set_req_status(void *buf, size_t len, int status)
>> +{
>> +	struct virtio_iommu_req_tail *tail = buf + len - sizeof(*tail);
>> +
>> +	tail->status = status;
>> +}
>> +
>> +static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>> +				   struct virtio_iommu_req_head *req,
>> +				   size_t len)
>> +{
>> +	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>> +
>> +	return len - tail_size;
>> +}
>> +
>> +/*
>> + * __viommu_sync_req - Complete all in-flight requests
>> + *
>> + * Wait for all added requests to complete. When this function returns, all
>> + * requests that were in-flight at the time of the call have completed.
>> + */
>> +static int __viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret = 0;
>> +	unsigned int len;
>> +	size_t write_len;
>> +	ktime_t timeout = 0;
>> +	struct viommu_request *req;
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +#ifdef DEBUG
>> +	timeout = ktime_add_ms(ktime_get(), VIOMMU_REQUEST_TIMEOUT);
>> +#endif
>> +	virtqueue_kick(vq);
>> +
>> +	while (!list_empty(&viommu->requests)) {
>> +		len = 0;
>> +		req = virtqueue_get_buf(vq, &len);
>> +		if (req == NULL) {
>> +			if (!timeout || ktime_before(ktime_get(), timeout))
>> +				continue;
>> +
>> +			/* After timeout, remove all requests */
>> +			req = list_first_entry(&viommu->requests,
>> +					       struct viommu_request, list);
>> +			ret = -ETIMEDOUT;
>> +		}
>> +
>> +		if (!len)
>> +			viommu_set_req_status(req->buf, req->len,
>> +					      VIRTIO_IOMMU_S_IOERR);
>> +
>> +		write_len = req->len - req->write_offset;
>> +		if (req->writeback && len >= write_len)
>> +			memcpy(req->writeback, req->buf + req->write_offset,
>> +			       write_len);
>> +
>> +		list_del(&req->list);
>> +		kfree(req);
> 
> So with DEBUG set, this will actually free memory that device still
> DMA's into. Hardly pretty. I think you want to mark device broken,
> queue the request and then wait for device to be reset.
> 
> 
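A rough sketch of that suggestion, applied to the timeout branch above
(untested; virtio_break_device() is the existing core helper, and freeing
the still-queued requests would then be left to the reset path):

		if (req == NULL) {
			if (!timeout || ktime_before(ktime_get(), timeout))
				continue;

			/*
			 * Don't free buffers the device may still DMA into:
			 * mark the device broken and leave the requests
			 * queued until the device gets reset.
			 */
			virtio_break_device(viommu->vdev);
			return -ETIMEDOUT;
		}
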
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int viommu_sync_req(struct viommu_dev *viommu)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * __viommu_add_request - Add one request to the queue
>> + * @buf: pointer to the request buffer
>> + * @len: length of the request buffer
>> + * @writeback: copy data back to the buffer when the request completes.
>> + *
>> + * Add a request to the queue. Only synchronize the queue if it's already full.
>> + * Otherwise don't kick the queue nor wait for requests to complete.
>> + *
>> + * When @writeback is true, data written by the device, including the request
>> + * status, is copied into @buf after the request completes. This is unsafe if
>> + * the caller allocates @buf on stack and drops the lock between add_req() and
>> + * sync_req().
>> + *
>> + * Return 0 if the request was successfully added to the queue.
>> + */
>> +static int __viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len,
>> +			    bool writeback)
>> +{
>> +	int ret;
>> +	off_t write_offset;
>> +	struct viommu_request *req;
>> +	struct scatterlist top_sg, bottom_sg;
>> +	struct scatterlist *sg[2] = { &top_sg, &bottom_sg };
>> +	struct virtqueue *vq = viommu->vqs[VIOMMU_REQUEST_VQ];
>> +
>> +	assert_spin_locked(&viommu->request_lock);
>> +
>> +	write_offset = viommu_get_req_offset(viommu, buf, len);
>> +	if (!write_offset)
>> +		return -EINVAL;
>> +
>> +	req = kzalloc(sizeof(*req) + len, GFP_ATOMIC);
>> +	if (!req)
>> +		return -ENOMEM;
>> +
>> +	req->len = len;
>> +	if (writeback) {
>> +		req->writeback = buf + write_offset;
>> +		req->write_offset = write_offset;
>> +	}
>> +	memcpy(&req->buf, buf, write_offset);
>> +
>> +	sg_init_one(&top_sg, req->buf, write_offset);
>> +	sg_init_one(&bottom_sg, req->buf + write_offset, len - write_offset);
>> +
>> +	ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	if (ret == -ENOSPC) {
>> +		/* If the queue is full, sync and retry */
>> +		if (!__viommu_sync_req(viommu))
>> +			ret = virtqueue_add_sgs(vq, sg, 1, 1, req, GFP_ATOMIC);
>> +	}
>> +	if (ret)
>> +		goto err_free;
>> +
>> +	list_add_tail(&req->list, &viommu->requests);
>> +	return 0;
>> +
>> +err_free:
>> +	kfree(req);
>> +	return ret;
>> +}
>> +
>> +static int viommu_add_req(struct viommu_dev *viommu, void *buf, size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +	ret = __viommu_add_req(viommu, buf, len, false);
>> +	if (ret)
>> +		dev_dbg(viommu->dev, "could not add request: %d\n", ret);
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/*
>> + * Send a request and wait for it to complete. Return the request status (as an
>> + * errno)
>> + */
>> +static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
>> +				size_t len)
>> +{
>> +	int ret;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&viommu->request_lock, flags);
>> +
>> +	ret = __viommu_add_req(viommu, buf, len, true);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not add request (%d)\n", ret);
>> +		goto out_unlock;
>> +	}
>> +
>> +	ret = __viommu_sync_req(viommu);
>> +	if (ret) {
>> +		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
>> +		/* Fall-through (get the actual request status) */
>> +	}
>> +
>> +	ret = viommu_get_req_errno(buf, len);
>> +out_unlock:
>> +	spin_unlock_irqrestore(&viommu->request_lock, flags);
>> +	return ret;
>> +}
>> +
>> +/*
>> + * viommu_add_mapping - add a mapping to the internal tree
>> + *
>> + * On success, return the new mapping. Otherwise return NULL.
>> + */
>> +static struct viommu_mapping *
>> +viommu_add_mapping(struct viommu_domain *vdomain, unsigned long iova,
>> +		   phys_addr_t paddr, size_t size, u32 flags)
>> +{
>> +	unsigned long irqflags;
>> +	struct viommu_mapping *mapping;
>> +
>> +	mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC);
>> +	if (!mapping)
>> +		return NULL;
>> +
>> +	mapping->paddr		= paddr;
>> +	mapping->iova.start	= iova;
>> +	mapping->iova.last	= iova + size - 1;
>> +	mapping->flags		= flags;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, irqflags);
>> +	interval_tree_insert(&mapping->iova, &vdomain->mappings);
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, irqflags);
>> +
>> +	return mapping;
>> +}
>> +
>> +/*
>> + * viommu_del_mappings - remove mappings from the internal tree
>> + *
>> + * @vdomain: the domain
>> + * @iova: start of the range
>> + * @size: size of the range. A size of 0 corresponds to the entire address
>> + *	space.
>> + *
>> + * On success, returns the number of unmapped bytes (>= size)
>> + */
>> +static size_t viommu_del_mappings(struct viommu_domain *vdomain,
>> +				  unsigned long iova, size_t size)
>> +{
>> +	size_t unmapped = 0;
>> +	unsigned long flags;
>> +	unsigned long last = iova + size - 1;
>> +	struct viommu_mapping *mapping = NULL;
>> +	struct interval_tree_node *node, *next;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	next = interval_tree_iter_first(&vdomain->mappings, iova, last);
>> +	while (next) {
>> +		node = next;
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		next = interval_tree_iter_next(node, iova, last);
>> +
>> +		/* Trying to split a mapping? */
>> +		if (mapping->iova.start < iova)
>> +			break;
>> +
>> +		/*
>> +		 * Note that for a partial range, this will return the full
>> +		 * mapping so we avoid sending split requests to the device.
>> +		 */
>> +		unmapped += mapping->iova.last - mapping->iova.start + 1;
>> +
>> +		interval_tree_remove(node, &vdomain->mappings);
>> +		kfree(mapping);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return unmapped;
>> +}
>> +
>> +/*
>> + * viommu_replay_mappings - re-send MAP requests
>> + *
>> + * When reattaching a domain that was previously detached from all endpoints,
>> + * mappings were deleted from the device. Re-create the mappings available in
>> + * the internal tree.
>> + */
>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>> +{
>> +	int ret;
ret needs to be initialized here. Otherwise this can lead to a crash in
viommu_add_device.
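
i.e. something like (minimal sketch of the fix):

-	int ret;
+	int ret = 0;	/* may be returned unchanged when the tree is empty */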

Thanks

Eric
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct virtio_iommu_req_map map;
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, 0, -1UL);
>> +	while (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		map = (struct virtio_iommu_req_map) {
>> +			.head.type	= VIRTIO_IOMMU_T_MAP,
>> +			.domain		= cpu_to_le32(vdomain->id),
>> +			.virt_start	= cpu_to_le64(mapping->iova.start),
>> +			.virt_end	= cpu_to_le64(mapping->iova.last),
>> +			.phys_start	= cpu_to_le64(mapping->paddr),
>> +			.flags		= cpu_to_le32(mapping->flags),
>> +		};
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +		if (ret)
>> +			break;
>> +
>> +		node = interval_tree_iter_next(node, 0, -1UL);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return ret;
>> +}
>> +
>> +/* IOMMU API */
>> +
>> +static struct iommu_domain *viommu_domain_alloc(unsigned type)
>> +{
>> +	struct viommu_domain *vdomain;
>> +
>> +	if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
>> +		return NULL;
>> +
>> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
>> +	if (!vdomain)
>> +		return NULL;
>> +
>> +	mutex_init(&vdomain->mutex);
>> +	spin_lock_init(&vdomain->mappings_lock);
>> +	vdomain->mappings = RB_ROOT_CACHED;
>> +
>> +	if (type == IOMMU_DOMAIN_DMA &&
>> +	    iommu_get_dma_cookie(&vdomain->domain)) {
>> +		kfree(vdomain);
>> +		return NULL;
>> +	}
>> +
>> +	return &vdomain->domain;
>> +}
>> +
>> +static int viommu_domain_finalise(struct viommu_dev *viommu,
>> +				  struct iommu_domain *domain)
>> +{
>> +	int ret;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +	unsigned int max_domain = viommu->domain_bits > 31 ? ~0 :
>> +				  (1U << viommu->domain_bits) - 1;
>> +
>> +	vdomain->viommu		= viommu;
>> +
>> +	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
>> +	domain->geometry	= viommu->geometry;
>> +
>> +	ret = ida_alloc_max(&viommu->domain_ids, max_domain, GFP_KERNEL);
>> +	if (ret >= 0)
>> +		vdomain->id = (unsigned int)ret;
>> +
>> +	return ret > 0 ? 0 : ret;
>> +}
>> +
>> +static void viommu_domain_free(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	iommu_put_dma_cookie(domain);
>> +
>> +	/* Free all remaining mappings (size 2^64) */
>> +	viommu_del_mappings(vdomain, 0, 0);
>> +
>> +	if (vdomain->viommu)
>> +		ida_free(&vdomain->viommu->domain_ids, vdomain->id);
>> +
>> +	kfree(vdomain);
>> +}
>> +
>> +static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>> +{
>> +	int i;
>> +	int ret = 0;
>> +	struct virtio_iommu_req_attach req;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	mutex_lock(&vdomain->mutex);
>> +	if (!vdomain->viommu) {
>> +		/*
>> +		 * Initialize the domain proper now that we know which viommu
>> +		 * owns it.
>> +		 */
>> +		ret = viommu_domain_finalise(vdev->viommu, domain);
>> +	} else if (vdomain->viommu != vdev->viommu) {
>> +		dev_err(dev, "cannot attach to foreign vIOMMU\n");
>> +		ret = -EXDEV;
>> +	}
>> +	mutex_unlock(&vdomain->mutex);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	/*
>> +	 * In the virtio-iommu device, when attaching the endpoint to a new
>> +	 * domain, it is detached from the old one and, if as a result the
>> +	 * old domain isn't attached to any endpoint, all mappings are removed
>> +	 * from the old domain and it is freed.
>> +	 *
>> +	 * In the driver the old domain still exists, and its mappings will be
>> +	 * recreated if it gets reattached to an endpoint. Otherwise it will be
>> +	 * freed explicitly.
>> +	 *
>> +	 * vdev->vdomain is protected by group->mutex
>> +	 */
>> +	if (vdev->vdomain)
>> +		vdev->vdomain->nr_endpoints--;
>> +
>> +	req = (struct virtio_iommu_req_attach) {
>> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +	};
>> +
>> +	for (i = 0; i < fwspec->num_ids; i++) {
>> +		req.endpoint = cpu_to_le32(fwspec->ids[i]);
>> +
>> +		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	if (!vdomain->nr_endpoints) {
>> +		/*
>> +		 * This endpoint is the first to be attached to the domain.
>> +		 * Replay existing mappings (e.g. SW MSI).
>> +		 */
>> +		ret = viommu_replay_mappings(vdomain);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	vdomain->nr_endpoints++;
>> +	vdev->vdomain = vdomain;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_map(struct iommu_domain *domain, unsigned long iova,
>> +		      phys_addr_t paddr, size_t size, int prot)
>> +{
>> +	int ret;
>> +	int flags;
>> +	struct viommu_mapping *mapping;
>> +	struct virtio_iommu_req_map map;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	flags = (prot & IOMMU_READ ? VIRTIO_IOMMU_MAP_F_READ : 0) |
>> +		(prot & IOMMU_WRITE ? VIRTIO_IOMMU_MAP_F_WRITE : 0) |
>> +		(prot & IOMMU_MMIO ? VIRTIO_IOMMU_MAP_F_MMIO : 0);
>> +
>> +	mapping = viommu_add_mapping(vdomain, iova, paddr, size, flags);
>> +	if (!mapping)
>> +		return -ENOMEM;
>> +
>> +	map = (struct virtio_iommu_req_map) {
>> +		.head.type	= VIRTIO_IOMMU_T_MAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.phys_start	= cpu_to_le64(paddr),
>> +		.virt_end	= cpu_to_le64(iova + size - 1),
>> +		.flags		= cpu_to_le32(flags),
>> +	};
>> +
>> +	if (!vdomain->nr_endpoints)
>> +		return 0;
>> +
>> +	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
>> +	if (ret)
>> +		viommu_del_mappings(vdomain, iova, size);
>> +
>> +	return ret;
>> +}
>> +
>> +static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
>> +			   size_t size)
>> +{
>> +	int ret = 0;
>> +	size_t unmapped;
>> +	struct virtio_iommu_req_unmap unmap;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	unmapped = viommu_del_mappings(vdomain, iova, size);
>> +	if (unmapped < size)
>> +		return 0;
>> +
>> +	/* Device already removed all mappings after detach. */
>> +	if (!vdomain->nr_endpoints)
>> +		return unmapped;
>> +
>> +	unmap = (struct virtio_iommu_req_unmap) {
>> +		.head.type	= VIRTIO_IOMMU_T_UNMAP,
>> +		.domain		= cpu_to_le32(vdomain->id),
>> +		.virt_start	= cpu_to_le64(iova),
>> +		.virt_end	= cpu_to_le64(iova + unmapped - 1),
>> +	};
>> +
>> +	ret = viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap));
>> +	return ret ? 0 : unmapped;
>> +}
>> +
>> +static phys_addr_t viommu_iova_to_phys(struct iommu_domain *domain,
>> +				       dma_addr_t iova)
>> +{
>> +	u64 paddr = 0;
>> +	unsigned long flags;
>> +	struct viommu_mapping *mapping;
>> +	struct interval_tree_node *node;
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	spin_lock_irqsave(&vdomain->mappings_lock, flags);
>> +	node = interval_tree_iter_first(&vdomain->mappings, iova, iova);
>> +	if (node) {
>> +		mapping = container_of(node, struct viommu_mapping, iova);
>> +		paddr = mapping->paddr + (iova - mapping->iova.start);
>> +	}
>> +	spin_unlock_irqrestore(&vdomain->mappings_lock, flags);
>> +
>> +	return paddr;
>> +}
>> +
>> +static void viommu_iotlb_sync(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	viommu_sync_req(vdomain->viommu);
>> +}
>> +
>> +static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *region;
>> +	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
>> +	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
>> +					 IOMMU_RESV_SW_MSI);
>> +	if (!region)
>> +		return;
>> +
>> +	list_add_tail(&region->list, head);
>> +	iommu_dma_get_resv_regions(dev, head);
>> +}
>> +
>> +static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
>> +{
>> +	struct iommu_resv_region *entry, *next;
>> +
>> +	list_for_each_entry_safe(entry, next, head, list)
>> +		kfree(entry);
>> +}
>> +
>> +static struct iommu_ops viommu_ops;
>> +static struct virtio_driver virtio_iommu_drv;
>> +
>> +static int viommu_match_node(struct device *dev, void *data)
>> +{
>> +	return dev->parent->fwnode == data;
>> +}
>> +
>> +static struct viommu_dev *viommu_get_by_fwnode(struct fwnode_handle *fwnode)
>> +{
>> +	struct device *dev = driver_find_device(&virtio_iommu_drv.driver, NULL,
>> +						fwnode, viommu_match_node);
>> +	put_device(dev);
>> +
>> +	return dev ? dev_to_virtio(dev)->priv : NULL;
>> +}
>> +
>> +static int viommu_add_device(struct device *dev)
>> +{
>> +	int ret;
>> +	struct iommu_group *group;
>> +	struct viommu_endpoint *vdev;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return -ENODEV;
>> +
>> +	viommu = viommu_get_by_fwnode(fwspec->iommu_fwnode);
>> +	if (!viommu)
>> +		return -ENODEV;
>> +
>> +	vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
>> +	if (!vdev)
>> +		return -ENOMEM;
>> +
>> +	vdev->viommu = viommu;
>> +	fwspec->iommu_priv = vdev;
>> +
>> +	ret = iommu_device_link(&viommu->iommu, dev);
>> +	if (ret)
>> +		goto err_free_dev;
>> +
>> +	/*
>> +	 * Last step creates a default domain and attaches to it. Everything
>> +	 * must be ready.
>> +	 */
>> +	group = iommu_group_get_for_dev(dev);
>> +	if (IS_ERR(group)) {
>> +		ret = PTR_ERR(group);
>> +		goto err_unlink_dev;
>> +	}
>> +
>> +	iommu_group_put(group);
>> +
>> +	return PTR_ERR_OR_ZERO(group);
>> +
>> +err_unlink_dev:
>> +	iommu_device_unlink(&viommu->iommu, dev);
>> +
>> +err_free_dev:
>> +	kfree(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove_device(struct device *dev)
>> +{
>> +	struct viommu_endpoint *vdev;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +
>> +	if (!fwspec || fwspec->ops != &viommu_ops)
>> +		return;
>> +
>> +	vdev = fwspec->iommu_priv;
>> +
>> +	iommu_group_remove_device(dev);
>> +	iommu_device_unlink(&vdev->viommu->iommu, dev);
>> +	kfree(vdev);
>> +}
>> +
>> +static struct iommu_group *viommu_device_group(struct device *dev)
>> +{
>> +	if (dev_is_pci(dev))
>> +		return pci_device_group(dev);
>> +	else
>> +		return generic_device_group(dev);
>> +}
>> +
>> +static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>> +{
>> +	return iommu_fwspec_add_ids(dev, args->args, 1);
>> +}
>> +
>> +static struct iommu_ops viommu_ops = {
>> +	.domain_alloc		= viommu_domain_alloc,
>> +	.domain_free		= viommu_domain_free,
>> +	.attach_dev		= viommu_attach_dev,
>> +	.map			= viommu_map,
>> +	.unmap			= viommu_unmap,
>> +	.iova_to_phys		= viommu_iova_to_phys,
>> +	.iotlb_sync		= viommu_iotlb_sync,
>> +	.add_device		= viommu_add_device,
>> +	.remove_device		= viommu_remove_device,
>> +	.device_group		= viommu_device_group,
>> +	.get_resv_regions	= viommu_get_resv_regions,
>> +	.put_resv_regions	= viommu_put_resv_regions,
>> +	.of_xlate		= viommu_of_xlate,
>> +};
>> +
>> +static int viommu_init_vqs(struct viommu_dev *viommu)
>> +{
>> +	struct virtio_device *vdev = dev_to_virtio(viommu->dev);
>> +	const char *name = "request";
>> +	void *ret;
>> +
>> +	ret = virtio_find_single_vq(vdev, NULL, name);
>> +	if (IS_ERR(ret)) {
>> +		dev_err(viommu->dev, "cannot find VQ\n");
>> +		return PTR_ERR(ret);
>> +	}
>> +
>> +	viommu->vqs[VIOMMU_REQUEST_VQ] = ret;
>> +
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe(struct virtio_device *vdev)
>> +{
>> +	struct device *parent_dev = vdev->dev.parent;
>> +	struct viommu_dev *viommu = NULL;
>> +	struct device *dev = &vdev->dev;
>> +	u64 input_start = 0;
>> +	u64 input_end = -1UL;
>> +	int ret;
>> +
>> +	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
>> +		return -ENODEV;
> 
> I'm a bit confused about what will happen if this device
> happens to be behind an iommu itself.
> 
> If we can't handle that, should we clear PLATFORM_IOMMU
> e.g. like the balloon does?
> 
> 
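For reference, a sketch of the balloon-style approach (this assumes a new
viommu_validate() callback wired into virtio_iommu_drv below; it is not
part of this patch):

static int viommu_validate(struct virtio_device *vdev)
{
	/*
	 * virtio-iommu cannot itself be translated by another IOMMU, so
	 * force the transport to use direct physical addresses for its own
	 * virtqueues, the same way virtio-balloon does.
	 */
	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
	return 0;
}
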
>> +
>> +	viommu = devm_kzalloc(dev, sizeof(*viommu), GFP_KERNEL);
>> +	if (!viommu)
>> +		return -ENOMEM;
>> +
>> +	spin_lock_init(&viommu->request_lock);
>> +	ida_init(&viommu->domain_ids);
>> +	viommu->dev = dev;
>> +	viommu->vdev = vdev;
>> +	INIT_LIST_HEAD(&viommu->requests);
>> +
>> +	ret = viommu_init_vqs(viommu);
>> +	if (ret)
>> +		return ret;
>> +
>> +	virtio_cread(vdev, struct virtio_iommu_config, page_size_mask,
>> +		     &viommu->pgsize_bitmap);
>> +
>> +	if (!viommu->pgsize_bitmap) {
>> +		ret = -EINVAL;
>> +		goto err_free_vqs;
>> +	}
>> +
>> +	viommu->domain_bits = 32;
>> +
>> +	/* Optional features */
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.start,
>> +			     &input_start);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_INPUT_RANGE,
>> +			     struct virtio_iommu_config, input_range.end,
>> +			     &input_end);
>> +
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +			     struct virtio_iommu_config, domain_bits,
>> +			     &viommu->domain_bits);
>> +
>> +	viommu->geometry = (struct iommu_domain_geometry) {
>> +		.aperture_start	= input_start,
>> +		.aperture_end	= input_end,
>> +		.force_aperture	= true,
>> +	};
>> +
>> +	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
>> +
>> +	virtio_device_ready(vdev);
>> +
>> +	ret = iommu_device_sysfs_add(&viommu->iommu, dev, NULL, "%s",
>> +				     virtio_bus_name(vdev));
>> +	if (ret)
>> +		goto err_free_vqs;
>> +
>> +	iommu_device_set_ops(&viommu->iommu, &viommu_ops);
>> +	iommu_device_set_fwnode(&viommu->iommu, parent_dev->fwnode);
>> +
>> +	iommu_device_register(&viommu->iommu);
>> +
>> +#ifdef CONFIG_PCI
>> +	if (pci_bus_type.iommu_ops != &viommu_ops) {
>> +		pci_request_acs();
>> +		ret = bus_set_iommu(&pci_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +#ifdef CONFIG_ARM_AMBA
>> +	if (amba_bustype.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&amba_bustype, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +#endif
>> +	if (platform_bus_type.iommu_ops != &viommu_ops) {
>> +		ret = bus_set_iommu(&platform_bus_type, &viommu_ops);
>> +		if (ret)
>> +			goto err_unregister;
>> +	}
>> +
>> +	vdev->priv = viommu;
>> +
>> +	dev_info(dev, "input address: %u bits\n",
>> +		 order_base_2(viommu->geometry.aperture_end));
>> +	dev_info(dev, "page mask: %#llx\n", viommu->pgsize_bitmap);
>> +
>> +	return 0;
>> +
>> +err_unregister:
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +err_free_vqs:
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	return ret;
>> +}
>> +
>> +static void viommu_remove(struct virtio_device *vdev)
>> +{
>> +	struct viommu_dev *viommu = vdev->priv;
>> +
>> +	iommu_device_sysfs_remove(&viommu->iommu);
>> +	iommu_device_unregister(&viommu->iommu);
>> +
>> +	/* Stop all virtqueues */
>> +	vdev->config->reset(vdev);
>> +	vdev->config->del_vqs(vdev);
>> +
>> +	dev_info(&vdev->dev, "device removed\n");
>> +}
>> +
>> +static void viommu_config_changed(struct virtio_device *vdev)
>> +{
>> +	dev_warn(&vdev->dev, "config changed\n");
>> +}
>> +
>> +static unsigned int features[] = {
>> +	VIRTIO_IOMMU_F_MAP_UNMAP,
>> +	VIRTIO_IOMMU_F_DOMAIN_BITS,
>> +	VIRTIO_IOMMU_F_INPUT_RANGE,
>> +};
>> +
>> +static struct virtio_device_id id_table[] = {
>> +	{ VIRTIO_ID_IOMMU, VIRTIO_DEV_ANY_ID },
>> +	{ 0 },
>> +};
>> +
>> +static struct virtio_driver virtio_iommu_drv = {
>> +	.driver.name		= KBUILD_MODNAME,
>> +	.driver.owner		= THIS_MODULE,
>> +	.id_table		= id_table,
>> +	.feature_table		= features,
>> +	.feature_table_size	= ARRAY_SIZE(features),
>> +	.probe			= viommu_probe,
>> +	.remove			= viommu_remove,
>> +	.config_changed		= viommu_config_changed,
>> +};
>> +
>> +module_virtio_driver(virtio_iommu_drv);
>> +
>> +MODULE_DESCRIPTION("Virtio IOMMU driver");
>> +MODULE_AUTHOR("Jean-Philippe Brucker <jean-philippe.brucker@arm.com>");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
>> index 6d5c3b2d4f4d..cfe47c5d9a56 100644
>> --- a/include/uapi/linux/virtio_ids.h
>> +++ b/include/uapi/linux/virtio_ids.h
>> @@ -43,5 +43,6 @@
>>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
>> +#define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
>>  
>>  #endif /* _LINUX_VIRTIO_IDS_H */
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> new file mode 100644
>> index 000000000000..e808fc7fbe82
>> --- /dev/null
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -0,0 +1,101 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause */
>> +/*
>> + * Virtio-iommu definition v0.8
>> + *
>> + * Copyright (C) 2018 Arm Ltd.
>> + */
>> +#ifndef _UAPI_LINUX_VIRTIO_IOMMU_H
>> +#define _UAPI_LINUX_VIRTIO_IOMMU_H
>> +
>> +#include <linux/types.h>
>> +
>> +/* Feature bits */
>> +#define VIRTIO_IOMMU_F_INPUT_RANGE		0
>> +#define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>> +#define VIRTIO_IOMMU_F_MAP_UNMAP		2
>> +#define VIRTIO_IOMMU_F_BYPASS			3
>> +
>> +struct virtio_iommu_config {
>> +	/* Supported page sizes */
>> +	__u64					page_size_mask;
>> +	/* Supported IOVA range */
>> +	struct virtio_iommu_range {
> 
> I'd rather we moved the definition outside even though gcc allows it -
> some old userspace compilers might not.
> 
>> +		__u64				start;
>> +		__u64				end;
>> +	} input_range;
>> +	/* Max domain ID size */
>> +	__u8					domain_bits;
> 
> Let's add explicit padding here as well?
> 
>> +};
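
Sketch with both suggestions above applied (the exact amount of padding is
a guess and would have to match the spec update):

struct virtio_iommu_range {
	__u64					start;
	__u64					end;
};

struct virtio_iommu_config {
	/* Supported page sizes */
	__u64					page_size_mask;
	/* Supported IOVA range */
	struct virtio_iommu_range		input_range;
	/* Max domain ID size */
	__u8					domain_bits;
	__u8					padding[3];
};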
>> +
>> +/* Request types */
>> +#define VIRTIO_IOMMU_T_ATTACH			0x01
>> +#define VIRTIO_IOMMU_T_DETACH			0x02
>> +#define VIRTIO_IOMMU_T_MAP			0x03
>> +#define VIRTIO_IOMMU_T_UNMAP			0x04
>> +
>> +/* Status types */
>> +#define VIRTIO_IOMMU_S_OK			0x00
>> +#define VIRTIO_IOMMU_S_IOERR			0x01
>> +#define VIRTIO_IOMMU_S_UNSUPP			0x02
>> +#define VIRTIO_IOMMU_S_DEVERR			0x03
>> +#define VIRTIO_IOMMU_S_INVAL			0x04
>> +#define VIRTIO_IOMMU_S_RANGE			0x05
>> +#define VIRTIO_IOMMU_S_NOENT			0x06
>> +#define VIRTIO_IOMMU_S_FAULT			0x07
>> +
>> +struct virtio_iommu_req_head {
>> +	__u8					type;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_tail {
>> +	__u8					status;
>> +	__u8					reserved[3];
>> +};
>> +
>> +struct virtio_iommu_req_attach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_detach {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le32					endpoint;
>> +	__u8					reserved[8];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#define VIRTIO_IOMMU_MAP_F_READ			(1 << 0)
>> +#define VIRTIO_IOMMU_MAP_F_WRITE		(1 << 1)
>> +#define VIRTIO_IOMMU_MAP_F_EXEC			(1 << 2)
>> +#define VIRTIO_IOMMU_MAP_F_MMIO			(1 << 3)
>> +
>> +#define VIRTIO_IOMMU_MAP_F_MASK			(VIRTIO_IOMMU_MAP_F_READ |	\
>> +						 VIRTIO_IOMMU_MAP_F_WRITE |	\
>> +						 VIRTIO_IOMMU_MAP_F_EXEC |	\
>> +						 VIRTIO_IOMMU_MAP_F_MMIO)
>> +
>> +struct virtio_iommu_req_map {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__le64					phys_start;
>> +	__le32					flags;
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +struct virtio_iommu_req_unmap {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					domain;
>> +	__le64					virt_start;
>> +	__le64					virt_end;
>> +	__u8					reserved[4];
>> +	struct virtio_iommu_req_tail		tail;
>> +};
>> +
>> +#endif
>> -- 
>> 2.19.1

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-11-08 14:48       ` Auger Eric
@ 2018-11-08 16:46           ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	devicetree-u79uwXL29TY76Z2rM5mHXA
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, robin.murphy-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg

On 08/11/2018 14:48, Auger Eric wrote:
>> +struct virtio_iommu_probe_property {
>> +	__le16					type;
>> +	__le16					length;
> the value[] field has disappeared but still is documented in the v0.8 spec.

Good catch. I removed value[] when reworking the
virtio_iommu_probe_resv_mem definition, because embedding a struct with
a flexible array into another violates the C99 standard, even though GCC
accepts it.

I'll remove it from the spec as well, but I probably won't publish a new
version for this change alone. The virtio spec itself has similar uses
of flexible arrays, that are given for explanation but aren't valid C
(and those wouldn't even compile).
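
For reference, the v0.8 pattern being dropped looks roughly like this
(fields abridged), which is what C99 (6.7.2.1) forbids:

struct virtio_iommu_probe_property {
	__le16	type;
	__le16	length;
	__u8	value[];	/* flexible array member */
};

struct virtio_iommu_probe_resv_mem {
	struct virtio_iommu_probe_property	head;	/* invalid: FAM struct embedded */
	__u8					subtype;
	__le64					start;
	__le64					end;
};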

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
@ 2018-11-08 16:46           ` Jean-Philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, kevin.tian, tnowicki, mst,
	marc.zyngier, linux-pci, jasowang, will.deacon, kvmarm, robh+dt,
	robin.murphy

On 08/11/2018 14:48, Auger Eric wrote:
>> +struct virtio_iommu_probe_property {
>> +	__le16					type;
>> +	__le16					length;
> the value[] field has disappeared but still is documented in the v0.8 spec.

Good catch. I removed value[] when reworking the
virtio_iommu_probe_resv_mem definition, because embedding a struct with
a flexible array into another violates the C99 standard, even though GCC
accepts it.

I'll remove it from the spec as well, but I probably won't publish a new
version for this change alone. The virtio spec itself has similar uses
of flexible arrays, that are given for explanation but aren't valid C
(and those wouldn't even compile).

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-11-08 14:48       ` Auger Eric
  (?)
  (?)
@ 2018-11-08 16:46       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, tnowicki, mst, marc.zyngier,
	linux-pci, will.deacon, robh+dt, robin.murphy, kvmarm

On 08/11/2018 14:48, Auger Eric wrote:
>> +struct virtio_iommu_probe_property {
>> +	__le16					type;
>> +	__le16					length;
> the value[] field has disappeared but still is documented in the v0.8 spec.

Good catch. I removed value[] when reworking the
virtio_iommu_probe_resv_mem definition, because embedding a struct with
a flexible array into another violates the C99 standard, even though GCC
accepts it.

I'll remove it from the spec as well, but I probably won't publish a new
version for this change alone. The virtio spec itself has similar uses
of flexible arrays, that are given for explanation but aren't valid C
(and those wouldn't even compile).

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-11-08 14:51       ` Auger Eric
@ 2018-11-08 16:46         ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, Michael S. Tsirkin
  Cc: lorenzo.pieralisi, tnowicki, devicetree, marc.zyngier, linux-pci,
	joro, will.deacon, virtualization, iommu, robh+dt, robin.murphy,
	kvmarm

On 08/11/2018 14:51, Auger Eric wrote:
>>> +/*
>>> + * viommu_replay_mappings - re-send MAP requests
>>> + *
>>> + * When reattaching a domain that was previously detached from all endpoints,
>>> + * mappings were deleted from the device. Re-create the mappings available in
>>> + * the internal tree.
>>> + */
>>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>>> +{
>>> +	int ret;
> ret needs to be initialized here. Otherwise this can lead to a crash in
> viommu_add_device.

I actually hit this one while testing the other day :) Fixed in next version

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
@ 2018-11-08 16:46         ` Jean-Philippe Brucker
  0 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, Michael S. Tsirkin
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki,
	devicetree, linux-pci, joro, will.deacon, virtualization,
	marc.zyngier, iommu, robh+dt, robin.murphy, kvmarm

On 08/11/2018 14:51, Auger Eric wrote:
>>> +/*
>>> + * viommu_replay_mappings - re-send MAP requests
>>> + *
>>> + * When reattaching a domain that was previously detached from all endpoints,
>>> + * mappings were deleted from the device. Re-create the mappings available in
>>> + * the internal tree.
>>> + */
>>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>>> +{
>>> +	int ret;
> ret needs to be initialized here. Otherwise this can lead to a crash in
> viommu_add_device.

I actually hit this one while testing the other day :) Fixed in next version

Thanks,
Jean


^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 5/7] iommu: Add virtio-iommu driver
  2018-11-08 14:51       ` Auger Eric
  (?)
  (?)
@ 2018-11-08 16:46       ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-08 16:46 UTC (permalink / raw)
  To: Auger Eric, Michael S. Tsirkin
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki,
	devicetree, marc.zyngier, linux-pci, joro, will.deacon,
	virtualization, iommu, robh+dt, robin.murphy, kvmarm

On 08/11/2018 14:51, Auger Eric wrote:
>>> +/*
>>> + * viommu_replay_mappings - re-send MAP requests
>>> + *
>>> + * When reattaching a domain that was previously detached from all endpoints,
>>> + * mappings were deleted from the device. Re-create the mappings available in
>>> + * the internal tree.
>>> + */
>>> +static int viommu_replay_mappings(struct viommu_domain *vdomain)
>>> +{
>>> +	int ret;
> ret needs to be initialized here. Otherwise this can lead to a crash in
> viommu_add_device.

I actually hit this one while testing the other day :) Fixed in next version

Thanks,
Jean

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-11-15  8:45     ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15  8:45 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, linux-pci, will.deacon, kvmarm, robh+dt,
	robin.murphy, joro

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> The nature of a virtio-mmio node is discovered by the virtio driver at
> probe time. However the DMA relation between devices must be described
> statically. When a virtio-mmio node is a virtio-iommu device, it needs an
> "#iommu-cells" property as specified by bindings/iommu/iommu.txt.
> 
> Otherwise, the virtio-mmio device may perform DMA through an IOMMU, which
> requires an "iommus" property. Describe these requirements in the
> device-tree bindings documentation.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/mmio.txt       | 30 +++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/virtio/mmio.txt b/Documentation/devicetree/bindings/virtio/mmio.txt
> index 5069c1b8e193..748595473b36 100644
> --- a/Documentation/devicetree/bindings/virtio/mmio.txt
> +++ b/Documentation/devicetree/bindings/virtio/mmio.txt
> @@ -8,10 +8,40 @@ Required properties:
>  - reg:		control registers base address and size including configuration space
>  - interrupts:	interrupt generated by the device
>  
> +Required properties for virtio-iommu:
> +
> +- #iommu-cells:	When the node corresponds to a virtio-iommu device, it is
> +		linked to DMA masters using the "iommus" or "iommu-map"
> +		properties [1][2]. #iommu-cells specifies the size of the
> +		"iommus" property. For virtio-iommu #iommu-cells must be
> +		1, each cell describing a single endpoint ID.
> +
> +Optional properties:
> +
> +- iommus:	If the device accesses memory through an IOMMU, it should
> +		have an "iommus" property [1]. Since virtio-iommu itself
> +		does not access memory through an IOMMU, the "virtio,mmio"
> +		node cannot have both an "#iommu-cells" and an "iommus"
> +		property.
> +
>  Example:
>  
>  	virtio_block@3000 {
>  		compatible = "virtio,mmio";
>  		reg = <0x3000 0x100>;
>  		interrupts = <41>;
> +
> +		/* Device has endpoint ID 23 */
> +		iommus = <&viommu 23>
>  	}
> +
> +	viommu: virtio_iommu@3100 {
> +		compatible = "virtio,mmio";
> +		reg = <0x3100 0x100>;
> +		interrupts = <42>;
> +
> +		#iommu-cells = <1>
> +	}
> +
> +[1] Documentation/devicetree/bindings/iommu/iommu.txt
> +[2] Documentation/devicetree/bindings/pci/pci-iommu.txt
> 
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description
@ 2018-11-15  8:45     ` Auger Eric
  0 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15  8:45 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	mark.rutland, tnowicki, kevin.tian, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> The nature of a virtio-mmio node is discovered by the virtio driver at
> probe time. However the DMA relation between devices must be described
> statically. When a virtio-mmio node is a virtio-iommu device, it needs an
> "#iommu-cells" property as specified by bindings/iommu/iommu.txt.
> 
> Otherwise, the virtio-mmio device may perform DMA through an IOMMU, which
> requires an "iommus" property. Describe these requirements in the
> device-tree bindings documentation.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/mmio.txt       | 30 +++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/virtio/mmio.txt b/Documentation/devicetree/bindings/virtio/mmio.txt
> index 5069c1b8e193..748595473b36 100644
> --- a/Documentation/devicetree/bindings/virtio/mmio.txt
> +++ b/Documentation/devicetree/bindings/virtio/mmio.txt
> @@ -8,10 +8,40 @@ Required properties:
>  - reg:		control registers base address and size including configuration space
>  - interrupts:	interrupt generated by the device
>  
> +Required properties for virtio-iommu:
> +
> +- #iommu-cells:	When the node corresponds to a virtio-iommu device, it is
> +		linked to DMA masters using the "iommus" or "iommu-map"
> +		properties [1][2]. #iommu-cells specifies the size of the
> +		"iommus" property. For virtio-iommu #iommu-cells must be
> +		1, each cell describing a single endpoint ID.
> +
> +Optional properties:
> +
> +- iommus:	If the device accesses memory through an IOMMU, it should
> +		have an "iommus" property [1]. Since virtio-iommu itself
> +		does not access memory through an IOMMU, the "virtio,mmio"
> +		node cannot have both an "#iommu-cells" and an "iommus"
> +		property.
> +
>  Example:
>  
>  	virtio_block@3000 {
>  		compatible = "virtio,mmio";
>  		reg = <0x3000 0x100>;
>  		interrupts = <41>;
> +
> +		/* Device has endpoint ID 23 */
> +		iommus = <&viommu 23>
>  	}
> +
> +	viommu: virtio_iommu@3100 {
> +		compatible = "virtio,mmio";
> +		reg = <0x3100 0x100>;
> +		interrupts = <42>;
> +
> +		#iommu-cells = <1>
> +	}
> +
> +[1] Documentation/devicetree/bindings/iommu/iommu.txt
> +[2] Documentation/devicetree/bindings/pci/pci-iommu.txt
> 
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-11-15  8:45     ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15  8:45 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: kevin.tian, lorenzo.pieralisi, tnowicki, mst, marc.zyngier,
	linux-pci, jasowang, will.deacon, kvmarm, robh+dt, robin.murphy,
	joro

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Some systems implement virtio-iommu as a PCI endpoint. The operating
> systems needs to discover the relationship between IOMMU and masters long
s/systems/system
> before the PCI endpoint gets probed. Add a PCI child node to describe the
> virtio-iommu device.
> 
> The virtio-pci-iommu is conceptually split between a PCI programming
> interface and a translation component on the parent bus. The latter
> doesn't have a node in the device tree. The virtio-pci-iommu node
> describes both, by linking the PCI endpoint to "iommus" property of DMA
> master nodes and to "iommu-map" properties of bus nodes.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/iommu.txt      | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
> 
> diff --git a/Documentation/devicetree/bindings/virtio/iommu.txt b/Documentation/devicetree/bindings/virtio/iommu.txt
> new file mode 100644
> index 000000000000..2407fea0651c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/virtio/iommu.txt
> @@ -0,0 +1,66 @@
> +* virtio IOMMU PCI device
> +
> +When virtio-iommu uses the PCI transport, its programming interface is
> +discovered dynamically by the PCI probing infrastructure. However the
> +device tree statically describes the relation between IOMMU and DMA
> +masters. Therefore, the PCI root complex that hosts the virtio-iommu
> +contains a child node representing the IOMMU device explicitly.
> +
> +Required properties:
> +
> +- compatible:	Should be "virtio,pci-iommu"
> +- reg:		PCI address of the IOMMU. As defined in the PCI Bus
> +		Binding reference [1], the reg property is a five-cell
> +		address encoded as (phys.hi phys.mid phys.lo size.hi
> +		size.lo). phys.hi should contain the device's BDF as
> +		0b00000000 bbbbbbbb dddddfff 00000000. The other cells
> +		should be zero.
> +- #iommu-cells:	Each platform DMA master managed by the IOMMU is assigned
> +		an endpoint ID, described by the "iommus" property [2].
> +		For virtio-iommu, #iommu-cells must be 1.
> +
> +Notes:
> +
> +- DMA from the IOMMU device isn't managed by another IOMMU. Therefore the
> +  virtio-iommu node doesn't have an "iommus" property, and is omitted from
> +  the iommu-map property of the root complex.
> +
> +Example:
> +
> +pcie@10000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +
> +	/* The IOMMU programming interface uses slot 00:01.0 */
> +	iommu0: iommu@0008 {
> +		compatible = "virtio,pci-iommu";
> +		reg = <0x00000800 0 0 0 0>;
> +		#iommu-cells = <1>;
> +	};
> +
> +	/*
> +	 * The IOMMU manages all functions in this PCI domain except
> +	 * itself. Omit BDF 00:01.0.
> +	 */
> +	iommu-map = <0x0 &iommu0 0x0 0x8>
> +		    <0x9 &iommu0 0x9 0xfff7>;
> +};
> +
> +pcie@20000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +	/*
> +	 * The IOMMU also manages all functions from this domain,
> +	 * with endpoint IDs 0x10000 - 0x1ffff
> +	 */
> +	iommu-map = <0x0 &iommu0 0x10000 0x10000>;
> +};
> +
> +ethernet@fe001000 {
> +	...
> +	/* The IOMMU manages this platform device with endpoint ID 0x20000 */
> +	iommus = <&iommu0 0x20000>;
> +};
> +
> +[1] Documentation/devicetree/bindings/pci/pci.txt
> +[2] Documentation/devicetree/bindings/iommu/iommu.txt
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node
@ 2018-11-15  8:45     ` Auger Eric
  0 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15  8:45 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: linux-pci, kvmarm, peter.maydell, joro, mst, jasowang, robh+dt,
	mark.rutland, tnowicki, kevin.tian, marc.zyngier, robin.murphy,
	will.deacon, lorenzo.pieralisi

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Some systems implement virtio-iommu as a PCI endpoint. The operating
> systems needs to discover the relationship between IOMMU and masters long
s/systems/system
> before the PCI endpoint gets probed. Add a PCI child node to describe the
> virtio-iommu device.
> 
> The virtio-pci-iommu is conceptually split between a PCI programming
> interface and a translation component on the parent bus. The latter
> doesn't have a node in the device tree. The virtio-pci-iommu node
> describes both, by linking the PCI endpoint to "iommus" property of DMA
> master nodes and to "iommu-map" properties of bus nodes.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/iommu.txt      | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
> 
> diff --git a/Documentation/devicetree/bindings/virtio/iommu.txt b/Documentation/devicetree/bindings/virtio/iommu.txt
> new file mode 100644
> index 000000000000..2407fea0651c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/virtio/iommu.txt
> @@ -0,0 +1,66 @@
> +* virtio IOMMU PCI device
> +
> +When virtio-iommu uses the PCI transport, its programming interface is
> +discovered dynamically by the PCI probing infrastructure. However the
> +device tree statically describes the relation between IOMMU and DMA
> +masters. Therefore, the PCI root complex that hosts the virtio-iommu
> +contains a child node representing the IOMMU device explicitly.
> +
> +Required properties:
> +
> +- compatible:	Should be "virtio,pci-iommu"
> +- reg:		PCI address of the IOMMU. As defined in the PCI Bus
> +		Binding reference [1], the reg property is a five-cell
> +		address encoded as (phys.hi phys.mid phys.lo size.hi
> +		size.lo). phys.hi should contain the device's BDF as
> +		0b00000000 bbbbbbbb dddddfff 00000000. The other cells
> +		should be zero.
> +- #iommu-cells:	Each platform DMA master managed by the IOMMU is assigned
> +		an endpoint ID, described by the "iommus" property [2].
> +		For virtio-iommu, #iommu-cells must be 1.
> +
> +Notes:
> +
> +- DMA from the IOMMU device isn't managed by another IOMMU. Therefore the
> +  virtio-iommu node doesn't have an "iommus" property, and is omitted from
> +  the iommu-map property of the root complex.
> +
> +Example:
> +
> +pcie@10000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +
> +	/* The IOMMU programming interface uses slot 00:01.0 */
> +	iommu0: iommu@0008 {
> +		compatible = "virtio,pci-iommu";
> +		reg = <0x00000800 0 0 0 0>;
> +		#iommu-cells = <1>;
> +	};
> +
> +	/*
> +	 * The IOMMU manages all functions in this PCI domain except
> +	 * itself. Omit BDF 00:01.0.
> +	 */
> +	iommu-map = <0x0 &iommu0 0x0 0x8>
> +		    <0x9 &iommu0 0x9 0xfff7>;
> +};
> +
> +pcie@20000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +	/*
> +	 * The IOMMU also manages all functions from this domain,
> +	 * with endpoint IDs 0x10000 - 0x1ffff
> +	 */
> +	iommu-map = <0x0 &iommu0 0x10000 0x10000>;
> +};
> +
> +ethernet@fe001000 {
> +	...
> +	/* The IOMMU manages this platform device with endpoint ID 0x20000 */
> +	iommus = <&iommu0 0x20000>;
> +};
> +
> +[1] Documentation/devicetree/bindings/pci/pci.txt
> +[2] Documentation/devicetree/bindings/iommu/iommu.txt
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node
  2018-10-12 14:59   ` Jean-Philippe Brucker
                     ` (2 preceding siblings ...)
  (?)
@ 2018-11-15  8:45   ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15  8:45 UTC (permalink / raw)
  To: Jean-Philippe Brucker, iommu, virtualization, devicetree
  Cc: mark.rutland, peter.maydell, lorenzo.pieralisi, tnowicki, mst,
	marc.zyngier, linux-pci, will.deacon, kvmarm, robh+dt,
	robin.murphy, joro

Hi Jean,

On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> Some systems implement virtio-iommu as a PCI endpoint. The operating
> systems needs to discover the relationship between IOMMU and masters long
s/systems/system
> before the PCI endpoint gets probed. Add a PCI child node to describe the
> virtio-iommu device.
> 
> The virtio-pci-iommu is conceptually split between a PCI programming
> interface and a translation component on the parent bus. The latter
> doesn't have a node in the device tree. The virtio-pci-iommu node
> describes both, by linking the PCI endpoint to "iommus" property of DMA
> master nodes and to "iommu-map" properties of bus nodes.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
> ---
>  .../devicetree/bindings/virtio/iommu.txt      | 66 +++++++++++++++++++
>  1 file changed, 66 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
> 
> diff --git a/Documentation/devicetree/bindings/virtio/iommu.txt b/Documentation/devicetree/bindings/virtio/iommu.txt
> new file mode 100644
> index 000000000000..2407fea0651c
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/virtio/iommu.txt
> @@ -0,0 +1,66 @@
> +* virtio IOMMU PCI device
> +
> +When virtio-iommu uses the PCI transport, its programming interface is
> +discovered dynamically by the PCI probing infrastructure. However the
> +device tree statically describes the relation between IOMMU and DMA
> +masters. Therefore, the PCI root complex that hosts the virtio-iommu
> +contains a child node representing the IOMMU device explicitly.
> +
> +Required properties:
> +
> +- compatible:	Should be "virtio,pci-iommu"
> +- reg:		PCI address of the IOMMU. As defined in the PCI Bus
> +		Binding reference [1], the reg property is a five-cell
> +		address encoded as (phys.hi phys.mid phys.lo size.hi
> +		size.lo). phys.hi should contain the device's BDF as
> +		0b00000000 bbbbbbbb dddddfff 00000000. The other cells
> +		should be zero.
> +- #iommu-cells:	Each platform DMA master managed by the IOMMU is assigned
> +		an endpoint ID, described by the "iommus" property [2].
> +		For virtio-iommu, #iommu-cells must be 1.
> +
> +Notes:
> +
> +- DMA from the IOMMU device isn't managed by another IOMMU. Therefore the
> +  virtio-iommu node doesn't have an "iommus" property, and is omitted from
> +  the iommu-map property of the root complex.
> +
> +Example:
> +
> +pcie@10000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +
> +	/* The IOMMU programming interface uses slot 00:01.0 */
> +	iommu0: iommu@0008 {
> +		compatible = "virtio,pci-iommu";
> +		reg = <0x00000800 0 0 0 0>;
> +		#iommu-cells = <1>;
> +	};
> +
> +	/*
> +	 * The IOMMU manages all functions in this PCI domain except
> +	 * itself. Omit BDF 00:01.0.
> +	 */
> +	iommu-map = <0x0 &iommu0 0x0 0x8>
> +		    <0x9 &iommu0 0x9 0xfff7>;
> +};
> +
> +pcie@20000000 {
> +	compatible = "pci-host-ecam-generic";
> +	...
> +	/*
> +	 * The IOMMU also manages all functions from this domain,
> +	 * with endpoint IDs 0x10000 - 0x1ffff
> +	 */
> +	iommu-map = <0x0 &iommu0 0x10000 0x10000>;
> +};
> +
> +ethernet@fe001000 {
> +	...
> +	/* The IOMMU manages this platform device with endpoint ID 0x20000 */
> +	iommus = <&iommu0 0x20000>;
> +};
> +
> +[1] Documentation/devicetree/bindings/pci/pci.txt
> +[2] Documentation/devicetree/bindings/iommu/iommu.txt
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-10-12 14:59   ` Jean-Philippe Brucker
@ 2018-11-15 13:20       ` Auger Eric
  -1 siblings, 0 replies; 101+ messages in thread
From: Auger Eric @ 2018-11-15 13:20 UTC (permalink / raw)
  To: Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	devicetree-u79uwXL29TY76Z2rM5mHXA
  Cc: mark.rutland-5wv7dgnIgG8, peter.maydell-QSEj5FYQhm4dnm+yROfE0A,
	kevin.tian-ral2JQCrhuEAvxtiuMwx3w,
	tnowicki-M3mlKVOIwJVv6pq1l3V1OdBPR1lH4CV8,
	mst-H+wXaHxf7aLQT0dZR+AlfA, marc.zyngier-5wv7dgnIgG8,
	linux-pci-u79uwXL29TY76Z2rM5mHXA,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA, will.deacon-5wv7dgnIgG8,
	kvmarm-FPEHb7Xf0XXUo1n7N8X6UoWGPAHP3yOg,
	robh+dt-DgEjT+Ai2ygdnm+yROfE0A, robin.murphy-5wv7dgnIgG8

Hi Jean,
On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
> When the device offers the probe feature, send a probe request for each
> device managed by the IOMMU. Extract RESV_MEM information. When we
> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
> This will tell other subsystems that there is no need to map the MSI
> doorbell in the virtio-iommu, because MSIs bypass it.
> 
> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
> ---
>  drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
>  include/uapi/linux/virtio_iommu.h |  39 ++++++++
>  2 files changed, 180 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 9fb38cd3b727..8eaf66770469 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -56,6 +56,7 @@ struct viommu_dev {
>  	struct iommu_domain_geometry	geometry;
>  	u64				pgsize_bitmap;
>  	u8				domain_bits;
> +	u32				probe_size;
>  };
>  
>  struct viommu_mapping {
> @@ -77,8 +78,10 @@ struct viommu_domain {
>  };
>  
>  struct viommu_endpoint {
> +	struct device			*dev;
>  	struct viommu_dev		*viommu;
>  	struct viommu_domain		*vdomain;
> +	struct list_head		resv_regions;
>  };
>  
>  struct viommu_request {
> @@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>  {
>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>  
> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
> +		return len - viommu->probe_size - tail_size;
> +
>  	return len - tail_size;
>  }
>  
> @@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>  	return ret;
>  }
>  
> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
> +			       struct virtio_iommu_probe_resv_mem *mem,
> +			       size_t len)
> +{
> +	struct iommu_resv_region *region = NULL;
> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
> +
nit: extra blank line
> +	u64 start = le64_to_cpu(mem->start);
> +	u64 end = le64_to_cpu(mem->end);
> +	size_t size = end - start + 1;
> +
> +	if (len < sizeof(*mem))
> +		return -EINVAL;
> +
> +	switch (mem->subtype) {
> +	default:
> +		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
> +			 mem->subtype);
> +		/* Fall-through */
> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
> +		region = iommu_alloc_resv_region(start, size, 0,
> +						 IOMMU_RESV_RESERVED);
need to test region
> +		break;
> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
> +		region = iommu_alloc_resv_region(start, size, prot,
> +						 IOMMU_RESV_MSI);
same
> +		break;
> +	}
> +
> +	list_add(&vdev->resv_regions, &region->list);
> +	return 0;
> +}
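
i.e. a single check after the switch would cover both cases (sketch):

	if (!region)
		return -ENOMEM;

	list_add(&vdev->resv_regions, &region->list);
	return 0;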
> +
> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
> +{
> +	int ret;
> +	u16 type, len;
> +	size_t cur = 0;
> +	size_t probe_len;
> +	struct virtio_iommu_req_probe *probe;
> +	struct virtio_iommu_probe_property *prop;
> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
> +
> +	if (!fwspec->num_ids)
> +		return -EINVAL;
> +
> +	probe_len = sizeof(*probe) + viommu->probe_size +
> +		    sizeof(struct virtio_iommu_req_tail);
> +	probe = kzalloc(probe_len, GFP_KERNEL);
> +	if (!probe)
> +		return -ENOMEM;
> +
> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
> +	/*
> +	 * For now, assume that properties of an endpoint that outputs multiple
> +	 * IDs are consistent. Only probe the first one.
> +	 */
> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
> +
> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
> +	if (ret)
> +		goto out_free;
> +
> +	prop = (void *)probe->properties;
> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +
> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
> +	       cur < viommu->probe_size) {
> +		len = le16_to_cpu(prop->length) + sizeof(*prop);
> +
> +		switch (type) {
> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
> +			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
> +			break;
> +		default:
> +			dev_err(dev, "unknown viommu prop 0x%x\n", type);
> +		}
> +
> +		if (ret)
> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
> +
> +		cur += len;
> +		if (cur >= viommu->probe_size)
> +			break;
> +
> +		prop = (void *)probe->properties + cur;
> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
> +	}
> +
> +out_free:
> +	kfree(probe);
> +	return ret;
> +}
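Just to check that I read the parsing loop correctly: for a hypothetical MSI
doorbell page at 0xfee00000, the device would return a single property along
these lines in the probe buffer, followed by a NONE type? The values below
are invented for illustration, and the exact meaning of 'length' should be
double-checked against the spec:

	struct virtio_iommu_probe_resv_mem msi = {
		.head = {
			.type	= cpu_to_le16(VIRTIO_IOMMU_PROBE_T_RESV_MEM),
			/* size of the property, excluding the header */
			.length	= cpu_to_le16(sizeof(msi) - sizeof(msi.head)),
		},
		.subtype	= VIRTIO_IOMMU_RESV_MEM_T_MSI,
		.start		= cpu_to_le64(0xfee00000),
		.end		= cpu_to_le64(0xfee00fff),
	};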
> +
>  /* IOMMU API */
>  
>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
> @@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>  
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
> -	struct iommu_resv_region *region;
> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>  
> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
> -					 IOMMU_RESV_SW_MSI);
> -	if (!region)
> -		return;
> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
> +		/*
> +		 * If the device registered a bypass MSI windows, use it.
any bypass MSI windows?
> +		 * Otherwise add a software-mapped region
> +		 */
> +		if (entry->type == IOMMU_RESV_MSI)
> +			msi = entry;
> +
> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
> +		if (!new_entry)
> +			return;
> +		list_add_tail(&new_entry->list, head);
> +	}
> +
> +	if (!msi) {
> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
> +					      prot, IOMMU_RESV_SW_MSI);
> +		if (!msi)
> +			return;
> +
> +		list_add_tail(&msi->list, head);
> +	}
>  
> -	list_add_tail(&region->list, head);
>  	iommu_dma_get_resv_regions(dev, head);
>  }
>  
> @@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
>  	if (!vdev)
>  		return -ENOMEM;
>  
> +	vdev->dev = dev;
>  	vdev->viommu = viommu;
> +	INIT_LIST_HEAD(&vdev->resv_regions);
>  	fwspec->iommu_priv = vdev;
>  
> +	if (viommu->probe_size) {
> +		/* Get additional information for this endpoint */
> +		ret = viommu_probe_endpoint(viommu, dev);
> +		if (ret)
leaks vdev (see the suggestion below)
> +			return ret;
> +	}
> +
>  	ret = iommu_device_link(&viommu->iommu, dev);
>  	if (ret)
>  		goto err_free_dev;
> @@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
>  	iommu_device_unlink(&viommu->iommu, dev);
>  
>  err_free_dev:
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  
>  	return ret;
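For the leak above, jumping to the existing label looks sufficient, since
err_free_dev now releases the reserved regions as well. Untested sketch:

	if (viommu->probe_size) {
		/* Get additional information for this endpoint */
		ret = viommu_probe_endpoint(viommu, dev);
		if (ret)
			goto err_free_dev;
	}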
> @@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
>  
>  	iommu_group_remove_device(dev);
>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>  	kfree(vdev);
>  }
>  
> @@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
>  			     struct virtio_iommu_config, domain_bits,
>  			     &viommu->domain_bits);
>  
> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
> +			     struct virtio_iommu_config, probe_size,
> +			     &viommu->probe_size);
> +
>  	viommu->geometry = (struct iommu_domain_geometry) {
>  		.aperture_start	= input_start,
>  		.aperture_end	= input_end,
> @@ -913,6 +1047,7 @@ static unsigned int features[] = {
>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>  	VIRTIO_IOMMU_F_INPUT_RANGE,
> +	VIRTIO_IOMMU_F_PROBE,
>  };
>  
>  static struct virtio_device_id id_table[] = {
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index e808fc7fbe82..feed74586bb0 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -14,6 +14,7 @@
>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>  #define VIRTIO_IOMMU_F_BYPASS			3
> +#define VIRTIO_IOMMU_F_PROBE			4
>  
>  struct virtio_iommu_config {
>  	/* Supported page sizes */
> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>  	} input_range;
>  	/* Max domain ID size */
>  	__u8					domain_bits;
> +	__u8					padding[3];
> +	/* Probe buffer size */
> +	__u32					probe_size;
>  };
>  
>  /* Request types */
> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>  #define VIRTIO_IOMMU_T_DETACH			0x02
>  #define VIRTIO_IOMMU_T_MAP			0x03
>  #define VIRTIO_IOMMU_T_UNMAP			0x04
> +#define VIRTIO_IOMMU_T_PROBE			0x05
>  
>  /* Status types */
>  #define VIRTIO_IOMMU_S_OK			0x00
> @@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
>  	struct virtio_iommu_req_tail		tail;
>  };
>  
> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
> +
> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
> +
> +struct virtio_iommu_probe_property {
> +	__le16					type;
> +	__le16					length;
> +};
> +
> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
> +
> +struct virtio_iommu_probe_resv_mem {
> +	struct virtio_iommu_probe_property	head;
> +	__u8					subtype;
> +	__u8					reserved[3];
> +	__le64					start;
> +	__le64					end;
> +};
> +
> +struct virtio_iommu_req_probe {
> +	struct virtio_iommu_req_head		head;
> +	__le32					endpoint;
> +	__u8					reserved[64];
> +
> +	__u8					properties[];
> +
> +	/*
> +	 * Tail follows the variable-length properties array. No padding,
> +	 * property lengths are all aligned on 8 bytes.
> +	 */
> +};
> +
>  #endif
> 

Thanks

Eric

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH v3 6/7] iommu/virtio: Add probe request
  2018-11-15 13:20       ` Auger Eric
@ 2018-11-15 16:22         ` Jean-Philippe Brucker
  -1 siblings, 0 replies; 101+ messages in thread
From: Jean-Philippe Brucker @ 2018-11-15 16:22 UTC (permalink / raw)
  To: Auger Eric, iommu, virtualization, devicetree
  Cc: kevin.tian, tnowicki, mst, marc.zyngier, linux-pci, jasowang,
	will.deacon, robh+dt, robin.murphy, kvmarm

Fixed all of these, thanks

Jean

On 15/11/2018 13:20, Auger Eric wrote:
> Hi Jean,
> On 10/12/18 4:59 PM, Jean-Philippe Brucker wrote:
>> When the device offers the probe feature, send a probe request for each
>> device managed by the IOMMU. Extract RESV_MEM information. When we
>> encounter a MSI doorbell region, set it up as a IOMMU_RESV_MSI region.
>> This will tell other subsystems that there is no need to map the MSI
>> doorbell in the virtio-iommu, because MSIs bypass it.
>>
>> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
>> ---
>>  drivers/iommu/virtio-iommu.c      | 147 ++++++++++++++++++++++++++++--
>>  include/uapi/linux/virtio_iommu.h |  39 ++++++++
>>  2 files changed, 180 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> index 9fb38cd3b727..8eaf66770469 100644
>> --- a/drivers/iommu/virtio-iommu.c
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -56,6 +56,7 @@ struct viommu_dev {
>>  	struct iommu_domain_geometry	geometry;
>>  	u64				pgsize_bitmap;
>>  	u8				domain_bits;
>> +	u32				probe_size;
>>  };
>>  
>>  struct viommu_mapping {
>> @@ -77,8 +78,10 @@ struct viommu_domain {
>>  };
>>  
>>  struct viommu_endpoint {
>> +	struct device			*dev;
>>  	struct viommu_dev		*viommu;
>>  	struct viommu_domain		*vdomain;
>> +	struct list_head		resv_regions;
>>  };
>>  
>>  struct viommu_request {
>> @@ -129,6 +132,9 @@ static off_t viommu_get_req_offset(struct viommu_dev *viommu,
>>  {
>>  	size_t tail_size = sizeof(struct virtio_iommu_req_tail);
>>  
>> +	if (req->type == VIRTIO_IOMMU_T_PROBE)
>> +		return len - viommu->probe_size - tail_size;
>> +
>>  	return len - tail_size;
>>  }
>>  
>> @@ -414,6 +420,101 @@ static int viommu_replay_mappings(struct viommu_domain *vdomain)
>>  	return ret;
>>  }
>>  
>> +static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
>> +			       struct virtio_iommu_probe_resv_mem *mem,
>> +			       size_t len)
>> +{
>> +	struct iommu_resv_region *region = NULL;
>> +	unsigned long prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>> +
> nit: extra blank line
>> +	u64 start = le64_to_cpu(mem->start);
>> +	u64 end = le64_to_cpu(mem->end);
>> +	size_t size = end - start + 1;
>> +
>> +	if (len < sizeof(*mem))
>> +		return -EINVAL;
>> +
>> +	switch (mem->subtype) {
>> +	default:
>> +		dev_warn(vdev->dev, "unknown resv mem subtype 0x%x\n",
>> +			 mem->subtype);
>> +		/* Fall-through */
>> +	case VIRTIO_IOMMU_RESV_MEM_T_RESERVED:
>> +		region = iommu_alloc_resv_region(start, size, 0,
>> +						 IOMMU_RESV_RESERVED);
> need to test region
>> +		break;
>> +	case VIRTIO_IOMMU_RESV_MEM_T_MSI:
>> +		region = iommu_alloc_resv_region(start, size, prot,
>> +						 IOMMU_RESV_MSI);
> same
>> +		break;
>> +	}
>> +
>> +	list_add(&vdev->resv_regions, &region->list);
>> +	return 0;
>> +}
>> +
>> +static int viommu_probe_endpoint(struct viommu_dev *viommu, struct device *dev)
>> +{
>> +	int ret;
>> +	u16 type, len;
>> +	size_t cur = 0;
>> +	size_t probe_len;
>> +	struct virtio_iommu_req_probe *probe;
>> +	struct virtio_iommu_probe_property *prop;
>> +	struct iommu_fwspec *fwspec = dev->iommu_fwspec;
>> +	struct viommu_endpoint *vdev = fwspec->iommu_priv;
>> +
>> +	if (!fwspec->num_ids)
>> +		return -EINVAL;
>> +
>> +	probe_len = sizeof(*probe) + viommu->probe_size +
>> +		    sizeof(struct virtio_iommu_req_tail);
>> +	probe = kzalloc(probe_len, GFP_KERNEL);
>> +	if (!probe)
>> +		return -ENOMEM;
>> +
>> +	probe->head.type = VIRTIO_IOMMU_T_PROBE;
>> +	/*
>> +	 * For now, assume that properties of an endpoint that outputs multiple
>> +	 * IDs are consistent. Only probe the first one.
>> +	 */
>> +	probe->endpoint = cpu_to_le32(fwspec->ids[0]);
>> +
>> +	ret = viommu_send_req_sync(viommu, probe, probe_len);
>> +	if (ret)
>> +		goto out_free;
>> +
>> +	prop = (void *)probe->properties;
>> +	type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
>> +
>> +	while (type != VIRTIO_IOMMU_PROBE_T_NONE &&
>> +	       cur < viommu->probe_size) {
>> +		len = le16_to_cpu(prop->length) + sizeof(*prop);
>> +
>> +		switch (type) {
>> +		case VIRTIO_IOMMU_PROBE_T_RESV_MEM:
>> +			ret = viommu_add_resv_mem(vdev, (void *)prop, len);
>> +			break;
>> +		default:
>> +			dev_err(dev, "unknown viommu prop 0x%x\n", type);
>> +		}
>> +
>> +		if (ret)
>> +			dev_err(dev, "failed to parse viommu prop 0x%x\n", type);
>> +
>> +		cur += len;
>> +		if (cur >= viommu->probe_size)
>> +			break;
>> +
>> +		prop = (void *)probe->properties + cur;
>> +		type = le16_to_cpu(prop->type) & VIRTIO_IOMMU_PROBE_T_MASK;
>> +	}
>> +
>> +out_free:
>> +	kfree(probe);
>> +	return ret;
>> +}
>> +
>>  /* IOMMU API */
>>  
>>  static struct iommu_domain *viommu_domain_alloc(unsigned type)
>> @@ -636,15 +737,33 @@ static void viommu_iotlb_sync(struct iommu_domain *domain)
>>  
>>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>>  {
>> -	struct iommu_resv_region *region;
>> +	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
>> +	struct viommu_endpoint *vdev = dev->iommu_fwspec->iommu_priv;
>>  	int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
>>  
>> -	region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, prot,
>> -					 IOMMU_RESV_SW_MSI);
>> -	if (!region)
>> -		return;
>> +	list_for_each_entry(entry, &vdev->resv_regions, list) {
>> +		/*
>> +		 * If the device registered a bypass MSI windows, use it.
> any bypass MSI windows?
>> +		 * Otherwise add a software-mapped region
>> +		 */
>> +		if (entry->type == IOMMU_RESV_MSI)
>> +			msi = entry;
>> +
>> +		new_entry = kmemdup(entry, sizeof(*entry), GFP_KERNEL);
>> +		if (!new_entry)
>> +			return;
>> +		list_add_tail(&new_entry->list, head);
>> +	}
>> +
>> +	if (!msi) {
>> +		msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH,
>> +					      prot, IOMMU_RESV_SW_MSI);
>> +		if (!msi)
>> +			return;
>> +
>> +		list_add_tail(&msi->list, head);
>> +	}
>>  
>> -	list_add_tail(&region->list, head);
>>  	iommu_dma_get_resv_regions(dev, head);
>>  }
>>  
>> @@ -692,9 +811,18 @@ static int viommu_add_device(struct device *dev)
>>  	if (!vdev)
>>  		return -ENOMEM;
>>  
>> +	vdev->dev = dev;
>>  	vdev->viommu = viommu;
>> +	INIT_LIST_HEAD(&vdev->resv_regions);
>>  	fwspec->iommu_priv = vdev;
>>  
>> +	if (viommu->probe_size) {
>> +		/* Get additional information for this endpoint */
>> +		ret = viommu_probe_endpoint(viommu, dev);
>> +		if (ret)
> leaks vdev
>> +			return ret;
>> +	}
>> +
>>  	ret = iommu_device_link(&viommu->iommu, dev);
>>  	if (ret)
>>  		goto err_free_dev;
>> @@ -717,6 +845,7 @@ static int viommu_add_device(struct device *dev)
>>  	iommu_device_unlink(&viommu->iommu, dev);
>>  
>>  err_free_dev:
>> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>>  	kfree(vdev);
>>  
>>  	return ret;
>> @@ -734,6 +863,7 @@ static void viommu_remove_device(struct device *dev)
>>  
>>  	iommu_group_remove_device(dev);
>>  	iommu_device_unlink(&vdev->viommu->iommu, dev);
>> +	viommu_put_resv_regions(dev, &vdev->resv_regions);
>>  	kfree(vdev);
>>  }
>>  
>> @@ -832,6 +962,10 @@ static int viommu_probe(struct virtio_device *vdev)
>>  			     struct virtio_iommu_config, domain_bits,
>>  			     &viommu->domain_bits);
>>  
>> +	virtio_cread_feature(vdev, VIRTIO_IOMMU_F_PROBE,
>> +			     struct virtio_iommu_config, probe_size,
>> +			     &viommu->probe_size);
>> +
>>  	viommu->geometry = (struct iommu_domain_geometry) {
>>  		.aperture_start	= input_start,
>>  		.aperture_end	= input_end,
>> @@ -913,6 +1047,7 @@ static unsigned int features[] = {
>>  	VIRTIO_IOMMU_F_MAP_UNMAP,
>>  	VIRTIO_IOMMU_F_DOMAIN_BITS,
>>  	VIRTIO_IOMMU_F_INPUT_RANGE,
>> +	VIRTIO_IOMMU_F_PROBE,
>>  };
>>  
>>  static struct virtio_device_id id_table[] = {
>> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
>> index e808fc7fbe82..feed74586bb0 100644
>> --- a/include/uapi/linux/virtio_iommu.h
>> +++ b/include/uapi/linux/virtio_iommu.h
>> @@ -14,6 +14,7 @@
>>  #define VIRTIO_IOMMU_F_DOMAIN_BITS		1
>>  #define VIRTIO_IOMMU_F_MAP_UNMAP		2
>>  #define VIRTIO_IOMMU_F_BYPASS			3
>> +#define VIRTIO_IOMMU_F_PROBE			4
>>  
>>  struct virtio_iommu_config {
>>  	/* Supported page sizes */
>> @@ -25,6 +26,9 @@ struct virtio_iommu_config {
>>  	} input_range;
>>  	/* Max domain ID size */
>>  	__u8					domain_bits;
>> +	__u8					padding[3];
>> +	/* Probe buffer size */
>> +	__u32					probe_size;
>>  };
>>  
>>  /* Request types */
>> @@ -32,6 +36,7 @@ struct virtio_iommu_config {
>>  #define VIRTIO_IOMMU_T_DETACH			0x02
>>  #define VIRTIO_IOMMU_T_MAP			0x03
>>  #define VIRTIO_IOMMU_T_UNMAP			0x04
>> +#define VIRTIO_IOMMU_T_PROBE			0x05
>>  
>>  /* Status types */
>>  #define VIRTIO_IOMMU_S_OK			0x00
>> @@ -98,4 +103,38 @@ struct virtio_iommu_req_unmap {
>>  	struct virtio_iommu_req_tail		tail;
>>  };
>>  
>> +#define VIRTIO_IOMMU_PROBE_T_NONE		0
>> +#define VIRTIO_IOMMU_PROBE_T_RESV_MEM		1
>> +
>> +#define VIRTIO_IOMMU_PROBE_T_MASK		0xfff
>> +
>> +struct virtio_iommu_probe_property {
>> +	__le16					type;
>> +	__le16					length;
>> +};
>> +
>> +#define VIRTIO_IOMMU_RESV_MEM_T_RESERVED	0
>> +#define VIRTIO_IOMMU_RESV_MEM_T_MSI		1
>> +
>> +struct virtio_iommu_probe_resv_mem {
>> +	struct virtio_iommu_probe_property	head;
>> +	__u8					subtype;
>> +	__u8					reserved[3];
>> +	__le64					start;
>> +	__le64					end;
>> +};
>> +
>> +struct virtio_iommu_req_probe {
>> +	struct virtio_iommu_req_head		head;
>> +	__le32					endpoint;
>> +	__u8					reserved[64];
>> +
>> +	__u8					properties[];
>> +
>> +	/*
>> +	 * Tail follows the variable-length properties array. No padding,
>> +	 * property lengths are all aligned on 8 bytes.
>> +	 */
>> +};
>> +
>>  #endif
>>
> 
> Thanks
> 
> Eric
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
> 

^ permalink raw reply	[flat|nested] 101+ messages in thread

end of thread, other threads:[~2018-11-15 16:23 UTC | newest]

Thread overview: 101+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-12 14:59 [PATCH v3 0/7] Add virtio-iommu driver Jean-Philippe Brucker
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59 ` [PATCH v3 1/7] dt-bindings: virtio-mmio: Add IOMMU description Jean-Philippe Brucker
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
2018-10-18  0:30   ` Rob Herring
     [not found]   ` <20181012145917.6840-2-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-10-18  0:30     ` Rob Herring
2018-10-18  0:30       ` Rob Herring
2018-11-15  8:45   ` Auger Eric
2018-11-15  8:45     ` Auger Eric
2018-10-12 14:59 ` [PATCH v3 2/7] dt-bindings: virtio: Add virtio-pci-iommu node Jean-Philippe Brucker
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
2018-10-18  0:35   ` Rob Herring
2018-10-18  0:35     ` Rob Herring
2018-10-18  0:35   ` Rob Herring
2018-11-15  8:45   ` Auger Eric
2018-11-15  8:45   ` Auger Eric
2018-11-15  8:45     ` Auger Eric
2018-10-12 14:59 ` [PATCH v3 3/7] PCI: OF: Allow endpoints to bypass the iommu Jean-Philippe Brucker
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
     [not found]   ` <20181012145917.6840-4-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-10-12 19:41     ` Bjorn Helgaas
2018-10-12 19:41       ` Bjorn Helgaas
2018-10-15 10:52       ` Michael S. Tsirkin
2018-10-15 11:32       ` Robin Murphy
     [not found]       ` <20181012194158.GX5906-1RhO1Y9PlrlHTL0Zs8A6p5iNqAH0jzoTYJqu5kTmcBRl57MIdRCFDg@public.gmane.org>
2018-10-15 10:52         ` Michael S. Tsirkin
2018-10-15 10:52           ` Michael S. Tsirkin
     [not found]           ` <20181015065024-mutt-send-email-mst-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2018-10-15 19:46             ` Jean-philippe Brucker
2018-10-15 19:46               ` Jean-philippe Brucker
2018-10-17 15:14               ` Michael S. Tsirkin
2018-10-17 15:14                 ` Michael S. Tsirkin
2018-10-18 10:47                 ` Robin Murphy
2018-10-18 10:47                 ` Robin Murphy
2018-10-18 10:47                   ` Robin Murphy
     [not found]                 ` <20181017111100-mutt-send-email-mst-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2018-10-22 11:27                   ` Jean-Philippe Brucker
2018-10-22 11:27                     ` Jean-Philippe Brucker
2018-10-22 11:27                 ` Jean-Philippe Brucker
2018-10-15 11:32         ` Robin Murphy
2018-10-15 11:32           ` Robin Murphy
2018-10-15 19:45         ` Jean-philippe Brucker
2018-10-15 19:45           ` Jean-philippe Brucker
2018-10-12 14:59 ` [PATCH v3 4/7] PCI: OF: Initialize dev->fwnode appropriately Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
     [not found]   ` <20181012145917.6840-5-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-10-12 19:44     ` Bjorn Helgaas
2018-10-12 19:44       ` Bjorn Helgaas
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59 ` [PATCH v3 5/7] iommu: Add virtio-iommu driver Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
2018-10-12 16:35   ` Michael S. Tsirkin
2018-10-12 16:35   ` Michael S. Tsirkin
2018-10-12 16:35     ` Michael S. Tsirkin
     [not found]     ` <20181012120953-mutt-send-email-mst-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2018-10-12 18:54       ` Jean-Philippe Brucker
2018-10-12 18:54         ` Jean-Philippe Brucker
2018-10-12 18:54     ` Jean-Philippe Brucker
2018-11-08 14:51     ` Auger Eric
2018-11-08 14:51       ` Auger Eric
2018-11-08 16:46       ` Jean-Philippe Brucker
2018-11-08 16:46         ` Jean-Philippe Brucker
2018-11-08 16:46       ` Jean-Philippe Brucker
2018-11-08 14:51     ` Auger Eric
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59 ` [PATCH v3 6/7] iommu/virtio: Add probe request Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
2018-10-12 16:42   ` Michael S. Tsirkin
2018-10-12 16:42     ` Michael S. Tsirkin
2018-11-08 14:48   ` Auger Eric
     [not found]   ` <20181012145917.6840-7-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-11-08 14:48     ` Auger Eric
2018-11-08 14:48       ` Auger Eric
     [not found]       ` <295d30bb-5aef-2727-01c0-ec10c7a8fa8c-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-11-08 16:46         ` Jean-Philippe Brucker
2018-11-08 16:46           ` Jean-Philippe Brucker
2018-11-08 16:46       ` Jean-Philippe Brucker
2018-11-15 13:20     ` Auger Eric
2018-11-15 13:20       ` Auger Eric
2018-11-15 16:22       ` Jean-Philippe Brucker
2018-11-15 16:22         ` Jean-Philippe Brucker
2018-11-15 16:22       ` Jean-Philippe Brucker
2018-11-15 13:20   ` Auger Eric
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59 ` [PATCH v3 7/7] iommu/virtio: Add event queue Jean-Philippe Brucker
2018-10-12 14:59 ` Jean-Philippe Brucker
2018-10-12 14:59   ` Jean-Philippe Brucker
2018-10-12 17:00 ` [PATCH v3 0/7] Add virtio-iommu driver Michael S. Tsirkin
     [not found] ` <20181012145917.6840-1-jean-philippe.brucker-5wv7dgnIgG8@public.gmane.org>
2018-10-12 17:00   ` Michael S. Tsirkin
2018-10-12 17:00     ` Michael S. Tsirkin
2018-10-12 18:55     ` Jean-Philippe Brucker
     [not found]     ` <20181012125443-mutt-send-email-mst-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2018-10-12 18:55       ` Jean-Philippe Brucker
2018-10-12 18:55         ` Jean-Philippe Brucker
2018-10-16  9:25   ` Auger Eric
2018-10-16  9:25     ` Auger Eric
2018-10-16 18:44     ` Jean-Philippe Brucker
2018-10-16 18:44       ` Jean-Philippe Brucker
2018-10-16 20:31       ` Auger Eric
2018-10-16 20:31         ` Auger Eric
2018-10-17 11:54         ` Jean-Philippe Brucker
2018-10-17 11:54           ` Jean-Philippe Brucker
2018-10-17 15:23           ` Michael S. Tsirkin
2018-10-17 15:23             ` Michael S. Tsirkin
2018-10-17 15:23           ` Michael S. Tsirkin
2018-10-16 18:44     ` Jean-Philippe Brucker
2018-10-16  9:25 ` Auger Eric
