* [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
@ 2019-07-11 13:56 Eric Auger
  2019-07-11 13:56 ` [PATCH v9 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE Eric Auger
                   ` (13 more replies)
  0 siblings, 14 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

This series brings the VFIO part of HW nested paging support
in the SMMUv3.

The series depends on:
[PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
(https://www.spinics.net/lists/kernel/msg3187714.html)

Three new ioctls are introduced that allow userspace to
1) pass the guest stage 1 configuration,
2) pass stage 1 MSI bindings, and
3) invalidate stage 1 related caches.

They map onto the related new IOMMU API functions.
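
For illustration, a rough sketch of the corresponding userspace call
sequence follows. Only the uapi structs and flags below are defined by
this series; the container fd setup and the msi_iova/doorbell_gpa
values are assumptions standing in for the vIOMMU emulation state:

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* 1) pass the guest stage 1 configuration */
  struct vfio_iommu_type1_set_pasid_table set_table = {
          .argsz = sizeof(set_table),
          .flags = VFIO_PASID_TABLE_FLAG_SET,
          /* .config (struct iommu_pasid_table_config, defined in the
           * IOMMU part of the series) filled from the vIOMMU state */
  };
  ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &set_table);

  /* 2) pass a stage 1 MSI binding */
  struct vfio_iommu_type1_set_msi_binding msi = {
          .argsz = sizeof(msi),
          .flags = VFIO_IOMMU_BIND_MSI,
          .iova  = msi_iova,      /* guest MSI IOVA */
          .gpa   = doorbell_gpa,  /* guest doorbell GPA */
          .size  = 0x1000,
  };
  ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi);

  /* 3) invalidate stage 1 related caches */
  struct vfio_iommu_type1_cache_invalidate inv = {
          .argsz = sizeof(inv),
          /* .info (struct iommu_cache_invalidate_info) built from
           * the trapped guest invalidation command */
  };
  ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv);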

We introduce the capability to register specific interrupt
indexes (see [1]). A new DMA_FAULT interrupt index allows userspace
to register an eventfd that gets signaled whenever a stage 1 related
fault is detected at the physical level. A dedicated region also
exposes the fault records to userspace.

Best Regards

Eric

This series can be found at:
https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9

The series includes Tina's patch stemming from
[1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
specific irq" plus patches originally contributed by Yi.

History:

v8 -> v9:
- introduce specific irq framework
- single fault region
- iommu_unregister_device_fault_handler failure case not handled
  yet.

v7 -> v8:
- rebase on top of v5.2-rc1 and especially
  8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
- dynamic alloc of s1_cfg/s2_cfg
- __arm_smmu_tlb_inv_asid/s1_range_nosync
- check there is no HW MSI regions
- asid invalidation using pasid extended struct (change in the uapi)
- add s1_live/s2_live checks
- move check about support of nested stages in domain finalise
- fixes in error reporting according to the discussion with Robin
- reordered the patches to have first iommu/smmuv3 patches and then
  VFIO patches

v6 -> v7:
- removed device handle from bind/unbind_guest_msi
- added "iommu/smmuv3: Nested mode single MSI doorbell per domain
  enforcement"
- added a few uapi comments as suggested by Jean, Jacob and Alex

v5 -> v6:
- Fix compilation issue when CONFIG_IOMMU_API is unset

v4 -> v5:
- fix bug reported by Vincent: fault handler unregistration now happens in
  vfio_pci_release
- IOMMU_FAULT_PERM_* moved outside of struct definition + small
  uapi changes suggested by Jean-Philippe (except fetch_addr)
- iommu: introduce device fault report API: removed the PRI part.
- see individual logs for more details
- reset the ste abort flag on detach

v3 -> v4:
- took into account Alex's, Jean-Philippe's and Robin's comments on v3
- rework of the smmuv3 driver integration
- add tear down ops for msi binding and PASID table binding
- fix S1 fault propagation
- put fault reporting patches at the beginning of the series following
  Jean-Philippe's request
- update of the cache invalidate and fault API uapis
- VFIO fault reporting rework with 2 separate regions and one mmappable
  segment for the fault queue
- moved to PATCH

v2 -> v3:
- When registering the S1 MSI binding we now store the device handle. This
  addresses Robin's comment about discrimination of devices belonging to
  different S1 groups and using different physical MSI doorbells.
- Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
  set the eventfd and expose the faults through an mmappable fault region

v1 -> v2:
- Added the fault reporting capability
- asid properly passed on invalidation (fix assignment of multiple
  devices)
- see individual change logs for more info


Eric Auger (8):
  vfio: VFIO_IOMMU_SET_MSI_BINDING
  vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
  vfio/pci: Register an iommu fault handler
  vfio/pci: Allow to mmap the fault queue
  vfio: Add new IRQ for DMA fault reporting
  vfio/pci: Add framework for custom interrupt indices
  vfio/pci: Register and allow DMA FAULT IRQ signaling
  vfio: Document nested stage control

Liu, Yi L (2):
  vfio: VFIO_IOMMU_SET_PASID_TABLE
  vfio: VFIO_IOMMU_CACHE_INVALIDATE

Tina Zhang (1):
  vfio: Use capability chains to handle device specific irq

 Documentation/vfio.txt              |  77 ++++++++
 drivers/vfio/pci/vfio_pci.c         | 283 ++++++++++++++++++++++++++--
 drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
 drivers/vfio/pci/vfio_pci_private.h |  24 +++
 drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
 drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
 include/uapi/linux/vfio.h           | 109 ++++++++++-
 7 files changed, 747 insertions(+), 19 deletions(-)

-- 
2.20.1


* [PATCH v9 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 02/11] vfio: VFIO_IOMMU_CACHE_INVALIDATE Eric Auger
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

From: "Liu, Yi L" <yi.l.liu@linux.intel.com>

This patch adds a VFIO_IOMMU_SET_PASID_TABLE ioctl
which aims to pass the virtual IOMMU guest configuration
to the host. The latter takes the form of the so-called
PASID table.
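
For instance, with the table passed on vIOMMU attach and detached on
reset (an illustrative sketch only; the container fd and the contents
of .config, defined in <linux/iommu.h> by the IOMMU part of the
series, are assumptions; includes trimmed):

  struct vfio_iommu_type1_set_pasid_table ustruct = {
          .argsz = sizeof(ustruct),
          .flags = VFIO_PASID_TABLE_FLAG_SET,
          /* .config carries the guest PASID table base pointer and
           * format, taken from the virtual IOMMU state */
  };

  if (ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &ustruct))
          perror("VFIO_IOMMU_SET_PASID_TABLE");

  /* on vIOMMU disable or reset, detach the current table */
  ustruct.flags = VFIO_PASID_TABLE_FLAG_UNSET;
  ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &ustruct);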

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>

---
v8 -> v9:
- Merge VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE into a single
  VFIO_IOMMU_SET_PASID_TABLE ioctl.

v6 -> v7:
- add a comment related to VFIO_IOMMU_DETACH_PASID_TABLE

v3 -> v4:
- restore ATTACH/DETACH
- add unwind on failure

v2 -> v3:
- s/BIND_PASID_TABLE/SET_PASID_TABLE

v1 -> v2:
- s/BIND_GUEST_STAGE/BIND_PASID_TABLE
- remove the struct device arg
---
 drivers/vfio/vfio_iommu_type1.c | 56 +++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       | 19 +++++++++++
 2 files changed, 75 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index add34adfadc7..757a859f96a3 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1755,6 +1755,43 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 	return ret;
 }
 
+static void
+vfio_detach_pasid_table(struct vfio_iommu *iommu)
+{
+	struct vfio_domain *d;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		iommu_detach_pasid_table(d->domain);
+	}
+	mutex_unlock(&iommu->lock);
+}
+
+static int
+vfio_attach_pasid_table(struct vfio_iommu *iommu,
+			struct vfio_iommu_type1_set_pasid_table *ustruct)
+{
+	struct vfio_domain *d;
+	int ret = 0;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
+		if (ret)
+			goto unwind;
+	}
+	goto unlock;
+unwind:
+	list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+		iommu_detach_pasid_table(d->domain);
+	}
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -1825,6 +1862,25 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 
 		return copy_to_user((void __user *)arg, &unmap, minsz) ?
 			-EFAULT : 0;
+	} else if (cmd == VFIO_IOMMU_SET_PASID_TABLE) {
+		struct vfio_iommu_type1_set_pasid_table ustruct;
+
+		minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table,
+				    config);
+
+		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (ustruct.argsz < minsz)
+			return -EINVAL;
+
+		if (ustruct.flags & VFIO_PASID_TABLE_FLAG_SET)
+			return vfio_attach_pasid_table(iommu, &ustruct);
+		if (ustruct.flags & VFIO_PASID_TABLE_FLAG_UNSET) {
+			vfio_detach_pasid_table(iommu);
+			return 0;
+		}
+		return -EINVAL;
 	}
 
 	return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 8f10748dac79..96039da0a52d 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -14,6 +14,7 @@
 
 #include <linux/types.h>
 #include <linux/ioctl.h>
+#include <linux/iommu.h>
 
 #define VFIO_API_VERSION	0
 
@@ -763,6 +764,24 @@ struct vfio_iommu_type1_dma_unmap {
 #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
 #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+/**
+ * VFIO_IOMMU_SET_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
+ *			struct vfio_iommu_type1_set_pasid_table)
+ *
+ * The SET operation passes a PASID table to the host while the
+ * UNSET operation detaches the one currently programmed. Setting
+ * a table while another is already programmed replaces the old table.
+ */
+struct vfio_iommu_type1_set_pasid_table {
+	__u32	argsz;
+	__u32	flags;
+#define VFIO_PASID_TABLE_FLAG_SET	(1 << 0)
+#define VFIO_PASID_TABLE_FLAG_UNSET	(1 << 1)
+	struct iommu_pasid_table_config config; /* used on SET */
+};
+
+#define VFIO_IOMMU_SET_PASID_TABLE	_IO(VFIO_TYPE, VFIO_BASE + 22)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.20.1


* [PATCH v9 02/11] vfio: VFIO_IOMMU_CACHE_INVALIDATE
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
  2019-07-11 13:56 ` [PATCH v9 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 03/11] vfio: VFIO_IOMMU_SET_MSI_BINDING Eric Auger
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

From: "Liu, Yi L" <yi.l.liu@linux.intel.com>

When the guest "owns" the stage 1 translation structures, the host
IOMMU driver has no knowledge of caching structure updates unless
the guest invalidation requests are trapped and passed down to the
host.

This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl, which aims
at propagating guest stage 1 IOMMU cache invalidations to the host.
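
A hedged sketch of the userspace side of that forwarding (decoding the
guest command into struct iommu_cache_invalidate_info, defined in
<linux/iommu.h> by the IOMMU part of the series, is assumed to be done
by the vIOMMU emulation):

  struct vfio_iommu_type1_cache_invalidate ustruct = {
          .argsz = sizeof(ustruct),
          .flags = 0,     /* must be zero, the kernel checks it */
          /* .info describes the invalidation: cache type and
           * granularity (pasid / address range), etc. */
  };
  if (ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &ustruct))
          perror("VFIO_IOMMU_CACHE_INVALIDATE");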

Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- change the ioctl ID

v6 -> v7:
- Use iommu_capsule struct
- renamed vfio_iommu_for_each_dev into vfio_iommu_lookup_dev
  due to checkpatch error related to for_each_dev suffix

v2 -> v3:
- introduce vfio_iommu_for_each_dev back in this patch

v1 -> v2:
- s/TLB/CACHE
- remove vfio_iommu_task usage
- commit message rewording
---
 drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       | 13 ++++++++
 2 files changed, 68 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 757a859f96a3..307f059d3080 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -117,6 +117,34 @@ struct vfio_regions {
 #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
 					(!list_empty(&iommu->domain_list))
 
+struct domain_capsule {
+	struct iommu_domain *domain;
+	void *data;
+};
+
+/* iommu->lock must be held */
+static int
+vfio_iommu_lookup_dev(struct vfio_iommu *iommu,
+		      int (*fn)(struct device *dev, void *data),
+		      void *data)
+{
+	struct domain_capsule dc = {.data = data};
+	struct vfio_domain *d;
+	struct vfio_group *g;
+	int ret = 0;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		dc.domain = d->domain;
+		list_for_each_entry(g, &d->group_list, next) {
+			ret = iommu_group_for_each_dev(g->iommu_group,
+						       &dc, fn);
+			if (ret)
+				break;
+		}
+	}
+	return ret;
+}
+
 static int put_pfn(unsigned long pfn, int prot);
 
 /*
@@ -1792,6 +1820,15 @@ vfio_attach_pasid_table(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static int vfio_cache_inv_fn(struct device *dev, void *data)
+{
+	struct domain_capsule *dc = (struct domain_capsule *)data;
+	struct vfio_iommu_type1_cache_invalidate *ustruct =
+		(struct vfio_iommu_type1_cache_invalidate *)dc->data;
+
+	return iommu_cache_invalidate(dc->domain, dev, &ustruct->info);
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -1881,6 +1918,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 			return 0;
 		}
 		return -EINVAL;
+	} else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
+		struct vfio_iommu_type1_cache_invalidate ustruct;
+		int ret;
+
+		minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
+				    info);
+
+		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (ustruct.argsz < minsz || ustruct.flags)
+			return -EINVAL;
+
+		mutex_lock(&iommu->lock);
+		ret = vfio_iommu_lookup_dev(iommu, vfio_cache_inv_fn,
+					    &ustruct);
+		mutex_unlock(&iommu->lock);
+		return ret;
 	}
 
 	return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 96039da0a52d..b31c25b682c5 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -782,6 +782,19 @@ struct vfio_iommu_type1_set_pasid_table {
 
 #define VFIO_IOMMU_SET_PASID_TABLE	_IO(VFIO_TYPE, VFIO_BASE + 22)
 
+/**
+ * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 23,
+ *			struct vfio_iommu_type1_cache_invalidate)
+ *
+ * Propagate guest IOMMU cache invalidation to the host.
+ */
+struct vfio_iommu_type1_cache_invalidate {
+	__u32   argsz;
+	__u32   flags;
+	struct iommu_cache_invalidate_info info;
+};
+#define VFIO_IOMMU_CACHE_INVALIDATE      _IO(VFIO_TYPE, VFIO_BASE + 23)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.20.1


* [PATCH v9 03/11] vfio: VFIO_IOMMU_SET_MSI_BINDING
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
  2019-07-11 13:56 ` [PATCH v9 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE Eric Auger
  2019-07-11 13:56 ` [PATCH v9 02/11] vfio: VFIO_IOMMU_CACHE_INVALIDATE Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type Eric Auger
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

This patch adds the VFIO_IOMMU_SET_MSI_BINDING ioctl which aims
to (un)register the guest MSI binding to the host. The host
can then use those stage 1 bindings to build a nested stage
binding targeting the physical MSIs.
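
A minimal userspace sketch, assuming giova/doorbell_gpa come from
trapping the guest MSI programming (illustrative only):

  struct vfio_iommu_type1_set_msi_binding msi_binding = {
          .argsz = sizeof(msi_binding),
          .flags = VFIO_IOMMU_BIND_MSI,
          .iova  = giova,         /* MSI IOVA programmed by the guest */
          .gpa   = doorbell_gpa,  /* vITS/vGIC doorbell GPA */
          .size  = 0x1000,
  };
  ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi_binding);

  /* teardown: the binding is identified by its IOVA alone */
  msi_binding.flags = VFIO_IOMMU_UNBIND_MSI;
  ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi_binding);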

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- merge VFIO_IOMMU_BIND_MSI/VFIO_IOMMU_UNBIND_MSI into a single
  VFIO_IOMMU_SET_MSI_BINDING ioctl
- ioctl id changed

v6 -> v7:
- removed the dev arg

v3 -> v4:
- add UNBIND
- unwind on BIND error

v2 -> v3:
- adapt to new proto of bind_guest_msi
- directly use vfio_iommu_for_each_dev

v1 -> v2:
- s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
---
 drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       | 20 ++++++++++++
 2 files changed, 75 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 307f059d3080..c858be878590 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1829,6 +1829,42 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
 	return iommu_cache_invalidate(dc->domain, dev, &ustruct->info);
 }
 
+static int
+vfio_bind_msi(struct vfio_iommu *iommu,
+	      dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+	struct vfio_domain *d;
+	int ret = 0;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		ret = iommu_bind_guest_msi(d->domain, giova, gpa, size);
+		if (ret)
+			goto unwind;
+	}
+	goto unlock;
+unwind:
+	list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+		iommu_unbind_guest_msi(d->domain, giova);
+	}
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static void
+vfio_unbind_msi(struct vfio_iommu *iommu, dma_addr_t giova)
+{
+	struct vfio_domain *d;
+
+	mutex_lock(&iommu->lock);
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		iommu_unbind_guest_msi(d->domain, giova);
+	}
+	mutex_unlock(&iommu->lock);
+}
+
 static long vfio_iommu_type1_ioctl(void *iommu_data,
 				   unsigned int cmd, unsigned long arg)
 {
@@ -1936,6 +1972,25 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 					    &ustruct);
 		mutex_unlock(&iommu->lock);
 		return ret;
+	} else if (cmd == VFIO_IOMMU_SET_MSI_BINDING) {
+		struct vfio_iommu_type1_set_msi_binding ustruct;
+
+		minsz = offsetofend(struct vfio_iommu_type1_set_msi_binding,
+				    size);
+
+		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+			return -EFAULT;
+
+		if (ustruct.argsz < minsz)
+			return -EINVAL;
+
+		if (ustruct.flags == VFIO_IOMMU_BIND_MSI)
+			return vfio_bind_msi(iommu, ustruct.iova, ustruct.gpa,
+						ustruct.size);
+		if (ustruct.flags != VFIO_IOMMU_UNBIND_MSI)
+			return -EINVAL;
+		vfio_unbind_msi(iommu, ustruct.iova);
+		return 0;
 	}
 
 	return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index b31c25b682c5..deadbd84f2cf 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -795,6 +795,26 @@ struct vfio_iommu_type1_cache_invalidate {
 };
 #define VFIO_IOMMU_CACHE_INVALIDATE      _IO(VFIO_TYPE, VFIO_BASE + 23)
 
+/**
+ * VFIO_IOMMU_SET_MSI_BINDING - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
+ *			struct vfio_iommu_type1_set_msi_binding)
+ *
+ * Pass a stage 1 MSI doorbell mapping to the host so that this
+ * latter can build a nested stage2 mapping. Or conversely tear
+ * down a previously bound stage 1 MSI binding.
+ */
+struct vfio_iommu_type1_set_msi_binding {
+	__u32   argsz;
+	__u32   flags;
+#define VFIO_IOMMU_BIND_MSI	(1 << 0)
+#define VFIO_IOMMU_UNBIND_MSI	(1 << 1)
+	__u64	iova;	/* MSI guest IOVA */
+	/* Fields below are used on BIND */
+	__u64	gpa;	/* MSI guest physical address */
+	__u64	size;	/* size of stage1 mapping (bytes) */
+};
+#define VFIO_IOMMU_SET_MSI_BINDING      _IO(VFIO_TYPE, VFIO_BASE + 24)
+
 /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
 
 /*
-- 
2.20.1


* [PATCH v9 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (2 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 03/11] vfio: VFIO_IOMMU_SET_MSI_BINDING Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 05/11] vfio/pci: Register an iommu fault handler Eric Auger
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Add a new specific DMA_FAULT region aiming to expose nested mode
translation faults.

The region has a ring buffer that contains the actual fault
records, plus a header allowing to handle them (head/tail indices,
max capacity, entry size). At the moment the region is dimensioned
for 512 fault records.
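
Before the mmap support added later in the series, a consumer can
drive the ring with plain pread()/pwrite() on the region. A sketch,
assuming device/region_off were obtained via
VFIO_DEVICE_GET_REGION_INFO and struct iommu_fault comes from the
IOMMU part of the series:

  struct vfio_region_dma_fault header;
  struct iommu_fault record;
  __u32 tail;

  pread(device, &header, sizeof(header), region_off);
  tail = header.tail;

  while (tail != header.head) {
          pread(device, &record, header.entry_size,
                region_off + header.offset + tail * header.entry_size);
          /* ... hand the record over to the vIOMMU emulation ... */
          tail = (tail + 1) % header.nb_entries;
          /* the kernel only accepts a 4-byte write at offset 0,
           * i.e. an update of the tail index */
          pwrite(device, &tail, 4, region_off);
          pread(device, &header, sizeof(header), region_off);
  }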

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- Use a single region instead of a prod/cons region

v4 -> v5
- check cons is not null in vfio_pci_check_cons_fault

v3 -> v4:
- use 2 separate regions, respectively in read and write modes
- add the version capability
---
 drivers/vfio/pci/vfio_pci.c         | 68 +++++++++++++++++++++++++++++
 drivers/vfio/pci/vfio_pci_private.h | 10 +++++
 drivers/vfio/pci/vfio_pci_rdwr.c    | 45 +++++++++++++++++++
 include/uapi/linux/vfio.h           | 35 +++++++++++++++
 4 files changed, 158 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 703948c9fbe1..3b091cfdaf46 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -258,6 +258,69 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
 	return ret;
 }
 
+static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
+				       struct vfio_pci_region *region)
+{
+}
+
+static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+					     struct vfio_pci_region *region,
+					     struct vfio_info_cap *caps)
+{
+	struct vfio_region_info_cap_fault cap = {
+		.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
+		.header.version = 1,
+		.version = 1,
+	};
+	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
+	.rw		= vfio_pci_dma_fault_rw,
+	.release	= vfio_pci_dma_fault_release,
+	.add_capability = vfio_pci_dma_fault_add_capability,
+};
+
+#define DMA_FAULT_RING_LENGTH 512
+
+static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
+{
+	struct vfio_region_dma_fault *header;
+	size_t size;
+	int ret;
+
+	mutex_init(&vdev->fault_queue_lock);
+
+	/*
+	 * We provision 1 page for the header and space for
+	 * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
+	 */
+	size = ALIGN(sizeof(struct iommu_fault) *
+		     DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
+
+	vdev->fault_pages = kzalloc(size, GFP_KERNEL);
+	if (!vdev->fault_pages)
+		return -ENOMEM;
+
+	ret = vfio_pci_register_dev_region(vdev,
+		VFIO_REGION_TYPE_NESTED,
+		VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
+		&vfio_pci_dma_fault_regops, size,
+		VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+		vdev->fault_pages);
+	if (ret)
+		goto out;
+
+	header = (struct vfio_region_dma_fault *)vdev->fault_pages;
+	header->entry_size = sizeof(struct iommu_fault);
+	header->nb_entries = DMA_FAULT_RING_LENGTH;
+	header->offset = sizeof(struct vfio_region_dma_fault);
+	return 0;
+out:
+	kfree(vdev->fault_pages);
+	return ret;
+}
+
 static int vfio_pci_enable(struct vfio_pci_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
@@ -356,6 +419,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 		}
 	}
 
+	ret = vfio_pci_init_dma_fault_region(vdev);
+	if (ret)
+		goto disable_exit;
+
 	vfio_pci_probe_mmaps(vdev);
 
 	return 0;
@@ -1371,6 +1438,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
 
 	vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
 	kfree(vdev->region);
+	kfree(vdev->fault_pages);
 	mutex_destroy(&vdev->ioeventfds_lock);
 
 	if (!disable_idle_d3)
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index ee6ee91718a4..fde40db3cd34 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -119,6 +119,8 @@ struct vfio_pci_device {
 	int			ioeventfds_nr;
 	struct eventfd_ctx	*err_trigger;
 	struct eventfd_ctx	*req_trigger;
+	u8			*fault_pages;
+	struct mutex		fault_queue_lock;
 	struct list_head	dummy_resources_list;
 	struct mutex		ioeventfds_lock;
 	struct list_head	ioeventfds_list;
@@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
 extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
 			       uint64_t data, int count, int fd);
 
+struct vfio_pci_fault_abi {
+	u32 entry_size;
+};
+
+extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
+				    char __user *buf, size_t count,
+				    loff_t *ppos, bool iswrite);
+
 extern int vfio_pci_init_perm_bits(void);
 extern void vfio_pci_uninit_perm_bits(void);
 
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index 0120d8324a40..829c5284c6be 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
 	return done;
 }
 
+size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
+			     size_t count, loff_t *ppos, bool iswrite)
+{
+	unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+	loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+	void *base = vdev->region[i].data;
+	int ret = -EFAULT;
+
+	if (pos >= vdev->region[i].size)
+		return -EINVAL;
+
+	count = min(count, (size_t)(vdev->region[i].size - pos));
+
+	mutex_lock(&vdev->fault_queue_lock);
+
+	if (iswrite) {
+		struct vfio_region_dma_fault *header =
+			(struct vfio_region_dma_fault *)base;
+		u32 new_tail;
+
+		if (pos != 0 || count != 4) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+
+		if (copy_from_user((void *)&new_tail, buf, count))
+			goto unlock;
+
+		if (new_tail >= header->nb_entries) {
+			ret = -EINVAL;
+			goto unlock;
+		}
+		header->tail = new_tail;
+	} else {
+		if (copy_to_user(buf, base + pos, count))
+			goto unlock;
+	}
+	*ppos += count;
+	ret = count;
+unlock:
+	mutex_unlock(&vdev->fault_queue_lock);
+	return ret;
+}
+
+
 static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
 {
 	struct vfio_pci_ioeventfd *ioeventfd = opaque;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index deadbd84f2cf..bec761edae9b 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -307,6 +307,9 @@ struct vfio_region_info_cap_type {
 #define VFIO_REGION_TYPE_GFX                    (1)
 #define VFIO_REGION_SUBTYPE_GFX_EDID            (1)
 
+#define VFIO_REGION_TYPE_NESTED			(2)
+#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT	(1)
+
 /**
  * struct vfio_region_gfx_edid - EDID region layout.
  *
@@ -701,6 +704,38 @@ struct vfio_device_ioeventfd {
 
 #define VFIO_DEVICE_IOEVENTFD		_IO(VFIO_TYPE, VFIO_BASE + 16)
 
+
+/*
+ * Capability exposed by the DMA fault region
+ * @version: ABI version
+ */
+#define VFIO_REGION_INFO_CAP_DMA_FAULT	6
+
+struct vfio_region_info_cap_fault {
+	struct vfio_info_cap_header header;
+	__u32 version;
+};
+
+/*
+ * DMA Fault Region Layout
+ * @tail: index relative to the start of the ring buffer at which the
+ *        consumer finds the next item in the buffer
+ * @entry_size: fault ring buffer entry size in bytes
+ * @nb_entries: max capacity of the fault ring buffer
+ * @offset: ring buffer offset relative to the start of the region
+ * @head: index relative to the start of the ring buffer at which the
+ *        producer (kernel) inserts items into the buffers
+ */
+struct vfio_region_dma_fault {
+	/* Write-Only */
+	__u32   tail;
+	/* Read-Only */
+	__u32   entry_size;
+	__u32	nb_entries;
+	__u32	offset;
+	__u32   head;
+};
+
 /* -------- API for Type1 VFIO IOMMU -------- */
 
 /**
-- 
2.20.1


* [PATCH v9 05/11] vfio/pci: Register an iommu fault handler
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (3 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 06/11] vfio/pci: Allow to mmap the fault queue Eric Auger
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Register an IOMMU fault handler which records faults in
the DMA FAULT region ring buffer. In a subsequent patch, we
will add the signaling of a specific eventfd to allow
userspace to be notified whenever a new fault has shown up.
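
For reference, the ring accounting relies on CIRC_SPACE() from
include/linux/circ_buf.h, which requires a power-of-two size; the
512-entry ring satisfies this. Its semantics, as an equivalent
expansion (a sketch, not the header's exact definition):

  /* free slots; one slot is always left empty so that
   * head == tail unambiguously means "empty", never "full" */
  #define CIRC_SPACE(head, tail, size) \
          (((tail) - ((head) + 1)) & ((size) - 1))

  /* with size = 512:
   *   head == tail              -> 511 free slots (empty ring)
   *   (head + 1) % size == tail -> 0 free slots: the handler
   *                                drops the fault, -ENOSPC
   */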

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- handler now takes an iommu_fault handle
- eventfd signaling moved to a subsequent patch
- check the fault type and return an error if != UNRECOV
- still the fault handler registration can fail. We need to
  reach an agreement about how to deal with the situation

v3 -> v4:
- move iommu_unregister_device_fault_handler to vfio_pci_release
---
 drivers/vfio/pci/vfio_pci.c | 42 +++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 3b091cfdaf46..c6fdbd64bfb2 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -27,6 +27,7 @@
 #include <linux/vfio.h>
 #include <linux/vgaarb.h>
 #include <linux/nospec.h>
+#include <linux/circ_buf.h>
 
 #include "vfio_pci_private.h"
 
@@ -281,6 +282,38 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
+int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
+{
+	struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
+	struct vfio_region_dma_fault *reg =
+		(struct vfio_region_dma_fault *)vdev->fault_pages;
+	struct iommu_fault *new;
+	int head, tail, size;
+	int ret = 0;
+
+	if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
+		return -ENOENT;
+
+	mutex_lock(&vdev->fault_queue_lock);
+
+	head = reg->head;
+	tail = reg->tail;
+	size = reg->nb_entries;
+
+	if (CIRC_SPACE(head, tail, size) < 1) {
+		ret = -ENOSPC;
+		goto unlock;
+	}
+
+	new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
+				     head * reg->entry_size);
+	*new = *fault;
+	reg->head = (head + 1) % size;
+unlock:
+	mutex_unlock(&vdev->fault_queue_lock);
+	return ret;
+}
+
 #define DMA_FAULT_RING_LENGTH 512
 
 static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
@@ -315,6 +348,13 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
 	header->entry_size = sizeof(struct iommu_fault);
 	header->nb_entries = DMA_FAULT_RING_LENGTH;
 	header->offset = sizeof(struct vfio_region_dma_fault);
+
+	ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
+					vfio_pci_iommu_dev_fault_handler,
+					vdev);
+	if (ret)
+		goto out;
+
 	return 0;
 out:
 	kfree(vdev->fault_pages);
@@ -530,6 +570,8 @@ static void vfio_pci_release(void *device_data)
 	if (!(--vdev->refcnt)) {
 		vfio_spapr_pci_eeh_release(vdev->pdev);
 		vfio_pci_disable(vdev);
+		/* TODO: handle iommu_unregister_device_fault_handler() failure */
+		iommu_unregister_device_fault_handler(&vdev->pdev->dev);
 	}
 
 	mutex_unlock(&vdev->reflck->lock);
-- 
2.20.1


* [PATCH v9 06/11] vfio/pci: Allow to mmap the fault queue
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (4 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 05/11] vfio/pci: Register an iommu fault handler Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 07/11] vfio: Use capability chains to handle device specific irq Eric Auger
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

The DMA FAULT region contains the fault ring buffer.
There is benefit in letting userspace mmap this area.
Expose this mmappable area through a sparse mmap entry
and implement the mmap operation.
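
A userspace sketch (discovery of region_off/region_size through
VFIO_DEVICE_GET_REGION_INFO and its sparse mmap capability is
omitted; struct iommu_fault comes from the IOMMU part of the series):

  #include <sys/mman.h>
  #include <unistd.h>

  long psz = sysconf(_SC_PAGESIZE);

  /* page 0 (the header) stays accessible through read() only; the
   * sparse mmap capability advertises [PAGE_SIZE, region size) */
  void *ring = mmap(NULL, region_size - psz, PROT_READ, MAP_SHARED,
                    device, region_off + psz);
  if (ring != MAP_FAILED) {
          struct iommu_fault *records = ring;
          /* records[0..nb_entries-1] now directly readable; tail
           * updates still go through a 4-byte write on the region */
  }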

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- remove unused index local variable in vfio_pci_fault_mmap
---
 drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index c6fdbd64bfb2..7d62dcdbdf0b 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -264,21 +264,75 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
 {
 }
 
+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+				   struct vfio_pci_region *region,
+				   struct vm_area_struct *vma)
+{
+	u64 phys_len, req_len, pgoff, req_start;
+	unsigned long long addr;
+	int ret;
+
+	phys_len = region->size;
+
+	req_len = vma->vm_end - vma->vm_start;
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	req_start = pgoff << PAGE_SHIFT;
+
+	/* only the fault ring, starting at the second page, is mmappable */
+	if (req_start < PAGE_SIZE)
+		return -EINVAL;
+
+	if (req_start + req_len > phys_len)
+		return -EINVAL;
+
+	addr = virt_to_phys(vdev->fault_pages);
+	vma->vm_private_data = vdev;
+	vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+			      req_len, vma->vm_page_prot);
+	return ret;
+}
+
 static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
 					     struct vfio_pci_region *region,
 					     struct vfio_info_cap *caps)
 {
+	struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
 	struct vfio_region_info_cap_fault cap = {
 		.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
 		.header.version = 1,
 		.version = 1,
 	};
-	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+	size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+	int ret;
+
+	ret = vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+	if (ret)
+		return ret;
+
+	sparse = kzalloc(size, GFP_KERNEL);
+	if (!sparse)
+		return -ENOMEM;
+
+	sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+	sparse->header.version = 1;
+	sparse->nr_areas = 1;
+	sparse->areas[0].offset = PAGE_SIZE;
+	sparse->areas[0].size = region->size - PAGE_SIZE;
+
+	ret = vfio_info_add_capability(caps, &sparse->header, size);
+	if (ret)
+		kfree(sparse);
+
+	return ret;
 }
 
 static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.rw		= vfio_pci_dma_fault_rw,
 	.release	= vfio_pci_dma_fault_release,
+	.mmap		= vfio_pci_dma_fault_mmap,
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
@@ -339,7 +393,8 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
 		VFIO_REGION_TYPE_NESTED,
 		VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
 		&vfio_pci_dma_fault_regops, size,
-		VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+		VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+		VFIO_REGION_INFO_FLAG_MMAP,
 		vdev->fault_pages);
 	if (ret)
 		goto out;
@@ -347,7 +402,7 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
 	header = (struct vfio_region_dma_fault *)vdev->fault_pages;
 	header->entry_size = sizeof(struct iommu_fault);
 	header->nb_entries = DMA_FAULT_RING_LENGTH;
-	header->offset = sizeof(struct vfio_region_dma_fault);
+	header->offset = PAGE_SIZE;
 
 	ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
 					vfio_pci_iommu_dev_fault_handler,
-- 
2.20.1


* [PATCH v9 07/11] vfio: Use capability chains to handle device specific irq
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (5 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 06/11] vfio/pci: Allow to mmap the fault queue Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 08/11] vfio: Add new IRQ for DMA fault reporting Eric Auger
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

From: Tina Zhang <tina.zhang@intel.com>

Cap the number of IRQs with fixed indexes and use capability chains
to chain device specific IRQs.
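
A sketch of how userspace is expected to walk the new chain (buf is
assumed to be a char buffer of info->argsz bytes, re-fetched after
the kernel reported VFIO_IRQ_INFO_FLAG_CAPS):

  struct vfio_irq_info *info = (struct vfio_irq_info *)buf;
  struct vfio_info_cap_header *hdr;
  __u32 off = info->cap_offset;

  while (off) {
          hdr = (struct vfio_info_cap_header *)(buf + off);
          if (hdr->id == VFIO_IRQ_INFO_CAP_TYPE) {
                  struct vfio_irq_info_cap_type *t = (void *)hdr;
                  /* t->type / t->subtype identify the device
                   * specific IRQ behind this index */
          }
          off = hdr->next;        /* 0 terminates the chain */
  }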

Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
[Eric: Put cap_offset at the end of the vfio_irq_info struct,
remove GFX IRQ at the moment and remove any reference to this latter
in the commit message]

---
---
 include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index bec761edae9b..b53714ae02c5 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -452,11 +452,27 @@ struct vfio_irq_info {
 #define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
 #define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
 #define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+#define VFIO_IRQ_INFO_FLAG_CAPS		(1 << 4) /* Info supports caps */
 	__u32	index;		/* IRQ index */
 	__u32	count;		/* Number of IRQs within this index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
 };
 #define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
 
+/*
+ * The irq type capability allows IRQs unique to a specific device or
+ * class of devices to be exposed.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IRQ_INFO_CAP_TYPE      3
+
+struct vfio_irq_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;     /* global per bus driver */
+	__u32 subtype;  /* type specific */
+};
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
@@ -558,7 +574,8 @@ enum {
 	VFIO_PCI_MSIX_IRQ_INDEX,
 	VFIO_PCI_ERR_IRQ_INDEX,
 	VFIO_PCI_REQ_IRQ_INDEX,
-	VFIO_PCI_NUM_IRQS
+	VFIO_PCI_NUM_IRQS = 5	/* Fixed user ABI, IRQ indexes >=5 use   */
+				/* device specific cap to define content */
 };
 
 /*
-- 
2.20.1


* [PATCH v9 08/11] vfio: Add new IRQ for DMA fault reporting
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (6 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 07/11] vfio: Use capability chains to handle device specific irq Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 09/11] vfio/pci: Add framework for custom interrupt indices Eric Auger
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Add a new IRQ type/subtype to get notification on nested
stage DMA faults.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 include/uapi/linux/vfio.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index b53714ae02c5..58607809e81a 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -473,6 +473,9 @@ struct vfio_irq_info_cap_type {
 	__u32 subtype;  /* type specific */
 };
 
+#define VFIO_IRQ_TYPE_NESTED				(1)
+#define VFIO_IRQ_SUBTYPE_DMA_FAULT			(1)
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
-- 
2.20.1


* [PATCH v9 09/11] vfio/pci: Add framework for custom interrupt indices
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (7 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 08/11] vfio: Add new IRQ for DMA fault reporting Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 10/11] vfio/pci: Register and allow DMA FAULT IRQ signaling Eric Auger
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Implement IRQ capability chain infrastructure. All interrupt
indexes beyond VFIO_PCI_NUM_IRQS are handled as extended
interrupts. They are registered with a specific type/subtype
and supported flags.
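
From userspace, the extended indexes can then be discovered by probing
every index past the fixed ones (a sketch; error handling trimmed):

  struct vfio_device_info dinfo = { .argsz = sizeof(dinfo) };

  ioctl(device, VFIO_DEVICE_GET_INFO, &dinfo);

  for (unsigned int i = VFIO_PCI_NUM_IRQS; i < dinfo.num_irqs; i++) {
          struct vfio_irq_info info = {
                  .argsz = sizeof(info),
                  .index = i,
          };

          ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, &info);
          if (!(info.flags & VFIO_IRQ_INFO_FLAG_CAPS))
                  continue;
          /* re-issue the ioctl with an info.argsz byte buffer and
           * walk the chain for VFIO_IRQ_INFO_CAP_TYPE, as sketched
           * in the previous patch */
  }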

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 drivers/vfio/pci/vfio_pci.c         | 100 +++++++++++++++++++++++-----
 drivers/vfio/pci/vfio_pci_intrs.c   |  62 +++++++++++++++++
 drivers/vfio/pci/vfio_pci_private.h |  14 ++++
 3 files changed, 158 insertions(+), 18 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 7d62dcdbdf0b..dcc246202094 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -541,6 +541,14 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
 				VFIO_IRQ_SET_ACTION_TRIGGER,
 				vdev->irq_type, 0, 0, NULL);
 
+	for (i = 0; i < vdev->num_ext_irqs; i++)
+		vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE |
+					VFIO_IRQ_SET_ACTION_TRIGGER,
+					VFIO_PCI_NUM_IRQS + i, 0, 0, NULL);
+	vdev->num_ext_irqs = 0;
+	kfree(vdev->ext_irqs);
+	vdev->ext_irqs = NULL;
+
 	/* Device closed, don't need mutex here */
 	list_for_each_entry_safe(ioeventfd, ioeventfd_tmp,
 				 &vdev->ioeventfds_list, next) {
@@ -697,6 +705,9 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type)
 			return 1;
 	} else if (irq_type == VFIO_PCI_REQ_IRQ_INDEX) {
 		return 1;
+	} else if (irq_type >= VFIO_PCI_NUM_IRQS &&
+		   irq_type < VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs) {
+		return 1;
 	}
 
 	return 0;
@@ -866,7 +877,7 @@ static long vfio_pci_ioctl(void *device_data,
 			info.flags |= VFIO_DEVICE_FLAGS_RESET;
 
 		info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
-		info.num_irqs = VFIO_PCI_NUM_IRQS;
+		info.num_irqs = VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs;
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
 			-EFAULT : 0;
@@ -1021,36 +1032,88 @@ static long vfio_pci_ioctl(void *device_data,
 
 	} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
 		struct vfio_irq_info info;
+		struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+		unsigned long capsz;
 
 		minsz = offsetofend(struct vfio_irq_info, count);
 
+		/* For backward compatibility, cannot require this */
+		capsz = offsetofend(struct vfio_irq_info, cap_offset);
+
 		if (copy_from_user(&info, (void __user *)arg, minsz))
 			return -EFAULT;
 
-		if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
+		if (info.argsz < minsz ||
+			info.index >= VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs)
 			return -EINVAL;
 
-		switch (info.index) {
-		case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
-		case VFIO_PCI_REQ_IRQ_INDEX:
-			break;
-		case VFIO_PCI_ERR_IRQ_INDEX:
-			if (pci_is_pcie(vdev->pdev))
-				break;
-		/* fall through */
-		default:
-			return -EINVAL;
-		}
+		if (info.argsz >= capsz)
+			minsz = capsz;
 
 		info.flags = VFIO_IRQ_INFO_EVENTFD;
 
-		info.count = vfio_pci_get_irq_count(vdev, info.index);
-
-		if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+		switch (info.index) {
+		case VFIO_PCI_INTX_IRQ_INDEX:
 			info.flags |= (VFIO_IRQ_INFO_MASKABLE |
 				       VFIO_IRQ_INFO_AUTOMASKED);
-		else
+			break;
+		case VFIO_PCI_MSI_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
+		case VFIO_PCI_REQ_IRQ_INDEX:
 			info.flags |= VFIO_IRQ_INFO_NORESIZE;
+			break;
+		case VFIO_PCI_ERR_IRQ_INDEX:
+			info.flags |= VFIO_IRQ_INFO_NORESIZE;
+			if (!pci_is_pcie(vdev->pdev))
+				return -EINVAL;
+			break;
+		/* all other indexes are device specific (extended) IRQs */
+		default:
+		{
+			struct vfio_irq_info_cap_type cap_type = {
+				.header.id = VFIO_IRQ_INFO_CAP_TYPE,
+				.header.version = 1 };
+			int ret, i;
+
+			if (info.index >= VFIO_PCI_NUM_IRQS +
+						vdev->num_ext_irqs)
+				return -EINVAL;
+			info.index = array_index_nospec(info.index,
+							VFIO_PCI_NUM_IRQS +
+							vdev->num_ext_irqs);
+			i = info.index - VFIO_PCI_NUM_IRQS;
+
+			info.flags = vdev->ext_irqs[i].flags;
+			cap_type.type = vdev->ext_irqs[i].type;
+			cap_type.subtype = vdev->ext_irqs[i].subtype;
+
+			ret = vfio_info_add_capability(&caps,
+					&cap_type.header,
+					sizeof(cap_type));
+			if (ret)
+				return ret;
+		}
+		}
+
+		info.count = vfio_pci_get_irq_count(vdev, info.index);
+
+		if (caps.size) {
+			info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
+			if (info.argsz < sizeof(info) + caps.size) {
+				info.argsz = sizeof(info) + caps.size;
+				info.cap_offset = 0;
+			} else {
+				vfio_info_cap_shift(&caps, sizeof(info));
+				if (copy_to_user((void __user *)arg +
+						  sizeof(info), caps.buf,
+						  caps.size)) {
+					kfree(caps.buf);
+					return -EFAULT;
+				}
+				info.cap_offset = sizeof(info);
+			}
+
+			kfree(caps.buf);
+		}
 
 		return copy_to_user((void __user *)arg, &info, minsz) ?
 			-EFAULT : 0;
@@ -1069,7 +1132,8 @@ static long vfio_pci_ioctl(void *device_data,
 		max = vfio_pci_get_irq_count(vdev, hdr.index);
 
 		ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
-						 VFIO_PCI_NUM_IRQS, &data_size);
+				VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs,
+				&data_size);
 		if (ret)
 			return ret;
 
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 3fa3f728fb39..0c3802937bd2 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -19,6 +19,7 @@
 #include <linux/vfio.h>
 #include <linux/wait.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>
 
 #include "vfio_pci_private.h"
 
@@ -619,6 +620,24 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
 					       count, flags, data);
 }
 
+static int vfio_pci_set_ext_irq_trigger(struct vfio_pci_device *vdev,
+					unsigned int index, unsigned int start,
+					unsigned int count, uint32_t flags,
+					void *data)
+{
+	int i;
+
+	if (start != 0 || count > 1)
+		return -EINVAL;
+
+	index = array_index_nospec(index,
+				   VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs);
+	i = index - VFIO_PCI_NUM_IRQS;
+
+	return vfio_pci_set_ctx_trigger_single(&vdev->ext_irqs[i].trigger,
+					       count, flags, data);
+}
+
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
 			    unsigned index, unsigned start, unsigned count,
 			    void *data)
@@ -668,6 +687,13 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
 			break;
 		}
 		break;
+	default:
+		switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+		case VFIO_IRQ_SET_ACTION_TRIGGER:
+			func = vfio_pci_set_ext_irq_trigger;
+			break;
+		}
+		break;
 	}
 
 	if (!func)
@@ -675,3 +701,39 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
 
 	return func(vdev, index, start, count, flags, data);
 }
+
+int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+			       unsigned int type, unsigned int subtype)
+{
+	int i;
+
+	for (i = 0; i < vdev->num_ext_irqs; i++) {
+		if (vdev->ext_irqs[i].type == type &&
+		    vdev->ext_irqs[i].subtype == subtype) {
+			return i;
+		}
+	}
+	return -EINVAL;
+}
+
+int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+			  unsigned int type, unsigned int subtype,
+			  u32 flags)
+{
+	struct vfio_ext_irq *ext_irqs;
+
+	ext_irqs = krealloc(vdev->ext_irqs,
+			    (vdev->num_ext_irqs + 1) * sizeof(*ext_irqs),
+			    GFP_KERNEL);
+	if (!ext_irqs)
+		return -ENOMEM;
+
+	vdev->ext_irqs = ext_irqs;
+
+	vdev->ext_irqs[vdev->num_ext_irqs].type = type;
+	vdev->ext_irqs[vdev->num_ext_irqs].subtype = subtype;
+	vdev->ext_irqs[vdev->num_ext_irqs].flags = flags;
+	vdev->ext_irqs[vdev->num_ext_irqs].trigger = NULL;
+	vdev->num_ext_irqs++;
+	return 0;
+}
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index fde40db3cd34..71fa7883dfdf 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -73,6 +73,13 @@ struct vfio_pci_region {
 	u32				flags;
 };
 
+struct vfio_ext_irq {
+	u32				type;
+	u32				subtype;
+	u32				flags;
+	struct eventfd_ctx		*trigger;
+};
+
 struct vfio_pci_dummy_resource {
 	struct resource		resource;
 	int			index;
@@ -96,6 +103,8 @@ struct vfio_pci_device {
 	struct vfio_pci_irq_ctx	*ctx;
 	int			num_ctx;
 	int			irq_type;
+	struct vfio_ext_irq	*ext_irqs;
+	int			num_ext_irqs;
 	int			num_regions;
 	struct vfio_pci_region	*region;
 	u8			msi_qmax;
@@ -134,6 +143,11 @@ struct vfio_pci_device {
 
 extern void vfio_pci_intx_mask(struct vfio_pci_device *vdev);
 extern void vfio_pci_intx_unmask(struct vfio_pci_device *vdev);
+extern int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+				 unsigned int type, unsigned int subtype,
+				 u32 flags);
+extern int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+				      unsigned int type, unsigned int subtype);
 
 extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev,
 				   uint32_t flags, unsigned index,
-- 
2.20.1


* [PATCH v9 10/11] vfio/pci: Register and allow DMA FAULT IRQ signaling
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (8 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 09/11] vfio/pci: Add framework for custom interrupt indices Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-11 13:56 ` [PATCH v9 11/11] vfio: Document nested stage control Eric Auger
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Register the VFIO_IRQ_TYPE_NESTED/VFIO_IRQ_SUBTYPE_DMA_FAULT
IRQ, which allows signaling a nested mode DMA fault.
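
Userspace can then wire an eventfd to this IRQ with the usual
VFIO_DEVICE_SET_IRQS pattern (a sketch; dma_fault_index is assumed to
be the extended index matched via the TYPE/SUBTYPE capability):

  #include <string.h>
  #include <sys/eventfd.h>

  int efd = eventfd(0, EFD_CLOEXEC);
  char buf[sizeof(struct vfio_irq_set) + sizeof(__s32)];
  struct vfio_irq_set *set = (struct vfio_irq_set *)buf;

  set->argsz = sizeof(buf);
  set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
  set->index = dma_fault_index;
  set->start = 0;
  set->count = 1;
  memcpy(set->data, &efd, sizeof(__s32));

  ioctl(device, VFIO_DEVICE_SET_IRQS, set);
  /* each read() on efd signals new record(s) in the fault ring */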

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 drivers/vfio/pci/vfio_pci.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index dcc246202094..a9df964e40c6 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -344,7 +344,7 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
 	struct vfio_region_dma_fault *reg =
 		(struct vfio_region_dma_fault *)vdev->fault_pages;
 	struct iommu_fault *new;
-	int head, tail, size;
+	int head, tail, size, ext_irq_index;
 	int ret = 0;
 
 	if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
@@ -365,7 +365,19 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
 	reg->head = (head + 1) % size;
 unlock:
 	mutex_unlock(&vdev->fault_queue_lock);
-	return ret;
+	if (ret)
+		return ret;
+
+	ext_irq_index = vfio_pci_get_ext_irq_index(vdev, VFIO_IRQ_TYPE_NESTED,
+						   VFIO_IRQ_SUBTYPE_DMA_FAULT);
+	if (ext_irq_index < 0)
+		return -EINVAL;
+
+	mutex_lock(&vdev->igate);
+	if (vdev->ext_irqs[ext_irq_index].trigger)
+		eventfd_signal(vdev->ext_irqs[ext_irq_index].trigger, 1);
+	mutex_unlock(&vdev->igate);
+	return 0;
 }
 
 #define DMA_FAULT_RING_LENGTH 512
@@ -518,6 +530,12 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 	if (ret)
 		goto disable_exit;
 
+	ret = vfio_pci_register_irq(vdev, VFIO_IRQ_TYPE_NESTED,
+				    VFIO_IRQ_SUBTYPE_DMA_FAULT,
+				    VFIO_IRQ_INFO_EVENTFD);
+	if (ret)
+		goto disable_exit;
+
 	vfio_pci_probe_mmaps(vdev);
 
 	return 0;
-- 
2.20.1


* [PATCH v9 11/11] vfio: Document nested stage control
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (9 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 10/11] vfio/pci: Register and allow DMA FAULT IRQ signaling Eric Auger
@ 2019-07-11 13:56 ` Eric Auger
  2019-07-12  7:38 ` [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) zhangfei.gao
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 25+ messages in thread
From: Eric Auger @ 2019-07-11 13:56 UTC (permalink / raw)
  To: eric.auger.pro, eric.auger, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls, one DMA FAULT region and an associated specific IRQ.

Let's document the process to follow to set up nested mode.

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- new names for SET_MSI_BINDING and SET_PASID_TABLE
- new layout for the DMA FAULT memory region and specific IRQ

v2 -> v3:
- document the new fault API

v1 -> v2:
- use the new ioctl names
- add doc related to fault handling
---
 Documentation/vfio.txt | 77 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/Documentation/vfio.txt b/Documentation/vfio.txt
index f1a4d3c3ba0b..563ebcec9224 100644
--- a/Documentation/vfio.txt
+++ b/Documentation/vfio.txt
@@ -239,6 +239,83 @@ group and can access them as follows::
 	/* Gratuitous device reset and go... */
 	ioctl(device, VFIO_DEVICE_RESET);
 
+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VT-d terminology.
+In the following text we use either term without distinction.
+
+This is useful when the guest is exposed to a virtual IOMMU and some
+devices are assigned to the guest through VFIO. Then the guest OS can use
+stage 1 (IOVA -> GPA), while the hypervisor uses stage 2 for VM isolation
+(GPA -> HPA).
+
+The guest gets ownership of the stage 1 page tables and also owns stage 1
+configuration structures. The hypervisor owns the root configuration structure
+(for security reasons), including the stage 2 configuration. This works as
+long as the configuration structures and page table formats are compatible
+between the virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through::
+
+	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage.
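+
+As an illustration, a minimal container setup (error handling omitted,
+and assuming the assigned device belongs to IOMMU group 12) could be::
+
+	int container = open("/dev/vfio/vfio", O_RDWR);
+	int group = open("/dev/vfio/12", O_RDWR);
+
+	/* check the nested extension is available before selecting it */
+	if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_NESTING_IOMMU))
+		/* fall back to VFIO_TYPE1v2_IOMMU */;
+
+	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
+	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);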
+
+Once groups are attached to the container, the guest stage 1 translation
+configuration data can be passed to VFIO by using::
+
+	ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pasid_table_info);
+
+This combines the guest stage 1 configuration with the hypervisor stage 2
+configuration. Stage 1 configuration structures are IOMMU type specific.
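+
+As a sketch, assuming the pasid table configuration layout proposed in the
+companion IOMMU series (field and flag names may differ in the final uapi)::
+
+	struct vfio_iommu_type1_set_pasid_table pasid_table_info = {
+		.argsz = sizeof(pasid_table_info),
+		.flags = VFIO_PASID_TABLE_FLAG_SET,
+	};
+
+	/* describe the guest stage 1 context descriptor table */
+	pasid_table_info.config.format = IOMMU_PASID_FORMAT_SMMUV3;
+	pasid_table_info.config.base_ptr = s1ctxptr; /* GPA of the CD table */
+
+	ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pasid_table_info);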
+
+As the stage 1 translation is fully delegated to the HW, translation faults
+encountered at the physical level need to be propagated up to the
+virtualizer and re-injected into the guest.
+
+Userspace must be prepared to receive faults. The VFIO-PCI device exposes
+one dedicated DMA FAULT region: it contains a ring buffer and a header
+used to manage the head/tail indices. The region is identified by the
+following type/subtype:
+- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
+
+The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT region
+capability that allows userspace to retrieve the ABI version of the fault
+records filled in by the host.
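+
+The capability is found by walking the standard VFIO capability chain of
+the region info. As a sketch (assuming info points to a vfio_region_info
+fetched with VFIO_DEVICE_GET_REGION_INFO and sized to hold the chain; the
+payload layout after the generic header is an assumption)::
+
+	struct vfio_info_cap_header *hdr;
+
+	for (hdr = (void *)info + info->cap_offset; hdr;
+	     hdr = hdr->next ? (void *)info + hdr->next : NULL) {
+		if (hdr->id == VFIO_REGION_INFO_CAP_PRODUCER_FAULT) {
+			__u32 version = *(__u32 *)(hdr + 1);
+			/* reject the region if version is not supported */
+			break;
+		}
+	}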
+
+On top of that region, userspace can be notified whenever a fault occurs
+at the physical level: the VFIO_IRQ_TYPE_NESTED/VFIO_IRQ_SUBTYPE_DMA_FAULT
+specific IRQ can be used to attach the eventfd to be signalled on each
+fault.
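+
+The eventfd is attached with the standard VFIO_DEVICE_SET_IRQS call; as an
+illustration (assuming dma_fault_index was previously discovered by walking
+the IRQ info capability chain)::
+
+	int efd = eventfd(0, EFD_CLOEXEC);
+	size_t sz = sizeof(struct vfio_irq_set) + sizeof(__s32);
+	struct vfio_irq_set *irq_set = malloc(sz);
+
+	irq_set->argsz = sz;
+	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
+			 VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = dma_fault_index;
+	irq_set->start = 0;
+	irq_set->count = 1;
+	memcpy(&irq_set->data, &efd, sizeof(efd));
+	ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);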
+
+The ring buffer containing the fault records can be mmapped. When
+userspace consumes a fault from the queue, it should increment the
+consumer index so that new fault records can replace the used ones.
+
+The queue size and the entry size can be retrieved from the header.
+As in any circular buffer scheme, the tail index must never overtake the
+producer index, and it must be less than the queue size; otherwise the
+update is rejected.
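+
+As a sketch, assuming the header page is only reachable through read/write
+on the region (it is not part of the mmappable area), a hypothetical header
+layout, and placeholder names (ring, fault_region_offset, consume_fault)::
+
+	struct dma_fault_header {	/* hypothetical, see the uapi */
+		__u32 tail;		/* consumer index, set by userspace */
+		__u32 entry_size;
+		__u32 nb_entries;
+		__u32 head;		/* producer index, set by the host */
+	} hdr;
+
+	pread(device, &hdr, sizeof(hdr), fault_region_offset);
+	while (hdr.tail != hdr.head) {
+		consume_fault(ring + hdr.tail * hdr.entry_size);
+		hdr.tail = (hdr.tail + 1) % hdr.nb_entries;
+	}
+	/* publish the new consumer index; the kernel checks its validity */
+	pwrite(device, &hdr.tail, sizeof(hdr.tail), fault_region_offset);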
+
+When the guest invalidates stage 1 related caches, the invalidations must
+be forwarded to the host through::
+
+	ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_data);
+
+These invalidations can happen at various granularities: page, context, ...
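+
+For example, forwarding a guest ASID invalidation could look as follows
+(struct and flag names follow the companion IOMMU series and may differ
+in the final uapi)::
+
+	struct vfio_iommu_type1_cache_invalidate inv_data = {
+		.argsz = sizeof(inv_data),
+	};
+
+	inv_data.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
+	inv_data.info.granularity = IOMMU_INV_GRANU_PASID;
+	/* invalidate all stage 1 TLB entries tagged with this guest ASID */
+	inv_data.info.granu.pasid_info.archid = guest_asid;
+	inv_data.info.granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
+
+	ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_data);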
+
+The ARM SMMU specification introduces another challenge: MSIs are
+translated by both the virtual SMMU and the physical SMMU. To build a
+nested mapping for the IOVA programmed into the assigned device, the guest
+needs to pass its IOVA/MSI doorbell GPA binding to the host. The
+hypervisor then completes the nested binding so that it eventually
+translates into the physical MSI doorbell.
+
+This is achieved by calling::
+
+	ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &guest_binding);
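+
+where guest_binding carries the stage 1 IOVA window and the guest doorbell
+GPA, along the lines of (field names assumed)::
+
+	struct vfio_iommu_type1_set_msi_binding guest_binding = {
+		.argsz = sizeof(guest_binding),
+		.flags = VFIO_IOMMU_BIND_MSI,
+		.iova = msi_iova,	/* IOVA the guest programmed */
+		.gpa = doorbell_gpa,	/* GPA of the guest MSI doorbell */
+		.size = 0x1000,
+	};
+
+	ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &guest_binding);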
+
 VFIO User API
 -------------------------------------------------------------------------------
 
-- 
2.20.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (10 preceding siblings ...)
  2019-07-11 13:56 ` [PATCH v9 11/11] vfio: Document nested stage control Eric Auger
@ 2019-07-12  7:38 ` zhangfei.gao
  2019-11-12 11:08 ` Shameerali Kolothum Thodi
  2019-11-20  8:15 ` Tomasz Nowicki
  13 siblings, 0 replies; 25+ messages in thread
From: zhangfei.gao @ 2019-07-12  7:38 UTC (permalink / raw)
  To: Eric Auger, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang



On 2019/7/11 9:56 PM, Eric Auger wrote:
> This series brings the VFIO part of HW nested paging support
> in the SMMUv3.
>
> The series depends on:
> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> (https://www.spinics.net/lists/kernel/msg3187714.html)
>
> 3 new IOCTLs are introduced that allow the userspace to
> 1) pass the guest stage 1 configuration
> 2) pass stage 1 MSI bindings
> 3) invalidate stage 1 related caches
>
> They map onto the related new IOMMU API functions.
>
> We introduce the capability to register specific interrupt
> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
> an eventfd to be signaled whenever a stage 1 related fault
> is detected at physical level. Also a specific region allows
> to expose the fault records to the user space.
>
> Best Regards
>
> Eric
>
> This series can be found at:
> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>
> It series includes Tina's patch steming from
> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> specific irq" plus patches originally contributed by Yi.
>
>
Thanks Eric.

I have tested VFIO mode in QEMU on a HiSilicon arm64 platform, using gIOVA.
QEMU command: -machine virt,gic_version=3,iommu=smmuv3 -device
vfio-pci,host=0000:75:00.1

Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
kernel: https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (11 preceding siblings ...)
  2019-07-12  7:38 ` [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) zhangfei.gao
@ 2019-11-12 11:08 ` Shameerali Kolothum Thodi
  2019-11-12 11:28   ` Auger Eric
  2019-11-20  8:15 ` Tomasz Nowicki
  13 siblings, 1 reply; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2019-11-12 11:08 UTC (permalink / raw)
  To: Eric Auger, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

[-- Attachment #1: Type: text/plain, Size: 8004 bytes --]

Hi Eric,

> -----Original Message-----
> From: kvmarm-bounces@lists.cs.columbia.edu
> [mailto:kvmarm-bounces@lists.cs.columbia.edu] On Behalf Of Eric Auger
> Sent: 11 July 2019 14:56
> To: eric.auger.pro@gmail.com; eric.auger@redhat.com;
> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> robin.murphy@arm.com
> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> marc.zyngier@arm.com; tina.zhang@intel.com
> Subject: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> 
> This series brings the VFIO part of HW nested paging support
> in the SMMUv3.
> 
> The series depends on:
> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> (https://www.spinics.net/lists/kernel/msg3187714.html)
> 
> 3 new IOCTLs are introduced that allow the userspace to
> 1) pass the guest stage 1 configuration
> 2) pass stage 1 MSI bindings
> 3) invalidate stage 1 related caches
> 
> They map onto the related new IOMMU API functions.
> 
> We introduce the capability to register specific interrupt
> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
> an eventfd to be signaled whenever a stage 1 related fault
> is detected at physical level. Also a specific region allows
> to expose the fault records to the user space.

I am trying to get this running on one of our platforms that has SMMUv3
dual stage support. I am seeing some issues when an ixgbe VF device is
passed through behind a vSMMUv3 in the guest.

Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5

And this is my Qemu cmd line,

./qemu-system-aarch64
-machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host \
-kernel Image \
-drive if=none,file=ubuntu,id=fs \
-device virtio-blk-device,drive=fs \
-device vfio-pci,host=0000:01:10.1 \
-bios QEMU_EFI.fd \
-net none \
-m 4G \
-nographic -D -d -enable-kvm \
-append "console=ttyAMA0 root=/dev/vda rw acpi=force"

The basic ping from Guest works fine,
root@ubuntu:~# ping 10.202.225.185
PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
...

But if I increase the ping packet size,

root@ubuntu:~# ping -s 1024 10.202.225.185
PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
From 10.202.225.169 icmp_seq=69 Destination Host Unreachable

And from Host kernel I get,
[  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
[  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
[  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
[  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets detected
[  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
[  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected

Also note that iperf does not work, as it fails to establish a connection
with the iperf server.

Please find attached the trace logs (vfio*, smmuv3*) from QEMU for your
reference. I haven't debugged this further yet and thought I'd check with
you whether this is something you have already seen, or whether I am
missing something here.

Please let me know.

Thanks,
Shameer

> Best Regards
> 
> Eric
> 
> This series can be found at:
> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> 
> It series includes Tina's patch steming from
> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> specific irq" plus patches originally contributed by Yi.
> 
> History:
> 
> v8 -> v9:
> - introduce specific irq framework
> - single fault region
> - iommu_unregister_device_fault_handler failure case not handled
>   yet.
> 
> v7 -> v8:
> - rebase on top of v5.2-rc1 and especially
>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
> - dynamic alloc of s1_cfg/s2_cfg
> - __arm_smmu_tlb_inv_asid/s1_range_nosync
> - check there is no HW MSI regions
> - asid invalidation using pasid extended struct (change in the uapi)
> - add s1_live/s2_live checks
> - move check about support of nested stages in domain finalise
> - fixes in error reporting according to the discussion with Robin
> - reordered the patches to have first iommu/smmuv3 patches and then
>   VFIO patches
> 
> v6 -> v7:
> - removed device handle from bind/unbind_guest_msi
> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
>   enforcement"
> - added few uapi comments as suggested by Jean, Jacop and Alex
> 
> v5 -> v6:
> - Fix compilation issue when CONFIG_IOMMU_API is unset
> 
> v4 -> v5:
> - fix bug reported by Vincent: fault handler unregistration now happens in
>   vfio_pci_release
> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
>   uapi changes suggested by Kean-Philippe (except fetch_addr)
> - iommu: introduce device fault report API: removed the PRI part.
> - see individual logs for more details
> - reset the ste abort flag on detach
> 
> v3 -> v4:
> - took into account Alex, jean-Philippe and Robin's comments on v3
> - rework of the smmuv3 driver integration
> - add tear down ops for msi binding and PASID table binding
> - fix S1 fault propagation
> - put fault reporting patches at the beginning of the series following
>   Jean-Philippe's request
> - update of the cache invalidate and fault API uapis
> - VFIO fault reporting rework with 2 separate regions and one mmappable
>   segment for the fault queue
> - moved to PATCH
> 
> v2 -> v3:
> - When registering the S1 MSI binding we now store the device handle. This
>   addresses Robin's comment about discimination of devices beonging to
>   different S1 groups and using different physical MSI doorbells.
> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
>   set the eventfd and expose the faults through an mmappable fault region
> 
> v1 -> v2:
> - Added the fault reporting capability
> - asid properly passed on invalidation (fix assignment of multiple
>   devices)
> - see individual change logs for more info
> 
> 
> Eric Auger (8):
>   vfio: VFIO_IOMMU_SET_MSI_BINDING
>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
>   vfio/pci: Register an iommu fault handler
>   vfio/pci: Allow to mmap the fault queue
>   vfio: Add new IRQ for DMA fault reporting
>   vfio/pci: Add framework for custom interrupt indices
>   vfio/pci: Register and allow DMA FAULT IRQ signaling
>   vfio: Document nested stage control
> 
> Liu, Yi L (2):
>   vfio: VFIO_IOMMU_SET_PASID_TABLE
>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
> 
> Tina Zhang (1):
>   vfio: Use capability chains to handle device specific irq
> 
>  Documentation/vfio.txt              |  77 ++++++++
>  drivers/vfio/pci/vfio_pci.c         | 283 ++++++++++++++++++++++++++--
>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
>  include/uapi/linux/vfio.h           | 109 ++++++++++-
>  7 files changed, 747 insertions(+), 19 deletions(-)
> 
> --
> 2.20.1
> 
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

[-- Attachment #2: trace-20045-vfio-smmu.log --]
[-- Type: application/octet-stream, Size: 84267 bytes --]

vfio_realize 0.000 pid=20045 name=0000:01:10.1 group_id=0x18
smmu_add_mr 46.680 pid=20045 name=smmuv3-iommu-memory-region-8-0
vfio_dma_map_ram 639.622 pid=20045 iova_start=0x40000000 iova_end=0x13fffffff vaddr=0xfffea7e00000
vfio_listener_region_add_iommu 7108689.238 pid=20045 start=0x0 end=0xffffffffffff
smmuv3_notify_flag_add 18.980 pid=20045 iommu=smmuv3-iommu-memory-region-8-0
vfio_mdev 170.760 pid=20045 name=0000:01:10.1 is_mdev=0x0
vfio_get_device 107039.412 pid=20045 name=0000:01:10.1 flags=0x3 num_regions=0xa num_irqs=0x6
vfio_region_setup 99.511 pid=20045 dev=0000:01:10.1 index=0x0 name=0000:01:10.1 BAR 0 flags=0x7 offset=0x0 size=0x4000
vfio_region_setup 2.260 pid=20045 dev=0000:01:10.1 index=0x1 name=0000:01:10.1 BAR 1 flags=0x0 offset=0x10000000000 size=0x0
vfio_region_setup 1.800 pid=20045 dev=0000:01:10.1 index=0x2 name=0000:01:10.1 BAR 2 flags=0x0 offset=0x20000000000 size=0x0
vfio_region_setup 13.910 pid=20045 dev=0000:01:10.1 index=0x3 name=0000:01:10.1 BAR 3 flags=0xf offset=0x30000000000 size=0x4000
vfio_region_setup 1.620 pid=20045 dev=0000:01:10.1 index=0x4 name=0000:01:10.1 BAR 4 flags=0x0 offset=0x40000000000 size=0x0
vfio_region_setup 1.610 pid=20045 dev=0000:01:10.1 index=0x5 name=0000:01:10.1 BAR 5 flags=0x0 offset=0x50000000000 size=0x0
vfio_populate_device_config 1.790 pid=20045 name=0000:01:10.1 size=0x1000 offset=0x70000000000 flags=0x3
vfio_get_dev_region 10.090 pid=20045 name=0000:01:10.1 index=0x9 type=0x2 subtype=0x1
vfio_region_sparse_mmap_header 26.570 pid=20045 name=0000:01:10.1 index=0x9 nr_areas=0x1
vfio_region_sparse_mmap_entry 0.450 pid=20045 i=0x0 start=0x1000 end=0x9000
vfio_region_setup 0.500 pid=20045 dev=0000:01:10.1 index=0x9 name=0000:01:10.1 DMA FAULT 9 flags=0xf offset=0x90000000000 size=0x9000
vfio_region_mmap 498.102 pid=20045 name=0000:01:10.1 DMA FAULT 9 mmaps[0] offset=0x1000 end=0x8fff
vfio_msix_early_setup 815.212 pid=20045 name=0000:01:10.1 pos=0x70 table_bar=0x3 offset=0x0 entries=0x3
vfio_region_mmap 572.342 pid=20045 name=0000:01:10.1 BAR 0 mmaps[0] offset=0x0 end=0x3fff
vfio_region_mmap 511.352 pid=20045 name=0000:01:10.1 BAR 3 mmaps[0] offset=0x0 end=0x3fff
vfio_add_ext_cap_dropped 752.332 pid=20045 name=0000:01:10.1 cap=0xe offset=0x150
vfio_pci_read_config 20.840 pid=20045 name=0000:01:10.1 addr=0x3d len=0x1 val=0x0
vfio_get_dev_irq 55.901 pid=20045 name=0000:01:10.1 index=0x5 type=0x1 subtype=0x1
smmu_add_mr 9493.991 pid=20045 name=smmuv3-iommu-memory-region-0-1
vfio_pci_reset 13501.134 pid=20045 name=0000:01:10.1
vfio_pci_read_config 8.650 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 1.190 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_reset_flr 105446.307 pid=20045 name=0000:01:10.1
vfio_pci_read_config 3.510 pid=20045 name=0000:01:10.1 addr=0x3d len=0x1 val=0x0
vfio_pci_read_config 17149978.757 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.340 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 7.620 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.080 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 5.920 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.370 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.190 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x0
vfio_pci_read_config 4.150 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.160 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xc
vfio_pci_read_config 4.150 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x0
vfio_pci_read_config 4.160 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.640 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.740 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.360 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.630 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.740 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.830 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0x0
vfio_pci_read_config 877.663 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.300 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.440 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 5.960 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 5.960 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.470 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.280 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x0
vfio_pci_read_config 4.190 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.170 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xc
vfio_pci_read_config 4.110 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x0
vfio_pci_read_config 4.130 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.670 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.760 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.270 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.620 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.720 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.670 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0x0
vfio_pci_read_config 494.272 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 5.390 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 7.080 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 5.260 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.820 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 6.520 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.610 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 10.300 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_write_config 5.200 pid=20045 name=0000:01:10.1 addr=0x10 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 6.180 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 4.840 pid=20045 name=0000:01:10.1 addr=0x10 val=0xc len=0x4
vfio_pci_read_config 6.620 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x0
vfio_pci_write_config 4.940 pid=20045 name=0000:01:10.1 addr=0x14 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.760 pid=20045 name=0000:01:10.1 addr=0x14 val=0x0 len=0x4
vfio_pci_read_config 6.350 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.870 pid=20045 name=0000:01:10.1 addr=0x18 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.640 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.720 pid=20045 name=0000:01:10.1 addr=0x18 val=0x0 len=0x4
vfio_pci_read_config 6.160 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xc
vfio_pci_write_config 5.030 pid=20045 name=0000:01:10.1 addr=0x1c val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.680 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 4.710 pid=20045 name=0000:01:10.1 addr=0x1c val=0xc len=0x4
vfio_pci_read_config 14.240 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x0
vfio_pci_write_config 5.100 pid=20045 name=0000:01:10.1 addr=0x20 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.920 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.790 pid=20045 name=0000:01:10.1 addr=0x20 val=0x0 len=0x4
vfio_pci_read_config 6.050 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.850 pid=20045 name=0000:01:10.1 addr=0x24 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.670 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.700 pid=20045 name=0000:01:10.1 addr=0x24 val=0x0 len=0x4
vfio_pci_write_config 2061.137 pid=20045 name=0000:01:10.1 addr=0x30 val=0xfffffffffffffffe len=0x4
vfio_pci_read_config 6.540 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.140 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.850 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 6.740 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 856.603 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 5.740 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.200 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 5.860 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 5.780 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.560 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.250 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x0
vfio_pci_read_config 4.140 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.150 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xc
vfio_pci_read_config 4.220 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x0
vfio_pci_read_config 4.210 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.720 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.750 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.210 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.640 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.740 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.780 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0x0
vfio_pci_read_config 488.232 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 5.250 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 6.690 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 4.990 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.770 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 6.260 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.080 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 8.960 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_write_config 5.150 pid=20045 name=0000:01:10.1 addr=0x10 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 6.060 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 4.770 pid=20045 name=0000:01:10.1 addr=0x10 val=0xc len=0x4
vfio_pci_read_config 6.160 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x0
vfio_pci_write_config 4.910 pid=20045 name=0000:01:10.1 addr=0x14 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.770 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.780 pid=20045 name=0000:01:10.1 addr=0x14 val=0x0 len=0x4
vfio_pci_read_config 6.060 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.960 pid=20045 name=0000:01:10.1 addr=0x18 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.730 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.780 pid=20045 name=0000:01:10.1 addr=0x18 val=0x0 len=0x4
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xc
vfio_pci_write_config 4.820 pid=20045 name=0000:01:10.1 addr=0x1c val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.620 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 4.720 pid=20045 name=0000:01:10.1 addr=0x1c val=0xc len=0x4
vfio_pci_read_config 6.040 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x0
vfio_pci_write_config 4.810 pid=20045 name=0000:01:10.1 addr=0x20 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.680 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.710 pid=20045 name=0000:01:10.1 addr=0x20 val=0x0 len=0x4
vfio_pci_read_config 5.960 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.820 pid=20045 name=0000:01:10.1 addr=0x24 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.730 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.730 pid=20045 name=0000:01:10.1 addr=0x24 val=0x0 len=0x4
vfio_pci_write_config 2046.237 pid=20045 name=0000:01:10.1 addr=0x30 val=0xfffffffffffffffe len=0x4
vfio_pci_read_config 6.520 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.050 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 10.520 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 6.760 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_write_config 5325.078 pid=20045 name=0000:01:10.1 addr=0x10 val=0x0 len=0x4
vfio_pci_write_config 7.430 pid=20045 name=0000:01:10.1 addr=0x14 val=0x80 len=0x4
vfio_pci_write_config 6.250 pid=20045 name=0000:01:10.1 addr=0x1c val=0x4000 len=0x4
vfio_pci_write_config 5.850 pid=20045 name=0000:01:10.1 addr=0x20 val=0x80 len=0x4
vfio_pci_read_config 4693.125 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.840 pid=20045 name=0000:01:10.1 addr=0x4 val=0x27 len=0x2
vfio_dma_map_ram 169.691 pid=20045 iova_start=0x8000000000 iova_end=0x8000003fff vaddr=0xffffbeecc000
vfio_dma_map_ram 653.042 pid=20045 iova_start=0x8000005000 iova_end=0x8000007fff vaddr=0xffffbc2cd000
vfio_pci_read_config 753.322 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x6
vfio_pci_write_config 8.060 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_dma_unmap_ram 711.773 pid=20045 start=0x8000000000 end=0x8000003fff
vfio_dma_unmap_ram 12687.681 pid=20045 start=0x8000005000 end=0x8000007fff
vfio_pci_read_config 224.311 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 8.620 pid=20045 name=0000:01:10.1 addr=0x4 val=0x10 len=0x2
vfio_pci_read_config 15.370 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 17.140 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.320 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 5009.797 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 6.150 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 7.950 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_write_config 5.650 pid=20045 name=0000:01:10.1 addr=0x3c val=0xff len=0x1
vfio_pci_read_config 1591.855 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.930 pid=20045 name=0000:01:10.1 addr=0x4 val=0x2 len=0x2
vfio_dma_map_ram 166.691 pid=20045 iova_start=0x8000000000 iova_end=0x8000003fff vaddr=0xffffbeecc000
vfio_dma_map_ram 2710.268 pid=20045 iova_start=0x8000005000 iova_end=0x8000007fff vaddr=0xffffbc2cd000
vfio_pci_read_config 7964.117 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x2
vfio_pci_write_config 6.360 pid=20045 name=0000:01:10.1 addr=0x4 val=0x2 len=0x2
vfio_pci_read_config 12.560 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x2
vfio_pci_write_config 5.300 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_dma_unmap_ram 3811.482 pid=20045 start=0x8000000000 end=0x8000003fff
vfio_dma_unmap_ram 15226.220 pid=20045 start=0x8000005000 end=0x8000007fff
vfio_pci_read_config 1584.735 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 7.870 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.110 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.391 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.320 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.170 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.190 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.160 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.160 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.650 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.710 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.340 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.690 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.660 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.610 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 8187883.362 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 7.530 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.150 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 445.631 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.490 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.370 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 128397.422 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 7.970 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.330 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 10.320 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.660 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.280 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 9.880 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.590 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.220 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 12.410 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.950 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 21.900 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.260 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.620 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.600 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.500 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.490 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.890 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 6.010 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.580 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.990 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 6.040 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.980 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 10.570 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.580 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.210 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.060 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.600 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.440 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.990 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.480 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.980 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 6.040 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 9.250 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.490 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.160 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.081 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.580 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.460 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.440 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.970 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.490 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.890 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.940 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.970 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 6197.160 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 7.680 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.380 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 10.110 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.400 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.170 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 9.850 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.320 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.130 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 12.370 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.550 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.260 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.120 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.630 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.470 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.450 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 11.640 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 6.090 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 6.020 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.621 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.980 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 9.930 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.420 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.210 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.060 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.510 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.440 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.450 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.440 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 6.020 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.490 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.990 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 6.020 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 9.270 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 6.430 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100000
vfio_pci_read_config 6.230 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 30.910 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 5.160 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.690 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.530 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.470 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 6.000 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.460 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.890 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.920 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 601.892 pid=20045 name=0000:01:10.1 addr=0x9 len=0x1 val=0x0
vfio_pci_read_config 6.450 pid=20045 name=0000:01:10.1 addr=0xa len=0x1 val=0x0
vfio_pci_read_config 6.230 pid=20045 name=0000:01:10.1 addr=0xb len=0x1 val=0x2
vfio_pci_read_config 633732.129 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 10.460 pid=20045 name=0000:01:10.1 addr=0xe len=0x1 val=0x0
vfio_pci_read_config 6.460 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 11.220 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.960 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 6.200 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 6.180 pid=20045 name=0000:01:10.1 addr=0xa2 len=0x2 val=0x91
vfio_pci_read_config 5.830 pid=20045 name=0000:01:10.1 addr=0xa4 len=0x2 val=0x0
vfio_pci_read_config 7.980 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 890.323 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 5.470 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 4.400 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 15.250 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 6.370 pid=20045 name=0000:01:10.1 addr=0x4 val=0x400 len=0x2
vfio_pci_read_config 17.160 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x400
vfio_pci_write_config 4.810 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 12.680 pid=20045 name=0000:01:10.1 addr=0x3d len=0x1 val=0x0
vfio_pci_read_config 6.070 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_read_config 4.520 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_write_config 4.540 pid=20045 name=0000:01:10.1 addr=0x10 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.721 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 12.690 pid=20045 name=0000:01:10.1 addr=0x10 val=0xc len=0x4
vfio_pci_read_config 5.920 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_write_config 4.440 pid=20045 name=0000:01:10.1 addr=0x14 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.510 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.430 pid=20045 name=0000:01:10.1 addr=0x14 val=0x80 len=0x4
vfio_pci_read_config 1143.043 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_read_config 5.160 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.750 pid=20045 name=0000:01:10.1 addr=0x18 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.930 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_write_config 4.430 pid=20045 name=0000:01:10.1 addr=0x18 val=0x0 len=0x4
vfio_pci_read_config 7.140 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_read_config 4.390 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_write_config 4.420 pid=20045 name=0000:01:10.1 addr=0x1c val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.321 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0xffffffffffffc00c
vfio_pci_write_config 4.380 pid=20045 name=0000:01:10.1 addr=0x1c val=0x400c len=0x4
vfio_pci_read_config 5.300 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_write_config 4.320 pid=20045 name=0000:01:10.1 addr=0x20 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.210 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0xffffffffffffffff
vfio_pci_write_config 4.320 pid=20045 name=0000:01:10.1 addr=0x20 val=0x80 len=0x4
vfio_pci_read_config 1115.633 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_read_config 4.500 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.550 pid=20045 name=0000:01:10.1 addr=0x24 val=0xffffffffffffffff len=0x4
vfio_pci_read_config 5.610 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_write_config 4.450 pid=20045 name=0000:01:10.1 addr=0x24 val=0x0 len=0x4
vfio_pci_read_config 6.920 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_read_config 4.350 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_write_config 4.330 pid=20045 name=0000:01:10.1 addr=0x30 val=0xfffffffffffff800 len=0x4
vfio_pci_read_config 5.330 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_write_config 4.350 pid=20045 name=0000:01:10.1 addr=0x30 val=0x0 len=0x4
vfio_pci_read_config 6.950 pid=20045 name=0000:01:10.1 addr=0x2c len=0x2 val=0xffff
vfio_pci_read_config 5.810 pid=20045 name=0000:01:10.1 addr=0x2e len=0x2 val=0xffff
vfio_pci_read_config 6.210 pid=20045 name=0000:01:10.1 addr=0xa4 len=0x4 val=0x0
vfio_pci_read_config 5.990 pid=20045 name=0000:01:10.1 addr=0xa8 len=0x2 val=0x0
vfio_pci_read_config 11.951 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.620 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.290 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.740 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 5.900 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.300 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.240 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.620 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 5.700 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.250 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.210 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 4.170 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0x2
vfio_pci_write_config 4.480 pid=20045 name=0000:01:10.1 addr=0x72 val=0x2 len=0x2
vfio_pci_read_config 7.360 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.380 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.280 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.860 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 6.060 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.310 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.200 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.670 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 4.380 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.300 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.220 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.270 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 7.000 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.490 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.230 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.640 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 5.780 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.180 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 8.050 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.750 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 4.560 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.410 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 17.550 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.630 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 7.220 pid=20045 name=0000:01:10.1 addr=0x110 len=0x4 val=0x0
vfio_pci_write_config 5.140 pid=20045 name=0000:01:10.1 addr=0x110 val=0x0 len=0x4
vfio_pci_read_config 9.050 pid=20045 name=0000:01:10.1 addr=0x104 len=0x4 val=0x0
vfio_pci_write_config 4.660 pid=20045 name=0000:01:10.1 addr=0x104 val=0x0 len=0x4
vfio_pci_read_config 1410.165 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.120 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_write_config 8.680 pid=20045 name=0000:01:10.1 addr=0x10 val=0xc len=0x4
vfio_pci_read_config 5.690 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_write_config 4.500 pid=20045 name=0000:01:10.1 addr=0x14 val=0x80 len=0x4
vfio_pci_read_config 5.620 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_write_config 4.690 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 1209.194 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 5.400 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_write_config 8.040 pid=20045 name=0000:01:10.1 addr=0x1c val=0x400c len=0x4
vfio_pci_read_config 5.510 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_write_config 4.580 pid=20045 name=0000:01:10.1 addr=0x20 val=0x80 len=0x4
vfio_pci_read_config 5.410 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_write_config 4.520 pid=20045 name=0000:01:10.1 addr=0x4 val=0x0 len=0x2
vfio_pci_read_config 14541.038 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 6.910 pid=20045 name=0000:01:10.1 addr=0xe len=0x1 val=0x0
vfio_pci_read_config 38147.785 pid=20045 name=0000:01:10.1 addr=0xa8 len=0x2 val=0x0
vfio_pci_write_config 6.180 pid=20045 name=0000:01:10.1 addr=0xa8 val=0x10 len=0x2
vfio_pci_read_config 11.460 pid=20045 name=0000:01:10.1 addr=0xc len=0x1 val=0x0
smmuv3_read_mmio 52535.392 pid=20045 addr=0x0 val=0xd40101a size=0x4 r=0x0
smmuv3_read_mmio 10.890 pid=20045 addr=0x4 val=0x2730010 size=0x4 r=0x0
smmuv3_read_mmio 10.100 pid=20045 addr=0x14 val=0x54 size=0x4 r=0x0
smmuv3_read_mmio 5520.118 pid=20045 addr=0x20 val=0x0 size=0x4 r=0x0
smmuv3_write_mmio 19.770 pid=20045 addr=0x20 val=0x0 size=0x4 r=0x0
smmuv3_read_mmio 9.471 pid=20045 addr=0x24 val=0x0 size=0x4 r=0x0
smmuv3_write_mmio 4.490 pid=20045 addr=0x28 val=0xd75 size=0x4 r=0x0
smmuv3_write_mmio 4.000 pid=20045 addr=0x2c val=0x7 size=0x4 r=0x0
smmuv3_write_mmio 3.890 pid=20045 addr=0x80 val=0x40000000fdc40000 size=0x8 r=0x0
smmuv3_write_mmio 4.000 pid=20045 addr=0x88 val=0x10210 size=0x4 r=0x0
smmuv3_write_mmio 3.980 pid=20045 addr=0x90 val=0x40000000fdd00010 size=0x8 r=0x0
smmuv3_write_mmio 8.730 pid=20045 addr=0x98 val=0x0 size=0x4 r=0x0
smmuv3_write_mmio 4.240 pid=20045 addr=0x9c val=0x0 size=0x4 r=0x0
smmuv3_cmdq_consume_out 8.720 pid=20045 prod=0x0 cons=0x0 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x20 val=0x8 size=0x4 r=0x0
smmuv3_read_mmio 9.890 pid=20045 addr=0x24 val=0x8 size=0x4 r=0x0
smmuv3_cmdq_consume 9.520 pid=20045 prod=0x1 cons=0x0 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.340 pid=20045 opcode=SMMU_CMD_CFGI_STE_RANGE
smmuv3_cmdq_cfgi_ste_range 0.950 pid=20045 start=0x0 end=0x0
smmuv3_config_cache_inv 8.570 pid=20045 sid=0x0
smmuv3_cmdq_consume_out 1.360 pid=20045 prod=0x1 cons=0x1 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x1 size=0x4 r=0x0
smmuv3_cmdq_consume 9.730 pid=20045 prod=0x2 cons=0x1 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.590 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.440 pid=20045 prod=0x2 cons=0x2 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x2 size=0x4 r=0x0
smmuv3_read_mmio 8.440 pid=20045 addr=0x9c val=0x2 size=0x4 r=0x0
smmuv3_cmdq_consume 10.630 pid=20045 prod=0x3 cons=0x2 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.590 pid=20045 opcode=SMMU_CMD_TLBI_NSNH_ALL
smmuv3_cmdq_tlbi_nh 0.690 pid=20045
smmu_inv_notifiers_mr 0.570 pid=20045 name=smmuv3-iommu-memory-region-8-0
vfio_iommu_asid_inv_iotlb 1.180 pid=20045 asid=0x0
smmu_iotlb_inv_all 7.900 pid=20045
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x3 cons=0x3 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x3 size=0x4 r=0x0
smmuv3_cmdq_consume 10.170 pid=20045 prod=0x4 cons=0x3 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.490 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.410 pid=20045 prod=0x4 cons=0x4 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.300 pid=20045 addr=0x98 val=0x4 size=0x4 r=0x0
smmuv3_read_mmio 9.450 pid=20045 addr=0x9c val=0x4 size=0x4 r=0x0
smmuv3_write_mmio 4.630 pid=20045 addr=0xa0 val=0x40000000fde0000f size=0x8 r=0x0
smmuv3_write_mmio 3.850 pid=20045 addr=0xa8 val=0x0 size=0x4 r=0x0
smmuv3_write_mmio 3.750 pid=20045 addr=0xac val=0x0 size=0x4 r=0x0
smmuv3_cmdq_consume_out 18.990 pid=20045 prod=0x4 cons=0x4 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.300 pid=20045 addr=0x20 val=0xc size=0x4 r=0x0
smmuv3_read_mmio 8.690 pid=20045 addr=0x24 val=0xc size=0x4 r=0x0
smmuv3_write_mmio 4.320 pid=20045 addr=0x50 val=0x0 size=0x4 r=0x0
smmuv3_read_mmio 8.380 pid=20045 addr=0x54 val=0x0 size=0x4 r=0x0
smmuv3_write_mmio 4.280 pid=20045 addr=0x68 val=0x0 size=0x8 r=0x0
smmuv3_write_mmio 3.860 pid=20045 addr=0xb0 val=0x0 size=0x8 r=0x0
smmuv3_write_mmio 117.561 pid=20045 addr=0x50 val=0x5 size=0x4 r=0x0
smmuv3_read_mmio 11.770 pid=20045 addr=0x54 val=0x5 size=0x4 r=0x0
smmuv3_cmdq_consume_out 10.810 pid=20045 prod=0x4 cons=0x4 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.340 pid=20045 addr=0x20 val=0xd size=0x4 r=0x0
smmuv3_read_mmio 8.770 pid=20045 addr=0x24 val=0xd size=0x4 r=0x0
smmuv3_cmdq_consume 353248.528 pid=20045 prod=0x5 cons=0x4 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 2.850 pid=20045 opcode=SMMU_CMD_CFGI_STE
smmuv3_cmdq_cfgi_ste 1.760 pid=20045 streamid=0x8
smmuv3_config_cache_inv 9.640 pid=20045 sid=0x8
smmuv3_config_cache_miss 4.810 pid=20045 sid=0x8 hits=0x0 misses=0x1 perc=0x0
smmuv3_find_ste 3.120 pid=20045 sid=0x8 features=0x1 sid_split=0x8
smmuv3_find_ste_2lvl 1.120 pid=20045 strtab_base=0x40000000fdc40000 l1ptr=0xfdc40000 l1_ste_offset=0x0 l2ptr=0xfdc44000 l2_ste_offset=0x8 max_l2_ste=0x1ff
smmuv3_get_ste 0.470 pid=20045 addr=0xfdc44200
smmuv3_notify_config_change 3.900 pid=20045 name=smmuv3-iommu-memory-region-8-0 config=0x3 s1ctxptr=0x0
smmuv3_cmdq_consume_out 33.011 pid=20045 prod=0x5 cons=0x5 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.760 pid=20045 addr=0x98 val=0x5 size=0x4 r=0x0
smmuv3_cmdq_consume 13.740 pid=20045 prod=0x6 cons=0x5 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.970 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.510 pid=20045 prod=0x6 cons=0x6 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.490 pid=20045 addr=0x98 val=0x6 size=0x4 r=0x0
smmuv3_read_mmio 11.820 pid=20045 addr=0x9c val=0x6 size=0x4 r=0x0
smmuv3_cmdq_consume 12.020 pid=20045 prod=0x7 cons=0x6 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.590 pid=20045 opcode=SMMU_CMD_CFGI_STE
smmuv3_cmdq_cfgi_ste 0.400 pid=20045 streamid=0x8
smmuv3_config_cache_inv 5.610 pid=20045 sid=0x8
smmuv3_config_cache_miss 1.610 pid=20045 sid=0x8 hits=0x0 misses=0x2 perc=0x0
smmuv3_find_ste 0.690 pid=20045 sid=0x8 features=0x1 sid_split=0x8
smmuv3_find_ste_2lvl 0.460 pid=20045 strtab_base=0x40000000fdc40000 l1ptr=0xfdc40000 l1_ste_offset=0x0 l2ptr=0xfdc44000 l2_ste_offset=0x8 max_l2_ste=0x1ff
smmuv3_get_ste 0.460 pid=20045 addr=0xfdc44200
smmuv3_get_cd 0.710 pid=20045 addr=0xfdc41000
smmuv3_decode_cd 0.730 pid=20045 oas=0x2c
smmuv3_decode_cd_tt 0.510 pid=20045 i=0x0 tsz=0x10 ttb=0x13dbc6000 granule_sz=0xc
smmuv3_notify_config_change 0.550 pid=20045 name=smmuv3-iommu-memory-region-8-0 config=0x1 s1ctxptr=0xfdc41000
smmuv3_cmdq_consume_out 12.250 pid=20045 prod=0x7 cons=0x7 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.310 pid=20045 addr=0x98 val=0x7 size=0x4 r=0x0
smmuv3_cmdq_consume 12.250 pid=20045 prod=0x8 cons=0x7 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.490 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.460 pid=20045 prod=0x8 cons=0x8 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.390 pid=20045 addr=0x98 val=0x8 size=0x4 r=0x0
smmuv3_read_mmio 9.820 pid=20045 addr=0x9c val=0x8 size=0x4 r=0x0
smmuv3_cmdq_consume 12.830 pid=20045 prod=0x9 cons=0x8 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.720 pid=20045 opcode=SMMU_CMD_PREFETCH_CONFIG
smmuv3_cmdq_consume_out 0.680 pid=20045 prod=0x9 cons=0x9 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x9 size=0x4 r=0x0
vfio_pci_read_config 1075.003 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x0
vfio_pci_write_config 1163.594 pid=20045 name=0000:01:10.1 addr=0x4 val=0x2 len=0x2
vfio_dma_map_ram 378.351 pid=20045 iova_start=0x8000000000 iova_end=0x8000003fff vaddr=0xffffbeecc000
vfio_dma_map_ram 3817.443 pid=20045 iova_start=0x8000005000 iova_end=0x8000007fff vaddr=0xffffbc2cd000
vfio_pci_read_config 3777.922 pid=20045 name=0000:01:10.1 addr=0x3d len=0x1 val=0x0
vfio_pci_read_config 22.570 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x2
vfio_pci_write_config 7.480 pid=20045 name=0000:01:10.1 addr=0x4 val=0x6 len=0x2
vfio_pci_read_config 281.541 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 8.110 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100006
vfio_pci_read_config 6.570 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.390 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.580 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.321 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.320 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.190 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.470 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.200 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.810 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.730 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.880 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.690 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.670 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 11.430 pid=20045 name=0000:01:10.1 addr=0xa8 len=0x2 val=0x0
vfio_pci_read_config 6.460 pid=20045 name=0000:01:10.1 addr=0x6 len=0x2 val=0x10
vfio_pci_read_config 4.520 pid=20045 name=0000:01:10.1 addr=0x34 len=0x1 val=0x70
vfio_pci_read_config 4.400 pid=20045 name=0000:01:10.1 addr=0x70 len=0x2 val=0xa011
vfio_pci_read_config 5.810 pid=20045 name=0000:01:10.1 addr=0xa0 len=0x2 val=0x10
vfio_pci_read_config 4.840 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.700 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.600 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 4.270 pid=20045 name=0000:01:10.1 addr=0x100 len=0x4 val=0x10001
vfio_pci_read_config 14184.826 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0x2
vfio_pci_read_config 5.120 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0x2
vfio_pci_write_config 5.380 pid=20045 name=0000:01:10.1 addr=0x72 val=0x2 len=0x2
vfio_pci_read_config 7.090 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0x2
vfio_pci_read_config 4.400 pid=20045 name=0000:01:10.1 addr=0x74 len=0x4 val=0x3
vfio_pci_read_config 261.321 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0x2
vfio_pci_write_config 5.310 pid=20045 name=0000:01:10.1 addr=0x72 val=0xc002 len=0x2
vfio_msix_vector_do_use 4.330 pid=20045 name=0000:01:10.1 index=0x0
smmuv3_config_cache_hit 32.710 pid=20045 sid=0x8 hits=0x1 misses=0x2 perc=0x21
smmu_iotlb_cache_miss 1.270 pid=20045 asid=0x0 addr=0xfffff000 hit=0x0 miss=0x1 p=0x0
smmu_get_pte 2.030 pid=20045 baseaddr=0x13dbc6000 index=0x0 pteaddr=0x13dbc6000 pte=0x13dbc8003
smmu_ptw_level 0.420 pid=20045 level=0x0 iova=0xfffff000 subpage_size=0x8000000000 baseaddr=0x13dbc6000 offset=0x0 pte=0x13dbc8003
smmu_get_pte 0.500 pid=20045 baseaddr=0x13dbc8000 index=0x3 pteaddr=0x13dbc8018 pte=0x13dbc9003
smmu_ptw_level 0.270 pid=20045 level=0x1 iova=0xfffff000 subpage_size=0x40000000 baseaddr=0x13dbc8000 offset=0x3 pte=0x13dbc9003
smmu_get_pte 0.520 pid=20045 baseaddr=0x13dbc9000 index=0x1ff pteaddr=0x13dbc9ff8 pte=0x13dbca003
smmu_ptw_level 0.260 pid=20045 level=0x2 iova=0xfffff000 subpage_size=0x200000 baseaddr=0x13dbc9000 offset=0x1ff pte=0x13dbca003
smmu_get_pte 0.400 pid=20045 baseaddr=0x13dbca000 index=0x1ff pteaddr=0x13dbcaff8 pte=0x60000008090f4b
smmu_ptw_level 0.270 pid=20045 level=0x3 iova=0xfffff000 subpage_size=0x1000 baseaddr=0x13dbca000 offset=0x1ff pte=0x60000008090f4b
smmu_ptw_page_pte 0.630 pid=20045 stage=0x1 level=0x3 iova=0xfffff000 baseaddr=0x13dbca000 pteaddr=0x13dbcaff8 pte=0x60000008090f4b address=0x8090000
smmuv3_translate_success 2.070 pid=20045 n=smmuv3-iommu-memory-region-8-0 sid=0x8 iova=0xfffff040 translated=0x8090040 perm=0x2
vfio_register_msi_binding 0.680 pid=20045 name=0000:01:10.1 vector=0x0 giova=0xfffff040 gdb=0x8090040
vfio_msix_pba_disable 141.581 pid=20045 name=0000:01:10.1
vfio_msix_vector_release 0.970 pid=20045 name=0000:01:10.1 index=0x0
vfio_msix_enable 0.820 pid=20045 name=0000:01:10.1
vfio_pci_read_config 35.760 pid=20045 name=0000:01:10.1 addr=0x4 len=0x2 val=0x6
vfio_pci_write_config 5.560 pid=20045 name=0000:01:10.1 addr=0x4 val=0x406 len=0x2
vfio_pci_read_config 16.600 pid=20045 name=0000:01:10.1 addr=0x72 len=0x2 val=0xc002
vfio_pci_write_config 4.670 pid=20045 name=0000:01:10.1 addr=0x72 val=0x8002 len=0x2
vfio_pci_read_config 1520754.768 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 13.470 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100406
vfio_pci_read_config 7.030 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.280 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.580 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.530 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.270 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.260 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.230 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.190 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.730 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.770 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 4.430 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.770 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.790 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.850 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_pci_read_config 411082.788 pid=20045 name=0000:01:10.1 addr=0x0 len=0x4 val=0x10ed8086
vfio_pci_read_config 16.730 pid=20045 name=0000:01:10.1 addr=0x4 len=0x4 val=0x100406
vfio_pci_read_config 7.410 pid=20045 name=0000:01:10.1 addr=0x8 len=0x4 val=0x2000001
vfio_pci_read_config 6.340 pid=20045 name=0000:01:10.1 addr=0xc len=0x4 val=0x0
vfio_pci_read_config 4.540 pid=20045 name=0000:01:10.1 addr=0x10 len=0x4 val=0xc
vfio_pci_read_config 4.470 pid=20045 name=0000:01:10.1 addr=0x14 len=0x4 val=0x80
vfio_pci_read_config 4.420 pid=20045 name=0000:01:10.1 addr=0x18 len=0x4 val=0x0
vfio_pci_read_config 4.270 pid=20045 name=0000:01:10.1 addr=0x1c len=0x4 val=0x400c
vfio_pci_read_config 4.350 pid=20045 name=0000:01:10.1 addr=0x20 len=0x4 val=0x80
vfio_pci_read_config 4.290 pid=20045 name=0000:01:10.1 addr=0x24 len=0x4 val=0x0
vfio_pci_read_config 5.800 pid=20045 name=0000:01:10.1 addr=0x28 len=0x4 val=0x0
vfio_pci_read_config 5.870 pid=20045 name=0000:01:10.1 addr=0x2c len=0x4 val=0xffffffffffffffff
vfio_pci_read_config 16.080 pid=20045 name=0000:01:10.1 addr=0x30 len=0x4 val=0x0
vfio_pci_read_config 5.960 pid=20045 name=0000:01:10.1 addr=0x34 len=0x4 val=0x70
vfio_pci_read_config 5.880 pid=20045 name=0000:01:10.1 addr=0x38 len=0x4 val=0x0
vfio_pci_read_config 5.890 pid=20045 name=0000:01:10.1 addr=0x3c len=0x4 val=0xff
vfio_msix_vector_do_use 22562824.580 pid=20045 name=0000:01:10.1 index=0x0
smmuv3_config_cache_hit 70.310 pid=20045 sid=0x8 hits=0x2 misses=0x2 perc=0x32
smmu_iotlb_cache_hit 1.510 pid=20045 asid=0x0 addr=0xfffff000 hit=0x1 miss=0x1 p=0x32
smmuv3_translate_success 1.300 pid=20045 n=smmuv3-iommu-memory-region-8-0 sid=0x8 iova=0xfffff040 translated=0x8090040 perm=0x2
vfio_msix_pba_disable 21812.541 pid=20045 name=0000:01:10.1
vfio_msix_vector_do_use 112.071 pid=20045 name=0000:01:10.1 index=0x1
smmuv3_config_cache_hit 30.270 pid=20045 sid=0x8 hits=0x3 misses=0x2 perc=0x3c
smmu_iotlb_cache_hit 1.050 pid=20045 asid=0x0 addr=0xfffff000 hit=0x2 miss=0x1 p=0x42
smmuv3_translate_success 0.700 pid=20045 n=smmuv3-iommu-memory-region-8-0 sid=0x8 iova=0xfffff040 translated=0x8090040 perm=0x2
smmuv3_config_cache_hit 23880.658 pid=20045 sid=0x8 hits=0x4 misses=0x2 perc=0x42
smmu_iotlb_cache_hit 1.080 pid=20045 asid=0x0 addr=0xfffff000 hit=0x3 miss=0x1 p=0x4b
smmuv3_translate_success 0.770 pid=20045 n=smmuv3-iommu-memory-region-8-0 sid=0x8 iova=0xfffff040 translated=0x8090040 perm=0x2
vfio_register_msi_binding 0.810 pid=20045 name=0000:01:10.1 vector=0x0 giova=0xfffff040 gdb=0x8090040
smmuv3_config_cache_hit 7.930 pid=20045 sid=0x8 hits=0x5 misses=0x2 perc=0x47
smmu_iotlb_cache_hit 0.370 pid=20045 asid=0x0 addr=0xfffff000 hit=0x4 miss=0x1 p=0x50
smmuv3_translate_success 0.410 pid=20045 n=smmuv3-iommu-memory-region-8-0 sid=0x8 iova=0xfffff040 translated=0x8090040 perm=0x2
vfio_register_msi_binding 0.390 pid=20045 name=0000:01:10.1 vector=0x1 giova=0xfffff040 gdb=0x8090040
vfio_msix_pba_disable 124.240 pid=20045 name=0000:01:10.1
smmuv3_cmdq_consume 24110.859 pid=20045 prod=0xa cons=0x9 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.760 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.860 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 3.980 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 11.810 pid=20045 sid=0x8 hits=0x6 misses=0x2 perc=0x4b
vfio_iommu_addr_inv_iotlb 7.960 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 7.530 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 1.240 pid=20045 prod=0xa cons=0xa prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.440 pid=20045 addr=0x98 val=0xa size=0x4 r=0x0
smmuv3_cmdq_consume 22.681 pid=20045 prod=0xb cons=0xa prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.820 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.700 pid=20045 prod=0xb cons=0xb prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.330 pid=20045 addr=0x98 val=0xb size=0x4 r=0x0
smmuv3_read_mmio 11.920 pid=20045 addr=0x9c val=0xb size=0x4 r=0x0
smmuv3_cmdq_consume 34394695.944 pid=20045 prod=0xc cons=0xb prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 2.210 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 1.020 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.510 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 10.650 pid=20045 sid=0x8 hits=0x7 misses=0x2 perc=0x4d
vfio_iommu_addr_inv_iotlb 7.240 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 6.250 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 1.360 pid=20045 prod=0xc cons=0xc prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.470 pid=20045 addr=0x98 val=0xc size=0x4 r=0x0
smmuv3_cmdq_consume 13.490 pid=20045 prod=0xd cons=0xc prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.700 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.710 pid=20045 prod=0xd cons=0xd prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.310 pid=20045 addr=0x98 val=0xd size=0x4 r=0x0
smmuv3_read_mmio 12.910 pid=20045 addr=0x9c val=0xd size=0x4 r=0x0
smmuv3_cmdq_consume 65.210 pid=20045 prod=0xe cons=0xd prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.850 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.520 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 7.990 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 8.750 pid=20045 sid=0x8 hits=0x8 misses=0x2 perc=0x50
vfio_iommu_addr_inv_iotlb 6.000 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 3.100 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.590 pid=20045 prod=0xe cons=0xe prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.300 pid=20045 addr=0x98 val=0xe size=0x4 r=0x0
smmuv3_cmdq_consume 11.871 pid=20045 prod=0xf cons=0xe prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.530 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.480 pid=20045 prod=0xf cons=0xf prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0xf size=0x4 r=0x0
smmuv3_read_mmio 9.870 pid=20045 addr=0x9c val=0xf size=0x4 r=0x0
smmuv3_cmdq_consume 1008557.687 pid=20045 prod=0x10 cons=0xf prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.370 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.600 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.240 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.990 pid=20045 sid=0x8 hits=0x9 misses=0x2 perc=0x51
vfio_iommu_addr_inv_iotlb 6.910 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 18.180 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.850 pid=20045 prod=0x10 cons=0x10 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x10 size=0x4 r=0x0
smmuv3_cmdq_consume 14.500 pid=20045 prod=0x11 cons=0x10 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.730 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.390 pid=20045 prod=0x11 cons=0x11 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x11 size=0x4 r=0x0
smmuv3_read_mmio 9.980 pid=20045 addr=0x9c val=0x11 size=0x4 r=0x0
smmuv3_cmdq_consume 1001207.474 pid=20045 prod=0x12 cons=0x11 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.500 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.600 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.070 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.140 pid=20045 sid=0x8 hits=0xa misses=0x2 perc=0x53
vfio_iommu_addr_inv_iotlb 5.780 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.690 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.710 pid=20045 prod=0x12 cons=0x12 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.360 pid=20045 addr=0x98 val=0x12 size=0x4 r=0x0
smmuv3_cmdq_consume 12.820 pid=20045 prod=0x13 cons=0x12 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.620 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.480 pid=20045 prod=0x13 cons=0x13 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x13 size=0x4 r=0x0
smmuv3_read_mmio 9.850 pid=20045 addr=0x9c val=0x13 size=0x4 r=0x0
smmuv3_cmdq_consume 4005428.621 pid=20045 prod=0x14 cons=0x13 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 2.440 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 1.120 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.190 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 9.350 pid=20045 sid=0x8 hits=0xb misses=0x2 perc=0x54
vfio_iommu_addr_inv_iotlb 6.700 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 6.701 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 1.310 pid=20045 prod=0x14 cons=0x14 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.580 pid=20045 addr=0x98 val=0x14 size=0x4 r=0x0
smmuv3_cmdq_consume 15.260 pid=20045 prod=0x15 cons=0x14 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.710 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.590 pid=20045 prod=0x15 cons=0x15 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x15 size=0x4 r=0x0
smmuv3_read_mmio 11.180 pid=20045 addr=0x9c val=0x15 size=0x4 r=0x0
smmuv3_cmdq_consume 999923.717 pid=20045 prod=0x16 cons=0x15 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.350 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.500 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.080 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.740 pid=20045 sid=0x8 hits=0xc misses=0x2 perc=0x55
vfio_iommu_addr_inv_iotlb 6.810 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.510 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.810 pid=20045 prod=0x16 cons=0x16 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x16 size=0x4 r=0x0
smmuv3_cmdq_consume 12.910 pid=20045 prod=0x17 cons=0x16 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.720 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.570 pid=20045 prod=0x17 cons=0x17 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x17 size=0x4 r=0x0
smmuv3_read_mmio 11.620 pid=20045 addr=0x9c val=0x17 size=0x4 r=0x0
smmuv3_cmdq_consume 999897.117 pid=20045 prod=0x18 cons=0x17 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.800 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.660 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.470 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 10.100 pid=20045 sid=0x8 hits=0xd misses=0x2 perc=0x56
vfio_iommu_addr_inv_iotlb 6.750 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 5.190 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.780 pid=20045 prod=0x18 cons=0x18 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.340 pid=20045 addr=0x98 val=0x18 size=0x4 r=0x0
smmuv3_cmdq_consume 13.840 pid=20045 prod=0x19 cons=0x18 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.560 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.540 pid=20045 prod=0x19 cons=0x19 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x19 size=0x4 r=0x0
smmuv3_read_mmio 10.561 pid=20045 addr=0x9c val=0x19 size=0x4 r=0x0
smmuv3_cmdq_consume 8567683.795 pid=20045 prod=0x1a cons=0x19 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.790 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.780 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.090 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 10.390 pid=20045 sid=0x8 hits=0xe misses=0x2 perc=0x57
vfio_iommu_addr_inv_iotlb 8.400 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 6.270 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 1.520 pid=20045 prod=0x1a cons=0x1a prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.430 pid=20045 addr=0x98 val=0x1a size=0x4 r=0x0
smmuv3_cmdq_consume 13.290 pid=20045 prod=0x1b cons=0x1a prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.710 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 9.060 pid=20045 prod=0x1b cons=0x1b prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.250 pid=20045 addr=0x98 val=0x1b size=0x4 r=0x0
smmuv3_read_mmio 11.610 pid=20045 addr=0x9c val=0x1b size=0x4 r=0x0
smmuv3_cmdq_consume 1001488.409 pid=20045 prod=0x1c cons=0x1b prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.350 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.510 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.140 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 8.430 pid=20045 sid=0x8 hits=0xf misses=0x2 perc=0x58
vfio_iommu_addr_inv_iotlb 7.020 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.790 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.860 pid=20045 prod=0x1c cons=0x1c prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.380 pid=20045 addr=0x98 val=0x1c size=0x4 r=0x0
smmuv3_cmdq_consume 13.840 pid=20045 prod=0x1d cons=0x1c prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.630 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.560 pid=20045 prod=0x1d cons=0x1d prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.250 pid=20045 addr=0x98 val=0x1d size=0x4 r=0x0
smmuv3_read_mmio 10.790 pid=20045 addr=0x9c val=0x1d size=0x4 r=0x0
smmuv3_cmdq_consume 1023947.522 pid=20045 prod=0x1e cons=0x1d prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.460 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.560 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.040 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.580 pid=20045 sid=0x8 hits=0x10 misses=0x2 perc=0x58
vfio_iommu_addr_inv_iotlb 6.440 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 5.530 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.770 pid=20045 prod=0x1e cons=0x1e prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.300 pid=20045 addr=0x98 val=0x1e size=0x4 r=0x0
smmuv3_cmdq_consume 13.360 pid=20045 prod=0x1f cons=0x1e prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.570 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.440 pid=20045 prod=0x1f cons=0x1f prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x1f size=0x4 r=0x0
smmuv3_read_mmio 11.200 pid=20045 addr=0x9c val=0x1f size=0x4 r=0x0
smmuv3_cmdq_consume 1023869.032 pid=20045 prod=0x20 cons=0x1f prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.210 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.460 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.610 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.560 pid=20045 sid=0x8 hits=0x11 misses=0x2 perc=0x59
vfio_iommu_addr_inv_iotlb 6.250 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.780 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.720 pid=20045 prod=0x20 cons=0x20 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.310 pid=20045 addr=0x98 val=0x20 size=0x4 r=0x0
smmuv3_cmdq_consume 13.620 pid=20045 prod=0x21 cons=0x20 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.580 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x21 cons=0x21 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x21 size=0x4 r=0x0
smmuv3_read_mmio 13.450 pid=20045 addr=0x9c val=0x21 size=0x4 r=0x0
smmuv3_cmdq_consume 1023938.061 pid=20045 prod=0x22 cons=0x21 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.250 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.480 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.700 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.460 pid=20045 sid=0x8 hits=0x12 misses=0x2 perc=0x5a
vfio_iommu_addr_inv_iotlb 7.050 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.110 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.680 pid=20045 prod=0x22 cons=0x22 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.390 pid=20045 addr=0x98 val=0x22 size=0x4 r=0x0
smmuv3_cmdq_consume 12.710 pid=20045 prod=0x23 cons=0x22 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.610 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.460 pid=20045 prod=0x23 cons=0x23 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x23 size=0x4 r=0x0
smmuv3_read_mmio 10.460 pid=20045 addr=0x9c val=0x23 size=0x4 r=0x0
smmuv3_cmdq_consume 1024052.102 pid=20045 prod=0x24 cons=0x23 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.120 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.440 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.960 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.210 pid=20045 sid=0x8 hits=0x13 misses=0x2 perc=0x5a
vfio_iommu_addr_inv_iotlb 7.030 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.660 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.760 pid=20045 prod=0x24 cons=0x24 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.380 pid=20045 addr=0x98 val=0x24 size=0x4 r=0x0
smmuv3_cmdq_consume 12.590 pid=20045 prod=0x25 cons=0x24 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.580 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x25 cons=0x25 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x25 size=0x4 r=0x0
smmuv3_read_mmio 10.670 pid=20045 addr=0x9c val=0x25 size=0x4 r=0x0
smmuv3_cmdq_consume 1023942.570 pid=20045 prod=0x26 cons=0x25 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.290 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 8.940 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.380 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.880 pid=20045 sid=0x8 hits=0x14 misses=0x2 perc=0x5a
vfio_iommu_addr_inv_iotlb 6.790 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.131 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.690 pid=20045 prod=0x26 cons=0x26 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x26 size=0x4 r=0x0
smmuv3_cmdq_consume 13.750 pid=20045 prod=0x27 cons=0x26 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.730 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.440 pid=20045 prod=0x27 cons=0x27 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.250 pid=20045 addr=0x98 val=0x27 size=0x4 r=0x0
smmuv3_read_mmio 10.630 pid=20045 addr=0x9c val=0x27 size=0x4 r=0x0
smmuv3_cmdq_consume 1023901.060 pid=20045 prod=0x28 cons=0x27 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.290 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.500 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.310 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.260 pid=20045 sid=0x8 hits=0x15 misses=0x2 perc=0x5b
vfio_iommu_addr_inv_iotlb 6.500 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.750 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 1.080 pid=20045 prod=0x28 cons=0x28 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.380 pid=20045 addr=0x98 val=0x28 size=0x4 r=0x0
smmuv3_cmdq_consume 13.100 pid=20045 prod=0x29 cons=0x28 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.510 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x29 cons=0x29 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x29 size=0x4 r=0x0
smmuv3_read_mmio 11.040 pid=20045 addr=0x9c val=0x29 size=0x4 r=0x0
smmuv3_cmdq_consume 1023891.720 pid=20045 prod=0x2a cons=0x29 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.010 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.500 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.800 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.880 pid=20045 sid=0x8 hits=0x16 misses=0x2 perc=0x5b
vfio_iommu_addr_inv_iotlb 6.510 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 3.370 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.630 pid=20045 prod=0x2a cons=0x2a prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.380 pid=20045 addr=0x98 val=0x2a size=0x4 r=0x0
smmuv3_cmdq_consume 12.800 pid=20045 prod=0x2b cons=0x2a prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.690 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.510 pid=20045 prod=0x2b cons=0x2b prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x2b size=0x4 r=0x0
smmuv3_read_mmio 10.360 pid=20045 addr=0x9c val=0x2b size=0x4 r=0x0
smmuv3_cmdq_consume 1024021.400 pid=20045 prod=0x2c cons=0x2b prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.070 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.410 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 6.970 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.270 pid=20045 sid=0x8 hits=0x17 misses=0x2 perc=0x5c
vfio_iommu_addr_inv_iotlb 7.240 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.920 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.640 pid=20045 prod=0x2c cons=0x2c prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.420 pid=20045 addr=0x98 val=0x2c size=0x4 r=0x0
smmuv3_cmdq_consume 12.430 pid=20045 prod=0x2d cons=0x2c prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.580 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.480 pid=20045 prod=0x2d cons=0x2d prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x2d size=0x4 r=0x0
smmuv3_read_mmio 10.350 pid=20045 addr=0x9c val=0x2d size=0x4 r=0x0
smmuv3_cmdq_consume 1023960.109 pid=20045 prod=0x2e cons=0x2d prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.110 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.520 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.370 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.650 pid=20045 sid=0x8 hits=0x18 misses=0x2 perc=0x5c
vfio_iommu_addr_inv_iotlb 6.950 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.290 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.730 pid=20045 prod=0x2e cons=0x2e prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.490 pid=20045 addr=0x98 val=0x2e size=0x4 r=0x0
smmuv3_cmdq_consume 12.991 pid=20045 prod=0x2f cons=0x2e prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.660 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.440 pid=20045 prod=0x2f cons=0x2f prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x2f size=0x4 r=0x0
smmuv3_read_mmio 10.920 pid=20045 addr=0x9c val=0x2f size=0x4 r=0x0
smmuv3_cmdq_consume 1023868.888 pid=20045 prod=0x30 cons=0x2f prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.030 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.480 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.110 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.150 pid=20045 sid=0x8 hits=0x19 misses=0x2 perc=0x5c
vfio_iommu_addr_inv_iotlb 6.200 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 21.311 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.650 pid=20045 prod=0x30 cons=0x30 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.330 pid=20045 addr=0x98 val=0x30 size=0x4 r=0x0
smmuv3_cmdq_consume 13.350 pid=20045 prod=0x31 cons=0x30 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.660 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.430 pid=20045 prod=0x31 cons=0x31 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.270 pid=20045 addr=0x98 val=0x31 size=0x4 r=0x0
smmuv3_read_mmio 10.540 pid=20045 addr=0x9c val=0x31 size=0x4 r=0x0
smmuv3_cmdq_consume 1023945.358 pid=20045 prod=0x32 cons=0x31 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.130 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.540 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.280 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.460 pid=20045 sid=0x8 hits=0x1a misses=0x2 perc=0x5c
vfio_iommu_addr_inv_iotlb 6.560 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.210 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.670 pid=20045 prod=0x32 cons=0x32 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x32 size=0x4 r=0x0
smmuv3_cmdq_consume 12.890 pid=20045 prod=0x33 cons=0x32 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.650 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.400 pid=20045 prod=0x33 cons=0x33 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x33 size=0x4 r=0x0
smmuv3_read_mmio 10.720 pid=20045 addr=0x9c val=0x33 size=0x4 r=0x0
smmuv3_cmdq_consume 1023937.319 pid=20045 prod=0x34 cons=0x33 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.910 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.400 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.610 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.050 pid=20045 sid=0x8 hits=0x1b misses=0x2 perc=0x5d
vfio_iommu_addr_inv_iotlb 7.300 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.770 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.630 pid=20045 prod=0x34 cons=0x34 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.330 pid=20045 addr=0x98 val=0x34 size=0x4 r=0x0
smmuv3_cmdq_consume 12.190 pid=20045 prod=0x35 cons=0x34 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.750 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.420 pid=20045 prod=0x35 cons=0x35 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x35 size=0x4 r=0x0
smmuv3_read_mmio 10.040 pid=20045 addr=0x9c val=0x35 size=0x4 r=0x0
smmuv3_cmdq_consume 1024026.488 pid=20045 prod=0x36 cons=0x35 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.170 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.500 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 7.650 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.980 pid=20045 sid=0x8 hits=0x1c misses=0x2 perc=0x5d
vfio_iommu_addr_inv_iotlb 8.040 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.720 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.760 pid=20045 prod=0x36 cons=0x36 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.470 pid=20045 addr=0x98 val=0x36 size=0x4 r=0x0
smmuv3_cmdq_consume 13.500 pid=20045 prod=0x37 cons=0x36 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.740 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x37 cons=0x37 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.290 pid=20045 addr=0x98 val=0x37 size=0x4 r=0x0
smmuv3_read_mmio 11.230 pid=20045 addr=0x9c val=0x37 size=0x4 r=0x0
smmuv3_cmdq_consume 1023945.928 pid=20045 prod=0x38 cons=0x37 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.160 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.540 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.530 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.270 pid=20045 sid=0x8 hits=0x1d misses=0x2 perc=0x5d
vfio_iommu_addr_inv_iotlb 7.070 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.650 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.750 pid=20045 prod=0x38 cons=0x38 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.370 pid=20045 addr=0x98 val=0x38 size=0x4 r=0x0
smmuv3_cmdq_consume 13.210 pid=20045 prod=0x39 cons=0x38 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.630 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.450 pid=20045 prod=0x39 cons=0x39 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x39 size=0x4 r=0x0
smmuv3_read_mmio 10.830 pid=20045 addr=0x9c val=0x39 size=0x4 r=0x0
smmuv3_cmdq_consume 1023936.808 pid=20045 prod=0x3a cons=0x39 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.210 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.510 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.770 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.740 pid=20045 sid=0x8 hits=0x1e misses=0x2 perc=0x5d
vfio_iommu_addr_inv_iotlb 6.390 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.460 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.610 pid=20045 prod=0x3a cons=0x3a prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.400 pid=20045 addr=0x98 val=0x3a size=0x4 r=0x0
smmuv3_cmdq_consume 12.890 pid=20045 prod=0x3b cons=0x3a prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 8.190 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.440 pid=20045 prod=0x3b cons=0x3b prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.260 pid=20045 addr=0x98 val=0x3b size=0x4 r=0x0
smmuv3_read_mmio 12.510 pid=20045 addr=0x9c val=0x3b size=0x4 r=0x0
smmuv3_cmdq_consume 1023876.547 pid=20045 prod=0x3c cons=0x3b prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.130 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.480 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 9.030 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 6.960 pid=20045 sid=0x8 hits=0x1f misses=0x2 perc=0x5d
vfio_iommu_addr_inv_iotlb 7.050 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.600 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.670 pid=20045 prod=0x3c cons=0x3c prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x3c size=0x4 r=0x0
smmuv3_cmdq_consume 12.550 pid=20045 prod=0x3d cons=0x3c prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.570 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.400 pid=20045 prod=0x3d cons=0x3d prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x3d size=0x4 r=0x0
smmuv3_read_mmio 11.030 pid=20045 addr=0x9c val=0x3d size=0x4 r=0x0
smmuv3_cmdq_consume 1024016.837 pid=20045 prod=0x3e cons=0x3d prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.150 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.500 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.650 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.730 pid=20045 sid=0x8 hits=0x20 misses=0x2 perc=0x5e
vfio_iommu_addr_inv_iotlb 6.410 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.270 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.860 pid=20045 prod=0x3e cons=0x3e prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.350 pid=20045 addr=0x98 val=0x3e size=0x4 r=0x0
smmuv3_cmdq_consume 13.270 pid=20045 prod=0x3f cons=0x3e prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.690 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.490 pid=20045 prod=0x3f cons=0x3f prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.290 pid=20045 addr=0x98 val=0x3f size=0x4 r=0x0
smmuv3_read_mmio 12.700 pid=20045 addr=0x9c val=0x3f size=0x4 r=0x0
smmuv3_cmdq_consume 1023942.877 pid=20045 prod=0x40 cons=0x3f prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 1.340 pid=20045 opcode=SMMU_CMD_TLBI_NH_VA
smmuv3_cmdq_tlbi_nh_va 0.470 pid=20045 vmid=0x0 asid=0x0 addr=0xffdf6000 leaf=0x1
smmuv3_inv_notifiers_iova 8.710 pid=20045 name=smmuv3-iommu-memory-region-8-0 asid=0x0 iova=0xffdf6000
smmuv3_config_cache_hit 7.080 pid=20045 sid=0x8 hits=0x21 misses=0x2 perc=0x5e
vfio_iommu_addr_inv_iotlb 7.480 pid=20045 asid=0x0 addr=0xffdf6000 size=0x1000 nb_granules=0x1 leaf=0x1
smmu_iotlb_inv_iova 4.230 pid=20045 asid=0x0 addr=0xffdf6000
smmuv3_cmdq_consume_out 0.650 pid=20045 prod=0x40 cons=0x40 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.420 pid=20045 addr=0x98 val=0x40 size=0x4 r=0x0
smmuv3_cmdq_consume 12.510 pid=20045 prod=0x41 cons=0x40 prod_wrap=0x0 cons_wrap=0x0
smmuv3_cmdq_opcode 0.680 pid=20045 opcode=SMMU_CMD_SYNC
smmuv3_cmdq_consume_out 0.380 pid=20045 prod=0x41 cons=0x41 prod_wrap=0x0 cons_wrap=0x0
smmuv3_write_mmio 0.280 pid=20045 addr=0x98 val=0x41 size=0x4 r=0x0
smmuv3_read_mmio 10.130 pid=20045 addr=0x9c val=0x41 size=0x4 r=0x0


* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 11:08 ` Shameerali Kolothum Thodi
@ 2019-11-12 11:28   ` Auger Eric
  2019-11-12 13:06     ` Shameerali Kolothum Thodi
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2019-11-12 11:28 UTC (permalink / raw)
  To: Shameerali Kolothum Thodi, eric.auger.pro, iommu, linux-kernel,
	kvm, kvmarm, joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Shameer,
On 11/12/19 12:08 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
> 
>> -----Original Message-----
>> From: kvmarm-bounces@lists.cs.columbia.edu
>> [mailto:kvmarm-bounces@lists.cs.columbia.edu] On Behalf Of Eric Auger
>> Sent: 11 July 2019 14:56
>> To: eric.auger.pro@gmail.com; eric.auger@redhat.com;
>> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
>> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
>> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
>> robin.murphy@arm.com
>> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
>> marc.zyngier@arm.com; tina.zhang@intel.com
>> Subject: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
>>
>> This series brings the VFIO part of HW nested paging support
>> in the SMMUv3.
>>
>> The series depends on:
>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>> (https://www.spinics.net/lists/kernel/msg3187714.html)
>>
>> 3 new IOCTLs are introduced that allow the userspace to
>> 1) pass the guest stage 1 configuration
>> 2) pass stage 1 MSI bindings
>> 3) invalidate stage 1 related caches
>>
>> They map onto the related new IOMMU API functions.
>>
>> We introduce the capability to register specific interrupt
>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
>> an eventfd to be signaled whenever a stage 1 related fault
>> is detected at physical level. Also a specific region allows
>> to expose the fault records to the user space.
> 
> I am trying to get this running on one of our platforms that has SMMUv3
> dual-stage support. I am seeing some issues when an ixgbe VF device is
> passed through and sits behind a vSMMUv3 in the guest.
> 
> Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
> 
> And this is my QEMU command line,
> 
> ./qemu-system-aarch64 \
> -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host \
> -kernel Image \
> -drive if=none,file=ubuntu,id=fs \
> -device virtio-blk-device,drive=fs \
> -device vfio-pci,host=0000:01:10.1 \
> -bios QEMU_EFI.fd \
> -net none \
> -m 4G \
> -nographic -D -d -enable-kvm \
> -append "console=ttyAMA0 root=/dev/vda rw acpi=force"
> 
> The basic ping from the guest works fine,
> root@ubuntu:~# ping 10.202.225.185
> PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
> 64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
> 64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
> ...
> 
> But if I increase the ping packet size,
> 
> root@ubuntu:~# ping -s 1024 10.202.225.185
> PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
> 1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
> 1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
> From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
> From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
> From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
> From 10.202.225.169 icmp_seq=69 Destination Host Unreachable
> 
> And from the host kernel I get,
> [  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
> [  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> [  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> [  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets detected
> [  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> [  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
> 
> I also noted that iperf does not work, as it fails to establish a connection
> with the iperf server.
> 
> Please find attached the trace logs (vfio*, smmuv3*) from QEMU for your reference.
> I haven't debugged this further yet and thought I would check with you whether
> this is something you have already seen. Or maybe I am missing something here?

Could you please try editing hw/vfio/common.c, function
vfio_iommu_unmap_notify, as follows:


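/* Disable the per-IOVA (granule) invalidation path below so that every
 * guest TLBI falls through to the ASID-scope invalidation instead: */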
/*
    if (size <= 0x10000) {
        ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
        ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
        ustruct.info.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID;
        if (iotlb->leaf) {
            ustruct.info.addr_info.flags |= IOMMU_INV_ADDR_FLAGS_LEAF;
        }
        ustruct.info.addr_info.archid = iotlb->arch_id;
        ustruct.info.addr_info.addr = start;
        ustruct.info.addr_info.granule_size = size;
        ustruct.info.addr_info.nb_granules = 1;
        trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
                                        iotlb->leaf);
    } else {
*/
        ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
        ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
        ustruct.info.pasid_info.archid = iotlb->arch_id;
        ustruct.info.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
        trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
//    }

This modification invalidates the whole ASID each time we get a guest
TLBI, instead of invalidating only the single IOVA. On my end, this was
the cause of this kind of issue. Please let me know whether it fixes
your performance issues, and then we can discuss the test configuration
further.
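
For reference, a minimal sketch of what the ASID-scope branch boils down
to on the QEMU side (the struct name, the argsz plumbing and the
container fd are assumptions based on the series uapi, not verbatim
series code):

struct vfio_iommu_type1_cache_invalidate ustruct = {
    .argsz = sizeof(ustruct),
    .flags = 0,
};

/* ASID-wide stage 1 IOTLB invalidation, as in the snippet above */
ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
ustruct.info.pasid_info.archid = iotlb->arch_id;  /* guest ASID */
ustruct.info.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;

if (ioctl(container->fd, VFIO_IOMMU_CACHE_INVALIDATE, &ustruct)) {
    error_report("VFIO_IOMMU_CACHE_INVALIDATE failed: %m");
}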

Thanks

Eric



> 
> Please let me know.
> 
> Thanks,
> Shameer
> 
>> [...]


* RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 11:28   ` Auger Eric
@ 2019-11-12 13:06     ` Shameerali Kolothum Thodi
  2019-11-12 13:21       ` Auger Eric
  0 siblings, 1 reply; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2019-11-12 13:06 UTC (permalink / raw)
  To: Auger Eric, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Eric,

> -----Original Message-----
> From: Auger Eric [mailto:eric.auger@redhat.com]
> Sent: 12 November 2019 11:29
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> eric.auger.pro@gmail.com; iommu@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> robin.murphy@arm.com
> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
> Subject: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> 
> Hi Shameer,
> On 11/12/19 12:08 PM, Shameerali Kolothum Thodi wrote:
> > [...]
> 
> Please can you try to edit and modify hw/vfio/common.c, function
> vfio_iommu_unmap_notify
> 
> 
> /*
>     if (size <= 0x10000) {
>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>         ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
>         ustruct.info.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID;
>         if (iotlb->leaf) {
>             ustruct.info.addr_info.flags |=
> IOMMU_INV_ADDR_FLAGS_LEAF;
>         }
>         ustruct.info.addr_info.archid = iotlb->arch_id;
>         ustruct.info.addr_info.addr = start;
>         ustruct.info.addr_info.granule_size = size;
>         ustruct.info.addr_info.nb_granules = 1;
>         trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
>                                         iotlb->leaf);
>     } else {
> */
>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>         ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
>         ustruct.info.pasid_info.archid = iotlb->arch_id;
>         ustruct.info.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
>         trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
> //    }
> 
> This modification leads to invalidate the whole asid each time we get a
> guest TLBI instead of invalidating the single IOVA (TLBI). On my end, I
> saw this was the cause of such kind of issues. Please let me know if it
> fixes your perf issues

Yes, this seems to fix the issue.

root@ubuntu:~# iperf -c 10.202.225.185
------------------------------------------------------------
Client connecting to 10.202.225.185, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.202.225.169 port 47996 connected with 10.202.225.185 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec
root@ubuntu:~#

But the performance seems very poor as this is a 10Gbps interface (of course,
invalidating the whole ASID may not be very helpful). It is interesting why the
single IOVA invalidation is not working.

> and then we may discuss further about the test
> configuration.

Sure. Please let me know.

Cheers,
Shameer 

> Thanks
> 
> Eric
> 
> 
> 
> >
> > Please let me know.
> >
> > Thanks,
> > Shameer
> >
> >> Best Regards
> >>
> >> Eric
> >>
> >> This series can be found at:
> >> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> >>
> >> It series includes Tina's patch steming from
> >> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> >> specific irq" plus patches originally contributed by Yi.
> >>
> >> History:
> >>
> >> v8 -> v9:
> >> - introduce specific irq framework
> >> - single fault region
> >> - iommu_unregister_device_fault_handler failure case not handled
> >>   yet.
> >>
> >> v7 -> v8:
> >> - rebase on top of v5.2-rc1 and especially
> >>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
> >> - dynamic alloc of s1_cfg/s2_cfg
> >> - __arm_smmu_tlb_inv_asid/s1_range_nosync
> >> - check there is no HW MSI regions
> >> - asid invalidation using pasid extended struct (change in the uapi)
> >> - add s1_live/s2_live checks
> >> - move check about support of nested stages in domain finalise
> >> - fixes in error reporting according to the discussion with Robin
> >> - reordered the patches to have first iommu/smmuv3 patches and then
> >>   VFIO patches
> >>
> >> v6 -> v7:
> >> - removed device handle from bind/unbind_guest_msi
> >> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
> >>   enforcement"
> >> - added few uapi comments as suggested by Jean, Jacop and Alex
> >>
> >> v5 -> v6:
> >> - Fix compilation issue when CONFIG_IOMMU_API is unset
> >>
> >> v4 -> v5:
> >> - fix bug reported by Vincent: fault handler unregistration now happens in
> >>   vfio_pci_release
> >> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
> >>   uapi changes suggested by Kean-Philippe (except fetch_addr)
> >> - iommu: introduce device fault report API: removed the PRI part.
> >> - see individual logs for more details
> >> - reset the ste abort flag on detach
> >>
> >> v3 -> v4:
> >> - took into account Alex, jean-Philippe and Robin's comments on v3
> >> - rework of the smmuv3 driver integration
> >> - add tear down ops for msi binding and PASID table binding
> >> - fix S1 fault propagation
> >> - put fault reporting patches at the beginning of the series following
> >>   Jean-Philippe's request
> >> - update of the cache invalidate and fault API uapis
> >> - VFIO fault reporting rework with 2 separate regions and one mmappable
> >>   segment for the fault queue
> >> - moved to PATCH
> >>
> >> v2 -> v3:
> >> - When registering the S1 MSI binding we now store the device handle. This
> >>   addresses Robin's comment about discimination of devices beonging to
> >>   different S1 groups and using different physical MSI doorbells.
> >> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
> >>   set the eventfd and expose the faults through an mmappable fault region
> >>
> >> v1 -> v2:
> >> - Added the fault reporting capability
> >> - asid properly passed on invalidation (fix assignment of multiple
> >>   devices)
> >> - see individual change logs for more info
> >>
> >>
> >> Eric Auger (8):
> >>   vfio: VFIO_IOMMU_SET_MSI_BINDING
> >>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
> >>   vfio/pci: Register an iommu fault handler
> >>   vfio/pci: Allow to mmap the fault queue
> >>   vfio: Add new IRQ for DMA fault reporting
> >>   vfio/pci: Add framework for custom interrupt indices
> >>   vfio/pci: Register and allow DMA FAULT IRQ signaling
> >>   vfio: Document nested stage control
> >>
> >> Liu, Yi L (2):
> >>   vfio: VFIO_IOMMU_SET_PASID_TABLE
> >>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
> >>
> >> Tina Zhang (1):
> >>   vfio: Use capability chains to handle device specific irq
> >>
> >>  Documentation/vfio.txt              |  77 ++++++++
> >>  drivers/vfio/pci/vfio_pci.c         | 283
> ++++++++++++++++++++++++++--
> >>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
> >>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
> >>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
> >>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
> >>  include/uapi/linux/vfio.h           | 109 ++++++++++-
> >>  7 files changed, 747 insertions(+), 19 deletions(-)
> >>
> >> --
> >> 2.20.1
> >>
> >> _______________________________________________
> >> kvmarm mailing list
> >> kvmarm@lists.cs.columbia.edu
> >> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 13:06     ` Shameerali Kolothum Thodi
@ 2019-11-12 13:21       ` Auger Eric
  2019-11-12 14:21         ` Shameerali Kolothum Thodi
  2019-11-12 17:56         ` Shameerali Kolothum Thodi
  0 siblings, 2 replies; 25+ messages in thread
From: Auger Eric @ 2019-11-12 13:21 UTC (permalink / raw)
  To: Shameerali Kolothum Thodi, eric.auger.pro, iommu, linux-kernel,
	kvm, kvmarm, joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Shameer,

On 11/12/19 2:06 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
> 
>> -----Original Message-----
>> From: Auger Eric [mailto:eric.auger@redhat.com]
>> Sent: 12 November 2019 11:29
>> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
>> eric.auger.pro@gmail.com; iommu@lists.linux-foundation.org;
>> linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
>> kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
>> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
>> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
>> robin.murphy@arm.com
>> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
>> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
>> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
>> Subject: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
>>
>> Hi Shameer,
>> On 11/12/19 12:08 PM, Shameerali Kolothum Thodi wrote:
>>> Hi Eric,
>>>
>>>> -----Original Message-----
>>>> From: kvmarm-bounces@lists.cs.columbia.edu
>>>> [mailto:kvmarm-bounces@lists.cs.columbia.edu] On Behalf Of Eric Auger
>>>> Sent: 11 July 2019 14:56
>>>> To: eric.auger.pro@gmail.com; eric.auger@redhat.com;
>>>> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
>>>> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
>>>> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
>>>> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
>>>> robin.murphy@arm.com
>>>> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
>>>> marc.zyngier@arm.com; tina.zhang@intel.com
>>>> Subject: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
>>>>
>>>> This series brings the VFIO part of HW nested paging support
>>>> in the SMMUv3.
>>>>
>>>> The series depends on:
>>>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>>>> (https://www.spinics.net/lists/kernel/msg3187714.html)
>>>>
>>>> 3 new IOCTLs are introduced that allow the userspace to
>>>> 1) pass the guest stage 1 configuration
>>>> 2) pass stage 1 MSI bindings
>>>> 3) invalidate stage 1 related caches
>>>>
>>>> They map onto the related new IOMMU API functions.
>>>>
>>>> We introduce the capability to register specific interrupt
>>>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
>>>> an eventfd to be signaled whenever a stage 1 related fault
>>>> is detected at physical level. Also a specific region allows
>>>> to expose the fault records to the user space.
>>>
>>> I am trying to get this running on one of our platform that has smmuv3 dual
>>> stage support. I am seeing some issues with this when an ixgbe vf dev is
>>> made pass-through and is behind a vSMMUv3 in Guest.
>>>
>>> Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>>> Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
>>>
>>> And this is my Qemu cmd line,
>>>
>>> ./qemu-system-aarch64
>>> -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host \
>>> -kernel Image \
>>> -drive if=none,file=ubuntu,id=fs \
>>> -device virtio-blk-device,drive=fs \
>>> -device vfio-pci,host=0000:01:10.1 \
>>> -bios QEMU_EFI.fd \
>>> -net none \
>>> -m 4G \
>>> -nographic -D -d -enable-kvm \
>>> -append "console=ttyAMA0 root=/dev/vda rw acpi=force"
>>>
>>> The basic ping from Guest works fine,
>>> root@ubuntu:~# ping 10.202.225.185
>>> PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
>>> 64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
>>> 64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
>>> ...
>>>
>>> But if I increase ping packet size,
>>>
>>> root@ubuntu:~# ping -s 1024 10.202.225.185
>>> PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
>>> 1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
>>> 1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
>>> From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
>>> From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
>>> From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
>>> From 10.202.225.169 icmp_seq=69 Destination Host Unreachable
>>>
>>> And from Host kernel I get,
>>> [  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
>>> [  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
>>> [  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
>>> [  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets detected
>>> [  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
>>> [  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
>>>
>>> Also noted that iperf cannot work as it fails to establish the connection with
>> iperf
>>> server.
>>>
>>> Please find attached the trace logs(vfio*, smmuv3*) from Qemu for your
>> reference.
>>> I haven't debugged this further yet and thought of checking with you if this is
>>> something you have seen already or not. Or maybe I am missing something
>> here?
>>
>> Please can you try to edit and modify hw/vfio/common.c, function
>> vfio_iommu_unmap_notify
>>
>>
>> /*
>>     if (size <= 0x10000) {
>>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>>         ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
>>         ustruct.info.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID;
>>         if (iotlb->leaf) {
>>             ustruct.info.addr_info.flags |=
>> IOMMU_INV_ADDR_FLAGS_LEAF;
>>         }
>>         ustruct.info.addr_info.archid = iotlb->arch_id;
>>         ustruct.info.addr_info.addr = start;
>>         ustruct.info.addr_info.granule_size = size;
>>         ustruct.info.addr_info.nb_granules = 1;
>>         trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
>>                                         iotlb->leaf);
>>     } else {
>> */
>>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>>         ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
>>         ustruct.info.pasid_info.archid = iotlb->arch_id;
>>         ustruct.info.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
>>         trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
>> //    }
>>
>> This modification leads to invalidate the whole asid each time we get a
>> guest TLBI instead of invalidating the single IOVA (TLBI). On my end, I
>> saw this was the cause of such kind of issues. Please let me know if it
>> fixes your perf issues
> 
> Yes, this seems to fix the issue.
> 
> root@ubuntu:~# iperf -c 10.202.225.185
> ------------------------------------------------------------
> Client connecting to 10.202.225.185, TCP port 5001
> TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.202.225.169 port 47996 connected with 10.202.225.185 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec
> root@ubuntu:~#
> 
> But the performance seems to be very poor as this is a 10Gbps interface(Of course
> invalidating the whole asid may not be very helpful). It is interesting that why the
> single iova invalidation is not working.
> 
>  and then we may discuss further about the test
>> configuration.
> 
> Sure. Please let me know.

I reported that issue earlier on the ML. I have not been able to find
any integration issue in the kernel/qemu code, but maybe I am too blind
to it by now, as I wrote it ;-) When I get a guest stage-1 TLBI I cascade
it down to the physical IOMMU, and I also pass the LEAF flag.
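
For reference, the cascade on the QEMU side is essentially the per-IOVA
branch quoted above handed to the new ioctl. A minimal sketch, inside
vfio_iommu_unmap_notify() so 'iotlb', 'start' and 'size' come from the
notifier; error handling is trimmed and the struct/ioctl names are the
ones this series introduces:

    struct vfio_iommu_type1_cache_invalidate ustruct = {};

    ustruct.argsz = sizeof(ustruct);
    ustruct.info.cache       = IOMMU_CACHE_INV_TYPE_IOTLB;
    ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
    ustruct.info.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID;
    if (iotlb->leaf) {
        /* only last-level (leaf) entries need to be invalidated */
        ustruct.info.addr_info.flags |= IOMMU_INV_ADDR_FLAGS_LEAF;
    }
    ustruct.info.addr_info.archid       = iotlb->arch_id; /* guest ASID */
    ustruct.info.addr_info.addr         = start;
    ustruct.info.addr_info.granule_size = size;
    ustruct.info.addr_info.nb_granules  = 1;

    ioctl(container->fd, VFIO_IOMMU_CACHE_INVALIDATE, &ustruct);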

As you are an expert on the SMMUv3 PMU, if your implementation has one
and you have cycles to look at this, it would be helpful to run it and
see if anything weird gets highlighted.

Thanks

Eric
> 
> Cheers,
> Shameer 
> 
>> Thanks
>>
>> Eric
>>
>>
>>
>>>
>>> Please let me know.
>>>
>>> Thanks,
>>> Shameer
>>>
>>>> Best Regards
>>>>
>>>> Eric
>>>>
>>>> This series can be found at:
>>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>>>>
>>>> It series includes Tina's patch steming from
>>>> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
>>>> specific irq" plus patches originally contributed by Yi.
>>>>
>>>> History:
>>>>
>>>> v8 -> v9:
>>>> - introduce specific irq framework
>>>> - single fault region
>>>> - iommu_unregister_device_fault_handler failure case not handled
>>>>   yet.
>>>>
>>>> v7 -> v8:
>>>> - rebase on top of v5.2-rc1 and especially
>>>>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
>>>> - dynamic alloc of s1_cfg/s2_cfg
>>>> - __arm_smmu_tlb_inv_asid/s1_range_nosync
>>>> - check there is no HW MSI regions
>>>> - asid invalidation using pasid extended struct (change in the uapi)
>>>> - add s1_live/s2_live checks
>>>> - move check about support of nested stages in domain finalise
>>>> - fixes in error reporting according to the discussion with Robin
>>>> - reordered the patches to have first iommu/smmuv3 patches and then
>>>>   VFIO patches
>>>>
>>>> v6 -> v7:
>>>> - removed device handle from bind/unbind_guest_msi
>>>> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
>>>>   enforcement"
>>>> - added few uapi comments as suggested by Jean, Jacop and Alex
>>>>
>>>> v5 -> v6:
>>>> - Fix compilation issue when CONFIG_IOMMU_API is unset
>>>>
>>>> v4 -> v5:
>>>> - fix bug reported by Vincent: fault handler unregistration now happens in
>>>>   vfio_pci_release
>>>> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
>>>>   uapi changes suggested by Kean-Philippe (except fetch_addr)
>>>> - iommu: introduce device fault report API: removed the PRI part.
>>>> - see individual logs for more details
>>>> - reset the ste abort flag on detach
>>>>
>>>> v3 -> v4:
>>>> - took into account Alex, jean-Philippe and Robin's comments on v3
>>>> - rework of the smmuv3 driver integration
>>>> - add tear down ops for msi binding and PASID table binding
>>>> - fix S1 fault propagation
>>>> - put fault reporting patches at the beginning of the series following
>>>>   Jean-Philippe's request
>>>> - update of the cache invalidate and fault API uapis
>>>> - VFIO fault reporting rework with 2 separate regions and one mmappable
>>>>   segment for the fault queue
>>>> - moved to PATCH
>>>>
>>>> v2 -> v3:
>>>> - When registering the S1 MSI binding we now store the device handle. This
>>>>   addresses Robin's comment about discimination of devices beonging to
>>>>   different S1 groups and using different physical MSI doorbells.
>>>> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
>>>>   set the eventfd and expose the faults through an mmappable fault region
>>>>
>>>> v1 -> v2:
>>>> - Added the fault reporting capability
>>>> - asid properly passed on invalidation (fix assignment of multiple
>>>>   devices)
>>>> - see individual change logs for more info
>>>>
>>>>
>>>> Eric Auger (8):
>>>>   vfio: VFIO_IOMMU_SET_MSI_BINDING
>>>>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
>>>>   vfio/pci: Register an iommu fault handler
>>>>   vfio/pci: Allow to mmap the fault queue
>>>>   vfio: Add new IRQ for DMA fault reporting
>>>>   vfio/pci: Add framework for custom interrupt indices
>>>>   vfio/pci: Register and allow DMA FAULT IRQ signaling
>>>>   vfio: Document nested stage control
>>>>
>>>> Liu, Yi L (2):
>>>>   vfio: VFIO_IOMMU_SET_PASID_TABLE
>>>>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>>>
>>>> Tina Zhang (1):
>>>>   vfio: Use capability chains to handle device specific irq
>>>>
>>>>  Documentation/vfio.txt              |  77 ++++++++
>>>>  drivers/vfio/pci/vfio_pci.c         | 283
>> ++++++++++++++++++++++++++--
>>>>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
>>>>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
>>>>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
>>>>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
>>>>  include/uapi/linux/vfio.h           | 109 ++++++++++-
>>>>  7 files changed, 747 insertions(+), 19 deletions(-)
>>>>
>>>> --
>>>> 2.20.1
>>>>
>>>> _______________________________________________
>>>> kvmarm mailing list
>>>> kvmarm@lists.cs.columbia.edu
>>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 13:21       ` Auger Eric
@ 2019-11-12 14:21         ` Shameerali Kolothum Thodi
  2019-11-12 17:56         ` Shameerali Kolothum Thodi
  1 sibling, 0 replies; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2019-11-12 14:21 UTC (permalink / raw)
  To: Auger Eric, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)



> -----Original Message-----
> From: Auger Eric [mailto:eric.auger@redhat.com]
> Sent: 12 November 2019 13:22
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> eric.auger.pro@gmail.com; iommu@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> robin.murphy@arm.com
> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
> Subject: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> 
> Hi Shameer,
> 
> On 11/12/19 2:06 PM, Shameerali Kolothum Thodi wrote:
> > Hi Eric,
> >
> >> -----Original Message-----
> >> From: Auger Eric [mailto:eric.auger@redhat.com]
> >> Sent: 12 November 2019 11:29
> >> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> >> eric.auger.pro@gmail.com; iommu@lists.linux-foundation.org;
> >> linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> >> kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> >> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> >> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> >> robin.murphy@arm.com
> >> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> >> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
> >> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
> >> Subject: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> >>
> >> Hi Shameer,
> >> On 11/12/19 12:08 PM, Shameerali Kolothum Thodi wrote:
> >>> Hi Eric,
> >>>
> >>>> -----Original Message-----
> >>>> From: kvmarm-bounces@lists.cs.columbia.edu
> >>>> [mailto:kvmarm-bounces@lists.cs.columbia.edu] On Behalf Of Eric Auger
> >>>> Sent: 11 July 2019 14:56
> >>>> To: eric.auger.pro@gmail.com; eric.auger@redhat.com;
> >>>> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
> >>>> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> >>>> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> >>>> yi.l.liu@intel.com; jean-philippe.brucker@arm.com;
> will.deacon@arm.com;
> >>>> robin.murphy@arm.com
> >>>> Cc: kevin.tian@intel.com; vincent.stehle@arm.com;
> ashok.raj@intel.com;
> >>>> marc.zyngier@arm.com; tina.zhang@intel.com
> >>>> Subject: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> >>>>
> >>>> This series brings the VFIO part of HW nested paging support
> >>>> in the SMMUv3.
> >>>>
> >>>> The series depends on:
> >>>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> >>>> (https://www.spinics.net/lists/kernel/msg3187714.html)
> >>>>
> >>>> 3 new IOCTLs are introduced that allow the userspace to
> >>>> 1) pass the guest stage 1 configuration
> >>>> 2) pass stage 1 MSI bindings
> >>>> 3) invalidate stage 1 related caches
> >>>>
> >>>> They map onto the related new IOMMU API functions.
> >>>>
> >>>> We introduce the capability to register specific interrupt
> >>>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
> >>>> an eventfd to be signaled whenever a stage 1 related fault
> >>>> is detected at physical level. Also a specific region allows
> >>>> to expose the fault records to the user space.
> >>>
> >>> I am trying to get this running on one of our platform that has smmuv3 dual
> >>> stage support. I am seeing some issues with this when an ixgbe vf dev is
> >>> made pass-through and is behind a vSMMUv3 in Guest.
> >>>
> >>> Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> >>> Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
> >>>
> >>> And this is my Qemu cmd line,
> >>>
> >>> ./qemu-system-aarch64
> >>> -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host \
> >>> -kernel Image \
> >>> -drive if=none,file=ubuntu,id=fs \
> >>> -device virtio-blk-device,drive=fs \
> >>> -device vfio-pci,host=0000:01:10.1 \
> >>> -bios QEMU_EFI.fd \
> >>> -net none \
> >>> -m 4G \
> >>> -nographic -D -d -enable-kvm \
> >>> -append "console=ttyAMA0 root=/dev/vda rw acpi=force"
> >>>
> >>> The basic ping from Guest works fine,
> >>> root@ubuntu:~# ping 10.202.225.185
> >>> PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
> >>> 64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
> >>> 64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
> >>> ...
> >>>
> >>> But if I increase ping packet size,
> >>>
> >>> root@ubuntu:~# ping -s 1024 10.202.225.185
> >>> PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
> >>> 1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
> >>> 1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
> >>> From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
> >>> From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
> >>> From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
> >>> From 10.202.225.169 icmp_seq=69 Destination Host Unreachable
> >>>
> >>> And from Host kernel I get,
> >>> [  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
> >>> [  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> >>> [  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> >>> [  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets detected
> >>> [  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets detected
> >>> [  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets detected
> >>>
> >>> Also noted that iperf cannot work as it fails to establish the connection
> with
> >> iperf
> >>> server.
> >>>
> >>> Please find attached the trace logs(vfio*, smmuv3*) from Qemu for your
> >> reference.
> >>> I haven't debugged this further yet and thought of checking with you if this
> is
> >>> something you have seen already or not. Or maybe I am missing something
> >> here?
> >>
> >> Please can you try to edit and modify hw/vfio/common.c, function
> >> vfio_iommu_unmap_notify
> >>
> >>
> >> /*
> >>     if (size <= 0x10000) {
> >>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
> >>         ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
> >>         ustruct.info.addr_info.flags =
> IOMMU_INV_ADDR_FLAGS_ARCHID;
> >>         if (iotlb->leaf) {
> >>             ustruct.info.addr_info.flags |=
> >> IOMMU_INV_ADDR_FLAGS_LEAF;
> >>         }
> >>         ustruct.info.addr_info.archid = iotlb->arch_id;
> >>         ustruct.info.addr_info.addr = start;
> >>         ustruct.info.addr_info.granule_size = size;
> >>         ustruct.info.addr_info.nb_granules = 1;
> >>         trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
> >>                                         iotlb->leaf);
> >>     } else {
> >> */
> >>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
> >>         ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
> >>         ustruct.info.pasid_info.archid = iotlb->arch_id;
> >>         ustruct.info.pasid_info.flags =
> IOMMU_INV_PASID_FLAGS_ARCHID;
> >>         trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
> >> //    }
> >>
> >> This modification leads to invalidate the whole asid each time we get a
> >> guest TLBI instead of invalidating the single IOVA (TLBI). On my end, I
> >> saw this was the cause of such kind of issues. Please let me know if it
> >> fixes your perf issues
> >
> > Yes, this seems to fix the issue.
> >
> > root@ubuntu:~# iperf -c 10.202.225.185
> > ------------------------------------------------------------
> > Client connecting to 10.202.225.185, TCP port 5001
> > TCP window size: 85.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.202.225.169 port 47996 connected with 10.202.225.185 port
> 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec
> > root@ubuntu:~#
> >
> > But the performance seems to be very poor as this is a 10Gbps interface(Of
> course
> > invalidating the whole asid may not be very helpful). It is interesting that why
> the
> > single iova invalidation is not working.
> >
> >  and then we may discuss further about the test
> >> configuration.
> >
> > Sure. Please let me know.
> 
> I reported that issue earlier on the ML. I have not been able to find
> any integration issue in the kernel/qemu code but maybe I am too blind
> now as I wrote it ;-) When I get a guest stage1 TLBI I cascade it down
> to the physical IOMMU. I also pass the LEAF flag.

Ok.

> As you are an expert of the SMMUv3 PMU, if your implementation has any
> and you have cycles to look at this, it would be helpful to run it and
> see if something weird gets highlighted.

:). Sure. I will give it a try and report back if anything looks suspicious.

Thanks,
Shameer

 
> Thanks
> 
> Eric
> >
> > Cheers,
> > Shameer
> >
> >> Thanks
> >>
> >> Eric
> >>
> >>
> >>
> >>>
> >>> Please let me know.
> >>>
> >>> Thanks,
> >>> Shameer
> >>>
> >>>> Best Regards
> >>>>
> >>>> Eric
> >>>>
> >>>> This series can be found at:
> >>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> >>>>
> >>>> It series includes Tina's patch steming from
> >>>> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> >>>> specific irq" plus patches originally contributed by Yi.
> >>>>
> >>>> History:
> >>>>
> >>>> v8 -> v9:
> >>>> - introduce specific irq framework
> >>>> - single fault region
> >>>> - iommu_unregister_device_fault_handler failure case not handled
> >>>>   yet.
> >>>>
> >>>> v7 -> v8:
> >>>> - rebase on top of v5.2-rc1 and especially
> >>>>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
> >>>> - dynamic alloc of s1_cfg/s2_cfg
> >>>> - __arm_smmu_tlb_inv_asid/s1_range_nosync
> >>>> - check there is no HW MSI regions
> >>>> - asid invalidation using pasid extended struct (change in the uapi)
> >>>> - add s1_live/s2_live checks
> >>>> - move check about support of nested stages in domain finalise
> >>>> - fixes in error reporting according to the discussion with Robin
> >>>> - reordered the patches to have first iommu/smmuv3 patches and then
> >>>>   VFIO patches
> >>>>
> >>>> v6 -> v7:
> >>>> - removed device handle from bind/unbind_guest_msi
> >>>> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
> >>>>   enforcement"
> >>>> - added few uapi comments as suggested by Jean, Jacop and Alex
> >>>>
> >>>> v5 -> v6:
> >>>> - Fix compilation issue when CONFIG_IOMMU_API is unset
> >>>>
> >>>> v4 -> v5:
> >>>> - fix bug reported by Vincent: fault handler unregistration now happens in
> >>>>   vfio_pci_release
> >>>> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
> >>>>   uapi changes suggested by Kean-Philippe (except fetch_addr)
> >>>> - iommu: introduce device fault report API: removed the PRI part.
> >>>> - see individual logs for more details
> >>>> - reset the ste abort flag on detach
> >>>>
> >>>> v3 -> v4:
> >>>> - took into account Alex, jean-Philippe and Robin's comments on v3
> >>>> - rework of the smmuv3 driver integration
> >>>> - add tear down ops for msi binding and PASID table binding
> >>>> - fix S1 fault propagation
> >>>> - put fault reporting patches at the beginning of the series following
> >>>>   Jean-Philippe's request
> >>>> - update of the cache invalidate and fault API uapis
> >>>> - VFIO fault reporting rework with 2 separate regions and one mmappable
> >>>>   segment for the fault queue
> >>>> - moved to PATCH
> >>>>
> >>>> v2 -> v3:
> >>>> - When registering the S1 MSI binding we now store the device handle.
> This
> >>>>   addresses Robin's comment about discimination of devices beonging
> to
> >>>>   different S1 groups and using different physical MSI doorbells.
> >>>> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX
> to
> >>>>   set the eventfd and expose the faults through an mmappable fault
> region
> >>>>
> >>>> v1 -> v2:
> >>>> - Added the fault reporting capability
> >>>> - asid properly passed on invalidation (fix assignment of multiple
> >>>>   devices)
> >>>> - see individual change logs for more info
> >>>>
> >>>>
> >>>> Eric Auger (8):
> >>>>   vfio: VFIO_IOMMU_SET_MSI_BINDING
> >>>>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
> >>>>   vfio/pci: Register an iommu fault handler
> >>>>   vfio/pci: Allow to mmap the fault queue
> >>>>   vfio: Add new IRQ for DMA fault reporting
> >>>>   vfio/pci: Add framework for custom interrupt indices
> >>>>   vfio/pci: Register and allow DMA FAULT IRQ signaling
> >>>>   vfio: Document nested stage control
> >>>>
> >>>> Liu, Yi L (2):
> >>>>   vfio: VFIO_IOMMU_SET_PASID_TABLE
> >>>>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
> >>>>
> >>>> Tina Zhang (1):
> >>>>   vfio: Use capability chains to handle device specific irq
> >>>>
> >>>>  Documentation/vfio.txt              |  77 ++++++++
> >>>>  drivers/vfio/pci/vfio_pci.c         | 283
> >> ++++++++++++++++++++++++++--
> >>>>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
> >>>>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
> >>>>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
> >>>>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
> >>>>  include/uapi/linux/vfio.h           | 109 ++++++++++-
> >>>>  7 files changed, 747 insertions(+), 19 deletions(-)
> >>>>
> >>>> --
> >>>> 2.20.1
> >>>>
> >>>> _______________________________________________
> >>>> kvmarm mailing list
> >>>> kvmarm@lists.cs.columbia.edu
> >>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> >

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 13:21       ` Auger Eric
  2019-11-12 14:21         ` Shameerali Kolothum Thodi
@ 2019-11-12 17:56         ` Shameerali Kolothum Thodi
  2019-11-12 20:34           ` Auger Eric
  1 sibling, 1 reply; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2019-11-12 17:56 UTC (permalink / raw)
  To: Auger Eric, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Eric,

> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: 12 November 2019 14:21
> To: 'Auger Eric' <eric.auger@redhat.com>; eric.auger.pro@gmail.com;
> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> robin.murphy@arm.com
> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
> Subject: RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> 
[...]
> > >>> I am trying to get this running on one of our platform that has smmuv3
> dual
> > >>> stage support. I am seeing some issues with this when an ixgbe vf dev is
> > >>> made pass-through and is behind a vSMMUv3 in Guest.
> > >>>
> > >>> Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> > >>> Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
> > >>>
> > >>> And this is my Qemu cmd line,
> > >>>
> > >>> ./qemu-system-aarch64
> > >>> -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host
> \
> > >>> -kernel Image \
> > >>> -drive if=none,file=ubuntu,id=fs \
> > >>> -device virtio-blk-device,drive=fs \
> > >>> -device vfio-pci,host=0000:01:10.1 \
> > >>> -bios QEMU_EFI.fd \
> > >>> -net none \
> > >>> -m 4G \
> > >>> -nographic -D -d -enable-kvm \
> > >>> -append "console=ttyAMA0 root=/dev/vda rw acpi=force"
> > >>>
> > >>> The basic ping from Guest works fine,
> > >>> root@ubuntu:~# ping 10.202.225.185
> > >>> PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
> > >>> 64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
> > >>> 64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
> > >>> ...
> > >>>
> > >>> But if I increase ping packet size,
> > >>>
> > >>> root@ubuntu:~# ping -s 1024 10.202.225.185
> > >>> PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
> > >>> 1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
> > >>> 1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
> > >>> From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
> > >>> From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
> > >>> From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
> > >>> From 10.202.225.169 icmp_seq=69 Destination Host Unreachable
> > >>>
> > >>> And from Host kernel I get,
> > >>> [  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets
> detected
> > >>> [  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
> detected
> > >>> [  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
> detected
> > >>> [  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets
> detected
> > >>> [  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
> detected
> > >>> [  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets
> detected
> > >>>
> > >>> Also noted that iperf cannot work as it fails to establish the connection
> > with
> > >> iperf
> > >>> server.
> > >>>
> > >>> Please find attached the trace logs(vfio*, smmuv3*) from Qemu for your
> > >> reference.
> > >>> I haven't debugged this further yet and thought of checking with you if
> this
> > is
> > >>> something you have seen already or not. Or maybe I am missing
> something
> > >> here?
> > >>
> > >> Please can you try to edit and modify hw/vfio/common.c, function
> > >> vfio_iommu_unmap_notify
> > >>
> > >>
> > >> /*
> > >>     if (size <= 0x10000) {
> > >>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
> > >>         ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
> > >>         ustruct.info.addr_info.flags =
> > IOMMU_INV_ADDR_FLAGS_ARCHID;
> > >>         if (iotlb->leaf) {
> > >>             ustruct.info.addr_info.flags |=
> > >> IOMMU_INV_ADDR_FLAGS_LEAF;
> > >>         }
> > >>         ustruct.info.addr_info.archid = iotlb->arch_id;
> > >>         ustruct.info.addr_info.addr = start;
> > >>         ustruct.info.addr_info.granule_size = size;
> > >>         ustruct.info.addr_info.nb_granules = 1;
> > >>         trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
> > >>                                         iotlb->leaf);
> > >>     } else {
> > >> */
> > >>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
> > >>         ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
> > >>         ustruct.info.pasid_info.archid = iotlb->arch_id;
> > >>         ustruct.info.pasid_info.flags =
> > IOMMU_INV_PASID_FLAGS_ARCHID;
> > >>         trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
> > >> //    }
> > >>
> > >> This modification leads to invalidate the whole asid each time we get a
> > >> guest TLBI instead of invalidating the single IOVA (TLBI). On my end, I
> > >> saw this was the cause of such kind of issues. Please let me know if it
> > >> fixes your perf issues
> > >
> > > Yes, this seems to fix the issue.
> > >
> > > root@ubuntu:~# iperf -c 10.202.225.185
> > > ------------------------------------------------------------
> > > Client connecting to 10.202.225.185, TCP port 5001
> > > TCP window size: 85.0 KByte (default)
> > > ------------------------------------------------------------
> > > [  3] local 10.202.225.169 port 47996 connected with 10.202.225.185 port
> > 5001
> > > [ ID] Interval       Transfer     Bandwidth
> > > [  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec
> > > root@ubuntu:~#
> > >
> > > But the performance seems to be very poor as this is a 10Gbps interface(Of
> > course
> > > invalidating the whole asid may not be very helpful). It is interesting that
> why
> > the
> > > single iova invalidation is not working.
> > >
> > >  and then we may discuss further about the test
> > >> configuration.
> > >
> > > Sure. Please let me know.
> >
> > I reported that issue earlier on the ML. I have not been able to find
> > any integration issue in the kernel/qemu code but maybe I am too blind
> > now as I wrote it ;-) When I get a guest stage1 TLBI I cascade it down
> > to the physical IOMMU. I also pass the LEAF flag.
> 
> Ok.
> 
> > As you are an expert of the SMMUv3 PMU, if your implementation has any
> > and you have cycles to look at this, it would be helpful to run it and
> > see if something weird gets highlighted.
> 
> :). Sure. I will give it a try and report back if anything suspicious.

I just noted that CMDQ_OP_TLBI_NH_VA is missing the vmid field, which seems
to be the cause of the single IOVA TLBI not working properly.

I had this fix in arm-smmu-v3.c,

@@ -947,6 +947,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
		break;
	case CMDQ_OP_TLBI_NH_VA:
+		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
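
For anyone following the encoding: per the SMMUv3 spec, CMD_TLBI_NH_VA
carries the VMID in cmd[0] bits [47:32] and the ASID in bits [63:48].
Presumably, with the VMID left at zero the command only targeted VMID 0,
so the guest's stage-1 TLB entries (tagged with the guest domain's VMID)
were never actually invalidated. A standalone sketch of the corrected
encoding, with illustrative macro names rather than the driver's:

    #include <stdint.h>

    #define TLBI_NH_VA_OPCODE   0x12ULL                    /* cmd[0] bits [7:0]   */
    #define TLBI_0_VMID(vmid)   (((uint64_t)(vmid)) << 32) /* cmd[0] bits [47:32] */
    #define TLBI_0_ASID(asid)   (((uint64_t)(asid)) << 48) /* cmd[0] bits [63:48] */
    #define TLBI_1_LEAF         (1ULL << 0)                /* cmd[1] bit 0        */
    #define TLBI_1_VA_MASK      (~0xfffULL)                /* cmd[1] bits [63:12] */

    /* Build the two 64-bit command words for a stage-1 VA invalidation
     * scoped to a specific guest (VMID) and address space (ASID). */
    static void build_tlbi_nh_va(uint64_t cmd[2], uint16_t vmid,
                                 uint16_t asid, uint64_t va, int leaf)
    {
        cmd[0] = TLBI_NH_VA_OPCODE
               | TLBI_0_VMID(vmid)  /* the missing piece: scope to the guest VMID */
               | TLBI_0_ASID(asid);
        cmd[1] = (leaf ? TLBI_1_LEAF : 0) | (va & TLBI_1_VA_MASK);
    }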


With this, your original qemu branch is working. 

root@ubuntu:~# iperf -c 10.202.225.185
------------------------------------------------------------
Client connecting to 10.202.225.185, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.202.225.169 port 44894 connected with 10.202.225.185 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.21 GBytes  2.76 Gbits/sec

Could you please check this...

I also have a rebase of your patches on top of 5.4-rc5. This has some
optimizations from Will, such as batched TLBI invalidation. Please find it here:

https://github.com/hisilicon/kernel-dev/tree/private-vSMMUv3-v9-v5.4-rc5

This gives me a better performance with iperf,

root@ubuntu:~# iperf -c 10.202.225.185
------------------------------------------------------------
Client connecting to 10.202.225.185, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.202.225.169 port 55450 connected with 10.202.225.185 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.91 GBytes  4.22 Gbits/sec
root@ubuntu:~#

If possible, please check this branch as well.

Thanks,
Shameer

> Thanks,
> Shameer
> 
> 
> > Thanks
> >
> > Eric
> > >
> > > Cheers,
> > > Shameer
> > >
> > >> Thanks
> > >>
> > >> Eric
> > >>
> > >>
> > >>
> > >>>
> > >>> Please let me know.
> > >>>
> > >>> Thanks,
> > >>> Shameer
> > >>>
> > >>>> Best Regards
> > >>>>
> > >>>> Eric
> > >>>>
> > >>>> This series can be found at:
> > >>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> > >>>>
> > >>>> It series includes Tina's patch steming from
> > >>>> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> > >>>> specific irq" plus patches originally contributed by Yi.
> > >>>>
> > >>>> History:
> > >>>>
> > >>>> v8 -> v9:
> > >>>> - introduce specific irq framework
> > >>>> - single fault region
> > >>>> - iommu_unregister_device_fault_handler failure case not handled
> > >>>>   yet.
> > >>>>
> > >>>> v7 -> v8:
> > >>>> - rebase on top of v5.2-rc1 and especially
> > >>>>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain
> pointer
> > >>>> - dynamic alloc of s1_cfg/s2_cfg
> > >>>> - __arm_smmu_tlb_inv_asid/s1_range_nosync
> > >>>> - check there is no HW MSI regions
> > >>>> - asid invalidation using pasid extended struct (change in the uapi)
> > >>>> - add s1_live/s2_live checks
> > >>>> - move check about support of nested stages in domain finalise
> > >>>> - fixes in error reporting according to the discussion with Robin
> > >>>> - reordered the patches to have first iommu/smmuv3 patches and then
> > >>>>   VFIO patches
> > >>>>
> > >>>> v6 -> v7:
> > >>>> - removed device handle from bind/unbind_guest_msi
> > >>>> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
> > >>>>   enforcement"
> > >>>> - added few uapi comments as suggested by Jean, Jacop and Alex
> > >>>>
> > >>>> v5 -> v6:
> > >>>> - Fix compilation issue when CONFIG_IOMMU_API is unset
> > >>>>
> > >>>> v4 -> v5:
> > >>>> - fix bug reported by Vincent: fault handler unregistration now happens
> in
> > >>>>   vfio_pci_release
> > >>>> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
> > >>>>   uapi changes suggested by Kean-Philippe (except fetch_addr)
> > >>>> - iommu: introduce device fault report API: removed the PRI part.
> > >>>> - see individual logs for more details
> > >>>> - reset the ste abort flag on detach
> > >>>>
> > >>>> v3 -> v4:
> > >>>> - took into account Alex, jean-Philippe and Robin's comments on v3
> > >>>> - rework of the smmuv3 driver integration
> > >>>> - add tear down ops for msi binding and PASID table binding
> > >>>> - fix S1 fault propagation
> > >>>> - put fault reporting patches at the beginning of the series following
> > >>>>   Jean-Philippe's request
> > >>>> - update of the cache invalidate and fault API uapis
> > >>>> - VFIO fault reporting rework with 2 separate regions and one
> mmappable
> > >>>>   segment for the fault queue
> > >>>> - moved to PATCH
> > >>>>
> > >>>> v2 -> v3:
> > >>>> - When registering the S1 MSI binding we now store the device handle.
> > This
> > >>>>   addresses Robin's comment about discimination of devices beonging
> > to
> > >>>>   different S1 groups and using different physical MSI doorbells.
> > >>>> - Change the fault reporting API: use
> VFIO_PCI_DMA_FAULT_IRQ_INDEX
> > to
> > >>>>   set the eventfd and expose the faults through an mmappable fault
> > region
> > >>>>
> > >>>> v1 -> v2:
> > >>>> - Added the fault reporting capability
> > >>>> - asid properly passed on invalidation (fix assignment of multiple
> > >>>>   devices)
> > >>>> - see individual change logs for more info
> > >>>>
> > >>>>
> > >>>> Eric Auger (8):
> > >>>>   vfio: VFIO_IOMMU_SET_MSI_BINDING
> > >>>>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
> > >>>>   vfio/pci: Register an iommu fault handler
> > >>>>   vfio/pci: Allow to mmap the fault queue
> > >>>>   vfio: Add new IRQ for DMA fault reporting
> > >>>>   vfio/pci: Add framework for custom interrupt indices
> > >>>>   vfio/pci: Register and allow DMA FAULT IRQ signaling
> > >>>>   vfio: Document nested stage control
> > >>>>
> > >>>> Liu, Yi L (2):
> > >>>>   vfio: VFIO_IOMMU_SET_PASID_TABLE
> > >>>>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
> > >>>>
> > >>>> Tina Zhang (1):
> > >>>>   vfio: Use capability chains to handle device specific irq
> > >>>>
> > >>>>  Documentation/vfio.txt              |  77 ++++++++
> > >>>>  drivers/vfio/pci/vfio_pci.c         | 283
> > >> ++++++++++++++++++++++++++--
> > >>>>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
> > >>>>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
> > >>>>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
> > >>>>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
> > >>>>  include/uapi/linux/vfio.h           | 109 ++++++++++-
> > >>>>  7 files changed, 747 insertions(+), 19 deletions(-)
> > >>>>
> > >>>> --
> > >>>> 2.20.1
> > >>>>
> > >>>> _______________________________________________
> > >>>> kvmarm mailing list
> > >>>> kvmarm@lists.cs.columbia.edu
> > >>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> > >

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 17:56         ` Shameerali Kolothum Thodi
@ 2019-11-12 20:34           ` Auger Eric
  2019-11-13 16:24             ` Shameerali Kolothum Thodi
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2019-11-12 20:34 UTC (permalink / raw)
  To: Shameerali Kolothum Thodi, eric.auger.pro, iommu, linux-kernel,
	kvm, kvmarm, joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Shameer,

On 11/12/19 6:56 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
> 
>> -----Original Message-----
>> From: Shameerali Kolothum Thodi
>> Sent: 12 November 2019 14:21
>> To: 'Auger Eric' <eric.auger@redhat.com>; eric.auger.pro@gmail.com;
>> iommu@lists.linux-foundation.org; linux-kernel@vger.kernel.org;
>> kvm@vger.kernel.org; kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
>> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
>> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
>> robin.murphy@arm.com
>> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
>> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
>> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
>> Subject: RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
>>
> [...]
>>>>>> I am trying to get this running on one of our platform that has smmuv3
>> dual
>>>>>> stage support. I am seeing some issues with this when an ixgbe vf dev is
>>>>>> made pass-through and is behind a vSMMUv3 in Guest.
>>>>>>
>>>>>> Kernel used : https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>>>>>> Qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
>>>>>>
>>>>>> And this is my Qemu cmd line,
>>>>>>
>>>>>> ./qemu-system-aarch64
>>>>>> -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 -cpu host
>> \
>>>>>> -kernel Image \
>>>>>> -drive if=none,file=ubuntu,id=fs \
>>>>>> -device virtio-blk-device,drive=fs \
>>>>>> -device vfio-pci,host=0000:01:10.1 \
>>>>>> -bios QEMU_EFI.fd \
>>>>>> -net none \
>>>>>> -m 4G \
>>>>>> -nographic -D -d -enable-kvm \
>>>>>> -append "console=ttyAMA0 root=/dev/vda rw acpi=force"
>>>>>>
>>>>>> The basic ping from Guest works fine,
>>>>>> root@ubuntu:~# ping 10.202.225.185
>>>>>> PING 10.202.225.185 (10.202.225.185) 56(84) bytes of data.
>>>>>> 64 bytes from 10.202.225.185: icmp_seq=2 ttl=64 time=0.207 ms
>>>>>> 64 bytes from 10.202.225.185: icmp_seq=3 ttl=64 time=0.203 ms
>>>>>> ...
>>>>>>
>>>>>> But if I increase ping packet size,
>>>>>>
>>>>>> root@ubuntu:~# ping -s 1024 10.202.225.185
>>>>>> PING 10.202.225.185 (10.202.225.185) 1024(1052) bytes of data.
>>>>>> 1032 bytes from 10.202.225.185: icmp_seq=22 ttl=64 time=0.292 ms
>>>>>> 1032 bytes from 10.202.225.185: icmp_seq=23 ttl=64 time=0.207 ms
>>>>>> From 10.202.225.169 icmp_seq=66 Destination Host Unreachable
>>>>>> From 10.202.225.169 icmp_seq=67 Destination Host Unreachable
>>>>>> From 10.202.225.169 icmp_seq=68 Destination Host Unreachable
>>>>>> From 10.202.225.169 icmp_seq=69 Destination Host Unreachable
>>>>>>
>>>>>> And from Host kernel I get,
>>>>>> [  819.970742] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets
>> detected
>>>>>> [  824.002707] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
>> detected
>>>>>> [  828.034683] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
>> detected
>>>>>> [  830.050673] ixgbe 0000:01:00.1 enp1s0f1: 4 Spoofed packets
>> detected
>>>>>> [  832.066659] ixgbe 0000:01:00.1 enp1s0f1: 1 Spoofed packets
>> detected
>>>>>> [  834.082640] ixgbe 0000:01:00.1 enp1s0f1: 3 Spoofed packets
>> detected
>>>>>>
>>>>>> Also noted that iperf cannot work as it fails to establish the connection
>>> with
>>>>> iperf
>>>>>> server.
>>>>>>
>>>>>> Please find attached the trace logs(vfio*, smmuv3*) from Qemu for your
>>>>> reference.
>>>>>> I haven't debugged this further yet and thought of checking with you if
>> this
>>> is
>>>>>> something you have seen already or not. Or maybe I am missing
>> something
>>>>> here?
>>>>>
>>>>> Please can you try to edit and modify hw/vfio/common.c, function
>>>>> vfio_iommu_unmap_notify
>>>>>
>>>>>
>>>>> /*
>>>>>     if (size <= 0x10000) {
>>>>>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>>>>>         ustruct.info.granularity = IOMMU_INV_GRANU_ADDR;
>>>>>         ustruct.info.addr_info.flags =
>>> IOMMU_INV_ADDR_FLAGS_ARCHID;
>>>>>         if (iotlb->leaf) {
>>>>>             ustruct.info.addr_info.flags |=
>>>>> IOMMU_INV_ADDR_FLAGS_LEAF;
>>>>>         }
>>>>>         ustruct.info.addr_info.archid = iotlb->arch_id;
>>>>>         ustruct.info.addr_info.addr = start;
>>>>>         ustruct.info.addr_info.granule_size = size;
>>>>>         ustruct.info.addr_info.nb_granules = 1;
>>>>>         trace_vfio_iommu_addr_inv_iotlb(iotlb->arch_id, start, size, 1,
>>>>>                                         iotlb->leaf);
>>>>>     } else {
>>>>> */
>>>>>         ustruct.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
>>>>>         ustruct.info.granularity = IOMMU_INV_GRANU_PASID;
>>>>>         ustruct.info.pasid_info.archid = iotlb->arch_id;
>>>>>         ustruct.info.pasid_info.flags =
>>> IOMMU_INV_PASID_FLAGS_ARCHID;
>>>>>         trace_vfio_iommu_asid_inv_iotlb(iotlb->arch_id);
>>>>> //    }
>>>>>
>>>>> This modification leads to invalidate the whole asid each time we get a
>>>>> guest TLBI instead of invalidating the single IOVA (TLBI). On my end, I
>>>>> saw this was the cause of such kind of issues. Please let me know if it
>>>>> fixes your perf issues
>>>>
>>>> Yes, this seems to fix the issue.
>>>>
>>>> root@ubuntu:~# iperf -c 10.202.225.185
>>>> ------------------------------------------------------------
>>>> Client connecting to 10.202.225.185, TCP port 5001
>>>> TCP window size: 85.0 KByte (default)
>>>> ------------------------------------------------------------
>>>> [  3] local 10.202.225.169 port 47996 connected with 10.202.225.185 port
>>> 5001
>>>> [ ID] Interval       Transfer     Bandwidth
>>>> [  3]  0.0-10.0 sec  2.27 GBytes  1.95 Gbits/sec
>>>> root@ubuntu:~#
>>>>
>>>> But the performance seems to be very poor as this is a 10Gbps interface(Of
>>> course
>>>> invalidating the whole asid may not be very helpful). It is interesting that
>> why
>>> the
>>>> single iova invalidation is not working.
>>>>
>>>>  and then we may discuss further about the test
>>>>> configuration.
>>>>
>>>> Sure. Please let me know.
>>>
>>> I reported that issue earlier on the ML. I have not been able to find
>>> any integration issue in the kernel/qemu code but maybe I am too blind
>>> now as I wrote it ;-) When I get a guest stage1 TLBI I cascade it down
>>> to the physical IOMMU. I also pass the LEAF flag.
>>
>> Ok.
>>
>>> As you are an expert of the SMMUv3 PMU, if your implementation has any
>>> and you have cycles to look at this, it would be helpful to run it and
>>> see if something weird gets highlighted.
>>
>> :). Sure. I will give it a try and report back if anything suspicious.
> 
> I just noted that CMDQ_OP_TLBI_NH_VA is missing the vmid filed which seems
> to be the cause for single IOVA TLBI not working properly.
> 
> I had this fix in arm-smmuv3.c,
> 
> @@ -947,6 +947,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
> 		break;
> 	case CMDQ_OP_TLBI_NH_VA:
> +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
Damn, I did not see that! That's it. ASID invalidation fills this field
indeed. You may post an independent patch for that.
> 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
> 
> 
> With this, your original qemu branch is working. 
> 
> root@ubuntu:~# iperf -c 10.202.225.185
> ------------------------------------------------------------
> Client connecting to 10.202.225.185, TCP port 5001 TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.202.225.169 port 44894 connected with 10.202.225.185 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  3.21 GBytes  2.76 Gbits/sec
> 
> Could you please check this...
> 
> I also have a rebase of your patches on top of 5.4-rc5. This has some optimizations
> from Will, such as batched TLBI invalidation. Please find it here:
> 
> https://github.com/hisilicon/kernel-dev/tree/private-vSMMUv3-v9-v5.4-rc5
> 
> This gives me a better performance with iperf,
> 
> root@ubuntu:~# iperf -c 10.202.225.185
> ------------------------------------------------------------
> Client connecting to 10.202.225.185, TCP port 5001
> TCP window size: 85.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.202.225.169 port 55450 connected with 10.202.225.185 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  4.91 GBytes  4.22 Gbits/sec
> root@ubuntu:~#
> 
> If possible please check this branch as well.

To be honest I don't really know what to do with this work. Despite the
efforts, it has suffered from a lack of traction in the community. My
last attempt to explain the use cases, upon Will's request at Plumbers,
has not received any comment (https://lkml.org/lkml/2019/9/20/104).

I think I will post a rebased version with your fix, so as to get a
clean snapshot. If you think this work is useful for your projects,
please say so on the ML.

Thank you again!

Eric
> 
> Thanks,
> Shameer
> 
>> Thanks,
>> Shameer
>>
>>
>>> Thanks
>>>
>>> Eric
>>>>
>>>> Cheers,
>>>> Shameer
>>>>
>>>>> Thanks
>>>>>
>>>>> Eric
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> Please let me know.
>>>>>>
>>>>>> Thanks,
>>>>>> Shameer
>>>>>>
>>>>>>> Best Regards
>>>>>>>
>>>>>>> Eric
>>>>>>>
>>>>>>> This series can be found at:
>>>>>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>>>>>>>
>>>>>>> This series includes Tina's patch stemming from
>>>>>>> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
>>>>>>> specific irq" plus patches originally contributed by Yi.
>>>>>>>
>>>>>>> History:
>>>>>>>
>>>>>>> v8 -> v9:
>>>>>>> - introduce specific irq framework
>>>>>>> - single fault region
>>>>>>> - iommu_unregister_device_fault_handler failure case not handled
>>>>>>>   yet.
>>>>>>>
>>>>>>> v7 -> v8:
>>>>>>> - rebase on top of v5.2-rc1 and especially
>>>>>>>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
>>>>>>> - dynamic alloc of s1_cfg/s2_cfg
>>>>>>> - __arm_smmu_tlb_inv_asid/s1_range_nosync
>>>>>>> - check there is no HW MSI regions
>>>>>>> - asid invalidation using pasid extended struct (change in the uapi)
>>>>>>> - add s1_live/s2_live checks
>>>>>>> - move check about support of nested stages in domain finalise
>>>>>>> - fixes in error reporting according to the discussion with Robin
>>>>>>> - reordered the patches to have first iommu/smmuv3 patches and then
>>>>>>>   VFIO patches
>>>>>>>
>>>>>>> v6 -> v7:
>>>>>>> - removed device handle from bind/unbind_guest_msi
>>>>>>> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
>>>>>>>   enforcement"
>>>>>>> - added a few uapi comments as suggested by Jean, Jacob and Alex
>>>>>>>
>>>>>>> v5 -> v6:
>>>>>>> - Fix compilation issue when CONFIG_IOMMU_API is unset
>>>>>>>
>>>>>>> v4 -> v5:
>>>>>>> - fix bug reported by Vincent: fault handler unregistration now happens in
>>>>>>>   vfio_pci_release
>>>>>>> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
>>>>>>>   uapi changes suggested by Jean-Philippe (except fetch_addr)
>>>>>>> - iommu: introduce device fault report API: removed the PRI part.
>>>>>>> - see individual logs for more details
>>>>>>> - reset the ste abort flag on detach
>>>>>>>
>>>>>>> v3 -> v4:
>>>>>>> - took into account Alex, Jean-Philippe and Robin's comments on v3
>>>>>>> - rework of the smmuv3 driver integration
>>>>>>> - add tear down ops for msi binding and PASID table binding
>>>>>>> - fix S1 fault propagation
>>>>>>> - put fault reporting patches at the beginning of the series following
>>>>>>>   Jean-Philippe's request
>>>>>>> - update of the cache invalidate and fault API uapis
>>>>>>> - VFIO fault reporting rework with 2 separate regions and one mmappable
>>>>>>>   segment for the fault queue
>>>>>>> - moved to PATCH
>>>>>>>
>>>>>>> v2 -> v3:
>>>>>>> - When registering the S1 MSI binding we now store the device handle. This
>>>>>>>   addresses Robin's comment about discrimination of devices belonging to
>>>>>>>   different S1 groups and using different physical MSI doorbells.
>>>>>>> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
>>>>>>>   set the eventfd and expose the faults through an mmappable fault region
>>>>>>>
>>>>>>> v1 -> v2:
>>>>>>> - Added the fault reporting capability
>>>>>>> - asid properly passed on invalidation (fix assignment of multiple
>>>>>>>   devices)
>>>>>>> - see individual change logs for more info
>>>>>>>
>>>>>>>
>>>>>>> Eric Auger (8):
>>>>>>>   vfio: VFIO_IOMMU_SET_MSI_BINDING
>>>>>>>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
>>>>>>>   vfio/pci: Register an iommu fault handler
>>>>>>>   vfio/pci: Allow to mmap the fault queue
>>>>>>>   vfio: Add new IRQ for DMA fault reporting
>>>>>>>   vfio/pci: Add framework for custom interrupt indices
>>>>>>>   vfio/pci: Register and allow DMA FAULT IRQ signaling
>>>>>>>   vfio: Document nested stage control
>>>>>>>
>>>>>>> Liu, Yi L (2):
>>>>>>>   vfio: VFIO_IOMMU_SET_PASID_TABLE
>>>>>>>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>>>>>>
>>>>>>> Tina Zhang (1):
>>>>>>>   vfio: Use capability chains to handle device specific irq
>>>>>>>
>>>>>>>  Documentation/vfio.txt              |  77 ++++++++
>>>>>>>  drivers/vfio/pci/vfio_pci.c         | 283 ++++++++++++++++++++++++++--
>>>>>>>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
>>>>>>>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
>>>>>>>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
>>>>>>>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
>>>>>>>  include/uapi/linux/vfio.h           | 109 ++++++++++-
>>>>>>>  7 files changed, 747 insertions(+), 19 deletions(-)
>>>>>>>
>>>>>>> --
>>>>>>> 2.20.1
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> kvmarm mailing list
>>>>>>> kvmarm@lists.cs.columbia.edu
>>>>>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
>>>>
> 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* RE: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-12 20:34           ` Auger Eric
@ 2019-11-13 16:24             ` Shameerali Kolothum Thodi
  0 siblings, 0 replies; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2019-11-13 16:24 UTC (permalink / raw)
  To: Auger Eric, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, Linuxarm,
	tina.zhang, xuwei (O)

Hi Eric,

> -----Original Message-----
> From: Auger Eric [mailto:eric.auger@redhat.com]
> Sent: 12 November 2019 20:35
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>;
> eric.auger.pro@gmail.com; iommu@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org; kvm@vger.kernel.org;
> kvmarm@lists.cs.columbia.edu; joro@8bytes.org;
> alex.williamson@redhat.com; jacob.jun.pan@linux.intel.com;
> yi.l.liu@intel.com; jean-philippe.brucker@arm.com; will.deacon@arm.com;
> robin.murphy@arm.com
> Cc: kevin.tian@intel.com; vincent.stehle@arm.com; ashok.raj@intel.com;
> marc.zyngier@arm.com; tina.zhang@intel.com; Linuxarm
> <linuxarm@huawei.com>; xuwei (O) <xuwei5@huawei.com>
> Subject: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
> 
> Hi Shameer,
> 

[..]

> >
> > I just noted that CMDQ_OP_TLBI_NH_VA is missing the vmid field, which
> > seems to be the cause of the single IOVA TLBI not working properly.
> >
> > I had this fix in arm-smmuv3.c,
> >
> > @@ -947,6 +947,7 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
> > 		cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
> > 		break;
> > 	case CMDQ_OP_TLBI_NH_VA:
> > +		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
> Damn, I did not see that! That's it. ASID invalidation fills this field
> indeed. You may post an independent patch for that.

Sure. Just did that.
" iommu/arm-smmu-v3: Populate VMID field for CMDQ_OP_TLBI_NH_VA"

> > 		cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
> > 		cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
> > 		cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK;
> >
> >
> > With this, your original qemu branch is working.
> >
> > root@ubuntu:~# iperf -c 10.202.225.185
> > ------------------------------------------------------------
> > Client connecting to 10.202.225.185, TCP port 5001
> > TCP window size: 85.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.202.225.169 port 44894 connected with 10.202.225.185 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  3.21 GBytes  2.76 Gbits/sec
> >
> > Could you please check this...
> >
> > I also have a rebase of your patches on top of 5.4-rc5. This has some
> > optimizations from Will, such as batched TLBI invalidation. Please find it here:
> >
> > https://github.com/hisilicon/kernel-dev/tree/private-vSMMUv3-v9-v5.4-rc5
> >
> > This gives me a better performance with iperf,
> >
> > root@ubuntu:~# iperf -c 10.202.225.185
> > ------------------------------------------------------------
> > Client connecting to 10.202.225.185, TCP port 5001
> > TCP window size: 85.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.202.225.169 port 55450 connected with 10.202.225.185 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  4.91 GBytes  4.22 Gbits/sec
> > root@ubuntu:~#
> >
> > If possible please check this branch as well.
> 
> To be honest I don't really know what to do with this work. Despite the
> efforts, it has suffered from a lack of traction in the community. My
> last attempt to explain the use cases, upon Will's request at Plumbers,
> has not received any comment (https://lkml.org/lkml/2019/9/20/104).
> 
> I think I will post a rebased version with your fix, so as to get a
> clean snapshot.

Thanks. That makes sense.

> If you think this work is useful for your projects,
> please say so on the ML.

Right. While the SVA use case is definitely the one we are most interested
in, I will check within our team the priority of use case 1 (native drivers
in the guest) that you mentioned in the above link.

Cheers,
Shameer

> Thank you again!
> 
> Eric
> >
> > Thanks,
> > Shameer
> >
> >> Thanks,
> >> Shameer
> >>
> >>
> >>> Thanks
> >>>
> >>> Eric
> >>>>
> >>>> Cheers,
> >>>> Shameer
> >>>>
> >>>>> Thanks
> >>>>>
> >>>>> Eric
> >>>>>
> >>>>>
> >>>>>
> >>>>>>
> >>>>>> Please let me know.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Shameer
> >>>>>>
> >>>>>>> Best Regards
> >>>>>>>
> >>>>>>> Eric
> >>>>>>>
> >>>>>>> This series can be found at:
> >>>>>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> >>>>>>>
> >>>>>>> This series includes Tina's patch stemming from
> >>>>>>> [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
> >>>>>>> specific irq" plus patches originally contributed by Yi.
> >>>>>>>
> >>>>>>> History:
> >>>>>>>
> >>>>>>> v8 -> v9:
> >>>>>>> - introduce specific irq framework
> >>>>>>> - single fault region
> >>>>>>> - iommu_unregister_device_fault_handler failure case not handled
> >>>>>>>   yet.
> >>>>>>>
> >>>>>>> v7 -> v8:
> >>>>>>> - rebase on top of v5.2-rc1 and especially
> >>>>>>>   8be39a1a04c1  iommu/arm-smmu-v3: Add a master->domain pointer
> >>>>>>> - dynamic alloc of s1_cfg/s2_cfg
> >>>>>>> - __arm_smmu_tlb_inv_asid/s1_range_nosync
> >>>>>>> - check there is no HW MSI regions
> >>>>>>> - asid invalidation using pasid extended struct (change in the uapi)
> >>>>>>> - add s1_live/s2_live checks
> >>>>>>> - move check about support of nested stages in domain finalise
> >>>>>>> - fixes in error reporting according to the discussion with Robin
> >>>>>>> - reordered the patches to have first iommu/smmuv3 patches and then
> >>>>>>>   VFIO patches
> >>>>>>>
> >>>>>>> v6 -> v7:
> >>>>>>> - removed device handle from bind/unbind_guest_msi
> >>>>>>> - added "iommu/smmuv3: Nested mode single MSI doorbell per domain
> >>>>>>>   enforcement"
> >>>>>>> - added a few uapi comments as suggested by Jean, Jacob and Alex
> >>>>>>>
> >>>>>>> v5 -> v6:
> >>>>>>> - Fix compilation issue when CONFIG_IOMMU_API is unset
> >>>>>>>
> >>>>>>> v4 -> v5:
> >>>>>>> - fix bug reported by Vincent: fault handler unregistration now happens in
> >>>>>>>   vfio_pci_release
> >>>>>>> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
> >>>>>>>   uapi changes suggested by Jean-Philippe (except fetch_addr)
> >>>>>>> - iommu: introduce device fault report API: removed the PRI part.
> >>>>>>> - see individual logs for more details
> >>>>>>> - reset the ste abort flag on detach
> >>>>>>>
> >>>>>>> v3 -> v4:
> >>>>>>> - took into account Alex, Jean-Philippe and Robin's comments on v3
> >>>>>>> - rework of the smmuv3 driver integration
> >>>>>>> - add tear down ops for msi binding and PASID table binding
> >>>>>>> - fix S1 fault propagation
> >>>>>>> - put fault reporting patches at the beginning of the series following
> >>>>>>>   Jean-Philippe's request
> >>>>>>> - update of the cache invalidate and fault API uapis
> >>>>>>> - VFIO fault reporting rework with 2 separate regions and one mmappable
> >>>>>>>   segment for the fault queue
> >>>>>>> - moved to PATCH
> >>>>>>>
> >>>>>>> v2 -> v3:
> >>>>>>> - When registering the S1 MSI binding we now store the device handle. This
> >>>>>>>   addresses Robin's comment about discrimination of devices belonging to
> >>>>>>>   different S1 groups and using different physical MSI doorbells.
> >>>>>>> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
> >>>>>>>   set the eventfd and expose the faults through an mmappable fault region
> >>>>>>>
> >>>>>>> v1 -> v2:
> >>>>>>> - Added the fault reporting capability
> >>>>>>> - asid properly passed on invalidation (fix assignment of multiple
> >>>>>>>   devices)
> >>>>>>> - see individual change logs for more info
> >>>>>>>
> >>>>>>>
> >>>>>>> Eric Auger (8):
> >>>>>>>   vfio: VFIO_IOMMU_SET_MSI_BINDING
> >>>>>>>   vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
> >>>>>>>   vfio/pci: Register an iommu fault handler
> >>>>>>>   vfio/pci: Allow to mmap the fault queue
> >>>>>>>   vfio: Add new IRQ for DMA fault reporting
> >>>>>>>   vfio/pci: Add framework for custom interrupt indices
> >>>>>>>   vfio/pci: Register and allow DMA FAULT IRQ signaling
> >>>>>>>   vfio: Document nested stage control
> >>>>>>>
> >>>>>>> Liu, Yi L (2):
> >>>>>>>   vfio: VFIO_IOMMU_SET_PASID_TABLE
> >>>>>>>   vfio: VFIO_IOMMU_CACHE_INVALIDATE
> >>>>>>>
> >>>>>>> Tina Zhang (1):
> >>>>>>>   vfio: Use capability chains to handle device specific irq
> >>>>>>>
> >>>>>>>  Documentation/vfio.txt              |  77 ++++++++
> >>>>>>>  drivers/vfio/pci/vfio_pci.c         | 283 ++++++++++++++++++++++++++--
> >>>>>>>  drivers/vfio/pci/vfio_pci_intrs.c   |  62 ++++++
> >>>>>>>  drivers/vfio/pci/vfio_pci_private.h |  24 +++
> >>>>>>>  drivers/vfio/pci/vfio_pci_rdwr.c    |  45 +++++
> >>>>>>>  drivers/vfio/vfio_iommu_type1.c     | 166 ++++++++++++++++
> >>>>>>>  include/uapi/linux/vfio.h           | 109 ++++++++++-
> >>>>>>>  7 files changed, 747 insertions(+), 19 deletions(-)
> >>>>>>>
> >>>>>>> --
> >>>>>>> 2.20.1
> >>>>>>>
> >>>>>>> _______________________________________________
> >>>>>>> kvmarm mailing list
> >>>>>>> kvmarm@lists.cs.columbia.edu
> >>>>>>> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
> >>>>
> >

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
                   ` (12 preceding siblings ...)
  2019-11-12 11:08 ` Shameerali Kolothum Thodi
@ 2019-11-20  8:15 ` Tomasz Nowicki
  2019-11-20 10:18   ` Auger Eric
  13 siblings, 1 reply; 25+ messages in thread
From: Tomasz Nowicki @ 2019-11-20  8:15 UTC (permalink / raw)
  To: Eric Auger, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Hi Eric,

On 11.07.2019 15:56, Eric Auger wrote:
> This series brings the VFIO part of HW nested paging support
> in the SMMUv3.
> 
> The series depends on:
> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
> (https://www.spinics.net/lists/kernel/msg3187714.html)
> 
> 3 new IOCTLs are introduced that allow the userspace to
> 1) pass the guest stage 1 configuration
> 2) pass stage 1 MSI bindings
> 3) invalidate stage 1 related caches
> 
> They map onto the related new IOMMU API functions.
> 
> We introduce the capability to register specific interrupt
> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
> an eventfd to be signaled whenever a stage 1 related fault
> is detected at physical level. Also a specific region allows
> to expose the fault records to the user space.
> 
> Best Regards
> 
> Eric
> 
> This series can be found at:
> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9

I think you have already tested on ThunderX2, but as a formality, for 
the whole series:

Tested-by: Tomasz Nowicki <tnowicki@marvell.com>
qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
kernel: https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9 + 
Shameer's fix patch

In my test I assigned an Intel 82574L NIC and performed iperf tests.

Other folks from Marvell claimed this to be an important feature, so I asked
them to review and speak up on the mailing list.

Thanks,
Tomasz
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-20  8:15 ` Tomasz Nowicki
@ 2019-11-20 10:18   ` Auger Eric
  2020-03-03 12:57     ` zhangfei
  0 siblings, 1 reply; 25+ messages in thread
From: Auger Eric @ 2019-11-20 10:18 UTC (permalink / raw)
  To: Tomasz Nowicki, eric.auger.pro, iommu, linux-kernel, kvm, kvmarm,
	joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang

Hi Tomasz,

On 11/20/19 9:15 AM, Tomasz Nowicki wrote:
> Hi Eric,
> 
> On 11.07.2019 15:56, Eric Auger wrote:
>> This series brings the VFIO part of HW nested paging support
>> in the SMMUv3.
>>
>> The series depends on:
>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>> (https://www.spinics.net/lists/kernel/msg3187714.html)
>>
>> 3 new IOCTLs are introduced that allow the userspace to
>> 1) pass the guest stage 1 configuration
>> 2) pass stage 1 MSI bindings
>> 3) invalidate stage 1 related caches
>>
>> They map onto the related new IOMMU API functions.
>>
>> We introduce the capability to register specific interrupt
>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
>> an eventfd to be signaled whenever a stage 1 related fault
>> is detected at physical level. Also a specific region allows
>> to expose the fault records to the user space.
>>
>> Best Regards
>>
>> Eric
>>
>> This series can be found at:
>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
> 
> I think you have already tested on ThunderX2, but as a formality, for 
> the whole series:
> 
> Tested-by: Tomasz Nowicki <tnowicki@marvell.com>
> qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
> kernel: https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9 + 
> Shameer's fix patch
> 
> In my test I assigned an Intel 82574L NIC and performed iperf tests.

Thank you for your testing efforts.
> 
> Other folks from Marvell claimed this to be an important feature, so I asked
> them to review and speak up on the mailing list.

That's nice to read!  So it is time for me to rebase both the iommu
and vfio parts. I will submit something quickly. Then I would encourage
the review efforts to focus first on the iommu part.

Thanks

Eric
> 
> Thanks,
> Tomasz
> 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2019-11-20 10:18   ` Auger Eric
@ 2020-03-03 12:57     ` zhangfei
  2020-03-03 13:14       ` Auger Eric
  0 siblings, 1 reply; 25+ messages in thread
From: zhangfei @ 2020-03-03 12:57 UTC (permalink / raw)
  To: Auger Eric, Tomasz Nowicki, eric.auger.pro, iommu, linux-kernel,
	kvm, kvmarm, joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang,
	wangzhou1, Kenneth Lee

Hi, Eric

On 2019/11/20 6:18 PM, Auger Eric wrote:
>
>>> This series brings the VFIO part of HW nested paging support
>>> in the SMMUv3.
>>>
>>> The series depends on:
>>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>>> (https://www.spinics.net/lists/kernel/msg3187714.html)
>>>
>>> 3 new IOCTLs are introduced that allow the userspace to
>>> 1) pass the guest stage 1 configuration
>>> 2) pass stage 1 MSI bindings
>>> 3) invalidate stage 1 related caches
>>>
>>> They map onto the related new IOMMU API functions.
>>>
>>> We introduce the capability to register specific interrupt
>>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
>>> an eventfd to be signaled whenever a stage 1 related fault
>>> is detected at physical level. Also a specific region allows
>>> to expose the fault records to the user space.
>>>
>>> Best Regards
>>>
>>> Eric
>>>
>>> This series can be found at:
>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>> I think you have already tested on ThunderX2, but as a formality, for
>> the whole series:
>>
>> Tested-by: Tomasz Nowicki <tnowicki@marvell.com>
>> qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
>> kernel: https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9 +
>> Shameer's fix patch
>>
>> In my test I assigned an Intel 82574L NIC and performed iperf tests.
> Thank you for your testing efforts.
>> Other folks from Marvell claimed this to be an important feature, so I asked
>> them to review and speak up on the mailing list.
> That's nice to read!  So it is time for me to rebase both the iommu
> and vfio parts. I will submit something quickly. Then I would encourage
> the review efforts to focus first on the iommu part.
>
>
The vSVA feature is also very important to us; it would be great if vSVA
could be supported in the guest world.

We just submitted uacce for accelerators, which will support SVA on the
host, thanks to Jean's effort.

https://lkml.org/lkml/2020/2/11/54


However, supporting vSVA in the guest is also a key component for
accelerators.

Looking forward to this happening.


For any respin, I will be very happy to test.


Thanks




_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part)
  2020-03-03 12:57     ` zhangfei
@ 2020-03-03 13:14       ` Auger Eric
  0 siblings, 0 replies; 25+ messages in thread
From: Auger Eric @ 2020-03-03 13:14 UTC (permalink / raw)
  To: zhangfei, Tomasz Nowicki, eric.auger.pro, iommu, linux-kernel,
	kvm, kvmarm, joro, alex.williamson, jacob.jun.pan, yi.l.liu,
	jean-philippe.brucker, will.deacon, robin.murphy
  Cc: kevin.tian, vincent.stehle, ashok.raj, marc.zyngier, tina.zhang,
	wangzhou1, Kenneth Lee

Hi Zhangfei,

On 3/3/20 1:57 PM, zhangfei wrote:
> Hi, Eric
> 
> On 2019/11/20 6:18 PM, Auger Eric wrote:
>>
>>>> This series brings the VFIO part of HW nested paging support
>>>> in the SMMUv3.
>>>>
>>>> The series depends on:
>>>> [PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
>>>> (https://www.spinics.net/lists/kernel/msg3187714.html)
>>>>
>>>> 3 new IOCTLs are introduced that allow the userspace to
>>>> 1) pass the guest stage 1 configuration
>>>> 2) pass stage 1 MSI bindings
>>>> 3) invalidate stage 1 related caches
>>>>
>>>> They map onto the related new IOMMU API functions.
>>>>
>>>> We introduce the capability to register specific interrupt
>>>> indexes (see [1]). A new DMA_FAULT interrupt index allows to register
>>>> an eventfd to be signaled whenever a stage 1 related fault
>>>> is detected at physical level. Also a specific region allows
>>>> to expose the fault records to the user space.
>>>>
>>>> Best Regards
>>>>
>>>> Eric
>>>>
>>>> This series can be found at:
>>>> https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
>>> I think you have already tested on ThunderX2, but as a formality, for
>>> the whole series:
>>>
>>> Tested-by: Tomasz Nowicki <tnowicki@marvell.com>
>>> qemu: https://github.com/eauger/qemu/tree/v4.1.0-rc0-2stage-rfcv5
>>> kernel: https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9 +
>>> Shameer's fix patch
>>>
>>> In my test I assigned an Intel 82574L NIC and performed iperf tests.
>> Thank you for your testing efforts.
>>> Other folks from Marvell claimed this to be an important feature, so I asked
>>> them to review and speak up on the mailing list.
>> That's nice to read!  So it is time for me to rebase both the iommu
>> and vfio parts. I will submit something quickly. Then I would encourage
>> the review efforts to focus first on the iommu part.
>>
>>
> The vSVA feature is also very important to us; it would be great if vSVA
> could be supported in the guest world.
> 
> We just submitted uacce for accelerators, which will support SVA on the
> host, thanks to Jean's effort.
> 
> https://lkml.org/lkml/2020/2/11/54
> 
> 
> However, supporting vSVA in the guest is also a key component for
> accelerators.
> 
> Looking forward to this happening.
> 
> 
> For any respin, I will be very happy to test.

OK. Based on your interest, and Marvell's interest too, I will respin
both the iommu & vfio series.

Thanks

Eric
> 
> 
> Thanks
> 
> 
> 
> 

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2020-03-04 10:35 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-11 13:56 [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) Eric Auger
2019-07-11 13:56 ` [PATCH v9 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE Eric Auger
2019-07-11 13:56 ` [PATCH v9 02/11] vfio: VFIO_IOMMU_CACHE_INVALIDATE Eric Auger
2019-07-11 13:56 ` [PATCH v9 03/11] vfio: VFIO_IOMMU_SET_MSI_BINDING Eric Auger
2019-07-11 13:56 ` [PATCH v9 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type Eric Auger
2019-07-11 13:56 ` [PATCH v9 05/11] vfio/pci: Register an iommu fault handler Eric Auger
2019-07-11 13:56 ` [PATCH v9 06/11] vfio/pci: Allow to mmap the fault queue Eric Auger
2019-07-11 13:56 ` [PATCH v9 07/11] vfio: Use capability chains to handle device specific irq Eric Auger
2019-07-11 13:56 ` [PATCH v9 08/11] vfio: Add new IRQ for DMA fault reporting Eric Auger
2019-07-11 13:56 ` [PATCH v9 09/11] vfio/pci: Add framework for custom interrupt indices Eric Auger
2019-07-11 13:56 ` [PATCH v9 10/11] vfio/pci: Register and allow DMA FAULT IRQ signaling Eric Auger
2019-07-11 13:56 ` [PATCH v9 11/11] vfio: Document nested stage control Eric Auger
2019-07-12  7:38 ` [PATCH v9 00/11] SMMUv3 Nested Stage Setup (VFIO part) zhangfei.gao
2019-11-12 11:08 ` Shameerali Kolothum Thodi
2019-11-12 11:28   ` Auger Eric
2019-11-12 13:06     ` Shameerali Kolothum Thodi
2019-11-12 13:21       ` Auger Eric
2019-11-12 14:21         ` Shameerali Kolothum Thodi
2019-11-12 17:56         ` Shameerali Kolothum Thodi
2019-11-12 20:34           ` Auger Eric
2019-11-13 16:24             ` Shameerali Kolothum Thodi
2019-11-20  8:15 ` Tomasz Nowicki
2019-11-20 10:18   ` Auger Eric
2020-03-03 12:57     ` zhangfei
2020-03-03 13:14       ` Auger Eric

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).