* [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
@ 2023-08-25 10:11 Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 1/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return Niklas Schnelle
                   ` (8 more replies)
  0 siblings, 9 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

Hi All,

This patch series converts s390's PCI support from its platform-specific DMA
API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
The conversion itself is done in patches 3-4, with patch 1 providing the final
necessary IOMMU driver improvement to handle s390's special IOTLB flush
out-of-resource indication in virtualized environments. The conversion only
touches the s390 IOMMU driver and s390 arch code, moving over the remaining
functions from the s390 DMA API implementation. No changes to common code are
necessary.

After patch 4 the basic conversion is done, and under our machine
partitioning hypervisor LPAR performance matches the previous
implementation. When running under z/VM or KVM, however, performance
plummets to about half that of the existing code due to a much higher rate
of IOTLB flushes for unmapped pages. Because the hypervisors use IOTLB
flushes to synchronize their shadow tables, these flushes are very
expensive, and minimizing them is key to recovering the lost performance.

To this end, patches 5-6 add a new single-queue IOTLB flushing scheme as an
alternative to the existing per-CPU flush queues. Introducing an alternative
scheme was suggested by Robin Murphy[1]. The single-queue mode is introduced
in patch 5 together with a new .shadow_on_flush flag bit in struct
dev_iommu. This allows IOMMU drivers to indicate that their IOTLB flushes do
the extra work of shadowing, which then lets the dma-iommu code use a single
queue.
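
As a rough sketch of how this fits together (the helper name below is
made up for illustration; only .shadow_on_flush, struct dev_iommu and
zdev->tlb_refresh are taken from the series), the driver side boils
down to marking devices whose IOTLB flushes also maintain hypervisor
shadow tables, and dma-iommu then bases its flush queue choice on that
flag:

	/* Sketch only, not a literal hunk from patches 5-6 */
	static void s390_iommu_mark_shadowing(struct device *dev,
					      struct zpci_dev *zdev)
	{
		/* IOTLB flushes go out to the hypervisor for shadowing */
		if (zdev->tlb_refresh)
			dev->iommu->shadow_on_flush = 1;
	}

	/*
	 * dma-iommu side (conceptually): if dev->iommu->shadow_on_flush
	 * is set, pick one global flush queue instead of per-CPU queues.
	 */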

Patch 6 then enables variable queue sizes, using power-of-2 sizes and
shift/mask operations to keep performance as close to the fixed-size queue
code as possible. A larger queue size and timeout are used by dma-iommu when
shadow_on_flush is set. The same scheme may also be used by other IOMMU
drivers with similar requirements; virtio-iommu in particular may be
a candidate.
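
To illustrate the shift/mask point (a generic sketch with made-up
constants, not the actual dma-iommu hunk): with a power-of-2 queue size
the ring index wrap-around stays a single AND, so growing the queue does
not add arithmetic to the hot path:

	#define FQ_SIZE_ORDER	8			/* example: 256 entries */
	#define FQ_SIZE		(1U << FQ_SIZE_ORDER)
	#define FQ_MASK		(FQ_SIZE - 1)

	static inline unsigned int fq_ring_next(unsigned int idx)
	{
		return (idx + 1) & FQ_MASK;	/* wrap without a modulo */
	}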

I tested this code on s390x under the LPAR, z/VM and KVM hypervisors, as
well as on an AMD Ryzen x86 system with its native IOMMU and in a guest
with a modified virtio-iommu[4] that sets .shadow_on_flush = true.

This code is also available in the b4/dma_iommu topic branch of my
git.kernel.org repository[3] with tags matching the version sent.

NOTE: Due to the large drop in performance, I think we should not merge
the DMA API conversion (patch 4) until we have a better suited IOVA
flushing scheme with improvements similar to those of the proposed changes.

Best regards,
Niklas

[0] https://lore.kernel.org/linux-iommu/20221109142903.4080275-1-schnelle@linux.ibm.com/
[1] https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@arm.com/
[2] https://lore.kernel.org/linux-iommu/a8e778da-7b41-a6ba-83c3-c366a426c3da@arm.com/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/
[4] https://lore.kernel.org/lkml/20230726111433.1105665-1-schnelle@linux.ibm.com/

---
Changes in v12:
- Rebased on v6.5-rc7
- Changed queue type flag to an enum
- Incorporated feedback from Robin Murphy
  - Set options centrally and only once in iommu_dma_init_domain() with
    new helper iommu_dma_init_options()
  - Do not reset options when failing to init FQ
  - Fixed rebase mishap that partially rolled back patch 2
  - Simplified patch 4 by simply not claiming the deferred flush
    capability for ISM
  - Inlined and removed fq_flush_percpu()
  - Changed vzalloc() to vmalloc() for queue
- Added Acked-by's from Robin Murphy
- Link to v11: https://lore.kernel.org/r/20230717-dma_iommu-v11-0-a7a0b83c355c@linux.ibm.com

Changes in v11:
- Rebased on v6.5-rc2
- Added patch to force IOMMU_DOMAIN_DMA on s390 specific ISM devices
- Dropped the patch to properly set DMA mask on ISM devices which went upstream separately.
- s390 IOMMU driver now uses IOMMU_CAP_DEFERRED_FLUSH to enable DMA-FQ
  leaving no uses of IOMMU_DOMAIN_DMA_FQ in the driver.
- Link to v10: https://lore.kernel.org/r/20230310-dma_iommu-v10-0-f1fbd8310854@linux.ibm.com

Changes in v10:
- Rebased on v6.4-rc3
- Removed the .tune_dma_iommu() op in favor of a .shadow_on_flush flag
  in struct dev_iommu which then lets the dma-iommu code choose a single
  queue and larger timeouts and IOVA counts. This leaves the dma-iommu
  code with full responsibility for the settings.
- The above change affects patches 5 and 6 and led to a new subject for
  patch 6 since the flush queue size and timeout are no longer driver
  controlled
- Link to v9: https://lore.kernel.org/r/20230310-dma_iommu-v9-0-65bb8edd2beb@linux.ibm.com

Changes in v9:
- Rebased on v6.4-rc2
- Re-ordered iommu_group_store_type() to allow passing the device to
  iommu_dma_init_fq()
- Link to v8: https://lore.kernel.org/r/20230310-dma_iommu-v8-0-2347dfbed7af@linux.ibm.com

---
Niklas Schnelle (6):
      iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
      s390/pci: prepare is_passed_through() for dma-iommu
      s390/pci: Use dma-iommu layer
      iommu/s390: Disable deferred flush for ISM devices
      iommu/dma: Allow a single FQ in addition to per-CPU FQs
      iommu/dma: Use a large flush queue and timeout for shadow_on_flush

 Documentation/admin-guide/kernel-parameters.txt |   9 +-
 arch/s390/include/asm/pci.h                     |   7 -
 arch/s390/include/asm/pci_clp.h                 |   3 +
 arch/s390/include/asm/pci_dma.h                 | 119 +---
 arch/s390/pci/Makefile                          |   2 +-
 arch/s390/pci/pci.c                             |  22 +-
 arch/s390/pci/pci_bus.c                         |   5 -
 arch/s390/pci/pci_debug.c                       |  12 +-
 arch/s390/pci/pci_dma.c                         | 735 ------------------------
 arch/s390/pci/pci_event.c                       |  17 +-
 arch/s390/pci/pci_sysfs.c                       |  19 +-
 drivers/iommu/Kconfig                           |   4 +-
 drivers/iommu/amd/iommu.c                       |   5 +-
 drivers/iommu/apple-dart.c                      |   5 +-
 drivers/iommu/dma-iommu.c                       | 200 +++++--
 drivers/iommu/intel/iommu.c                     |   5 +-
 drivers/iommu/iommu.c                           |  20 +-
 drivers/iommu/msm_iommu.c                       |   5 +-
 drivers/iommu/mtk_iommu.c                       |   5 +-
 drivers/iommu/s390-iommu.c                      | 425 ++++++++++++--
 drivers/iommu/sprd-iommu.c                      |   5 +-
 drivers/iommu/sun50i-iommu.c                    |   6 +-
 drivers/iommu/tegra-gart.c                      |   5 +-
 include/linux/iommu.h                           |   6 +-
 24 files changed, 643 insertions(+), 1003 deletions(-)
---
base-commit: 706a741595047797872e669b3101429ab8d378ef
change-id: 20230310-dma_iommu-5e048c538647

Best regards,
-- 
Niklas Schnelle
Linux on Z Development

IBM Deutschland Research & Development GmbH
Chairman of the Supervisory Board: Gregor Pillen
Management: David Faller
Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294
IBM Data Privacy Statement - https://www.ibm.com/privacy 




* [PATCH v12 1/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 2/6] s390/pci: prepare is_passed_through() for dma-iommu Niklas Schnelle
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

On s390, when using a paging hypervisor, .iotlb_sync_map is used to sync
mappings by letting the hypervisor inspect the synced IOVA range and
update a shadow table. This however means that .iotlb_sync_map can fail
as the hypervisor may run out of resources while doing the sync. This
can be due to the hypervisor being unable to pin guest pages, due to a
limit on mapped addresses such as vfio_iommu_type1.dma_entry_limit, or
due to a lack of other resources. Either way, such a failure to sync a
mapping should result in a DMA_MAPPING_ERROR.

Now, especially when running with batched IOTLB flushes for unmap, it
may be that some IOVAs have already been invalidated but not yet synced
via .iotlb_sync_map. Thus, if the hypervisor indicates running out of
resources, first do a global flush allowing the hypervisor to free
resources associated with these mappings as well, and retry creating
the new mappings; only if that also fails, report the error to callers.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Jernej Skrabec <jernej.skrabec@gmail.com> # sun50i
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 drivers/iommu/amd/iommu.c    |  5 +++--
 drivers/iommu/apple-dart.c   |  5 +++--
 drivers/iommu/intel/iommu.c  |  5 +++--
 drivers/iommu/iommu.c        | 20 ++++++++++++++++----
 drivers/iommu/msm_iommu.c    |  5 +++--
 drivers/iommu/mtk_iommu.c    |  5 +++--
 drivers/iommu/s390-iommu.c   | 29 +++++++++++++++++++++++------
 drivers/iommu/sprd-iommu.c   |  5 +++--
 drivers/iommu/sun50i-iommu.c |  6 ++++--
 drivers/iommu/tegra-gart.c   |  5 +++--
 include/linux/iommu.h        |  4 ++--
 11 files changed, 66 insertions(+), 28 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index c3b58a8389b9..019d700ed0eb 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2219,14 +2219,15 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 	return ret;
 }
 
-static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
-				     unsigned long iova, size_t size)
+static int amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+				    unsigned long iova, size_t size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
 	if (ops->map_pages)
 		domain_flush_np_cache(domain, iova, size);
+	return 0;
 }
 
 static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 8af64b57f048..d061493db634 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -506,10 +506,11 @@ static void apple_dart_iotlb_sync(struct iommu_domain *domain,
 	apple_dart_domain_flush_tlb(to_dart_domain(domain));
 }
 
-static void apple_dart_iotlb_sync_map(struct iommu_domain *domain,
-				      unsigned long iova, size_t size)
+static int apple_dart_iotlb_sync_map(struct iommu_domain *domain,
+				     unsigned long iova, size_t size)
 {
 	apple_dart_domain_flush_tlb(to_dart_domain(domain));
+	return 0;
 }
 
 static phys_addr_t apple_dart_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 5c8c5cdc36cf..7c83493f0a42 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4697,8 +4697,8 @@ static bool risky_device(struct pci_dev *pdev)
 	return false;
 }
 
-static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
-				       unsigned long iova, size_t size)
+static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+				      unsigned long iova, size_t size)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	unsigned long pages = aligned_nrpages(iova, size);
@@ -4708,6 +4708,7 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
 
 	xa_for_each(&dmar_domain->iommu_array, i, info)
 		__mapping_notify_one(info->iommu, dmar_domain, pfn, pages);
+	return 0;
 }
 
 static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index caaf563d38ae..fd9f79731d6a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2413,8 +2413,17 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		return -EINVAL;
 
 	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
-	if (ret == 0 && ops->iotlb_sync_map)
-		ops->iotlb_sync_map(domain, iova, size);
+	if (ret == 0 && ops->iotlb_sync_map) {
+		ret = ops->iotlb_sync_map(domain, iova, size);
+		if (ret)
+			goto out_err;
+	}
+
+	return ret;
+
+out_err:
+	/* undo mappings already done */
+	iommu_unmap(domain, iova, size);
 
 	return ret;
 }
@@ -2555,8 +2564,11 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			sg = sg_next(sg);
 	}
 
-	if (ops->iotlb_sync_map)
-		ops->iotlb_sync_map(domain, iova, mapped);
+	if (ops->iotlb_sync_map) {
+		ret = ops->iotlb_sync_map(domain, iova, mapped);
+		if (ret)
+			goto out_err;
+	}
 	return mapped;
 
 out_err:
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 79d89bad5132..47926d3290e6 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -486,12 +486,13 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return ret;
 }
 
-static void msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-			       size_t size)
+static int msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			      size_t size)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 
 	__flush_iotlb_range(iova, size, SZ_4K, false, priv);
+	return 0;
 }
 
 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index e93906d6e112..c1bcec1979b0 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -794,12 +794,13 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
 	mtk_iommu_tlb_flush_range_sync(gather->start, length, dom->bank);
 }
 
-static void mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-			       size_t size)
+static int mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			      size_t size)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
 	mtk_iommu_tlb_flush_range_sync(iova, size, dom->bank);
+	return 0;
 }
 
 static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index fbf59a8db29b..6723d77489e8 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -205,6 +205,12 @@ static void s390_iommu_release_device(struct device *dev)
 		__s390_iommu_detach_device(zdev);
 }
 
+static int zpci_refresh_all(struct zpci_dev *zdev)
+{
+	return zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
+				  zdev->end_dma - zdev->start_dma + 1);
+}
+
 static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
@@ -212,8 +218,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
-		zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
-				   zdev->end_dma - zdev->start_dma + 1);
+		zpci_refresh_all(zdev);
 	}
 	rcu_read_unlock();
 }
@@ -237,20 +242,32 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain,
 	rcu_read_unlock();
 }
 
-static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
-				      unsigned long iova, size_t size)
+static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
+				     unsigned long iova, size_t size)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
 	struct zpci_dev *zdev;
+	int ret = 0;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
 		if (!zdev->tlb_refresh)
 			continue;
-		zpci_refresh_trans((u64)zdev->fh << 32,
-				   iova, size);
+		ret = zpci_refresh_trans((u64)zdev->fh << 32,
+					 iova, size);
+		/*
+		 * let the hypervisor discover invalidated entries
+		 * allowing it to free IOVAs and unpin pages
+		 */
+		if (ret == -ENOMEM) {
+			ret = zpci_refresh_all(zdev);
+			if (ret)
+				break;
+		}
 	}
 	rcu_read_unlock();
+
+	return ret;
 }
 
 static int s390_iommu_validate_trans(struct s390_domain *s390_domain,
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index 39e34fdeccda..18d61fe29ca0 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -343,8 +343,8 @@ static size_t sprd_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return size;
 }
 
-static void sprd_iommu_sync_map(struct iommu_domain *domain,
-				unsigned long iova, size_t size)
+static int sprd_iommu_sync_map(struct iommu_domain *domain,
+			       unsigned long iova, size_t size)
 {
 	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
 	unsigned int reg;
@@ -356,6 +356,7 @@ static void sprd_iommu_sync_map(struct iommu_domain *domain,
 
 	/* clear IOMMU TLB buffer after page table updated */
 	sprd_iommu_write(dom->sdev, reg, 0xffffffff);
+	return 0;
 }
 
 static void sprd_iommu_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index 74c5cb93e900..45c90fa27631 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -402,8 +402,8 @@ static void sun50i_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
 }
 
-static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
-					unsigned long iova, size_t size)
+static int sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
+				       unsigned long iova, size_t size)
 {
 	struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
 	struct sun50i_iommu *iommu = sun50i_domain->iommu;
@@ -412,6 +412,8 @@ static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
 	spin_lock_irqsave(&iommu->iommu_lock, flags);
 	sun50i_iommu_zap_range(iommu, iova, size);
 	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
+
+	return 0;
 }
 
 static void sun50i_iommu_iotlb_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index a482ff838b53..44966d7b07ba 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -252,10 +252,11 @@ static int gart_iommu_of_xlate(struct device *dev,
 	return 0;
 }
 
-static void gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
-				size_t size)
+static int gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+			       size_t size)
 {
 	FLUSH_GART_REGS(gart_handle);
+	return 0;
 }
 
 static void gart_iommu_sync(struct iommu_domain *domain,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d31642596675..182cc4c71e62 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -339,8 +339,8 @@ struct iommu_domain_ops {
 			      struct iommu_iotlb_gather *iotlb_gather);
 
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
-	void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
-			       size_t size);
+	int (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
+			      size_t size);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
 

-- 
2.39.2




* [PATCH v12 2/6] s390/pci: prepare is_passed_through() for dma-iommu
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 1/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 3/6] s390/pci: Use dma-iommu layer Niklas Schnelle
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

With the IOMMU always controlled through the IOMMU driver, testing for
zdev->s390_domain is not a valid indication of the device being passed
through. Instead, test whether zdev->kzdev is set.

Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 arch/s390/pci/pci_event.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
index b9324ca2eb94..4ef5a6a1d618 100644
--- a/arch/s390/pci/pci_event.c
+++ b/arch/s390/pci/pci_event.c
@@ -59,9 +59,16 @@ static inline bool ers_result_indicates_abort(pci_ers_result_t ers_res)
 	}
 }
 
-static bool is_passed_through(struct zpci_dev *zdev)
+static bool is_passed_through(struct pci_dev *pdev)
 {
-	return zdev->s390_domain;
+	struct zpci_dev *zdev = to_zpci(pdev);
+	bool ret;
+
+	mutex_lock(&zdev->kzdev_lock);
+	ret = !!zdev->kzdev;
+	mutex_unlock(&zdev->kzdev_lock);
+
+	return ret;
 }
 
 static bool is_driver_supported(struct pci_driver *driver)
@@ -176,7 +183,7 @@ static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
 	}
 	pdev->error_state = pci_channel_io_frozen;
 
-	if (is_passed_through(to_zpci(pdev))) {
+	if (is_passed_through(pdev)) {
 		pr_info("%s: Cannot be recovered in the host because it is a pass-through device\n",
 			pci_name(pdev));
 		goto out_unlock;
@@ -239,7 +246,7 @@ static void zpci_event_io_failure(struct pci_dev *pdev, pci_channel_state_t es)
 	 * we will inject the error event and let the guest recover the device
 	 * itself.
 	 */
-	if (is_passed_through(to_zpci(pdev)))
+	if (is_passed_through(pdev))
 		goto out;
 	driver = to_pci_driver(pdev->dev.driver);
 	if (driver && driver->err_handler && driver->err_handler->error_detected)

-- 
2.39.2




* [PATCH v12 3/6] s390/pci: Use dma-iommu layer
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 1/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 2/6] s390/pci: prepare is_passed_through() for dma-iommu Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices Niklas Schnelle
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

While s390 already has a standard IOMMU driver and previous changes have
added I/O TLB flushing operations, this driver is currently only used
for user-space PCI access such as vfio-pci. For the DMA API, s390
instead uses its own implementation in arch/s390/pci/pci_dma.c which
drives the same hardware and shares some code, but requires a complex
and fragile hand-over between DMA API and IOMMU API use of a device
and, despite the code sharing, still leads to significant duplication
and maintenance effort. Let's use the common DMA API implementation
from drivers/iommu/dma-iommu.c instead, allowing us to get rid of
arch/s390/pci/pci_dma.c.

Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 Documentation/admin-guide/kernel-parameters.txt |   9 +-
 arch/s390/include/asm/pci.h                     |   7 -
 arch/s390/include/asm/pci_clp.h                 |   3 +
 arch/s390/include/asm/pci_dma.h                 | 119 +---
 arch/s390/pci/Makefile                          |   2 +-
 arch/s390/pci/pci.c                             |  22 +-
 arch/s390/pci/pci_bus.c                         |   5 -
 arch/s390/pci/pci_debug.c                       |  12 +-
 arch/s390/pci/pci_dma.c                         | 735 ------------------------
 arch/s390/pci/pci_event.c                       |   2 -
 arch/s390/pci/pci_sysfs.c                       |  19 +-
 drivers/iommu/Kconfig                           |   4 +-
 drivers/iommu/s390-iommu.c                      | 391 ++++++++++++-
 13 files changed, 407 insertions(+), 923 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 722b6eca2e93..91adad8587d5 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2208,7 +2208,7 @@
 			  forcing Dual Address Cycle for PCI cards supporting
 			  greater than 32-bit addressing.
 
-	iommu.strict=	[ARM64, X86] Configure TLB invalidation behaviour
+	iommu.strict=	[ARM64, X86, S390] Configure TLB invalidation behaviour
 			Format: { "0" | "1" }
 			0 - Lazy mode.
 			  Request that DMA unmap operations use deferred
@@ -5534,9 +5534,10 @@
 	s390_iommu=	[HW,S390]
 			Set s390 IOTLB flushing mode
 		strict
-			With strict flushing every unmap operation will result in
-			an IOTLB flush. Default is lazy flushing before reuse,
-			which is faster.
+			With strict flushing every unmap operation will result
+			in an IOTLB flush. Default is lazy flushing before
+			reuse, which is faster. Deprecated, equivalent to
+			iommu.strict=1.
 
 	s390_iommu_aperture=	[KNL,S390]
 			Specifies the size of the per device DMA address space
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
index b248694e0024..3f74f1cf37df 100644
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -159,13 +159,6 @@ struct zpci_dev {
 	unsigned long	*dma_table;
 	int		tlb_refresh;
 
-	spinlock_t	iommu_bitmap_lock;
-	unsigned long	*iommu_bitmap;
-	unsigned long	*lazy_bitmap;
-	unsigned long	iommu_size;
-	unsigned long	iommu_pages;
-	unsigned int	next_bit;
-
 	struct iommu_device iommu_dev;  /* IOMMU core handle */
 
 	char res_name[16];
diff --git a/arch/s390/include/asm/pci_clp.h b/arch/s390/include/asm/pci_clp.h
index d6189ed14f84..f0c677ddd270 100644
--- a/arch/s390/include/asm/pci_clp.h
+++ b/arch/s390/include/asm/pci_clp.h
@@ -50,6 +50,9 @@ struct clp_fh_list_entry {
 #define CLP_UTIL_STR_LEN	64
 #define CLP_PFIP_NR_SEGMENTS	4
 
+/* PCI function type numbers */
+#define PCI_FUNC_TYPE_ISM	0x5	/* ISM device */
+
 extern bool zpci_unique_uid;
 
 struct clp_rsp_slpc_pci {
diff --git a/arch/s390/include/asm/pci_dma.h b/arch/s390/include/asm/pci_dma.h
index 7119c04c51c5..42d7cc4262ca 100644
--- a/arch/s390/include/asm/pci_dma.h
+++ b/arch/s390/include/asm/pci_dma.h
@@ -82,117 +82,16 @@ enum zpci_ioat_dtype {
 #define ZPCI_TABLE_VALID_MASK		0x20
 #define ZPCI_TABLE_PROT_MASK		0x200
 
-static inline unsigned int calc_rtx(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
-}
+struct zpci_iommu_ctrs {
+	atomic64_t		mapped_pages;
+	atomic64_t		unmapped_pages;
+	atomic64_t		global_rpcits;
+	atomic64_t		sync_map_rpcits;
+	atomic64_t		sync_rpcits;
+};
 
-static inline unsigned int calc_sx(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK;
-}
-
-static inline unsigned int calc_px(dma_addr_t ptr)
-{
-	return ((unsigned long) ptr >> PAGE_SHIFT) & ZPCI_PT_MASK;
-}
-
-static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa)
-{
-	*entry &= ZPCI_PTE_FLAG_MASK;
-	*entry |= (pfaa & ZPCI_PTE_ADDR_MASK);
-}
-
-static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto)
-{
-	*entry &= ZPCI_RTE_FLAG_MASK;
-	*entry |= (sto & ZPCI_RTE_ADDR_MASK);
-	*entry |= ZPCI_TABLE_TYPE_RTX;
-}
-
-static inline void set_st_pto(unsigned long *entry, phys_addr_t pto)
-{
-	*entry &= ZPCI_STE_FLAG_MASK;
-	*entry |= (pto & ZPCI_STE_ADDR_MASK);
-	*entry |= ZPCI_TABLE_TYPE_SX;
-}
-
-static inline void validate_rt_entry(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_VALID_MASK;
-	*entry &= ~ZPCI_TABLE_OFFSET_MASK;
-	*entry |= ZPCI_TABLE_VALID;
-	*entry |= ZPCI_TABLE_LEN_RTX;
-}
-
-static inline void validate_st_entry(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_VALID_MASK;
-	*entry |= ZPCI_TABLE_VALID;
-}
-
-static inline void invalidate_pt_entry(unsigned long *entry)
-{
-	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID);
-	*entry &= ~ZPCI_PTE_VALID_MASK;
-	*entry |= ZPCI_PTE_INVALID;
-}
-
-static inline void validate_pt_entry(unsigned long *entry)
-{
-	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID);
-	*entry &= ~ZPCI_PTE_VALID_MASK;
-	*entry |= ZPCI_PTE_VALID;
-}
-
-static inline void entry_set_protected(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_PROT_MASK;
-	*entry |= ZPCI_TABLE_PROTECTED;
-}
-
-static inline void entry_clr_protected(unsigned long *entry)
-{
-	*entry &= ~ZPCI_TABLE_PROT_MASK;
-	*entry |= ZPCI_TABLE_UNPROTECTED;
-}
-
-static inline int reg_entry_isvalid(unsigned long entry)
-{
-	return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID;
-}
-
-static inline int pt_entry_isvalid(unsigned long entry)
-{
-	return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID;
-}
-
-static inline unsigned long *get_rt_sto(unsigned long entry)
-{
-	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
-		return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK);
-	else
-		return NULL;
-
-}
-
-static inline unsigned long *get_st_pto(unsigned long entry)
-{
-	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
-		return phys_to_virt(entry & ZPCI_STE_ADDR_MASK);
-	else
-		return NULL;
-}
-
-/* Prototypes */
-void dma_free_seg_table(unsigned long);
-unsigned long *dma_alloc_cpu_table(gfp_t gfp);
-void dma_cleanup_tables(unsigned long *);
-unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr,
-				  gfp_t gfp);
-void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags);
-
-extern const struct dma_map_ops s390_pci_dma_ops;
+struct zpci_dev;
 
+struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev);
 
 #endif
diff --git a/arch/s390/pci/Makefile b/arch/s390/pci/Makefile
index 5ae31ca9dd44..0547a10406e7 100644
--- a/arch/s390/pci/Makefile
+++ b/arch/s390/pci/Makefile
@@ -3,7 +3,7 @@
 # Makefile for the s390 PCI subsystem.
 #
 
-obj-$(CONFIG_PCI)	+= pci.o pci_irq.o pci_dma.o pci_clp.o pci_sysfs.o \
+obj-$(CONFIG_PCI)	+= pci.o pci_irq.o pci_clp.o pci_sysfs.o \
 			   pci_event.o pci_debug.o pci_insn.o pci_mmio.o \
 			   pci_bus.o pci_kvm_hook.o
 obj-$(CONFIG_PCI_IOV)	+= pci_iov.o
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index afc3f33788da..36aea977bbe0 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -124,7 +124,11 @@ int zpci_register_ioat(struct zpci_dev *zdev, u8 dmaas,
 
 	WARN_ON_ONCE(iota & 0x3fff);
 	fib.pba = base;
-	fib.pal = limit;
+	/* Work around off by one in ISM virt device */
+	if (zdev->pft == PCI_FUNC_TYPE_ISM && limit > base)
+		fib.pal = limit + (1 << 12);
+	else
+		fib.pal = limit;
 	fib.iota = iota | ZPCI_IOTA_RTTO_FLAG;
 	fib.gd = zdev->gisa;
 	cc = zpci_mod_fc(req, &fib, status);
@@ -619,7 +623,6 @@ int pcibios_device_add(struct pci_dev *pdev)
 		pdev->no_vf_scan = 1;
 
 	pdev->dev.groups = zpci_attr_groups;
-	pdev->dev.dma_ops = &s390_pci_dma_ops;
 	zpci_map_resources(pdev);
 
 	for (i = 0; i < PCI_STD_NUM_BARS; i++) {
@@ -793,8 +796,6 @@ int zpci_hot_reset_device(struct zpci_dev *zdev)
 	if (zdev->dma_table)
 		rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
 					virt_to_phys(zdev->dma_table), &status);
-	else
-		rc = zpci_dma_init_device(zdev);
 	if (rc) {
 		zpci_disable_device(zdev);
 		return rc;
@@ -902,11 +903,6 @@ int zpci_deconfigure_device(struct zpci_dev *zdev)
 	if (zdev->zbus->bus)
 		zpci_bus_remove_device(zdev, false);
 
-	if (zdev->dma_table) {
-		rc = zpci_dma_exit_device(zdev);
-		if (rc)
-			return rc;
-	}
 	if (zdev_enabled(zdev)) {
 		rc = zpci_disable_device(zdev);
 		if (rc)
@@ -955,8 +951,6 @@ void zpci_release_device(struct kref *kref)
 	if (zdev->zbus->bus)
 		zpci_bus_remove_device(zdev, false);
 
-	if (zdev->dma_table)
-		zpci_dma_exit_device(zdev);
 	if (zdev_enabled(zdev))
 		zpci_disable_device(zdev);
 
@@ -1146,10 +1140,6 @@ static int __init pci_base_init(void)
 	if (rc)
 		goto out_irq;
 
-	rc = zpci_dma_init();
-	if (rc)
-		goto out_dma;
-
 	rc = clp_scan_pci_devices();
 	if (rc)
 		goto out_find;
@@ -1159,8 +1149,6 @@ static int __init pci_base_init(void)
 	return 0;
 
 out_find:
-	zpci_dma_exit();
-out_dma:
 	zpci_irq_exit();
 out_irq:
 	zpci_mem_exit();
diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
index 32245b970a0c..daa5d7450c7d 100644
--- a/arch/s390/pci/pci_bus.c
+++ b/arch/s390/pci/pci_bus.c
@@ -47,11 +47,6 @@ static int zpci_bus_prepare_device(struct zpci_dev *zdev)
 		rc = zpci_enable_device(zdev);
 		if (rc)
 			return rc;
-		rc = zpci_dma_init_device(zdev);
-		if (rc) {
-			zpci_disable_device(zdev);
-			return rc;
-		}
 	}
 
 	if (!zdev->has_resources) {
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index ca6bd98eec13..6dde2263c79d 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -53,9 +53,11 @@ static char *pci_fmt3_names[] = {
 };
 
 static char *pci_sw_names[] = {
-	"Allocated pages",
 	"Mapped pages",
 	"Unmapped pages",
+	"Global RPCITs",
+	"Sync Map RPCITs",
+	"Sync RPCITs",
 };
 
 static void pci_fmb_show(struct seq_file *m, char *name[], int length,
@@ -69,10 +71,14 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
 
 static void pci_sw_counter_show(struct seq_file *m)
 {
-	struct zpci_dev *zdev = m->private;
-	atomic64_t *counter = &zdev->allocated_pages;
+	struct zpci_iommu_ctrs  *ctrs = zpci_get_iommu_ctrs(m->private);
+	atomic64_t *counter;
 	int i;
 
+	if (!ctrs)
+		return;
+
+	counter = &ctrs->mapped_pages;
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
 		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
 			   atomic64_read(counter));
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
deleted file mode 100644
index 2d9b01d7ca4c..000000000000
--- a/arch/s390/pci/pci_dma.c
+++ /dev/null
@@ -1,735 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright IBM Corp. 2012
- *
- * Author(s):
- *   Jan Glauber <jang@linux.vnet.ibm.com>
- */
-
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/export.h>
-#include <linux/iommu-helper.h>
-#include <linux/dma-map-ops.h>
-#include <linux/vmalloc.h>
-#include <linux/pci.h>
-#include <asm/pci_dma.h>
-
-static struct kmem_cache *dma_region_table_cache;
-static struct kmem_cache *dma_page_table_cache;
-static int s390_iommu_strict;
-static u64 s390_iommu_aperture;
-static u32 s390_iommu_aperture_factor = 1;
-
-static int zpci_refresh_global(struct zpci_dev *zdev)
-{
-	return zpci_refresh_trans((u64) zdev->fh << 32, zdev->start_dma,
-				  zdev->iommu_pages * PAGE_SIZE);
-}
-
-unsigned long *dma_alloc_cpu_table(gfp_t gfp)
-{
-	unsigned long *table, *entry;
-
-	table = kmem_cache_alloc(dma_region_table_cache, gfp);
-	if (!table)
-		return NULL;
-
-	for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++)
-		*entry = ZPCI_TABLE_INVALID;
-	return table;
-}
-
-static void dma_free_cpu_table(void *table)
-{
-	kmem_cache_free(dma_region_table_cache, table);
-}
-
-static unsigned long *dma_alloc_page_table(gfp_t gfp)
-{
-	unsigned long *table, *entry;
-
-	table = kmem_cache_alloc(dma_page_table_cache, gfp);
-	if (!table)
-		return NULL;
-
-	for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++)
-		*entry = ZPCI_PTE_INVALID;
-	return table;
-}
-
-static void dma_free_page_table(void *table)
-{
-	kmem_cache_free(dma_page_table_cache, table);
-}
-
-static unsigned long *dma_get_seg_table_origin(unsigned long *rtep, gfp_t gfp)
-{
-	unsigned long old_rte, rte;
-	unsigned long *sto;
-
-	rte = READ_ONCE(*rtep);
-	if (reg_entry_isvalid(rte)) {
-		sto = get_rt_sto(rte);
-	} else {
-		sto = dma_alloc_cpu_table(gfp);
-		if (!sto)
-			return NULL;
-
-		set_rt_sto(&rte, virt_to_phys(sto));
-		validate_rt_entry(&rte);
-		entry_clr_protected(&rte);
-
-		old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte);
-		if (old_rte != ZPCI_TABLE_INVALID) {
-			/* Somone else was faster, use theirs */
-			dma_free_cpu_table(sto);
-			sto = get_rt_sto(old_rte);
-		}
-	}
-	return sto;
-}
-
-static unsigned long *dma_get_page_table_origin(unsigned long *step, gfp_t gfp)
-{
-	unsigned long old_ste, ste;
-	unsigned long *pto;
-
-	ste = READ_ONCE(*step);
-	if (reg_entry_isvalid(ste)) {
-		pto = get_st_pto(ste);
-	} else {
-		pto = dma_alloc_page_table(gfp);
-		if (!pto)
-			return NULL;
-		set_st_pto(&ste, virt_to_phys(pto));
-		validate_st_entry(&ste);
-		entry_clr_protected(&ste);
-
-		old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste);
-		if (old_ste != ZPCI_TABLE_INVALID) {
-			/* Somone else was faster, use theirs */
-			dma_free_page_table(pto);
-			pto = get_st_pto(old_ste);
-		}
-	}
-	return pto;
-}
-
-unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr,
-				  gfp_t gfp)
-{
-	unsigned long *sto, *pto;
-	unsigned int rtx, sx, px;
-
-	rtx = calc_rtx(dma_addr);
-	sto = dma_get_seg_table_origin(&rto[rtx], gfp);
-	if (!sto)
-		return NULL;
-
-	sx = calc_sx(dma_addr);
-	pto = dma_get_page_table_origin(&sto[sx], gfp);
-	if (!pto)
-		return NULL;
-
-	px = calc_px(dma_addr);
-	return &pto[px];
-}
-
-void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags)
-{
-	unsigned long pte;
-
-	pte = READ_ONCE(*ptep);
-	if (flags & ZPCI_PTE_INVALID) {
-		invalidate_pt_entry(&pte);
-	} else {
-		set_pt_pfaa(&pte, page_addr);
-		validate_pt_entry(&pte);
-	}
-
-	if (flags & ZPCI_TABLE_PROTECTED)
-		entry_set_protected(&pte);
-	else
-		entry_clr_protected(&pte);
-
-	xchg(ptep, pte);
-}
-
-static int __dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa,
-			      dma_addr_t dma_addr, size_t size, int flags)
-{
-	unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	phys_addr_t page_addr = (pa & PAGE_MASK);
-	unsigned long *entry;
-	int i, rc = 0;
-
-	if (!nr_pages)
-		return -EINVAL;
-
-	if (!zdev->dma_table)
-		return -EINVAL;
-
-	for (i = 0; i < nr_pages; i++) {
-		entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr,
-					   GFP_ATOMIC);
-		if (!entry) {
-			rc = -ENOMEM;
-			goto undo_cpu_trans;
-		}
-		dma_update_cpu_trans(entry, page_addr, flags);
-		page_addr += PAGE_SIZE;
-		dma_addr += PAGE_SIZE;
-	}
-
-undo_cpu_trans:
-	if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID)) {
-		flags = ZPCI_PTE_INVALID;
-		while (i-- > 0) {
-			page_addr -= PAGE_SIZE;
-			dma_addr -= PAGE_SIZE;
-			entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr,
-						   GFP_ATOMIC);
-			if (!entry)
-				break;
-			dma_update_cpu_trans(entry, page_addr, flags);
-		}
-	}
-	return rc;
-}
-
-static int __dma_purge_tlb(struct zpci_dev *zdev, dma_addr_t dma_addr,
-			   size_t size, int flags)
-{
-	unsigned long irqflags;
-	int ret;
-
-	/*
-	 * With zdev->tlb_refresh == 0, rpcit is not required to establish new
-	 * translations when previously invalid translation-table entries are
-	 * validated. With lazy unmap, rpcit is skipped for previously valid
-	 * entries, but a global rpcit is then required before any address can
-	 * be re-used, i.e. after each iommu bitmap wrap-around.
-	 */
-	if ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID) {
-		if (!zdev->tlb_refresh)
-			return 0;
-	} else {
-		if (!s390_iommu_strict)
-			return 0;
-	}
-
-	ret = zpci_refresh_trans((u64) zdev->fh << 32, dma_addr,
-				 PAGE_ALIGN(size));
-	if (ret == -ENOMEM && !s390_iommu_strict) {
-		/* enable the hypervisor to free some resources */
-		if (zpci_refresh_global(zdev))
-			goto out;
-
-		spin_lock_irqsave(&zdev->iommu_bitmap_lock, irqflags);
-		bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap,
-			      zdev->lazy_bitmap, zdev->iommu_pages);
-		bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages);
-		spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, irqflags);
-		ret = 0;
-	}
-out:
-	return ret;
-}
-
-static int dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa,
-			    dma_addr_t dma_addr, size_t size, int flags)
-{
-	int rc;
-
-	rc = __dma_update_trans(zdev, pa, dma_addr, size, flags);
-	if (rc)
-		return rc;
-
-	rc = __dma_purge_tlb(zdev, dma_addr, size, flags);
-	if (rc && ((flags & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID))
-		__dma_update_trans(zdev, pa, dma_addr, size, ZPCI_PTE_INVALID);
-
-	return rc;
-}
-
-void dma_free_seg_table(unsigned long entry)
-{
-	unsigned long *sto = get_rt_sto(entry);
-	int sx;
-
-	for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++)
-		if (reg_entry_isvalid(sto[sx]))
-			dma_free_page_table(get_st_pto(sto[sx]));
-
-	dma_free_cpu_table(sto);
-}
-
-void dma_cleanup_tables(unsigned long *table)
-{
-	int rtx;
-
-	if (!table)
-		return;
-
-	for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++)
-		if (reg_entry_isvalid(table[rtx]))
-			dma_free_seg_table(table[rtx]);
-
-	dma_free_cpu_table(table);
-}
-
-static unsigned long __dma_alloc_iommu(struct device *dev,
-				       unsigned long start, int size)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-
-	return iommu_area_alloc(zdev->iommu_bitmap, zdev->iommu_pages,
-				start, size, zdev->start_dma >> PAGE_SHIFT,
-				dma_get_seg_boundary_nr_pages(dev, PAGE_SHIFT),
-				0);
-}
-
-static dma_addr_t dma_alloc_address(struct device *dev, int size)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	unsigned long offset, flags;
-
-	spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags);
-	offset = __dma_alloc_iommu(dev, zdev->next_bit, size);
-	if (offset == -1) {
-		if (!s390_iommu_strict) {
-			/* global flush before DMA addresses are reused */
-			if (zpci_refresh_global(zdev))
-				goto out_error;
-
-			bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap,
-				      zdev->lazy_bitmap, zdev->iommu_pages);
-			bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages);
-		}
-		/* wrap-around */
-		offset = __dma_alloc_iommu(dev, 0, size);
-		if (offset == -1)
-			goto out_error;
-	}
-	zdev->next_bit = offset + size;
-	spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags);
-
-	return zdev->start_dma + offset * PAGE_SIZE;
-
-out_error:
-	spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags);
-	return DMA_MAPPING_ERROR;
-}
-
-static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	unsigned long flags, offset;
-
-	offset = (dma_addr - zdev->start_dma) >> PAGE_SHIFT;
-
-	spin_lock_irqsave(&zdev->iommu_bitmap_lock, flags);
-	if (!zdev->iommu_bitmap)
-		goto out;
-
-	if (s390_iommu_strict)
-		bitmap_clear(zdev->iommu_bitmap, offset, size);
-	else
-		bitmap_set(zdev->lazy_bitmap, offset, size);
-
-out:
-	spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags);
-}
-
-static inline void zpci_err_dma(unsigned long rc, unsigned long addr)
-{
-	struct {
-		unsigned long rc;
-		unsigned long addr;
-	} __packed data = {rc, addr};
-
-	zpci_err_hex(&data, sizeof(data));
-}
-
-static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page,
-				     unsigned long offset, size_t size,
-				     enum dma_data_direction direction,
-				     unsigned long attrs)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	unsigned long pa = page_to_phys(page) + offset;
-	int flags = ZPCI_PTE_VALID;
-	unsigned long nr_pages;
-	dma_addr_t dma_addr;
-	int ret;
-
-	/* This rounds up number of pages based on size and offset */
-	nr_pages = iommu_num_pages(pa, size, PAGE_SIZE);
-	dma_addr = dma_alloc_address(dev, nr_pages);
-	if (dma_addr == DMA_MAPPING_ERROR) {
-		ret = -ENOSPC;
-		goto out_err;
-	}
-
-	/* Use rounded up size */
-	size = nr_pages * PAGE_SIZE;
-
-	if (direction == DMA_NONE || direction == DMA_TO_DEVICE)
-		flags |= ZPCI_TABLE_PROTECTED;
-
-	ret = dma_update_trans(zdev, pa, dma_addr, size, flags);
-	if (ret)
-		goto out_free;
-
-	atomic64_add(nr_pages, &zdev->mapped_pages);
-	return dma_addr + (offset & ~PAGE_MASK);
-
-out_free:
-	dma_free_address(dev, dma_addr, nr_pages);
-out_err:
-	zpci_err("map error:\n");
-	zpci_err_dma(ret, pa);
-	return DMA_MAPPING_ERROR;
-}
-
-static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr,
-				 size_t size, enum dma_data_direction direction,
-				 unsigned long attrs)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	int npages, ret;
-
-	npages = iommu_num_pages(dma_addr, size, PAGE_SIZE);
-	dma_addr = dma_addr & PAGE_MASK;
-	ret = dma_update_trans(zdev, 0, dma_addr, npages * PAGE_SIZE,
-			       ZPCI_PTE_INVALID);
-	if (ret) {
-		zpci_err("unmap error:\n");
-		zpci_err_dma(ret, dma_addr);
-		return;
-	}
-
-	atomic64_add(npages, &zdev->unmapped_pages);
-	dma_free_address(dev, dma_addr, npages);
-}
-
-static void *s390_dma_alloc(struct device *dev, size_t size,
-			    dma_addr_t *dma_handle, gfp_t flag,
-			    unsigned long attrs)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	struct page *page;
-	phys_addr_t pa;
-	dma_addr_t map;
-
-	size = PAGE_ALIGN(size);
-	page = alloc_pages(flag | __GFP_ZERO, get_order(size));
-	if (!page)
-		return NULL;
-
-	pa = page_to_phys(page);
-	map = s390_dma_map_pages(dev, page, 0, size, DMA_BIDIRECTIONAL, 0);
-	if (dma_mapping_error(dev, map)) {
-		__free_pages(page, get_order(size));
-		return NULL;
-	}
-
-	atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages);
-	if (dma_handle)
-		*dma_handle = map;
-	return phys_to_virt(pa);
-}
-
-static void s390_dma_free(struct device *dev, size_t size,
-			  void *vaddr, dma_addr_t dma_handle,
-			  unsigned long attrs)
-{
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-
-	size = PAGE_ALIGN(size);
-	atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages);
-	s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, 0);
-	free_pages((unsigned long)vaddr, get_order(size));
-}
-
-/* Map a segment into a contiguous dma address area */
-static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
-			     size_t size, dma_addr_t *handle,
-			     enum dma_data_direction dir)
-{
-	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
-	dma_addr_t dma_addr_base, dma_addr;
-	int flags = ZPCI_PTE_VALID;
-	struct scatterlist *s;
-	phys_addr_t pa = 0;
-	int ret;
-
-	dma_addr_base = dma_alloc_address(dev, nr_pages);
-	if (dma_addr_base == DMA_MAPPING_ERROR)
-		return -ENOMEM;
-
-	dma_addr = dma_addr_base;
-	if (dir == DMA_NONE || dir == DMA_TO_DEVICE)
-		flags |= ZPCI_TABLE_PROTECTED;
-
-	for (s = sg; dma_addr < dma_addr_base + size; s = sg_next(s)) {
-		pa = page_to_phys(sg_page(s));
-		ret = __dma_update_trans(zdev, pa, dma_addr,
-					 s->offset + s->length, flags);
-		if (ret)
-			goto unmap;
-
-		dma_addr += s->offset + s->length;
-	}
-	ret = __dma_purge_tlb(zdev, dma_addr_base, size, flags);
-	if (ret)
-		goto unmap;
-
-	*handle = dma_addr_base;
-	atomic64_add(nr_pages, &zdev->mapped_pages);
-
-	return ret;
-
-unmap:
-	dma_update_trans(zdev, 0, dma_addr_base, dma_addr - dma_addr_base,
-			 ZPCI_PTE_INVALID);
-	dma_free_address(dev, dma_addr_base, nr_pages);
-	zpci_err("map error:\n");
-	zpci_err_dma(ret, pa);
-	return ret;
-}
-
-static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
-			   int nr_elements, enum dma_data_direction dir,
-			   unsigned long attrs)
-{
-	struct scatterlist *s = sg, *start = sg, *dma = sg;
-	unsigned int max = dma_get_max_seg_size(dev);
-	unsigned int size = s->offset + s->length;
-	unsigned int offset = s->offset;
-	int count = 0, i, ret;
-
-	for (i = 1; i < nr_elements; i++) {
-		s = sg_next(s);
-
-		s->dma_length = 0;
-
-		if (s->offset || (size & ~PAGE_MASK) ||
-		    size + s->length > max) {
-			ret = __s390_dma_map_sg(dev, start, size,
-						&dma->dma_address, dir);
-			if (ret)
-				goto unmap;
-
-			dma->dma_address += offset;
-			dma->dma_length = size - offset;
-
-			size = offset = s->offset;
-			start = s;
-			dma = sg_next(dma);
-			count++;
-		}
-		size += s->length;
-	}
-	ret = __s390_dma_map_sg(dev, start, size, &dma->dma_address, dir);
-	if (ret)
-		goto unmap;
-
-	dma->dma_address += offset;
-	dma->dma_length = size - offset;
-
-	return count + 1;
-unmap:
-	for_each_sg(sg, s, count, i)
-		s390_dma_unmap_pages(dev, sg_dma_address(s), sg_dma_len(s),
-				     dir, attrs);
-
-	return ret;
-}
-
-static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-			      int nr_elements, enum dma_data_direction dir,
-			      unsigned long attrs)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nr_elements, i) {
-		if (s->dma_length)
-			s390_dma_unmap_pages(dev, s->dma_address, s->dma_length,
-					     dir, attrs);
-		s->dma_address = 0;
-		s->dma_length = 0;
-	}
-}
-	
-int zpci_dma_init_device(struct zpci_dev *zdev)
-{
-	u8 status;
-	int rc;
-
-	/*
-	 * At this point, if the device is part of an IOMMU domain, this would
-	 * be a strong hint towards a bug in the IOMMU API (common) code and/or
-	 * simultaneous access via IOMMU and DMA API. So let's issue a warning.
-	 */
-	WARN_ON(zdev->s390_domain);
-
-	spin_lock_init(&zdev->iommu_bitmap_lock);
-
-	zdev->dma_table = dma_alloc_cpu_table(GFP_KERNEL);
-	if (!zdev->dma_table) {
-		rc = -ENOMEM;
-		goto out;
-	}
-
-	/*
-	 * Restrict the iommu bitmap size to the minimum of the following:
-	 * - s390_iommu_aperture which defaults to high_memory
-	 * - 3-level pagetable address limit minus start_dma offset
-	 * - DMA address range allowed by the hardware (clp query pci fn)
-	 *
-	 * Also set zdev->end_dma to the actual end address of the usable
-	 * range, instead of the theoretical maximum as reported by hardware.
-	 *
-	 * This limits the number of concurrently usable DMA mappings since
-	 * for each DMA mapped memory address we need a DMA address including
-	 * extra DMA addresses for multiple mappings of the same memory address.
-	 */
-	zdev->start_dma = PAGE_ALIGN(zdev->start_dma);
-	zdev->iommu_size = min3(s390_iommu_aperture,
-				ZPCI_TABLE_SIZE_RT - zdev->start_dma,
-				zdev->end_dma - zdev->start_dma + 1);
-	zdev->end_dma = zdev->start_dma + zdev->iommu_size - 1;
-	zdev->iommu_pages = zdev->iommu_size >> PAGE_SHIFT;
-	zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8);
-	if (!zdev->iommu_bitmap) {
-		rc = -ENOMEM;
-		goto free_dma_table;
-	}
-	if (!s390_iommu_strict) {
-		zdev->lazy_bitmap = vzalloc(zdev->iommu_pages / 8);
-		if (!zdev->lazy_bitmap) {
-			rc = -ENOMEM;
-			goto free_bitmap;
-		}
-
-	}
-	if (zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
-			       virt_to_phys(zdev->dma_table), &status)) {
-		rc = -EIO;
-		goto free_bitmap;
-	}
-
-	return 0;
-free_bitmap:
-	vfree(zdev->iommu_bitmap);
-	zdev->iommu_bitmap = NULL;
-	vfree(zdev->lazy_bitmap);
-	zdev->lazy_bitmap = NULL;
-free_dma_table:
-	dma_free_cpu_table(zdev->dma_table);
-	zdev->dma_table = NULL;
-out:
-	return rc;
-}
-
-int zpci_dma_exit_device(struct zpci_dev *zdev)
-{
-	int cc = 0;
-
-	/*
-	 * At this point, if the device is part of an IOMMU domain, this would
-	 * be a strong hint towards a bug in the IOMMU API (common) code and/or
-	 * simultaneous access via IOMMU and DMA API. So let's issue a warning.
-	 */
-	WARN_ON(zdev->s390_domain);
-	if (zdev_enabled(zdev))
-		cc = zpci_unregister_ioat(zdev, 0);
-	/*
-	 * cc == 3 indicates the function is gone already. This can happen
-	 * if the function was deconfigured/disabled suddenly and we have not
-	 * received a new handle yet.
-	 */
-	if (cc && cc != 3)
-		return -EIO;
-
-	dma_cleanup_tables(zdev->dma_table);
-	zdev->dma_table = NULL;
-	vfree(zdev->iommu_bitmap);
-	zdev->iommu_bitmap = NULL;
-	vfree(zdev->lazy_bitmap);
-	zdev->lazy_bitmap = NULL;
-	zdev->next_bit = 0;
-	return 0;
-}
-
-static int __init dma_alloc_cpu_table_caches(void)
-{
-	dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables",
-					ZPCI_TABLE_SIZE, ZPCI_TABLE_ALIGN,
-					0, NULL);
-	if (!dma_region_table_cache)
-		return -ENOMEM;
-
-	dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables",
-					ZPCI_PT_SIZE, ZPCI_PT_ALIGN,
-					0, NULL);
-	if (!dma_page_table_cache) {
-		kmem_cache_destroy(dma_region_table_cache);
-		return -ENOMEM;
-	}
-	return 0;
-}
-
-int __init zpci_dma_init(void)
-{
-	s390_iommu_aperture = (u64)virt_to_phys(high_memory);
-	if (!s390_iommu_aperture_factor)
-		s390_iommu_aperture = ULONG_MAX;
-	else
-		s390_iommu_aperture *= s390_iommu_aperture_factor;
-
-	return dma_alloc_cpu_table_caches();
-}
-
-void zpci_dma_exit(void)
-{
-	kmem_cache_destroy(dma_page_table_cache);
-	kmem_cache_destroy(dma_region_table_cache);
-}
-
-const struct dma_map_ops s390_pci_dma_ops = {
-	.alloc		= s390_dma_alloc,
-	.free		= s390_dma_free,
-	.map_sg		= s390_dma_map_sg,
-	.unmap_sg	= s390_dma_unmap_sg,
-	.map_page	= s390_dma_map_pages,
-	.unmap_page	= s390_dma_unmap_pages,
-	.mmap		= dma_common_mmap,
-	.get_sgtable	= dma_common_get_sgtable,
-	.alloc_pages	= dma_common_alloc_pages,
-	.free_pages	= dma_common_free_pages,
-	/* dma_supported is unconditionally true without a callback */
-};
-EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
-
-static int __init s390_iommu_setup(char *str)
-{
-	if (!strcmp(str, "strict"))
-		s390_iommu_strict = 1;
-	return 1;
-}
-
-__setup("s390_iommu=", s390_iommu_setup);
-
-static int __init s390_iommu_aperture_setup(char *str)
-{
-	if (kstrtou32(str, 10, &s390_iommu_aperture_factor))
-		s390_iommu_aperture_factor = 1;
-	return 1;
-}
-
-__setup("s390_iommu_aperture=", s390_iommu_aperture_setup);
diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
index 4ef5a6a1d618..4d9773ef9e0a 100644
--- a/arch/s390/pci/pci_event.c
+++ b/arch/s390/pci/pci_event.c
@@ -313,8 +313,6 @@ static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh)
 	/* Even though the device is already gone we still
 	 * need to free zPCI resources as part of the disable.
 	 */
-	if (zdev->dma_table)
-		zpci_dma_exit_device(zdev);
 	if (zdev_enabled(zdev))
 		zpci_disable_device(zdev);
 	zdev->state = ZPCI_FN_STATE_STANDBY;
diff --git a/arch/s390/pci/pci_sysfs.c b/arch/s390/pci/pci_sysfs.c
index cae280e5c047..8a7abac51816 100644
--- a/arch/s390/pci/pci_sysfs.c
+++ b/arch/s390/pci/pci_sysfs.c
@@ -56,6 +56,7 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct zpci_dev *zdev = to_zpci(pdev);
 	int ret = 0;
+	u8 status;
 
 	/* Can't use device_remove_self() here as that would lead us to lock
 	 * the pci_rescan_remove_lock while holding the device' kernfs lock.
@@ -82,12 +83,6 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
 	pci_lock_rescan_remove();
 	if (pci_dev_is_added(pdev)) {
 		pci_stop_and_remove_bus_device(pdev);
-		if (zdev->dma_table) {
-			ret = zpci_dma_exit_device(zdev);
-			if (ret)
-				goto out;
-		}
-
 		if (zdev_enabled(zdev)) {
 			ret = zpci_disable_device(zdev);
 			/*
@@ -105,14 +100,16 @@ static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
 		ret = zpci_enable_device(zdev);
 		if (ret)
 			goto out;
-		ret = zpci_dma_init_device(zdev);
-		if (ret) {
-			zpci_disable_device(zdev);
-			goto out;
+
+		if (zdev->dma_table) {
+			ret = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
+						 virt_to_phys(zdev->dma_table), &status);
+			if (ret)
+				zpci_disable_device(zdev);
 		}
-		pci_rescan_bus(zdev->zbus->bus);
 	}
 out:
+	pci_rescan_bus(zdev->zbus->bus);
 	pci_unlock_rescan_remove();
 	if (kn)
 		sysfs_unbreak_active_protection(kn);
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 2b12b583ef4b..5dc4c8e8a08c 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -91,7 +91,7 @@ config IOMMU_DEBUGFS
 choice
 	prompt "IOMMU default domain type"
 	depends on IOMMU_API
-	default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64
+	default IOMMU_DEFAULT_DMA_LAZY if X86 || IA64 || S390
 	default IOMMU_DEFAULT_DMA_STRICT
 	help
 	  Choose the type of IOMMU domain used to manage DMA API usage by
@@ -146,7 +146,7 @@ config OF_IOMMU
 
 # IOMMU-agnostic DMA-mapping layer
 config IOMMU_DMA
-	def_bool ARM64 || IA64 || X86
+	def_bool ARM64 || IA64 || X86 || S390
 	select DMA_OPS
 	select IOMMU_API
 	select IOMMU_IOVA
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 6723d77489e8..f6d6c60e5634 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -14,16 +14,300 @@
 #include <linux/rcupdate.h>
 #include <asm/pci_dma.h>
 
+#include "dma-iommu.h"
+
 static const struct iommu_ops s390_iommu_ops;
 
+static struct kmem_cache *dma_region_table_cache;
+static struct kmem_cache *dma_page_table_cache;
+
+static u64 s390_iommu_aperture;
+static u32 s390_iommu_aperture_factor = 1;
+
 struct s390_domain {
 	struct iommu_domain	domain;
 	struct list_head	devices;
+	struct zpci_iommu_ctrs	ctrs;
 	unsigned long		*dma_table;
 	spinlock_t		list_lock;
 	struct rcu_head		rcu;
 };
 
+static inline unsigned int calc_rtx(dma_addr_t ptr)
+{
+	return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
+}
+
+static inline unsigned int calc_sx(dma_addr_t ptr)
+{
+	return ((unsigned long)ptr >> ZPCI_ST_SHIFT) & ZPCI_INDEX_MASK;
+}
+
+static inline unsigned int calc_px(dma_addr_t ptr)
+{
+	return ((unsigned long)ptr >> PAGE_SHIFT) & ZPCI_PT_MASK;
+}
+
+static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa)
+{
+	*entry &= ZPCI_PTE_FLAG_MASK;
+	*entry |= (pfaa & ZPCI_PTE_ADDR_MASK);
+}
+
+static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto)
+{
+	*entry &= ZPCI_RTE_FLAG_MASK;
+	*entry |= (sto & ZPCI_RTE_ADDR_MASK);
+	*entry |= ZPCI_TABLE_TYPE_RTX;
+}
+
+static inline void set_st_pto(unsigned long *entry, phys_addr_t pto)
+{
+	*entry &= ZPCI_STE_FLAG_MASK;
+	*entry |= (pto & ZPCI_STE_ADDR_MASK);
+	*entry |= ZPCI_TABLE_TYPE_SX;
+}
+
+static inline void validate_rt_entry(unsigned long *entry)
+{
+	*entry &= ~ZPCI_TABLE_VALID_MASK;
+	*entry &= ~ZPCI_TABLE_OFFSET_MASK;
+	*entry |= ZPCI_TABLE_VALID;
+	*entry |= ZPCI_TABLE_LEN_RTX;
+}
+
+static inline void validate_st_entry(unsigned long *entry)
+{
+	*entry &= ~ZPCI_TABLE_VALID_MASK;
+	*entry |= ZPCI_TABLE_VALID;
+}
+
+static inline void invalidate_pt_entry(unsigned long *entry)
+{
+	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_INVALID);
+	*entry &= ~ZPCI_PTE_VALID_MASK;
+	*entry |= ZPCI_PTE_INVALID;
+}
+
+static inline void validate_pt_entry(unsigned long *entry)
+{
+	WARN_ON_ONCE((*entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID);
+	*entry &= ~ZPCI_PTE_VALID_MASK;
+	*entry |= ZPCI_PTE_VALID;
+}
+
+static inline void entry_set_protected(unsigned long *entry)
+{
+	*entry &= ~ZPCI_TABLE_PROT_MASK;
+	*entry |= ZPCI_TABLE_PROTECTED;
+}
+
+static inline void entry_clr_protected(unsigned long *entry)
+{
+	*entry &= ~ZPCI_TABLE_PROT_MASK;
+	*entry |= ZPCI_TABLE_UNPROTECTED;
+}
+
+static inline int reg_entry_isvalid(unsigned long entry)
+{
+	return (entry & ZPCI_TABLE_VALID_MASK) == ZPCI_TABLE_VALID;
+}
+
+static inline int pt_entry_isvalid(unsigned long entry)
+{
+	return (entry & ZPCI_PTE_VALID_MASK) == ZPCI_PTE_VALID;
+}
+
+static inline unsigned long *get_rt_sto(unsigned long entry)
+{
+	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
+		return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK);
+	else
+		return NULL;
+}
+
+static inline unsigned long *get_st_pto(unsigned long entry)
+{
+	if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
+		return phys_to_virt(entry & ZPCI_STE_ADDR_MASK);
+	else
+		return NULL;
+}
+
+static int __init dma_alloc_cpu_table_caches(void)
+{
+	dma_region_table_cache = kmem_cache_create("PCI_DMA_region_tables",
+						   ZPCI_TABLE_SIZE,
+						   ZPCI_TABLE_ALIGN,
+						   0, NULL);
+	if (!dma_region_table_cache)
+		return -ENOMEM;
+
+	dma_page_table_cache = kmem_cache_create("PCI_DMA_page_tables",
+						 ZPCI_PT_SIZE,
+						 ZPCI_PT_ALIGN,
+						 0, NULL);
+	if (!dma_page_table_cache) {
+		kmem_cache_destroy(dma_region_table_cache);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+static unsigned long *dma_alloc_cpu_table(gfp_t gfp)
+{
+	unsigned long *table, *entry;
+
+	table = kmem_cache_alloc(dma_region_table_cache, gfp);
+	if (!table)
+		return NULL;
+
+	for (entry = table; entry < table + ZPCI_TABLE_ENTRIES; entry++)
+		*entry = ZPCI_TABLE_INVALID;
+	return table;
+}
+
+static void dma_free_cpu_table(void *table)
+{
+	kmem_cache_free(dma_region_table_cache, table);
+}
+
+static void dma_free_page_table(void *table)
+{
+	kmem_cache_free(dma_page_table_cache, table);
+}
+
+static void dma_free_seg_table(unsigned long entry)
+{
+	unsigned long *sto = get_rt_sto(entry);
+	int sx;
+
+	for (sx = 0; sx < ZPCI_TABLE_ENTRIES; sx++)
+		if (reg_entry_isvalid(sto[sx]))
+			dma_free_page_table(get_st_pto(sto[sx]));
+
+	dma_free_cpu_table(sto);
+}
+
+static void dma_cleanup_tables(unsigned long *table)
+{
+	int rtx;
+
+	if (!table)
+		return;
+
+	for (rtx = 0; rtx < ZPCI_TABLE_ENTRIES; rtx++)
+		if (reg_entry_isvalid(table[rtx]))
+			dma_free_seg_table(table[rtx]);
+
+	dma_free_cpu_table(table);
+}
+
+static unsigned long *dma_alloc_page_table(gfp_t gfp)
+{
+	unsigned long *table, *entry;
+
+	table = kmem_cache_alloc(dma_page_table_cache, gfp);
+	if (!table)
+		return NULL;
+
+	for (entry = table; entry < table + ZPCI_PT_ENTRIES; entry++)
+		*entry = ZPCI_PTE_INVALID;
+	return table;
+}
+
+static unsigned long *dma_get_seg_table_origin(unsigned long *rtep, gfp_t gfp)
+{
+	unsigned long old_rte, rte;
+	unsigned long *sto;
+
+	rte = READ_ONCE(*rtep);
+	if (reg_entry_isvalid(rte)) {
+		sto = get_rt_sto(rte);
+	} else {
+		sto = dma_alloc_cpu_table(gfp);
+		if (!sto)
+			return NULL;
+
+		set_rt_sto(&rte, virt_to_phys(sto));
+		validate_rt_entry(&rte);
+		entry_clr_protected(&rte);
+
+		old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte);
+		if (old_rte != ZPCI_TABLE_INVALID) {
+			/* Someone else was faster, use theirs */
+			dma_free_cpu_table(sto);
+			sto = get_rt_sto(old_rte);
+		}
+	}
+	return sto;
+}
+
+static unsigned long *dma_get_page_table_origin(unsigned long *step, gfp_t gfp)
+{
+	unsigned long old_ste, ste;
+	unsigned long *pto;
+
+	ste = READ_ONCE(*step);
+	if (reg_entry_isvalid(ste)) {
+		pto = get_st_pto(ste);
+	} else {
+		pto = dma_alloc_page_table(gfp);
+		if (!pto)
+			return NULL;
+		set_st_pto(&ste, virt_to_phys(pto));
+		validate_st_entry(&ste);
+		entry_clr_protected(&ste);
+
+		old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste);
+		if (old_ste != ZPCI_TABLE_INVALID) {
+			/* Someone else was faster, use theirs */
+			dma_free_page_table(pto);
+			pto = get_st_pto(old_ste);
+		}
+	}
+	return pto;
+}
+
+static unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr, gfp_t gfp)
+{
+	unsigned long *sto, *pto;
+	unsigned int rtx, sx, px;
+
+	rtx = calc_rtx(dma_addr);
+	sto = dma_get_seg_table_origin(&rto[rtx], gfp);
+	if (!sto)
+		return NULL;
+
+	sx = calc_sx(dma_addr);
+	pto = dma_get_page_table_origin(&sto[sx], gfp);
+	if (!pto)
+		return NULL;
+
+	px = calc_px(dma_addr);
+	return &pto[px];
+}
+
+static void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags)
+{
+	unsigned long pte;
+
+	pte = READ_ONCE(*ptep);
+	if (flags & ZPCI_PTE_INVALID) {
+		invalidate_pt_entry(&pte);
+	} else {
+		set_pt_pfaa(&pte, page_addr);
+		validate_pt_entry(&pte);
+	}
+
+	if (flags & ZPCI_TABLE_PROTECTED)
+		entry_set_protected(&pte);
+	else
+		entry_clr_protected(&pte);
+
+	xchg(ptep, pte);
+}
+
 static struct s390_domain *to_s390_domain(struct iommu_domain *dom)
 {
 	return container_of(dom, struct s390_domain, domain);
@@ -34,6 +318,8 @@ static bool s390_iommu_capable(struct device *dev, enum iommu_cap cap)
 	switch (cap) {
 	case IOMMU_CAP_CACHE_COHERENCY:
 		return true;
+	case IOMMU_CAP_DEFERRED_FLUSH:
+		return true;
 	default:
 		return false;
 	}
@@ -43,9 +329,13 @@ static struct iommu_domain *s390_domain_alloc(unsigned domain_type)
 {
 	struct s390_domain *s390_domain;
 
-	if (domain_type != IOMMU_DOMAIN_UNMANAGED)
+	switch (domain_type) {
+	case IOMMU_DOMAIN_DMA:
+	case IOMMU_DOMAIN_UNMANAGED:
+		break;
+	default:
 		return NULL;
-
+	}
 	s390_domain = kzalloc(sizeof(*s390_domain), GFP_KERNEL);
 	if (!s390_domain)
 		return NULL;
@@ -84,14 +374,13 @@ static void s390_domain_free(struct iommu_domain *domain)
 	call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain);
 }
 
-static void __s390_iommu_detach_device(struct zpci_dev *zdev)
+static void s390_iommu_detach_device(struct iommu_domain *domain,
+				     struct device *dev)
 {
-	struct s390_domain *s390_domain = zdev->s390_domain;
+	struct s390_domain *s390_domain = to_s390_domain(domain);
+	struct zpci_dev *zdev = to_zpci_dev(dev);
 	unsigned long flags;
 
-	if (!s390_domain)
-		return;
-
 	spin_lock_irqsave(&s390_domain->list_lock, flags);
 	list_del_rcu(&zdev->iommu_list);
 	spin_unlock_irqrestore(&s390_domain->list_lock, flags);
@@ -118,9 +407,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
 		return -EINVAL;
 
 	if (zdev->s390_domain)
-		__s390_iommu_detach_device(zdev);
-	else if (zdev->dma_table)
-		zpci_dma_exit_device(zdev);
+		s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
 
 	cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
 				virt_to_phys(s390_domain->dma_table), &status);
@@ -130,7 +417,6 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
 	 */
 	if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
 		return -EIO;
-	zdev->dma_table = s390_domain->dma_table;
 
 	zdev->dma_table = s390_domain->dma_table;
 	zdev->s390_domain = s390_domain;
@@ -142,14 +428,6 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
 	return 0;
 }
 
-static void s390_iommu_set_platform_dma(struct device *dev)
-{
-	struct zpci_dev *zdev = to_zpci_dev(dev);
-
-	__s390_iommu_detach_device(zdev);
-	zpci_dma_init_device(zdev);
-}
-
 static void s390_iommu_get_resv_regions(struct device *dev,
 					struct list_head *list)
 {
@@ -202,7 +480,7 @@ static void s390_iommu_release_device(struct device *dev)
 	 * to the device, but keep it attached to other devices in the group.
 	 */
 	if (zdev)
-		__s390_iommu_detach_device(zdev);
+		s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
 }
 
 static int zpci_refresh_all(struct zpci_dev *zdev)
@@ -218,6 +496,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
+		atomic64_inc(&s390_domain->ctrs.global_rpcits);
 		zpci_refresh_all(zdev);
 	}
 	rcu_read_unlock();
@@ -236,6 +515,7 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain,
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
+		atomic64_inc(&s390_domain->ctrs.sync_rpcits);
 		zpci_refresh_trans((u64)zdev->fh << 32, gather->start,
 				   size);
 	}
@@ -253,6 +533,7 @@ static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
 	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
 		if (!zdev->tlb_refresh)
 			continue;
+		atomic64_inc(&s390_domain->ctrs.sync_map_rpcits);
 		ret = zpci_refresh_trans((u64)zdev->fh << 32,
 					 iova, size);
 		/*
@@ -347,16 +628,15 @@ static int s390_iommu_map_pages(struct iommu_domain *domain,
 	if (!IS_ALIGNED(iova | paddr, pgsize))
 		return -EINVAL;
 
-	if (!(prot & IOMMU_READ))
-		return -EINVAL;
-
 	if (!(prot & IOMMU_WRITE))
 		flags |= ZPCI_TABLE_PROTECTED;
 
 	rc = s390_iommu_validate_trans(s390_domain, paddr, iova,
-				       pgcount, flags, gfp);
-	if (!rc)
+				     pgcount, flags, gfp);
+	if (!rc) {
 		*mapped = size;
+		atomic64_add(pgcount, &s390_domain->ctrs.mapped_pages);
+	}
 
 	return rc;
 }
@@ -412,12 +692,27 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
 		return 0;
 
 	iommu_iotlb_gather_add_range(gather, iova, size);
+	atomic64_add(pgcount, &s390_domain->ctrs.unmapped_pages);
 
 	return size;
 }
 
+static void s390_iommu_probe_finalize(struct device *dev)
+{
+	iommu_dma_forcedac = true;
+	iommu_setup_dma_ops(dev, 0, U64_MAX);
+}
+
+struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
+{
+	if (!zdev || !zdev->s390_domain)
+		return NULL;
+	return &zdev->s390_domain->ctrs;
+}
+
 int zpci_init_iommu(struct zpci_dev *zdev)
 {
+	u64 aperture_size;
 	int rc = 0;
 
 	rc = iommu_device_sysfs_add(&zdev->iommu_dev, NULL, NULL,
@@ -429,6 +724,12 @@ int zpci_init_iommu(struct zpci_dev *zdev)
 	if (rc)
 		goto out_sysfs;
 
+	zdev->start_dma = PAGE_ALIGN(zdev->start_dma);
+	aperture_size = min3(s390_iommu_aperture,
+			     ZPCI_TABLE_SIZE_RT - zdev->start_dma,
+			     zdev->end_dma - zdev->start_dma + 1);
+	zdev->end_dma = zdev->start_dma + aperture_size - 1;
+
 	return 0;
 
 out_sysfs:
@@ -444,13 +745,51 @@ void zpci_destroy_iommu(struct zpci_dev *zdev)
 	iommu_device_sysfs_remove(&zdev->iommu_dev);
 }
 
+static int __init s390_iommu_setup(char *str)
+{
+	if (!strcmp(str, "strict")) {
+		pr_warn("s390_iommu=strict deprecated; use iommu.strict=1 instead\n");
+		iommu_set_dma_strict();
+	}
+	return 1;
+}
+
+__setup("s390_iommu=", s390_iommu_setup);
+
+static int __init s390_iommu_aperture_setup(char *str)
+{
+	if (kstrtou32(str, 10, &s390_iommu_aperture_factor))
+		s390_iommu_aperture_factor = 1;
+	return 1;
+}
+
+__setup("s390_iommu_aperture=", s390_iommu_aperture_setup);
+
+static int __init s390_iommu_init(void)
+{
+	int rc;
+
+	s390_iommu_aperture = (u64)virt_to_phys(high_memory);
+	if (!s390_iommu_aperture_factor)
+		s390_iommu_aperture = ULONG_MAX;
+	else
+		s390_iommu_aperture *= s390_iommu_aperture_factor;
+
+	rc = dma_alloc_cpu_table_caches();
+	if (rc)
+		return rc;
+
+	return rc;
+}
+subsys_initcall(s390_iommu_init);
+
 static const struct iommu_ops s390_iommu_ops = {
 	.capable = s390_iommu_capable,
 	.domain_alloc = s390_domain_alloc,
 	.probe_device = s390_iommu_probe_device,
+	.probe_finalize = s390_iommu_probe_finalize,
 	.release_device = s390_iommu_release_device,
 	.device_group = generic_device_group,
-	.set_platform_dma_ops = s390_iommu_set_platform_dma,
 	.pgsize_bitmap = SZ_4K,
 	.get_resv_regions = s390_iommu_get_resv_regions,
 	.default_domain_ops = &(const struct iommu_domain_ops) {

-- 
2.39.2



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (2 preceding siblings ...)
  2023-08-25 10:11 ` [PATCH v12 3/6] s390/pci: Use dma-iommu layer Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-08-25 18:23   ` Matthew Rosato
  2023-08-25 10:11 ` [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs Niklas Schnelle
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

ISM devices are virtual PCI devices used for cross-LPAR communication.
Unlike real PCI devices, ISM devices do not use the hardware IOMMU but
inspect the IOMMU translation tables directly on IOTLB flush (s390
RPCIT instruction).

ISM devices keep their DMA allocations static and only very rarely
unmap DMA at all. For each IOTLB flush that occurs after an unmap, the
ISM devices will however inspect the area of the IOVA space indicated
by the flush. This means that for the global IOTLB flushes used by the
flush queue mechanism the entire IOVA space would be inspected. In
principle this would be fine, albeit potentially unnecessarily slow;
it turns out, however, that ISM devices are sensitive to seeing IOVA
addresses that are currently in use in the IOVA range being flushed.
Seeing such in-use IOVA addresses will cause the ISM device to enter
an error state and become unusable.

Fix this by claiming IOMMU_CAP_DEFERRED_FLUSH only for non-ISM devices.
This makes sure IOTLB flushes only cover IOVAs that have been unmapped
and also restricts the range of the IOTLB flush, potentially reducing
latency spikes.
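
For context, dma-iommu only sets up a flush queue when the driver
claims this capability for the device; otherwise the domain falls back
to strict invalidation, roughly as in the existing check in
iommu_dma_init_domain():

  /* If the FQ fails we can simply fall back to strict mode */
  if (domain->type == IOMMU_DOMAIN_DMA_FQ &&
      (!device_iommu_capable(dev, IOMMU_CAP_DEFERRED_FLUSH) ||
       iommu_dma_init_fq(domain)))
          domain->type = IOMMU_DOMAIN_DMA;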

Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 drivers/iommu/s390-iommu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index f6d6c60e5634..8310180a102c 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -315,11 +315,13 @@ static struct s390_domain *to_s390_domain(struct iommu_domain *dom)
 
 static bool s390_iommu_capable(struct device *dev, enum iommu_cap cap)
 {
+	struct zpci_dev *zdev = to_zpci_dev(dev);
+
 	switch (cap) {
 	case IOMMU_CAP_CACHE_COHERENCY:
 		return true;
 	case IOMMU_CAP_DEFERRED_FLUSH:
-		return true;
+		return zdev->pft != PCI_FUNC_TYPE_ISM;
 	default:
 		return false;
 	}

-- 
2.39.2



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (3 preceding siblings ...)
  2023-08-25 10:11 ` [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-09-11 12:06   ` Niklas Schnelle
  2023-08-25 10:11 ` [PATCH v12 6/6] iommu/dma: Use a large flush queue and timeout for shadow_on_flush Niklas Schnelle
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

In some virtualized environments, including s390 paged memory guests,
IOTLB flushes are used to update IOMMU shadow tables. Due to this, they
are much more expensive than in typical bare metal environments or
non-paged s390 guests. In addition, they may parallelize poorly in
virtualized environments. This changes the trade-off for flushing IOVAs
such that minimizing the number of IOTLB flushes trumps any benefit of
cheaper queuing operations or increased parallelism.

In this scenario per-CPU flush queues pose several problems. Firstly,
per-CPU memory is often quite limited, prohibiting larger queues.
Secondly, collecting IOVAs per-CPU but flushing via a global timeout
reduces the number of IOVAs flushed per timeout, especially on s390
where PCI interrupts may not be bound to a specific CPU.

Let's introduce a single flush queue mode that reuses the same queue
logic but only allocates a single global queue. This mode is selected by
dma-iommu if a newly introduced .shadow_on_flush flag is set in struct
dev_iommu. As a first user, the s390 IOMMU driver sets this flag during
probe_device. With the unchanged small FQ size and timeouts this setting
is worse than per-CPU queues, but a follow-up patch will make the FQ size
and timeout variable. Together this allows the common IOVA flushing code
to more closely resemble the global flush behavior used on s390's
previous internal DMA API implementation.
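
Sketched out, the opt-in and the resulting queue selection look roughly
like this (details in the diff below):

  /* s390 IOMMU driver, in probe_device(): request the single queue */
  if (zdev->tlb_refresh)
          dev->iommu->shadow_on_flush = 1;

  /* dma-iommu, when initialising the per-domain options */
  if (dev->iommu->shadow_on_flush)
          options->qt = IOMMU_DMA_OPTS_SINGLE_QUEUE;
  else
          options->qt = IOMMU_DMA_OPTS_PER_CPU_QUEUE;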

Link: https://lore.kernel.org/all/9a466109-01c5-96b0-bf03-304123f435ee@arm.com/
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 drivers/iommu/dma-iommu.c  | 168 ++++++++++++++++++++++++++++++++++-----------
 drivers/iommu/s390-iommu.c |   3 +
 include/linux/iommu.h      |   2 +
 3 files changed, 134 insertions(+), 39 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e57724163835..09660b0af130 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -43,14 +43,26 @@ enum iommu_dma_cookie_type {
 	IOMMU_DMA_MSI_COOKIE,
 };
 
+enum iommu_dma_queue_type {
+	IOMMU_DMA_OPTS_PER_CPU_QUEUE,
+	IOMMU_DMA_OPTS_SINGLE_QUEUE,
+};
+
+struct iommu_dma_options {
+	enum iommu_dma_queue_type qt;
+};
+
 struct iommu_dma_cookie {
 	enum iommu_dma_cookie_type	type;
 	union {
 		/* Full allocator for IOMMU_DMA_IOVA_COOKIE */
 		struct {
 			struct iova_domain	iovad;
-
-			struct iova_fq __percpu *fq;	/* Flush queue */
+			/* Flush queue */
+			union {
+				struct iova_fq	*single_fq;
+				struct iova_fq	__percpu *percpu_fq;
+			};
 			/* Number of TLB flushes that have been started */
 			atomic64_t		fq_flush_start_cnt;
 			/* Number of TLB flushes that have been finished */
@@ -67,6 +79,8 @@ struct iommu_dma_cookie {
 
 	/* Domain for flush queue callback; NULL if flush queue not in use */
 	struct iommu_domain		*fq_domain;
+	/* Options for dma-iommu use */
+	struct iommu_dma_options	options;
 	struct mutex			mutex;
 };
 
@@ -124,7 +138,7 @@ static inline unsigned int fq_ring_add(struct iova_fq *fq)
 	return idx;
 }
 
-static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
+static void fq_ring_free_locked(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
 {
 	u64 counter = atomic64_read(&cookie->fq_flush_finish_cnt);
 	unsigned int idx;
@@ -145,6 +159,15 @@ static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
 	}
 }
 
+static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&fq->lock, flags);
+	fq_ring_free_locked(cookie, fq);
+	spin_unlock_irqrestore(&fq->lock, flags);
+}
+
 static void fq_flush_iotlb(struct iommu_dma_cookie *cookie)
 {
 	atomic64_inc(&cookie->fq_flush_start_cnt);
@@ -160,14 +183,11 @@ static void fq_flush_timeout(struct timer_list *t)
 	atomic_set(&cookie->fq_timer_on, 0);
 	fq_flush_iotlb(cookie);
 
-	for_each_possible_cpu(cpu) {
-		unsigned long flags;
-		struct iova_fq *fq;
-
-		fq = per_cpu_ptr(cookie->fq, cpu);
-		spin_lock_irqsave(&fq->lock, flags);
-		fq_ring_free(cookie, fq);
-		spin_unlock_irqrestore(&fq->lock, flags);
+	if (cookie->options.qt == IOMMU_DMA_OPTS_SINGLE_QUEUE) {
+		fq_ring_free(cookie, cookie->single_fq);
+	} else {
+		for_each_possible_cpu(cpu)
+			fq_ring_free(cookie, per_cpu_ptr(cookie->percpu_fq, cpu));
 	}
 }
 
@@ -188,7 +208,11 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 	 */
 	smp_mb();
 
-	fq = raw_cpu_ptr(cookie->fq);
+	if (cookie->options.qt == IOMMU_DMA_OPTS_SINGLE_QUEUE)
+		fq = cookie->single_fq;
+	else
+		fq = raw_cpu_ptr(cookie->percpu_fq);
+
 	spin_lock_irqsave(&fq->lock, flags);
 
 	/*
@@ -196,11 +220,11 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 	 * flushed out on another CPU. This makes the fq_full() check below less
 	 * likely to be true.
 	 */
-	fq_ring_free(cookie, fq);
+	fq_ring_free_locked(cookie, fq);
 
 	if (fq_full(fq)) {
 		fq_flush_iotlb(cookie);
-		fq_ring_free(cookie, fq);
+		fq_ring_free_locked(cookie, fq);
 	}
 
 	idx = fq_ring_add(fq);
@@ -219,31 +243,88 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
 }
 
-static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
+static void iommu_dma_free_fq_single(struct iova_fq *fq)
+{
+	int idx;
+
+	fq_ring_for_each(idx, fq)
+		put_pages_list(&fq->entries[idx].freelist);
+	vfree(fq);
+}
+
+static void iommu_dma_free_fq_percpu(struct iova_fq __percpu *percpu_fq)
 {
 	int cpu, idx;
 
-	if (!cookie->fq)
-		return;
-
-	del_timer_sync(&cookie->fq_timer);
 	/* The IOVAs will be torn down separately, so just free our queued pages */
 	for_each_possible_cpu(cpu) {
-		struct iova_fq *fq = per_cpu_ptr(cookie->fq, cpu);
+		struct iova_fq *fq = per_cpu_ptr(percpu_fq, cpu);
 
 		fq_ring_for_each(idx, fq)
 			put_pages_list(&fq->entries[idx].freelist);
 	}
 
-	free_percpu(cookie->fq);
+	free_percpu(percpu_fq);
+}
+
+static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
+{
+	if (!cookie->fq_domain)
+		return;
+
+	del_timer_sync(&cookie->fq_timer);
+	if (cookie->options.qt == IOMMU_DMA_OPTS_SINGLE_QUEUE)
+		iommu_dma_free_fq_single(cookie->single_fq);
+	else
+		iommu_dma_free_fq_percpu(cookie->percpu_fq);
+}
+
+static void iommu_dma_init_one_fq(struct iova_fq *fq)
+{
+	int i;
+
+	fq->head = 0;
+	fq->tail = 0;
+
+	spin_lock_init(&fq->lock);
+
+	for (i = 0; i < IOVA_FQ_SIZE; i++)
+		INIT_LIST_HEAD(&fq->entries[i].freelist);
+}
+
+static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
+{
+	struct iova_fq *queue;
+
+	queue = vmalloc(sizeof(*queue));
+	if (!queue)
+		return -ENOMEM;
+	iommu_dma_init_one_fq(queue);
+	cookie->single_fq = queue;
+
+	return 0;
+}
+
+static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
+{
+	struct iova_fq __percpu *queue;
+	int cpu;
+
+	queue = alloc_percpu(struct iova_fq);
+	if (!queue)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu)
+		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu));
+	cookie->percpu_fq = queue;
+	return 0;
 }
 
 /* sysfs updates are serialised by the mutex of the group owning @domain */
 int iommu_dma_init_fq(struct iommu_domain *domain)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
-	struct iova_fq __percpu *queue;
-	int i, cpu;
+	int rc;
 
 	if (cookie->fq_domain)
 		return 0;
@@ -251,26 +332,16 @@ int iommu_dma_init_fq(struct iommu_domain *domain)
 	atomic64_set(&cookie->fq_flush_start_cnt,  0);
 	atomic64_set(&cookie->fq_flush_finish_cnt, 0);
 
-	queue = alloc_percpu(struct iova_fq);
-	if (!queue) {
+	if (cookie->options.qt == IOMMU_DMA_OPTS_SINGLE_QUEUE)
+		rc = iommu_dma_init_fq_single(cookie);
+	else
+		rc = iommu_dma_init_fq_percpu(cookie);
+
+	if (rc) {
 		pr_warn("iova flush queue initialization failed\n");
 		return -ENOMEM;
 	}
 
-	for_each_possible_cpu(cpu) {
-		struct iova_fq *fq = per_cpu_ptr(queue, cpu);
-
-		fq->head = 0;
-		fq->tail = 0;
-
-		spin_lock_init(&fq->lock);
-
-		for (i = 0; i < IOVA_FQ_SIZE; i++)
-			INIT_LIST_HEAD(&fq->entries[i].freelist);
-	}
-
-	cookie->fq = queue;
-
 	timer_setup(&cookie->fq_timer, fq_flush_timeout, 0);
 	atomic_set(&cookie->fq_timer_on, 0);
 	/*
@@ -554,6 +625,23 @@ static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	return false;
 }
 
+/**
+ * iommu_dma_init_options - Initialize dma-iommu options
+ * @options: The options to be initialized
+ * @dev: Device the options are set for
+ *
+ * This allows tuning dma-iommu specific to device properties
+ */
+static void iommu_dma_init_options(struct iommu_dma_options *options,
+				   struct device *dev)
+{
+	/* Shadowing IOTLB flushes do better with a single queue */
+	if (dev->iommu->shadow_on_flush)
+		options->qt = IOMMU_DMA_OPTS_SINGLE_QUEUE;
+	else
+		options->qt = IOMMU_DMA_OPTS_PER_CPU_QUEUE;
+}
+
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
@@ -614,6 +702,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	if (ret)
 		goto done_unlock;
 
+	iommu_dma_init_options(&cookie->options, dev);
+
 	/* If the FQ fails we can simply fall back to strict mode */
 	if (domain->type == IOMMU_DOMAIN_DMA_FQ &&
 	    (!device_iommu_capable(dev, IOMMU_CAP_DEFERRED_FLUSH) || iommu_dma_init_fq(domain)))
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 8310180a102c..14e0c0b72630 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -470,6 +470,9 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
 	if (zdev->end_dma > ZPCI_TABLE_SIZE_RT - 1)
 		zdev->end_dma = ZPCI_TABLE_SIZE_RT - 1;
 
+	if (zdev->tlb_refresh)
+		dev->iommu->shadow_on_flush = 1;
+
 	return &zdev->iommu_dev;
 }
 
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 182cc4c71e62..c3687e066ed7 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -409,6 +409,7 @@ struct iommu_fault_param {
  * @priv:	 IOMMU Driver private data
  * @max_pasids:  number of PASIDs this device can consume
  * @attach_deferred: the dma domain attachment is deferred
+ * @shadow_on_flush: IOTLB flushes are used to sync shadow tables
  *
  * TODO: migrate other per device data pointers under iommu_dev_data, e.g.
  *	struct iommu_group	*iommu_group;
@@ -422,6 +423,7 @@ struct dev_iommu {
 	void				*priv;
 	u32				max_pasids;
 	u32				attach_deferred:1;
+	u32				shadow_on_flush:1;
 };
 
 int iommu_device_register(struct iommu_device *iommu,

-- 
2.39.2



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v12 6/6] iommu/dma: Use a large flush queue and timeout for shadow_on_flush
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (4 preceding siblings ...)
  2023-08-25 10:11 ` [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs Niklas Schnelle
@ 2023-08-25 10:11 ` Niklas Schnelle
  2023-08-25 18:26 ` [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Matthew Rosato
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-08-25 10:11 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Niklas Schnelle, Jonathan Corbet, linux-s390,
	netdev, linux-kernel, iommu, asahi, linux-arm-kernel,
	linux-arm-msm, linux-mediatek, linux-sunxi, linux-tegra,
	linux-doc

Flush queues currently use a fixed compile-time size of 256 entries.
This being a power of 2 allows the compiler to use shift and mask
instead of more expensive modulo operations. With per-CPU flush queues,
larger queue sizes would hit per-CPU allocation limits; with a single
flush queue these limits do not apply. Also, since single queues are
particularly suitable for virtualized environments with expensive IOTLB
flushes, such environments especially benefit from larger queues and
thus fewer flushes.

To this end re-order struct iova_fq so we can use a dynamic array and
introduce the flush queue size and timeouts as new options in the
iommu_dma_options struct. So as not to lose the shift and mask
optimization, use a power of 2 for the length and use explicit shift and
mask instead of letting the compiler optimize this.

A large queue size and a 1 second timeout are then used for the
shadow_on_flush case set by s390 paged memory guests. This brings
performance on par with the previous s390-specific DMA API
implementation.
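
For illustration, with a power-of-2 queue length the ring index
arithmetic reduces to masking with the precomputed length minus one:

  fq->mod_mask = fq_size - 1;            /* fq_size is a power of 2 */
  fq->tail = (idx + 1) & fq->mod_mask;   /* was: (idx + 1) % IOVA_FQ_SIZE */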

Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 drivers/iommu/dma-iommu.c | 50 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 09660b0af130..9d9a5aefd53d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,8 @@ enum iommu_dma_queue_type {
 
 struct iommu_dma_options {
 	enum iommu_dma_queue_type qt;
+	size_t		fq_size;
+	unsigned int	fq_timeout;
 };
 
 struct iommu_dma_cookie {
@@ -98,10 +100,12 @@ static int __init iommu_dma_forcedac_setup(char *str)
 early_param("iommu.forcedac", iommu_dma_forcedac_setup);
 
 /* Number of entries per flush queue */
-#define IOVA_FQ_SIZE	256
+#define IOVA_DEFAULT_FQ_SIZE	256
+#define IOVA_SINGLE_FQ_SIZE	32768
 
 /* Timeout (in ms) after which entries are flushed from the queue */
-#define IOVA_FQ_TIMEOUT	10
+#define IOVA_DEFAULT_FQ_TIMEOUT	10
+#define IOVA_SINGLE_FQ_TIMEOUT	1000
 
 /* Flush queue entry for deferred flushing */
 struct iova_fq_entry {
@@ -113,18 +117,19 @@ struct iova_fq_entry {
 
 /* Per-CPU flush queue structure */
 struct iova_fq {
-	struct iova_fq_entry entries[IOVA_FQ_SIZE];
-	unsigned int head, tail;
 	spinlock_t lock;
+	unsigned int head, tail;
+	unsigned int mod_mask;
+	struct iova_fq_entry entries[];
 };
 
 #define fq_ring_for_each(i, fq) \
-	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
+	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) & (fq)->mod_mask)
 
 static inline bool fq_full(struct iova_fq *fq)
 {
 	assert_spin_locked(&fq->lock);
-	return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head);
+	return (((fq->tail + 1) & fq->mod_mask) == fq->head);
 }
 
 static inline unsigned int fq_ring_add(struct iova_fq *fq)
@@ -133,7 +138,7 @@ static inline unsigned int fq_ring_add(struct iova_fq *fq)
 
 	assert_spin_locked(&fq->lock);
 
-	fq->tail = (idx + 1) % IOVA_FQ_SIZE;
+	fq->tail = (idx + 1) & fq->mod_mask;
 
 	return idx;
 }
@@ -155,7 +160,7 @@ static void fq_ring_free_locked(struct iommu_dma_cookie *cookie, struct iova_fq
 			       fq->entries[idx].iova_pfn,
 			       fq->entries[idx].pages);
 
-		fq->head = (fq->head + 1) % IOVA_FQ_SIZE;
+		fq->head = (fq->head + 1) & fq->mod_mask;
 	}
 }
 
@@ -240,7 +245,7 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
 	if (!atomic_read(&cookie->fq_timer_on) &&
 	    !atomic_xchg(&cookie->fq_timer_on, 1))
 		mod_timer(&cookie->fq_timer,
-			  jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+			  jiffies + msecs_to_jiffies(cookie->options.fq_timeout));
 }
 
 static void iommu_dma_free_fq_single(struct iova_fq *fq)
@@ -279,27 +284,29 @@ static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
 		iommu_dma_free_fq_percpu(cookie->percpu_fq);
 }
 
-static void iommu_dma_init_one_fq(struct iova_fq *fq)
+static void iommu_dma_init_one_fq(struct iova_fq *fq, size_t fq_size)
 {
 	int i;
 
 	fq->head = 0;
 	fq->tail = 0;
+	fq->mod_mask = fq_size - 1;
 
 	spin_lock_init(&fq->lock);
 
-	for (i = 0; i < IOVA_FQ_SIZE; i++)
+	for (i = 0; i < fq_size; i++)
 		INIT_LIST_HEAD(&fq->entries[i].freelist);
 }
 
 static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
 {
+	size_t fq_size = cookie->options.fq_size;
 	struct iova_fq *queue;
 
-	queue = vmalloc(sizeof(*queue));
+	queue = vmalloc(struct_size(queue, entries, fq_size));
 	if (!queue)
 		return -ENOMEM;
-	iommu_dma_init_one_fq(queue);
+	iommu_dma_init_one_fq(queue, fq_size);
 	cookie->single_fq = queue;
 
 	return 0;
@@ -307,15 +314,17 @@ static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
 
 static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
 {
+	size_t fq_size = cookie->options.fq_size;
 	struct iova_fq __percpu *queue;
 	int cpu;
 
-	queue = alloc_percpu(struct iova_fq);
+	queue = __alloc_percpu(struct_size(queue, entries, fq_size),
+			       __alignof__(*queue));
 	if (!queue)
 		return -ENOMEM;
 
 	for_each_possible_cpu(cpu)
-		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu));
+		iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu), fq_size);
 	cookie->percpu_fq = queue;
 	return 0;
 }
@@ -635,11 +644,16 @@ static bool dev_use_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 static void iommu_dma_init_options(struct iommu_dma_options *options,
 				   struct device *dev)
 {
-	/* Shadowing IOTLB flushes do better with a single queue */
-	if (dev->iommu->shadow_on_flush)
+	/* Shadowing IOTLB flushes do better with a single large queue */
+	if (dev->iommu->shadow_on_flush) {
 		options->qt = IOMMU_DMA_OPTS_SINGLE_QUEUE;
-	else
+		options->fq_timeout = IOVA_SINGLE_FQ_TIMEOUT;
+		options->fq_size = IOVA_SINGLE_FQ_SIZE;
+	} else {
 		options->qt = IOMMU_DMA_OPTS_PER_CPU_QUEUE;
+		options->fq_size = IOVA_DEFAULT_FQ_SIZE;
+		options->fq_timeout = IOVA_DEFAULT_FQ_TIMEOUT;
+	}
 }
 
 /**

-- 
2.39.2



^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices
  2023-08-25 10:11 ` [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices Niklas Schnelle
@ 2023-08-25 18:23   ` Matthew Rosato
  0 siblings, 0 replies; 24+ messages in thread
From: Matthew Rosato @ 2023-08-25 18:23 UTC (permalink / raw)
  To: Niklas Schnelle, Joerg Roedel, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On 8/25/23 6:11 AM, Niklas Schnelle wrote:
> ISM devices are virtual PCI devices used for cross-LPAR communication.
> Unlike real PCI devices ISM devices do not use the hardware IOMMU but
> inspects IOMMU translation tables directly on IOTLB flush (s390 RPCIT
> instruction).
> 
> ISM devices keep their DMA allocations static and only very rarely DMA
> unmap at all. For each IOTLB flush that occurs after unmap the ISM
> devices will however inspect the area of the IOVA space indicated by the
> flush. This means that for the global IOTLB flushes used by the flush
> queue mechanism the entire IOVA space would be inspected. In principle
> this would be fine, albeit potentially unnecessarily slow, it turns out
> however that ISM devices are sensitive to seeing IOVA addresses that are
> currently in use in the IOVA range being flushed. Seeing such in-use
> IOVA addresses will cause the ISM device to enter an error state and
> become unusable.
> 
> Fix this by claiming IOMMU_CAP_DEFERRED_FLUSH only for non-ISM devices.
> This makes sure IOTLB flushes only cover IOVAs that have been unmapped
> and also restricts the range of the IOTLB flush potentially reducing
> latency spikes.
> 
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>

Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>

> ---
>  drivers/iommu/s390-iommu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> index f6d6c60e5634..8310180a102c 100644
> --- a/drivers/iommu/s390-iommu.c
> +++ b/drivers/iommu/s390-iommu.c
> @@ -315,11 +315,13 @@ static struct s390_domain *to_s390_domain(struct iommu_domain *dom)
>  
>  static bool s390_iommu_capable(struct device *dev, enum iommu_cap cap)
>  {
> +	struct zpci_dev *zdev = to_zpci_dev(dev);
> +
>  	switch (cap) {
>  	case IOMMU_CAP_CACHE_COHERENCY:
>  		return true;
>  	case IOMMU_CAP_DEFERRED_FLUSH:
> -		return true;
> +		return zdev->pft != PCI_FUNC_TYPE_ISM;
>  	default:
>  		return false;
>  	}
> 



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (5 preceding siblings ...)
  2023-08-25 10:11 ` [PATCH v12 6/6] iommu/dma: Use a large flush queue and timeout for shadow_on_flush Niklas Schnelle
@ 2023-08-25 18:26 ` Matthew Rosato
  2023-09-05 16:09   ` Robin Murphy
  2023-09-25  9:56 ` Joerg Roedel
  2023-09-26 15:04 ` Joerg Roedel
  8 siblings, 1 reply; 24+ messages in thread
From: Matthew Rosato @ 2023-08-25 18:26 UTC (permalink / raw)
  To: Niklas Schnelle, Joerg Roedel, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On 8/25/23 6:11 AM, Niklas Schnelle wrote:
> Hi All,
> 
> This patch series converts s390's PCI support from its platform specific DMA
> API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
> The conversion itself is done in patches 3-4 with patch 2 providing the final
> necessary IOMMU driver improvement to handle s390's special IOTLB flush
> out-of-resource indication in virtualized environments. The conversion
> itself only touches the s390 IOMMU driver and s390 arch code moving over
> remaining functions from the s390 DMA API implementation. No changes to
> common code are necessary.
> 

I also picked up this latest version and ran various tests with ISM, mlx5 and some NVMe drives.  FWIW, I have been including versions of this series in my s390 dev environments for a number of months now and have also been building my s390 pci iommufd nested translation series on top of this, so it's seen quite a bit of testing from me at least.

So as far as I'm concerned anyway, this series is ready for -next (after the merge window). 

Thanks,
Matt



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-08-25 18:26 ` [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Matthew Rosato
@ 2023-09-05 16:09   ` Robin Murphy
  0 siblings, 0 replies; 24+ messages in thread
From: Robin Murphy @ 2023-09-05 16:09 UTC (permalink / raw)
  To: Matthew Rosato, Niklas Schnelle, Joerg Roedel, Will Deacon,
	Wenjia Zhang, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On 2023-08-25 19:26, Matthew Rosato wrote:
> On 8/25/23 6:11 AM, Niklas Schnelle wrote:
>> Hi All,
>>
>> This patch series converts s390's PCI support from its platform specific DMA
>> API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
>> The conversion itself is done in patches 3-4 with patch 2 providing the final
>> necessary IOMMU driver improvement to handle s390's special IOTLB flush
>> out-of-resource indication in virtualized environments. The conversion
>> itself only touches the s390 IOMMU driver and s390 arch code moving over
>> remaining functions from the s390 DMA API implementation. No changes to
>> common code are necessary.
>>
> 
> I also picked up this latest version and ran various tests with ISM, mlx5 and some NVMe drives.  FWIW, I have been including versions of this series in my s390 dev environments for a number of months now and have also been building my s390 pci iommufd nested translation series on top of this, so it's seen quite a bit of testing from me at least.
> 
> So as far as I'm concerned anyway, this series is ready for -next (after the merge window).

Agreed; I'll trust your reviews for the s390-specific parts, so indeed 
it looks like this should have all it needs now and is ready for a nice 
long soak in -next once Joerg opens the tree for 6.7 material.

Cheers,
Robin.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs
  2023-08-25 10:11 ` [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs Niklas Schnelle
@ 2023-09-11 12:06   ` Niklas Schnelle
  0 siblings, 0 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-11 12:06 UTC (permalink / raw)
  To: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Jason Gunthorpe
  Cc: Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On Fri, 2023-08-25 at 12:11 +0200, Niklas Schnelle wrote:
> In some virtualized environments, including s390 paged memory guests,
> IOTLB flushes are used to update IOMMU shadow tables. Due to this, they
> are much more expensive than in typical bare metal environments or
> non-paged s390 guests. In addition they may parallelize poorly in
> virtualized environments. This changes the trade off for flushing IOVAs
> such that minimizing the number of IOTLB flushes trumps any benefit of
> cheaper queuing operations or increased paralellism.
> 
> In this scenario per-CPU flush queues pose several problems. Firstly
> per-CPU memory is often quite limited prohibiting larger queues.
> Secondly collecting IOVAs per-CPU but flushing via a global timeout
> reduces the number of IOVAs flushed for each timeout especially on s390
> where PCI interrupts may not be bound to a specific CPU.
> 
> Let's introduce a single flush queue mode that reuses the same queue
> logic but only allocates a single global queue. This mode is selected by
> dma-iommu if a newly introduced .shadow_on_flush flag is set in struct
> dev_iommu. As a first user the s390 IOMMU driver sets this flag during
> probe_device. With the unchanged small FQ size and timeouts this setting
> is worse than per-CPU queues but a follow up patch will make the FQ size
> and timeout variable. Together this allows the common IOVA flushing code
> to more closely resemble the global flush behavior used on s390's
> previous internal DMA API implementation.
> 
> Link: https://lore.kernel.org/all/9a466109-01c5-96b0-bf03-304123f435ee@arm.com/
> Acked-by: Robin Murphy <robin.murphy@arm.com>
> Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> #s390
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> ---
>  drivers/iommu/dma-iommu.c  | 168 ++++++++++++++++++++++++++++++++++-----------
>  drivers/iommu/s390-iommu.c |   3 +
>  include/linux/iommu.h      |   2 +
>  3 files changed, 134 insertions(+), 39 deletions(-)
> 
> 
---8<---
>  
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 182cc4c71e62..c3687e066ed7 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -409,6 +409,7 @@ struct iommu_fault_param {
>   * @priv:	 IOMMU Driver private data
>   * @max_pasids:  number of PASIDs this device can consume
>   * @attach_deferred: the dma domain attachment is deferred
> + * @shadow_on_flush: IOTLB flushes are used to sync shadow tables
>   *
>   * TODO: migrate other per device data pointers under iommu_dev_data, e.g.
>   *	struct iommu_group	*iommu_group;
> @@ -422,6 +423,7 @@ struct dev_iommu {
>  	void				*priv;
>  	u32				max_pasids;
>  	u32				attach_deferred:1;
> +	u32				shadow_on_flush:1;

This causes a merge conflict with a48ce36e2786f ("iommu: Prevent
RESV_DIRECT devices from blocking domains"). The resolution is trivial
though, in that shadow_on_flush:1 can just be added after (or before)
require_direct:1. @Joro do you want me to send a version with this
resolution regardless, or will you resolve this when applying?
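
I.e. the merged bit-fields in struct dev_iommu would simply end up as:

  	u32				attach_deferred:1;
  	u32				require_direct:1;
  	u32				shadow_on_flush:1;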

>  };
>  
>  int iommu_device_register(struct iommu_device *iommu,
> 



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (6 preceding siblings ...)
  2023-08-25 18:26 ` [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Matthew Rosato
@ 2023-09-25  9:56 ` Joerg Roedel
  2023-09-26 15:04 ` Joerg Roedel
  8 siblings, 0 replies; 24+ messages in thread
From: Joerg Roedel @ 2023-09-25  9:56 UTC (permalink / raw)
  To: Niklas Schnelle
  Cc: Matthew Rosato, Will Deacon, Wenjia Zhang, Robin Murphy,
	Jason Gunthorpe, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>       s390/pci: prepare is_passed_through() for dma-iommu
>       s390/pci: Use dma-iommu layer
>       iommu/s390: Disable deferred flush for ISM devices
>       iommu/dma: Allow a single FQ in addition to per-CPU FQs
>       iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Applied, thanks.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
                   ` (7 preceding siblings ...)
  2023-09-25  9:56 ` Joerg Roedel
@ 2023-09-26 15:04 ` Joerg Roedel
  2023-09-26 16:08   ` Jason Gunthorpe
  8 siblings, 1 reply; 24+ messages in thread
From: Joerg Roedel @ 2023-09-26 15:04 UTC (permalink / raw)
  To: Niklas Schnelle, Jason Gunthorpe
  Cc: Matthew Rosato, Will Deacon, Wenjia Zhang, Robin Murphy,
	Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

Hi Niklas,

On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> Niklas Schnelle (6):
>       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>       s390/pci: prepare is_passed_through() for dma-iommu
>       s390/pci: Use dma-iommu layer
>       iommu/s390: Disable deferred flush for ISM devices
>       iommu/dma: Allow a single FQ in addition to per-CPU FQs
>       iommu/dma: Use a large flush queue and timeout for shadow_on_flush

Turned out this series has non-trivial conflicts with Jasons
default-domain work so I had to remove it from the IOMMU tree for now.
Can you please rebase it to the latest iommu/core branch and re-send? I
will take it into the tree again then.

Thanks,

	Joerg


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-26 15:04 ` Joerg Roedel
@ 2023-09-26 16:08   ` Jason Gunthorpe
  2023-09-27  8:55     ` Niklas Schnelle
  0 siblings, 1 reply; 24+ messages in thread
From: Jason Gunthorpe @ 2023-09-26 16:08 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Niklas Schnelle, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> Hi Niklas,
> 
> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > Niklas Schnelle (6):
> >       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> >       s390/pci: prepare is_passed_through() for dma-iommu
> >       s390/pci: Use dma-iommu layer
> >       iommu/s390: Disable deferred flush for ISM devices
> >       iommu/dma: Allow a single FQ in addition to per-CPU FQs
> >       iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> 
> It turned out this series has non-trivial conflicts with Jason's
> default-domain work, so I had to remove it from the IOMMU tree for now.
> Can you please rebase it to the latest iommu/core branch and re-send? I
> will take it into the tree again then.

Niklas, I think you can just 'take yours' to resolve this. All the
IOMMU_DOMAIN_PLATFORM-related and .default_domain = parts should be
removed. Let me know if you need anything.

Thanks,
Jason


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-26 16:08   ` Jason Gunthorpe
@ 2023-09-27  8:55     ` Niklas Schnelle
  2023-09-27  9:26       ` Robin Murphy
  2023-09-27  9:55       ` Joerg Roedel
  0 siblings, 2 replies; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-27  8:55 UTC (permalink / raw)
  To: Jason Gunthorpe, Joerg Roedel
  Cc: Matthew Rosato, Will Deacon, Wenjia Zhang, Robin Murphy,
	Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
> > Hi Niklas,
> > 
> > On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
> > > Niklas Schnelle (6):
> > >       iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
> > >       s390/pci: prepare is_passed_through() for dma-iommu
> > >       s390/pci: Use dma-iommu layer
> > >       iommu/s390: Disable deferred flush for ISM devices
> > >       iommu/dma: Allow a single FQ in addition to per-CPU FQs
> > >       iommu/dma: Use a large flush queue and timeout for shadow_on_flush
> > 
> > It turned out this series has non-trivial conflicts with Jason's
> > default-domain work, so I had to remove it from the IOMMU tree for now.
> > Can you please rebase it to the latest iommu/core branch and re-send? I
> > will take it into the tree again then.
> 
> Niklas, I think you can just 'take yours' to resolve this. All the
> IOMMU_DOMAIN_PLATFORM-related and .default_domain = parts should be
> removed. Let me know if you need anything.
> 
> Thanks,
> Jason

Hi Joerg, Hi Jason,

I've run into an unfortunate problem, not with the rebase itself but
with the iommu/core branch. 

Jason is right, I basically need to just remove the platform ops and
.default_domain ops. This seems to work fine for an NVMe both in the
host and also when using the IOMMU with vfio-pci + KVM. I've already
pushed the result of that to my git.kernel.org:
https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu

The problem is that something seems to be broken in the iommu/core
branch. Regardless of whether I have my DMA API conversion on top or
with the base iommu/core branch, I cannot use ConnectX-4 VFs.

# lspci
111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
# dmesg | grep mlx
[    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
[    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12

This same card works on v6.6-rc3 both with and without my DMA API
conversion patch series applied. Looking at mlx5_mdev_init() -> 
mlx5_cmd_init(), the -ENOMEM seems to come from the following
dma_pool_create():

cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);

I'll try to debug this further but wanted to let you know already in
case you have some ideas. Either way, as it doesn't seem to be related
to the DMA API conversion, I can send that out again regardless if you
want; I really don't want to miss another cycle.

Thanks,
Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27  8:55     ` Niklas Schnelle
@ 2023-09-27  9:26       ` Robin Murphy
  2023-09-27  9:55       ` Joerg Roedel
  1 sibling, 0 replies; 24+ messages in thread
From: Robin Murphy @ 2023-09-27  9:26 UTC (permalink / raw)
  To: Niklas Schnelle, Jason Gunthorpe, Joerg Roedel
  Cc: Matthew Rosato, Will Deacon, Wenjia Zhang, Gerd Bayer,
	Julian Ruess, Pierre Morel, Alexandra Winter, Heiko Carstens,
	Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
	Sven Schnelle, Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On 2023-09-27 09:55, Niklas Schnelle wrote:
> On Tue, 2023-09-26 at 13:08 -0300, Jason Gunthorpe wrote:
>> On Tue, Sep 26, 2023 at 05:04:28PM +0200, Joerg Roedel wrote:
>>> Hi Niklas,
>>>
>>> On Fri, Aug 25, 2023 at 12:11:15PM +0200, Niklas Schnelle wrote:
>>>> Niklas Schnelle (6):
>>>>        iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
>>>>        s390/pci: prepare is_passed_through() for dma-iommu
>>>>        s390/pci: Use dma-iommu layer
>>>>        iommu/s390: Disable deferred flush for ISM devices
>>>>        iommu/dma: Allow a single FQ in addition to per-CPU FQs
>>>>        iommu/dma: Use a large flush queue and timeout for shadow_on_flush
>>>
>>> It turned out this series has non-trivial conflicts with Jason's
>>> default-domain work, so I had to remove it from the IOMMU tree for now.
>>> Can you please rebase it to the latest iommu/core branch and re-send? I
>>> will take it into the tree again then.
>>
>> Niklas, I think you can just 'take yours' to resolve this. All the
>> IOMMU_DOMAIN_PLATFORM-related and .default_domain = parts should be
>> removed. Let me know if you need anything.
>>
>> Thanks,
>> Jason
> 
> Hi Joerg, Hi Jason,
> 
> I've run into an unfortunate problem, not with the rebase itself but
> with the iommu/core branch.
> 
> Jason is right, I basically need to just remove the platform ops and
> .default_domain ops. This seems to work fine for an NVMe both in the
> host and also when using the IOMMU with vfio-pci + KVM. I've already
> pushed the result of that to my git.kernel.org:
> https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/log/?h=b4/dma_iommu
> 
> The problem is that something seems to be broken in the iommu/core
> branch. Regardless of whether I have my DMA API conversion on top or
> with the base iommu/core branch, I cannot use ConnectX-4 VFs.
> 
> # lspci
> 111a:00:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function]
> # dmesg | grep mlx
> [    3.189749] mlx5_core 111a:00:00.0: mlx5_mdev_init:1802:(pid 464): Failed initializing cmdif SW structs, aborting
> [    3.189783] mlx5_core: probe of 111a:00:00.0 failed with error -12
> 
> This same card works on v6.6-rc3 both with and without my DMA API
> conversion patch series applied. Looking at mlx5_mdev_init() ->
> mlx5_cmd_init(), the -ENOMEM seems to come from the following
> dma_pool_create():
> 
> cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0);
> 
> I'll try to debug this further but wanted to let you know already in
> case you have some ideas.

I could imagine that potentially something in the initial default domain 
conversion somehow interferes with the DMA ops in a way that ends up 
causing alloc_cmd_page() to fail (maybe calling zpci_dma_init_device() 
at the wrong point, or too many times?). FWIW I see nothing that would 
obviously affect dma_pool_create() itself.

Robin.

> Either way, as it doesn't seem to be related
> to the DMA API conversion, I can send that out again regardless if you
> want; I really don't want to miss another cycle.
> 
> Thanks,
> Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27  8:55     ` Niklas Schnelle
  2023-09-27  9:26       ` Robin Murphy
@ 2023-09-27  9:55       ` Joerg Roedel
  2023-09-27 11:24         ` Niklas Schnelle
  1 sibling, 1 reply; 24+ messages in thread
From: Joerg Roedel @ 2023-09-27  9:55 UTC (permalink / raw)
  To: Niklas Schnelle
  Cc: Jason Gunthorpe, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

Hi Niklas,

On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> The problem is that something seems to be broken in the iommu/core
> branch. Regardless of whether I have my DMA API conversion on top or
> with the base iommu/core branch, I cannot use ConnectX-4 VFs.

Have you already tried to bisect the issue in the iommu/core branch?
The result might shed some light on the issue.

Regards,

	Joerg


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27  9:55       ` Joerg Roedel
@ 2023-09-27 11:24         ` Niklas Schnelle
  2023-09-27 13:20           ` Niklas Schnelle
  0 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-27 11:24 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Jason Gunthorpe, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> Hi Niklas,
> 
> On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > The problem is that something seems to be broken in the iommu/core
> > branch. Regardless of whether I have my DMA API conversion on top or
> > with the base iommu/core branch, I cannot use ConnectX-4 VFs.
> 
> Have you already tried to bisect the issue in the iommu/core branch?
> The result might shed some light on the issue.
> 
> Regards,
> 
> 	Joerg

Hi Joerg,

Working on it, somehow I must have messed up earlier. It now looks like
it might in fact be caused by my DMA API conversion rebase and the
"s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction
with Jason's patches that I haven't thought about. So sorry for any
wrong blame.

Thanks,
Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 11:24         ` Niklas Schnelle
@ 2023-09-27 13:20           ` Niklas Schnelle
  2023-09-27 14:31             ` Niklas Schnelle
  0 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-27 13:20 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Jason Gunthorpe, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > Hi Niklas,
> > 
> > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > The problem is that something seems to be broken in the iommu/core
> > > branch. Regardless of whether I have my DMA API conversion on top or
> > > with the base iommu/core branch, I cannot use ConnectX-4 VFs.
> > 
> > Have you already tried to bisect the issue in the iommu/core branch?
> > The result might shed some light on the issue.
> > 
> > Regards,
> > 
> > 	Joerg
> 
> Hi Joerg,
> 
> Working on it, somehow I must have messed up earlier. It now looks like
> it might in fact be caused by my DMA API conversion rebase and the
> "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction
> with Jason's patches that I haven't thought about. So sorry for any
> wrong blame.
> 
> Thanks,
> Niklas

Hi,

I tracked the problem down from mlx5_core's alloc_cmd_page() via
dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and
__iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova().
The allocation here is for 4K so nothing crazy.

On second look I also noticed:

nvme 2007:00:00.0: Using 42-bit DMA addresses

for the NVMe that is working. The problem here seems to be that we set
iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we
currently have a reserved region over the first 4 GiB anyway and so
will always use IOVAs larger than that. That however is too late,
since iommu_dma_set_pci_32bit_workaround() is already checked in
__iommu_probe_device(), which is called just before
ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to
zpci_init_iommu() and that gets rid of the notice for the NVMe, but I
still get a failure of iommu_dma_alloc_iova() in
__iommu_dma_alloc_noncontiguous(). So I'll keep digging.
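
For reference, a rough sketch of the move I mean (illustrative only:
the body of zpci_init_iommu() is elided here and iommu_dma_forcedac
would of course need to be reachable from the s390 code):

int zpci_init_iommu(struct zpci_dev *zdev)
{
	/*
	 * Sketch: all usable IOVAs lie above the reserved first 4 GiB,
	 * so opt out of the PCI 32-bit IOVA workaround before
	 * __iommu_probe_device() evaluates it, i.e. earlier than
	 * ops->probe_finalize().
	 */
	iommu_dma_forcedac = true;

	/* ... existing sysfs add and iommu_device_register() as before ... */
	return 0;
}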

Thanks,
Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 13:20           ` Niklas Schnelle
@ 2023-09-27 14:31             ` Niklas Schnelle
  2023-09-27 15:24               ` Niklas Schnelle
  0 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-27 14:31 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Jason Gunthorpe, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Wed, 2023-09-27 at 15:20 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> > On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > > Hi Niklas,
> > > 
> > > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > > The problem is that something seems to be broken in the iommu/core
> > > > branch. Regardless of whether I have my DMA API conversion on top or
> > > > with the base iommu/core branch, I cannot use ConnectX-4 VFs.
> > > 
> > > Have you already tried to bisect the issue in the iommu/core branch?
> > > The result might shed some light on the issue.
> > > 
> > > Regards,
> > > 
> > > 	Joerg
> > 
> > Hi Joerg,
> > 
> > Working on it, somehow I must have messed up earlier. It now looks like
> > it might in fact be caused by my DMA API conversion rebase and the
> > "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction
> > with Jason's patches that I haven't thought about. So sorry for any
> > wrong blame.
> > 
> > Thanks,
> > Niklas
> 
> Hi,
> 
> I tracked the problem down from mlx5_core's alloc_cmd_page() via
> dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and
> __iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova().
> The allocation here is for 4K so nothing crazy.
> 
> On second look I also noticed:
> 
> nvme 2007:00:00.0: Using 42-bit DMA addresses
> 
> for the NVMe that is working. The problem here seems to be that we set
> iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we
> currently have a reserved region over the first 4 GiB anyway and so
> will always use IOVAs larger than that. That however is too late,
> since iommu_dma_set_pci_32bit_workaround() is already checked in
> __iommu_probe_device(), which is called just before
> ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to
> zpci_init_iommu() and that gets rid of the notice for the NVMe, but I
> still get a failure of iommu_dma_alloc_iova() in
> __iommu_dma_alloc_noncontiguous(). So I'll keep digging.
> 
> Thanks,
> Niklas


Ok, I think I got it, and this doesn't seem strictly s390x-specific,
but I'd think it should happen with iommu.forcedac=1 everywhere.

The reason iommu_dma_alloc_iova() fails seems to be that mlx5_core does
dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) in
mlx5_pci_init()->set_dma_caps(), which happens after it has already
called mlx5_mdev_init()->mlx5_cmd_init()->alloc_cmd_page(). So for the
dma_alloc_coherent() in there, the dev->coherent_dma_mask is still
DMA_BIT_MASK(32), for which we can't find an IOVA because we simply
don't have IOVAs below 4 GiB. Not entirely sure what caused this not to
be enforced before.
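
To make the failure mode concrete, here is a generic sketch (not mlx5
code; the function name is made up for illustration) of what the first
coherent allocation runs into on s390 while the mask is still the
default:

/*
 * Illustration only, not mlx5 code: with the default 32-bit coherent
 * mask still in place, dma_alloc_coherent() must find an IOVA below
 * 4 GiB. On s390 that whole range is reserved, so
 * iommu_dma_alloc_iova() fails and the caller sees -ENOMEM.
 */
static void *early_cmd_page_alloc(struct pci_dev *pdev, dma_addr_t *dma)
{
	/* dev->coherent_dma_mask == DMA_BIT_MASK(32) at this point */
	return dma_alloc_coherent(&pdev->dev, SZ_4K, dma, GFP_KERNEL);
}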

Thanks,
Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 14:31             ` Niklas Schnelle
@ 2023-09-27 15:24               ` Niklas Schnelle
  2023-09-27 15:40                 ` Jason Gunthorpe
  0 siblings, 1 reply; 24+ messages in thread
From: Niklas Schnelle @ 2023-09-27 15:24 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Jason Gunthorpe, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Wed, 2023-09-27 at 16:31 +0200, Niklas Schnelle wrote:
> On Wed, 2023-09-27 at 15:20 +0200, Niklas Schnelle wrote:
> > On Wed, 2023-09-27 at 13:24 +0200, Niklas Schnelle wrote:
> > > On Wed, 2023-09-27 at 11:55 +0200, Joerg Roedel wrote:
> > > > Hi Niklas,
> > > > 
> > > > On Wed, Sep 27, 2023 at 10:55:23AM +0200, Niklas Schnelle wrote:
> > > > > The problem is that something seems to be broken in the iommu/core
> > > > > branch. Regardless of whether I have my DMA API conversion on top or
> > > > > with the base iommu/core branch, I cannot use ConnectX-4 VFs.
> > > > 
> > > > Have you already tried to bisect the issue in the iommu/core branch?
> > > > The result might shed some light on the issue.
> > > > 
> > > > Regards,
> > > > 
> > > > 	Joerg
> > > 
> > > Hi Joerg,
> > > 
> > > Working on it, somehow I must have messed up earlier. It now looks like
> > > it might in fact be caused by my DMA API conversion rebase and the
> > > "s390/pci: Use dma-iommu layer" commit. Maybe there is some interaction
> > > with Jason's patches that I haven't thought about. So sorry for any
> > > wrong blame.
> > > 
> > > Thanks,
> > > Niklas
> > 
> > Hi,
> > 
> > I tracked the problem down from mlx5_core's alloc_cmd_page() via
> > dma_alloc_coherent(), ops->alloc, iommu_dma_alloc_remap(), and
> > __iommu_dma_alloc_noncontiguous() to a failed iommu_dma_alloc_iova().
> > The allocation here is for 4K so nothing crazy.
> > 
> > On second look I also noticed:
> > 
> > nvme 2007:00:00.0: Using 42-bit DMA addresses
> > 
> > for the NVMe that is working. The problem here seems to be that we set
> > iommu_dma_forcedac = true in s390_iommu_probe_finalize() because we
> > currently have a reserved region over the first 4 GiB anyway and so
> > will always use IOVAs larger than that. That however is too late,
> > since iommu_dma_set_pci_32bit_workaround() is already checked in
> > __iommu_probe_device(), which is called just before
> > ops->probe_finalize(). So I moved setting iommu_dma_forcedac = true to
> > zpci_init_iommu() and that gets rid of the notice for the NVMe, but I
> > still get a failure of iommu_dma_alloc_iova() in
> > __iommu_dma_alloc_noncontiguous(). So I'll keep digging.
> > 
> > Thanks,
> > Niklas
> 
> 
> Ok, I think I got it, and this doesn't seem strictly s390x-specific,
> but I'd think it should happen with iommu.forcedac=1 everywhere.
> 
> The reason iommu_dma_alloc_iova() fails seems to be that mlx5_core does
> dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) in
> mlx5_pci_init()->set_dma_caps(), which happens after it has already
> called mlx5_mdev_init()->mlx5_cmd_init()->alloc_cmd_page(). So for the
> dma_alloc_coherent() in there, the dev->coherent_dma_mask is still
> DMA_BIT_MASK(32), for which we can't find an IOVA because we simply
> don't have IOVAs below 4 GiB. Not entirely sure what caused this not to
> be enforced before.
> 
> Thanks,
> Niklas
> 

Ok, another update. On trying it out again, this problem also occurs
when applying this v12 on top of v6.6-rc3. Also, unlike my prior
thinking, I guess it probably doesn't occur with iommu.forcedac=1,
since that still allows IOVAs below 4 GiB and we might be the only
ones who don't support those. From my point of view this sounds like
an mlx5_core issue: they really should call dma_set_mask_and_coherent()
before their first call to dma_alloc_coherent(), not after. So I guess
I'll send a v13 of this series rebased on iommu/core and with an
additional mlx5 patch, and then let's hope we can get that merged in a
way that doesn't leave us with broken ConnectX VFs for too long.
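
In other words the ordering fix I have in mind boils down to something
like the following (simplified sketch, not the actual mlx5 patch; the
function name is made up for illustration):

/*
 * Simplified probe ordering sketch: widen the DMA masks before the
 * first coherent allocation so the IOMMU layer may hand out IOVAs
 * above 4 GiB.
 */
static int probe_order_fixed(struct pci_dev *pdev)
{
	dma_addr_t dma;
	void *cmd_page;
	int err;

	err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (err)
		return err;

	/* Only now is the equivalent of alloc_cmd_page() safe on s390. */
	cmd_page = dma_alloc_coherent(&pdev->dev, SZ_4K, &dma, GFP_KERNEL);
	if (!cmd_page)
		return -ENOMEM;

	/* ... rest of probe ... */
	return 0;
}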

Thanks,
Niklas


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 15:24               ` Niklas Schnelle
@ 2023-09-27 15:40                 ` Jason Gunthorpe
  2023-09-27 16:16                   ` Robin Murphy
  2023-09-27 20:25                   ` Matthew Rosato
  0 siblings, 2 replies; 24+ messages in thread
From: Jason Gunthorpe @ 2023-09-27 15:40 UTC (permalink / raw)
  To: Niklas Schnelle
  Cc: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Robin Murphy, Gerd Bayer, Julian Ruess, Pierre Morel,
	Alexandra Winter, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
	Suravee Suthikulpanit, Hector Martin, Sven Peter,
	Alyssa Rosenzweig, David Woodhouse, Lu Baolu, Andy Gross,
	Bjorn Andersson, Konrad Dybcio, Yong Wu, Matthias Brugger,
	AngeloGioacchino Del Regno, Gerald Schaefer, Orson Zhai,
	Baolin Wang, Chunyan Zhang, Chen-Yu Tsai, Jernej Skrabec,
	Samuel Holland, Thierry Reding, Krishna Reddy, Jonathan Hunter,
	Jonathan Corbet, linux-s390, netdev, linux-kernel, iommu, asahi,
	linux-arm-kernel, linux-arm-msm, linux-mediatek, linux-sunxi,
	linux-tegra, linux-doc

On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:

> Ok, another update. On trying it out again, this problem also occurs
> when applying this v12 on top of v6.6-rc3. Also, unlike my prior
> thinking, I guess it probably doesn't occur with iommu.forcedac=1,
> since that still allows IOVAs below 4 GiB and we might be the only
> ones who don't support those. From my point of view this sounds like
> an mlx5_core issue: they really should call dma_set_mask_and_coherent()
> before their first call to dma_alloc_coherent(), not after. So I guess
> I'll send a v13 of this series rebased on iommu/core and with an
> additional mlx5 patch, and then let's hope we can get that merged in a
> way that doesn't leave us with broken ConnectX VFs for too long.

Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before
setting its DMA mask with dma_set_mask_and_coherent(). Please link to this thread
and we can get Leon or Saeed to ack it for Joerg.

(though wondering why s390 is the only case that ever hit this?)

Jason


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 15:40                 ` Jason Gunthorpe
@ 2023-09-27 16:16                   ` Robin Murphy
  2023-09-27 20:25                   ` Matthew Rosato
  1 sibling, 0 replies; 24+ messages in thread
From: Robin Murphy @ 2023-09-27 16:16 UTC (permalink / raw)
  To: Jason Gunthorpe, Niklas Schnelle
  Cc: Joerg Roedel, Matthew Rosato, Will Deacon, Wenjia Zhang,
	Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On 27/09/2023 4:40 pm, Jason Gunthorpe wrote:
> On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:
> 
>> Ok, another update. On trying it out again, this problem also occurs
>> when applying this v12 on top of v6.6-rc3. Also, unlike my prior
>> thinking, I guess it probably doesn't occur with iommu.forcedac=1,
>> since that still allows IOVAs below 4 GiB and we might be the only
>> ones who don't support those. From my point of view this sounds like
>> an mlx5_core issue: they really should call dma_set_mask_and_coherent()
>> before their first call to dma_alloc_coherent(), not after. So I guess
>> I'll send a v13 of this series rebased on iommu/core and with an
>> additional mlx5 patch, and then let's hope we can get that merged in a
>> way that doesn't leave us with broken ConnectX VFs for too long.
> 
> Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before
> setting its DMA mask with dma_set_mask_and_coherent(). Please link to this thread
> and we can get Leon or Saeed to ack it for Joerg.
> 
> (though wondering why s390 is the only case that ever hit this?)

Probably because most systems happen to be able to satisfy the 
allocation within the default 32-bit mask - the whole bottom 4GB of IOVA 
space being reserved is pretty atypical.

TBH it makes me wonder the opposite - how did this ever work on s390
before? And I think the answer to that is "by pure chance", since upon
inspection the existing s390_pci_dma_ops implementation appears to pay 
absolutely no attention to the device's DMA masks whatsoever :(

Robin.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing
  2023-09-27 15:40                 ` Jason Gunthorpe
  2023-09-27 16:16                   ` Robin Murphy
@ 2023-09-27 20:25                   ` Matthew Rosato
  1 sibling, 0 replies; 24+ messages in thread
From: Matthew Rosato @ 2023-09-27 20:25 UTC (permalink / raw)
  To: Jason Gunthorpe, Niklas Schnelle
  Cc: Joerg Roedel, Will Deacon, Wenjia Zhang, Robin Murphy,
	Gerd Bayer, Julian Ruess, Pierre Morel, Alexandra Winter,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Suravee Suthikulpanit,
	Hector Martin, Sven Peter, Alyssa Rosenzweig, David Woodhouse,
	Lu Baolu, Andy Gross, Bjorn Andersson, Konrad Dybcio, Yong Wu,
	Matthias Brugger, AngeloGioacchino Del Regno, Gerald Schaefer,
	Orson Zhai, Baolin Wang, Chunyan Zhang, Chen-Yu Tsai,
	Jernej Skrabec, Samuel Holland, Thierry Reding, Krishna Reddy,
	Jonathan Hunter, Jonathan Corbet, linux-s390, netdev,
	linux-kernel, iommu, asahi, linux-arm-kernel, linux-arm-msm,
	linux-mediatek, linux-sunxi, linux-tegra, linux-doc

On 9/27/23 11:40 AM, Jason Gunthorpe wrote:
> On Wed, Sep 27, 2023 at 05:24:20PM +0200, Niklas Schnelle wrote:
> 
>> Ok, another update. On trying it out again, this problem also occurs
>> when applying this v12 on top of v6.6-rc3. Also, unlike my prior
>> thinking, I guess it probably doesn't occur with iommu.forcedac=1,
>> since that still allows IOVAs below 4 GiB and we might be the only
>> ones who don't support those. From my point of view this sounds like
>> an mlx5_core issue: they really should call dma_set_mask_and_coherent()
>> before their first call to dma_alloc_coherent(), not after. So I guess
>> I'll send a v13 of this series rebased on iommu/core and with an
>> additional mlx5 patch, and then let's hope we can get that merged in a
>> way that doesn't leave us with broken ConnectX VFs for too long.
> 
> Yes, OK. It definitely sounds wrong that mlx5 is doing DMA allocations before
> setting its DMA mask with dma_set_mask_and_coherent(). Please link to this thread
> and we can get Leon or Saeed to ack it for Joerg.
> 

Hi Niklas,

I bisected the start of this issue to the following commit (only noticeable on s390 when you apply this subject series on top):

06cd555f73caec515a14d42ef052221fa2587ff9 ("net/mlx5: split mlx5_cmd_init() to probe and reload routines")

It went in during the merge window.  Please include it with your fix and/or report it to the mlx5 maintainers.  Looks like the changes in this patch match what you and Jason describe; it splits up mlx5_cmd_init() and moves part of the call earlier.  The net result is that we first call mlx5_mdev_init->mlx5_cmd_init->alloc_cmd_page->dma_alloc_coherent and then sometime later call mlx5_pci_init->set_dma_caps->dma_set_mask_and_coherent.

Prior to this patch, we would not drive mlx5_cmd_init (and thus that first dma_alloc_coherent) until mlx5_init_one which happens _after_ mlx5_pci_init->set_dma_caps->dma_set_mask_and_coherent.
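
To summarize the ordering change as I read it (call chains as described
above, simplified):

/*
 * Before 06cd555f73ca:
 *   mlx5_pci_init()  -> set_dma_caps()  -> dma_set_mask_and_coherent(64)
 *   mlx5_init_one()  -> mlx5_cmd_init() -> alloc_cmd_page() -> dma_alloc_coherent()
 *
 * After 06cd555f73ca:
 *   mlx5_mdev_init() -> mlx5_cmd_init() -> alloc_cmd_page() -> dma_alloc_coherent()
 *   mlx5_pci_init()  -> set_dma_caps()  -> dma_set_mask_and_coherent(64)
 *
 * i.e. the first coherent allocation now happens while the coherent
 * mask is still the 32-bit default.
 */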

Thanks,
Matt



^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2023-09-27 20:26 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-25 10:11 [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Niklas Schnelle
2023-08-25 10:11 ` [PATCH v12 1/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return Niklas Schnelle
2023-08-25 10:11 ` [PATCH v12 2/6] s390/pci: prepare is_passed_through() for dma-iommu Niklas Schnelle
2023-08-25 10:11 ` [PATCH v12 3/6] s390/pci: Use dma-iommu layer Niklas Schnelle
2023-08-25 10:11 ` [PATCH v12 4/6] iommu/s390: Disable deferred flush for ISM devices Niklas Schnelle
2023-08-25 18:23   ` Matthew Rosato
2023-08-25 10:11 ` [PATCH v12 5/6] iommu/dma: Allow a single FQ in addition to per-CPU FQs Niklas Schnelle
2023-09-11 12:06   ` Niklas Schnelle
2023-08-25 10:11 ` [PATCH v12 6/6] iommu/dma: Use a large flush queue and timeout for shadow_on_flush Niklas Schnelle
2023-08-25 18:26 ` [PATCH v12 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing Matthew Rosato
2023-09-05 16:09   ` Robin Murphy
2023-09-25  9:56 ` Joerg Roedel
2023-09-26 15:04 ` Joerg Roedel
2023-09-26 16:08   ` Jason Gunthorpe
2023-09-27  8:55     ` Niklas Schnelle
2023-09-27  9:26       ` Robin Murphy
2023-09-27  9:55       ` Joerg Roedel
2023-09-27 11:24         ` Niklas Schnelle
2023-09-27 13:20           ` Niklas Schnelle
2023-09-27 14:31             ` Niklas Schnelle
2023-09-27 15:24               ` Niklas Schnelle
2023-09-27 15:40                 ` Jason Gunthorpe
2023-09-27 16:16                   ` Robin Murphy
2023-09-27 20:25                   ` Matthew Rosato

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).