linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
@ 2019-05-06 18:52 Tom Murphy
  2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Tom Murphy @ 2019-05-06 18:52 UTC (permalink / raw)
  To: iommu
  Cc: murphyt7, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.

Change-log:
v3:
-rename dma_limit to dma_mask
-exit handle_deferred_device early if (!is_kdump_kernel())
-remove pointless calls to handle_deferred_device
v2:
-Rebase on top of this series:
 http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-iommu-ops.3
-Add a gfp_t parameter to the iommu_ops::map function.
-Make use of the reserve region code inside the dma-iommu api

Tom Murphy (4):
  iommu: Add gfp parameter to iommu_ops::map
  iommu/dma-iommu: Handle deferred devices
  iommu/dma-iommu: Use the dev->coherent_dma_mask
  iommu/amd: Convert the AMD iommu driver to the dma-iommu api

 drivers/iommu/Kconfig          |   1 +
 drivers/iommu/amd_iommu.c      | 694 ++++-----------------------------
 drivers/iommu/arm-smmu-v3.c    |   2 +-
 drivers/iommu/arm-smmu.c       |   2 +-
 drivers/iommu/dma-iommu.c      |  50 ++-
 drivers/iommu/exynos-iommu.c   |   2 +-
 drivers/iommu/intel-iommu.c    |   2 +-
 drivers/iommu/iommu.c          |  43 +-
 drivers/iommu/ipmmu-vmsa.c     |   2 +-
 drivers/iommu/msm_iommu.c      |   2 +-
 drivers/iommu/mtk_iommu.c      |   2 +-
 drivers/iommu/mtk_iommu_v1.c   |   2 +-
 drivers/iommu/omap-iommu.c     |   2 +-
 drivers/iommu/qcom_iommu.c     |   2 +-
 drivers/iommu/rockchip-iommu.c |   2 +-
 drivers/iommu/s390-iommu.c     |   2 +-
 drivers/iommu/tegra-gart.c     |   2 +-
 drivers/iommu/tegra-smmu.c     |   2 +-
 include/linux/iommu.h          |  21 +-
 19 files changed, 184 insertions(+), 653 deletions(-)

-- 
2.17.1



* [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map
  2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
@ 2019-05-06 18:52 ` Tom Murphy
  2019-06-04 18:11   ` Robin Murphy
  2019-06-05  5:50   ` Christoph Hellwig
  2019-05-06 18:52 ` [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices Tom Murphy
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 13+ messages in thread
From: Tom Murphy @ 2019-05-06 18:52 UTC (permalink / raw)
  To: iommu
  Cc: murphyt7, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Add a gfp_t parameter to the iommu_ops::map function.
Remove the needless locking in the AMD iommu driver.

The iommu_ops::map function (or the iommu_map function which calls it)
was always supposed to be sleepable (according to Joerg's comment in
this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
should probably have had a might_sleep() since it was written. However,
the dma-iommu api can currently call iommu_map in atomic context, which
it shouldn't do. This doesn't cause any problems because every iommu
driver which uses the dma-iommu api uses GFP_ATOMIC in its
iommu_ops::map function. But doing this needlessly drains the memory
allocator's atomic pools.
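
To summarize the resulting entry points (a sketch of the call paths
only, matching the iommu.c hunks below; both wrappers funnel into
__iommu_map()):

	/* sleepable callers keep using iommu_map(), which now asserts
	 * might_sleep() and allocates page table pages with GFP_KERNEL */
	iommu_map(domain, iova, paddr, size, prot);

	/* the atomic paths in dma-iommu switch to iommu_map_atomic(),
	 * which passes GFP_ATOMIC instead */
	iommu_map_atomic(domain, iova, paddr, size, prot);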

We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap:
iommu_map doesn't serialize callers, so no two concurrent calls should
ever touch the same iova range. The AMD driver already handles its page
table page allocations without locks, so we can safely remove the locks.

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/amd_iommu.c      | 14 ++++-------
 drivers/iommu/arm-smmu-v3.c    |  2 +-
 drivers/iommu/arm-smmu.c       |  2 +-
 drivers/iommu/dma-iommu.c      |  6 ++---
 drivers/iommu/exynos-iommu.c   |  2 +-
 drivers/iommu/intel-iommu.c    |  2 +-
 drivers/iommu/iommu.c          | 43 +++++++++++++++++++++++++++++-----
 drivers/iommu/ipmmu-vmsa.c     |  2 +-
 drivers/iommu/msm_iommu.c      |  2 +-
 drivers/iommu/mtk_iommu.c      |  2 +-
 drivers/iommu/mtk_iommu_v1.c   |  2 +-
 drivers/iommu/omap-iommu.c     |  2 +-
 drivers/iommu/qcom_iommu.c     |  2 +-
 drivers/iommu/rockchip-iommu.c |  2 +-
 drivers/iommu/s390-iommu.c     |  2 +-
 drivers/iommu/tegra-gart.c     |  2 +-
 drivers/iommu/tegra-smmu.c     |  2 +-
 include/linux/iommu.h          | 21 ++++++++++++++++-
 18 files changed, 78 insertions(+), 34 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index ebd062522cf5..ea3a5dc61bb0 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -3092,7 +3092,8 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 }
 
 static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
-			 phys_addr_t paddr, size_t page_size, int iommu_prot)
+			 phys_addr_t paddr, size_t page_size, int iommu_prot,
+			 gfp_t gfp)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	int prot = 0;
@@ -3106,9 +3107,7 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	mutex_lock(&domain->api_lock);
-	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
-	mutex_unlock(&domain->api_lock);
+	ret = iommu_map_page(domain, iova, paddr, page_size, prot, gfp);
 
 	domain_flush_np_cache(domain, iova, page_size);
 
@@ -3119,16 +3118,11 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			   size_t page_size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	size_t unmap_size;
 
 	if (domain->mode == PAGE_MODE_NONE)
 		return 0;
 
-	mutex_lock(&domain->api_lock);
-	unmap_size = iommu_unmap_page(domain, iova, page_size);
-	mutex_unlock(&domain->api_lock);
-
-	return unmap_size;
+	return iommu_unmap_page(domain, iova, page_size);
 }
 
 static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index d3880010c6cf..e5c48089b49f 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1777,7 +1777,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 }
 
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
-			phys_addr_t paddr, size_t size, int prot)
+			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
 
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 045d93884164..2d50db55b788 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1286,7 +1286,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 }
 
 static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
-			phys_addr_t paddr, size_t size, int prot)
+			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index fa5713a4f6f8..7a96c2c8f56b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -440,7 +440,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
-	if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
+	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
 		iommu_dma_free_iova(cookie, iova, size);
 		return DMA_MAPPING_ERROR;
 	}
@@ -641,7 +641,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 			arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}
 
-	if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
+	if (iommu_map_sg_atomic(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
 			< size)
 		goto out_free_sg;
 
@@ -1003,7 +1003,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	 * We'll leave any physical concatenation to the IOMMU driver's
 	 * implementation - it knows better than we do.
 	 */
-	if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
+	if (iommu_map_sg_atomic(domain, iova, sg, nents, prot) < iova_len)
 		goto out_free_iova;
 
 	return __finalise_sg(dev, sg, nents, iova);
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 05c6bc099d62..46414234c179 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1078,7 +1078,7 @@ static int lv2set_page(sysmmu_pte_t *pent, phys_addr_t paddr, size_t size,
  */
 static int exynos_iommu_map(struct iommu_domain *iommu_domain,
 			    unsigned long l_iova, phys_addr_t paddr, size_t size,
-			    int prot)
+			    int prot, gfp_t gfp)
 {
 	struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
 	sysmmu_pte_t *entry;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 28cb713d728c..4f0ff28f7cb9 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -5137,7 +5137,7 @@ static void intel_iommu_detach_device(struct iommu_domain *domain,
 
 static int intel_iommu_map(struct iommu_domain *domain,
 			   unsigned long iova, phys_addr_t hpa,
-			   size_t size, int iommu_prot)
+			   size_t size, int iommu_prot, gfp_t gfp)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	u64 max_addr;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 109de67d5d72..1b49841c177e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1584,8 +1584,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
 	return pgsize;
 }
 
-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot)
+int __iommu_map(struct iommu_domain *domain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	const struct iommu_ops *ops = domain->ops;
 	unsigned long orig_iova = iova;
@@ -1622,8 +1622,8 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
 			 iova, &paddr, pgsize);
+		ret = ops->map(domain, iova, paddr, pgsize, prot, gfp);
 
-		ret = ops->map(domain, iova, paddr, pgsize, prot);
 		if (ret)
 			break;
 
@@ -1643,8 +1643,22 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 	return ret;
 }
+
+int iommu_map(struct iommu_domain *domain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot)
+{
+	might_sleep();
+	return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
+}
 EXPORT_SYMBOL_GPL(iommu_map);
 
+int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot)
+{
+	return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
+}
+EXPORT_SYMBOL_GPL(iommu_map_atomic);
+
 static size_t __iommu_unmap(struct iommu_domain *domain,
 			    unsigned long iova, size_t size,
 			    bool sync)
@@ -1719,8 +1733,9 @@ size_t iommu_unmap_fast(struct iommu_domain *domain,
 }
 EXPORT_SYMBOL_GPL(iommu_unmap_fast);
 
-size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
-		    struct scatterlist *sg, unsigned int nents, int prot)
+size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+		    struct scatterlist *sg, unsigned int nents, int prot,
+		    gfp_t gfp)
 {
 	size_t len = 0, mapped = 0;
 	phys_addr_t start;
@@ -1731,7 +1746,9 @@ size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 		phys_addr_t s_phys = sg_phys(sg);
 
 		if (len && s_phys != start + len) {
-			ret = iommu_map(domain, iova + mapped, start, len, prot);
+			ret = __iommu_map(domain, iova + mapped, start,
+					len, prot, gfp);
+
 			if (ret)
 				goto out_err;
 
@@ -1759,8 +1776,22 @@ size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 	return 0;
 
 }
+
+size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+		    struct scatterlist *sg, unsigned int nents, int prot)
+{
+	might_sleep();
+	return __iommu_map_sg(domain, iova, sg, nents, prot, GFP_KERNEL);
+}
 EXPORT_SYMBOL_GPL(iommu_map_sg);
 
+size_t iommu_map_sg_atomic(struct iommu_domain *domain, unsigned long iova,
+		    struct scatterlist *sg, unsigned int nents, int prot)
+{
+	return __iommu_map_sg(domain, iova, sg, nents, prot, GFP_ATOMIC);
+}
+EXPORT_SYMBOL_GPL(iommu_map_sg_atomic);
+
 int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
 			       phys_addr_t paddr, u64 size, int prot)
 {
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 9a380c10655e..e005c83d49d8 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -707,7 +707,7 @@ static void ipmmu_detach_device(struct iommu_domain *io_domain,
 }
 
 static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
-		     phys_addr_t paddr, size_t size, int prot)
+		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
 
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 9fb0eb7a4d02..3f6bf3653aa2 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -508,7 +508,7 @@ static void msm_iommu_detach_dev(struct iommu_domain *domain,
 }
 
 static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			 phys_addr_t pa, size_t len, int prot)
+			 phys_addr_t pa, size_t len, int prot, gfp_t gfp)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index de3e02277b70..3176b9b54d4d 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -364,7 +364,7 @@ static void mtk_iommu_detach_device(struct iommu_domain *domain,
 }
 
 static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			 phys_addr_t paddr, size_t size, int prot)
+			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index 52b01e3a49df..e7b1907faec1 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -303,7 +303,7 @@ static void mtk_iommu_detach_device(struct iommu_domain *domain,
 }
 
 static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			 phys_addr_t paddr, size_t size, int prot)
+			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 	unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index d2fb347aa4ff..c1d5a71285dc 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1109,7 +1109,7 @@ static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa, int pgsz)
 }
 
 static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
-			  phys_addr_t pa, size_t bytes, int prot)
+			  phys_addr_t pa, size_t bytes, int prot, gfp_t gfp)
 {
 	struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
 	struct device *dev = omap_domain->dev;
diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
index 8cdd3f059513..a01e07a4e76f 100644
--- a/drivers/iommu/qcom_iommu.c
+++ b/drivers/iommu/qcom_iommu.c
@@ -411,7 +411,7 @@ static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *de
 }
 
 static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot)
+			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	int ret;
 	unsigned long flags;
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 77d4bd93fe4b..aa3507f35107 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -760,7 +760,7 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 }
 
 static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
-			phys_addr_t paddr, size_t size, int prot)
+			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 22d4db302c1c..efa6aa68521d 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -265,7 +265,7 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
 }
 
 static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot)
+			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
 	int flags = ZPCI_PTE_VALID, rc = 0;
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 4d8057916552..f300099852b1 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -190,7 +190,7 @@ static inline int __gart_iommu_map(struct gart_device *gart, unsigned long iova,
 }
 
 static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t pa, size_t bytes, int prot)
+			  phys_addr_t pa, size_t bytes, int prot, gfp_t gfp)
 {
 	struct gart_device *gart = gart_handle;
 	int ret;
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 5182c7d6171e..e1bf867e0607 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -641,7 +641,7 @@ static void tegra_smmu_set_pte(struct tegra_smmu_as *as, unsigned long iova,
 }
 
 static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot)
+			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	struct tegra_smmu_as *as = to_smmu_as(domain);
 	dma_addr_t pte_dma;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ffbbc7e39cee..76b8e7fe3ed0 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -198,7 +198,7 @@ struct iommu_ops {
 	int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
 	void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
-		   phys_addr_t paddr, size_t size, int prot);
+		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size);
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
@@ -295,12 +295,17 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot);
+extern int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
+		     phys_addr_t paddr, size_t size, int prot);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 			  size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,
 			       unsigned long iova, size_t size);
 extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			   struct scatterlist *sg,unsigned int nents, int prot);
+extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
+				  unsigned long iova, struct scatterlist *sg,
+				  unsigned int nents, int prot);
 extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
 extern void iommu_set_fault_handler(struct iommu_domain *domain,
 			iommu_fault_handler_t handler, void *token);
@@ -469,6 +474,13 @@ static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return -ENODEV;
 }
 
+static inline int iommu_map_atomic(struct iommu_domain *domain,
+				   unsigned long iova, phys_addr_t paddr,
+				   size_t size, int prot)
+{
+	return -ENODEV;
+}
+
 static inline size_t iommu_unmap(struct iommu_domain *domain,
 				 unsigned long iova, size_t size)
 {
@@ -488,6 +500,13 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
 	return 0;
 }
 
+static inline size_t iommu_map_sg_atomic(struct iommu_domain *domain,
+				  unsigned long iova, struct scatterlist *sg,
+				  unsigned int nents, int prot)
+{
+	return 0;
+}
+
 static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
 {
 }
-- 
2.17.1



* [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices
  2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
  2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
@ 2019-05-06 18:52 ` Tom Murphy
  2019-05-07  6:40   ` Christoph Hellwig
  2019-05-06 18:52 ` [PATCH v3 3/4] iommu/dma-iommu: Use the dev->coherent_dma_mask Tom Murphy
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Tom Murphy @ 2019-05-06 18:52 UTC (permalink / raw)
  To: iommu
  Cc: murphyt7, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Handle devices which defer their attach to the iommu in the dma-iommu api.

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/dma-iommu.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7a96c2c8f56b..b383498e2dc3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -22,6 +22,7 @@
 #include <linux/pci.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
+#include <linux/crash_dump.h>
 
 struct iommu_dma_msi_page {
 	struct list_head	list;
@@ -322,6 +323,22 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	return iova_reserve_iommu_regions(dev, domain);
 }
 
+static int handle_deferred_device(struct device *dev)
+{
+	struct iommu_domain *domain;
+	const struct iommu_ops *ops;
+
+	if (!is_kdump_kernel())
+		return 0;
+
+	domain = iommu_get_domain_for_dev(dev);
+	ops = domain->ops;
+	if (ops->is_attach_deferred && ops->is_attach_deferred(domain, dev))
+		return iommu_attach_device(domain, dev);
+
+	return 0;
+}
+
 /**
  * dma_info_to_prot - Translate DMA API directions and attributes to IOMMU API
  *                    page flags.
@@ -835,7 +852,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	bool coherent = dev_is_dma_coherent(dev);
 	dma_addr_t dma_handle;
 
-	dma_handle =__iommu_dma_map(dev, phys, size,
+	if (unlikely(handle_deferred_device(dev)))
+		return DMA_MAPPING_ERROR;
+
+	dma_handle = __iommu_dma_map(dev, phys, size,
 			dma_info_to_prot(dir, coherent, attrs),
 			iommu_get_dma_domain(dev));
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
@@ -953,6 +973,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	unsigned long mask = dma_get_seg_boundary(dev);
 	int i;
 
+	if (unlikely(handle_deferred_device(dev)))
+		return 0;
+
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
 
@@ -1056,6 +1079,9 @@ static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 static void *iommu_dma_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
+	if (unlikely(handle_deferred_device(dev)))
+		return NULL;
+
 	gfp |= __GFP_ZERO;
 
 #ifdef CONFIG_DMA_DIRECT_REMAP
-- 
2.17.1



* [PATCH v3 3/4] iommu/dma-iommu: Use the dev->coherent_dma_mask
  2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
  2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
  2019-05-06 18:52 ` [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices Tom Murphy
@ 2019-05-06 18:52 ` Tom Murphy
  2019-05-06 18:52 ` [PATCH v3 4/4] iommu/amd: Convert AMD iommu driver to the dma-iommu api Tom Murphy
  2019-06-03 10:51 ` [PATCH v3 0/4] iommu/amd: Convert the " Joerg Roedel
  4 siblings, 0 replies; 13+ messages in thread
From: Tom Murphy @ 2019-05-06 18:52 UTC (permalink / raw)
  To: iommu
  Cc: murphyt7, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Use the dev->coherent_dma_mask when allocating in the dma-iommu ops api;
the streaming mapping paths keep using dma_get_mask(dev).
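
As a sketch of which mask each caller ends up passing (taken from the
hunks below, nothing new):

	/* the coherent allocation paths pass the coherent mask */
	*dma_handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
			iommu_get_dma_domain(dev), dev->coherent_dma_mask);

	/* the streaming map_page/map_resource paths keep the regular mask */
	dma_handle = __iommu_dma_map(dev, phys, size, prot,
			iommu_get_dma_domain(dev), dma_get_mask(dev));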

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/dma-iommu.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b383498e2dc3..2a968afdab10 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -442,7 +442,8 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-		size_t size, int prot, struct iommu_domain *domain)
+		size_t size, int prot, struct iommu_domain *domain,
+		dma_addr_t dma_mask)
 {
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	size_t iova_off = 0;
@@ -453,7 +454,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 		size = iova_align(&cookie->iovad, size + iova_off);
 	}
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
@@ -496,7 +497,7 @@ static void *iommu_dma_alloc_contiguous(struct device *dev, size_t size,
 		return NULL;
 
 	*dma_handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
-			iommu_get_dma_domain(dev));
+			iommu_get_dma_domain(dev), dev->coherent_dma_mask);
 	if (*dma_handle == DMA_MAPPING_ERROR) {
 		if (!dma_release_from_contiguous(dev, page, count))
 			__free_pages(page, page_order);
@@ -766,7 +767,7 @@ static void *iommu_dma_alloc_pool(struct device *dev, size_t size,
 
 	*dma_handle = __iommu_dma_map(dev, page_to_phys(page), size,
 			dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs),
-			iommu_get_domain_for_dev(dev));
+			iommu_get_domain_for_dev(dev), dev->coherent_dma_mask);
 	if (*dma_handle == DMA_MAPPING_ERROR) {
 		dma_free_from_pool(vaddr, PAGE_ALIGN(size));
 		return NULL;
@@ -857,7 +858,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 
 	dma_handle = __iommu_dma_map(dev, phys, size,
 			dma_info_to_prot(dir, coherent, attrs),
-			iommu_get_dma_domain(dev));
+			iommu_get_dma_domain(dev), dma_get_mask(dev));
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    dma_handle != DMA_MAPPING_ERROR)
 		arch_sync_dma_for_device(dev, phys, size, dir);
@@ -1067,7 +1068,7 @@ static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 {
 	return __iommu_dma_map(dev, phys, size,
 			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			iommu_get_dma_domain(dev));
+			iommu_get_dma_domain(dev), dma_get_mask(dev));
 }
 
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
@@ -1246,7 +1247,8 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = __iommu_dma_map(dev, msi_addr, size, prot, domain);
+	iova = __iommu_dma_map(dev, msi_addr, size, prot, domain,
+			dma_get_mask(dev));
 	if (iova == DMA_MAPPING_ERROR)
 		goto out_free_page;
 
-- 
2.17.1



* [PATCH v3 4/4] iommu/amd: Convert AMD iommu driver to the dma-iommu api
  2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
                   ` (2 preceding siblings ...)
  2019-05-06 18:52 ` [PATCH v3 3/4] iommu/dma-iommu: Use the dev->coherent_dma_mask Tom Murphy
@ 2019-05-06 18:52 ` Tom Murphy
  2019-06-03 10:51 ` [PATCH v3 0/4] iommu/amd: Convert the " Joerg Roedel
  4 siblings, 0 replies; 13+ messages in thread
From: Tom Murphy @ 2019-05-06 18:52 UTC (permalink / raw)
  To: iommu
  Cc: murphyt7, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/Kconfig     |   1 +
 drivers/iommu/amd_iommu.c | 680 ++++----------------------------------
 2 files changed, 70 insertions(+), 611 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 6f07f3b21816..5914ba85180b 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -136,6 +136,7 @@ config AMD_IOMMU
 	select PCI_PASID
 	select IOMMU_API
 	select IOMMU_IOVA
+	select IOMMU_DMA
 	depends on X86_64 && PCI && ACPI
 	---help---
 	  With this option you can enable support for AMD IOMMU hardware in
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index ea3a5dc61bb0..366af2b27e7f 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -32,6 +32,7 @@
 #include <linux/scatterlist.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-direct.h>
+#include <linux/dma-iommu.h>
 #include <linux/iommu-helper.h>
 #include <linux/iommu.h>
 #include <linux/delay.h>
@@ -101,8 +102,6 @@ const struct iommu_ops amd_iommu_ops;
 static ATOMIC_NOTIFIER_HEAD(ppr_notifier);
 int amd_iommu_max_glx_val = -1;
 
-static const struct dma_map_ops amd_iommu_dma_ops;
-
 /*
  * general struct to manage commands send to an IOMMU
  */
@@ -115,21 +114,6 @@ struct kmem_cache *amd_iommu_irq_cache;
 static void update_domain(struct protection_domain *domain);
 static int protection_domain_init(struct protection_domain *domain);
 static void detach_device(struct device *dev);
-static void iova_domain_flush_tlb(struct iova_domain *iovad);
-
-/*
- * Data container for a dma_ops specific protection domain
- */
-struct dma_ops_domain {
-	/* generic protection domain information */
-	struct protection_domain domain;
-
-	/* IOVA RB-Tree */
-	struct iova_domain iovad;
-};
-
-static struct iova_domain reserved_iova_ranges;
-static struct lock_class_key reserved_rbtree_key;
 
 /****************************************************************************
  *
@@ -200,12 +184,6 @@ static struct protection_domain *to_pdomain(struct iommu_domain *dom)
 	return container_of(dom, struct protection_domain, domain);
 }
 
-static struct dma_ops_domain* to_dma_ops_domain(struct protection_domain *domain)
-{
-	BUG_ON(domain->flags != PD_DMA_OPS_MASK);
-	return container_of(domain, struct dma_ops_domain, domain);
-}
-
 static struct iommu_dev_data *alloc_dev_data(u16 devid)
 {
 	struct iommu_dev_data *dev_data;
@@ -1279,12 +1257,6 @@ static void domain_flush_pages(struct protection_domain *domain,
 	__domain_flush_pages(domain, address, size, 0);
 }
 
-/* Flush the whole IO/TLB for a given protection domain */
-static void domain_flush_tlb(struct protection_domain *domain)
-{
-	__domain_flush_pages(domain, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS, 0);
-}
-
 /* Flush the whole IO/TLB for a given protection domain - including PDE */
 static void domain_flush_tlb_pde(struct protection_domain *domain)
 {
@@ -1686,43 +1658,6 @@ static unsigned long iommu_unmap_page(struct protection_domain *dom,
 	return unmapped;
 }
 
-/****************************************************************************
- *
- * The next functions belong to the address allocator for the dma_ops
- * interface functions.
- *
- ****************************************************************************/
-
-
-static unsigned long dma_ops_alloc_iova(struct device *dev,
-					struct dma_ops_domain *dma_dom,
-					unsigned int pages, u64 dma_mask)
-{
-	unsigned long pfn = 0;
-
-	pages = __roundup_pow_of_two(pages);
-
-	if (dma_mask > DMA_BIT_MASK(32))
-		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
-				      IOVA_PFN(DMA_BIT_MASK(32)), false);
-
-	if (!pfn)
-		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
-				      IOVA_PFN(dma_mask), true);
-
-	return (pfn << PAGE_SHIFT);
-}
-
-static void dma_ops_free_iova(struct dma_ops_domain *dma_dom,
-			      unsigned long address,
-			      unsigned int pages)
-{
-	pages = __roundup_pow_of_two(pages);
-	address >>= PAGE_SHIFT;
-
-	free_iova_fast(&dma_dom->iovad, address, pages);
-}
-
 /****************************************************************************
  *
  * The next functions belong to the domain allocation. A domain is
@@ -1824,40 +1759,25 @@ static void free_gcr3_table(struct protection_domain *domain)
 	free_page((unsigned long)domain->gcr3_tbl);
 }
 
-static void dma_ops_domain_flush_tlb(struct dma_ops_domain *dom)
-{
-	domain_flush_tlb(&dom->domain);
-	domain_flush_complete(&dom->domain);
-}
-
-static void iova_domain_flush_tlb(struct iova_domain *iovad)
-{
-	struct dma_ops_domain *dom;
-
-	dom = container_of(iovad, struct dma_ops_domain, iovad);
-
-	dma_ops_domain_flush_tlb(dom);
-}
-
 /*
  * Free a domain, only used if something went wrong in the
  * allocation path and we need to free an already allocated page table
  */
-static void dma_ops_domain_free(struct dma_ops_domain *dom)
+static void dma_ops_domain_free(struct protection_domain *domain)
 {
-	if (!dom)
+	if (!domain)
 		return;
 
-	del_domain_from_list(&dom->domain);
+	del_domain_from_list(domain);
 
-	put_iova_domain(&dom->iovad);
+	iommu_put_dma_cookie(&domain->domain);
 
-	free_pagetable(&dom->domain);
+	free_pagetable(domain);
 
-	if (dom->domain.id)
-		domain_id_free(dom->domain.id);
+	if (domain->id)
+		domain_id_free(domain->id);
 
-	kfree(dom);
+	kfree(domain);
 }
 
 /*
@@ -1865,37 +1785,32 @@ static void dma_ops_domain_free(struct dma_ops_domain *dom)
  * It also initializes the page table and the address allocator data
  * structures required for the dma_ops interface
  */
-static struct dma_ops_domain *dma_ops_domain_alloc(void)
+static struct protection_domain *dma_ops_domain_alloc(void)
 {
-	struct dma_ops_domain *dma_dom;
+	struct protection_domain *domain;
 
-	dma_dom = kzalloc(sizeof(struct dma_ops_domain), GFP_KERNEL);
-	if (!dma_dom)
+	domain = kzalloc(sizeof(struct protection_domain), GFP_KERNEL);
+	if (!domain)
 		return NULL;
 
-	if (protection_domain_init(&dma_dom->domain))
-		goto free_dma_dom;
-
-	dma_dom->domain.mode = PAGE_MODE_3_LEVEL;
-	dma_dom->domain.pt_root = (void *)get_zeroed_page(GFP_KERNEL);
-	dma_dom->domain.flags = PD_DMA_OPS_MASK;
-	if (!dma_dom->domain.pt_root)
-		goto free_dma_dom;
-
-	init_iova_domain(&dma_dom->iovad, PAGE_SIZE, IOVA_START_PFN);
+	if (protection_domain_init(domain))
+		goto free_domain;
 
-	if (init_iova_flush_queue(&dma_dom->iovad, iova_domain_flush_tlb, NULL))
-		goto free_dma_dom;
+	domain->mode = PAGE_MODE_3_LEVEL;
+	domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+	domain->flags = PD_DMA_OPS_MASK;
+	if (!domain->pt_root)
+		goto free_domain;
 
-	/* Initialize reserved ranges */
-	copy_reserved_iova(&reserved_iova_ranges, &dma_dom->iovad);
+	if (iommu_get_dma_cookie(&domain->domain) == -ENOMEM)
+		goto free_domain;
 
-	add_domain_to_list(&dma_dom->domain);
+	add_domain_to_list(domain);
 
-	return dma_dom;
+	return domain;
 
-free_dma_dom:
-	dma_ops_domain_free(dma_dom);
+free_domain:
+	dma_ops_domain_free(domain);
 
 	return NULL;
 }
@@ -2291,8 +2206,8 @@ static int amd_iommu_add_device(struct device *dev)
 	domain = iommu_get_domain_for_dev(dev);
 	if (domain->type == IOMMU_DOMAIN_IDENTITY)
 		dev_data->passthrough = true;
-	else
-		dev->dma_ops = &amd_iommu_dma_ops;
+	else if (domain->type == IOMMU_DOMAIN_DMA)
+		iommu_setup_dma_ops(dev, IOVA_START_PFN << PAGE_SHIFT, 0);
 
 out:
 	iommu_completion_wait(iommu);
@@ -2326,43 +2241,32 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
 	return acpihid_device_group(dev);
 }
 
+static int amd_iommu_domain_get_attr(struct iommu_domain *domain,
+		enum iommu_attr attr, void *data)
+{
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		return -ENODEV;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = !amd_iommu_unmap_flush;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
 /*****************************************************************************
  *
  * The next functions belong to the dma_ops mapping/unmapping code.
  *
  *****************************************************************************/
 
-/*
- * In the dma_ops path we only have the struct device. This function
- * finds the corresponding IOMMU, the protection domain and the
- * requestor id for a given device.
- * If the device is not yet associated with a domain this is also done
- * in this function.
- */
-static struct protection_domain *get_domain(struct device *dev)
-{
-	struct protection_domain *domain;
-	struct iommu_domain *io_domain;
-
-	if (!check_device(dev))
-		return ERR_PTR(-EINVAL);
-
-	domain = get_dev_data(dev)->domain;
-	if (domain == NULL && get_dev_data(dev)->defer_attach) {
-		get_dev_data(dev)->defer_attach = false;
-		io_domain = iommu_get_domain_for_dev(dev);
-		domain = to_pdomain(io_domain);
-		attach_device(dev, domain);
-	}
-	if (domain == NULL)
-		return ERR_PTR(-EBUSY);
-
-	if (!dma_ops_domain(domain))
-		return ERR_PTR(-EBUSY);
-
-	return domain;
-}
-
 static void update_device_table(struct protection_domain *domain)
 {
 	struct iommu_dev_data *dev_data;
@@ -2393,446 +2297,6 @@ static void update_domain(struct protection_domain *domain)
 	domain->updated = false;
 }
 
-static int dir2prot(enum dma_data_direction direction)
-{
-	if (direction == DMA_TO_DEVICE)
-		return IOMMU_PROT_IR;
-	else if (direction == DMA_FROM_DEVICE)
-		return IOMMU_PROT_IW;
-	else if (direction == DMA_BIDIRECTIONAL)
-		return IOMMU_PROT_IW | IOMMU_PROT_IR;
-	else
-		return 0;
-}
-
-/*
- * This function contains common code for mapping of a physically
- * contiguous memory region into DMA address space. It is used by all
- * mapping functions provided with this IOMMU driver.
- * Must be called with the domain lock held.
- */
-static dma_addr_t __map_single(struct device *dev,
-			       struct dma_ops_domain *dma_dom,
-			       phys_addr_t paddr,
-			       size_t size,
-			       enum dma_data_direction direction,
-			       u64 dma_mask)
-{
-	dma_addr_t offset = paddr & ~PAGE_MASK;
-	dma_addr_t address, start, ret;
-	unsigned int pages;
-	int prot = 0;
-	int i;
-
-	pages = iommu_num_pages(paddr, size, PAGE_SIZE);
-	paddr &= PAGE_MASK;
-
-	address = dma_ops_alloc_iova(dev, dma_dom, pages, dma_mask);
-	if (!address)
-		goto out;
-
-	prot = dir2prot(direction);
-
-	start = address;
-	for (i = 0; i < pages; ++i) {
-		ret = iommu_map_page(&dma_dom->domain, start, paddr,
-				     PAGE_SIZE, prot, GFP_ATOMIC);
-		if (ret)
-			goto out_unmap;
-
-		paddr += PAGE_SIZE;
-		start += PAGE_SIZE;
-	}
-	address += offset;
-
-	domain_flush_np_cache(&dma_dom->domain, address, size);
-
-out:
-	return address;
-
-out_unmap:
-
-	for (--i; i >= 0; --i) {
-		start -= PAGE_SIZE;
-		iommu_unmap_page(&dma_dom->domain, start, PAGE_SIZE);
-	}
-
-	domain_flush_tlb(&dma_dom->domain);
-	domain_flush_complete(&dma_dom->domain);
-
-	dma_ops_free_iova(dma_dom, address, pages);
-
-	return DMA_MAPPING_ERROR;
-}
-
-/*
- * Does the reverse of the __map_single function. Must be called with
- * the domain lock held too
- */
-static void __unmap_single(struct dma_ops_domain *dma_dom,
-			   dma_addr_t dma_addr,
-			   size_t size,
-			   int dir)
-{
-	dma_addr_t i, start;
-	unsigned int pages;
-
-	pages = iommu_num_pages(dma_addr, size, PAGE_SIZE);
-	dma_addr &= PAGE_MASK;
-	start = dma_addr;
-
-	for (i = 0; i < pages; ++i) {
-		iommu_unmap_page(&dma_dom->domain, start, PAGE_SIZE);
-		start += PAGE_SIZE;
-	}
-
-	if (amd_iommu_unmap_flush) {
-		domain_flush_tlb(&dma_dom->domain);
-		domain_flush_complete(&dma_dom->domain);
-		dma_ops_free_iova(dma_dom, dma_addr, pages);
-	} else {
-		pages = __roundup_pow_of_two(pages);
-		queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);
-	}
-}
-
-/*
- * The exported map_single function for dma_ops.
- */
-static dma_addr_t map_page(struct device *dev, struct page *page,
-			   unsigned long offset, size_t size,
-			   enum dma_data_direction dir,
-			   unsigned long attrs)
-{
-	phys_addr_t paddr = page_to_phys(page) + offset;
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	u64 dma_mask;
-
-	domain = get_domain(dev);
-	if (PTR_ERR(domain) == -EINVAL)
-		return (dma_addr_t)paddr;
-	else if (IS_ERR(domain))
-		return DMA_MAPPING_ERROR;
-
-	dma_mask = *dev->dma_mask;
-	dma_dom = to_dma_ops_domain(domain);
-
-	return __map_single(dev, dma_dom, paddr, size, dir, dma_mask);
-}
-
-/*
- * The exported unmap_single function for dma_ops.
- */
-static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
-		       enum dma_data_direction dir, unsigned long attrs)
-{
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-
-	domain = get_domain(dev);
-	if (IS_ERR(domain))
-		return;
-
-	dma_dom = to_dma_ops_domain(domain);
-
-	__unmap_single(dma_dom, dma_addr, size, dir);
-}
-
-static int sg_num_pages(struct device *dev,
-			struct scatterlist *sglist,
-			int nelems)
-{
-	unsigned long mask, boundary_size;
-	struct scatterlist *s;
-	int i, npages = 0;
-
-	mask          = dma_get_seg_boundary(dev);
-	boundary_size = mask + 1 ? ALIGN(mask + 1, PAGE_SIZE) >> PAGE_SHIFT :
-				   1UL << (BITS_PER_LONG - PAGE_SHIFT);
-
-	for_each_sg(sglist, s, nelems, i) {
-		int p, n;
-
-		s->dma_address = npages << PAGE_SHIFT;
-		p = npages % boundary_size;
-		n = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
-		if (p + n > boundary_size)
-			npages += boundary_size - p;
-		npages += n;
-	}
-
-	return npages;
-}
-
-/*
- * The exported map_sg function for dma_ops (handles scatter-gather
- * lists).
- */
-static int map_sg(struct device *dev, struct scatterlist *sglist,
-		  int nelems, enum dma_data_direction direction,
-		  unsigned long attrs)
-{
-	int mapped_pages = 0, npages = 0, prot = 0, i;
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	struct scatterlist *s;
-	unsigned long address;
-	u64 dma_mask;
-	int ret;
-
-	domain = get_domain(dev);
-	if (IS_ERR(domain))
-		return 0;
-
-	dma_dom  = to_dma_ops_domain(domain);
-	dma_mask = *dev->dma_mask;
-
-	npages = sg_num_pages(dev, sglist, nelems);
-
-	address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);
-	if (address == DMA_MAPPING_ERROR)
-		goto out_err;
-
-	prot = dir2prot(direction);
-
-	/* Map all sg entries */
-	for_each_sg(sglist, s, nelems, i) {
-		int j, pages = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
-
-		for (j = 0; j < pages; ++j) {
-			unsigned long bus_addr, phys_addr;
-
-			bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
-			phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
-			ret = iommu_map_page(domain, bus_addr, phys_addr, PAGE_SIZE, prot, GFP_ATOMIC);
-			if (ret)
-				goto out_unmap;
-
-			mapped_pages += 1;
-		}
-	}
-
-	/* Everything is mapped - write the right values into s->dma_address */
-	for_each_sg(sglist, s, nelems, i) {
-		/*
-		 * Add in the remaining piece of the scatter-gather offset that
-		 * was masked out when we were determining the physical address
-		 * via (sg_phys(s) & PAGE_MASK) earlier.
-		 */
-		s->dma_address += address + (s->offset & ~PAGE_MASK);
-		s->dma_length   = s->length;
-	}
-
-	domain_flush_np_cache(domain, s->dma_address, s->dma_length);
-
-	return nelems;
-
-out_unmap:
-	dev_err(dev, "IOMMU mapping error in map_sg (io-pages: %d reason: %d)\n",
-		npages, ret);
-
-	for_each_sg(sglist, s, nelems, i) {
-		int j, pages = iommu_num_pages(sg_phys(s), s->length, PAGE_SIZE);
-
-		for (j = 0; j < pages; ++j) {
-			unsigned long bus_addr;
-
-			bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
-			iommu_unmap_page(domain, bus_addr, PAGE_SIZE);
-
-			if (--mapped_pages == 0)
-				goto out_free_iova;
-		}
-	}
-
-out_free_iova:
-	free_iova_fast(&dma_dom->iovad, address >> PAGE_SHIFT, npages);
-
-out_err:
-	return 0;
-}
-
-/*
- * The exported map_sg function for dma_ops (handles scatter-gather
- * lists).
- */
-static void unmap_sg(struct device *dev, struct scatterlist *sglist,
-		     int nelems, enum dma_data_direction dir,
-		     unsigned long attrs)
-{
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	unsigned long startaddr;
-	int npages = 2;
-
-	domain = get_domain(dev);
-	if (IS_ERR(domain))
-		return;
-
-	startaddr = sg_dma_address(sglist) & PAGE_MASK;
-	dma_dom   = to_dma_ops_domain(domain);
-	npages    = sg_num_pages(dev, sglist, nelems);
-
-	__unmap_single(dma_dom, startaddr, npages << PAGE_SHIFT, dir);
-}
-
-/*
- * The exported alloc_coherent function for dma_ops.
- */
-static void *alloc_coherent(struct device *dev, size_t size,
-			    dma_addr_t *dma_addr, gfp_t flag,
-			    unsigned long attrs)
-{
-	u64 dma_mask = dev->coherent_dma_mask;
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	struct page *page;
-
-	domain = get_domain(dev);
-	if (PTR_ERR(domain) == -EINVAL) {
-		page = alloc_pages(flag, get_order(size));
-		*dma_addr = page_to_phys(page);
-		return page_address(page);
-	} else if (IS_ERR(domain))
-		return NULL;
-
-	dma_dom   = to_dma_ops_domain(domain);
-	size	  = PAGE_ALIGN(size);
-	dma_mask  = dev->coherent_dma_mask;
-	flag     &= ~(__GFP_DMA | __GFP_HIGHMEM | __GFP_DMA32);
-	flag     |= __GFP_ZERO;
-
-	page = alloc_pages(flag | __GFP_NOWARN,  get_order(size));
-	if (!page) {
-		if (!gfpflags_allow_blocking(flag))
-			return NULL;
-
-		page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
-					get_order(size), flag & __GFP_NOWARN);
-		if (!page)
-			return NULL;
-	}
-
-	if (!dma_mask)
-		dma_mask = *dev->dma_mask;
-
-	*dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
-				 size, DMA_BIDIRECTIONAL, dma_mask);
-
-	if (*dma_addr == DMA_MAPPING_ERROR)
-		goto out_free;
-
-	return page_address(page);
-
-out_free:
-
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, get_order(size));
-
-	return NULL;
-}
-
-/*
- * The exported free_coherent function for dma_ops.
- */
-static void free_coherent(struct device *dev, size_t size,
-			  void *virt_addr, dma_addr_t dma_addr,
-			  unsigned long attrs)
-{
-	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
-	struct page *page;
-
-	page = virt_to_page(virt_addr);
-	size = PAGE_ALIGN(size);
-
-	domain = get_domain(dev);
-	if (IS_ERR(domain))
-		goto free_mem;
-
-	dma_dom = to_dma_ops_domain(domain);
-
-	__unmap_single(dma_dom, dma_addr, size, DMA_BIDIRECTIONAL);
-
-free_mem:
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, get_order(size));
-}
-
-/*
- * This function is called by the DMA layer to find out if we can handle a
- * particular device. It is part of the dma_ops.
- */
-static int amd_iommu_dma_supported(struct device *dev, u64 mask)
-{
-	if (!dma_direct_supported(dev, mask))
-		return 0;
-	return check_device(dev);
-}
-
-static const struct dma_map_ops amd_iommu_dma_ops = {
-	.alloc		= alloc_coherent,
-	.free		= free_coherent,
-	.map_page	= map_page,
-	.unmap_page	= unmap_page,
-	.map_sg		= map_sg,
-	.unmap_sg	= unmap_sg,
-	.dma_supported	= amd_iommu_dma_supported,
-};
-
-static int init_reserved_iova_ranges(void)
-{
-	struct pci_dev *pdev = NULL;
-	struct iova *val;
-
-	init_iova_domain(&reserved_iova_ranges, PAGE_SIZE, IOVA_START_PFN);
-
-	lockdep_set_class(&reserved_iova_ranges.iova_rbtree_lock,
-			  &reserved_rbtree_key);
-
-	/* MSI memory range */
-	val = reserve_iova(&reserved_iova_ranges,
-			   IOVA_PFN(MSI_RANGE_START), IOVA_PFN(MSI_RANGE_END));
-	if (!val) {
-		pr_err("Reserving MSI range failed\n");
-		return -ENOMEM;
-	}
-
-	/* HT memory range */
-	val = reserve_iova(&reserved_iova_ranges,
-			   IOVA_PFN(HT_RANGE_START), IOVA_PFN(HT_RANGE_END));
-	if (!val) {
-		pr_err("Reserving HT range failed\n");
-		return -ENOMEM;
-	}
-
-	/*
-	 * Memory used for PCI resources
-	 * FIXME: Check whether we can reserve the PCI-hole completly
-	 */
-	for_each_pci_dev(pdev) {
-		int i;
-
-		for (i = 0; i < PCI_NUM_RESOURCES; ++i) {
-			struct resource *r = &pdev->resource[i];
-
-			if (!(r->flags & IORESOURCE_MEM))
-				continue;
-
-			val = reserve_iova(&reserved_iova_ranges,
-					   IOVA_PFN(r->start),
-					   IOVA_PFN(r->end));
-			if (!val) {
-				pci_err(pdev, "Reserve pci-resource range %pR failed\n", r);
-				return -ENOMEM;
-			}
-		}
-	}
-
-	return 0;
-}
-
 int __init amd_iommu_init_api(void)
 {
 	int ret, err = 0;
@@ -2841,10 +2305,6 @@ int __init amd_iommu_init_api(void)
 	if (ret)
 		return ret;
 
-	ret = init_reserved_iova_ranges();
-	if (ret)
-		return ret;
-
 	err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
 	if (err)
 		return err;
@@ -2950,7 +2410,6 @@ static struct protection_domain *protection_domain_alloc(void)
 static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 {
 	struct protection_domain *pdomain;
-	struct dma_ops_domain *dma_domain;
 
 	switch (type) {
 	case IOMMU_DOMAIN_UNMANAGED:
@@ -2971,12 +2430,11 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 
 		break;
 	case IOMMU_DOMAIN_DMA:
-		dma_domain = dma_ops_domain_alloc();
-		if (!dma_domain) {
+		pdomain = dma_ops_domain_alloc();
+		if (!pdomain) {
 			pr_err("Failed to allocate\n");
 			return NULL;
 		}
-		pdomain = &dma_domain->domain;
 		break;
 	case IOMMU_DOMAIN_IDENTITY:
 		pdomain = protection_domain_alloc();
@@ -2995,7 +2453,6 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
 static void amd_iommu_domain_free(struct iommu_domain *dom)
 {
 	struct protection_domain *domain;
-	struct dma_ops_domain *dma_dom;
 
 	domain = to_pdomain(dom);
 
@@ -3010,8 +2467,7 @@ static void amd_iommu_domain_free(struct iommu_domain *dom)
 	switch (dom->type) {
 	case IOMMU_DOMAIN_DMA:
 		/* Now release the domain */
-		dma_dom = to_dma_ops_domain(domain);
-		dma_ops_domain_free(dma_dom);
+		dma_ops_domain_free(domain);
 		break;
 	default:
 		if (domain->mode != PAGE_MODE_NONE)
@@ -3067,6 +2523,7 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 		return -EINVAL;
 
 	dev_data = dev->archdata.iommu;
+	dev_data->defer_attach = false;
 
 	iommu = amd_iommu_rlookup_table[dev_data->devid];
 	if (!iommu)
@@ -3223,19 +2680,6 @@ static void amd_iommu_put_resv_regions(struct device *dev,
 		kfree(entry);
 }
 
-static void amd_iommu_apply_resv_region(struct device *dev,
-				      struct iommu_domain *domain,
-				      struct iommu_resv_region *region)
-{
-	struct dma_ops_domain *dma_dom = to_dma_ops_domain(to_pdomain(domain));
-	unsigned long start, end;
-
-	start = IOVA_PFN(region->start);
-	end   = IOVA_PFN(region->start + region->length - 1);
-
-	WARN_ON_ONCE(reserve_iova(&dma_dom->iovad, start, end) == NULL);
-}
-
 static bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
 					 struct device *dev)
 {
@@ -3268,9 +2712,9 @@ const struct iommu_ops amd_iommu_ops = {
 	.add_device = amd_iommu_add_device,
 	.remove_device = amd_iommu_remove_device,
 	.device_group = amd_iommu_device_group,
+	.domain_get_attr = amd_iommu_domain_get_attr,
 	.get_resv_regions = amd_iommu_get_resv_regions,
 	.put_resv_regions = amd_iommu_put_resv_regions,
-	.apply_resv_region = amd_iommu_apply_resv_region,
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
@@ -3585,9 +3029,23 @@ EXPORT_SYMBOL(amd_iommu_complete_ppr);
 struct iommu_domain *amd_iommu_get_v2_domain(struct pci_dev *pdev)
 {
 	struct protection_domain *pdomain;
+	struct iommu_domain *io_domain;
+	struct device *dev = &pdev->dev;
+
+	if (!check_device(dev))
+		return NULL;
+
+	pdomain = get_dev_data(dev)->domain;
+	if (pdomain == NULL && get_dev_data(dev)->defer_attach) {
+		get_dev_data(dev)->defer_attach = false;
+		io_domain = iommu_get_domain_for_dev(dev);
+		pdomain = to_pdomain(io_domain);
+		attach_device(dev, pdomain);
+	}
+	if (pdomain == NULL)
+		return NULL;
 
-	pdomain = get_domain(&pdev->dev);
-	if (IS_ERR(pdomain))
+	if (!dma_ops_domain(pdomain))
 		return NULL;
 
 	/* Only return IOMMUv2 domains */
-- 
2.17.1



* Re: [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices
  2019-05-06 18:52 ` [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices Tom Murphy
@ 2019-05-07  6:40   ` Christoph Hellwig
  2019-05-15 12:46     ` Tom Murphy
  0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2019-05-07  6:40 UTC (permalink / raw)
  To: Tom Murphy
  Cc: iommu, murphyt7, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

On Mon, May 06, 2019 at 07:52:04PM +0100, Tom Murphy wrote:
> +static int handle_deferred_device(struct device *dev)
> +{
> +	struct iommu_domain *domain;
> +	const struct iommu_ops *ops;
> +
> +	if (!is_kdump_kernel())
> +		return 0;
> +
> +	domain = iommu_get_domain_for_dev(dev);

> -	dma_handle =__iommu_dma_map(dev, phys, size,
> +	if (unlikely(handle_deferred_device(dev)))
> +		return DMA_MAPPING_ERROR;
> +
> +	dma_handle = __iommu_dma_map(dev, phys, size,

__iommu_dma_map already looks up the domain, and as far as I can
tell all callers need the handle_deferred_device call.  Should we
just move it to there and pass the domain from the caller?

Also shouldn't the iommu_attach_device call inside
handle_deferred_device also get an unlikely marker?


* Re: [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices
  2019-05-07  6:40   ` Christoph Hellwig
@ 2019-05-15 12:46     ` Tom Murphy
  0 siblings, 0 replies; 13+ messages in thread
From: Tom Murphy @ 2019-05-15 12:46 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: iommu, Tom Murphy, Joerg Roedel, Will Deacon, Robin Murphy,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski,
	David Woodhouse, Andy Gross, David Brown, Matthias Brugger,
	Rob Clark, Heiko Stuebner, Gerald Schaefer, Thierry Reding,
	Jonathan Hunter, linux-kernel, linux-arm-kernel,
	linux-samsung-soc, linux-arm-msm, linux-mediatek, linux-rockchip,
	linux-s390, linux-tegra

Like this?

In that case we also need to add a handle_deferred_device() call to
iommu_dma_alloc_remap().

From 862aeebb601008cf863e3aff4ff8ed7cefebeefa Mon Sep 17 00:00:00 2001
From: Tom Murphy <tmurphy@tmurphy-419tom-0.sjc.aristanetworks.com>
Date: Wed, 15 May 2019 05:43:25 -0700
Subject: [PATCH] iommu/dma-iommu: Handle deferred devices

Handle devices which defer their attach to the iommu in the dma-iommu api.

Signed-off-by: Tom Murphy <tmurphy@arista.com>
---
 drivers/iommu/dma-iommu.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7f313cfa9..a48ae906d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -22,6 +22,7 @@
 #include <linux/pci.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
+#include <linux/crash_dump.h>

 struct iommu_dma_msi_page {
     struct list_head    list;
@@ -323,6 +324,21 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
     return iova_reserve_iommu_regions(dev, domain);
 }

+static int handle_deferred_device(struct device *dev,
+        struct iommu_domain *domain)
+{
+    const struct iommu_ops *ops = domain->ops;
+
+    if (!is_kdump_kernel())
+        return 0;
+
+    if (unlikely(ops->is_attach_deferred &&
+            ops->is_attach_deferred(domain, dev)))
+        return iommu_attach_device(domain, dev);
+
+    return 0;
+}
+
 /**
  * dma_info_to_prot - Translate DMA API directions and attributes to IOMMU API
  *                    page flags.
@@ -432,6 +448,9 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
     size_t iova_off = 0;
     dma_addr_t iova;

+    if (unlikely(handle_deferred_device(dev, domain)))
+        return DMA_MAPPING_ERROR;
+
     if (cookie->type == IOMMU_DMA_IOVA_COOKIE) {
         iova_off = iova_offset(&cookie->iovad, phys);
         size = iova_align(&cookie->iovad, size + iova_off);
@@ -609,6 +628,9 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
     dma_addr_t iova;
     void *vaddr;

+    if (unlikely(handle_deferred_device(dev, domain)))
+        return NULL;
+
     *dma_handle = DMA_MAPPING_ERROR;

     min_size = alloc_sizes & -alloc_sizes;
@@ -836,7 +858,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
     bool coherent = dev_is_dma_coherent(dev);
     dma_addr_t dma_handle;

-    dma_handle =__iommu_dma_map(dev, phys, size,
+    dma_handle = __iommu_dma_map(dev, phys, size,
             dma_info_to_prot(dir, coherent, attrs),
             iommu_get_dma_domain(dev));
     if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
@@ -954,6 +976,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
     unsigned long mask = dma_get_seg_boundary(dev);
     int i;

+    if (unlikely(handle_deferred_device(dev, domain)))
+        return 0;
+
     if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
         iommu_dma_sync_sg_for_device(dev, sg, nents, dir);

-- 
2.20.0


* Re: [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
  2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
                   ` (3 preceding siblings ...)
  2019-05-06 18:52 ` [PATCH v3 4/4] iommu/amd: Convert AMD iommu driver to the dma-iommu api Tom Murphy
@ 2019-06-03 10:51 ` Joerg Roedel
  2019-06-03 11:27   ` Tom Murphy
  4 siblings, 1 reply; 13+ messages in thread
From: Joerg Roedel @ 2019-06-03 10:51 UTC (permalink / raw)
  To: Tom Murphy
  Cc: iommu, murphyt7, Will Deacon, Robin Murphy, Marek Szyprowski,
	Kukjin Kim, Krzysztof Kozlowski, David Woodhouse, Andy Gross,
	David Brown, Matthias Brugger, Rob Clark, Heiko Stuebner,
	Gerald Schaefer, Thierry Reding, Jonathan Hunter, linux-kernel,
	linux-arm-kernel, linux-samsung-soc, linux-arm-msm,
	linux-mediatek, linux-rockchip, linux-s390, linux-tegra

Hi Tom,

On Mon, May 06, 2019 at 07:52:02PM +0100, Tom Murphy wrote:
> Convert the AMD iommu driver to the dma-iommu api. Remove the iova
> handling and reserve region code from the AMD iommu driver.

Thank you for your work on this! I appreciate it very much, but I am not
sure we are ready to make that move for the AMD and Intel IOMMU drivers
yet.

My main concern right now is that these changes will add a per-page
table lock into the fast-path for dma-mapping operations. There has been
much work in the past to remove all locking from these code-paths and
make it scalable on x86.

The dma-ops implementations in the x86 IOMMU drivers have the benefit
that they can call their page-table manipulation functions directly and
without locks, because they can make the necessary assumptions. The
IOMMU-API mapping/unmapping path can't make these assumptions because it
is also used for non-DMA-API use-cases.

So before we can move the AMD and Intel drivers to the generic DMA-API
implementation we need to solve this problem to not introduce new
scalability regressions.

Regards,

	Joerg



* Re: [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
  2019-06-03 10:51 ` [PATCH v3 0/4] iommu/amd: Convert the " Joerg Roedel
@ 2019-06-03 11:27   ` Tom Murphy
  0 siblings, 0 replies; 13+ messages in thread
From: Tom Murphy @ 2019-06-03 11:27 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: iommu, Tom Murphy, Will Deacon, Robin Murphy, Marek Szyprowski,
	Kukjin Kim, Krzysztof Kozlowski, David Woodhouse, Andy Gross,
	David Brown, Matthias Brugger, Rob Clark, Heiko Stuebner,
	Gerald Schaefer, Thierry Reding, Jonathan Hunter, linux-kernel,
	linux-arm-kernel, linux-samsung-soc, linux-arm-msm,
	linux-mediatek, linux-rockchip, linux-s390, linux-tegra

On Mon, Jun 3, 2019 at 11:52 AM Joerg Roedel <joro@8bytes.org> wrote:
>
> Hi Tom,
>
> On Mon, May 06, 2019 at 07:52:02PM +0100, Tom Murphy wrote:
> > Convert the AMD iommu driver to the dma-iommu api. Remove the iova
> > handling and reserve region code from the AMD iommu driver.
>
> Thank you for your work on this! I appreciate that much, but I am not
> sure we are ready to make that move for the AMD and Intel IOMMU drivers
> yet.
>
> My main concern right now is that these changes will add a per-page
> table lock into the fast-path for dma-mapping operations. There has been
> much work in the past to remove all locking from these code-paths and
> make it scalable on x86.

Where is the locking introduced? Intel doesn't use a lock in its
iommu_map function:
https://github.com/torvalds/linux/blob/f2c7c76c5d0a443053e94adb9f0918fa2fb85c3a/drivers/iommu/intel-iommu.c#L5302
because it cleverly uses cmpxchg64 to avoid taking locks:
https://github.com/torvalds/linux/blob/f2c7c76c5d0a443053e94adb9f0918fa2fb85c3a/drivers/iommu/intel-iommu.c#L900
And the locking in AMD's iommu_map function can be removed (I have
removed it in my patch set) because it does the same thing as Intel:
https://github.com/torvalds/linux/blob/f2c7c76c5d0a443053e94adb9f0918fa2fb85c3a/drivers/iommu/amd_iommu.c#L1486

Is there something I'm missing?
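
The lock-free pattern being referenced looks roughly like this -- a
simplified sketch, not the actual intel-iommu or AMD code, with
PTE_PRESENT and PTE_ADDR_MASK as placeholder names:

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/types.h>

#define PTE_PRESENT	(1ULL << 0)	/* placeholder flag */
#define PTE_ADDR_MASK	(~0xfffULL)	/* placeholder mask */

/* Racing mappers both allocate a page-table page; whoever wins the
 * cmpxchg64() installs theirs, the loser frees its copy and continues
 * with the winner's. No mutex is taken at any point. */
static u64 *pte_get_page(u64 *pte, gfp_t gfp)
{
	void *page = (void *)get_zeroed_page(gfp);
	u64 old, new;

	if (!page)
		return NULL;

	new = virt_to_phys(page) | PTE_PRESENT;
	old = cmpxchg64(pte, 0ULL, new);
	if (old) {
		/* Lost the race: reuse the page another CPU installed. */
		free_page((unsigned long)page);
		return phys_to_virt(old & PTE_ADDR_MASK);
	}
	return page;
}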


* Re: [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map
  2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
@ 2019-06-04 18:11   ` Robin Murphy
  2019-06-04 20:55     ` Tom Murphy
  2019-06-04 21:24     ` Rob Clark
  2019-06-05  5:50   ` Christoph Hellwig
  1 sibling, 2 replies; 13+ messages in thread
From: Robin Murphy @ 2019-06-04 18:11 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: murphyt7, Joerg Roedel, Will Deacon, Marek Szyprowski,
	Kukjin Kim, Krzysztof Kozlowski, David Woodhouse, Andy Gross,
	David Brown, Matthias Brugger, Rob Clark, Heiko Stuebner,
	Gerald Schaefer, Thierry Reding, Jonathan Hunter, linux-kernel,
	linux-arm-kernel, linux-samsung-soc, linux-arm-msm,
	linux-mediatek, linux-rockchip, linux-s390, linux-tegra

On 06/05/2019 19:52, Tom Murphy wrote:
> Add a gfp_t parameter to the iommu_ops::map function.
> Remove the needless locking in the AMD iommu driver.
> 
> The iommu_ops::map function (or the iommu_map function which calls it)
> was always supposed to be sleepable (according to Joerg's comment in
> this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
> should probably have had a "might_sleep()" since it was written. However
> currently the dma-iommu api can call iommu_map in an atomic context,
> which it shouldn't do. This doesn't cause any problems because any iommu
> driver which uses the dma-iommu api uses GFP_ATOMIC in its
> iommu_ops::map function. But doing this wastes the memory allocator's
> atomic pools.

Hmm, in some cases iommu_ops::unmap may need to allocate as well, 
primarily if it needs to split a hugepage mapping. Should we pass flags 
through there as well, or are we prepared to assume that that case will 
happen rarely enough that it's fair to just assume GFP_ATOMIC? It won't 
happen for DMA ops, but it's conceivable that other users such as GPU 
drivers might make partial unmaps, and I doubt we could totally rule out 
the wackiest ones doing so from non-sleeping contexts.
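
Concretely, the scenario would look something like this (a hypothetical
illustration with made-up sizes, not taken from any driver):

#include <linux/iommu.h>
#include <linux/sizes.h>

/* The IOMMU may back the 2MB map below with a single hugepage entry,
 * so unmapping just the first 4KB forces it to split that entry into
 * smaller ones -- i.e. to allocate a new page-table page on the
 * unmap path, where no gfp_t is currently passed in. */
static void partial_unmap_example(struct iommu_domain *domain,
				  unsigned long iova, phys_addr_t paddr)
{
	if (iommu_map(domain, iova, paddr, SZ_2M, IOMMU_READ | IOMMU_WRITE))
		return;

	iommu_unmap(domain, iova, SZ_4K);	/* may have to split */
}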

Robin.

> We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap.
> iommu_map doesn’t lock while mapping and so no two calls should touch
> the same iova range. The AMD driver already handles the page table page
> allocations without locks so we can safely remove the locks.
> 
> Signed-off-by: Tom Murphy <tmurphy@arista.com>
> ---
>   drivers/iommu/amd_iommu.c      | 14 ++++-------
>   drivers/iommu/arm-smmu-v3.c    |  2 +-
>   drivers/iommu/arm-smmu.c       |  2 +-
>   drivers/iommu/dma-iommu.c      |  6 ++---
>   drivers/iommu/exynos-iommu.c   |  2 +-
>   drivers/iommu/intel-iommu.c    |  2 +-
>   drivers/iommu/iommu.c          | 43 +++++++++++++++++++++++++++++-----
>   drivers/iommu/ipmmu-vmsa.c     |  2 +-
>   drivers/iommu/msm_iommu.c      |  2 +-
>   drivers/iommu/mtk_iommu.c      |  2 +-
>   drivers/iommu/mtk_iommu_v1.c   |  2 +-
>   drivers/iommu/omap-iommu.c     |  2 +-
>   drivers/iommu/qcom_iommu.c     |  2 +-
>   drivers/iommu/rockchip-iommu.c |  2 +-
>   drivers/iommu/s390-iommu.c     |  2 +-
>   drivers/iommu/tegra-gart.c     |  2 +-
>   drivers/iommu/tegra-smmu.c     |  2 +-
>   include/linux/iommu.h          | 21 ++++++++++++++++-
>   18 files changed, 78 insertions(+), 34 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index ebd062522cf5..ea3a5dc61bb0 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -3092,7 +3092,8 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
>   }
>   
>   static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
> -			 phys_addr_t paddr, size_t page_size, int iommu_prot)
> +			 phys_addr_t paddr, size_t page_size, int iommu_prot,
> +			 gfp_t gfp)
>   {
>   	struct protection_domain *domain = to_pdomain(dom);
>   	int prot = 0;
> @@ -3106,9 +3107,7 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
>   	if (iommu_prot & IOMMU_WRITE)
>   		prot |= IOMMU_PROT_IW;
>   
> -	mutex_lock(&domain->api_lock);
> -	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
> -	mutex_unlock(&domain->api_lock);
> +	ret = iommu_map_page(domain, iova, paddr, page_size, prot, gfp);
>   
>   	domain_flush_np_cache(domain, iova, page_size);
>   
> @@ -3119,16 +3118,11 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
>   			   size_t page_size)
>   {
>   	struct protection_domain *domain = to_pdomain(dom);
> -	size_t unmap_size;
>   
>   	if (domain->mode == PAGE_MODE_NONE)
>   		return 0;
>   
> -	mutex_lock(&domain->api_lock);
> -	unmap_size = iommu_unmap_page(domain, iova, page_size);
> -	mutex_unlock(&domain->api_lock);
> -
> -	return unmap_size;
> +	return iommu_unmap_page(domain, iova, page_size);
>   }
>   
>   static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index d3880010c6cf..e5c48089b49f 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -1777,7 +1777,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>   }
>   
>   static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot)
> +			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>   
> diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> index 045d93884164..2d50db55b788 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -1286,7 +1286,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
>   }
>   
>   static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			phys_addr_t paddr, size_t size, int prot)
> +			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>   	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index fa5713a4f6f8..7a96c2c8f56b 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -440,7 +440,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>   	if (!iova)
>   		return DMA_MAPPING_ERROR;
>   
> -	if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
> +	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
>   		iommu_dma_free_iova(cookie, iova, size);
>   		return DMA_MAPPING_ERROR;
>   	}
> @@ -641,7 +641,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
>   			arch_dma_prep_coherent(sg_page(sg), sg->length);
>   	}
>   
> -	if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
> +	if (iommu_map_sg_atomic(domain, iova, sgt.sgl, sgt.orig_nents, ioprot)
>   			< size)
>   		goto out_free_sg;
>   
> @@ -1003,7 +1003,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	 * We'll leave any physical concatenation to the IOMMU driver's
>   	 * implementation - it knows better than we do.
>   	 */
> -	if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
> +	if (iommu_map_sg_atomic(domain, iova, sg, nents, prot) < iova_len)
>   		goto out_free_iova;
>   
>   	return __finalise_sg(dev, sg, nents, iova);
> diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
> index 05c6bc099d62..46414234c179 100644
> --- a/drivers/iommu/exynos-iommu.c
> +++ b/drivers/iommu/exynos-iommu.c
> @@ -1078,7 +1078,7 @@ static int lv2set_page(sysmmu_pte_t *pent, phys_addr_t paddr, size_t size,
>    */
>   static int exynos_iommu_map(struct iommu_domain *iommu_domain,
>   			    unsigned long l_iova, phys_addr_t paddr, size_t size,
> -			    int prot)
> +			    int prot, gfp_t gfp)
>   {
>   	struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
>   	sysmmu_pte_t *entry;
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 28cb713d728c..4f0ff28f7cb9 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -5137,7 +5137,7 @@ static void intel_iommu_detach_device(struct iommu_domain *domain,
>   
>   static int intel_iommu_map(struct iommu_domain *domain,
>   			   unsigned long iova, phys_addr_t hpa,
> -			   size_t size, int iommu_prot)
> +			   size_t size, int iommu_prot, gfp_t gfp)
>   {
>   	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>   	u64 max_addr;
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 109de67d5d72..1b49841c177e 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1584,8 +1584,8 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
>   	return pgsize;
>   }
>   
> -int iommu_map(struct iommu_domain *domain, unsigned long iova,
> -	      phys_addr_t paddr, size_t size, int prot)
> +int __iommu_map(struct iommu_domain *domain, unsigned long iova,
> +	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	const struct iommu_ops *ops = domain->ops;
>   	unsigned long orig_iova = iova;
> @@ -1622,8 +1622,8 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
>   
>   		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
>   			 iova, &paddr, pgsize);
> +		ret = ops->map(domain, iova, paddr, pgsize, prot, gfp);
>   
> -		ret = ops->map(domain, iova, paddr, pgsize, prot);
>   		if (ret)
>   			break;
>   
> @@ -1643,8 +1643,22 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
>   
>   	return ret;
>   }
> +
> +int iommu_map(struct iommu_domain *domain, unsigned long iova,
> +	      phys_addr_t paddr, size_t size, int prot)
> +{
> +	might_sleep();
> +	return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
> +}
>   EXPORT_SYMBOL_GPL(iommu_map);
>   
> +int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
> +	      phys_addr_t paddr, size_t size, int prot)
> +{
> +	return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
> +}
> +EXPORT_SYMBOL_GPL(iommu_map_atomic);
> +
>   static size_t __iommu_unmap(struct iommu_domain *domain,
>   			    unsigned long iova, size_t size,
>   			    bool sync)
> @@ -1719,8 +1733,9 @@ size_t iommu_unmap_fast(struct iommu_domain *domain,
>   }
>   EXPORT_SYMBOL_GPL(iommu_unmap_fast);
>   
> -size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> -		    struct scatterlist *sg, unsigned int nents, int prot)
> +size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> +		    struct scatterlist *sg, unsigned int nents, int prot,
> +		    gfp_t gfp)
>   {
>   	size_t len = 0, mapped = 0;
>   	phys_addr_t start;
> @@ -1731,7 +1746,9 @@ size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>   		phys_addr_t s_phys = sg_phys(sg);
>   
>   		if (len && s_phys != start + len) {
> -			ret = iommu_map(domain, iova + mapped, start, len, prot);
> +			ret = __iommu_map(domain, iova + mapped, start,
> +					len, prot, gfp);
> +
>   			if (ret)
>   				goto out_err;
>   
> @@ -1759,8 +1776,22 @@ size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>   	return 0;
>   
>   }
> +
> +size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> +		    struct scatterlist *sg, unsigned int nents, int prot)
> +{
> +	might_sleep();
> +	return __iommu_map_sg(domain, iova, sg, nents, prot, GFP_KERNEL);
> +}
>   EXPORT_SYMBOL_GPL(iommu_map_sg);
>   
> +size_t iommu_map_sg_atomic(struct iommu_domain *domain, unsigned long iova,
> +		    struct scatterlist *sg, unsigned int nents, int prot)
> +{
> +	return __iommu_map_sg(domain, iova, sg, nents, prot, GFP_ATOMIC);
> +}
> +EXPORT_SYMBOL_GPL(iommu_map_sg_atomic);
> +
>   int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
>   			       phys_addr_t paddr, u64 size, int prot)
>   {
> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> index 9a380c10655e..e005c83d49d8 100644
> --- a/drivers/iommu/ipmmu-vmsa.c
> +++ b/drivers/iommu/ipmmu-vmsa.c
> @@ -707,7 +707,7 @@ static void ipmmu_detach_device(struct iommu_domain *io_domain,
>   }
>   
>   static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
> -		     phys_addr_t paddr, size_t size, int prot)
> +		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
>   
> diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
> index 9fb0eb7a4d02..3f6bf3653aa2 100644
> --- a/drivers/iommu/msm_iommu.c
> +++ b/drivers/iommu/msm_iommu.c
> @@ -508,7 +508,7 @@ static void msm_iommu_detach_dev(struct iommu_domain *domain,
>   }
>   
>   static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			 phys_addr_t pa, size_t len, int prot)
> +			 phys_addr_t pa, size_t len, int prot, gfp_t gfp)
>   {
>   	struct msm_priv *priv = to_msm_priv(domain);
>   	unsigned long flags;
> diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> index de3e02277b70..3176b9b54d4d 100644
> --- a/drivers/iommu/mtk_iommu.c
> +++ b/drivers/iommu/mtk_iommu.c
> @@ -364,7 +364,7 @@ static void mtk_iommu_detach_device(struct iommu_domain *domain,
>   }
>   
>   static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			 phys_addr_t paddr, size_t size, int prot)
> +			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
>   	unsigned long flags;
> diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
> index 52b01e3a49df..e7b1907faec1 100644
> --- a/drivers/iommu/mtk_iommu_v1.c
> +++ b/drivers/iommu/mtk_iommu_v1.c
> @@ -303,7 +303,7 @@ static void mtk_iommu_detach_device(struct iommu_domain *domain,
>   }
>   
>   static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			 phys_addr_t paddr, size_t size, int prot)
> +			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
>   	unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
> diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
> index d2fb347aa4ff..c1d5a71285dc 100644
> --- a/drivers/iommu/omap-iommu.c
> +++ b/drivers/iommu/omap-iommu.c
> @@ -1109,7 +1109,7 @@ static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa, int pgsz)
>   }
>   
>   static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
> -			  phys_addr_t pa, size_t bytes, int prot)
> +			  phys_addr_t pa, size_t bytes, int prot, gfp_t gfp)
>   {
>   	struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
>   	struct device *dev = omap_domain->dev;
> diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
> index 8cdd3f059513..a01e07a4e76f 100644
> --- a/drivers/iommu/qcom_iommu.c
> +++ b/drivers/iommu/qcom_iommu.c
> @@ -411,7 +411,7 @@ static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *de
>   }
>   
>   static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			  phys_addr_t paddr, size_t size, int prot)
> +			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	int ret;
>   	unsigned long flags;
> diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
> index 77d4bd93fe4b..aa3507f35107 100644
> --- a/drivers/iommu/rockchip-iommu.c
> +++ b/drivers/iommu/rockchip-iommu.c
> @@ -760,7 +760,7 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
>   }
>   
>   static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
> -			phys_addr_t paddr, size_t size, int prot)
> +			phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
>   	unsigned long flags;
> diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> index 22d4db302c1c..efa6aa68521d 100644
> --- a/drivers/iommu/s390-iommu.c
> +++ b/drivers/iommu/s390-iommu.c
> @@ -265,7 +265,7 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
>   }
>   
>   static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			  phys_addr_t paddr, size_t size, int prot)
> +			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct s390_domain *s390_domain = to_s390_domain(domain);
>   	int flags = ZPCI_PTE_VALID, rc = 0;
> diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
> index 4d8057916552..f300099852b1 100644
> --- a/drivers/iommu/tegra-gart.c
> +++ b/drivers/iommu/tegra-gart.c
> @@ -190,7 +190,7 @@ static inline int __gart_iommu_map(struct gart_device *gart, unsigned long iova,
>   }
>   
>   static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
> -			  phys_addr_t pa, size_t bytes, int prot)
> +			  phys_addr_t pa, size_t bytes, int prot, gfp_t gfp)
>   {
>   	struct gart_device *gart = gart_handle;
>   	int ret;
> diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
> index 5182c7d6171e..e1bf867e0607 100644
> --- a/drivers/iommu/tegra-smmu.c
> +++ b/drivers/iommu/tegra-smmu.c
> @@ -641,7 +641,7 @@ static void tegra_smmu_set_pte(struct tegra_smmu_as *as, unsigned long iova,
>   }
>   
>   static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova,
> -			  phys_addr_t paddr, size_t size, int prot)
> +			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
>   {
>   	struct tegra_smmu_as *as = to_smmu_as(domain);
>   	dma_addr_t pte_dma;
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index ffbbc7e39cee..76b8e7fe3ed0 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -198,7 +198,7 @@ struct iommu_ops {
>   	int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
>   	void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
>   	int (*map)(struct iommu_domain *domain, unsigned long iova,
> -		   phys_addr_t paddr, size_t size, int prot);
> +		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
>   	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
>   		     size_t size);
>   	void (*flush_iotlb_all)(struct iommu_domain *domain);
> @@ -295,12 +295,17 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
>   extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
>   extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
>   		     phys_addr_t paddr, size_t size, int prot);
> +extern int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
> +		     phys_addr_t paddr, size_t size, int prot);
>   extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
>   			  size_t size);
>   extern size_t iommu_unmap_fast(struct iommu_domain *domain,
>   			       unsigned long iova, size_t size);
>   extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>   			   struct scatterlist *sg,unsigned int nents, int prot);
> +extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
> +				  unsigned long iova, struct scatterlist *sg,
> +				  unsigned int nents, int prot);
>   extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
>   extern void iommu_set_fault_handler(struct iommu_domain *domain,
>   			iommu_fault_handler_t handler, void *token);
> @@ -469,6 +474,13 @@ static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
>   	return -ENODEV;
>   }
>   
> +static inline int iommu_map_atomic(struct iommu_domain *domain,
> +				   unsigned long iova, phys_addr_t paddr,
> +				   size_t size, int prot)
> +{
> +	return -ENODEV;
> +}
> +
>   static inline size_t iommu_unmap(struct iommu_domain *domain,
>   				 unsigned long iova, size_t size)
>   {
> @@ -488,6 +500,13 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
>   	return 0;
>   }
>   
> +static inline size_t iommu_map_sg_atomic(struct iommu_domain *domain,
> +				  unsigned long iova, struct scatterlist *sg,
> +				  unsigned int nents, int prot)
> +{
> +	return 0;
> +}
> +
>   static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
>   {
>   }
> 
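
From a caller's point of view the split introduced above looks like
this (an illustrative sketch; a real caller uses whichever variant
matches its context):

#include <linux/iommu.h>

/* Process context (e.g. VFIO, driver setup): may block; page-table
 * pages are allocated with GFP_KERNEL internally. */
static int map_example_sleepable(struct iommu_domain *domain,
		unsigned long iova, phys_addr_t paddr, size_t size, int prot)
{
	return iommu_map(domain, iova, paddr, size, prot);
}

/* Atomic context (the dma-iommu fast path): must not sleep, so
 * GFP_ATOMIC is passed down to the driver internally. */
static int map_example_atomic(struct iommu_domain *domain,
		unsigned long iova, phys_addr_t paddr, size_t size, int prot)
{
	return iommu_map_atomic(domain, iova, paddr, size, prot);
}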


* Re: [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map
  2019-06-04 18:11   ` Robin Murphy
@ 2019-06-04 20:55     ` Tom Murphy
  2019-06-04 21:24     ` Rob Clark
  1 sibling, 0 replies; 13+ messages in thread
From: Tom Murphy @ 2019-06-04 20:55 UTC (permalink / raw)
  To: Robin Murphy
  Cc: iommu, Tom Murphy, Joerg Roedel, Will Deacon, Marek Szyprowski,
	Kukjin Kim, Krzysztof Kozlowski, David Woodhouse, Andy Gross,
	David Brown, Matthias Brugger, Rob Clark, Heiko Stuebner,
	Gerald Schaefer, Thierry Reding, Jonathan Hunter, linux-kernel,
	linux-arm-kernel, linux-samsung-soc, linux-arm-msm,
	linux-mediatek, linux-rockchip, linux-s390, linux-tegra

On Tue, Jun 4, 2019 at 7:11 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 06/05/2019 19:52, Tom Murphy wrote:
> > Add a gfp_t parameter to the iommu_ops::map function.
> > Remove the needless locking in the AMD iommu driver.
> >
> > The iommu_ops::map function (or the iommu_map function which calls it)
> > was always supposed to be sleepable (according to Joerg's comment in
> > this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
> > should probably have had a "might_sleep()" since it was written. However
> > currently the dma-iommu api can call iommu_map in an atomic context,
> > which it shouldn't do. This doesn't cause any problems because any iommu
> > driver which uses the dma-iommu api uses GFP_ATOMIC in its
> > iommu_ops::map function. But doing this wastes the memory allocator's
> > atomic pools.
>
> Hmm, in some cases iommu_ops::unmap may need to allocate as well,
> primarily if it needs to split a hugepage mapping. Should we pass flags

Are you sure that's the case?

I don't see that in the AMD or Intel iommu_ops::unmap code, and from
looking at the Intel code it seems like we actually unmap the whole
large page:

/* Cope with horrid API which requires us to unmap more than the
size argument if it happens to be a large-page mapping. */
BUG_ON(!pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level));

if (size < VTD_PAGE_SIZE << level_to_offset_bits(level))
      size = VTD_PAGE_SIZE << level_to_offset_bits(level);

i.e. the unmap size is widened to cover the entire superpage, so this
path never needs to split a mapping (and so never needs to allocate).



> through there as well, or are we prepared to assume that that case will
> happen rarely enough that it's fair to just assume GFP_ATOMIC? It won't
> happen for DMA ops, but it's conceivable that other users such as GPU
> drivers might make partial unmaps, and I doubt we could totally rule out
> the wackiest ones doing so from non-sleeping contexts.
>
> Robin.
>

* Re: [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map
  2019-06-04 18:11   ` Robin Murphy
  2019-06-04 20:55     ` Tom Murphy
@ 2019-06-04 21:24     ` Rob Clark
  1 sibling, 0 replies; 13+ messages in thread
From: Rob Clark @ 2019-06-04 21:24 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Tom Murphy, iommu, Joerg Roedel, murphyt7, Will Deacon,
	Marek Szyprowski, Kukjin Kim, Krzysztof Kozlowski, David Woodhouse,
	Andy Gross, David Brown, Matthias Brugger, Heiko Stuebner,
	Gerald Schaefer, Thierry Reding, Jonathan Hunter, linux-kernel,
	linux-arm-kernel, linux-samsung-soc, linux-arm-msm, linux-mediatek,
	linux-rockchip, linux-s390, linux-tegra

On Tue, Jun 4, 2019 at 11:11 AM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 06/05/2019 19:52, Tom Murphy wrote:
> > Add a gfp_t parameter to the iommu_ops::map function.
> > Remove the needless locking in the AMD iommu driver.
> >
> > The iommu_ops::map function (or the iommu_map function which calls it)
> > was always supposed to be sleepable (according to Joerg's comment in
> > this thread: https://lore.kernel.org/patchwork/patch/977520/ ) and so
> > should probably have had a "might_sleep()" since it was written. However
> > currently the dma-iommu api can call iommu_map in an atomic context,
> > which it shouldn't do. This doesn't cause any problems because any iommu
> > driver which uses the dma-iommu api uses GFP_ATOMIC in its
> > iommu_ops::map function. But doing this wastes the memory allocator's
> > atomic pools.
>
> Hmm, in some cases iommu_ops::unmap may need to allocate as well,
> primarily if it needs to split a hugepage mapping. Should we pass flags
> through there as well, or are we prepared to assume that that case will
> happen rarely enough that it's fair to just assume GFP_ATOMIC? It won't
> happen for DMA ops, but it's conceivable that other users such as GPU
> drivers might make partial unmaps, and I doubt we could totally rule out
> the wackiest ones doing so from non-sleeping contexts.
>

JFYI, today we (well, drm/msm) only unmap entire buffers, so assuming
there isn't any coalescence of adjacent buffers that happen to form a
hugepage on ::map(), we are probably OK on ::unmap().

We only ever call ::map() or ::unmap() under sleepable conditions.

Btw, would it be simpler to just make the gfp_t a domain attribute?
That seems like it would be less churn, but maybe I'm overlooking
something.

BR,
-R
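
A rough sketch of that alternative (DOMAIN_ATTR_ATOMIC_MAP, struct
my_domain and its atomic_map field are hypothetical, invented purely
to illustrate the idea; only iommu_domain_set_attr() is real at this
point):

#include <linux/gfp.h>
#include <linux/iommu.h>

/* Record the allocation context once on the domain instead of
 * threading a gfp_t through every ops->map() call. */
static int mark_domain_atomic(struct iommu_domain *domain)
{
	int atomic = 1;

	/* DOMAIN_ATTR_ATOMIC_MAP is a hypothetical attribute. */
	return iommu_domain_set_attr(domain, DOMAIN_ATTR_ATOMIC_MAP,
				     &atomic);
}

/* A driver's map path would then derive its flags per-domain: */
struct my_domain {
	struct iommu_domain domain;
	bool atomic_map;	/* hypothetical field */
};

static gfp_t my_domain_gfp(struct my_domain *dom)
{
	return dom->atomic_map ? GFP_ATOMIC : GFP_KERNEL;
}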


* Re: [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map
  2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
  2019-06-04 18:11   ` Robin Murphy
@ 2019-06-05  5:50   ` Christoph Hellwig
  1 sibling, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2019-06-05  5:50 UTC (permalink / raw)
  To: Tom Murphy
  Cc: iommu, Heiko Stuebner, Will Deacon, David Brown, Thierry Reding,
	linux-s390, linux-samsung-soc, Krzysztof Kozlowski,
	Jonathan Hunter, linux-rockchip, Kukjin Kim, Gerald Schaefer,
	Andy Gross, linux-tegra, linux-arm-msm, linux-mediatek,
	Matthias Brugger, linux-arm-kernel, Robin Murphy, linux-kernel,
	murphyt7, David Woodhouse

On Mon, May 06, 2019 at 07:52:03PM +0100, Tom Murphy via iommu wrote:
> We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap.
> iommu_map doesn’t lock while mapping and so no two calls should touch
> the same iova range. The AMD driver already handles the page table page
> allocations without locks so we can safely remove the locks.

Btw, this really should be a separate patch.


Thread overview: 13+ messages
2019-05-06 18:52 [PATCH v3 0/4] iommu/amd: Convert the AMD iommu driver to the dma-iommu api Tom Murphy
2019-05-06 18:52 ` [PATCH v3 1/4] iommu: Add gfp parameter to iommu_ops::map Tom Murphy
2019-06-04 18:11   ` Robin Murphy
2019-06-04 20:55     ` Tom Murphy
2019-06-04 21:24     ` Rob Clark
2019-06-05  5:50   ` Christoph Hellwig
2019-05-06 18:52 ` [PATCH v3 2/4] iommu/dma-iommu: Handle deferred devices Tom Murphy
2019-05-07  6:40   ` Christoph Hellwig
2019-05-15 12:46     ` Tom Murphy
2019-05-06 18:52 ` [PATCH v3 3/4] iommu/dma-iommu: Use the dev->coherent_dma_mask Tom Murphy
2019-05-06 18:52 ` [PATCH v3 4/4] iommu/amd: Convert AMD iommu driver to the dma-iommu api Tom Murphy
2019-06-03 10:51 ` [PATCH v3 0/4] iommu/amd: Convert the " Joerg Roedel
2019-06-03 11:27   ` Tom Murphy
