iommu.lists.linux-foundation.org archive mirror
* [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
@ 2019-12-21 15:03 Tom Murphy
  2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
                   ` (10 more replies)
  0 siblings, 11 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

This patchset converts the intel iommu driver to the dma-iommu api.

While converting the driver I exposed a bug in the intel i915 driver
which causes a huge number of artifacts on the screen of my laptop. You
can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg

This issue most likely lies in the i915 driver, probably caused by the
driver not respecting the return value of the dma_map_ops::map_sg
function. You can see the driver ignoring the return value here:
https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51

Previously this didn't cause issues because the intel map_sg
implementation always returned the same number of elements as the input
scatter-gather list, but with the change to the dma-iommu api this is no
longer the case. Unfortunately I wasn't able to track the bug down to a
specific line of code.
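
For reference, the DMA API allows map_sg implementations to coalesce
entries, so callers must treat the returned count, not the nents they
passed in, as the number of mapped segments. A minimal sketch of the
expected pattern (map_sg_checked is an illustrative name, not code from
i915):

	static int map_sg_checked(struct device *dev, struct sg_table *sgt)
	{
		int count;

		/* May return fewer segments than orig_nents; 0 means failure. */
		count = dma_map_sg(dev, sgt->sgl, sgt->orig_nents,
				   DMA_BIDIRECTIONAL);
		if (!count)
			return -ENOMEM;

		/*
		 * Every later walk of the list must be bounded by 'count';
		 * only dma_unmap_sg() takes orig_nents again.
		 */
		sgt->nents = count;
		return 0;
	}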

Could someone from the intel team look at this?


I have been testing on a Lenovo X1 Carbon 5th generation. Let me know if
there's any more information you need.

To allow my patch set to be tested, I have added a patch (patch 8/8) to
this series which disables combining sg segments in the dma-iommu api.
This hides the bug but doesn't fix the actual problem.

As part of this patch series I copied the intel bounce buffer code to
the dma-iommu path. The addition of the bounce buffer code took me by
surprise: I did most of my development on this patch series before it
was added. My reimplementation in the dma-iommu path is very rushed and
not properly tested, but I'm running out of time to work on this patch
set.

On top of that I also didn’t port over the intel tracing code from this commit:
https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
So all the work in that commit is now wasted; the code will need to be
removed and reimplemented in the dma-iommu path. I would like to do this
myself, but I don't have the time at the moment and I want to get these
changes out before the iommu code changes any more.


Tom Murphy (8):
  iommu/vt-d: clean up 32bit si_domain assignment
  iommu/vt-d: Use default dma_direct_* mapping functions for direct
    mapped devices
  iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  iommu: Handle freelists when using deferred flushing in iommu drivers
  iommu: Add iommu_dma_free_cpu_cached_iovas function
  iommu: allow the dma-iommu api to use bounce buffers
  iommu/vt-d: Convert intel iommu driver to the iommu ops
  DO NOT MERGE: iommu: disable list appending in dma-iommu

 drivers/iommu/Kconfig           |   1 +
 drivers/iommu/amd_iommu.c       |  14 +-
 drivers/iommu/arm-smmu-v3.c     |   3 +-
 drivers/iommu/arm-smmu.c        |   3 +-
 drivers/iommu/dma-iommu.c       | 183 +++++--
 drivers/iommu/exynos-iommu.c    |   3 +-
 drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
 drivers/iommu/iommu.c           |  39 +-
 drivers/iommu/ipmmu-vmsa.c      |   3 +-
 drivers/iommu/msm_iommu.c       |   3 +-
 drivers/iommu/mtk_iommu.c       |   3 +-
 drivers/iommu/mtk_iommu_v1.c    |   3 +-
 drivers/iommu/omap-iommu.c      |   3 +-
 drivers/iommu/qcom_iommu.c      |   3 +-
 drivers/iommu/rockchip-iommu.c  |   3 +-
 drivers/iommu/s390-iommu.c      |   3 +-
 drivers/iommu/tegra-gart.c      |   3 +-
 drivers/iommu/tegra-smmu.c      |   3 +-
 drivers/iommu/virtio-iommu.c    |   3 +-
 drivers/vfio/vfio_iommu_type1.c |   2 +-
 include/linux/dma-iommu.h       |   3 +
 include/linux/intel-iommu.h     |   1 -
 include/linux/iommu.h           |  32 +-
 23 files changed, 345 insertions(+), 908 deletions(-)

-- 
2.20.1


* [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-21 23:46   ` Arvind Sankar
  2019-12-23  3:00   ` Lu Baolu
  2019-12-21 15:03 ` [PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices Tom Murphy
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

In the intel iommu driver, devices which only support 32bit DMA can't be
direct mapped. The current implementation of this is convoluted: we
assign such a device a direct mapped domain and then later remove that
domain and replace it with a domain of type IOMMU_DOMAIN_DMA. We should
just assign it a domain of type IOMMU_DOMAIN_DMA from the beginning
rather than needlessly swapping domains.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/intel-iommu.c | 88 +++++++++++++------------------------
 1 file changed, 31 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0c8d81f56a30..c1ea66467918 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3462,46 +3462,9 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 }
 
 /* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_need_mapping(struct device *dev)
+static bool iommu_no_mapping(struct device *dev)
 {
-	int ret;
-
-	if (iommu_dummy(dev))
-		return false;
-
-	ret = identity_mapping(dev);
-	if (ret) {
-		u64 dma_mask = *dev->dma_mask;
-
-		if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
-			dma_mask = dev->coherent_dma_mask;
-
-		if (dma_mask >= dma_direct_get_required_mask(dev))
-			return false;
-
-		/*
-		 * 32 bit DMA is removed from si_domain and fall back to
-		 * non-identity mapping.
-		 */
-		dmar_remove_one_dev_info(dev);
-		ret = iommu_request_dma_domain_for_dev(dev);
-		if (ret) {
-			struct iommu_domain *domain;
-			struct dmar_domain *dmar_domain;
-
-			domain = iommu_get_domain_for_dev(dev);
-			if (domain) {
-				dmar_domain = to_dmar_domain(domain);
-				dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
-			}
-			dmar_remove_one_dev_info(dev);
-			get_private_domain_for_dev(dev);
-		}
-
-		dev_info(dev, "32bit DMA uses non-identity mapping\n");
-	}
-
-	return true;
+	return iommu_dummy(dev) || identity_mapping(dev);
 }
 
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
@@ -3568,20 +3531,22 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
 				 enum dma_data_direction dir,
 				 unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		return __intel_map_single(dev, page_to_phys(page) + offset,
-				size, dir, *dev->dma_mask);
-	return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	if (iommu_no_mapping(dev))
+		return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+
+	return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
+			*dev->dma_mask);
 }
 
 static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
 				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		return __intel_map_single(dev, phys_addr, size, dir,
-				*dev->dma_mask);
-	return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+	if (iommu_no_mapping(dev))
+		return dma_direct_map_resource(dev, phys_addr, size, dir,
+				attrs);
+
+	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
@@ -3632,16 +3597,16 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
 			     size_t size, enum dma_data_direction dir,
 			     unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
-		intel_unmap(dev, dev_addr, size);
-	else
+	if (iommu_no_mapping(dev))
 		dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
+	else
+		intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	if (iommu_need_mapping(dev))
+	if (!iommu_no_mapping(dev))
 		intel_unmap(dev, dev_addr, size);
 }
 
@@ -3652,7 +3617,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
 	struct page *page = NULL;
 	int order;
 
-	if (!iommu_need_mapping(dev))
+	if (iommu_no_mapping(dev))
 		return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
 
 	size = PAGE_ALIGN(size);
@@ -3688,7 +3653,7 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
 	int order;
 	struct page *page = virt_to_page(vaddr);
 
-	if (!iommu_need_mapping(dev))
+	if (iommu_no_mapping(dev))
 		return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
 
 	size = PAGE_ALIGN(size);
@@ -3708,7 +3673,7 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *sg;
 	int i;
 
-	if (!iommu_need_mapping(dev))
+	if (iommu_no_mapping(dev))
 		return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
 
 	for_each_sg(sglist, sg, nelems, i) {
@@ -3734,7 +3699,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	struct intel_iommu *iommu;
 
 	BUG_ON(dir == DMA_NONE);
-	if (!iommu_need_mapping(dev))
+	if (iommu_no_mapping(dev))
 		return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
 	domain = deferred_attach_domain(dev);
@@ -3782,7 +3747,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 
 static u64 intel_get_required_mask(struct device *dev)
 {
-	if (!iommu_need_mapping(dev))
+	if (iommu_no_mapping(dev))
 		return dma_direct_get_required_mask(dev);
 	return DMA_BIT_MASK(32);
 }
@@ -5618,9 +5583,13 @@ static int intel_iommu_add_device(struct device *dev)
 	struct iommu_domain *domain;
 	struct intel_iommu *iommu;
 	struct iommu_group *group;
+	u64 dma_mask = *dev->dma_mask;
 	u8 bus, devfn;
 	int ret;
 
+	if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
+		dma_mask = dev->coherent_dma_mask;
+
 	iommu = device_to_iommu(dev, &bus, &devfn);
 	if (!iommu)
 		return -ENODEV;
@@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev)
 	domain = iommu_get_domain_for_dev(dev);
 	dmar_domain = to_dmar_domain(domain);
 	if (domain->type == IOMMU_DOMAIN_DMA) {
-		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) {
+		/*
+		 * We check dma_mask >= dma_get_required_mask(dev) because
+		 * 32 bit DMA falls back to non-identity mapping.
+		 */
+		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
+				dma_mask >= dma_get_required_mask(dev)) {
 			ret = iommu_request_dm_for_dev(dev);
 			if (ret) {
 				dmar_remove_one_dev_info(dev);
-- 
2.20.1


* [PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
  2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-21 15:03 ` [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path Tom Murphy
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

We should only assign intel_dma_ops to devices which will actually use
the iommu, and let the default dma_direct_* fallback functions handle
all other devices. This won't change any behaviour; it just uses the
generic implementations for direct mapped devices rather than the intel
specific ones.
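
For context, this works because the common DMA code already dispatches
per device; roughly (paraphrased from include/linux/dma-mapping.h of
this era, not part of this patch):

	static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
	{
		/*
		 * Per-device ops take precedence; with dev->dma_ops left
		 * NULL the arch default applies, which on x86 means the
		 * dma_direct_* path.
		 */
		if (dev->dma_ops)
			return dev->dma_ops;
		return get_arch_dma_ops(dev->bus);
	}

So setting dev->dma_ops only for devices behind a DMA domain, and
clearing it again on removal, is all that's needed.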

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/intel-iommu.c | 52 +++++--------------------------------
 1 file changed, 6 insertions(+), 46 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index c1ea66467918..64b1a9793daa 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2794,17 +2794,6 @@ static int __init si_domain_init(int hw)
 	return 0;
 }
 
-static int identity_mapping(struct device *dev)
-{
-	struct device_domain_info *info;
-
-	info = dev->archdata.iommu;
-	if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO)
-		return (info->domain == si_domain);
-
-	return 0;
-}
-
 static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
 {
 	struct dmar_domain *ndomain;
@@ -3461,12 +3450,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 	return domain;
 }
 
-/* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_no_mapping(struct device *dev)
-{
-	return iommu_dummy(dev) || identity_mapping(dev);
-}
-
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
 				     size_t size, int dir, u64 dma_mask)
 {
@@ -3531,9 +3514,6 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
 				 enum dma_data_direction dir,
 				 unsigned long attrs)
 {
-	if (iommu_no_mapping(dev))
-		return dma_direct_map_page(dev, page, offset, size, dir, attrs);
-
 	return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
 			*dev->dma_mask);
 }
@@ -3542,10 +3522,6 @@ static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
 				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
-	if (iommu_no_mapping(dev))
-		return dma_direct_map_resource(dev, phys_addr, size, dir,
-				attrs);
-
 	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
@@ -3597,17 +3573,13 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
 			     size_t size, enum dma_data_direction dir,
 			     unsigned long attrs)
 {
-	if (iommu_no_mapping(dev))
-		dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
-	else
-		intel_unmap(dev, dev_addr, size);
+	intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	if (!iommu_no_mapping(dev))
-		intel_unmap(dev, dev_addr, size);
+	intel_unmap(dev, dev_addr, size);
 }
 
 static void *intel_alloc_coherent(struct device *dev, size_t size,
@@ -3617,9 +3589,6 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
 	struct page *page = NULL;
 	int order;
 
-	if (iommu_no_mapping(dev))
-		return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
-
 	size = PAGE_ALIGN(size);
 	order = get_order(size);
 
@@ -3653,9 +3622,6 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
 	int order;
 	struct page *page = virt_to_page(vaddr);
 
-	if (iommu_no_mapping(dev))
-		return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
-
 	size = PAGE_ALIGN(size);
 	order = get_order(size);
 
@@ -3673,9 +3639,6 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *sg;
 	int i;
 
-	if (iommu_no_mapping(dev))
-		return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
-
 	for_each_sg(sglist, sg, nelems, i) {
 		nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
 	}
@@ -3699,8 +3662,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	struct intel_iommu *iommu;
 
 	BUG_ON(dir == DMA_NONE);
-	if (iommu_no_mapping(dev))
-		return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
 	domain = deferred_attach_domain(dev);
 	if (!domain)
@@ -3747,8 +3708,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 
 static u64 intel_get_required_mask(struct device *dev)
 {
-	if (iommu_no_mapping(dev))
-		return dma_direct_get_required_mask(dev);
 	return DMA_BIT_MASK(32);
 }
 
@@ -5014,7 +4973,6 @@ int __init intel_iommu_init(void)
 	if (!has_untrusted_dev() || intel_no_bounce)
 		swiotlb = 0;
 #endif
-	dma_ops = &intel_dma_ops;
 
 	init_iommu_pm_ops();
 
@@ -5623,6 +5581,8 @@ static int intel_iommu_add_device(struct device *dev)
 				dev_info(dev,
 					 "Device uses a private identity domain.\n");
 			}
+		} else {
+			dev->dma_ops = &intel_dma_ops;
 		}
 	} else {
 		if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
@@ -5639,6 +5599,7 @@ static int intel_iommu_add_device(struct device *dev)
 				dev_info(dev,
 					 "Device uses a private dma domain.\n");
 			}
+			dev->dma_ops = &intel_dma_ops;
 		}
 	}
 
@@ -5665,8 +5626,7 @@ static void intel_iommu_remove_device(struct device *dev)
 
 	iommu_device_unlink(&iommu->iommu, dev);
 
-	if (device_needs_bounce(dev))
-		set_dma_ops(dev, NULL);
+	set_dma_ops(dev, NULL);
 }
 
 static void intel_iommu_get_resv_regions(struct device *device,
-- 
2.20.1


* [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
  2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
  2019-12-21 15:03 ` [PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2020-03-20  6:30   ` Tom Murphy
  2019-12-21 15:03 ` [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers Tom Murphy
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Remove all IOVA handling code from the non-dma_ops path in the intel
iommu driver.

There's no need for the non-dma_ops path to keep track of IOVAs: the
whole point of the non-dma_ops path is that it allows the IOVAs to be
handled separately, which makes the IOVA handling code removed in this
patch pointless.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
 1 file changed, 33 insertions(+), 56 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 64b1a9793daa..8d72ea0fb843 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
 	domain_remove_dev_info(domain);
 
 	/* destroy iovas */
-	put_iova_domain(&domain->iovad);
+	if (domain->domain.type == IOMMU_DOMAIN_DMA)
+		put_iova_domain(&domain->iovad);
 
 	if (domain->pgd) {
 		struct page *freelist;
@@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
 }
 
 static int iommu_domain_identity_map(struct dmar_domain *domain,
-				     unsigned long long start,
-				     unsigned long long end)
+				     unsigned long first_vpfn,
+				     unsigned long last_vpfn)
 {
-	unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
-	unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
-
-	if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
-			  dma_to_mm_pfn(last_vpfn))) {
-		pr_err("Reserving iova failed\n");
-		return -ENOMEM;
-	}
-
-	pr_debug("Mapping reserved region %llx-%llx\n", start, end);
 	/*
 	 * RMRR range might have overlap with physical memory range,
 	 * clear it first
@@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)
 
 		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
 			ret = iommu_domain_identity_map(si_domain,
-					PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
+					mm_to_dma_pfn(start_pfn),
+					mm_to_dma_pfn(end_pfn));
 			if (ret)
 				return ret;
 		}
@@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 				       unsigned long val, void *v)
 {
 	struct memory_notify *mhp = v;
-	unsigned long long start, end;
-	unsigned long start_vpfn, last_vpfn;
+	unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
+	unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
+			mhp->nr_pages - 1);
 
 	switch (val) {
 	case MEM_GOING_ONLINE:
-		start = mhp->start_pfn << PAGE_SHIFT;
-		end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
-		if (iommu_domain_identity_map(si_domain, start, end)) {
-			pr_warn("Failed to build identity map for [%llx-%llx]\n",
-				start, end);
+		if (iommu_domain_identity_map(si_domain, start_vpfn,
+					last_vpfn)) {
+			pr_warn("Failed to build identity map for [%lx-%lx]\n",
+				start_vpfn, last_vpfn);
 			return NOTIFY_BAD;
 		}
 		break;
 
 	case MEM_OFFLINE:
 	case MEM_CANCEL_ONLINE:
-		start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
-		last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
-		while (start_vpfn <= last_vpfn) {
-			struct iova *iova;
+		{
 			struct dmar_drhd_unit *drhd;
 			struct intel_iommu *iommu;
 			struct page *freelist;
 
-			iova = find_iova(&si_domain->iovad, start_vpfn);
-			if (iova == NULL) {
-				pr_debug("Failed get IOVA for PFN %lx\n",
-					 start_vpfn);
-				break;
-			}
-
-			iova = split_and_remove_iova(&si_domain->iovad, iova,
-						     start_vpfn, last_vpfn);
-			if (iova == NULL) {
-				pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
-					start_vpfn, last_vpfn);
-				return NOTIFY_BAD;
-			}
-
-			freelist = domain_unmap(si_domain, iova->pfn_lo,
-					       iova->pfn_hi);
+			freelist = domain_unmap(si_domain, start_vpfn,
+					last_vpfn);
 
 			rcu_read_lock();
 			for_each_active_iommu(iommu, drhd)
 				iommu_flush_iotlb_psi(iommu, si_domain,
-					iova->pfn_lo, iova_size(iova),
+					start_vpfn, mhp->nr_pages,
 					!freelist, 0);
 			rcu_read_unlock();
 			dma_free_pagelist(freelist);
-
-			start_vpfn = iova->pfn_hi + 1;
-			free_iova_mem(iova);
 		}
 		break;
 	}
@@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
 		for (did = 0; did < cap_ndoms(iommu->cap); did++) {
 			domain = get_iommu_domain(iommu, (u16)did);
 
-			if (!domain)
+			if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
 				continue;
+
 			free_cpu_cached_iovas(cpu, &domain->iovad);
 		}
 	}
@@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
 	int adjust_width;
 
-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-	domain_reserve_special_ranges(domain);
-
 	/* calculate AGAW */
 	domain->gaw = guest_width;
 	adjust_width = guestwidth_to_adjustwidth(guest_width);
@@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 	return 0;
 }
 
+static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
+{
+	init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
+	copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
+
+	if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
+				iova_entry_free)) {
+		pr_warn("iova flush queue initialization failed\n");
+		intel_iommu_strict = 1;
+	}
+}
+
 static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 {
 	struct dmar_domain *dmar_domain;
@@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 			return NULL;
 		}
 
-		if (type == IOMMU_DOMAIN_DMA &&
-		    init_iova_flush_queue(&dmar_domain->iovad,
-					  iommu_flush_iova, iova_entry_free)) {
-			pr_warn("iova flush queue initialization failed\n");
-			intel_iommu_strict = 1;
-		}
+		if (type == IOMMU_DOMAIN_DMA)
+			intel_init_iova_domain(dmar_domain);
 
 		domain_update_iommu_cap(dmar_domain);
 
-- 
2.20.1


* [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (2 preceding siblings ...)
  2019-12-21 15:03 ` [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-21 15:03 ` [PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function Tom Murphy
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Allow iommu_unmap_fast() to return newly freed page table pages and
pass the freelist to queue_iova() in the dma-iommu ops path.

This is useful for iommu drivers (in this case the intel iommu driver)
which need to wait for the IOTLB to be flushed before newly unmapped
page table pages can be freed. This way we can still batch IOTLB flush
operations while handling the freelists.
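
For callers outside the dma-iommu path the new contract looks roughly
like this (a sketch condensed from the iommu_unmap() change below;
domain, iova and size are the usual unmap arguments):

	struct iommu_iotlb_gather gather;
	struct page *freelist = NULL;
	size_t unmapped;

	iommu_iotlb_gather_init(&gather);
	unmapped = iommu_unmap_fast(domain, iova, size, &gather, &freelist);

	/*
	 * Pages on 'freelist' are detached page table pages and must stay
	 * allocated until the IOTLB flush completes. A driver that provides
	 * flush_iotlb_range() flushes and frees them in one call; otherwise
	 * we fall back to the normal sync.
	 */
	if (domain->ops->flush_iotlb_range)
		domain->ops->flush_iotlb_range(domain, iova, unmapped, freelist);
	else
		iommu_tlb_sync(domain, &gather);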

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/amd_iommu.c       | 14 ++++++++-
 drivers/iommu/arm-smmu-v3.c     |  3 +-
 drivers/iommu/arm-smmu.c        |  3 +-
 drivers/iommu/dma-iommu.c       | 45 ++++++++++++++++++++++-------
 drivers/iommu/exynos-iommu.c    |  3 +-
 drivers/iommu/intel-iommu.c     | 51 +++++++++++++++++++++------------
 drivers/iommu/iommu.c           | 29 ++++++++++++++-----
 drivers/iommu/ipmmu-vmsa.c      |  3 +-
 drivers/iommu/msm_iommu.c       |  3 +-
 drivers/iommu/mtk_iommu.c       |  3 +-
 drivers/iommu/mtk_iommu_v1.c    |  3 +-
 drivers/iommu/omap-iommu.c      |  3 +-
 drivers/iommu/qcom_iommu.c      |  3 +-
 drivers/iommu/rockchip-iommu.c  |  3 +-
 drivers/iommu/s390-iommu.c      |  3 +-
 drivers/iommu/tegra-gart.c      |  3 +-
 drivers/iommu/tegra-smmu.c      |  3 +-
 drivers/iommu/virtio-iommu.c    |  3 +-
 drivers/vfio/vfio_iommu_type1.c |  2 +-
 include/linux/iommu.h           | 25 ++++++++++++----
 20 files changed, 151 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index bd25674ee4db..e8a4c0842624 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2542,7 +2542,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 
 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			      size_t page_size,
-			      struct iommu_iotlb_gather *gather)
+			      struct iommu_iotlb_gather *gather,
+			      struct page **freelist)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 
@@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&dom->lock, flags);
 }
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct page *freelist)
+{
+	struct protection_domain *dom = to_pdomain(domain);
+
+	domain_flush_pages(dom, iova, size);
+	domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 				 struct iommu_iotlb_gather *gather)
 {
@@ -2692,6 +2703,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
+	.flush_iotlb_range = amd_iommu_flush_iotlb_range,
 	.iotlb_sync = amd_iommu_iotlb_sync,
 };
 
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index effe72eb89e7..a27d4bf4492c 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2459,7 +2459,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
+			     size_t size, struct iommu_iotlb_gather *gather,
+			     struct page **freelist)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 4f1a350d9529..ea1ab3387a07 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1205,7 +1205,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
+			     size_t size, struct iommu_iotlb_gather *gather,
+			     struct page **freelist)
 {
 	struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0cc702a70a96..df28facdfb8b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,19 @@ struct iommu_dma_cookie {
 	struct iommu_domain		*fq_domain;
 };
 
+static void iommu_dma_entry_dtor(unsigned long data)
+{
+	struct page *freelist = (struct page *)data;
+
+	while (freelist != NULL) {
+		unsigned long p = (unsigned long)page_address(freelist);
+
+		freelist = freelist->freelist;
+		free_page(p);
+	}
+}
+
+
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
 {
 	if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
@@ -345,7 +358,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
 	if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
 			DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
 		cookie->fq_domain = domain;
-		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL);
+		init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
+				iommu_dma_entry_dtor);
 	}
 
 	if (!dev)
@@ -439,7 +453,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 }
 
 static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
-		dma_addr_t iova, size_t size)
+		dma_addr_t iova, size_t size, struct page *freelist)
 {
 	struct iova_domain *iovad = &cookie->iovad;
 
@@ -448,7 +462,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
 		cookie->msi_iova -= size;
 	else if (cookie->fq_domain)	/* non-strict mode */
 		queue_iova(iovad, iova_pfn(iovad, iova),
-				size >> iova_shift(iovad), 0);
+				size >> iova_shift(iovad),
+				(unsigned long) freelist);
 	else
 		free_iova_fast(iovad, iova_pfn(iovad, iova),
 				size >> iova_shift(iovad));
@@ -462,18 +477,26 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, dma_addr);
 	struct iommu_iotlb_gather iotlb_gather;
+	struct page *freelist = NULL;
 	size_t unmapped;
 
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 	iommu_iotlb_gather_init(&iotlb_gather);
 
-	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
+	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather,
+			&freelist);
 	WARN_ON(unmapped != size);
 
-	if (!cookie->fq_domain)
-		iommu_tlb_sync(domain, &iotlb_gather);
-	iommu_dma_free_iova(cookie, dma_addr, size);
+	if (!cookie->fq_domain) {
+		if (domain->ops->flush_iotlb_range)
+			domain->ops->flush_iotlb_range(domain, dma_addr, size,
+					freelist);
+		else
+			iommu_tlb_sync(domain, &iotlb_gather);
+	}
+
+	iommu_dma_free_iova(cookie, dma_addr, size, freelist);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -495,7 +518,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 		return DMA_MAPPING_ERROR;
 
 	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-		iommu_dma_free_iova(cookie, iova, size);
+		iommu_dma_free_iova(cookie, iova, size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -650,7 +673,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 out_free_sg:
 	sg_free_table(&sgt);
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
 	__iommu_dma_free_pages(pages, count);
 	return NULL;
@@ -901,7 +924,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, iova_len);
+	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
 	return 0;
@@ -1194,7 +1217,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	return msi_page;
 
 out_free_iova:
-	iommu_dma_free_iova(cookie, iova, size);
+	iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 186ff5cc975c..e6456eb8ac4d 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1129,7 +1129,8 @@ static void exynos_iommu_tlb_invalidate_entry(struct exynos_iommu_domain *domain
 
 static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
 				 unsigned long l_iova, size_t size,
-				 struct iommu_iotlb_gather *gather)
+				 struct iommu_iotlb_gather *gather,
+				 struct page **freelist)
 {
 	struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
 	sysmmu_iova_t iova = (sysmmu_iova_t)l_iova;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 8d72ea0fb843..675ca2aa6e20 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1145,9 +1145,9 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
    pages can only be freed after the IOTLB flush has been done. */
 static struct page *domain_unmap(struct dmar_domain *domain,
 				 unsigned long start_pfn,
-				 unsigned long last_pfn)
+				 unsigned long last_pfn,
+				 struct page *freelist)
 {
-	struct page *freelist;
 
 	BUG_ON(!domain_pfn_supported(domain, start_pfn));
 	BUG_ON(!domain_pfn_supported(domain, last_pfn));
@@ -1155,7 +1155,8 @@ static struct page *domain_unmap(struct dmar_domain *domain,
 
 	/* we don't need lock here; nobody else touches the iova range */
 	freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
-				       domain->pgd, 0, start_pfn, last_pfn, NULL);
+				       domain->pgd, 0, start_pfn, last_pfn,
+				       freelist);
 
 	/* free pgd */
 	if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
@@ -1914,7 +1915,8 @@ static void domain_exit(struct dmar_domain *domain)
 	if (domain->pgd) {
 		struct page *freelist;
 
-		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw),
+				NULL);
 		dma_free_pagelist(freelist);
 	}
 
@@ -3541,7 +3543,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
 	if (dev_is_pci(dev))
 		pdev = to_pci_dev(dev);
 
-	freelist = domain_unmap(domain, start_pfn, last_pfn);
+	freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
 	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
 			!has_iova_flush_queue(&domain->iovad)) {
 		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
@@ -4607,7 +4609,7 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
 			struct page *freelist;
 
 			freelist = domain_unmap(si_domain, start_vpfn,
-					last_vpfn);
+					last_vpfn, NULL);
 
 			rcu_read_lock();
 			for_each_active_iommu(iommu, drhd)
@@ -5412,13 +5414,12 @@ static int intel_iommu_map(struct iommu_domain *domain,
 
 static size_t intel_iommu_unmap(struct iommu_domain *domain,
 				unsigned long iova, size_t size,
-				struct iommu_iotlb_gather *gather)
+				struct iommu_iotlb_gather *gather,
+				struct page **freelist)
 {
 	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-	struct page *freelist = NULL;
 	unsigned long start_pfn, last_pfn;
-	unsigned int npages;
-	int iommu_id, level = 0;
+	int level = 0;
 
 	/* Cope with horrid API which requires us to unmap more than the
 	   size argument if it happens to be a large-page mapping. */
@@ -5432,20 +5433,33 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	start_pfn = iova >> VTD_PAGE_SHIFT;
 	last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT;
 
-	freelist = domain_unmap(dmar_domain, start_pfn, last_pfn);
+	*freelist = domain_unmap(dmar_domain, start_pfn, last_pfn, *freelist);
 
-	npages = last_pfn - start_pfn + 1;
+	if (dmar_domain->max_addr == iova + size)
+		dmar_domain->max_addr = iova;
+
+	return size;
+}
+
+static void intel_iommu_flush_iotlb_range(struct iommu_domain *domain,
+		unsigned long iova, size_t size,
+		struct page *freelist)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	unsigned long start_pfn, last_pfn;
+	unsigned long iova_pfn = IOVA_PFN(iova);
+	unsigned long nrpages;
+	int iommu_id;
+
+	nrpages = aligned_nrpages(iova, size);
+	start_pfn = mm_to_dma_pfn(iova_pfn);
+	last_pfn = start_pfn + nrpages - 1;
 
 	for_each_domain_iommu(iommu_id, dmar_domain)
 		iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
-				      start_pfn, npages, !freelist, 0);
+				      start_pfn, nrpages, !freelist, 0);
 
 	dma_free_pagelist(freelist);
-
-	if (dmar_domain->max_addr == iova + size)
-		dmar_domain->max_addr = iova;
-
-	return size;
 }
 
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
@@ -5902,6 +5916,7 @@ const struct iommu_ops intel_iommu_ops = {
 	.aux_get_pasid		= intel_iommu_aux_get_pasid,
 	.map			= intel_iommu_map,
 	.unmap			= intel_iommu_unmap,
+	.flush_iotlb_range      = intel_iommu_flush_iotlb_range,
 	.iova_to_phys		= intel_iommu_iova_to_phys,
 	.add_device		= intel_iommu_add_device,
 	.remove_device		= intel_iommu_remove_device,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index db7bfd4f2d20..cec728f40d9c 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -667,7 +667,7 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
 
 	}
 
-	iommu_flush_tlb_all(domain);
+	iommu_flush_iotlb_all(domain);
 
 out:
 	iommu_put_resv_regions(dev, &mappings);
@@ -1961,11 +1961,13 @@ EXPORT_SYMBOL_GPL(iommu_map_atomic);
 
 static size_t __iommu_unmap(struct iommu_domain *domain,
 			    unsigned long iova, size_t size,
-			    struct iommu_iotlb_gather *iotlb_gather)
+			    struct iommu_iotlb_gather *iotlb_gather,
+			    struct page **freelist)
 {
 	const struct iommu_ops *ops = domain->ops;
 	size_t unmapped_page, unmapped = 0;
 	unsigned long orig_iova = iova;
+	struct page *freelist_head = NULL;
 	unsigned int min_pagesz;
 
 	if (unlikely(ops->unmap == NULL ||
@@ -1998,7 +2000,8 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 	while (unmapped < size) {
 		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
 
-		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
+		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather,
+				&freelist_head);
 		if (!unmapped_page)
 			break;
 
@@ -2009,6 +2012,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 		unmapped += unmapped_page;
 	}
 
+	if (freelist)
+		*freelist = freelist_head;
+
 	trace_unmap(orig_iova, size, unmapped);
 	return unmapped;
 }
@@ -2016,12 +2022,20 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 size_t iommu_unmap(struct iommu_domain *domain,
 		   unsigned long iova, size_t size)
 {
+	const struct iommu_ops *ops = domain->ops;
 	struct iommu_iotlb_gather iotlb_gather;
+	struct page *freelist = NULL;
 	size_t ret;
 
 	iommu_iotlb_gather_init(&iotlb_gather);
-	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
-	iommu_tlb_sync(domain, &iotlb_gather);
+	ret = __iommu_unmap(domain, iova, size, &iotlb_gather,
+			&freelist);
+
+	if (ops->flush_iotlb_range)
+		ops->flush_iotlb_range(domain, iova, ret,
+				freelist);
+	else
+		iommu_tlb_sync(domain, &iotlb_gather);
 
 	return ret;
 }
@@ -2029,9 +2043,10 @@ EXPORT_SYMBOL_GPL(iommu_unmap);
 
 size_t iommu_unmap_fast(struct iommu_domain *domain,
 			unsigned long iova, size_t size,
-			struct iommu_iotlb_gather *iotlb_gather)
+			struct iommu_iotlb_gather *iotlb_gather,
+			struct page **freelist)
 {
-	return __iommu_unmap(domain, iova, size, iotlb_gather);
+	return __iommu_unmap(domain, iova, size, iotlb_gather, freelist);
 }
 EXPORT_SYMBOL_GPL(iommu_unmap_fast);
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index d02edd2751f3..63bbee653859 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -693,7 +693,8 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 }
 
 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
-			  size_t size, struct iommu_iotlb_gather *gather)
+			  size_t size, struct iommu_iotlb_gather *gather,
+			  struct page **freelist)
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
 
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 93f14bca26ee..66d1587ab714 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -518,7 +518,8 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			      size_t len, struct iommu_iotlb_gather *gather)
+			      size_t len, struct iommu_iotlb_gather *gather,
+			      struct page **freelist)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 6fc1f5ecf91e..6bd9f39bb259 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -402,7 +402,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
 			      unsigned long iova, size_t size,
-			      struct iommu_iotlb_gather *gather)
+			      struct iommu_iotlb_gather *gather,
+			      struct page **freelist)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index e93b94ecac45..f94d225c3404 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -325,7 +325,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
 			      unsigned long iova, size_t size,
-			      struct iommu_iotlb_gather *gather)
+			      struct iommu_iotlb_gather *gather,
+			      struct page **freelist)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index be551cc34be4..c5e012267f2b 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1383,7 +1383,8 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
 }
 
 static size_t omap_iommu_unmap(struct iommu_domain *domain, unsigned long da,
-			       size_t size, struct iommu_iotlb_gather *gather)
+			       size_t size, struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
 	struct device *dev = omap_domain->dev;
diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
index 52f38292df5b..99ebf34e50be 100644
--- a/drivers/iommu/qcom_iommu.c
+++ b/drivers/iommu/qcom_iommu.c
@@ -440,7 +440,8 @@ static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			       size_t size, struct iommu_iotlb_gather *gather)
+			       size_t size, struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	size_t ret;
 	unsigned long flags;
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index b33cdd5aad81..ec16e01c376a 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -795,7 +795,8 @@ static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
 }
 
 static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
+			     size_t size, struct iommu_iotlb_gather *gather,
+			     struct page **freelist)
 {
 	struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
 	unsigned long flags;
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 1137f3ddcb85..d69d7cf4dee3 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -315,7 +315,8 @@ static phys_addr_t s390_iommu_iova_to_phys(struct iommu_domain *domain,
 
 static size_t s390_iommu_unmap(struct iommu_domain *domain,
 			       unsigned long iova, size_t size,
-			       struct iommu_iotlb_gather *gather)
+			       struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
 	int flags = ZPCI_PTE_INVALID;
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 3fb7ba72507d..68e7eee9172f 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -207,7 +207,8 @@ static inline int __gart_iommu_unmap(struct gart_device *gart,
 }
 
 static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			       size_t bytes, struct iommu_iotlb_gather *gather)
+			       size_t bytes, struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct gart_device *gart = gart_handle;
 	int err;
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 63a147b623e6..0c5e5f2c3c7d 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -686,7 +686,8 @@ static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t tegra_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-			       size_t size, struct iommu_iotlb_gather *gather)
+			       size_t size, struct iommu_iotlb_gather *gather,
+			       struct page **freelist)
 {
 	struct tegra_smmu_as *as = to_smmu_as(domain);
 	dma_addr_t pte_dma;
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 315c7cc4f99d..e74baab27b61 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -750,7 +750,8 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			   size_t size, struct iommu_iotlb_gather *gather)
+			   size_t size, struct iommu_iotlb_gather *gather,
+			   struct page **freelist)
 {
 	int ret = 0;
 	size_t unmapped;
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2ada8e6cdb88..d24ea1181c03 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -685,7 +685,7 @@ static size_t unmap_unpin_fast(struct vfio_domain *domain,
 
 	if (entry) {
 		unmapped = iommu_unmap_fast(domain->domain, *iova, len,
-					    iotlb_gather);
+					    iotlb_gather, NULL);
 
 		if (!unmapped) {
 			kfree(entry);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index f2223cbb5fd5..61cac25410b5 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -219,6 +219,7 @@ struct iommu_iotlb_gather {
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *            queue
@@ -262,8 +263,12 @@ struct iommu_ops {
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
 		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
-		     size_t size, struct iommu_iotlb_gather *iotlb_gather);
+		     size_t size, struct iommu_iotlb_gather *iotlb_gather,
+		     struct page **freelist);
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
+	void (*flush_iotlb_range)(struct iommu_domain *domain,
+			unsigned long iova, size_t size,
+			struct page *freelist);
 	void (*iotlb_sync_map)(struct iommu_domain *domain);
 	void (*iotlb_sync)(struct iommu_domain *domain,
 			   struct iommu_iotlb_gather *iotlb_gather);
@@ -444,7 +449,8 @@ extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 			  size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,
 			       unsigned long iova, size_t size,
-			       struct iommu_iotlb_gather *iotlb_gather);
+			       struct iommu_iotlb_gather *iotlb_gather,
+			       struct page **freelist);
 extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			   struct scatterlist *sg,unsigned int nents, int prot);
 extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
@@ -518,12 +524,20 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr)
 extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
 			      unsigned long iova, int flags);
 
-static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 	if (domain->ops->flush_iotlb_all)
 		domain->ops->flush_iotlb_all(domain);
 }
 
+static inline void flush_iotlb_range(struct iommu_domain *domain,
+			unsigned long iova, size_t size,
+			struct page *freelist)
+{
+	if (domain->ops->flush_iotlb_range)
+		domain->ops->flush_iotlb_range(domain, iova, size, freelist);
+}
+
 static inline void iommu_tlb_sync(struct iommu_domain *domain,
 				  struct iommu_iotlb_gather *iotlb_gather)
 {
@@ -699,7 +713,8 @@ static inline size_t iommu_unmap(struct iommu_domain *domain,
 
 static inline size_t iommu_unmap_fast(struct iommu_domain *domain,
 				      unsigned long iova, int gfp_order,
-				      struct iommu_iotlb_gather *iotlb_gather)
+				      struct iommu_iotlb_gather *iotlb_gather,
+				      struct page **freelist)
 {
 	return 0;
 }
@@ -718,7 +733,7 @@ static inline size_t iommu_map_sg_atomic(struct iommu_domain *domain,
 	return 0;
 }
 
-static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 }
 
-- 
2.20.1


* [PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (3 preceding siblings ...)
  2019-12-21 15:03 ` [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-21 15:03 ` [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers Tom Murphy
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Add an iommu_dma_free_cpu_cached_iovas() function to the dma-iommu ops
to allow drivers which use the dma-iommu ops to free cached per-cpu
IOVAs.
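
A sketch of the intended caller, modelled on the intel driver's CPU
hot-unplug handling (the hotplug plumbing around it is assumed, not
shown):

	/*
	 * 'domain' is assumed to be a default domain of type
	 * IOMMU_DOMAIN_DMA whose iova_cookie was allocated by the
	 * dma-iommu code.
	 */
	static void sketch_cpu_dead(unsigned int cpu, struct iommu_domain *domain)
	{
		if (domain->type == IOMMU_DOMAIN_DMA)
			iommu_dma_free_cpu_cached_iovas(cpu, domain);
	}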

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/dma-iommu.c | 9 +++++++++
 include/linux/dma-iommu.h | 3 +++
 2 files changed, 12 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index df28facdfb8b..4eac3cd35443 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,15 @@ struct iommu_dma_cookie {
 	struct iommu_domain		*fq_domain;
 };
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+		struct iommu_domain *domain)
+{
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+
+	free_cpu_cached_iovas(cpu, iovad);
+}
+
 static void iommu_dma_entry_dtor(unsigned long data)
 {
 	struct page *freelist = (struct page *)data;
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 2112f21f73d8..316d22a4a860 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -37,6 +37,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
 
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+		struct iommu_domain *domain);
+
 #else /* CONFIG_IOMMU_DMA */
 
 struct iommu_domain;
-- 
2.20.1


* [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (4 preceding siblings ...)
  2019-12-21 15:03 ` [PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-24 10:20   ` kbuild test robot
  2019-12-21 15:03 ` [PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops Tom Murphy
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Allow the dma-iommu api to use swiotlb bounce buffers for untrusted
devices. This is a copy of the intel VT-d bounce buffer code.
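
The decision itself is small: an untrusted device only needs a bounce
page when the mapping would expose unrelated memory inside the same
IOMMU granule. As an illustrative restatement (not the exact patch
code):

	/* Bounce only when the start address or the size is not
	 * granule aligned, i.e. the buffer does not cover whole
	 * IOMMU pages on its own. */
	bool needs_bounce = iommu_needs_bounce_buffer(dev) &&
			    iova_offset(iovad, phys | size);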

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/dma-iommu.c | 93 ++++++++++++++++++++++++++++++++-------
 drivers/iommu/iommu.c     | 10 +++++
 include/linux/iommu.h     |  9 +++-
 3 files changed, 95 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4eac3cd35443..cf778db7d84d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,9 +20,11 @@
 #include <linux/irq.h>
 #include <linux/mm.h>
 #include <linux/pci.h>
+#include <linux/swiotlb.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
 #include <linux/crash_dump.h>
+#include <linux/dma-direct.h>
 
 struct iommu_dma_msi_page {
 	struct list_head	list;
@@ -505,29 +507,89 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
 			iommu_tlb_sync(domain, &iotlb_gather);
 	}
 
+
 	iommu_dma_free_iova(cookie, dma_addr, size, freelist);
 }
 
+static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
+		size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iova_domain *iovad = &cookie->iovad;
+	size_t iova_off = iova_offset(iovad, dma_addr);
+	size_t aligned_size = iova_align(iovad, size + iova_off);
+	phys_addr_t phys;
+
+	phys = iommu_iova_to_phys(domain, dma_addr);
+	if (WARN_ON(!phys))
+		return;
+
+	__iommu_dma_unmap(dev, dma_addr, size);
+
+#ifdef CONFIG_SWIOTLB
+	if (unlikely(is_swiotlb_buffer(phys)))
+		swiotlb_tbl_unmap_single(dev, phys, size,
+				aligned_size, dir, attrs);
+#endif
+}
+
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-		size_t size, int prot, dma_addr_t dma_mask)
+		size_t org_size, dma_addr_t dma_mask, bool coherent,
+		enum dma_data_direction dir, unsigned long attrs)
 {
+	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	size_t iova_off = iova_offset(iovad, phys);
+	size_t aligned_size = iova_align(iovad, org_size + iova_off);
 	dma_addr_t iova;
 
 	if (unlikely(iommu_dma_deferred_attach(dev, domain)))
 		return DMA_MAPPING_ERROR;
 
-	size = iova_align(iovad, size + iova_off);
+#ifdef CONFIG_SWIOTLB
+	/*
+	 * If both the physical buffer start address and size are
+	 * page aligned, we don't need to use a bounce page.
+	 */
+	if (iommu_needs_bounce_buffer(dev)
+			&& iova_offset(iovad, phys | org_size)) {
+		phys = swiotlb_tbl_map_single(dev,
+				__phys_to_dma(dev, io_tlb_start),
+				phys, org_size, aligned_size, dir, attrs);
+
+		if (phys == DMA_MAPPING_ERROR)
+			return DMA_MAPPING_ERROR;
+
+		/* Cleanup the padding area. */
+		void *padding_start = phys_to_virt(phys);
+		size_t padding_size = aligned_size;
+
+		if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+		    (dir == DMA_TO_DEVICE ||
+		     dir == DMA_BIDIRECTIONAL)) {
+			padding_start += org_size;
+			padding_size -= org_size;
+		}
 
-	iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+		memset(padding_start, 0, padding_size);
+	}
+#endif
+
+	iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev);
 	if (!iova)
 		return DMA_MAPPING_ERROR;
 
-	if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-		iommu_dma_free_iova(cookie, iova, size, NULL);
+	if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size,
+				prot)) {
+
+		if (unlikely(is_swiotlb_buffer(phys)))
+			swiotlb_tbl_unmap_single(dev, phys, org_size,
+					aligned_size, dir, attrs);
+		iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
 		return DMA_MAPPING_ERROR;
 	}
 	return iova + iova_off;
@@ -761,10 +823,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
-	int prot = dma_info_to_prot(dir, coherent, attrs);
 	dma_addr_t dma_handle;
 
-	dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+	dma_handle = __iommu_dma_map(dev, phys, size, dma_get_mask(dev),
+			coherent, dir, attrs);
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    dma_handle != DMA_MAPPING_ERROR)
 		arch_sync_dma_for_device(phys, size, dir);
@@ -776,7 +838,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-	__iommu_dma_unmap(dev, dma_handle, size);
+	__iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
 }
 
 /*
@@ -960,21 +1022,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		sg = tmp;
 	}
 	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
+	__iommu_dma_unmap_swiotlb(dev, start, end - start, dir, attrs);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	return __iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			dma_get_mask(dev));
+	return __iommu_dma_map(dev, phys, size, dma_get_mask(dev), false, dir,
+			attrs);
 }
 
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	__iommu_dma_unmap(dev, handle, size);
+	__iommu_dma_unmap_swiotlb(dev, handle, size, dir, attrs);
 }
 
 static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
@@ -1056,7 +1117,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 		dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
 {
 	bool coherent = dev_is_dma_coherent(dev);
-	int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
 	struct page *page = NULL;
 	void *cpu_addr;
 
@@ -1074,8 +1134,9 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
 	if (!cpu_addr)
 		return NULL;
 
-	*handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
-			dev->coherent_dma_mask);
+	*handle = __iommu_dma_map(dev, page_to_phys(page), size,
+			dev->coherent_dma_mask, coherent, DMA_BIDIRECTIONAL,
+			attrs);
 	if (*handle == DMA_MAPPING_ERROR) {
 		__iommu_dma_free(dev, size, cpu_addr);
 		return NULL;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index cec728f40d9c..e5653cb20c83 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2236,6 +2236,16 @@ void iommu_get_resv_regions(struct device *dev, struct list_head *list)
 		ops->get_resv_regions(dev, list);
 }
 
+int iommu_needs_bounce_buffer(struct device *dev)
+{
+	const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+	if (ops && ops->needs_bounce_buffer)
+		return ops->needs_bounce_buffer(dev);
+
+	return 0;
+}
+
 void iommu_put_resv_regions(struct device *dev, struct list_head *list)
 {
 	const struct iommu_ops *ops = dev->bus->iommu_ops;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 61cac25410b5..d377ffa362a7 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -280,6 +280,7 @@ struct iommu_ops {
 			       enum iommu_attr attr, void *data);
 	int (*domain_set_attr)(struct iommu_domain *domain,
 			       enum iommu_attr attr, void *data);
+	int (*needs_bounce_buffer)(struct device *dev);
 
 	/* Request/Free a list of reserved regions for a device */
 	void (*get_resv_regions)(struct device *dev, struct list_head *list);
@@ -460,6 +461,7 @@ extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t io
 extern void iommu_set_fault_handler(struct iommu_domain *domain,
 			iommu_fault_handler_t handler, void *token);
 
+extern int iommu_needs_bounce_buffer(struct device *dev);
 extern void iommu_get_resv_regions(struct device *dev, struct list_head *list);
 extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
 extern int iommu_request_dm_for_dev(struct device *dev);
@@ -530,7 +532,7 @@ static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 		domain->ops->flush_iotlb_all(domain);
 }
 
-static inline void flush_iotlb_range(struct iommu_domain *domain,
+static inline void iommu_flush_iotlb_range(struct iommu_domain *domain,
 			unsigned long iova, size_t size,
 			struct page *freelist)
 {
@@ -764,6 +766,11 @@ static inline void iommu_set_fault_handler(struct iommu_domain *domain,
 {
 }
 
+static inline int iommu_needs_bounce_buffer(struct device *dev)
+{
+	return 0;
+}
+
 static inline void iommu_get_resv_regions(struct device *dev,
 					struct list_head *list)
 {
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (5 preceding siblings ...)
  2019-12-21 15:03 ` [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers Tom Murphy
@ 2019-12-21 15:03 ` Tom Murphy
  2019-12-21 15:04 ` [PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu Tom Murphy
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:03 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Convert the intel iommu driver to the dma-iommu api. Remove the IOVA
handling and reserved region code from the intel iommu driver.
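
After the conversion, the streaming-DMA call flow looks roughly like
this (a hand-written summary for reviewers, not code from the patch):

	/*
	 * dma_map_page(dev, ...)
	 *   -> iommu_dma_map_page()        generic dma-iommu layer
	 *      -> __iommu_dma_map()        IOVA allocation, plus an
	 *                                  optional swiotlb bounce for
	 *                                  untrusted devices
	 *         -> iommu_map_atomic()
	 *            -> intel_iommu_map()  VT-d page tables via iommu_ops
	 */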

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/Kconfig       |   1 +
 drivers/iommu/intel-iommu.c | 742 +++---------------------------------
 include/linux/intel-iommu.h |   1 -
 3 files changed, 55 insertions(+), 689 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 0b9d78a0f3ac..4126bb2794c7 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -188,6 +188,7 @@ config INTEL_IOMMU
 	select NEED_DMA_MAP_STATE
 	select DMAR_TABLE
 	select SWIOTLB
+	select IOMMU_DMA
 	help
 	  DMA remapping (DMAR) devices support enables independent address
 	  translations for Direct Memory Access (DMA) from devices.
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 675ca2aa6e20..e673e1ee9288 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -31,6 +31,7 @@
 #include <linux/io.h>
 #include <linux/iova.h>
 #include <linux/iommu.h>
+#include <linux/dma-iommu.h>
 #include <linux/intel-iommu.h>
 #include <linux/syscore_ops.h>
 #include <linux/tboot.h>
@@ -41,7 +42,6 @@
 #include <linux/dma-direct.h>
 #include <linux/crash_dump.h>
 #include <linux/numa.h>
-#include <linux/swiotlb.h>
 #include <asm/irq_remapping.h>
 #include <asm/cacheflush.h>
 #include <asm/iommu.h>
@@ -380,9 +380,6 @@ EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped);
 static DEFINE_SPINLOCK(device_domain_lock);
 static LIST_HEAD(device_domain_list);
 
-#define device_needs_bounce(d) (!intel_no_bounce && dev_is_pci(d) &&	\
-				to_pci_dev(d)->untrusted)
-
 /*
  * Iterate over elements in device_domain_list and call the specified
  * callback @fn against each element.
@@ -1180,13 +1177,6 @@ static void dma_free_pagelist(struct page *freelist)
 	}
 }
 
-static void iova_entry_free(unsigned long data)
-{
-	struct page *freelist = (struct page *)data;
-
-	dma_free_pagelist(freelist);
-}
-
 /* iommu handling */
 static int iommu_alloc_root_entry(struct intel_iommu *iommu)
 {
@@ -1530,16 +1520,14 @@ static inline void __mapping_notify_one(struct intel_iommu *iommu,
 		iommu_flush_write_buffer(iommu);
 }
 
-static void iommu_flush_iova(struct iova_domain *iovad)
+static void intel_flush_iotlb_all(struct iommu_domain *domain)
 {
-	struct dmar_domain *domain;
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	int idx;
 
-	domain = container_of(iovad, struct dmar_domain, iovad);
-
-	for_each_domain_iommu(idx, domain) {
+	for_each_domain_iommu(idx, dmar_domain) {
 		struct intel_iommu *iommu = g_iommus[idx];
-		u16 did = domain->iommu_did[iommu->seq_id];
+		u16 did = dmar_domain->iommu_did[iommu->seq_id];
 
 		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 
@@ -1784,53 +1772,6 @@ static int domain_detach_iommu(struct dmar_domain *domain,
 	return count;
 }
 
-static struct iova_domain reserved_iova_list;
-static struct lock_class_key reserved_rbtree_key;
-
-static int dmar_init_reserved_ranges(void)
-{
-	struct pci_dev *pdev = NULL;
-	struct iova *iova;
-	int i;
-
-	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
-		&reserved_rbtree_key);
-
-	/* IOAPIC ranges shouldn't be accessed by DMA */
-	iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
-		IOVA_PFN(IOAPIC_RANGE_END));
-	if (!iova) {
-		pr_err("Reserve IOAPIC range failed\n");
-		return -ENODEV;
-	}
-
-	/* Reserve all PCI MMIO to avoid peer-to-peer access */
-	for_each_pci_dev(pdev) {
-		struct resource *r;
-
-		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
-			r = &pdev->resource[i];
-			if (!r->flags || !(r->flags & IORESOURCE_MEM))
-				continue;
-			iova = reserve_iova(&reserved_iova_list,
-					    IOVA_PFN(r->start),
-					    IOVA_PFN(r->end));
-			if (!iova) {
-				pci_err(pdev, "Reserve iova for %pR failed\n", r);
-				return -ENODEV;
-			}
-		}
-	}
-	return 0;
-}
-
-static void domain_reserve_special_ranges(struct dmar_domain *domain)
-{
-	copy_reserved_iova(&reserved_iova_list, &domain->iovad);
-}
-
 static inline int guestwidth_to_adjustwidth(int gaw)
 {
 	int agaw;
@@ -1850,16 +1791,11 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
 {
 	int adjust_width, agaw;
 	unsigned long sagaw;
-	int err;
-
-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-	err = init_iova_flush_queue(&domain->iovad,
-				    iommu_flush_iova, iova_entry_free);
-	if (err)
-		return err;
+	int ret;
 
-	domain_reserve_special_ranges(domain);
+	ret = iommu_get_dma_cookie(&domain->domain);
+	if (ret)
+		return ret;
 
 	/* calculate AGAW */
 	if (guest_width > cap_mgaw(iommu->cap))
@@ -1910,7 +1846,7 @@ static void domain_exit(struct dmar_domain *domain)
 
 	/* destroy iovas */
 	if (domain->domain.type == IOMMU_DOMAIN_DMA)
-		put_iova_domain(&domain->iovad);
+		iommu_put_dma_cookie(&domain->domain);
 
 	if (domain->pgd) {
 		struct page *freelist;
@@ -2439,20 +2375,6 @@ static struct dmar_domain *find_domain(struct device *dev)
 	return NULL;
 }
 
-static struct dmar_domain *deferred_attach_domain(struct device *dev)
-{
-	if (unlikely(dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO)) {
-		struct iommu_domain *domain;
-
-		dev->archdata.iommu = NULL;
-		domain = iommu_get_domain_for_dev(dev);
-		if (domain)
-			intel_iommu_attach_device(domain, dev);
-	}
-
-	return find_domain(dev);
-}
-
 static inline struct device_domain_info *
 dmar_search_domain_by_dev_info(int segment, int bus, int devfn)
 {
@@ -3363,39 +3285,6 @@ static int __init init_dmars(void)
 	return ret;
 }
 
-/* This takes a number of _MM_ pages, not VTD pages */
-static unsigned long intel_alloc_iova(struct device *dev,
-				     struct dmar_domain *domain,
-				     unsigned long nrpages, uint64_t dma_mask)
-{
-	unsigned long iova_pfn;
-
-	/* Restrict dma_mask to the width that the iommu can handle */
-	dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
-	/* Ensure we reserve the whole size-aligned region */
-	nrpages = __roundup_pow_of_two(nrpages);
-
-	if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) {
-		/*
-		 * First try to allocate an io virtual address in
-		 * DMA_BIT_MASK(32) and if that fails then try allocating
-		 * from higher range
-		 */
-		iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
-					   IOVA_PFN(DMA_BIT_MASK(32)), false);
-		if (iova_pfn)
-			return iova_pfn;
-	}
-	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
-				   IOVA_PFN(dma_mask), true);
-	if (unlikely(!iova_pfn)) {
-		dev_err(dev, "Allocating %ld-page iova failed", nrpages);
-		return 0;
-	}
-
-	return iova_pfn;
-}
-
 static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 {
 	struct dmar_domain *domain, *tmp;
@@ -3444,528 +3333,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 	return domain;
 }
 
-static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
-				     size_t size, int dir, u64 dma_mask)
-{
-	struct dmar_domain *domain;
-	phys_addr_t start_paddr;
-	unsigned long iova_pfn;
-	int prot = 0;
-	int ret;
-	struct intel_iommu *iommu;
-	unsigned long paddr_pfn = paddr >> PAGE_SHIFT;
-
-	BUG_ON(dir == DMA_NONE);
-
-	domain = deferred_attach_domain(dev);
-	if (!domain)
-		return DMA_MAPPING_ERROR;
-
-	iommu = domain_get_iommu(domain);
-	size = aligned_nrpages(paddr, size);
-
-	iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask);
-	if (!iova_pfn)
-		goto error;
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-	/*
-	 * paddr - (paddr + size) might be partial page, we should map the whole
-	 * page.  Note: if two part of one page are separately mapped, we
-	 * might have two guest_addr mapping to the same host paddr, but this
-	 * is not a big problem
-	 */
-	ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn),
-				 mm_to_dma_pfn(paddr_pfn), size, prot);
-	if (ret)
-		goto error;
-
-	start_paddr = (phys_addr_t)iova_pfn << PAGE_SHIFT;
-	start_paddr += paddr & ~PAGE_MASK;
-
-	trace_map_single(dev, start_paddr, paddr, size << VTD_PAGE_SHIFT);
-
-	return start_paddr;
-
-error:
-	if (iova_pfn)
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
-	dev_err(dev, "Device request: %zx@%llx dir %d --- failed\n",
-		size, (unsigned long long)paddr, dir);
-	return DMA_MAPPING_ERROR;
-}
-
-static dma_addr_t intel_map_page(struct device *dev, struct page *page,
-				 unsigned long offset, size_t size,
-				 enum dma_data_direction dir,
-				 unsigned long attrs)
-{
-	return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
-			*dev->dma_mask);
-}
-
-static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
-				     size_t size, enum dma_data_direction dir,
-				     unsigned long attrs)
-{
-	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
-}
-
-static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
-{
-	struct dmar_domain *domain;
-	unsigned long start_pfn, last_pfn;
-	unsigned long nrpages;
-	unsigned long iova_pfn;
-	struct intel_iommu *iommu;
-	struct page *freelist;
-	struct pci_dev *pdev = NULL;
-
-	domain = find_domain(dev);
-	BUG_ON(!domain);
-
-	iommu = domain_get_iommu(domain);
-
-	iova_pfn = IOVA_PFN(dev_addr);
-
-	nrpages = aligned_nrpages(dev_addr, size);
-	start_pfn = mm_to_dma_pfn(iova_pfn);
-	last_pfn = start_pfn + nrpages - 1;
-
-	if (dev_is_pci(dev))
-		pdev = to_pci_dev(dev);
-
-	freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
-	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
-			!has_iova_flush_queue(&domain->iovad)) {
-		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
-				      nrpages, !freelist, 0);
-		/* free iova */
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages));
-		dma_free_pagelist(freelist);
-	} else {
-		queue_iova(&domain->iovad, iova_pfn, nrpages,
-			   (unsigned long)freelist);
-		/*
-		 * queue up the release of the unmap to save the 1/6th of the
-		 * cpu used up by the iotlb flush operation...
-		 */
-	}
-
-	trace_unmap_single(dev, dev_addr, size);
-}
-
-static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
-			     size_t size, enum dma_data_direction dir,
-			     unsigned long attrs)
-{
-	intel_unmap(dev, dev_addr, size);
-}
-
-static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	intel_unmap(dev, dev_addr, size);
-}
-
-static void *intel_alloc_coherent(struct device *dev, size_t size,
-				  dma_addr_t *dma_handle, gfp_t flags,
-				  unsigned long attrs)
-{
-	struct page *page = NULL;
-	int order;
-
-	size = PAGE_ALIGN(size);
-	order = get_order(size);
-
-	if (gfpflags_allow_blocking(flags)) {
-		unsigned int count = size >> PAGE_SHIFT;
-
-		page = dma_alloc_from_contiguous(dev, count, order,
-						 flags & __GFP_NOWARN);
-	}
-
-	if (!page)
-		page = alloc_pages(flags, order);
-	if (!page)
-		return NULL;
-	memset(page_address(page), 0, size);
-
-	*dma_handle = __intel_map_single(dev, page_to_phys(page), size,
-					 DMA_BIDIRECTIONAL,
-					 dev->coherent_dma_mask);
-	if (*dma_handle != DMA_MAPPING_ERROR)
-		return page_address(page);
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, order);
-
-	return NULL;
-}
-
-static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
-				dma_addr_t dma_handle, unsigned long attrs)
-{
-	int order;
-	struct page *page = virt_to_page(vaddr);
-
-	size = PAGE_ALIGN(size);
-	order = get_order(size);
-
-	intel_unmap(dev, dma_handle, size);
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, order);
-}
-
-static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
-			   int nelems, enum dma_data_direction dir,
-			   unsigned long attrs)
-{
-	dma_addr_t startaddr = sg_dma_address(sglist) & PAGE_MASK;
-	unsigned long nrpages = 0;
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i) {
-		nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
-	}
-
-	intel_unmap(dev, startaddr, nrpages << VTD_PAGE_SHIFT);
-
-	trace_unmap_sg(dev, startaddr, nrpages << VTD_PAGE_SHIFT);
-}
-
-static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-			enum dma_data_direction dir, unsigned long attrs)
-{
-	int i;
-	struct dmar_domain *domain;
-	size_t size = 0;
-	int prot = 0;
-	unsigned long iova_pfn;
-	int ret;
-	struct scatterlist *sg;
-	unsigned long start_vpfn;
-	struct intel_iommu *iommu;
-
-	BUG_ON(dir == DMA_NONE);
-
-	domain = deferred_attach_domain(dev);
-	if (!domain)
-		return 0;
-
-	iommu = domain_get_iommu(domain);
-
-	for_each_sg(sglist, sg, nelems, i)
-		size += aligned_nrpages(sg->offset, sg->length);
-
-	iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size),
-				*dev->dma_mask);
-	if (!iova_pfn) {
-		sglist->dma_length = 0;
-		return 0;
-	}
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-
-	start_vpfn = mm_to_dma_pfn(iova_pfn);
-
-	ret = domain_sg_mapping(domain, start_vpfn, sglist, size, prot);
-	if (unlikely(ret)) {
-		dma_pte_free_pagetable(domain, start_vpfn,
-				       start_vpfn + size - 1,
-				       agaw_to_level(domain->agaw) + 1);
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
-		return 0;
-	}
-
-	trace_map_sg(dev, iova_pfn << PAGE_SHIFT,
-		     sg_phys(sglist), size << VTD_PAGE_SHIFT);
-
-	return nelems;
-}
-
-static u64 intel_get_required_mask(struct device *dev)
-{
-	return DMA_BIT_MASK(32);
-}
-
-static const struct dma_map_ops intel_dma_ops = {
-	.alloc = intel_alloc_coherent,
-	.free = intel_free_coherent,
-	.map_sg = intel_map_sg,
-	.unmap_sg = intel_unmap_sg,
-	.map_page = intel_map_page,
-	.unmap_page = intel_unmap_page,
-	.map_resource = intel_map_resource,
-	.unmap_resource = intel_unmap_resource,
-	.dma_supported = dma_direct_supported,
-	.mmap = dma_common_mmap,
-	.get_sgtable = dma_common_get_sgtable,
-	.get_required_mask = intel_get_required_mask,
-};
-
-static void
-bounce_sync_single(struct device *dev, dma_addr_t addr, size_t size,
-		   enum dma_data_direction dir, enum dma_sync_target target)
-{
-	struct dmar_domain *domain;
-	phys_addr_t tlb_addr;
-
-	domain = find_domain(dev);
-	if (WARN_ON(!domain))
-		return;
-
-	tlb_addr = intel_iommu_iova_to_phys(&domain->domain, addr);
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_sync_single(dev, tlb_addr, size, dir, target);
-}
-
-static dma_addr_t
-bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
-		  enum dma_data_direction dir, unsigned long attrs,
-		  u64 dma_mask)
-{
-	size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE);
-	struct dmar_domain *domain;
-	struct intel_iommu *iommu;
-	unsigned long iova_pfn;
-	unsigned long nrpages;
-	phys_addr_t tlb_addr;
-	int prot = 0;
-	int ret;
-
-	domain = deferred_attach_domain(dev);
-	if (WARN_ON(dir == DMA_NONE || !domain))
-		return DMA_MAPPING_ERROR;
-
-	iommu = domain_get_iommu(domain);
-	if (WARN_ON(!iommu))
-		return DMA_MAPPING_ERROR;
-
-	nrpages = aligned_nrpages(0, size);
-	iova_pfn = intel_alloc_iova(dev, domain,
-				    dma_to_mm_pfn(nrpages), dma_mask);
-	if (!iova_pfn)
-		return DMA_MAPPING_ERROR;
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL ||
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-
-	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
-	 */
-	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
-		tlb_addr = swiotlb_tbl_map_single(dev,
-				__phys_to_dma(dev, io_tlb_start),
-				paddr, size, aligned_size, dir, attrs);
-		if (tlb_addr == DMA_MAPPING_ERROR) {
-			goto swiotlb_error;
-		} else {
-			/* Cleanup the padding area. */
-			void *padding_start = phys_to_virt(tlb_addr);
-			size_t padding_size = aligned_size;
-
-			if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-			    (dir == DMA_TO_DEVICE ||
-			     dir == DMA_BIDIRECTIONAL)) {
-				padding_start += size;
-				padding_size -= size;
-			}
-
-			memset(padding_start, 0, padding_size);
-		}
-	} else {
-		tlb_addr = paddr;
-	}
-
-	ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn),
-				 tlb_addr >> VTD_PAGE_SHIFT, nrpages, prot);
-	if (ret)
-		goto mapping_error;
-
-	trace_bounce_map_single(dev, iova_pfn << PAGE_SHIFT, paddr, size);
-
-	return (phys_addr_t)iova_pfn << PAGE_SHIFT;
-
-mapping_error:
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
-					 aligned_size, dir, attrs);
-swiotlb_error:
-	free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages));
-	dev_err(dev, "Device bounce map: %zx@%llx dir %d --- failed\n",
-		size, (unsigned long long)paddr, dir);
-
-	return DMA_MAPPING_ERROR;
-}
-
-static void
-bounce_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
-		    enum dma_data_direction dir, unsigned long attrs)
-{
-	size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE);
-	struct dmar_domain *domain;
-	phys_addr_t tlb_addr;
-
-	domain = find_domain(dev);
-	if (WARN_ON(!domain))
-		return;
-
-	tlb_addr = intel_iommu_iova_to_phys(&domain->domain, dev_addr);
-	if (WARN_ON(!tlb_addr))
-		return;
-
-	intel_unmap(dev, dev_addr, size);
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
-					 aligned_size, dir, attrs);
-
-	trace_bounce_unmap_single(dev, dev_addr, size);
-}
-
-static dma_addr_t
-bounce_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	return bounce_map_single(dev, page_to_phys(page) + offset,
-				 size, dir, attrs, *dev->dma_mask);
-}
-
-static dma_addr_t
-bounce_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
-		    enum dma_data_direction dir, unsigned long attrs)
-{
-	return bounce_map_single(dev, phys_addr, size,
-				 dir, attrs, *dev->dma_mask);
-}
-
-static void
-bounce_unmap_page(struct device *dev, dma_addr_t dev_addr, size_t size,
-		  enum dma_data_direction dir, unsigned long attrs)
-{
-	bounce_unmap_single(dev, dev_addr, size, dir, attrs);
-}
-
-static void
-bounce_unmap_resource(struct device *dev, dma_addr_t dev_addr, size_t size,
-		      enum dma_data_direction dir, unsigned long attrs)
-{
-	bounce_unmap_single(dev, dev_addr, size, dir, attrs);
-}
-
-static void
-bounce_unmap_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_unmap_page(dev, sg->dma_address,
-				  sg_dma_len(sg), dir, attrs);
-}
-
-static int
-bounce_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-	      enum dma_data_direction dir, unsigned long attrs)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i) {
-		sg->dma_address = bounce_map_page(dev, sg_page(sg),
-						  sg->offset, sg->length,
-						  dir, attrs);
-		if (sg->dma_address == DMA_MAPPING_ERROR)
-			goto out_unmap;
-		sg_dma_len(sg) = sg->length;
-	}
-
-	return nelems;
-
-out_unmap:
-	bounce_unmap_sg(dev, sglist, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
-	return 0;
-}
-
-static void
-bounce_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
-			   size_t size, enum dma_data_direction dir)
-{
-	bounce_sync_single(dev, addr, size, dir, SYNC_FOR_CPU);
-}
-
-static void
-bounce_sync_single_for_device(struct device *dev, dma_addr_t addr,
-			      size_t size, enum dma_data_direction dir)
-{
-	bounce_sync_single(dev, addr, size, dir, SYNC_FOR_DEVICE);
-}
-
-static void
-bounce_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
-		       int nelems, enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_sync_single(dev, sg_dma_address(sg),
-				   sg_dma_len(sg), dir, SYNC_FOR_CPU);
-}
-
-static void
-bounce_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
-			  int nelems, enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_sync_single(dev, sg_dma_address(sg),
-				   sg_dma_len(sg), dir, SYNC_FOR_DEVICE);
-}
-
-static const struct dma_map_ops bounce_dma_ops = {
-	.alloc			= intel_alloc_coherent,
-	.free			= intel_free_coherent,
-	.map_sg			= bounce_map_sg,
-	.unmap_sg		= bounce_unmap_sg,
-	.map_page		= bounce_map_page,
-	.unmap_page		= bounce_unmap_page,
-	.sync_single_for_cpu	= bounce_sync_single_for_cpu,
-	.sync_single_for_device	= bounce_sync_single_for_device,
-	.sync_sg_for_cpu	= bounce_sync_sg_for_cpu,
-	.sync_sg_for_device	= bounce_sync_sg_for_device,
-	.map_resource		= bounce_map_resource,
-	.unmap_resource		= bounce_unmap_resource,
-	.dma_supported		= dma_direct_supported,
-};
-
 static inline int iommu_domain_cache_init(void)
 {
 	int ret = 0;
@@ -4648,7 +4015,7 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
 			if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
 				continue;
 
-			free_cpu_cached_iovas(cpu, &domain->iovad);
+			iommu_dma_free_cpu_cached_iovas(cpu, &domain->domain);
 		}
 	}
 }
@@ -4917,12 +4284,6 @@ int __init intel_iommu_init(void)
 	if (list_empty(&dmar_atsr_units))
 		pr_info("No ATSR found\n");
 
-	if (dmar_init_reserved_ranges()) {
-		if (force_on)
-			panic("tboot: Failed to reserve iommu ranges\n");
-		goto out_free_reserved_range;
-	}
-
 	if (dmar_map_gfx)
 		intel_iommu_gfx_mapped = 1;
 
@@ -4933,7 +4294,7 @@ int __init intel_iommu_init(void)
 		if (force_on)
 			panic("tboot: Failed to initialize DMARs\n");
 		pr_err("Initialization failed\n");
-		goto out_free_reserved_range;
+		goto out_free_dmar;
 	}
 	up_write(&dmar_global_lock);
 
@@ -4983,8 +4344,6 @@ int __init intel_iommu_init(void)
 
 	return 0;
 
-out_free_reserved_range:
-	put_iova_domain(&reserved_iova_list);
 out_free_dmar:
 	intel_iommu_free_dmars();
 	up_write(&dmar_global_lock);
@@ -5087,18 +4446,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 	return 0;
 }
 
-static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
-{
-	init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-	copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
-
-	if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
-				iova_entry_free)) {
-		pr_warn("iova flush queue initialization failed\n");
-		intel_iommu_strict = 1;
-	}
-}
-
 static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 {
 	struct dmar_domain *dmar_domain;
@@ -5119,8 +4466,9 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 			return NULL;
 		}
 
-		if (type == IOMMU_DOMAIN_DMA)
-			intel_init_iova_domain(dmar_domain);
+		if (type == IOMMU_DOMAIN_DMA &&
+				iommu_get_dma_cookie(&dmar_domain->domain))
+			return NULL;
 
 		domain_update_iommu_cap(dmar_domain);
 
@@ -5319,6 +4667,9 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
 {
 	int ret;
 
+	/* Clear deferred attach */
+	dev->archdata.iommu = NULL;
+
 	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
 	    device_is_rmrr_locked(dev)) {
 		dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.\n");
@@ -5533,6 +4884,7 @@ static int intel_iommu_add_device(struct device *dev)
 	struct intel_iommu *iommu;
 	struct iommu_group *group;
 	u64 dma_mask = *dev->dma_mask;
+	dma_addr_t base;
 	u8 bus, devfn;
 	int ret;
 
@@ -5555,6 +4907,7 @@ static int intel_iommu_add_device(struct device *dev)
 
 	iommu_group_put(group);
 
+	base = IOVA_START_PFN << VTD_PAGE_SHIFT;
 	domain = iommu_get_domain_for_dev(dev);
 	dmar_domain = to_dmar_domain(domain);
 	if (domain->type == IOMMU_DOMAIN_DMA) {
@@ -5573,7 +4926,9 @@ static int intel_iommu_add_device(struct device *dev)
 					 "Device uses a private identity domain.\n");
 			}
 		} else {
-			dev->dma_ops = &intel_dma_ops;
+			iommu_setup_dma_ops(dev, base,
+					__DOMAIN_MAX_ADDR(dmar_domain->gaw) -
+					base);
 		}
 	} else {
 		if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
@@ -5590,15 +4945,12 @@ static int intel_iommu_add_device(struct device *dev)
 				dev_info(dev,
 					 "Device uses a private dma domain.\n");
 			}
-			dev->dma_ops = &intel_dma_ops;
+			iommu_setup_dma_ops(dev, base,
+					__DOMAIN_MAX_ADDR(dmar_domain->gaw) -
+					base);
 		}
 	}
 
-	if (device_needs_bounce(dev)) {
-		dev_info(dev, "Use Intel IOMMU bounce page dma_ops\n");
-		set_dma_ops(dev, &bounce_dma_ops);
-	}
-
 	return 0;
 }
 
@@ -5620,6 +4972,31 @@ static void intel_iommu_remove_device(struct device *dev)
 	set_dma_ops(dev, NULL);
 }
 
+static int intel_iommu_domain_get_attr(struct iommu_domain *domain,
+		enum iommu_attr attr, void *data)
+{
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		return -ENODEV;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = !intel_iommu_strict;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int intel_iommu_needs_bounce_buffer(struct device *d)
+{
+	return !intel_no_bounce && dev_is_pci(d) && to_pci_dev(d)->untrusted;
+}
+
 static void intel_iommu_get_resv_regions(struct device *device,
 					 struct list_head *head)
 {
@@ -5737,19 +5114,6 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
 	return ret;
 }
 
-static void intel_iommu_apply_resv_region(struct device *dev,
-					  struct iommu_domain *domain,
-					  struct iommu_resv_region *region)
-{
-	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-	unsigned long start, end;
-
-	start = IOVA_PFN(region->start);
-	end   = IOVA_PFN(region->start + region->length - 1);
-
-	WARN_ON_ONCE(!reserve_iova(&dmar_domain->iovad, start, end));
-}
-
 #ifdef CONFIG_INTEL_IOMMU_SVM
 struct intel_iommu *intel_svm_device_to_iommu(struct device *dev)
 {
@@ -5916,13 +5280,15 @@ const struct iommu_ops intel_iommu_ops = {
 	.aux_get_pasid		= intel_iommu_aux_get_pasid,
 	.map			= intel_iommu_map,
 	.unmap			= intel_iommu_unmap,
+	.flush_iotlb_all	= intel_flush_iotlb_all,
 	.flush_iotlb_range      = intel_iommu_flush_iotlb_range,
 	.iova_to_phys		= intel_iommu_iova_to_phys,
 	.add_device		= intel_iommu_add_device,
 	.remove_device		= intel_iommu_remove_device,
+	.domain_get_attr	= intel_iommu_domain_get_attr,
+	.needs_bounce_buffer	= intel_iommu_needs_bounce_buffer,
 	.get_resv_regions	= intel_iommu_get_resv_regions,
 	.put_resv_regions	= intel_iommu_put_resv_regions,
-	.apply_resv_region	= intel_iommu_apply_resv_region,
 	.device_group		= pci_device_group,
 	.dev_has_feat		= intel_iommu_dev_has_feat,
 	.dev_feat_enabled	= intel_iommu_dev_feat_enabled,
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 6d8bf4bdf240..d07f14340870 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -489,7 +489,6 @@ struct dmar_domain {
 	bool has_iotlb_device;
 	struct list_head devices;	/* all devices' list */
 	struct list_head auxd;		/* link to device's auxiliary list */
-	struct iova_domain iovad;	/* iova's that belong to this domain */
 
 	struct dma_pte	*pgd;		/* virtual address */
 	int		gaw;		/* max guest address width */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu ops __finalise_sg
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (6 preceding siblings ...)
  2019-12-21 15:03 ` [PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops Tom Murphy
@ 2019-12-21 15:04 ` Tom Murphy
  2019-12-23 10:37 ` [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Jani Nikula
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2019-12-21 15:04 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Tom Murphy, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Disable combining sg segments in the dma-iommu api.
Combining sg segments exposes a bug in the intel i915 driver which
causes visual artifacts and the screen to freeze. This is most likely
down to how i915 handles the returned list: it appears not to respect
the return value, which gives the number of mapped DMA segments, and
instead relies on the previous behaviour of the intel iommu driver,
which always returned the same number of elements in the output list
as in the input list.
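
For reference, the suspected failure mode looks like this (an
illustrative sketch, not the actual i915 code; "st" and "program_hw"
are made-up stand-ins):

	struct scatterlist *sg;
	int i;

	/* Buggy consumer: assumes one mapped segment per input entry. */
	if (dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL) == 0)
		return -ENOMEM;

	/* Wrong: this must iterate over the *returned* count. Once
	 * segments are combined, entries past that count carry
	 * sg_dma_len(sg) == 0 and no valid DMA address. */
	for_each_sg(st->sgl, sg, st->nents, i)
		program_hw(sg_dma_address(sg), sg_dma_len(sg));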

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/dma-iommu.c | 38 +++++++-------------------------------
 1 file changed, 7 insertions(+), 31 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cf778db7d84d..d7547b912c87 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -853,8 +853,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 {
 	struct scatterlist *s, *cur = sg;
 	unsigned long seg_mask = dma_get_seg_boundary(dev);
-	unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
-	int i, count = 0;
+	int i;
 
 	for_each_sg(sg, s, nents, i) {
 		/* Restore this segment's original unaligned fields first */
@@ -862,39 +861,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		unsigned int s_length = sg_dma_len(s);
 		unsigned int s_iova_len = s->length;
 
+		if (i > 0)
+			cur = sg_next(cur);
+
 		s->offset += s_iova_off;
 		s->length = s_length;
-		sg_dma_address(s) = DMA_MAPPING_ERROR;
-		sg_dma_len(s) = 0;
-
-		/*
-		 * Now fill in the real DMA data. If...
-		 * - there is a valid output segment to append to
-		 * - and this segment starts on an IOVA page boundary
-		 * - but doesn't fall at a segment boundary
-		 * - and wouldn't make the resulting output segment too long
-		 */
-		if (cur_len && !s_iova_off && (dma_addr & seg_mask) &&
-		    (max_len - cur_len >= s_length)) {
-			/* ...then concatenate it with the previous one */
-			cur_len += s_length;
-		} else {
-			/* Otherwise start the next output segment */
-			if (i > 0)
-				cur = sg_next(cur);
-			cur_len = s_length;
-			count++;
-
-			sg_dma_address(cur) = dma_addr + s_iova_off;
-		}
-
-		sg_dma_len(cur) = cur_len;
+		sg_dma_address(cur) = dma_addr + s_iova_off;
+		sg_dma_len(cur) = s_length;
 		dma_addr += s_iova_len;
-
-		if (s_length + s_iova_off < s_iova_len)
-			cur_len = 0;
 	}
-	return count;
+	return nents;
 }
 
 /*
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment
  2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
@ 2019-12-21 23:46   ` Arvind Sankar
  2019-12-23  3:00   ` Lu Baolu
  1 sibling, 0 replies; 38+ messages in thread
From: Arvind Sankar @ 2019-12-21 23:46 UTC (permalink / raw)
  To: Tom Murphy
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, iommu, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Kukjin Kim, Daniel Vetter, Robin Murphy

On Sat, Dec 21, 2019 at 03:03:53PM +0000, Tom Murphy wrote:
> In the intel iommu driver devices which only support 32bit DMA can't be
> direct mapped. The implementation of this is weird. Currently we assign
> it a direct mapped domain and then remove the domain later and replace
> it with a domain of type IOMMU_DOMAIN_IDENTITY. We should just assign it
> a domain of type IOMMU_DOMAIN_IDENTITY from the beginning rather than
> needlessly swapping domains.
> 
> Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
> ---
>  drivers/iommu/intel-iommu.c | 88 +++++++++++++------------------------
>  1 file changed, 31 insertions(+), 57 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 0c8d81f56a30..c1ea66467918 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev)
>  	domain = iommu_get_domain_for_dev(dev);
>  	dmar_domain = to_dmar_domain(domain);
>  	if (domain->type == IOMMU_DOMAIN_DMA) {
> -		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) {
> +		/*
> +		 * We check dma_mask >= dma_get_required_mask(dev) because
> +		 * 32 bit DMA falls back to non-identity mapping.
> +		 */
> +		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
> +				dma_mask >= dma_get_required_mask(dev)) {
>  			ret = iommu_request_dm_for_dev(dev);
>  			if (ret) {
>  				dmar_remove_one_dev_info(dev);
> -- 
> 2.20.1
> 

Should this be dma_direct_get_required_mask()? dma_get_required_mask()
may return DMA_BIT_MASK(32) -- it calls back into
intel_get_required_mask(), and I'm not sure what iommu_no_mapping(dev)
will do at this point.
^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment
  2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
  2019-12-21 23:46   ` Arvind Sankar
@ 2019-12-23  3:00   ` Lu Baolu
  1 sibling, 0 replies; 38+ messages in thread
From: Lu Baolu @ 2019-12-23  3:00 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	linux-kernel, Kukjin Kim, Daniel Vetter, Robin Murphy

Hi,

On 12/21/19 11:03 PM, Tom Murphy wrote:
> @@ -5618,9 +5583,13 @@ static int intel_iommu_add_device(struct device *dev)
>   	struct iommu_domain *domain;
>   	struct intel_iommu *iommu;
>   	struct iommu_group *group;
> +	u64 dma_mask = *dev->dma_mask;
>   	u8 bus, devfn;
>   	int ret;
>   
> +	if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
> +		dma_mask = dev->coherent_dma_mask;
> +
>   	iommu = device_to_iommu(dev, &bus, &devfn);
>   	if (!iommu)
>   		return -ENODEV;
> @@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev)
>   	domain = iommu_get_domain_for_dev(dev);
>   	dmar_domain = to_dmar_domain(domain);
>   	if (domain->type == IOMMU_DOMAIN_DMA) {
> -		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) {
> +		/*
> +		 * We check dma_mask >= dma_get_required_mask(dev) because
> +		 * 32 bit DMA falls back to non-identity mapping.
> +		 */
> +		if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
> +				dma_mask >= dma_get_required_mask(dev)) {
>   			ret = iommu_request_dm_for_dev(dev);
>   			if (ret) {
>   				dmar_remove_one_dev_info(dev);

dev->dma_mask is set to 32 bits by default. The driver sets the real
dma_mask with dma_set_mask() while it is loading, according to the
device's real capability. So at this point you will always see a
32-bit dma_mask for each device.
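
That is, at add_device time the mask is still the 32-bit default; a
driver typically only raises it later in its probe routine, along the
lines of (standard pattern, not code from this series):

	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
	if (ret)
		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));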

Best regards,
baolu
^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (7 preceding siblings ...)
  2019-12-21 15:04 ` [PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu Tom Murphy
@ 2019-12-23 10:37 ` Jani Nikula
  2019-12-23 11:29   ` Robin Murphy
  2020-05-29  0:00 ` Logan Gunthorpe
  2020-08-26 18:14 ` Robin Murphy
  10 siblings, 1 reply; 38+ messages in thread
From: Jani Nikula @ 2019-12-23 10:37 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Alex Williamson, linux-mediatek,
	Rodrigo Vivi, Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck, linux-kernel,
	Tom Murphy, Kukjin Kim, Daniel Vetter, Robin Murphy

On Sat, 21 Dec 2019, Tom Murphy <murphyt7@tcd.ie> wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
>
> While converting the driver I exposed a bug in the intel i915 driver
> which causes a huge amount of artifacts on the screen of my
> laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused
> by the driver not respecting the return value of the
> dma_map_ops::map_sg function. You can see the driver ignoring the
> return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn’t cause issues because the intel map_sg always
> returned the same number of elements as the input scatter gather list
> but with the change to this dma-iommu api this is no longer the
> case. I wasn’t able to track the bug down to a specific line of code
> unfortunately.
>
> Could someone from the intel team look at this?

Let me get this straight. There is a current API that on success always
returns the same number of elements as the input scatter gather
list. You propose to change the API so that this is no longer the case?

A quick check of various dma_map_sg() calls in the kernel seems to
indicate checking for 0 for errors and then ignoring the non-zero return
is a common pattern. Are you sure it's okay to make the change you're
proposing?

Anyway, due to the time of year and all, I'd like to ask you to file a
bug against i915 at [1] so this is not forgotten, and please let's not
merge the changes before this is resolved.


Thanks,
Jani.


[1] https://gitlab.freedesktop.org/drm/intel/issues/new


-- 
Jani Nikula, Intel Open Source Graphics Center
^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-23 10:37 ` [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Jani Nikula
@ 2019-12-23 11:29   ` Robin Murphy
  2019-12-23 11:41     ` Jani Nikula
  0 siblings, 1 reply; 38+ messages in thread
From: Robin Murphy @ 2019-12-23 11:29 UTC (permalink / raw)
  To: Jani Nikula, Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Alex Williamson, linux-mediatek,
	Rodrigo Vivi, Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, Cornelia Huck, linux-kernel, Kukjin Kim,
	Daniel Vetter, David Woodhouse

On 2019-12-23 10:37 am, Jani Nikula wrote:
> On Sat, 21 Dec 2019, Tom Murphy <murphyt7@tcd.ie> wrote:
>> This patchset converts the intel iommu driver to the dma-iommu api.
>>
>> While converting the driver I exposed a bug in the intel i915 driver
>> which causes a huge amount of artifacts on the screen of my
>> laptop. You can see a picture of it here:
>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>>
>> This issue is most likely in the i915 driver and is most likely caused
>> by the driver not respecting the return value of the
>> dma_map_ops::map_sg function. You can see the driver ignoring the
>> return value here:
>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>
>> Previously this didn’t cause issues because the intel map_sg always
>> returned the same number of elements as the input scatter gather list
>> but with the change to this dma-iommu api this is no longer the
>> case. I wasn’t able to track the bug down to a specific line of code
>> unfortunately.
>>
>> Could someone from the intel team look at this?
> 
> Let me get this straight. There is a current API that on success always
> returns the same number of elements as the input scatter gather
> list. You propose to change the API so that this is no longer the case?

No, the API for dma_map_sg() has always been that it may return fewer 
DMA segments than nents - see Documentation/DMA-API.txt (and otherwise, 
the return value would surely be a simple success/fail condition). 
Relying on a particular implementation behaviour has never been strictly 
correct, even if it does happen to be a very common behaviour.

> A quick check of various dma_map_sg() calls in the kernel seems to
> indicate checking for 0 for errors and then ignoring the non-zero return
> is a common pattern. Are you sure it's okay to make the change you're
> proposing?

Various code uses tricks like just iterating the mapped list until the 
first segment with zero sg_dma_len(). Others may well simply have bugs.
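
The robust consumer pattern is simply to honour the return value (a
sketch; "program_hw" stands in for the device-specific code):

	struct scatterlist *sg;
	int i, count;

	count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
	if (!count)
		return -ENOMEM;

	/* Only the first 'count' entries describe valid DMA segments. */
	for_each_sg(sglist, sg, count, i)
		program_hw(sg_dma_address(sg), sg_dma_len(sg));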

Robin.

> Anyway, due to the time of year and all, I'd like to ask you to file a
> bug against i915 at [1] so this is not forgotten, and please let's not
> merge the changes before this is resolved.
> 
> 
> Thanks,
> Jani.
> 
> 
> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
> 
> 
^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-23 11:29   ` Robin Murphy
@ 2019-12-23 11:41     ` Jani Nikula
  2020-03-20  6:28       ` Tom Murphy
  0 siblings, 1 reply; 38+ messages in thread
From: Jani Nikula @ 2019-12-23 11:41 UTC (permalink / raw)
  To: Robin Murphy, Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Alex Williamson, linux-mediatek,
	Rodrigo Vivi, Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, Cornelia Huck, linux-kernel, Kukjin Kim,
	Daniel Vetter, David Woodhouse

On Mon, 23 Dec 2019, Robin Murphy <robin.murphy@arm.com> wrote:
> On 2019-12-23 10:37 am, Jani Nikula wrote:
>> On Sat, 21 Dec 2019, Tom Murphy <murphyt7@tcd.ie> wrote:
>>> This patchset converts the intel iommu driver to the dma-iommu api.
>>>
>>> While converting the driver I exposed a bug in the intel i915 driver
>>> which causes a huge amount of artifacts on the screen of my
>>> laptop. You can see a picture of it here:
>>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>>>
>>> This issue is most likely in the i915 driver and is most likely caused
>>> by the driver not respecting the return value of the
>>> dma_map_ops::map_sg function. You can see the driver ignoring the
>>> return value here:
>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>
>>> Previously this didn’t cause issues because the intel map_sg always
>>> returned the same number of elements as the input scatter gather list
>>> but with the change to this dma-iommu api this is no longer the
>>> case. I wasn’t able to track the bug down to a specific line of code
>>> unfortunately.
>>>
>>> Could someone from the intel team look at this?
>> 
>> Let me get this straight. There is current API that on success always
>> returns the same number of elements as the input scatter gather
>> list. You propose to change the API so that this is no longer the case?
>
> No, the API for dma_map_sg() has always been that it may return fewer 
> DMA segments than nents - see Documentation/DMA-API.txt (and otherwise, 
> the return value would surely be a simple success/fail condition). 
> Relying on a particular implementation behaviour has never been strictly 
> correct, even if it does happen to be a very common behaviour.
>
>> A quick check of various dma_map_sg() calls in the kernel seems to
>> indicate checking for 0 for errors and then ignoring the non-zero return
>> is a common pattern. Are you sure it's okay to make the change you're
>> proposing?
>
> Various code uses tricks like just iterating the mapped list until the 
> first segment with zero sg_dma_len(). Others may well simply have bugs.

Thanks for the clarification.

BR,
Jani.

>
> Robin.
>
>> Anyway, due to the time of year and all, I'd like to ask you to file a
>> bug against i915 at [1] so this is not forgotten, and please let's not
>> merge the changes before this is resolved.
>> 
>> 
>> Thanks,
>> Jani.
>> 
>> 
>> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
>> 
>> 

-- 
Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
  2019-12-21 15:03 ` [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers Tom Murphy
@ 2019-12-24 10:20   ` kbuild test robot
  0 siblings, 0 replies; 38+ messages in thread
From: kbuild test robot @ 2019-12-24 10:20 UTC (permalink / raw)
  To: Tom Murphy
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, iommu, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, kbuild-all, David Woodhouse,
	Cornelia Huck, linux-kernel, Tom Murphy, Kukjin Kim,
	Daniel Vetter, Robin Murphy


Hi Tom,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rockchip/for-next]
[cannot apply to iommu/next tegra/for-next vfio/next linus/master v5.5-rc3 next-20191219]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify the
base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Tom-Murphy/Convert-the-intel-iommu-driver-to-the-dma-iommu-api/20191224-171249
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip.git for-next
config: x86_64-defconfig (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers//iommu/dma-iommu.c: In function '__iommu_dma_map':
>> drivers//iommu/dma-iommu.c:568:3: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
      void *padding_start = phys_to_virt(phys);
      ^~~~

vim +568 drivers//iommu/dma-iommu.c

   537	
   538	static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
   539			size_t org_size, dma_addr_t dma_mask, bool coherent,
   540			enum dma_data_direction dir, unsigned long attrs)
   541	{
   542		int prot = dma_info_to_prot(dir, coherent, attrs);
   543		struct iommu_domain *domain = iommu_get_dma_domain(dev);
   544		struct iommu_dma_cookie *cookie = domain->iova_cookie;
   545		struct iova_domain *iovad = &cookie->iovad;
   546		size_t iova_off = iova_offset(iovad, phys);
   547		size_t aligned_size = iova_align(iovad, org_size + iova_off);
   548		dma_addr_t iova;
   549	
   550		if (unlikely(iommu_dma_deferred_attach(dev, domain)))
   551			return DMA_MAPPING_ERROR;
   552	
   553	#ifdef CONFIG_SWIOTLB
   554		/*
   555		 * If both the physical buffer start address and size are
   556		 * page aligned, we don't need to use a bounce page.
   557		 */
   558		if (iommu_needs_bounce_buffer(dev)
   559				&& !iova_offset(iovad, phys | org_size)) {
   560			phys = swiotlb_tbl_map_single(dev,
   561					__phys_to_dma(dev, io_tlb_start),
   562					phys, org_size, aligned_size, dir, attrs);
   563	
   564			if (phys == DMA_MAPPING_ERROR)
   565				return DMA_MAPPING_ERROR;
   566	
   567			/* Cleanup the padding area. */
 > 568			void *padding_start = phys_to_virt(phys);
   569			size_t padding_size = aligned_size;
   570	
   571			if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
   572			    (dir == DMA_TO_DEVICE ||
   573			     dir == DMA_BIDIRECTIONAL)) {
   574				padding_start += org_size;
   575				padding_size -= org_size;
   576			}
   577	
   578			memset(padding_start, 0, padding_size);
   579		}
   580	#endif
   581	
   582		iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev);
   583		if (!iova)
   584			return DMA_MAPPING_ERROR;
   585	
   586		if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size,
   587					prot)) {
   588	
   589			if (unlikely(is_swiotlb_buffer(phys)))
   590				swiotlb_tbl_unmap_single(dev, phys, aligned_size,
   591						aligned_size, dir, attrs);
   592			iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
   593			return DMA_MAPPING_ERROR;
   594		}
   595		return iova + iova_off;
   596	}
   597	
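
A minimal sketch of one way to address the warning, assuming the logic
stays otherwise unchanged, is to hoist the declarations to the top of
the block so that no statement precedes them:

        if (iommu_needs_bounce_buffer(dev)
                        && !iova_offset(iovad, phys | org_size)) {
                void *padding_start;
                size_t padding_size;

                phys = swiotlb_tbl_map_single(dev,
                                __phys_to_dma(dev, io_tlb_start),
                                phys, org_size, aligned_size, dir, attrs);
                if (phys == DMA_MAPPING_ERROR)
                        return DMA_MAPPING_ERROR;

                /* Cleanup the padding area. */
                padding_start = phys_to_virt(phys);
                padding_size = aligned_size;
                ...
        }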

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 28860 bytes --]


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-23 11:41     ` Jani Nikula
@ 2020-03-20  6:28       ` Tom Murphy
  0 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2020-03-20  6:28 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, iommu, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Alex Williamson, linux-mediatek,
	Rodrigo Vivi, Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Any news on this? Is there anyone who wants to try and fix this possible bug?

On Mon, 23 Dec 2019 at 03:41, Jani Nikula <jani.nikula@linux.intel.com> wrote:
>
> On Mon, 23 Dec 2019, Robin Murphy <robin.murphy@arm.com> wrote:
> > On 2019-12-23 10:37 am, Jani Nikula wrote:
> >> On Sat, 21 Dec 2019, Tom Murphy <murphyt7@tcd.ie> wrote:
> >>> This patchset converts the intel iommu driver to the dma-iommu api.
> >>>
> >>> While converting the driver I exposed a bug in the intel i915 driver
> >>> which causes a huge amount of artifacts on the screen of my
> >>> laptop. You can see a picture of it here:
> >>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> >>>
> >>> This issue is most likely in the i915 driver and is most likely caused
> >>> by the driver not respecting the return value of the
> >>> dma_map_ops::map_sg function. You can see the driver ignoring the
> >>> return value here:
> >>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> >>>
> >>> Previously this didn’t cause issues because the intel map_sg always
> >>> returned the same number of elements as the input scatter gather list
> >>> but with the change to this dma-iommu api this is no longer the
> >>> case. I wasn’t able to track the bug down to a specific line of code
> >>> unfortunately.
> >>>
> >>> Could someone from the intel team look at this?
> >>
> >> Let me get this straight. There is current API that on success always
> >> returns the same number of elements as the input scatter gather
> >> list. You propose to change the API so that this is no longer the case?
> >
> > No, the API for dma_map_sg() has always been that it may return fewer
> > DMA segments than nents - see Documentation/DMA-API.txt (and otherwise,
> > the return value would surely be a simple success/fail condition).
> > Relying on a particular implementation behaviour has never been strictly
> > correct, even if it does happen to be a very common behaviour.
> >
> >> A quick check of various dma_map_sg() calls in the kernel seems to
> >> indicate checking for 0 for errors and then ignoring the non-zero return
> >> is a common pattern. Are you sure it's okay to make the change you're
> >> proposing?
> >
> > Various code uses tricks like just iterating the mapped list until the
> > first segment with zero sg_dma_len(). Others may well simply have bugs.
>
> Thanks for the clarification.
>
> BR,
> Jani.
>
> >
> > Robin.
> >
> >> Anyway, due to the time of year and all, I'd like to ask you to file a
> >> bug against i915 at [1] so this is not forgotten, and please let's not
> >> merge the changes before this is resolved.
> >>
> >>
> >> Thanks,
> >> Jani.
> >>
> >>
> >> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
> >>
> >>
>
> --
> Jani Nikula, Intel Open Source Graphics Center

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  2019-12-21 15:03 ` [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path Tom Murphy
@ 2020-03-20  6:30   ` Tom Murphy
  2020-03-20  7:06     ` Lu Baolu
  0 siblings, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2020-03-20  6:30 UTC (permalink / raw)
  To: iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, Kukjin Kim, Daniel Vetter,
	Robin Murphy

Could we merge patches 1-3 from this series? They just clean up weird
code, and merging them will cover some of the work needed to move the
intel iommu driver to the dma-iommu api in the future.

On Sat, 21 Dec 2019 at 07:04, Tom Murphy <murphyt7@tcd.ie> wrote:
>
> Remove all IOVA handling code from the non-dma_ops path in the intel
> iommu driver.
>
> There's no need for the non-dma_ops path to keep track of IOVAs. The
> whole point of the non-dma_ops path is that it allows the IOVAs to be
> handled separately. The IOVA handling code removed in this patch is
> pointless.
>
> Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
> ---
>  drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
>  1 file changed, 33 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 64b1a9793daa..8d72ea0fb843 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
>         domain_remove_dev_info(domain);
>
>         /* destroy iovas */
> -       put_iova_domain(&domain->iovad);
> +       if (domain->domain.type == IOMMU_DOMAIN_DMA)
> +               put_iova_domain(&domain->iovad);
>
>         if (domain->pgd) {
>                 struct page *freelist;
> @@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
>  }
>
>  static int iommu_domain_identity_map(struct dmar_domain *domain,
> -                                    unsigned long long start,
> -                                    unsigned long long end)
> +                                    unsigned long first_vpfn,
> +                                    unsigned long last_vpfn)
>  {
> -       unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
> -       unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
> -
> -       if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
> -                         dma_to_mm_pfn(last_vpfn))) {
> -               pr_err("Reserving iova failed\n");
> -               return -ENOMEM;
> -       }
> -
> -       pr_debug("Mapping reserved region %llx-%llx\n", start, end);
>         /*
>          * RMRR range might have overlap with physical memory range,
>          * clear it first
> @@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)
>
>                 for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
>                         ret = iommu_domain_identity_map(si_domain,
> -                                       PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
> +                                       mm_to_dma_pfn(start_pfn),
> +                                       mm_to_dma_pfn(end_pfn));
>                         if (ret)
>                                 return ret;
>                 }
> @@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
>                                        unsigned long val, void *v)
>  {
>         struct memory_notify *mhp = v;
> -       unsigned long long start, end;
> -       unsigned long start_vpfn, last_vpfn;
> +       unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> +       unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
> +                       mhp->nr_pages - 1);
>
>         switch (val) {
>         case MEM_GOING_ONLINE:
> -               start = mhp->start_pfn << PAGE_SHIFT;
> -               end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
> -               if (iommu_domain_identity_map(si_domain, start, end)) {
> -                       pr_warn("Failed to build identity map for [%llx-%llx]\n",
> -                               start, end);
> +               if (iommu_domain_identity_map(si_domain, start_vpfn,
> +                                       last_vpfn)) {
> +                       pr_warn("Failed to build identity map for [%lx-%lx]\n",
> +                               start_vpfn, last_vpfn);
>                         return NOTIFY_BAD;
>                 }
>                 break;
>
>         case MEM_OFFLINE:
>         case MEM_CANCEL_ONLINE:
> -               start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> -               last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
> -               while (start_vpfn <= last_vpfn) {
> -                       struct iova *iova;
> +               {
>                         struct dmar_drhd_unit *drhd;
>                         struct intel_iommu *iommu;
>                         struct page *freelist;
>
> -                       iova = find_iova(&si_domain->iovad, start_vpfn);
> -                       if (iova == NULL) {
> -                               pr_debug("Failed get IOVA for PFN %lx\n",
> -                                        start_vpfn);
> -                               break;
> -                       }
> -
> -                       iova = split_and_remove_iova(&si_domain->iovad, iova,
> -                                                    start_vpfn, last_vpfn);
> -                       if (iova == NULL) {
> -                               pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
> -                                       start_vpfn, last_vpfn);
> -                               return NOTIFY_BAD;
> -                       }
> -
> -                       freelist = domain_unmap(si_domain, iova->pfn_lo,
> -                                              iova->pfn_hi);
> +                       freelist = domain_unmap(si_domain, start_vpfn,
> +                                       last_vpfn);
>
>                         rcu_read_lock();
>                         for_each_active_iommu(iommu, drhd)
>                                 iommu_flush_iotlb_psi(iommu, si_domain,
> -                                       iova->pfn_lo, iova_size(iova),
> +                                       start_vpfn, mhp->nr_pages,
>                                         !freelist, 0);
>                         rcu_read_unlock();
>                         dma_free_pagelist(freelist);
> -
> -                       start_vpfn = iova->pfn_hi + 1;
> -                       free_iova_mem(iova);
>                 }
>                 break;
>         }
> @@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
>                 for (did = 0; did < cap_ndoms(iommu->cap); did++) {
>                         domain = get_iommu_domain(iommu, (u16)did);
>
> -                       if (!domain)
> +                       if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
>                                 continue;
> +
>                         free_cpu_cached_iovas(cpu, &domain->iovad);
>                 }
>         }
> @@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
>  {
>         int adjust_width;
>
> -       init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> -       domain_reserve_special_ranges(domain);
> -
>         /* calculate AGAW */
>         domain->gaw = guest_width;
>         adjust_width = guestwidth_to_adjustwidth(guest_width);
> @@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
>         return 0;
>  }
>
> +static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
> +{
> +       init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> +       copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
> +
> +       if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
> +                               iova_entry_free)) {
> +               pr_warn("iova flush queue initialization failed\n");
> +               intel_iommu_strict = 1;
> +       }
> +}
> +
>  static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>  {
>         struct dmar_domain *dmar_domain;
> @@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
>                         return NULL;
>                 }
>
> -               if (type == IOMMU_DOMAIN_DMA &&
> -                   init_iova_flush_queue(&dmar_domain->iovad,
> -                                         iommu_flush_iova, iova_entry_free)) {
> -                       pr_warn("iova flush queue initialization failed\n");
> -                       intel_iommu_strict = 1;
> -               }
> +               if (type == IOMMU_DOMAIN_DMA)
> +                       intel_init_iova_domain(dmar_domain);
>
>                 domain_update_iommu_cap(dmar_domain);
>
> --
> 2.20.1
>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  2020-03-20  6:30   ` Tom Murphy
@ 2020-03-20  7:06     ` Lu Baolu
  0 siblings, 0 replies; 38+ messages in thread
From: Lu Baolu @ 2020-03-20  7:06 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Bjorn Andersson, linux-tegra, Julien Grall, Thierry Reding,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	linux-rockchip, Andy Gross, linux-arm-kernel, linux-s390,
	linux-arm-msm, intel-gfx, Jani Nikula, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, Matthias Brugger, Thomas Gleixner,
	virtualization, Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, Kukjin Kim, Daniel Vetter,
	Robin Murphy

On 2020/3/20 14:30, Tom Murphy wrote:
> Could we merge patch 1-3 from this series? it just cleans up weird
> code and merging these patches will cover some of the work needed to
> move the intel iommu driver to the dma-iommu api in the future.

Can you please take a look at this patch series?

https://lkml.org/lkml/2020/3/13/1162

It probably makes this series easier.

Best regards,
baolu

> 
> On Sat, 21 Dec 2019 at 07:04, Tom Murphy <murphyt7@tcd.ie> wrote:
>> Remove all IOVA handling code from the non-dma_ops path in the intel
>> iommu driver.
>>
>> There's no need for the non-dma_ops path to keep track of IOVAs. The
>> whole point of the non-dma_ops path is that it allows the IOVAs to be
>> handled separately. The IOVA handling code removed in this patch is
>> pointless.
>>
>> Signed-off-by: Tom Murphy <murphyt7@tcd.ie>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (8 preceding siblings ...)
  2019-12-23 10:37 ` [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Jani Nikula
@ 2020-05-29  0:00 ` Logan Gunthorpe
  2020-05-29 12:45   ` Christoph Hellwig
  2020-08-26 18:14 ` Robin Murphy
  10 siblings, 1 reply; 38+ messages in thread
From: Logan Gunthorpe @ 2020-05-29  0:00 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: kvm, David Airlie, dri-devel, Bjorn Andersson, Matthias Brugger,
	Julien Grall, Thierry Reding, Will Deacon, Jean-Philippe Brucker,
	linux-samsung-soc, Marc Zyngier, Krzysztof Kozlowski,
	Jonathan Hunter, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, Alex Williamson,
	linux-mediatek, Rodrigo Vivi, linux-tegra, Thomas Gleixner,
	virtualization, linux-arm-kernel, Robin Murphy, Cornelia Huck,
	linux-kernel, Kukjin Kim, David Woodhouse

Hi Tom,

On 2019-12-21 8:03 a.m., Tom Murphy wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.

Just wanted to note that I've rebased your series on recent kernels and
have done some testing on my old Sandybridge machine (without the DO NOT
MERGE patch) and have found no issues. I hope this can make progress
and get merged soon. If you like you can add:

Tested-By: Logan Gunthorpe <logang@deltatee.com>

> While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> 
> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> 
> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.  

I did some digging into this myself and while I don't have a full patch,
I think I traced it closer to the source of the problem.

Sadly, ignoring the number of nents returned by map_sg() is endemic to
dma-buf users, but AMD's GPU driver seems to do the same thing,
presumably without issues.

Digging a bit further, I found that the i915 has an "innovative" way of
iterating through SGLs, see [1]. I suspect if __sgt_iter is changed to
increment with sg_dma_len() and return NULL when there is no length
left, it may fix the issue.
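
To illustrate the idea (untested, a rough sketch rather than a complete
fix; sg_dma_next() is a hypothetical helper):

static inline struct scatterlist *sg_dma_next(struct scatterlist *sg)
{
        /* Treat a segment with no DMA length as the end of the mapped
         * list instead of relying on sg_is_last()/sg_next() alone. */
        sg = sg_next(sg);
        if (sg && !sg_dma_len(sg))
                sg = NULL;
        return sg;
}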

But, sorry, I don't really have the means or time to fix and test this
myself.

Thanks,

Logan

[1]
https://elixir.bootlin.com/linux/v5.7-rc7/source/drivers/gpu/drm/i915/i915_scatterlist.h#L76

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-05-29  0:00 ` Logan Gunthorpe
@ 2020-05-29 12:45   ` Christoph Hellwig
  2020-05-29 19:05     ` Logan Gunthorpe
  0 siblings, 1 reply; 38+ messages in thread
From: Christoph Hellwig @ 2020-05-29 12:45 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, Bjorn Andersson,
	linux-tegra, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck, linux-kernel,
	Tom Murphy, iommu, Kukjin Kim, Robin Murphy

On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
> > This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> > https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> > 
> > Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.  

Marek did a big audit of the map_sg API abuse and initially had
some i915 patches, but then gave up on them with this comment:

"The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
 it fully. The driver creatively uses sg_table->orig_nents to store the
 size of the allocate scatterlist and ignores the number of the entries
 returned by dma_map_sg function. In this patchset I only fixed the
 sg_table objects exported by dmabuf related functions. I hope that I
 didn't break anything there."

it would be really nice if the i915 maintainers could help with sorting
that API abuse out.
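
For anyone following along, the two counts live side by side in struct
sg_table; a sketch of the convention (generic 'dev' and 'dir' assumed):

        st->nents = dma_map_sg(dev, st->sgl, st->orig_nents, dir);
        if (!st->nents)
                return -ENOMEM;

        /*
         * st->orig_nents: entries allocated in the scatterlist (CPU side)
         * st->nents:      segments actually mapped for DMA (may be fewer)
         * DMA-side iteration must be bounded by st->nents.
         */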

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-05-29 12:45   ` Christoph Hellwig
@ 2020-05-29 19:05     ` Logan Gunthorpe
  2020-05-29 21:11       ` Marek Szyprowski
  0 siblings, 1 reply; 38+ messages in thread
From: Logan Gunthorpe @ 2020-05-29 19:05 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, Bjorn Andersson,
	linux-tegra, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck, linux-kernel,
	Tom Murphy, iommu, Kukjin Kim, Robin Murphy



On 2020-05-29 6:45 a.m., Christoph Hellwig wrote:
> On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
>>> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>
>>> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.  
> 
> Mark did a big audit into the map_sg API abuse and initially had
> some i915 patches, but then gave up on them with this comment:
> 
> "The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
>  it fully. The driver creatively uses sg_table->orig_nents to store the
>  size of the allocate scatterlist and ignores the number of the entries
>  returned by dma_map_sg function. In this patchset I only fixed the
>  sg_table objects exported by dmabuf related functions. I hope that I
>  didn't break anything there."
> 
> it would be really nice if the i915 maintainers could help with sorting
> that API abuse out.
> 

I agree completely that the API abuse should be sorted out, but I think
that's much larger than just the i915 driver. Pretty much every dma-buf
map_dma_buf implementation I looked at ignores the nents returned by
dma_map_sg_attrs(). This sucks, but I don't think it's the bug Tom ran into. See:

amdgpu_dma_buf_map
armada_gem_prime_map_dma_buf
drm_gem_map_dma_buf
i915_gem_map_dma_buf
tegra_gem_prime_map_dma_buf

So this should probably be addressed by the whole GPU community.

However, as Robin pointed out, there are other ugly tricks like stopping
iterating through the SGL when sg_dma_len() is zero. For example, the
AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
this trick and thus likely isn't buggy (otherwise, I'd expect someone to
have complained by now, seeing as AMD has already switched to IOMMU-DMA).

As I tried to point out in my previous email, i915 does not do this
trick. In fact, it completely ignores sg_dma_len() and is thus
completely broken. See i915_scatterlist.h and the __sgt_iter() function.
So it doesn't sound to me like Marek's fix would address the issue at
all. Per my previous email, I'd guess that it can be fixed simply by
adjusting the __sgt_iter() function to do something more sensible.
(Better yet, if possible, ditch __sgt_iter() and use the common DRM
function that AMD uses).

This would at least allow us to make progress with Tom's IOMMU-DMA patch
set and once that gets in, it will be harder for other drivers to make
the same mistake.

Logan



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-05-29 19:05     ` Logan Gunthorpe
@ 2020-05-29 21:11       ` Marek Szyprowski
  2020-05-29 21:21         ` Logan Gunthorpe
  0 siblings, 1 reply; 38+ messages in thread
From: Marek Szyprowski @ 2020-05-29 21:11 UTC (permalink / raw)
  To: Logan Gunthorpe, Christoph Hellwig
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, Bjorn Andersson,
	linux-tegra, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck, linux-kernel,
	Tom Murphy, iommu, Kukjin Kim, Robin Murphy

Hi Logan,

On 29.05.2020 21:05, Logan Gunthorpe wrote:
> On 2020-05-29 6:45 a.m., Christoph Hellwig wrote:
>> On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
>>>> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
>>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>>
>>>> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
>> Mark did a big audit into the map_sg API abuse and initially had
>> some i915 patches, but then gave up on them with this comment:
>>
>> "The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
>>   it fully. The driver creatively uses sg_table->orig_nents to store the
>>   size of the allocate scatterlist and ignores the number of the entries
>>   returned by dma_map_sg function. In this patchset I only fixed the
>>   sg_table objects exported by dmabuf related functions. I hope that I
>>   didn't break anything there."
>>
>> it would be really nice if the i915 maintainers could help with sorting
>> that API abuse out.
>>
> I agree completely that the API abuse should be sorted out, but I think
> that's much larger than just the i915 driver. Pretty much every dma-buf
> map_dma_buf implementation I looked at ignores the returned nents of
> sg_attrs. This sucks, but I don't think it's the bug Tom ran into. See:
>
> amdgpu_dma_buf_map
> armada_gem_prime_map_dma_buf
> drm_gem_map_dma_buf
> i915_gem_map_dma_buf
> tegra_gem_prime_map_dma_buf
>
> So this should probably be addressed by the whole GPU community.

Patches are pending:
https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/

> However, as Robin pointed out, there are other ugly tricks like stopping
> iterating through the SGL when sg_dma_len() is zero. For example, the
> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> have complained by now seeing AMD has already switched to IOMMU-DMA.

I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
documented somewhere.

> As I tried to point out in my previous email, i915 does not do this
> trick. In fact, it completely ignores sg_dma_len() and is thus
> completely broken. See i915_scatterlist.h and the __sgt_iter() function.
> So it doesn't sound to me like Mark's fix would address the issue at
> all. Per my previous email, I'd guess that it can be fixed simply by
> adjusting the __sgt_iter() function to do something more sensible.
> (Better yet, if possible, ditch __sgt_iter() and use the common DRM
> function that AMD uses).
>
> This would at least allow us to make progress with Tom's IOMMU-DMA patch
> set and once that gets in, it will be harder for other drivers to make
> the same mistake.

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-05-29 21:11       ` Marek Szyprowski
@ 2020-05-29 21:21         ` Logan Gunthorpe
  2020-08-24  0:04           ` Tom Murphy
  0 siblings, 1 reply; 38+ messages in thread
From: Logan Gunthorpe @ 2020-05-29 21:21 UTC (permalink / raw)
  To: Marek Szyprowski, Christoph Hellwig
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, Bjorn Andersson,
	linux-tegra, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck, linux-kernel,
	Tom Murphy, iommu, Kukjin Kim, Robin Murphy



On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> Patches are pending:
> https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/

Cool, nice! Though, I still don't think that fixes the issue in
i915_scatterlist.h given it still ignores sg_dma_len() and strictly
relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
is the bug that got in Tom's way.

>> However, as Robin pointed out, there are other ugly tricks like stopping
>> iterating through the SGL when sg_dma_len() is zero. For example, the
>> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
>> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
>> have complained by now seeing AMD has already switched to IOMMU-DMA.
> 
> I'm not sure that this is a trick. Stopping at zero sg_dma_len() was 
> somewhere documented.

Well whatever you want to call it, it is ugly to have some drivers doing
one thing with the returned value and others assuming there's an extra
zero at the end. It just causes confusion for people reading/copying the
code. It would be better if they are all consistent. However, I concede
stopping at zero should not be broken, presently.

Logan

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-05-29 21:21         ` Logan Gunthorpe
@ 2020-08-24  0:04           ` Tom Murphy
  2020-08-26 18:26             ` Alex Deucher
  2020-08-27 21:36             ` Logan Gunthorpe
  0 siblings, 2 replies; 38+ messages in thread
From: Tom Murphy @ 2020-08-24  0:04 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, linux-tegra,
	Julien Grall, Thierry Reding, Will Deacon, Jean-Philippe Brucker,
	linux-samsung-soc, Marc Zyngier, Krzysztof Kozlowski,
	Jonathan Hunter, Christoph Hellwig, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy

Hi Logan/All,

I have added a check for sg_dma_len == 0:
"""
 } __sgt_iter(struct scatterlist *sgl, bool dma) {
        struct sgt_iter s = { .sgp = sgl };

+       if (sgl && sg_dma_len(sgl) == 0)
+           s.sgp = NULL;

        if (s.sgp) {
            .....
"""
at location [0], but it doesn't fix the problem.

You're right though, this change does need to be made: this code
doesn't correctly handle entries with sg_dma_len(sg) == 0.
So my guess is that we have more bugs in other parts of the i915
driver (or there is a problem with my "sg_dma_len == 0" fix above).
I have been trying to spot where else the code might be buggy but I
haven't had any luck so far.

I'm running a microconference (at LPC 2020) this Wednesday [1] on this,
if you're interested in attending.
I'm hoping I can chat about it with a few people and find out how we can
reproduce and fix these issues. I don't have any more time I can give
to this unfortunately, and it would be a shame for the work to go to
waste.

[0] https://github.com/torvalds/linux/blob/d012a7190fc1fd72ed48911e77ca97ba4521bccd/drivers/gpu/drm/i915/i915_scatterlist.h#L28
[1] https://linuxplumbersconf.org/event/7/contributions/846/

On Fri, 29 May 2020 at 22:21, Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> > Patches are pending:
> > https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/
>
> Cool, nice! Though, I still don't think that fixes the issue in
> i915_scatterlist.h given it still ignores sg_dma_len() and strictly
> relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
> is the bug that got in Tom's way.
>
> >> However, as Robin pointed out, there are other ugly tricks like stopping
> >> iterating through the SGL when sg_dma_len() is zero. For example, the
> >> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> >> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> >> have complained by now seeing AMD has already switched to IOMMU-DMA.
> >
> > I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
> > somewhere documented.
>
> Well whatever you want to call it, it is ugly to have some drivers doing
> one thing with the returned value and others assuming there's an extra
> zero at the end. It just causes confusion for people reading/copying the
> code. It would be better if they are all consistent. However, I concede
> stopping at zero should not be broken, presently.
>
> Logan

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
                   ` (9 preceding siblings ...)
  2020-05-29  0:00 ` Logan Gunthorpe
@ 2020-08-26 18:14 ` Robin Murphy
  2020-08-26 18:23   ` Tom Murphy
  10 siblings, 1 reply; 38+ messages in thread
From: Robin Murphy @ 2020-08-26 18:14 UTC (permalink / raw)
  To: Tom Murphy, iommu
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Matthias Brugger, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	Gerald Schaefer, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	linux-tegra, Thomas Gleixner, virtualization, linux-arm-kernel,
	Cornelia Huck, linux-kernel, Kukjin Kim, Daniel Vetter,
	David Woodhouse

Hi Tom,

On 2019-12-21 15:03, Tom Murphy wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
> 
> While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> 
> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> 
> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
> 
> Could someone from the intel team look at this?
> 
> 
> I have been testing on a lenovo x1 carbon 5th generation. Let me know if there’s any more information you need.
> 
> To allow my patch set to be tested I have added a patch (patch 8/8) in this series to disable combining sg segments in the dma-iommu api which fixes the bug but it doesn't fix the actual problem.
> 
> As part of this patch series I copied the intel bounce buffer code to the dma-iommu path. The addition of the bounce buffer code took me by surprise. I did most of my development on this patch series before the bounce buffer code was added and my reimplementation in the dma-iommu path is very rushed and not properly tested but I’m running out of time to work on this patch set.
> 
> On top of that I also didn’t port over the intel tracing code from this commit:
> https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
> So all the work in that commit is now wasted. The code will need to be removed and reimplemented in the dma-iommu path. I would like to take the time to do this but I really don’t have the time at the moment and I want to get these changes out before the iommu code changes any more.

Further to what we just discussed at LPC, I've realised that tracepoints 
are actually something I could do with *right now* for debugging my Arm 
DMA ops series, so if I'm going to hack something up anyway I may as 
well take responsibility for polishing it into a proper patch as well :)

Robin.

> 
> Tom Murphy (8):
>    iommu/vt-d: clean up 32bit si_domain assignment
>    iommu/vt-d: Use default dma_direct_* mapping functions for direct
>      mapped devices
>    iommu/vt-d: Remove IOVA handling code from non-dma_ops path
>    iommu: Handle freelists when using deferred flushing in iommu drivers
>    iommu: Add iommu_dma_free_cpu_cached_iovas function
>    iommu: allow the dma-iommu api to use bounce buffers
>    iommu/vt-d: Convert intel iommu driver to the iommu ops
>    DO NOT MERGE: iommu: disable list appending in dma-iommu
> 
>   drivers/iommu/Kconfig           |   1 +
>   drivers/iommu/amd_iommu.c       |  14 +-
>   drivers/iommu/arm-smmu-v3.c     |   3 +-
>   drivers/iommu/arm-smmu.c        |   3 +-
>   drivers/iommu/dma-iommu.c       | 183 +++++--
>   drivers/iommu/exynos-iommu.c    |   3 +-
>   drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
>   drivers/iommu/iommu.c           |  39 +-
>   drivers/iommu/ipmmu-vmsa.c      |   3 +-
>   drivers/iommu/msm_iommu.c       |   3 +-
>   drivers/iommu/mtk_iommu.c       |   3 +-
>   drivers/iommu/mtk_iommu_v1.c    |   3 +-
>   drivers/iommu/omap-iommu.c      |   3 +-
>   drivers/iommu/qcom_iommu.c      |   3 +-
>   drivers/iommu/rockchip-iommu.c  |   3 +-
>   drivers/iommu/s390-iommu.c      |   3 +-
>   drivers/iommu/tegra-gart.c      |   3 +-
>   drivers/iommu/tegra-smmu.c      |   3 +-
>   drivers/iommu/virtio-iommu.c    |   3 +-
>   drivers/vfio/vfio_iommu_type1.c |   2 +-
>   include/linux/dma-iommu.h       |   3 +
>   include/linux/intel-iommu.h     |   1 -
>   include/linux/iommu.h           |  32 +-
>   23 files changed, 345 insertions(+), 908 deletions(-)
> 

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-26 18:14 ` Robin Murphy
@ 2020-08-26 18:23   ` Tom Murphy
  0 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2020-08-26 18:23 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Heiko Stuebner, kvm, David Airlie, Joonas Lahtinen, dri-devel,
	Matthias Brugger, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, linux-rockchip, Andy Gross,
	Gerald Schaefer, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	linux-tegra, Thomas Gleixner, virtualization, linux-arm-kernel,
	Cornelia Huck, linux-kernel, iommu, Kukjin Kim, Daniel Vetter,
	David Woodhouse



That would be great!

On Wed., Aug. 26, 2020, 2:14 p.m. Robin Murphy, <robin.murphy@arm.com>
wrote:

> Hi Tom,
>
> On 2019-12-21 15:03, Tom Murphy wrote:
> > This patchset converts the intel iommu driver to the dma-iommu api.
> >
> > While converting the driver I exposed a bug in the intel i915 driver
> which causes a huge amount of artifacts on the screen of my laptop. You can
> see a picture of it here:
> >
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> >
> > This issue is most likely in the i915 driver and is most likely caused
> by the driver not respecting the return value of the dma_map_ops::map_sg
> function. You can see the driver ignoring the return value here:
> >
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> >
> > Previously this didn’t cause issues because the intel map_sg always
> returned the same number of elements as the input scatter gather list but
> with the change to this dma-iommu api this is no longer the case. I wasn’t
> able to track the bug down to a specific line of code unfortunately.
> >
> > Could someone from the intel team look at this?
> >
> >
> > I have been testing on a lenovo x1 carbon 5th generation. Let me know if
> there’s any more information you need.
> >
> > To allow my patch set to be tested I have added a patch (patch 8/8) in
> this series to disable combining sg segments in the dma-iommu api which
> fixes the bug but it doesn't fix the actual problem.
> >
> > As part of this patch series I copied the intel bounce buffer code to
> the dma-iommu path. The addition of the bounce buffer code took me by
> surprise. I did most of my development on this patch series before the
> bounce buffer code was added and my reimplementation in the dma-iommu path
> is very rushed and not properly tested but I’m running out of time to work
> on this patch set.
> >
> > On top of that I also didn’t port over the intel tracing code from this
> commit:
> >
> https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
> > So all the work in that commit is now wasted. The code will need to be
> removed and reimplemented in the dma-iommu path. I would like to take the
> time to do this but I really don’t have the time at the moment and I want
> to get these changes out before the iommu code changes any more.
>
> Further to what we just discussed at LPC, I've realised that tracepoints
> are actually something I could do with *right now* for debugging my Arm
> DMA ops series, so if I'm going to hack something up anyway I may as
> well take responsibility for polishing it into a proper patch as well :)
>
> Robin.
>
> >
> > Tom Murphy (8):
> >    iommu/vt-d: clean up 32bit si_domain assignment
> >    iommu/vt-d: Use default dma_direct_* mapping functions for direct
> >      mapped devices
> >    iommu/vt-d: Remove IOVA handling code from non-dma_ops path
> >    iommu: Handle freelists when using deferred flushing in iommu drivers
> >    iommu: Add iommu_dma_free_cpu_cached_iovas function
> >    iommu: allow the dma-iommu api to use bounce buffers
> >    iommu/vt-d: Convert intel iommu driver to the iommu ops
> >    DO NOT MERGE: iommu: disable list appending in dma-iommu
> >
> >   drivers/iommu/Kconfig           |   1 +
> >   drivers/iommu/amd_iommu.c       |  14 +-
> >   drivers/iommu/arm-smmu-v3.c     |   3 +-
> >   drivers/iommu/arm-smmu.c        |   3 +-
> >   drivers/iommu/dma-iommu.c       | 183 +++++--
> >   drivers/iommu/exynos-iommu.c    |   3 +-
> >   drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
> >   drivers/iommu/iommu.c           |  39 +-
> >   drivers/iommu/ipmmu-vmsa.c      |   3 +-
> >   drivers/iommu/msm_iommu.c       |   3 +-
> >   drivers/iommu/mtk_iommu.c       |   3 +-
> >   drivers/iommu/mtk_iommu_v1.c    |   3 +-
> >   drivers/iommu/omap-iommu.c      |   3 +-
> >   drivers/iommu/qcom_iommu.c      |   3 +-
> >   drivers/iommu/rockchip-iommu.c  |   3 +-
> >   drivers/iommu/s390-iommu.c      |   3 +-
> >   drivers/iommu/tegra-gart.c      |   3 +-
> >   drivers/iommu/tegra-smmu.c      |   3 +-
> >   drivers/iommu/virtio-iommu.c    |   3 +-
> >   drivers/vfio/vfio_iommu_type1.c |   2 +-
> >   include/linux/dma-iommu.h       |   3 +
> >   include/linux/intel-iommu.h     |   1 -
> >   include/linux/iommu.h           |  32 +-
> >   23 files changed, 345 insertions(+), 908 deletions(-)
> >
>



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-24  0:04           ` Tom Murphy
@ 2020-08-26 18:26             ` Alex Deucher
  2020-08-27 21:36             ` Logan Gunthorpe
  1 sibling, 0 replies; 38+ messages in thread
From: Alex Deucher @ 2020-08-26 18:26 UTC (permalink / raw)
  To: Tom Murphy
  Cc: kvm, David Airlie, Mailing list - DRI developers,
	Matthias Brugger, Julien Grall, Thierry Reding, Will Deacon,
	Jean-Philippe Brucker, linux-samsung-soc, Marc Zyngier,
	Krzysztof Kozlowski, Jonathan Hunter, Christoph Hellwig,
	linux-rockchip, Andy Gross, Gerald Schaefer, linux-s390,
	linux-arm-msm, Intel Graphics Development, Robin Murphy,
	Alex Williamson, linux-mediatek, Rodrigo Vivi, linux-tegra,
	Thomas Gleixner, virtualization, linux-arm-kernel,
	David Woodhouse, Cornelia Huck, Linux Kernel Mailing List,
	IOMMU DRIVERS <iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, Kukjin Kim, Logan Gunthorpe

On Mon, Aug 24, 2020 at 2:56 AM Tom Murphy <murphyt7@tcd.ie> wrote:
>
> Hi Logan/All,
>
> I have added a check for the sg_dma_len == 0 :
> """
>  } __sgt_iter(struct scatterlist *sgl, bool dma) {
>         struct sgt_iter s = { .sgp = sgl };
>
> +       if (sgl && sg_dma_len(sgl) == 0)
> +           s.sgp = NULL;
>
>         if (s.sgp) {
>             .....
> """
> at location [1].
> but it doens't fix the problem.
>
> You're right though, this change does need to be made, this code
> doesn't handle pages of sg_dma_len(sg) == 0 correctly
> So my guess is that we have more bugs in other parts of the i915
> driver (or there is a problem with my "sg_dma_len == 0" fix above).
> I have been trying to spot where else the code might be buggy but I
> haven't had any luck so far.
>
> I'm doing a microconfernce (at LPC 2020) this wednesdays [1] on this
> if you're interested in attending.
> I'm hoping I can chat about it with a few people and find how can
> reproduce and fix this issues. I don't have any more time I can give
> to this unfortunately and it would be a shame for the work to go to
> waste.
>
> [0] https://github.com/torvalds/linux/blob/d012a7190fc1fd72ed48911e77ca97ba4521bccd/drivers/gpu/drm/i915/i915_scatterlist.h#L28
> [1] https://linuxplumbersconf.org/event/7/contributions/846/
>
> On Fri, 29 May 2020 at 22:21, Logan Gunthorpe <logang@deltatee.com> wrote:
> >
> >
> >
> > On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> > > Patches are pending:
> > > https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/
> >
> > Cool, nice! Though, I still don't think that fixes the issue in
> > i915_scatterlist.h given it still ignores sg_dma_len() and strictly
> > relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
> > is the bug that got in Tom's way.
> >
> > >> However, as Robin pointed out, there are other ugly tricks like stopping
> > >> iterating through the SGL when sg_dma_len() is zero. For example, the
> > >> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> > >> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> > >> have complained by now seeing AMD has already switched to IOMMU-DMA.

We ran into the same issue with amdgpu and radeon when the AMD IOMMU
driver was converted and had to fix it as well.  The relevant fixes
were:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=42e67b479eab6d26459b80b4867298232b0435e7
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0199172f933342d8b1011aae2054a695c25726f4
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=47f7826c520ecd92ffbffe59ecaa2fe61e42ec70
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c0f83d164fb8f3a2b7bc379a6c1e27d1123a9eab
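
The common shape of these fixes is to walk only the segments that
dma_map_sg() actually produced, via sg_dma_address()/sg_dma_len(), rather
than assuming the mapped list has the same layout as the CPU-side one. A
minimal sketch of the pattern (the 'dev'/'sgt' names here are placeholders,
not taken from the commits above):

    /* Map the table, then iterate only the 'count' segments that
     * dma_map_sg() returned; 'count' can be smaller than orig_nents
     * when the IOMMU merges contiguous entries. */
    struct scatterlist *sg;
    int count, i;

    count = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
    if (count <= 0)
            return -ENOMEM;

    for_each_sg(sgt->sgl, sg, count, i) {
            dma_addr_t addr = sg_dma_address(sg);
            unsigned int len = sg_dma_len(sg);

            /* program addr/len into the device here */
    }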

Alex

> > >
> > > I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
> > > documented somewhere.
> >
> > Well whatever you want to call it, it is ugly to have some drivers doing
> > one thing with the returned value and others assuming there's an extra
> > zero at the end. It just causes confusion for people reading/copying the
> > code. It would be better if they were all consistent. However, I concede
> > stopping at zero should not be broken, presently.
> >
> > Logan

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-24  0:04           ` Tom Murphy
  2020-08-26 18:26             ` Alex Deucher
@ 2020-08-27 21:36             ` Logan Gunthorpe
  2020-08-27 23:34               ` Tom Murphy
  2020-09-08 15:28               ` [Intel-gfx] " Tvrtko Ursulin
  1 sibling, 2 replies; 38+ messages in thread
From: Logan Gunthorpe @ 2020-08-27 21:36 UTC (permalink / raw)
  To: Tom Murphy
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, linux-tegra,
	Julien Grall, Thierry Reding, Will Deacon, Jean-Philippe Brucker,
	linux-samsung-soc, Marc Zyngier, Krzysztof Kozlowski,
	Jonathan Hunter, Christoph Hellwig, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy



On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> I have added a check for the sg_dma_len == 0 :
> """
>  } __sgt_iter(struct scatterlist *sgl, bool dma) {
>         struct sgt_iter s = { .sgp = sgl };
> 
> +       if (sgl && sg_dma_len(sgl) == 0)
> +           s.sgp = NULL;
> 
>         if (s.sgp) {
>             .....
> """
> at location [0].
> but it doesn't fix the problem.

Based on my read of the code, it looks like we also need to change usage
of sgl->length... Something like the rough patch below, maybe?

Also, Tom, do you have an updated version of the patchset to convert the
Intel IOMMU to dma-iommu available? The last one I've found doesn't
apply cleanly (I'm assuming parts of it have been merged in slightly
modified forms).

Thanks,

Logan

--

diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
b/drivers/gpu/drm/i915/i915_scatterlist.h
index b7b59328cb76..9367ac801f0c 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.h
+++ b/drivers/gpu/drm/i915/i915_scatterlist.h
@@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
 } __sgt_iter(struct scatterlist *sgl, bool dma) {
        struct sgt_iter s = { .sgp = sgl };

+       if (sgl && !sg_dma_len(s.sgp))
+               s.sgp = NULL;
+
        if (s.sgp) {
                s.max = s.curr = s.sgp->offset;
-               s.max += s.sgp->length;
-               if (dma)
+
+               if (dma) {
+                       s.max += sg_dma_len(s.sgp);
                        s.dma = sg_dma_address(s.sgp);
-               else
+               } else {
+                       s.max += s.sgp->length;
                        s.pfn = page_to_pfn(sg_page(s.sgp));
+               }
        }

        return s;
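
To make the failure mode concrete, this is roughly what the two views of
one list look like after an IOMMU merges segments (illustrative numbers,
not from a real trace):

    /* Four 4K pages coalesced into a single contiguous IOVA range:
     *
     *   CPU view,  sg->length:      4096  4096  4096  4096   (4 entries)
     *   DMA view,  sg_dma_len(sg): 16384     0     0     0   (1 mapped segment)
     *
     * A walker that advances by sg->length sees four segments and picks
     * up stale/zero DMA addresses from the tail entries; one that uses
     * sg_dma_len() and stops at the first zero sees the single 16K
     * segment that dma_map_sg() actually returned. */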

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-27 21:36             ` Logan Gunthorpe
@ 2020-08-27 23:34               ` Tom Murphy
  2020-09-03 20:26                 ` Tom Murphy
  2020-09-08 15:28               ` [Intel-gfx] " Tvrtko Ursulin
  1 sibling, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2020-08-27 23:34 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, linux-tegra,
	Julien Grall, Thierry Reding, Will Deacon, Jean-Philippe Brucker,
	linux-samsung-soc, Marc Zyngier, Krzysztof Kozlowski,
	Jonathan Hunter, Christoph Hellwig, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy

On Thu, 27 Aug 2020 at 22:36, Logan Gunthorpe <logang@deltatee.com> wrote:
>
>
>
> On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> > I have added a check for the sg_dma_len == 0 :
> > """
> >  } __sgt_iter(struct scatterlist *sgl, bool dma) {
> >         struct sgt_iter s = { .sgp = sgl };
> >
> > +       if (sgl && sg_dma_len(sgl) == 0)
> > +           s.sgp = NULL;
> >
> >         if (s.sgp) {
> >             .....
> > """
> > at location [0].
> > but it doesn't fix the problem.
>
> Based on my read of the code, it looks like we also need to change usage
> of sgl->length... Something like the rough patch below, maybe?
>
> Also, Tom, do you have an updated version of the patchset to convert the
> Intel IOMMU to dma-iommu available? The last one I've found doesn't
> apply cleanly (I'm assuming parts of it have been merged in slightly
> modified forms).
>

I'll try and post one in the next 24 hours.

> Thanks,
>
> Logan
>
> --
>
> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> b/drivers/gpu/drm/i915/i915_scatterlist.h
> index b7b59328cb76..9367ac801f0c 100644
> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>  } __sgt_iter(struct scatterlist *sgl, bool dma) {
>         struct sgt_iter s = { .sgp = sgl };
>
> +       if (sgl && !sg_dma_len(s.sgp))
> +               s.sgp = NULL;
> +
>         if (s.sgp) {
>                 s.max = s.curr = s.sgp->offset;
> -               s.max += s.sgp->length;
> -               if (dma)
> +
> +               if (dma) {
> +                       s.max += sg_dma_len(s.sgp);
>                         s.dma = sg_dma_address(s.sgp);
> -               else
> +               } else {
> +                       s.max += s.sgp->length;
>                         s.pfn = page_to_pfn(sg_page(s.sgp));
> +               }
>         }
>
>         return s;

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-27 23:34               ` Tom Murphy
@ 2020-09-03 20:26                 ` Tom Murphy
  0 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2020-09-03 20:26 UTC (permalink / raw)
  To: Logan Gunthorpe
  Cc: kvm, David Airlie, Joonas Lahtinen, dri-devel, linux-tegra,
	Julien Grall, Thierry Reding, Will Deacon, Jean-Philippe Brucker,
	linux-samsung-soc, Marc Zyngier, Krzysztof Kozlowski,
	Jonathan Hunter, Christoph Hellwig, linux-rockchip, Andy Gross,
	linux-arm-kernel, linux-s390, linux-arm-msm, intel-gfx,
	Jani Nikula, Alex Williamson, linux-mediatek, Rodrigo Vivi,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy

On Fri, 28 Aug 2020 at 00:34, Tom Murphy <murphyt7@tcd.ie> wrote:
>
> On Thu, 27 Aug 2020 at 22:36, Logan Gunthorpe <logang@deltatee.com> wrote:
> >
> >
> >
> > On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> > > I have added a check for the sg_dma_len == 0 :
> > > """
> > >  } __sgt_iter(struct scatterlist *sgl, bool dma) {
> > >         struct sgt_iter s = { .sgp = sgl };
> > >
> > > +       if (sgl && sg_dma_len(sgl) == 0)
> > > +           s.sgp = NULL;
> > >
> > >         if (s.sgp) {
> > >             .....
> > > """
> > > at location [0].
> > > but it doesn't fix the problem.
> >
> > Based on my read of the code, it looks like we also need to change usage
> > of sgl->length... Something like the rough patch below, maybe?
> >
> > Also, Tom, do you have an updated version of the patchset to convert the
> > Intel IOMMU to dma-iommu available? The last one I've found doesn't
> > apply cleanly (I'm assuming parts of it have been merged in slightly
> > modified forms).
> >
>
> I'll try and post one in the next 24 hours.

I have just posted it now. The subject of the cover letter is:
"[PATCH V2 0/5] Convert the intel iommu driver to the dma-iommu api"

>
> > Thanks,
> >
> > Logan
> >
> > --
> >
> > diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> > b/drivers/gpu/drm/i915/i915_scatterlist.h
> > index b7b59328cb76..9367ac801f0c 100644
> > --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> > +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> > @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> >  } __sgt_iter(struct scatterlist *sgl, bool dma) {
> >         struct sgt_iter s = { .sgp = sgl };
> >
> > +       if (sgl && !sg_dma_len(s.sgp))
> > +               s.sgp = NULL;
> > +
> >         if (s.sgp) {
> >                 s.max = s.curr = s.sgp->offset;
> > -               s.max += s.sgp->length;
> > -               if (dma)
> > +
> > +               if (dma) {
> > +                       s.max += sg_dma_len(s.sgp);
> >                         s.dma = sg_dma_address(s.sgp);
> > -               else
> > +               } else {
> > +                       s.max += s.sgp->length;
> >                         s.pfn = page_to_pfn(sg_page(s.sgp));
> > +               }
> >         }
> >
> >         return s;

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-08-27 21:36             ` Logan Gunthorpe
  2020-08-27 23:34               ` Tom Murphy
@ 2020-09-08 15:28               ` Tvrtko Ursulin
  2020-09-08 15:44                 ` Logan Gunthorpe
  1 sibling, 1 reply; 38+ messages in thread
From: Tvrtko Ursulin @ 2020-09-08 15:28 UTC (permalink / raw)
  To: Logan Gunthorpe, Tom Murphy
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	linux-arm-kernel, Robin Murphy, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, David Woodhouse


Hi,

On 27/08/2020 22:36, Logan Gunthorpe wrote:
> On 2020-08-23 6:04 p.m., Tom Murphy wrote:
>> I have added a check for the sg_dma_len == 0 :
>> """
>>   } __sgt_iter(struct scatterlist *sgl, bool dma) {
>>          struct sgt_iter s = { .sgp = sgl };
>>
>> +       if (sgl && sg_dma_len(sgl) == 0)
>> +           s.sgp = NULL;
>>
>>          if (s.sgp) {
>>              .....
>> """
>> at location [0].
>> but it doesn't fix the problem.
> 
> Based on my read of the code, it looks like we also need to change usage
> of sgl->length... Something like the rough patch below, maybe?

This thread was brought to my attention and I initially missed this 
reply. Essentially I came to the same conclusion about the need to use 
sg_dma_len. One small correction below:

> Also, Tom, do you have an updated version of the patchset to convert the
> Intel IOMMU to dma-iommu available? The last one I've found doesn't
> apply cleanly (I'm assuming parts of it have been merged in slightly
> modified forms).
> 
> Thanks,
> 
> Logan
> 
> --
> 
> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> b/drivers/gpu/drm/i915/i915_scatterlist.h
> index b7b59328cb76..9367ac801f0c 100644
> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>   } __sgt_iter(struct scatterlist *sgl, bool dma) {
>          struct sgt_iter s = { .sgp = sgl };
> 
> +       if (sgl && !sg_dma_len(s.sgp))

I'd extend the condition to be, just to be safe (a CPU-side walk can 
legitimately see sg_dma_len() == 0 on a list that simply hasn't been 
DMA-mapped yet):

	if (dma && sgl && !sg_dma_len(s.sgp))

> +               s.sgp = NULL;
> +
>          if (s.sgp) {
>                  s.max = s.curr = s.sgp->offset;
> -               s.max += s.sgp->length;
> -               if (dma)
> +
> +               if (dma) {
> +                       s.max += sg_dma_len(s.sgp);
>                          s.dma = sg_dma_address(s.sgp);
> -               else
> +               } else {
> +                       s.max += s.sgp->length;
>                          s.pfn = page_to_pfn(sg_page(s.sgp));
> +               }

Otherwise has this been tested or alternatively how to test it? (How to 
repro the issue.)

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-08 15:28               ` [Intel-gfx] " Tvrtko Ursulin
@ 2020-09-08 15:44                 ` Logan Gunthorpe
  2020-09-08 15:56                   ` Tvrtko Ursulin
  0 siblings, 1 reply; 38+ messages in thread
From: Logan Gunthorpe @ 2020-09-08 15:44 UTC (permalink / raw)
  To: Tvrtko Ursulin, Tom Murphy
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	linux-arm-kernel, Robin Murphy, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, David Woodhouse



On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
>>
>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
>> b/drivers/gpu/drm/i915/i915_scatterlist.h
>> index b7b59328cb76..9367ac801f0c 100644
>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>>   } __sgt_iter(struct scatterlist *sgl, bool dma) {
>>          struct sgt_iter s = { .sgp = sgl };
>>
>> +       if (sgl && !sg_dma_len(s.sgp))
> 
> I'd extend the condition to be, just to be safe:
>     if (dma && sgl && !sg_dma_len(s.sgp))
>

Right, good catch, that's definitely necessary.
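
(So the full check at the top of the helper would read:

	if (dma && sgl && !sg_dma_len(s.sgp))
		s.sgp = NULL;

with the rest of the hunk unchanged.)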

>> +               s.sgp = NULL;
>> +
>>          if (s.sgp) {
>>                  s.max = s.curr = s.sgp->offset;
>> -               s.max += s.sgp->length;
>> -               if (dma)
>> +
>> +               if (dma) {
>> +                       s.max += sg_dma_len(s.sgp);
>>                          s.dma = sg_dma_address(s.sgp);
>> -               else
>> +               } else {
>> +                       s.max += s.sgp->length;
>>                          s.pfn = page_to_pfn(sg_page(s.sgp));
>> +               }
> 
> Otherwise has this been tested or alternatively how to test it? (How to
> repro the issue.)

It has not been tested. To test it, you need Tom's patch set without the
last "DO NOT MERGE" patch:

https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
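
In practice that means something like this (my assumptions: the series
still applies to your tree, and the machine has an Intel IOMMU that is
not already enabled by default):

    # Apply Tom's series, dropping the final "DO NOT MERGE" patch
    # (the one that disables segment merging in dma-iommu), then boot
    # the resulting kernel with DMA remapping enabled:
    #     intel_iommu=on
    # An ordinary graphical session on i915 should then exercise the
    # merged-segment path that produces the corruption.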

Thanks,

Logan

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-08 15:44                 ` Logan Gunthorpe
@ 2020-09-08 15:56                   ` Tvrtko Ursulin
  2020-09-08 22:43                     ` Tom Murphy
  0 siblings, 1 reply; 38+ messages in thread
From: Tvrtko Ursulin @ 2020-09-08 15:56 UTC (permalink / raw)
  To: Logan Gunthorpe, Tom Murphy
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	linux-arm-kernel, Robin Murphy, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, David Woodhouse


On 08/09/2020 16:44, Logan Gunthorpe wrote:
> On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
>>> b/drivers/gpu/drm/i915/i915_scatterlist.h
>>> index b7b59328cb76..9367ac801f0c 100644
>>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
>>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
>>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>>>    } __sgt_iter(struct scatterlist *sgl, bool dma) {
>>>           struct sgt_iter s = { .sgp = sgl };
>>>
>>> +       if (sgl && !sg_dma_len(s.sgp))
>>
>> I'd extend the condition to be, just to be safe:
>>      if (dma && sgl && !sg_dma_len(s.sgp))
>>
> 
> Right, good catch, that's definitely necessary.
> 
>>> +               s.sgp = NULL;
>>> +
>>>           if (s.sgp) {
>>>                   s.max = s.curr = s.sgp->offset;
>>> -               s.max += s.sgp->length;
>>> -               if (dma)
>>> +
>>> +               if (dma) {
>>> +                       s.max += sg_dma_len(s.sgp);
>>>                           s.dma = sg_dma_address(s.sgp);
>>> -               else
>>> +               } else {
>>> +                       s.max += s.sgp->length;
>>>                           s.pfn = page_to_pfn(sg_page(s.sgp));
>>> +               }
>>
>> Otherwise has this been tested or alternatively how to test it? (How to
>> repro the issue.)
> 
> It has not been tested. To test it, you need Tom's patch set without the
> last "DO NOT MERGE" patch:
> 
> https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/

Tom, do you have a branch somewhere I could pull from? (Just being lazy 
about downloading a bunch of messages from the archives.)

What GPU is in your Lenovo x1 carbon 5th generation and what 
graphical/desktop setup I need to repro?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-08 15:56                   ` Tvrtko Ursulin
@ 2020-09-08 22:43                     ` Tom Murphy
  2020-09-09  9:16                       ` Tvrtko Ursulin
  0 siblings, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2020-09-08 22:43 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: kvm, David Airlie, dri-devel, Matthias Brugger, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, Robin Murphy,
	linux-mediatek, linux-tegra, Thomas Gleixner, virtualization,
	linux-arm-kernel, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Logan Gunthorpe

On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 08/09/2020 16:44, Logan Gunthorpe wrote:
> > On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> >>> b/drivers/gpu/drm/i915/i915_scatterlist.h
> >>> index b7b59328cb76..9367ac801f0c 100644
> >>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> >>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> >>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> >>>    } __sgt_iter(struct scatterlist *sgl, bool dma) {
> >>>           struct sgt_iter s = { .sgp = sgl };
> >>>
> >>> +       if (sgl && !sg_dma_len(s.sgp))
> >>
> >> I'd extend the condition to be, just to be safe:
> >>      if (dma && sgl && !sg_dma_len(s.sgp))
> >>
> >
> > Right, good catch, that's definitely necessary.
> >
> >>> +               s.sgp = NULL;
> >>> +
> >>>           if (s.sgp) {
> >>>                   s.max = s.curr = s.sgp->offset;
> >>> -               s.max += s.sgp->length;
> >>> -               if (dma)
> >>> +
> >>> +               if (dma) {
> >>> +                       s.max += sg_dma_len(s.sgp);
> >>>                           s.dma = sg_dma_address(s.sgp);
> >>> -               else
> >>> +               } else {
> >>> +                       s.max += s.sgp->length;
> >>>                           s.pfn = page_to_pfn(sg_page(s.sgp));
> >>> +               }
> >>
> >> Otherwise has this been tested or alternatively how to test it? (How to
> >> repro the issue.)
> >
> > It has not been tested. To test it, you need Tom's patch set without the
> > last "DO NOT MERGE" patch:
> >
> > https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
>
> Tom, do you have a branch somewhere I could pull from? (Just being lazy
> about downloading a bunch of messages from the archives.)

I don't unfortunately. I'm working locally with poor internet.

>
> What GPU is in your Lenovo x1 carbon 5th generation and what
> graphical/desktop setup I need to repro?


Is this enough info?:

$ lspci -vnn | grep VGA -A 12
00:02.0 VGA compatible controller [0300]: Intel Corporation HD
Graphics 620 [8086:5916] (rev 02) (prog-if 00 [VGA controller])
    Subsystem: Lenovo ThinkPad X1 Carbon 5th Gen [17aa:224f]
    Flags: bus master, fast devsel, latency 0, IRQ 148
    Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
    Memory at 60000000 (64-bit, prefetchable) [size=256M]
    I/O ports at e000 [size=64]
    [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
    Capabilities: [40] Vendor Specific Information: Len=0c <?>
    Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
    Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
    Capabilities: [d0] Power Management version 2
    Capabilities: [100] Process Address Space ID (PASID)
    Capabilities: [200] Address Translation Service (ATS)


>
> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-08 22:43                     ` Tom Murphy
@ 2020-09-09  9:16                       ` Tvrtko Ursulin
  2020-09-09 12:55                         ` Tvrtko Ursulin
  0 siblings, 1 reply; 38+ messages in thread
From: Tvrtko Ursulin @ 2020-09-09  9:16 UTC (permalink / raw)
  To: Tom Murphy
  Cc: kvm, David Airlie, dri-devel, Matthias Brugger, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, Gerald Schaefer,
	linux-s390, linux-arm-msm, intel-gfx, Robin Murphy,
	linux-mediatek, linux-tegra, Thomas Gleixner, virtualization,
	linux-arm-kernel, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Logan Gunthorpe


On 08/09/2020 23:43, Tom Murphy wrote:
> On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 08/09/2020 16:44, Logan Gunthorpe wrote:
>>> On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
>>>>>
>>>>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>> b/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>> index b7b59328cb76..9367ac801f0c 100644
>>>>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>>>>>     } __sgt_iter(struct scatterlist *sgl, bool dma) {
>>>>>            struct sgt_iter s = { .sgp = sgl };
>>>>>
>>>>> +       if (sgl && !sg_dma_len(s.sgp))
>>>>
>>>> I'd extend the condition to be, just to be safe:
>>>>       if (dma && sgl && !sg_dma_len(s.sgp))
>>>>
>>>
>>> Right, good catch, that's definitely necessary.
>>>
>>>>> +               s.sgp = NULL;
>>>>> +
>>>>>            if (s.sgp) {
>>>>>                    s.max = s.curr = s.sgp->offset;
>>>>> -               s.max += s.sgp->length;
>>>>> -               if (dma)
>>>>> +
>>>>> +               if (dma) {
>>>>> +                       s.max += sg_dma_len(s.sgp);
>>>>>                            s.dma = sg_dma_address(s.sgp);
>>>>> -               else
>>>>> +               } else {
>>>>> +                       s.max += s.sgp->length;
>>>>>                            s.pfn = page_to_pfn(sg_page(s.sgp));
>>>>> +               }
>>>>
>>>> Otherwise has this been tested or alternatively how to test it? (How to
>>>> repro the issue.)
>>>
>>> It has not been tested. To test it, you need Tom's patch set without the
>>> last "DO NOT MERGE" patch:
>>>
>>> https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
>>
>> Tom, do you have a branch somewhere I could pull from? (Just being lazy
>> about downloading a bunch of messages from the archives.)
> 
> I don't unfortunately. I'm working locally with poor internet.
> 
>>
>> What GPU is in your Lenovo x1 carbon 5th generation and what
>> graphical/desktop setup I need to repro?
> 
> 
> Is this enough info?:
> 
> $ lspci -vnn | grep VGA -A 12
> 00:02.0 VGA compatible controller [0300]: Intel Corporation HD
> Graphics 620 [8086:5916] (rev 02) (prog-if 00 [VGA controller])
>      Subsystem: Lenovo ThinkPad X1 Carbon 5th Gen [17aa:224f]
>      Flags: bus master, fast devsel, latency 0, IRQ 148
>      Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
>      Memory at 60000000 (64-bit, prefetchable) [size=256M]
>      I/O ports at e000 [size=64]
>      [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
>      Capabilities: [40] Vendor Specific Information: Len=0c <?>
>      Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
>      Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
>      Capabilities: [d0] Power Management version 2
>      Capabilities: [100] Process Address Space ID (PASID)
>      Capabilities: [200] Address Translation Service (ATS)

Works for a start. What about the steps to repro? Any desktop 
environment and it is just visual corruption, no hangs/stalls or such?

I've submitted a series consisting of what I understood are the patches 
needed to repro the issue to our automated CI here:

https://patchwork.freedesktop.org/series/81489/

So will see if it will catch something, or more targeted testing will be 
required. Hopefully it does trip over in which case I can add the patch 
suggested by Logan on top and see if that fixes it. Or I'll need to 
write a new test case.

If you could glance over my series to check I identified the patches 
correctly it would be appreciated.

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-09  9:16                       ` Tvrtko Ursulin
@ 2020-09-09 12:55                         ` Tvrtko Ursulin
  2020-09-10 13:33                           ` Tom Murphy
  0 siblings, 1 reply; 38+ messages in thread
From: Tvrtko Ursulin @ 2020-09-09 12:55 UTC (permalink / raw)
  To: Tom Murphy
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, linux-arm-kernel,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, Logan Gunthorpe, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy


On 09/09/2020 10:16, Tvrtko Ursulin wrote:
> On 08/09/2020 23:43, Tom Murphy wrote:
>> On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>> On 08/09/2020 16:44, Logan Gunthorpe wrote:
>>>> On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>>> b/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>>> index b7b59328cb76..9367ac801f0c 100644
>>>>>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
>>>>>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
>>>>>>     } __sgt_iter(struct scatterlist *sgl, bool dma) {
>>>>>>            struct sgt_iter s = { .sgp = sgl };
>>>>>>
>>>>>> +       if (sgl && !sg_dma_len(s.sgp))
>>>>>
>>>>> I'd extend the condition to be, just to be safe:
>>>>>       if (dma && sgl && !sg_dma_len(s.sgp))
>>>>>
>>>>
>>>> Right, good catch, that's definitely necessary.
>>>>
>>>>>> +               s.sgp = NULL;
>>>>>> +
>>>>>>            if (s.sgp) {
>>>>>>                    s.max = s.curr = s.sgp->offset;
>>>>>> -               s.max += s.sgp->length;
>>>>>> -               if (dma)
>>>>>> +
>>>>>> +               if (dma) {
>>>>>> +                       s.max += sg_dma_len(s.sgp);
>>>>>>                            s.dma = sg_dma_address(s.sgp);
>>>>>> -               else
>>>>>> +               } else {
>>>>>> +                       s.max += s.sgp->length;
>>>>>>                            s.pfn = page_to_pfn(sg_page(s.sgp));
>>>>>> +               }
>>>>>
>>>>> Otherwise has this been tested or alternatively how to test it? 
>>>>> (How to
>>>>> repro the issue.)
>>>>
>>>> It has not been tested. To test it, you need Tom's patch set without 
>>>> the
>>>> last "DO NOT MERGE" patch:
>>>>
>>>> https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
>>>
>>> Tom, do you have a branch somewhere I could pull from? (Just being lazy
>>> about downloading a bunch of messages from the archives.)
>>
>> I don't unfortunately. I'm working locally with poor internet.
>>
>>>
>>> What GPU is in your Lenovo x1 carbon 5th generation and what
>>> graphical/desktop setup I need to repro?
>>
>>
>> Is this enough info?:
>>
>> $ lspci -vnn | grep VGA -A 12
>> 00:02.0 VGA compatible controller [0300]: Intel Corporation HD
>> Graphics 620 [8086:5916] (rev 02) (prog-if 00 [VGA controller])
>>      Subsystem: Lenovo ThinkPad X1 Carbon 5th Gen [17aa:224f]
>>      Flags: bus master, fast devsel, latency 0, IRQ 148
>>      Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
>>      Memory at 60000000 (64-bit, prefetchable) [size=256M]
>>      I/O ports at e000 [size=64]
>>      [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
>>      Capabilities: [40] Vendor Specific Information: Len=0c <?>
>>      Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
>>      Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
>>      Capabilities: [d0] Power Management version 2
>>      Capabilities: [100] Process Address Space ID (PASID)
>>      Capabilities: [200] Address Translation Service (ATS)
> 
> Works for a start. What about the steps to repro? Any desktop 
> environment and it is just visual corruption, no hangs/stalls or such?
> 
> I've submitted a series consisting of what I understood are the patches 
> needed to repro the issue to our automated CI here:
> 
> https://patchwork.freedesktop.org/series/81489/
> 
> So will see if it will catch something, or more targeted testing will be 
> required. Hopefully it does trip over in which case I can add the patch 
> suggested by Logan on top and see if that fixes it. Or I'll need to 
> write a new test case.
> 
> If you could glance over my series to check I identified the patches 
> correctly it would be appreciated.

Our CI was more than capable of catching the breakage so I've copied you 
on a patch (https://patchwork.freedesktop.org/series/81497/) which has a 
good potential to fix this. (Or improve the robustness of our sg walks, 
depends how you look at it.)

Would you be able to test it in your environment by any chance? If it 
works I understand it unblocks your IOMMU work, right?

Regards,

Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-09 12:55                         ` Tvrtko Ursulin
@ 2020-09-10 13:33                           ` Tom Murphy
  2020-09-10 13:34                             ` Tom Murphy
  0 siblings, 1 reply; 38+ messages in thread
From: Tom Murphy @ 2020-09-10 13:33 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, linux-arm-kernel,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, Logan Gunthorpe, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy

On Wed, 9 Sep 2020 at 13:56, Tvrtko Ursulin
<tvrtko.ursulin@linux.intel.com> wrote:
>
>
> On 09/09/2020 10:16, Tvrtko Ursulin wrote:
> > On 08/09/2020 23:43, Tom Murphy wrote:
> >> On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
> >> <tvrtko.ursulin@linux.intel.com> wrote:
> >>> On 08/09/2020 16:44, Logan Gunthorpe wrote:
> >>>> On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
> >>>>>>
> >>>>>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> >>>>>> b/drivers/gpu/drm/i915/i915_scatterlist.h
> >>>>>> index b7b59328cb76..9367ac801f0c 100644
> >>>>>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> >>>>>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> >>>>>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> >>>>>>     } __sgt_iter(struct scatterlist *sgl, bool dma) {
> >>>>>>            struct sgt_iter s = { .sgp = sgl };
> >>>>>>
> >>>>>> +       if (sgl && !sg_dma_len(s.sgp))
> >>>>>
> >>>>> I'd extend the condition to be, just to be safe:
> >>>>>       if (dma && sgl && !sg_dma_len(s.sgp))
> >>>>>
> >>>>
> >>>> Right, good catch, that's definitely necessary.
> >>>>
> >>>>>> +               s.sgp = NULL;
> >>>>>> +
> >>>>>>            if (s.sgp) {
> >>>>>>                    s.max = s.curr = s.sgp->offset;
> >>>>>> -               s.max += s.sgp->length;
> >>>>>> -               if (dma)
> >>>>>> +
> >>>>>> +               if (dma) {
> >>>>>> +                       s.max += sg_dma_len(s.sgp);
> >>>>>>                            s.dma = sg_dma_address(s.sgp);
> >>>>>> -               else
> >>>>>> +               } else {
> >>>>>> +                       s.max += s.sgp->length;
> >>>>>>                            s.pfn = page_to_pfn(sg_page(s.sgp));
> >>>>>> +               }
> >>>>>
> >>>>> Otherwise has this been tested or alternatively how to test it?
> >>>>> (How to
> >>>>> repro the issue.)
> >>>>
> >>>> It has not been tested. To test it, you need Tom's patch set without
> >>>> the
> >>>> last "DO NOT MERGE" patch:
> >>>>
> >>>> https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
> >>>
> >>> Tom, do you have a branch somewhere I could pull from? (Just being lazy
> >>> about downloading a bunch of messages from the archives.)
> >>
> >> I don't unfortunately. I'm working locally with poor internet.
> >>
> >>>
> >>> What GPU is in your Lenovo x1 carbon 5th generation and what
> >>> graphical/desktop setup I need to repro?
> >>
> >>
> >> Is this enough info?:
> >>
> >> $ lspci -vnn | grep VGA -A 12
> >> 00:02.0 VGA compatible controller [0300]: Intel Corporation HD
> >> Graphics 620 [8086:5916] (rev 02) (prog-if 00 [VGA controller])
> >>      Subsystem: Lenovo ThinkPad X1 Carbon 5th Gen [17aa:224f]
> >>      Flags: bus master, fast devsel, latency 0, IRQ 148
> >>      Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
> >>      Memory at 60000000 (64-bit, prefetchable) [size=256M]
> >>      I/O ports at e000 [size=64]
> >>      [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
> >>      Capabilities: [40] Vendor Specific Information: Len=0c <?>
> >>      Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
> >>      Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
> >>      Capabilities: [d0] Power Management version 2
> >>      Capabilities: [100] Process Address Space ID (PASID)
> >>      Capabilities: [200] Address Translation Service (ATS)
> >
> > Works for a start. What about the steps to repro? Any desktop
> > environment and it is just visual corruption, no hangs/stalls or such?
> >
> > I've submitted a series consisting of what I understood are the patches
> > needed to repro the issue to our automated CI here:
> >
> > https://patchwork.freedesktop.org/series/81489/
> >
> > So will see if it will catch something, or more targeted testing will be
> > required. Hopefully it does trip over in which case I can add the patch
> > suggested by Logan on top and see if that fixes it. Or I'll need to
> > write a new test case.
> >
> > If you could glance over my series to check I identified the patches
> > correctly it would be appreciated.
>
> > Our CI was more than capable of catching the breakage so I've copied you
> on a patch (https://patchwork.freedesktop.org/series/81497/) which has a
> good potential to fix this. (Or improve the robustness of our sg walks,
> depends how you look at it.)
>
> Would you be able to test it in your environment by any chance? If it
> works I understand it unblocks your IOMMU work, right?

I tested your latest patch set ([PATCH 1/2] drm/i915: Fix DMA mapped
scatterlist walks) and it fixes the issue. Great work!

>
> Regards,
>
> Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
  2020-09-10 13:33                           ` Tom Murphy
@ 2020-09-10 13:34                             ` Tom Murphy
  0 siblings, 0 replies; 38+ messages in thread
From: Tom Murphy @ 2020-09-10 13:34 UTC (permalink / raw)
  To: Tvrtko Ursulin
  Cc: kvm, David Airlie, dri-devel, linux-tegra, Julien Grall,
	Will Deacon, Jean-Philippe Brucker, linux-samsung-soc,
	Marc Zyngier, Krzysztof Kozlowski, Jonathan Hunter,
	Christoph Hellwig, linux-rockchip, Andy Gross, linux-arm-kernel,
	linux-s390, linux-arm-msm, intel-gfx, linux-mediatek,
	Matthias Brugger, Thomas Gleixner, virtualization,
	Gerald Schaefer, Logan Gunthorpe, David Woodhouse, Cornelia Huck,
	Linux Kernel Mailing List, iommu, Kukjin Kim, Robin Murphy

On Thu, 10 Sep 2020 at 14:33, Tom Murphy <murphyt7@tcd.ie> wrote:
>
> On Wed, 9 Sep 2020 at 13:56, Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
> >
> >
> > On 09/09/2020 10:16, Tvrtko Ursulin wrote:
> > > On 08/09/2020 23:43, Tom Murphy wrote:
> > >> On Tue, 8 Sep 2020 at 16:56, Tvrtko Ursulin
> > >> <tvrtko.ursulin@linux.intel.com> wrote:
> > >>> On 08/09/2020 16:44, Logan Gunthorpe wrote:
> > >>>> On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
> > >>>>>>
> > >>>>>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
> > >>>>>> b/drivers/gpu/drm/i915/i915_scatterlist.h
> > >>>>>> index b7b59328cb76..9367ac801f0c 100644
> > >>>>>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> > >>>>>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> > >>>>>> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> > >>>>>>     } __sgt_iter(struct scatterlist *sgl, bool dma) {
> > >>>>>>            struct sgt_iter s = { .sgp = sgl };
> > >>>>>>
> > >>>>>> +       if (sgl && !sg_dma_len(s.sgp))
> > >>>>>
> > >>>>> I'd extend the condition to be, just to be safe:
> > >>>>>       if (dma && sgl && !sg_dma_len(s.sgp))
> > >>>>>
> > >>>>
> > >>>> Right, good catch, that's definitely necessary.
> > >>>>
> > >>>>>> +               s.sgp = NULL;
> > >>>>>> +
> > >>>>>>            if (s.sgp) {
> > >>>>>>                    s.max = s.curr = s.sgp->offset;
> > >>>>>> -               s.max += s.sgp->length;
> > >>>>>> -               if (dma)
> > >>>>>> +
> > >>>>>> +               if (dma) {
> > >>>>>> +                       s.max += sg_dma_len(s.sgp);
> > >>>>>>                            s.dma = sg_dma_address(s.sgp);
> > >>>>>> -               else
> > >>>>>> +               } else {
> > >>>>>> +                       s.max += s.sgp->length;
> > >>>>>>                            s.pfn = page_to_pfn(sg_page(s.sgp));
> > >>>>>> +               }
> > >>>>>
> > >>>>> Otherwise has this been tested or alternatively how to test it?
> > >>>>> (How to
> > >>>>> repro the issue.)
> > >>>>
> > >>>> It has not been tested. To test it, you need Tom's patch set without
> > >>>> the
> > >>>> last "DO NOT MERGE" patch:
> > >>>>
> > >>>> https://lkml.kernel.org/lkml/20200907070035.GA25114@infradead.org/T/
> > >>>
> > >>> Tom, do you have a branch somewhere I could pull from? (Just being lazy
> > >>> about downloading a bunch of messages from the archives.)
> > >>
> > >> I don't unfortunately. I'm working locally with poor internet.
> > >>
> > >>>
> > >>> What GPU is in your Lenovo x1 carbon 5th generation and what
> > >>> graphical/desktop setup I need to repro?
> > >>
> > >>
> > >> Is this enough info?:
> > >>
> > >> $ lspci -vnn | grep VGA -A 12
> > >> 00:02.0 VGA compatible controller [0300]: Intel Corporation HD
> > >> Graphics 620 [8086:5916] (rev 02) (prog-if 00 [VGA controller])
> > >>      Subsystem: Lenovo ThinkPad X1 Carbon 5th Gen [17aa:224f]
> > >>      Flags: bus master, fast devsel, latency 0, IRQ 148
> > >>      Memory at eb000000 (64-bit, non-prefetchable) [size=16M]
> > >>      Memory at 60000000 (64-bit, prefetchable) [size=256M]
> > >>      I/O ports at e000 [size=64]
> > >>      [virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
> > >>      Capabilities: [40] Vendor Specific Information: Len=0c <?>
> > >>      Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
> > >>      Capabilities: [ac] MSI: Enable+ Count=1/1 Maskable- 64bit-
> > >>      Capabilities: [d0] Power Management version 2
> > >>      Capabilities: [100] Process Address Space ID (PASID)
> > >>      Capabilities: [200] Address Translation Service (ATS)
> > >
> > > Works for a start. What about the steps to repro? Any desktop
> > > environment and it is just visual corruption, no hangs/stalls or such?
> > >
> > > I've submitted a series consisting of what I understood are the patches
> > > needed to repro the issue to our automated CI here:
> > >
> > > https://patchwork.freedesktop.org/series/81489/
> > >
> > > So will see if it will catch something, or more targeted testing will be
> > > required. Hopefully it does trip over in which case I can add the patch
> > > suggested by Logan on top and see if that fixes it. Or I'll need to
> > > write a new test case.
> > >
> > > If you could glance over my series to check I identified the patches
> > > correctly it would be appreciated.
> >
> > > Our CI was more than capable of catching the breakage so I've copied you
> > on a patch (https://patchwork.freedesktop.org/series/81497/) which has a
> > good potential to fix this. (Or improve the robustness of our sg walks,
> > depends how you look at it.)
> >
> > Would you be able to test it in your environment by any chance? If it
> > works I understand it unblocks your IOMMU work, right?

And yes, this does unblock the IOMMU work.

>
> I tested your latest patch set ([PATCH 1/2] drm/i915: Fix DMA mapped
> scatterlist walks) and it fixes the issue. Great work!
>
> >
> > Regards,
> >
> > Tvrtko

^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread

Thread overview: 38+ messages
2019-12-21 15:03 [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Tom Murphy
2019-12-21 15:03 ` [PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment Tom Murphy
2019-12-21 23:46   ` Arvind Sankar
2019-12-23  3:00   ` Lu Baolu
2019-12-21 15:03 ` [PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices Tom Murphy
2019-12-21 15:03 ` [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path Tom Murphy
2020-03-20  6:30   ` Tom Murphy
2020-03-20  7:06     ` Lu Baolu
2019-12-21 15:03 ` [PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers Tom Murphy
2019-12-21 15:03 ` [PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function Tom Murphy
2019-12-21 15:03 ` [PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers Tom Murphy
2019-12-24 10:20   ` kbuild test robot
2019-12-21 15:03 ` [PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops Tom Murphy
2019-12-21 15:04 ` [PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu Tom Murphy
2019-12-23 10:37 ` [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api Jani Nikula
2019-12-23 11:29   ` Robin Murphy
2019-12-23 11:41     ` Jani Nikula
2020-03-20  6:28       ` Tom Murphy
2020-05-29  0:00 ` Logan Gunthorpe
2020-05-29 12:45   ` Christoph Hellwig
2020-05-29 19:05     ` Logan Gunthorpe
2020-05-29 21:11       ` Marek Szyprowski
2020-05-29 21:21         ` Logan Gunthorpe
2020-08-24  0:04           ` Tom Murphy
2020-08-26 18:26             ` Alex Deucher
2020-08-27 21:36             ` Logan Gunthorpe
2020-08-27 23:34               ` Tom Murphy
2020-09-03 20:26                 ` Tom Murphy
2020-09-08 15:28               ` [Intel-gfx] " Tvrtko Ursulin
2020-09-08 15:44                 ` Logan Gunthorpe
2020-09-08 15:56                   ` Tvrtko Ursulin
2020-09-08 22:43                     ` Tom Murphy
2020-09-09  9:16                       ` Tvrtko Ursulin
2020-09-09 12:55                         ` Tvrtko Ursulin
2020-09-10 13:33                           ` Tom Murphy
2020-09-10 13:34                             ` Tom Murphy
2020-08-26 18:14 ` Robin Murphy
2020-08-26 18:23   ` Tom Murphy
