linux-kernel.vger.kernel.org archive mirror
* swiotlb cleanup
@ 2018-05-15 18:05 Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Hi Konrad,

Below are a few swiotlb patches.  Mostly just cleanups, but the removal
of the panic option is an actual change in (rarely used) functionality.

* [PATCH 1/6] swiotlb: remove a pointless comment
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  2018-05-18 20:22   ` Konrad Rzeszutek Wilk
  2018-05-15 18:05 ` [PATCH 2/6] swiotlb: do not panic on mapping failures Christoph Hellwig
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

This comment describes an aspect of the map_sg interface that isn't
even exploited by swiotlb.
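
For reference, the part of the contract that callers actually rely on is
the returned element count plus the sg_dma_{address,length} accessors.  A
minimal sketch of such a caller (hypothetical driver code, not from this
patch; mydev_map_sg is made up for illustration):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Sketch: map a scatterlist and walk the resulting DMA address/length
 * pairs, iterating over the returned count rather than the original
 * nents. */
static int mydev_map_sg(struct device *dev, struct scatterlist *sgl,
                        int nents)
{
        struct scatterlist *sg;
        int i, count;

        count = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
        if (!count)
                return -ENOMEM; /* nothing was mapped */

        for_each_sg(sgl, sg, count, i)
                pr_debug("seg %d: %pad len %u\n", i,
                         &sg_dma_address(sg), sg_dma_len(sg));
        return count;
}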

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/swiotlb.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 16ace0e25d52..721f93677eee 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -927,12 +927,6 @@ swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
  * appropriate dma address and length.  They are obtained via
  * sg_dma_{address,length}(SG).
  *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
  * Device ownership issues as mentioned above for swiotlb_map_page are the
  * same here.
  */
-- 
2.17.0

* [PATCH 2/6] swiotlb: do not panic on mapping failures
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  2018-05-18 20:21   ` Konrad Rzeszutek Wilk
  2018-05-15 18:05 ` [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single Christoph Hellwig
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

We now have error handling in map_single/map_page callers (most of them
anyway). As swiotlb_tbl_map_single already prints a useful warning
when running out of swiotlb pool swace we can also remove swiotlb_full
entirely as it serves no purpose now.
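
A minimal sketch of the caller-side error handling this change relies on
(hypothetical driver code, not part of this patch; mydev_map_buf is made
up for illustration):

#include <linux/dma-mapping.h>

static int mydev_map_buf(struct device *dev, struct page *page,
                         size_t size, dma_addr_t *dma_addr)
{
        dma_addr_t addr;

        addr = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, addr))
                return -ENOMEM; /* caller backs out instead of panicking */

        *dma_addr = addr;
        return 0;
}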

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/swiotlb.c | 33 +--------------------------------
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 721f93677eee..4d36340bc4f9 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -763,34 +763,6 @@ static bool swiotlb_free_buffer(struct device *dev, size_t size,
 	return true;
 }
 
-static void
-swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
-	     int do_panic)
-{
-	if (swiotlb_force == SWIOTLB_NO_FORCE)
-		return;
-
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	dev_err_ratelimited(dev, "DMA: Out of SW-IOMMU space for %zu bytes\n",
-			    size);
-
-	if (size <= io_tlb_overflow || !do_panic)
-		return;
-
-	if (dir == DMA_BIDIRECTIONAL)
-		panic("DMA: Random memory could be DMA accessed\n");
-	if (dir == DMA_FROM_DEVICE)
-		panic("DMA: Random memory could be DMA written\n");
-	if (dir == DMA_TO_DEVICE)
-		panic("DMA: Random memory could be DMA read\n");
-}
-
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
  * physical address to use is returned.
@@ -819,10 +791,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 
 	/* Oh well, have to allocate and map a bounce buffer. */
 	map = map_single(dev, phys, size, dir, attrs);
-	if (map == SWIOTLB_MAP_ERROR) {
-		swiotlb_full(dev, size, dir, 1);
+	if (map == SWIOTLB_MAP_ERROR)
 		return __phys_to_dma(dev, io_tlb_overflow_buffer);
-	}
 
 	dev_addr = __phys_to_dma(dev, map);
 
@@ -950,7 +920,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 			if (map == SWIOTLB_MAP_ERROR) {
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
 				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
 						       attrs);
-- 
2.17.0

* [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 2/6] swiotlb: do not panic on mapping failures Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  2018-05-18 20:27   ` Konrad Rzeszutek Wilk
  2018-05-15 18:05 ` [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static Christoph Hellwig
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/swiotlb.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 4d36340bc4f9..2ebbc7204061 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -814,9 +814,9 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
  * After this call, reads by the cpu to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
-			 size_t size, enum dma_data_direction dir,
-			 unsigned long attrs)
+void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+			size_t size, enum dma_data_direction dir,
+			unsigned long attrs)
 {
 	phys_addr_t paddr = dma_to_phys(hwdev, dev_addr);
 
@@ -839,13 +839,6 @@ static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 	dma_mark_clean(phys_to_virt(paddr), size);
 }
 
-void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs)
-{
-	unmap_single(hwdev, dev_addr, size, dir, attrs);
-}
-
 /*
  * Make physical memory consistent for a single streaming mode DMA translation
  * after a transfer.
@@ -949,7 +942,7 @@ swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 	BUG_ON(dir == DMA_NONE);
 
 	for_each_sg(sgl, sg, nelems, i)
-		unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir,
+		swiotlb_unmap_page(hwdev, sg->dma_address, sg_dma_len(sg), dir,
 			     attrs);
 }
 
-- 
2.17.0

* [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
                   ` (2 preceding siblings ...)
  2018-05-15 18:05 ` [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 5/6] swiotlb: share more code between map_page and map_sg Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page Christoph Hellwig
  5 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h | 1 -
 lib/swiotlb.c           | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 965be92c33b5..7ef541ce8f34 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -121,7 +121,6 @@ static inline unsigned int swiotlb_max_segment(void) { return 0; }
 #endif
 
 extern void swiotlb_print_info(void);
-extern int is_swiotlb_buffer(phys_addr_t paddr);
 extern void swiotlb_set_max_segment(unsigned int);
 
 extern const struct dma_map_ops swiotlb_dma_ops;
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 2ebbc7204061..8333277d1cd1 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -431,7 +431,7 @@ void __init swiotlb_exit(void)
 	max_segment = 0;
 }
 
-int is_swiotlb_buffer(phys_addr_t paddr)
+static int is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return paddr >= io_tlb_start && paddr < io_tlb_end;
 }
-- 
2.17.0

* [PATCH 5/6] swiotlb: share more code between map_page and map_sg
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
                   ` (3 preceding siblings ...)
  2018-05-15 18:05 ` [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  2018-05-15 18:05 ` [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page Christoph Hellwig
  5 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Refactor all the common code into what previously was map_single, which
is now renamed to __swiotlb_map_page.  This also improves the map_sg
error handling and diagnostics to match the map_page ones.
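
The helper returns 0 on success or a negative errno, so both callers
reduce to the same pattern (a sketch of the shared convention, matching
the diff below; not a compilable unit on its own):

        dma_addr_t dma_addr;

        if (__swiotlb_map_page(dev, phys, size, dir, attrs, &dma_addr) < 0) {
                /* map_page hands back the overflow buffer address here;
                 * map_sg instead unwinds the entries mapped so far. */
        }
        /* on success, dma_addr is usable by the device */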

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/swiotlb.c | 114 +++++++++++++++++++++++---------------------------
 1 file changed, 53 insertions(+), 61 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 8333277d1cd1..5becc2fc680a 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -595,21 +595,47 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 /*
  * Allocates bounce buffer and returns its physical address.
  */
-static phys_addr_t
-map_single(struct device *hwdev, phys_addr_t phys, size_t size,
-	   enum dma_data_direction dir, unsigned long attrs)
+static int
+__swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs,
+		dma_addr_t *dma_addr)
 {
-	dma_addr_t start_dma_addr;
-
-	if (swiotlb_force == SWIOTLB_NO_FORCE) {
-		dev_warn_ratelimited(hwdev, "Cannot do DMA to address %pa\n",
-				     &phys);
-		return SWIOTLB_MAP_ERROR;
+	phys_addr_t map_addr;
+
+	if (WARN_ON_ONCE(dir == DMA_NONE))
+		return -EINVAL;
+	*dma_addr = phys_to_dma(dev, phys);
+
+	switch (swiotlb_force) {
+	case SWIOTLB_NO_FORCE:
+		dev_warn_ratelimited(dev,
+			"swiotlb: force disabled for address %pa\n", &phys);
+		return -EOPNOTSUPP;
+	case SWIOTLB_NORMAL:
+		/* can we address the memory directly? */
+		if (dma_capable(dev, *dma_addr, size))
+			return 0;
+		break;
+	case SWIOTLB_FORCE:
+		break;
 	}
 
-	start_dma_addr = __phys_to_dma(hwdev, io_tlb_start);
-	return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size,
-				      dir, attrs);
+	trace_swiotlb_bounced(dev, *dma_addr, size, swiotlb_force);
+	map_addr = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
+			phys, size, dir, attrs);
+	if (unlikely(map_addr == SWIOTLB_MAP_ERROR))
+		return -ENOMEM;
+
+	/* Ensure that the address returned is DMA'ble */
+	*dma_addr = __phys_to_dma(dev, map_addr);
+	if (unlikely(!dma_capable(dev, *dma_addr, size))) {
+		dev_err_ratelimited(dev,
+			"DMA: swiotlb buffer not addressable.\n");
+		swiotlb_tbl_unmap_single(dev, map_addr, size, dir,
+			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+		return -EINVAL;
+	}
+	return 0;
 }
 
 /*
@@ -775,35 +801,12 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 			    enum dma_data_direction dir,
 			    unsigned long attrs)
 {
-	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = phys_to_dma(dev, phys);
-
-	BUG_ON(dir == DMA_NONE);
-	/*
-	 * If the address happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (dma_capable(dev, dev_addr, size) && swiotlb_force != SWIOTLB_FORCE)
-		return dev_addr;
+	dma_addr_t dma_addr;
 
-	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
-
-	/* Oh well, have to allocate and map a bounce buffer. */
-	map = map_single(dev, phys, size, dir, attrs);
-	if (map == SWIOTLB_MAP_ERROR)
+	if (unlikely(__swiotlb_map_page(dev, page_to_phys(page) + offset, size,
+			dir, attrs, &dma_addr) < 0))
 		return __phys_to_dma(dev, io_tlb_overflow_buffer);
-
-	dev_addr = __phys_to_dma(dev, map);
-
-	/* Ensure that the address returned is DMA'ble */
-	if (dma_capable(dev, dev_addr, size))
-		return dev_addr;
-
-	attrs |= DMA_ATTR_SKIP_CPU_SYNC;
-	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
-
-	return __phys_to_dma(dev, io_tlb_overflow_buffer);
+	return dma_addr;
 }
 
 /*
@@ -894,37 +897,26 @@ swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
  * same here.
  */
 int
-swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
+swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl, int nelems,
 		     enum dma_data_direction dir, unsigned long attrs)
 {
 	struct scatterlist *sg;
 	int i;
 
-	BUG_ON(dir == DMA_NONE);
-
 	for_each_sg(sgl, sg, nelems, i) {
-		phys_addr_t paddr = sg_phys(sg);
-		dma_addr_t dev_addr = phys_to_dma(hwdev, paddr);
-
-		if (swiotlb_force == SWIOTLB_FORCE ||
-		    !dma_capable(hwdev, dev_addr, sg->length)) {
-			phys_addr_t map = map_single(hwdev, sg_phys(sg),
-						     sg->length, dir, attrs);
-			if (map == SWIOTLB_MAP_ERROR) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
-				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
-						       attrs);
-				sg_dma_len(sgl) = 0;
-				return 0;
-			}
-			sg->dma_address = __phys_to_dma(hwdev, map);
-		} else
-			sg->dma_address = dev_addr;
+		if (unlikely(__swiotlb_map_page(dev, sg_phys(sg), sg->length,
+				dir, attrs, &sg->dma_address) < 0))
+			goto out_error;
 		sg_dma_len(sg) = sg->length;
 	}
+
 	return nelems;
+
+out_error:
+	swiotlb_unmap_sg_attrs(dev, sgl, i, dir,
+			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+	sg_dma_len(sgl) = 0;
+	return 0;
 }
 
 /*
-- 
2.17.0

* [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page
  2018-05-15 18:05 swiotlb cleanup Christoph Hellwig
                   ` (4 preceding siblings ...)
  2018-05-15 18:05 ` [PATCH 5/6] swiotlb: share more code between map_page and map_sg Christoph Hellwig
@ 2018-05-15 18:05 ` Christoph Hellwig
  5 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2018-05-15 18:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel
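
Drivers that expect and handle mapping failures can pass DMA_ATTR_NO_WARN
to keep these messages out of the log.  A hedged sketch of such a caller
(hypothetical driver code, not from this series; mydev_try_map is made up
for illustration):

#include <linux/dma-mapping.h>

static bool mydev_try_map(struct device *dev, struct page *page,
                          size_t size, dma_addr_t *addr)
{
        /* Failure is expected and handled by the caller, so ask the
         * DMA layer not to warn about it. */
        *addr = dma_map_page_attrs(dev, page, 0, size, DMA_TO_DEVICE,
                                   DMA_ATTR_NO_WARN);
        return !dma_mapping_error(dev, *addr);
}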

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 lib/swiotlb.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 5becc2fc680a..5cf88e090cb6 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -608,8 +608,11 @@ __swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
 
 	switch (swiotlb_force) {
 	case SWIOTLB_NO_FORCE:
-		dev_warn_ratelimited(dev,
-			"swiotlb: force disabled for address %pa\n", &phys);
+		if (!(attrs & DMA_ATTR_NO_WARN)) {
+			dev_warn_ratelimited(dev,
+				"swiotlb: force disabled for address %pa\n",
+				&phys);
+		}
 		return -EOPNOTSUPP;
 	case SWIOTLB_NORMAL:
 		/* can we address the memory directly? */
@@ -629,10 +632,12 @@ __swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
 	/* Ensure that the address returned is DMA'ble */
 	*dma_addr = __phys_to_dma(dev, map_addr);
 	if (unlikely(!dma_capable(dev, *dma_addr, size))) {
-		dev_err_ratelimited(dev,
-			"DMA: swiotlb buffer not addressable.\n");
+		if (!(attrs & DMA_ATTR_NO_WARN)) {
+			dev_err_ratelimited(dev,
+				"DMA: swiotlb buffer not addressable.\n");
+		}
 		swiotlb_tbl_unmap_single(dev, map_addr, size, dir,
-			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+				attrs | DMA_ATTR_SKIP_CPU_SYNC);
 		return -EINVAL;
 	}
 	return 0;
-- 
2.17.0

* Re: [PATCH 2/6] swiotlb: do not panic on mapping failures
  2018-05-15 18:05 ` [PATCH 2/6] swiotlb: do not panic on mapping failures Christoph Hellwig
@ 2018-05-18 20:21   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-18 20:21 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: iommu, linux-kernel

On Tue, May 15, 2018 at 08:05:19PM +0200, Christoph Hellwig wrote:
> We now have error handling in map_single/map_page callers (most of them

Which ones are missing? Shouldn't we first fix those before we rip this out?

> anyway). As swiotlb_tbl_map_single already prints a useful warning
> when running out of swiotlb pool swace we can also remove swiotlb_full

s/swace/so/

> entirely as it serves no purpose now.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  lib/swiotlb.c | 33 +--------------------------------
>  1 file changed, 1 insertion(+), 32 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> index 721f93677eee..4d36340bc4f9 100644
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -763,34 +763,6 @@ static bool swiotlb_free_buffer(struct device *dev, size_t size,
>  	return true;
>  }
>  
> -static void
> -swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
> -	     int do_panic)
> -{
> -	if (swiotlb_force == SWIOTLB_NO_FORCE)
> -		return;
> -
> -	/*
> -	 * Ran out of IOMMU space for this operation. This is very bad.
> -	 * Unfortunately the drivers cannot handle this operation properly.
> -	 * unless they check for dma_mapping_error (most don't)
> -	 * When the mapping is small enough return a static buffer to limit
> -	 * the damage, or panic when the transfer is too big.
> -	 */
> -	dev_err_ratelimited(dev, "DMA: Out of SW-IOMMU space for %zu bytes\n",
> -			    size);
> -
> -	if (size <= io_tlb_overflow || !do_panic)
> -		return;
> -
> -	if (dir == DMA_BIDIRECTIONAL)
> -		panic("DMA: Random memory could be DMA accessed\n");
> -	if (dir == DMA_FROM_DEVICE)
> -		panic("DMA: Random memory could be DMA written\n");
> -	if (dir == DMA_TO_DEVICE)
> -		panic("DMA: Random memory could be DMA read\n");
> -}
> -
>  /*
>   * Map a single buffer of the indicated size for DMA in streaming mode.  The
>   * physical address to use is returned.
> @@ -819,10 +791,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
>  
>  	/* Oh well, have to allocate and map a bounce buffer. */
>  	map = map_single(dev, phys, size, dir, attrs);
> -	if (map == SWIOTLB_MAP_ERROR) {
> -		swiotlb_full(dev, size, dir, 1);
> +	if (map == SWIOTLB_MAP_ERROR)
>  		return __phys_to_dma(dev, io_tlb_overflow_buffer);
> -	}
>  
>  	dev_addr = __phys_to_dma(dev, map);
>  
> @@ -950,7 +920,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
>  			if (map == SWIOTLB_MAP_ERROR) {
>  				/* Don't panic here, we expect map_sg users
>  				   to do proper error handling. */
> -				swiotlb_full(hwdev, sg->length, dir, 0);
>  				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
>  				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
>  						       attrs);
> -- 
> 2.17.0
> 

* Re: [PATCH 1/6] swiotlb: remove a pointless comment
  2018-05-15 18:05 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
@ 2018-05-18 20:22   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-18 20:22 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: iommu, linux-kernel

On Tue, May 15, 2018 at 08:05:18PM +0200, Christoph Hellwig wrote:
> This comment describes an aspect of the map_sg interface that isn't
> even exploited by swiotlb.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Applied.
> ---
>  lib/swiotlb.c | 6 ------
>  1 file changed, 6 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> index 16ace0e25d52..721f93677eee 100644
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -927,12 +927,6 @@ swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
>   * appropriate dma address and length.  They are obtained via
>   * sg_dma_{address,length}(SG).
>   *
> - * NOTE: An implementation may be able to use a smaller number of
> - *       DMA address/length pairs than there are SG table elements.
> - *       (for example via virtual mapping capabilities)
> - *       The routine returns the number of addr/length pairs actually
> - *       used, at most nents.
> - *
>   * Device ownership issues as mentioned above for swiotlb_map_page are the
>   * same here.
>   */
> -- 
> 2.17.0
> 

* Re: [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single
  2018-05-15 18:05 ` [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single Christoph Hellwig
@ 2018-05-18 20:27   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 11+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-05-18 20:27 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: iommu, linux-kernel

On Tue, May 15, 2018 at 08:05:20PM +0200, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Applied.

* [PATCH 2/6] swiotlb: do not panic on mapping failures
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
@ 2018-07-25 11:37 ` Christoph Hellwig
  0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

All properly written drivers now have error handling in the
dma_map_single / dma_map_page callers.  As swiotlb_tbl_map_single already
prints a useful warning when running out of swiotlb pool space, we can
also remove swiotlb_full entirely, as it serves no purpose now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 33 +--------------------------------
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9062b14bc7f4..06cfc4d00325 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -761,34 +761,6 @@ static bool swiotlb_free_buffer(struct device *dev, size_t size,
 	return true;
 }
 
-static void
-swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
-	     int do_panic)
-{
-	if (swiotlb_force == SWIOTLB_NO_FORCE)
-		return;
-
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	dev_err_ratelimited(dev, "DMA: Out of SW-IOMMU space for %zu bytes\n",
-			    size);
-
-	if (size <= io_tlb_overflow || !do_panic)
-		return;
-
-	if (dir == DMA_BIDIRECTIONAL)
-		panic("DMA: Random memory could be DMA accessed\n");
-	if (dir == DMA_FROM_DEVICE)
-		panic("DMA: Random memory could be DMA written\n");
-	if (dir == DMA_TO_DEVICE)
-		panic("DMA: Random memory could be DMA read\n");
-}
-
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
  * physical address to use is returned.
@@ -817,10 +789,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 
 	/* Oh well, have to allocate and map a bounce buffer. */
 	map = map_single(dev, phys, size, dir, attrs);
-	if (map == SWIOTLB_MAP_ERROR) {
-		swiotlb_full(dev, size, dir, 1);
+	if (map == SWIOTLB_MAP_ERROR)
 		return __phys_to_dma(dev, io_tlb_overflow_buffer);
-	}
 
 	dev_addr = __phys_to_dma(dev, map);
 
@@ -948,7 +918,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 			if (map == SWIOTLB_MAP_ERROR) {
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
 				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
 						       attrs);
-- 
2.18.0

