linux-kernel.vger.kernel.org archive mirror
* swiotlb cleanup (resend)
@ 2018-07-25 11:37 Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Hi Konrad,

below are a few swiotlb patches.  Mostly just cleanups, but the removal
of the panic option is an actual change in (rarely used) functionality.

I'd be happy to pick them up through the dma-mapping tree if you are
fine with that.


* [PATCH 1/6] swiotlb: remove a pointless comment
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
@ 2018-07-25 11:37 ` Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 2/6] swiotlb: do not panic on mapping failures Christoph Hellwig
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

This comment describes an aspect of the map_sg interface that isn't
even exploited by swiotlb.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 4f8a6dbf0b60..9062b14bc7f4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -925,12 +925,6 @@ swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
  * appropriate dma address and length.  They are obtained via
  * sg_dma_{address,length}(SG).
  *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
  * Device ownership issues as mentioned above for swiotlb_map_page are the
  * same here.
  */
-- 
2.18.0



* [PATCH 2/6] swiotlb: do not panic on mapping failures
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
@ 2018-07-25 11:37 ` Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single Christoph Hellwig
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

All properly written drivers now have error handling in their
dma_map_single / dma_map_page callers.  As swiotlb_tbl_map_single already
prints a useful warning when running out of swiotlb pool space, we can
also remove swiotlb_full entirely, as it serves no purpose now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 33 +--------------------------------
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 9062b14bc7f4..06cfc4d00325 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -761,34 +761,6 @@ static bool swiotlb_free_buffer(struct device *dev, size_t size,
 	return true;
 }
 
-static void
-swiotlb_full(struct device *dev, size_t size, enum dma_data_direction dir,
-	     int do_panic)
-{
-	if (swiotlb_force == SWIOTLB_NO_FORCE)
-		return;
-
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	dev_err_ratelimited(dev, "DMA: Out of SW-IOMMU space for %zu bytes\n",
-			    size);
-
-	if (size <= io_tlb_overflow || !do_panic)
-		return;
-
-	if (dir == DMA_BIDIRECTIONAL)
-		panic("DMA: Random memory could be DMA accessed\n");
-	if (dir == DMA_FROM_DEVICE)
-		panic("DMA: Random memory could be DMA written\n");
-	if (dir == DMA_TO_DEVICE)
-		panic("DMA: Random memory could be DMA read\n");
-}
-
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
  * physical address to use is returned.
@@ -817,10 +789,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 
 	/* Oh well, have to allocate and map a bounce buffer. */
 	map = map_single(dev, phys, size, dir, attrs);
-	if (map == SWIOTLB_MAP_ERROR) {
-		swiotlb_full(dev, size, dir, 1);
+	if (map == SWIOTLB_MAP_ERROR)
 		return __phys_to_dma(dev, io_tlb_overflow_buffer);
-	}
 
 	dev_addr = __phys_to_dma(dev, map);
 
@@ -948,7 +918,6 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 			if (map == SWIOTLB_MAP_ERROR) {
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
 				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
 				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
 						       attrs);
-- 
2.18.0



* [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 1/6] swiotlb: remove a pointless comment Christoph Hellwig
  2018-07-25 11:37 ` [PATCH 2/6] swiotlb: do not panic on mapping failures Christoph Hellwig
@ 2018-07-25 11:37 ` Christoph Hellwig
  2018-07-25 11:38 ` [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static Christoph Hellwig
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:37 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 06cfc4d00325..03016221fc64 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -812,9 +812,9 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
  * After this call, reads by the cpu to the buffer are guaranteed to see
  * whatever the device wrote there.
  */
-static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
-			 size_t size, enum dma_data_direction dir,
-			 unsigned long attrs)
+void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+			size_t size, enum dma_data_direction dir,
+			unsigned long attrs)
 {
 	phys_addr_t paddr = dma_to_phys(hwdev, dev_addr);
 
@@ -837,13 +837,6 @@ static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 	dma_mark_clean(phys_to_virt(paddr), size);
 }
 
-void swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs)
-{
-	unmap_single(hwdev, dev_addr, size, dir, attrs);
-}
-
 /*
  * Make physical memory consistent for a single streaming mode DMA translation
  * after a transfer.
@@ -947,7 +940,7 @@ swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
 	BUG_ON(dir == DMA_NONE);
 
 	for_each_sg(sgl, sg, nelems, i)
-		unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir,
+		swiotlb_unmap_page(hwdev, sg->dma_address, sg_dma_len(sg), dir,
 			     attrs);
 }
 
-- 
2.18.0



* [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
                   ` (2 preceding siblings ...)
  2018-07-25 11:37 ` [PATCH 3/6] swiotlb: merge swiotlb_unmap_page and unmap_single Christoph Hellwig
@ 2018-07-25 11:38 ` Christoph Hellwig
  2018-07-25 11:38 ` [PATCH 5/6] swiotlb: share more code between map_page and map_sg Christoph Hellwig
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:38 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/swiotlb.h | 1 -
 kernel/dma/swiotlb.c    | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 965be92c33b5..7ef541ce8f34 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -121,7 +121,6 @@ static inline unsigned int swiotlb_max_segment(void) { return 0; }
 #endif
 
 extern void swiotlb_print_info(void);
-extern int is_swiotlb_buffer(phys_addr_t paddr);
 extern void swiotlb_set_max_segment(unsigned int);
 
 extern const struct dma_map_ops swiotlb_dma_ops;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 03016221fc64..5555e1fd03cf 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -429,7 +429,7 @@ void __init swiotlb_exit(void)
 	max_segment = 0;
 }
 
-int is_swiotlb_buffer(phys_addr_t paddr)
+static int is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return paddr >= io_tlb_start && paddr < io_tlb_end;
 }
-- 
2.18.0



* [PATCH 5/6] swiotlb: share more code between map_page and map_sg
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
                   ` (3 preceding siblings ...)
  2018-07-25 11:38 ` [PATCH 4/6] swiotlb: mark is_swiotlb_buffer static Christoph Hellwig
@ 2018-07-25 11:38 ` Christoph Hellwig
  2018-07-25 11:38 ` [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page Christoph Hellwig
  2018-08-27 18:13 ` swiotlb cleanup (resend) Konrad Rzeszutek Wilk
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:38 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Refactor all the common code into what previously was map_single, which
is now renamed to __swiotlb_map_page.  This also improves the map_sg
error handling and diagnostics to match the map_page ones.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 114 ++++++++++++++++++++-----------------------
 1 file changed, 53 insertions(+), 61 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 5555e1fd03cf..8ca0964ebf3a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -593,21 +593,47 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
 /*
  * Allocates bounce buffer and returns its physical address.
  */
-static phys_addr_t
-map_single(struct device *hwdev, phys_addr_t phys, size_t size,
-	   enum dma_data_direction dir, unsigned long attrs)
+static int
+__swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs,
+		dma_addr_t *dma_addr)
 {
-	dma_addr_t start_dma_addr;
-
-	if (swiotlb_force == SWIOTLB_NO_FORCE) {
-		dev_warn_ratelimited(hwdev, "Cannot do DMA to address %pa\n",
-				     &phys);
-		return SWIOTLB_MAP_ERROR;
+	phys_addr_t map_addr;
+
+	if (WARN_ON_ONCE(dir == DMA_NONE))
+		return -EINVAL;
+	*dma_addr = phys_to_dma(dev, phys);
+
+	switch (swiotlb_force) {
+	case SWIOTLB_NO_FORCE:
+		dev_warn_ratelimited(dev,
+			"swiotlb: force disabled for address %pa\n", &phys);
+		return -EOPNOTSUPP;
+	case SWIOTLB_NORMAL:
+		/* can we address the memory directly? */
+		if (dma_capable(dev, *dma_addr, size))
+			return 0;
+		break;
+	case SWIOTLB_FORCE:
+		break;
 	}
 
-	start_dma_addr = __phys_to_dma(hwdev, io_tlb_start);
-	return swiotlb_tbl_map_single(hwdev, start_dma_addr, phys, size,
-				      dir, attrs);
+	trace_swiotlb_bounced(dev, *dma_addr, size, swiotlb_force);
+	map_addr = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
+			phys, size, dir, attrs);
+	if (unlikely(map_addr == SWIOTLB_MAP_ERROR))
+		return -ENOMEM;
+
+	/* Ensure that the address returned is DMA'ble */
+	*dma_addr = __phys_to_dma(dev, map_addr);
+	if (unlikely(!dma_capable(dev, *dma_addr, size))) {
+		dev_err_ratelimited(dev,
+			"DMA: swiotlb buffer not addressable.\n");
+		swiotlb_tbl_unmap_single(dev, map_addr, size, dir,
+			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+		return -EINVAL;
+	}
+	return 0;
 }
 
 /*
@@ -773,35 +799,12 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 			    enum dma_data_direction dir,
 			    unsigned long attrs)
 {
-	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = phys_to_dma(dev, phys);
-
-	BUG_ON(dir == DMA_NONE);
-	/*
-	 * If the address happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (dma_capable(dev, dev_addr, size) && swiotlb_force != SWIOTLB_FORCE)
-		return dev_addr;
+	dma_addr_t dma_addr;
 
-	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
-
-	/* Oh well, have to allocate and map a bounce buffer. */
-	map = map_single(dev, phys, size, dir, attrs);
-	if (map == SWIOTLB_MAP_ERROR)
+	if (unlikely(__swiotlb_map_page(dev, page_to_phys(page) + offset, size,
+			dir, attrs, &dma_addr) < 0))
 		return __phys_to_dma(dev, io_tlb_overflow_buffer);
-
-	dev_addr = __phys_to_dma(dev, map);
-
-	/* Ensure that the address returned is DMA'ble */
-	if (dma_capable(dev, dev_addr, size))
-		return dev_addr;
-
-	attrs |= DMA_ATTR_SKIP_CPU_SYNC;
-	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
-
-	return __phys_to_dma(dev, io_tlb_overflow_buffer);
+	return dma_addr;
 }
 
 /*
@@ -892,37 +895,26 @@ swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
  * same here.
  */
 int
-swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
+swiotlb_map_sg_attrs(struct device *dev, struct scatterlist *sgl, int nelems,
 		     enum dma_data_direction dir, unsigned long attrs)
 {
 	struct scatterlist *sg;
 	int i;
 
-	BUG_ON(dir == DMA_NONE);
-
 	for_each_sg(sgl, sg, nelems, i) {
-		phys_addr_t paddr = sg_phys(sg);
-		dma_addr_t dev_addr = phys_to_dma(hwdev, paddr);
-
-		if (swiotlb_force == SWIOTLB_FORCE ||
-		    !dma_capable(hwdev, dev_addr, sg->length)) {
-			phys_addr_t map = map_single(hwdev, sg_phys(sg),
-						     sg->length, dir, attrs);
-			if (map == SWIOTLB_MAP_ERROR) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				attrs |= DMA_ATTR_SKIP_CPU_SYNC;
-				swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir,
-						       attrs);
-				sg_dma_len(sgl) = 0;
-				return 0;
-			}
-			sg->dma_address = __phys_to_dma(hwdev, map);
-		} else
-			sg->dma_address = dev_addr;
+		if (unlikely(__swiotlb_map_page(dev, sg_phys(sg), sg->length,
+				dir, attrs, &sg->dma_address) < 0))
+			goto out_error;
 		sg_dma_len(sg) = sg->length;
 	}
+
 	return nelems;
+
+out_error:
+	swiotlb_unmap_sg_attrs(dev, sgl, i, dir,
+			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+	sg_dma_len(sgl) = 0;
+	return 0;
 }
 
 /*
-- 
2.18.0



* [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
                   ` (4 preceding siblings ...)
  2018-07-25 11:38 ` [PATCH 5/6] swiotlb: share more code between map_page and map_sg Christoph Hellwig
@ 2018-07-25 11:38 ` Christoph Hellwig
  2018-08-27 18:13 ` swiotlb cleanup (resend) Konrad Rzeszutek Wilk
  6 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2018-07-25 11:38 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: iommu, linux-kernel

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/swiotlb.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca0964ebf3a..5c3db7c89e0f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -606,8 +606,11 @@ __swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
 
 	switch (swiotlb_force) {
 	case SWIOTLB_NO_FORCE:
-		dev_warn_ratelimited(dev,
-			"swiotlb: force disabled for address %pa\n", &phys);
+		if (!(attrs & DMA_ATTR_NO_WARN)) {
+			dev_warn_ratelimited(dev,
+				"swiotlb: force disabled for address %pa\n",
+				&phys);
+		}
 		return -EOPNOTSUPP;
 	case SWIOTLB_NORMAL:
 		/* can we address the memory directly? */
@@ -627,10 +630,12 @@ __swiotlb_map_page(struct device *dev, phys_addr_t phys, size_t size,
 	/* Ensure that the address returned is DMA'ble */
 	*dma_addr = __phys_to_dma(dev, map_addr);
 	if (unlikely(!dma_capable(dev, *dma_addr, size))) {
-		dev_err_ratelimited(dev,
-			"DMA: swiotlb buffer not addressable.\n");
+		if (!(attrs & DMA_ATTR_NO_WARN)) {
+			dev_err_ratelimited(dev,
+				"DMA: swiotlb buffer not addressable.\n");
+		}
 		swiotlb_tbl_unmap_single(dev, map_addr, size, dir,
-			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+				attrs | DMA_ATTR_SKIP_CPU_SYNC);
 		return -EINVAL;
 	}
 	return 0;
-- 
2.18.0



* Re: swiotlb cleanup (resend)
  2018-07-25 11:37 swiotlb cleanup (resend) Christoph Hellwig
                   ` (5 preceding siblings ...)
  2018-07-25 11:38 ` [PATCH 6/6] swiotlb: respect DMA_ATTR_NO_WARN in __swiotlb_map_page Christoph Hellwig
@ 2018-08-27 18:13 ` Konrad Rzeszutek Wilk
  2018-08-27 18:33   ` Christoph Hellwig
  6 siblings, 1 reply; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-08-27 18:13 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Konrad Rzeszutek Wilk, iommu, linux-kernel

On Wed, Jul 25, 2018 at 01:37:56PM +0200, Christoph Hellwig wrote:
> Hi Konrad,
> 
> below are a few swiotlb patches.  Mostly just cleanups, but the removal
> of the panic option is an actual change in (rarely used) functionality.
> 
> I'd be happy to pick them up through the dma-mapping tree if you are
> fine with that.

Hey,

I put them in my 'devel/for-linus-4.19' as I would like to make
sure that they work just fine with Xen-SWIOTLB and some baremetal
machines.


* Re: swiotlb cleanup (resend)
  2018-08-27 18:13 ` swiotlb cleanup (resend) Konrad Rzeszutek Wilk
@ 2018-08-27 18:33   ` Christoph Hellwig
  2018-08-27 19:00     ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 10+ messages in thread
From: Christoph Hellwig @ 2018-08-27 18:33 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Christoph Hellwig, Konrad Rzeszutek Wilk, iommu, linux-kernel

On Mon, Aug 27, 2018 at 02:13:11PM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Jul 25, 2018 at 01:37:56PM +0200, Christoph Hellwig wrote:
> > Hi Konrad,
> > 
> > below are a few swiotlb patches.  Mostly just cleanups, but the removal
> > of the panic option is an actual change in (rarely used) functionality.
> > 
> > I'd be happy to pick them up through the dma-mapping tree if you are
> > fine with that.
> 
> Hey,
> 
> I put them in my 'devel/for-linus-4.19' as I would like to make
> sure that they work just fine with Xen-SWIOTLB and some baremetal
> machines.

I actually have a new version of these combined with merging
arm64 support into the main swiotlb ops.  Please hold off a bit for
now.


* Re: swiotlb cleanup (resend)
  2018-08-27 18:33   ` Christoph Hellwig
@ 2018-08-27 19:00     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2018-08-27 19:00 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Konrad Rzeszutek Wilk, iommu, linux-kernel

On Mon, Aug 27, 2018 at 08:33:08PM +0200, Christoph Hellwig wrote:
> On Mon, Aug 27, 2018 at 02:13:11PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Jul 25, 2018 at 01:37:56PM +0200, Christoph Hellwig wrote:
> > > Hi Konrad,
> > > 
> > > below are a few swiotlb patches.  Mostly just cleanups, but the removal
> > > of the panic option is an actual change in (rarely used) functionality.
> > > 
> > > I'd be happy to pick them up through the dma-mapping tree if you are
> > > fine with that.
> > 
> > Hey,
> > 
> > I put them in my 'devel/for-linus-4.19' as I would like to make
> > sure that they work just fine with Xen-SWIOTLB and some baremetal
> > machines.
> 
> I actually have a new version of these combined with merging
> arm64 support into the main swiotlb ops.  Please hold off a bit for
> now.

<nods>

