* fix nvme performance regression due to dma_max_mapping_size()
From: Christoph Hellwig @ 2019-07-17  6:26 UTC
  To: iommu; +Cc: Joerg Roedel, Benjamin Herrenschmidt, linux-kernel

Hi all,

the new dma_max_mapping_size function is a little too eager to limit
the I/O size when a swiotlb buffer is present, even if the device is
not addressing limited.  Fix this by adding an additional check.
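
To illustrate the impact (not part of this series; the names below are
hypothetical), a driver such as NVMe typically derives its maximum
transfer size from dma_max_mapping_size(), so an overly small return
value directly shrinks the largest possible I/O:

#define EXAMPLE_MAX_HW_SECTORS	2048	/* hypothetical 1 MiB cap in 512-byte sectors */

struct example_ctrl {
	u32 max_hw_sectors;
};

/* Sketch only: cap the controller's transfer size by the DMA mapping limit. */
static void example_set_transfer_limit(struct device *dev,
				       struct example_ctrl *ctrl)
{
	size_t dma_limit = dma_max_mapping_size(dev);

	/* Convert the byte limit to 512-byte sectors for the block layer. */
	ctrl->max_hw_sectors = min_t(u32, EXAMPLE_MAX_HW_SECTORS,
				     dma_limit >> SECTOR_SHIFT);
}

Without the fix below, dma_limit ends up at swiotlb_max_mapping_size()
(typically 256 KiB) whenever a swiotlb buffer exists, even for fully
64-bit capable devices, which is the regression seen with NVMe.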


* [PATCH 1/2] dma-mapping: add a dma_addressing_limited helper
From: Christoph Hellwig @ 2019-07-17  6:26 UTC
  To: iommu; +Cc: Joerg Roedel, Benjamin Herrenschmidt, linux-kernel

This helper returns whether the device has trouble addressing all
memory present in the system.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-mapping.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 8d13e28a8e07..e11b115dd0e4 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -679,6 +679,20 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev:	device to check
+ *
+ * Return %true if the device's DMA mask is too small to address all memory in
+ * the system, else %false.  Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+static inline bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(*dev->dma_mask, dev->bus_dma_mask) <
+		dma_get_required_mask(dev);
+}
+
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		const struct iommu_ops *iommu, bool coherent);
-- 
2.20.1
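
For illustration only (not part of the patch; the function and message
below are hypothetical), a driver could consult the new helper at probe
time to decide whether its I/O may end up being bounced:

/* Hypothetical probe-time check built on top of dma_addressing_limited(). */
static int example_probe(struct device *dev)
{
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;

	if (dma_addressing_limited(dev)) {
		/*
		 * A 32-bit mask cannot reach all installed memory, so
		 * large mappings may be bounced through swiotlb.
		 */
		dev_info(dev, "addressing limited, I/O may be bounced\n");
	}

	return 0;
}

On a machine with memory above 4 GiB, dma_get_required_mask() exceeds
the 32-bit device mask, so the helper returns true; on smaller machines
it returns false and no bouncing is expected.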



* [PATCH 2/2] dma-direct: only limit the mapping size if swiotlb could be used
From: Christoph Hellwig @ 2019-07-17  6:26 UTC
  To: iommu; +Cc: Joerg Roedel, Benjamin Herrenschmidt, linux-kernel

Don't just check whether a swiotlb buffer is present, but also whether
buffering might actually be required for this particular device.

Fixes: 133d624b1cee ("dma: Introduce dma_max_mapping_size()")
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 kernel/dma/direct.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index d7cec866d16b..e269b6f9b444 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -399,11 +399,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
 
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
-	size_t size = SIZE_MAX;
-
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active())
-		size = swiotlb_max_mapping_size(dev);
-
-	return size;
+	if (is_swiotlb_active() &&
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+		return swiotlb_max_mapping_size(dev);
+	return SIZE_MAX;
 }
-- 
2.20.1
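
To illustrate the intended effect (not part of the patch; the function
below is hypothetical): once the device's mask covers all memory, the
mere presence of a swiotlb buffer no longer caps the mapping size.

/*
 * Sketch only.  On a system with a swiotlb buffer but a fully 64-bit
 * capable device, this is now expected to return SIZE_MAX rather than
 * swiotlb_max_mapping_size().
 */
static size_t example_transfer_cap(struct device *dev)
{
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		return 0;	/* hypothetical: treat failure as "no I/O possible" */

	/*
	 * Only addressing-limited devices, or systems booted with
	 * swiotlb=force, still see the swiotlb limit here.
	 */
	return dma_max_mapping_size(dev);
}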


