linux-arm-kernel.lists.infradead.org archive mirror
* add swiotlb support to arm32
@ 2019-07-09 14:20 Christoph Hellwig
  2019-07-09 14:20 ` [PATCH 1/2] dma-mapping check pfn validity in dma_common_{mmap, get_sgtable} Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-09 14:20 UTC (permalink / raw)
  To: Russell King - ARM Linux admin
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Hi Russell,

This series adds swiotlb support to the 32-bit arm port to ensure
platforms with LPAE support can support DMA mapping for all devices
using 32-bit dma masks, just like we do on other ports that support
> 32-bit physical addressing and don't have an iommu.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 1/2] dma-mapping check pfn validity in dma_common_{mmap, get_sgtable}
  2019-07-09 14:20 add swiotlb support to arm32 Christoph Hellwig
@ 2019-07-09 14:20 ` Christoph Hellwig
  2019-07-09 14:20 ` [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs Christoph Hellwig
  2019-07-17 13:21 ` add swiotlb support to arm32 Vignesh Raghavendra
  2 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-09 14:20 UTC (permalink / raw)
  To: Russell King - ARM Linux admin
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Check that the pfn returned from arch_dma_coherent_to_pfn refers to
a valid page and reject the mmap / get_sgtable requests otherwise.

Based on the arm implementation of the mmap and get_sgtable methods.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/mapping.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1f628e7ac709..b945239621d8 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -116,11 +116,16 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 	int ret;
 
 	if (!dev_is_dma_coherent(dev)) {
+		unsigned long pfn;
+
 		if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
 			return -ENXIO;
 
-		page = pfn_to_page(arch_dma_coherent_to_pfn(dev, cpu_addr,
-				dma_addr));
+		/* If the PFN is not valid, we do not have a struct page */
+		pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
+		if (!pfn_valid(pfn))
+			return -ENXIO;
+		page = pfn_to_page(pfn);
 	} else {
 		page = virt_to_page(cpu_addr);
 	}
@@ -170,7 +175,11 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	if (!dev_is_dma_coherent(dev)) {
 		if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_COHERENT_TO_PFN))
 			return -ENXIO;
+
+		/* If the PFN is not valid, we do not have a struct page */
 		pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);
+		if (!pfn_valid(pfn))
+			return -ENXIO;
 	} else {
 		pfn = page_to_pfn(virt_to_page(cpu_addr));
 	}
-- 
2.20.1




* [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-07-09 14:20 add swiotlb support to arm32 Christoph Hellwig
  2019-07-09 14:20 ` [PATCH 1/2] dma-mapping check pfn validity in dma_common_{mmap, get_sgtable} Christoph Hellwig
@ 2019-07-09 14:20 ` Christoph Hellwig
  2019-07-24 17:23   ` Nicolas Saenz Julienne
  2019-12-19 13:10   ` Peter Ujfalusi
  2019-07-17 13:21 ` add swiotlb support to arm32 Vignesh Raghavendra
  2 siblings, 2 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-09 14:20 UTC (permalink / raw)
  To: Russell King - ARM Linux admin
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

The DMA API requires that 32-bit DMA masks are always supported, but on
arm LPAE configs they do not currently work when memory is present
above 4GB.  Wire up the swiotlb code like for all other architectures
to provide the bounce buffering in that case.

Fixes: 21e07dba9fb11 ("scsi: reduce use of block bounce buffers").
Reported-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/arm/include/asm/dma-mapping.h |  4 +-
 arch/arm/mm/Kconfig                |  5 +++
 arch/arm/mm/dma-mapping.c          | 61 ++++++++++++++++++++++++++++++
 arch/arm/mm/init.c                 |  5 +++
 4 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 03ba90ffc0f8..054119cd7757 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -18,7 +18,9 @@ extern const struct dma_map_ops arm_coherent_dma_ops;
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
-	return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : NULL;
+	if (IS_ENABLED(CONFIG_MMU) && !IS_ENABLED(CONFIG_ARM_LPAE))
+		return &arm_dma_ops;
+	return NULL;
 }
 
 #ifdef __arch_page_to_dma
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index b169e580bf82..2dd36183d0e6 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -663,6 +663,11 @@ config ARM_LPAE
 	depends on MMU && CPU_32v7 && !CPU_32v6 && !CPU_32v5 && \
 		!CPU_32v4 && !CPU_32v3
 	select PHYS_ADDR_T_64BIT
+	select SWIOTLB
+	select ARCH_HAS_DMA_COHERENT_TO_PFN
+	select ARCH_HAS_DMA_MMAP_PGPROT
+	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
+	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	help
 	  Say Y if you have an ARMv7 processor supporting the LPAE page
 	  table format and you would like to access memory beyond the
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index bdf0d236aaee..01a5b96d76a7 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -18,6 +18,7 @@
 #include <linux/init.h>
 #include <linux/device.h>
 #include <linux/dma-mapping.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/dma-contiguous.h>
 #include <linux/highmem.h>
 #include <linux/memblock.h>
@@ -1129,6 +1130,19 @@ int arm_dma_supported(struct device *dev, u64 mask)
 
 static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 {
+	/*
+	 * When CONFIG_ARM_LPAE is set, physical address can extend above
+	 * 32-bits, which then can't be addressed by devices that only support
+	 * 32-bit DMA.
+	 * Use the generic dma-direct / swiotlb ops code in that case, as that
+	 * handles bounce buffering for us.
+	 *
+	 * Note: this checks CONFIG_ARM_LPAE instead of CONFIG_SWIOTLB as the
+	 * latter is also selected by the Xen code, but that code for now relies
+	 * on non-NULL dev_dma_ops.  To be cleaned up later.
+	 */
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		return NULL;
 	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
 }
 
@@ -2333,6 +2347,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 	const struct dma_map_ops *dma_ops;
 
 	dev->archdata.dma_coherent = coherent;
+#ifdef CONFIG_SWIOTLB
+	dev->dma_coherent = coherent;
+#endif
 
 	/*
 	 * Don't override the dma_ops if they have already been set. Ideally
@@ -2367,3 +2384,47 @@ void arch_teardown_dma_ops(struct device *dev)
 	/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
 	set_dma_ops(dev, NULL);
 }
+
+#ifdef CONFIG_SWIOTLB
+void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
+			      size, dir);
+}
+
+void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
+		size_t size, enum dma_data_direction dir)
+{
+	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
+			      size, dir);
+}
+
+long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
+		dma_addr_t dma_addr)
+{
+	return dma_to_pfn(dev, dma_addr);
+}
+
+pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
+		unsigned long attrs)
+{
+	if (!dev_is_dma_coherent(dev))
+		return __get_dma_pgprot(attrs, prot);
+	return prot;
+}
+
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+		gfp_t gfp, unsigned long attrs)
+{
+	return __dma_alloc(dev, size, dma_handle, gfp,
+			   __get_dma_pgprot(attrs, PAGE_KERNEL), false,
+			   attrs, __builtin_return_address(0));
+}
+
+void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
+		dma_addr_t dma_handle, unsigned long attrs)
+{
+	__arm_dma_free(dev, size, cpu_addr, dma_handle, attrs, false);
+}
+#endif /* CONFIG_SWIOTLB */
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index be0b42937888..64541be15d43 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -24,6 +24,7 @@
 #include <linux/dma-contiguous.h>
 #include <linux/sizes.h>
 #include <linux/stop_machine.h>
+#include <linux/swiotlb.h>
 
 #include <asm/cp15.h>
 #include <asm/mach-types.h>
@@ -456,6 +457,10 @@ void __init mem_init(void)
 	extern u32 itcm_end;
 #endif
 
+#ifdef CONFIG_ARM_LPAE
+	swiotlb_init(1);
+#endif
+
 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
 
 	/* this will put all unused low memory onto the freelists */
-- 
2.20.1




* Re: add swiotlb support to arm32
  2019-07-09 14:20 add swiotlb support to arm32 Christoph Hellwig
  2019-07-09 14:20 ` [PATCH 1/2] dma-mapping check pfn validity in dma_common_{mmap, get_sgtable} Christoph Hellwig
  2019-07-09 14:20 ` [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs Christoph Hellwig
@ 2019-07-17 13:21 ` Vignesh Raghavendra
  2019-07-19 12:33   ` Christoph Hellwig
  2019-07-24 15:37   ` Christoph Hellwig
  2 siblings, 2 replies; 30+ messages in thread
From: Vignesh Raghavendra @ 2019-07-17 13:21 UTC (permalink / raw)
  To: Christoph Hellwig, Russell King - ARM Linux admin
  Cc: iommu, Konrad Rzeszutek Wilk, linux-kernel, linux-arm-kernel,
	Roger Quadros

Hi,

On 09/07/19 7:50 PM, Christoph Hellwig wrote:
> Hi Russell,
> 
> This series adds swiotlb support to the 32-bit arm port to ensure
> platforms with LPAE support can support DMA mapping for all devices
> using 32-bit dma masks, just like we do on other ports that support
>> 32-bit physical addressing and don't have an iommu.


This series fixes SATA errors seen on TI platforms with LPAE like DRA7
Rev H EVM.

Tested-by: Vignesh Raghavendra <vigneshr@ti.com>

Thanks for the fix!

-- 
Regards
Vignesh



* Re: add swiotlb support to arm32
  2019-07-17 13:21 ` add swiotlb support to arm32 Vignesh Raghavendra
@ 2019-07-19 12:33   ` Christoph Hellwig
  2019-07-24 15:37   ` Christoph Hellwig
  1 sibling, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-19 12:33 UTC (permalink / raw)
  To: Vignesh Raghavendra
  Cc: Konrad Rzeszutek Wilk, linux-kernel,
	Russell King - ARM Linux admin, iommu, Christoph Hellwig,
	linux-arm-kernel, Roger Quadros

On Wed, Jul 17, 2019 at 06:51:46PM +0530, Vignesh Raghavendra wrote:
> > This series adds swiotlb support to the 32-bit arm port to ensure
> > platforms with LPAE support can support DMA mapping for all devices
> > using 32-bit dma masks, just like we do on other ports that support
> >> 32-bit physical addressing and don't have an iommu.
> 
> 
> This series fixes SATA errors seen on TI platforms with LPAE like DRA7
> Rev H EVM.
> 
> Tested-by: Vignesh Raghavendra <vigneshr@ti.com>
> 
> Thanks for the fix!

Thanks for testing!  Russell, do you want to pick this up via the
arm tree, or should I queue it up in the dma-mapping tree?

> 
> -- 
> Regards
> Vignesh
---end quoted text---



* Re: add swiotlb support to arm32
  2019-07-17 13:21 ` add swiotlb support to arm32 Vignesh Raghavendra
  2019-07-19 12:33   ` Christoph Hellwig
@ 2019-07-24 15:37   ` Christoph Hellwig
  1 sibling, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-24 15:37 UTC (permalink / raw)
  To: Vignesh Raghavendra
  Cc: Konrad Rzeszutek Wilk, linux-kernel,
	Russell King - ARM Linux admin, iommu, Christoph Hellwig,
	linux-arm-kernel, Roger Quadros

FYI, I've tentatively pulled the branch into the dma-mapping for-next
branch so that we get some linux-next exposure.



* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-07-09 14:20 ` [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs Christoph Hellwig
@ 2019-07-24 17:23   ` Nicolas Saenz Julienne
  2019-07-24 17:55     ` Christoph Hellwig
  2019-12-19 13:10   ` Peter Ujfalusi
  1 sibling, 1 reply; 30+ messages in thread
From: Nicolas Saenz Julienne @ 2019-07-24 17:23 UTC (permalink / raw)
  To: Christoph Hellwig, Russell King - ARM Linux admin
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros



On Tue, 2019-07-09 at 07:20 -0700, Christoph Hellwig wrote:
> The DMA API requires that 32-bit DMA masks are always supported, but on
> arm LPAE configs they do not currently work when memory is present
> above 4GB.  Wire up the swiotlb code like for all other architectures
> to provide the bounce buffering in that case.
> 
> Fixes: 21e07dba9fb11 ("scsi: reduce use of block bounce buffers").
> Reported-by: Roger Quadros <rogerq@ti.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---

Hi Christoph,
Out of curiosity, what is the reason stopping us from using dma-direct/swiotlb
instead of arm_dma_ops altogether?

Regards,
Nicolas





* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-07-24 17:23   ` Nicolas Saenz Julienne
@ 2019-07-24 17:55     ` Christoph Hellwig
  0 siblings, 0 replies; 30+ messages in thread
From: Christoph Hellwig @ 2019-07-24 17:55 UTC (permalink / raw)
  To: Nicolas Saenz Julienne
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel,
	Russell King - ARM Linux admin, iommu, Christoph Hellwig,
	linux-arm-kernel, Roger Quadros

On Wed, Jul 24, 2019 at 07:23:50PM +0200, Nicolas Saenz Julienne wrote:
> Out of curiosity, what is the reason stopping us from using dma-direct/swiotlb
> instead of arm_dma_ops altogether?

Nothing fundamental.  We just need to do a very careful piecemeal
migration as the arm code handles a lot of interesting corner cases and
we need to ensure we don't break that.  I have various WIP patches
for the easier bits and we can work from there.



* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-07-09 14:20 ` [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs Christoph Hellwig
  2019-07-24 17:23   ` Nicolas Saenz Julienne
@ 2019-12-19 13:10   ` Peter Ujfalusi
  2019-12-19 15:02     ` Christoph Hellwig
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2019-12-19 13:10 UTC (permalink / raw)
  To: Christoph Hellwig, Russell King - ARM Linux admin
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Hi,

On 09/07/2019 17.20, Christoph Hellwig wrote:
> The DMA API requires that 32-bit DMA masks are always supported, but on
> arm LPAE configs they do not currently work when memory is present
> above 4GB.  Wire up the swiotlb code like for all other architectures
> to provide the bounce buffering in that case.

bisect pointed me to this commit as the reason why EDMA fails to probe and sdhci falls back to PIO mode (not using its built-in DMA).

In both cases the reason is that
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
fails, because dma_direct_supported() is returning false.


Prints inside dma_direct_supported():
sdhci-omap 23000000.mmc: max_pfn: 880000
sdhci-omap 23000000.mmc: min_mask #1: ffffff
sdhci-omap 23000000.mmc: min_mask #2: ffffff
sdhci-omap 23000000.mmc: dev->dma_pfn_offset: 780000
sdhci-omap 23000000.mmc: PAGE_SHIFT: 12
sdhci-omap 23000000.mmc: __phys_to_dma(dev, min_mask): ff880ffffff
sdhci-omap 23000000.mmc: mask: ffffffff

Print in dma_supported() after returning from dma_direct_supported():
sdhci-omap 23000000.mmc: dma_is_direct, ret = 0

sdhci-omap 23100000.mmc: DMA is not supported for the device

keystone-k2g have this in soc0 node:
dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;

DDR starts at 0x8 0000 0000 (8G) and 2G is aliased at 0x8000 0000.

This gives the 0x780000 for dma_pfn_offset for all devices underneath it.

The DMA_BIT_MASK(24) is passed to __phys_to_dma() because CONFIG_ZONE_DMA is enabled.

SWIOTLB is enabled in kconfig.

I'm not sure how to fix it correctly, but the following patch makes things work:

From b682a61776f0861755c9d54e5ebccf8471d85bfd Mon Sep 17 00:00:00 2001
From: Peter Ujfalusi <peter.ujfalusi@ti.com>
Date: Thu, 19 Dec 2019 15:07:25 +0200
Subject: [PATCH] arm: mm: dma-mapping: Fix dma_supported() when
 dev->dma_pfn_offset is not 0

When LPAE is enabled, we can only use direct mapping if dma_pfn_offset
is 0; otherwise valid dma_masks are rejected and DMA support is denied
for peripherals and DMA drivers.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 arch/arm/mm/dma-mapping.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 7d042d5c43e3..bf199b1e82bd 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1100,15 +1100,6 @@ int arm_dma_supported(struct device *dev, u64 mask)
 
 static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 {
-	/*
-	 * When CONFIG_ARM_LPAE is set, physical address can extend above
-	 * 32-bits, which then can't be addressed by devices that only support
-	 * 32-bit DMA.
-	 * Use the generic dma-direct / swiotlb ops code in that case, as that
-	 * handles bounce buffering for us.
-	 */
-	if (IS_ENABLED(CONFIG_ARM_LPAE))
-		return NULL;
 	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
 }
 
@@ -2309,6 +2300,15 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 
 	if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
 		dma_ops = arm_get_iommu_dma_map_ops(coherent);
+	else if (IS_ENABLED(CONFIG_ARM_LPAE) && !dev->dma_pfn_offset)
+		/*
+		 * When CONFIG_ARM_LPAE is set, physical address can extend
 +		 * above 32-bits, which then can't be addressed by devices
+		 * that only support 32-bit DMA.
+		 * Use the generic dma-direct / swiotlb ops code in that case,
+		 * as that handles bounce buffering for us.
+		 */
+		dma_ops = NULL;
 	else
 		dma_ops = arm_get_dma_map_ops(coherent);
 
-- 
- Péter

> Fixes: 21e07dba9fb11 ("scsi: reduce use of block bounce buffers").
> Reported-by: Roger Quadros <rogerq@ti.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  arch/arm/include/asm/dma-mapping.h |  4 +-
>  arch/arm/mm/Kconfig                |  5 +++
>  arch/arm/mm/dma-mapping.c          | 61 ++++++++++++++++++++++++++++++
>  arch/arm/mm/init.c                 |  5 +++
>  4 files changed, 74 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
> index 03ba90ffc0f8..054119cd7757 100644
> --- a/arch/arm/include/asm/dma-mapping.h
> +++ b/arch/arm/include/asm/dma-mapping.h
> @@ -18,7 +18,9 @@ extern const struct dma_map_ops arm_coherent_dma_ops;
>  
>  static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
>  {
> -	return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : NULL;
> +	if (IS_ENABLED(CONFIG_MMU) && !IS_ENABLED(CONFIG_ARM_LPAE))
> +		return &arm_dma_ops;
> +	return NULL;
>  }
>  
>  #ifdef __arch_page_to_dma
> diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
> index b169e580bf82..2dd36183d0e6 100644
> --- a/arch/arm/mm/Kconfig
> +++ b/arch/arm/mm/Kconfig
> @@ -663,6 +663,11 @@ config ARM_LPAE
>  	depends on MMU && CPU_32v7 && !CPU_32v6 && !CPU_32v5 && \
>  		!CPU_32v4 && !CPU_32v3
>  	select PHYS_ADDR_T_64BIT
> +	select SWIOTLB
> +	select ARCH_HAS_DMA_COHERENT_TO_PFN
> +	select ARCH_HAS_DMA_MMAP_PGPROT
> +	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> +	select ARCH_HAS_SYNC_DMA_FOR_CPU
>  	help
>  	  Say Y if you have an ARMv7 processor supporting the LPAE page
>  	  table format and you would like to access memory beyond the
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index bdf0d236aaee..01a5b96d76a7 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -18,6 +18,7 @@
>  #include <linux/init.h>
>  #include <linux/device.h>
>  #include <linux/dma-mapping.h>
> +#include <linux/dma-noncoherent.h>
>  #include <linux/dma-contiguous.h>
>  #include <linux/highmem.h>
>  #include <linux/memblock.h>
> @@ -1129,6 +1130,19 @@ int arm_dma_supported(struct device *dev, u64 mask)
>  
>  static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
>  {
> +	/*
> +	 * When CONFIG_ARM_LPAE is set, physical address can extend above
> +	 * 32-bits, which then can't be addressed by devices that only support
> +	 * 32-bit DMA.
> +	 * Use the generic dma-direct / swiotlb ops code in that case, as that
> +	 * handles bounce buffering for us.
> +	 *
> +	 * Note: this checks CONFIG_ARM_LPAE instead of CONFIG_SWIOTLB as the
> +	 * latter is also selected by the Xen code, but that code for now relies
> +	 * on non-NULL dev_dma_ops.  To be cleaned up later.
> +	 */
> +	if (IS_ENABLED(CONFIG_ARM_LPAE))
> +		return NULL;
>  	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
>  }
>  
> @@ -2333,6 +2347,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
>  	const struct dma_map_ops *dma_ops;
>  
>  	dev->archdata.dma_coherent = coherent;
> +#ifdef CONFIG_SWIOTLB
> +	dev->dma_coherent = coherent;
> +#endif
>  
>  	/*
>  	 * Don't override the dma_ops if they have already been set. Ideally
> @@ -2367,3 +2384,47 @@ void arch_teardown_dma_ops(struct device *dev)
>  	/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
>  	set_dma_ops(dev, NULL);
>  }
> +
> +#ifdef CONFIG_SWIOTLB
> +void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
> +		size_t size, enum dma_data_direction dir)
> +{
> +	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
> +			      size, dir);
> +}
> +
> +void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
> +		size_t size, enum dma_data_direction dir)
> +{
> +	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
> +			      size, dir);
> +}
> +
> +long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
> +		dma_addr_t dma_addr)
> +{
> +	return dma_to_pfn(dev, dma_addr);
> +}
> +
> +pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
> +		unsigned long attrs)
> +{
> +	if (!dev_is_dma_coherent(dev))
> +		return __get_dma_pgprot(attrs, prot);
> +	return prot;
> +}
> +
> +void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
> +		gfp_t gfp, unsigned long attrs)
> +{
> +	return __dma_alloc(dev, size, dma_handle, gfp,
> +			   __get_dma_pgprot(attrs, PAGE_KERNEL), false,
> +			   attrs, __builtin_return_address(0));
> +}
> +
> +void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
> +		dma_addr_t dma_handle, unsigned long attrs)
> +{
> +	__arm_dma_free(dev, size, cpu_addr, dma_handle, attrs, false);
> +}
> +#endif /* CONFIG_SWIOTLB */
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index be0b42937888..64541be15d43 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -24,6 +24,7 @@
>  #include <linux/dma-contiguous.h>
>  #include <linux/sizes.h>
>  #include <linux/stop_machine.h>
> +#include <linux/swiotlb.h>
>  
>  #include <asm/cp15.h>
>  #include <asm/mach-types.h>
> @@ -456,6 +457,10 @@ void __init mem_init(void)
>  	extern u32 itcm_end;
>  #endif
>  
> +#ifdef CONFIG_ARM_LPAE
> +	swiotlb_init(1);
> +#endif
> +
>  	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
>  
>  	/* this will put all unused low memory onto the freelists */
> 

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki



* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-12-19 13:10   ` Peter Ujfalusi
@ 2019-12-19 15:02     ` Christoph Hellwig
  2019-12-19 15:20       ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2019-12-19 15:02 UTC (permalink / raw)
  To: Peter Ujfalusi
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	Christoph Hellwig, linux-arm-kernel, Roger Quadros

Hi Peter,

can you try the patch below (it will need to be split into two):

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index e822af0d9219..30b9c6786ce3 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -221,7 +221,8 @@ EXPORT_SYMBOL(arm_coherent_dma_ops);
 
 static int __dma_supported(struct device *dev, u64 mask, bool warn)
 {
-	unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
+	unsigned long max_dma_pfn =
+		min_t(unsigned long, max_pfn, zone_dma_limit >> PAGE_SHIFT);
 
 	/*
 	 * Translate the device's DMA mask to a PFN limit.  This
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 3ef204137e73..dd0e169a1bb1 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -19,6 +19,7 @@
 #include <linux/gfp.h>
 #include <linux/memblock.h>
 #include <linux/dma-contiguous.h>
+#include <linux/dma-direct.h>
 #include <linux/sizes.h>
 #include <linux/stop_machine.h>
 #include <linux/swiotlb.h>
@@ -84,15 +85,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
 phys_addr_t arm_dma_zone_size __read_mostly;
 EXPORT_SYMBOL(arm_dma_zone_size);
 
-/*
- * The DMA mask corresponding to the maximum bus address allocatable
- * using GFP_DMA.  The default here places no restriction on DMA
- * allocations.  This must be the smallest DMA mask in the system,
- * so a successful GFP_DMA allocation will always satisfy this.
- */
-phys_addr_t arm_dma_limit;
-unsigned long arm_dma_pfn_limit;
-
 static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
 	unsigned long dma_size)
 {
@@ -108,14 +100,14 @@ static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
 
 void __init setup_dma_zone(const struct machine_desc *mdesc)
 {
-#ifdef CONFIG_ZONE_DMA
-	if (mdesc->dma_zone_size) {
+	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
+		zone_dma_limit = ((phys_addr_t)~0);
+	} else if (mdesc->dma_zone_size) {
 		arm_dma_zone_size = mdesc->dma_zone_size;
-		arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
-	} else
-		arm_dma_limit = 0xffffffff;
-	arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
-#endif
+		zone_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+	} else {
+		zone_dma_limit = 0xffffffff;
+	}
 }
 
 static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
@@ -279,7 +271,7 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
 	early_init_fdt_scan_reserved_mem();
 
 	/* reserve memory for DMA contiguous allocations */
-	dma_contiguous_reserve(arm_dma_limit);
+	dma_contiguous_reserve(zone_dma_limit);
 
 	arm_memblock_steal_permitted = false;
 	memblock_dump_all();
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index 88c121ac14b3..7dbd77554273 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -82,14 +82,6 @@ extern __init void add_static_vm_early(struct static_vm *svm);
 
 #endif
 
-#ifdef CONFIG_ZONE_DMA
-extern phys_addr_t arm_dma_limit;
-extern unsigned long arm_dma_pfn_limit;
-#else
-#define arm_dma_limit ((phys_addr_t)~0)
-#define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT)
-#endif
-
 extern phys_addr_t arm_lowmem_limit;
 
 void __init bootmem_init(void);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b65dffdfb201..7a7501acd763 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -441,7 +441,7 @@ void __init arm64_memblock_init(void)
 	early_init_fdt_scan_reserved_mem();
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
-		zone_dma_bits = ARM64_ZONE_DMA_BITS;
+		zone_dma_limit = DMA_BIT_MASK(ARM64_ZONE_DMA_BITS);
 		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
 	}
 
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9488b63dfc87..337ace03d3f0 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -223,7 +223,7 @@ static int __init mark_nonram_nosave(void)
  * everything else. GFP_DMA32 page allocations automatically fall back to
  * ZONE_DMA.
  *
- * By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
+ * By using 31-bit unconditionally, we can exploit zone_dma_limit to inform the
  * generic DMA mapping code.  32-bit only devices (if not handled by an IOMMU
  * anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
  * ZONE_DMA.
@@ -257,18 +257,20 @@ void __init paging_init(void)
 	printk(KERN_DEBUG "Memory hole size: %ldMB\n",
 	       (long int)((top_of_ram - total_ram) >> 20));
 
+#ifdef CONFIG_ZONE_DMA
 	/*
 	 * Allow 30-bit DMA for very limited Broadcom wifi chips on many
 	 * powerbooks.
 	 */
-	if (IS_ENABLED(CONFIG_PPC32))
-		zone_dma_bits = 30;
-	else
-		zone_dma_bits = 31;
-
-#ifdef CONFIG_ZONE_DMA
-	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
-				      1UL << (zone_dma_bits - PAGE_SHIFT));
+	if (IS_ENABLED(CONFIG_PPC32)) {
+		zone_dma_limit = DMA_BIT_MASK(30);
+		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
+					      1UL << (30 - PAGE_SHIFT));
+	} else {
+		zone_dma_limit = DMA_BIT_MASK(31);
+		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
+					      1UL << (31 - PAGE_SHIFT));
+	}
 #endif
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 #ifdef CONFIG_HIGHMEM
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index f0ce22220565..c403f61cb56b 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -118,7 +118,7 @@ void __init paging_init(void)
 
 	sparse_memory_present_with_active_regions(MAX_NUMNODES);
 	sparse_init();
-	zone_dma_bits = 31;
+	zone_dma_limit = DMA_BIT_MASK(31);
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
 	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 24b8684aa21d..20d56d597506 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -6,7 +6,7 @@
 #include <linux/memblock.h> /* for min_low_pfn */
 #include <linux/mem_encrypt.h>
 
-extern unsigned int zone_dma_bits;
+extern phys_addr_t zone_dma_limit;
 
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
 #include <asm/dma-direct.h>
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 6af7ae83c4ad..5ea1bed2ba6f 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -21,7 +21,7 @@
  * it for entirely different regions. In that case the arch code needs to
  * override the variable below for dma-direct to work properly.
  */
-unsigned int zone_dma_bits __ro_after_init = 24;
+phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
 
 static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
 {
@@ -74,7 +74,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
 	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
 	 * zones.
 	 */
-	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (*phys_limit <= zone_dma_limit)
 		return GFP_DMA;
 	if (*phys_limit <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -483,7 +483,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	u64 min_mask;
 
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(zone_dma_bits);
+		min_mask = zone_dma_limit;
 	else
 		min_mask = DMA_BIT_MASK(32);
 

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-12-19 15:02     ` Christoph Hellwig
@ 2019-12-19 15:20       ` Peter Ujfalusi
  2020-01-08  8:28         ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2019-12-19 15:20 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Hi Christoph,

On 19/12/2019 17.02, Christoph Hellwig wrote:
> Hi Peter,
> 
> can you try the patch below (it will need to be split into two):

Thank you!

Unfortunately it does not help:
[    0.596208] edma: probe of 2700000.edma failed with error -5
[    0.596626] edma: probe of 2728000.edma failed with error -5
...
[    2.108602] sdhci-omap 23000000.mmc: Got CD GPIO
[    2.113899] mmc0: Failed to set 32-bit DMA mask.
[    2.118592] mmc0: No suitable DMA available - falling back to PIO
[    2.159038] mmc0: SDHCI controller on 23000000.mmc [23000000.mmc] using PIO
[    2.167531] mmc1: Failed to set 32-bit DMA mask.
[    2.172192] mmc1: No suitable DMA available - falling back to PIO
[    2.213841] mmc1: SDHCI controller on 23100000.mmc [23100000.mmc] using PIO

- Péter


> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index e822af0d9219..30b9c6786ce3 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -221,7 +221,8 @@ EXPORT_SYMBOL(arm_coherent_dma_ops);
>  
>  static int __dma_supported(struct device *dev, u64 mask, bool warn)
>  {
> -	unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
> +	unsigned long max_dma_pfn =
> +		min_t(unsigned long, max_pfn, zone_dma_limit >> PAGE_SHIFT);
>  
>  	/*
>  	 * Translate the device's DMA mask to a PFN limit.  This
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index 3ef204137e73..dd0e169a1bb1 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -19,6 +19,7 @@
>  #include <linux/gfp.h>
>  #include <linux/memblock.h>
>  #include <linux/dma-contiguous.h>
> +#include <linux/dma-direct.h>
>  #include <linux/sizes.h>
>  #include <linux/stop_machine.h>
>  #include <linux/swiotlb.h>
> @@ -84,15 +85,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
>  phys_addr_t arm_dma_zone_size __read_mostly;
>  EXPORT_SYMBOL(arm_dma_zone_size);
>  
> -/*
> - * The DMA mask corresponding to the maximum bus address allocatable
> - * using GFP_DMA.  The default here places no restriction on DMA
> - * allocations.  This must be the smallest DMA mask in the system,
> - * so a successful GFP_DMA allocation will always satisfy this.
> - */
> -phys_addr_t arm_dma_limit;
> -unsigned long arm_dma_pfn_limit;
> -
>  static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
>  	unsigned long dma_size)
>  {
> @@ -108,14 +100,14 @@ static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
>  
>  void __init setup_dma_zone(const struct machine_desc *mdesc)
>  {
> -#ifdef CONFIG_ZONE_DMA
> -	if (mdesc->dma_zone_size) {
> +	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
> +		zone_dma_limit = ((phys_addr_t)~0);
> +	} else if (mdesc->dma_zone_size) {
>  		arm_dma_zone_size = mdesc->dma_zone_size;
> -		arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
> -	} else
> -		arm_dma_limit = 0xffffffff;
> -	arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
> -#endif
> +		zone_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
> +	} else {
> +		zone_dma_limit = 0xffffffff;
> +	}
>  }
>  
>  static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
> @@ -279,7 +271,7 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
>  	early_init_fdt_scan_reserved_mem();
>  
>  	/* reserve memory for DMA contiguous allocations */
> -	dma_contiguous_reserve(arm_dma_limit);
> +	dma_contiguous_reserve(zone_dma_limit);
>  
>  	arm_memblock_steal_permitted = false;
>  	memblock_dump_all();
> diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
> index 88c121ac14b3..7dbd77554273 100644
> --- a/arch/arm/mm/mm.h
> +++ b/arch/arm/mm/mm.h
> @@ -82,14 +82,6 @@ extern __init void add_static_vm_early(struct static_vm *svm);
>  
>  #endif
>  
> -#ifdef CONFIG_ZONE_DMA
> -extern phys_addr_t arm_dma_limit;
> -extern unsigned long arm_dma_pfn_limit;
> -#else
> -#define arm_dma_limit ((phys_addr_t)~0)
> -#define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT)
> -#endif
> -
>  extern phys_addr_t arm_lowmem_limit;
>  
>  void __init bootmem_init(void);
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index b65dffdfb201..7a7501acd763 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -441,7 +441,7 @@ void __init arm64_memblock_init(void)
>  	early_init_fdt_scan_reserved_mem();
>  
>  	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
> -		zone_dma_bits = ARM64_ZONE_DMA_BITS;
> +		zone_dma_limit = DMA_BIT_MASK(ARM64_ZONE_DMA_BITS);
>  		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
>  	}
>  
> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
> index 9488b63dfc87..337ace03d3f0 100644
> --- a/arch/powerpc/mm/mem.c
> +++ b/arch/powerpc/mm/mem.c
> @@ -223,7 +223,7 @@ static int __init mark_nonram_nosave(void)
>   * everything else. GFP_DMA32 page allocations automatically fall back to
>   * ZONE_DMA.
>   *
> - * By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
> + * By using 31-bit unconditionally, we can exploit zone_dma_limit to inform the
>   * generic DMA mapping code.  32-bit only devices (if not handled by an IOMMU
>   * anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
>   * ZONE_DMA.
> @@ -257,18 +257,20 @@ void __init paging_init(void)
>  	printk(KERN_DEBUG "Memory hole size: %ldMB\n",
>  	       (long int)((top_of_ram - total_ram) >> 20));
>  
> +#ifdef CONFIG_ZONE_DMA
>  	/*
>  	 * Allow 30-bit DMA for very limited Broadcom wifi chips on many
>  	 * powerbooks.
>  	 */
> -	if (IS_ENABLED(CONFIG_PPC32))
> -		zone_dma_bits = 30;
> -	else
> -		zone_dma_bits = 31;
> -
> -#ifdef CONFIG_ZONE_DMA
> -	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
> -				      1UL << (zone_dma_bits - PAGE_SHIFT));
> +	if (IS_ENABLED(CONFIG_PPC32)) {
> +		zone_dma_limit = DMA_BIT_MASK(30);
> +		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
> +					      1UL << (30 - PAGE_SHIFT));
> +	} else {
> +		zone_dma_limit = DMA_BIT_MASK(31);
> +		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
> +					      1UL << (31 - PAGE_SHIFT));
> +	}
>  #endif
>  	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>  #ifdef CONFIG_HIGHMEM
> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
> index f0ce22220565..c403f61cb56b 100644
> --- a/arch/s390/mm/init.c
> +++ b/arch/s390/mm/init.c
> @@ -118,7 +118,7 @@ void __init paging_init(void)
>  
>  	sparse_memory_present_with_active_regions(MAX_NUMNODES);
>  	sparse_init();
> -	zone_dma_bits = 31;
> +	zone_dma_limit = DMA_BIT_MASK(31);
>  	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
>  	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
>  	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
> index 24b8684aa21d..20d56d597506 100644
> --- a/include/linux/dma-direct.h
> +++ b/include/linux/dma-direct.h
> @@ -6,7 +6,7 @@
>  #include <linux/memblock.h> /* for min_low_pfn */
>  #include <linux/mem_encrypt.h>
>  
> -extern unsigned int zone_dma_bits;
> +extern phys_addr_t zone_dma_limit;
>  
>  #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>  #include <asm/dma-direct.h>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..5ea1bed2ba6f 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -21,7 +21,7 @@
>   * it for entirely different regions. In that case the arch code needs to
>   * override the variable below for dma-direct to work properly.
>   */
> -unsigned int zone_dma_bits __ro_after_init = 24;
> +phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
>  
>  static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
>  {
> @@ -74,7 +74,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
>  	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
>  	 * zones.
>  	 */
> -	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
> +	if (*phys_limit <= zone_dma_limit)
>  		return GFP_DMA;
>  	if (*phys_limit <= DMA_BIT_MASK(32))
>  		return GFP_DMA32;
> @@ -483,7 +483,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  	u64 min_mask;
>  
>  	if (IS_ENABLED(CONFIG_ZONE_DMA))
> -		min_mask = DMA_BIT_MASK(zone_dma_bits);
> +		min_mask = zone_dma_limit;
>  	else
>  		min_mask = DMA_BIT_MASK(32);
>  
> 

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2019-12-19 15:20       ` Peter Ujfalusi
@ 2020-01-08  8:28         ` Peter Ujfalusi
  2020-01-08 12:21           ` Robin Murphy
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-08  8:28 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Hi Christoph,

On 19/12/2019 17.20, Peter Ujfalusi wrote:
> Hi Christoph,
> 
> On 19/12/2019 17.02, Christoph Hellwig wrote:
>> Hi Peter,
>>
>> can you try the patch below (it will need to be split into two):
> 
> Thank you!
> 
> Unfortunately it does not help:
> [    0.596208] edma: probe of 2700000.edma failed with error -5
> [    0.596626] edma: probe of 2728000.edma failed with error -5
> ...
> [    2.108602] sdhci-omap 23000000.mmc: Got CD GPIO
> [    2.113899] mmc0: Failed to set 32-bit DMA mask.
> [    2.118592] mmc0: No suitable DMA available - falling back to PIO
> [    2.159038] mmc0: SDHCI controller on 23000000.mmc [23000000.mmc]
> using PIO
> [    2.167531] mmc1: Failed to set 32-bit DMA mask.
> [    2.172192] mmc1: No suitable DMA available - falling back to PIO
> [    2.213841] mmc1: SDHCI controller on 23100000.mmc [23100000.mmc]
> using PIO

Do you have an idea of how to fix this properly?

IMHO, when drivers set the dma_mask and coherent_mask, the
dma_pfn_offset should not be applied to the mask at all.

If I understand it correctly for EDMA as example:

I set dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
since it can only address memory in this range.

It does not matter whether dma_pfn_offset is 0 or non-zero (as on k2g,
where it is 0x780000); the EDMA can still only address within 32 bits.

The dma_pfn_offset tells us that the memory location's physical
address is seen by the DMA at (phys_pfn - dma_pfn_offset) -> dma_pfn.

The dma_mask should be checked against the dma_pfn.

We cannot 'move' the dma_mask by dma_pfn_offset when setting the mask,
since that is not correct: the DMA can access a 32-bit range, and the
peripherals sit below 0x80000000.

I might be missing something, but it looks to me like the place to fix
this is where we set the dma_mask and the coherent_mask.

Best regards,
- Péter



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2020-01-08  8:28         ` Peter Ujfalusi
@ 2020-01-08 12:21           ` Robin Murphy
  2020-01-08 14:00             ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Robin Murphy @ 2020-01-08 12:21 UTC (permalink / raw)
  To: Peter Ujfalusi, Christoph Hellwig
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

On 08/01/2020 8:28 am, Peter Ujfalusi via iommu wrote:
> Hi Christoph,
> 
> On 19/12/2019 17.20, Peter Ujfalusi wrote:
>> Hi Christoph,
>>
>> On 19/12/2019 17.02, Christoph Hellwig wrote:
>>> Hi Peter,
>>>
>>> can you try the patch below (it will need to be split into two):
>>
>> Thank you!
>>
>> Unfortunately it does not help:
>> [    0.596208] edma: probe of 2700000.edma failed with error -5
>> [    0.596626] edma: probe of 2728000.edma failed with error -5
>> ...
>> [    2.108602] sdhci-omap 23000000.mmc: Got CD GPIO
>> [    2.113899] mmc0: Failed to set 32-bit DMA mask.
>> [    2.118592] mmc0: No suitable DMA available - falling back to PIO
>> [    2.159038] mmc0: SDHCI controller on 23000000.mmc [23000000.mmc]
>> using PIO
>> [    2.167531] mmc1: Failed to set 32-bit DMA mask.
>> [    2.172192] mmc1: No suitable DMA available - falling back to PIO
>> [    2.213841] mmc1: SDHCI controller on 23100000.mmc [23100000.mmc]
>> using PIO
> 
> Do you have idea on how to fix this in a proper way?
> 
> IMHO when drivers are setting the dma_mask and coherent_mask the
> dma_pfn_offset should not be applied to the mask at all.
> 
> If I understand it correctly for EDMA as example:
> 
> I set dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> since it can only address memory in this range.
> 
> It does not matter if dma_pfn_offset is 0 or not 0 (like in k2g, where
> it is 0x780000) the EDMA still can only address within 32 bits.
> 
> The dma_pfn_offset will tell us that the memory location's physical
> address is seen by the DMA at (phys_pfn - dma_pfn_offset) -> dma_pfn.
> 
> The dma_mask should be checked against the dma_pfn.

That's exactly what dma_direct_supported() does, though. What it doesn't 
do, AFAICS, is account for weird cases where the DMA zone *starts* way, 
way above 32 physical address bits because of an implicit assumption 
that *all* devices have a dma_pfn_offset equal to min_low_pfn. I'm not 
sure how possible it is to cope with that generically.

Robin.

> We can not 'move' the dma_mask with dma_pfn_offset when setting the mask
> since it is not correct. The DMA can access in 32 bits range and we have
> the peripherals under 0x8000 0000.
> 
> I might be missing something, but it looks to me that the way we set the
> dma_mask and the coherent_mask is the place where this can be fixed.
> 
> Best regards,
> - Péter
> 
>>
>> - Péter
>>
>>
>>> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
>>> index e822af0d9219..30b9c6786ce3 100644
>>> --- a/arch/arm/mm/dma-mapping.c
>>> +++ b/arch/arm/mm/dma-mapping.c
>>> @@ -221,7 +221,8 @@ EXPORT_SYMBOL(arm_coherent_dma_ops);
>>>   
>>>   static int __dma_supported(struct device *dev, u64 mask, bool warn)
>>>   {
>>> -	unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
>>> +	unsigned long max_dma_pfn =
>>> +		min_t(unsigned long, max_pfn, zone_dma_limit >> PAGE_SHIFT);
>>>   
>>>   	/*
>>>   	 * Translate the device's DMA mask to a PFN limit.  This
>>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>>> index 3ef204137e73..dd0e169a1bb1 100644
>>> --- a/arch/arm/mm/init.c
>>> +++ b/arch/arm/mm/init.c
>>> @@ -19,6 +19,7 @@
>>>   #include <linux/gfp.h>
>>>   #include <linux/memblock.h>
>>>   #include <linux/dma-contiguous.h>
>>> +#include <linux/dma-direct.h>
>>>   #include <linux/sizes.h>
>>>   #include <linux/stop_machine.h>
>>>   #include <linux/swiotlb.h>
>>> @@ -84,15 +85,6 @@ static void __init find_limits(unsigned long *min, unsigned long *max_low,
>>>   phys_addr_t arm_dma_zone_size __read_mostly;
>>>   EXPORT_SYMBOL(arm_dma_zone_size);
>>>   
>>> -/*
>>> - * The DMA mask corresponding to the maximum bus address allocatable
>>> - * using GFP_DMA.  The default here places no restriction on DMA
>>> - * allocations.  This must be the smallest DMA mask in the system,
>>> - * so a successful GFP_DMA allocation will always satisfy this.
>>> - */
>>> -phys_addr_t arm_dma_limit;
>>> -unsigned long arm_dma_pfn_limit;
>>> -
>>>   static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
>>>   	unsigned long dma_size)
>>>   {
>>> @@ -108,14 +100,14 @@ static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
>>>   
>>>   void __init setup_dma_zone(const struct machine_desc *mdesc)
>>>   {
>>> -#ifdef CONFIG_ZONE_DMA
>>> -	if (mdesc->dma_zone_size) {
>>> +	if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
>>> +		zone_dma_limit = ((phys_addr_t)~0);
>>> +	} else if (mdesc->dma_zone_size) {
>>>   		arm_dma_zone_size = mdesc->dma_zone_size;
>>> -		arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
>>> -	} else
>>> -		arm_dma_limit = 0xffffffff;
>>> -	arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
>>> -#endif
>>> +		zone_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
>>> +	} else {
>>> +		zone_dma_limit = 0xffffffff;
>>> +	}
>>>   }
>>>   
>>>   static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
>>> @@ -279,7 +271,7 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
>>>   	early_init_fdt_scan_reserved_mem();
>>>   
>>>   	/* reserve memory for DMA contiguous allocations */
>>> -	dma_contiguous_reserve(arm_dma_limit);
>>> +	dma_contiguous_reserve(zone_dma_limit);
>>>   
>>>   	arm_memblock_steal_permitted = false;
>>>   	memblock_dump_all();
>>> diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
>>> index 88c121ac14b3..7dbd77554273 100644
>>> --- a/arch/arm/mm/mm.h
>>> +++ b/arch/arm/mm/mm.h
>>> @@ -82,14 +82,6 @@ extern __init void add_static_vm_early(struct static_vm *svm);
>>>   
>>>   #endif
>>>   
>>> -#ifdef CONFIG_ZONE_DMA
>>> -extern phys_addr_t arm_dma_limit;
>>> -extern unsigned long arm_dma_pfn_limit;
>>> -#else
>>> -#define arm_dma_limit ((phys_addr_t)~0)
>>> -#define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT)
>>> -#endif
>>> -
>>>   extern phys_addr_t arm_lowmem_limit;
>>>   
>>>   void __init bootmem_init(void);
>>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>>> index b65dffdfb201..7a7501acd763 100644
>>> --- a/arch/arm64/mm/init.c
>>> +++ b/arch/arm64/mm/init.c
>>> @@ -441,7 +441,7 @@ void __init arm64_memblock_init(void)
>>>   	early_init_fdt_scan_reserved_mem();
>>>   
>>>   	if (IS_ENABLED(CONFIG_ZONE_DMA)) {
>>> -		zone_dma_bits = ARM64_ZONE_DMA_BITS;
>>> +		zone_dma_limit = DMA_BIT_MASK(ARM64_ZONE_DMA_BITS);
>>>   		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
>>>   	}
>>>   
>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>> index 9488b63dfc87..337ace03d3f0 100644
>>> --- a/arch/powerpc/mm/mem.c
>>> +++ b/arch/powerpc/mm/mem.c
>>> @@ -223,7 +223,7 @@ static int __init mark_nonram_nosave(void)
>>>    * everything else. GFP_DMA32 page allocations automatically fall back to
>>>    * ZONE_DMA.
>>>    *
>>> - * By using 31-bit unconditionally, we can exploit zone_dma_bits to inform the
>>> + * By using 31-bit unconditionally, we can exploit zone_dma_limit to inform the
>>>    * generic DMA mapping code.  32-bit only devices (if not handled by an IOMMU
>>>    * anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
>>>    * ZONE_DMA.
>>> @@ -257,18 +257,20 @@ void __init paging_init(void)
>>>   	printk(KERN_DEBUG "Memory hole size: %ldMB\n",
>>>   	       (long int)((top_of_ram - total_ram) >> 20));
>>>   
>>> +#ifdef CONFIG_ZONE_DMA
>>>   	/*
>>>   	 * Allow 30-bit DMA for very limited Broadcom wifi chips on many
>>>   	 * powerbooks.
>>>   	 */
>>> -	if (IS_ENABLED(CONFIG_PPC32))
>>> -		zone_dma_bits = 30;
>>> -	else
>>> -		zone_dma_bits = 31;
>>> -
>>> -#ifdef CONFIG_ZONE_DMA
>>> -	max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
>>> -				      1UL << (zone_dma_bits - PAGE_SHIFT));
>>> +	if (IS_ENABLED(CONFIG_PPC32)) {
>>> +		zone_dma_limit = DMA_BIT_MASK(30);
>>> +		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
>>> +					      1UL << (30 - PAGE_SHIFT));
>>> +	} else {
>>> +		zone_dma_limit = DMA_BIT_MASK(31);
>>> +		max_zone_pfns[ZONE_DMA]	= min(max_low_pfn,
>>> +					      1UL << (31 - PAGE_SHIFT));
>>> +	}
>>>   #endif
>>>   	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>>>   #ifdef CONFIG_HIGHMEM
>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>> index f0ce22220565..c403f61cb56b 100644
>>> --- a/arch/s390/mm/init.c
>>> +++ b/arch/s390/mm/init.c
>>> @@ -118,7 +118,7 @@ void __init paging_init(void)
>>>   
>>>   	sparse_memory_present_with_active_regions(MAX_NUMNODES);
>>>   	sparse_init();
>>> -	zone_dma_bits = 31;
>>> +	zone_dma_limit = DMA_BIT_MASK(31);
>>>   	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
>>>   	max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
>>>   	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>>> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
>>> index 24b8684aa21d..20d56d597506 100644
>>> --- a/include/linux/dma-direct.h
>>> +++ b/include/linux/dma-direct.h
>>> @@ -6,7 +6,7 @@
>>>   #include <linux/memblock.h> /* for min_low_pfn */
>>>   #include <linux/mem_encrypt.h>
>>>   
>>> -extern unsigned int zone_dma_bits;
>>> +extern phys_addr_t zone_dma_limit;
>>>   
>>>   #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>>>   #include <asm/dma-direct.h>
>>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>>> index 6af7ae83c4ad..5ea1bed2ba6f 100644
>>> --- a/kernel/dma/direct.c
>>> +++ b/kernel/dma/direct.c
>>> @@ -21,7 +21,7 @@
>>>    * it for entirely different regions. In that case the arch code needs to
>>>    * override the variable below for dma-direct to work properly.
>>>    */
>>> -unsigned int zone_dma_bits __ro_after_init = 24;
>>> +phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
>>>   
>>>   static void report_addr(struct device *dev, dma_addr_t dma_addr, size_t size)
>>>   {
>>> @@ -74,7 +74,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask,
>>>   	 * Note that GFP_DMA32 and GFP_DMA are no ops without the corresponding
>>>   	 * zones.
>>>   	 */
>>> -	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
>>> +	if (*phys_limit <= zone_dma_limit)
>>>   		return GFP_DMA;
>>>   	if (*phys_limit <= DMA_BIT_MASK(32))
>>>   		return GFP_DMA32;
>>> @@ -483,7 +483,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
>>>   	u64 min_mask;
>>>   
>>>   	if (IS_ENABLED(CONFIG_ZONE_DMA))
>>> -		min_mask = DMA_BIT_MASK(zone_dma_bits);
>>> +		min_mask = zone_dma_limit;
>>>   	else
>>>   		min_mask = DMA_BIT_MASK(32);
>>>   
>>>
>>> _______________________________________________
>>> linux-arm-kernel mailing list
>>> linux-arm-kernel@lists.infradead.org
>>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>>>
>>
>> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
>> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>>
> 
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2020-01-08 12:21           ` Robin Murphy
@ 2020-01-08 14:00             ` Peter Ujfalusi
  2020-01-08 15:20               ` Robin Murphy
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-08 14:00 UTC (permalink / raw)
  To: Robin Murphy, Christoph Hellwig
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Robin,

On 08/01/2020 14.21, Robin Murphy wrote:
> On 08/01/2020 8:28 am, Peter Ujfalusi via iommu wrote:
>> Hi Christoph,
>>
>> On 19/12/2019 17.20, Peter Ujfalusi wrote:
>>> Hi Christoph,
>>>
>>> On 19/12/2019 17.02, Christoph Hellwig wrote:
>>>> Hi Peter,
>>>>
>>>> can you try the patch below (it will need to be split into two):
>>>
>>> Thank you!
>>>
>>> Unfortunately it does not help:
>>> [    0.596208] edma: probe of 2700000.edma failed with error -5
>>> [    0.596626] edma: probe of 2728000.edma failed with error -5
>>> ...
>>> [    2.108602] sdhci-omap 23000000.mmc: Got CD GPIO
>>> [    2.113899] mmc0: Failed to set 32-bit DMA mask.
>>> [    2.118592] mmc0: No suitable DMA available - falling back to PIO
>>> [    2.159038] mmc0: SDHCI controller on 23000000.mmc [23000000.mmc]
>>> using PIO
>>> [    2.167531] mmc1: Failed to set 32-bit DMA mask.
>>> [    2.172192] mmc1: No suitable DMA available - falling back to PIO
>>> [    2.213841] mmc1: SDHCI controller on 23100000.mmc [23100000.mmc]
>>> using PIO
>>
>> Do you have an idea how to fix this in a proper way?
>>
>> IMHO when drivers are setting the dma_mask and coherent_mask the
>> dma_pfn_offset should not be applied to the mask at all.
>>
>> If I understand it correctly for EDMA as example:
>>
>> I set dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
>> since it can only address memory in this range.
>>
>> It does not matter if dma_pfn_offset is 0 or not 0 (like in k2g, where
>> it is 0x780000) the EDMA still can only address within 32 bits.
>>
>> The dma_pfn_offset will tell us that the memory location's physical
>> address is seen by the DMA at (phys_pfn - dma_pfn_offset) -> dma_pfn.
>>
>> The dma_mask should be checked against the dma_pfn.
> 
> That's exactly what dma_direct_supported() does, though.

Yes, this is my understanding as well.

> What it doesn't
> do, AFAICS, is account for weird cases where the DMA zone *starts* way,
> way above 32 physical address bits because of an implicit assumption
> that *all* devices have a dma_pfn_offset equal to min_low_pfn.

The problem - I think - is that the DMA_BIT_MASK(32) from
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) is treated as a physical
address along the call path, so the dma_pfn_offset is applied to it and
the check fails, reporting that DMA_BIT_MASK(32) cannot be supported.

Fyi, this is what I have gathered on k2g via prints:

dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

Prints inside dma_direct_supported():
sdhci-omap 23000000.mmc: max_pfn: 880000
sdhci-omap 23000000.mmc: min_mask #1: ffffff
sdhci-omap 23000000.mmc: min_mask #2: ffffff
sdhci-omap 23000000.mmc: dev->dma_pfn_offset: 780000
sdhci-omap 23000000.mmc: PAGE_SHIFT: 12
sdhci-omap 23000000.mmc: __phys_to_dma(dev, min_mask): ff880ffffff
sdhci-omap 23000000.mmc: mask: ffffffff

Print in dma_supported() after returning from dma_direct_supported():
sdhci-omap 23000000.mmc: dma_is_direct, ret = 0

sdhci-omap 23100000.mmc: DMA is not supported for the device

keystone-k2g has this in its soc0 node:
dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;

DDR starts at 0x8 0000 0000 (8G) and 2G is aliased at 0x8000 0000.

This gives the 0x780000 for dma_pfn_offset for all devices underneath it.

The DMA_BIT_MASK(24) is passed to __phys_to_dma() because
CONFIG_ZONE_DMA is enabled.

SWIOTLB is enabled in kconfig.

> I'm not sure how possible it is to cope with that generically.

Me neither, but k2g is failing since v5.3-rc3 and the bisect pointed to
this commit.

> 
> Robin.
> 
>> We can not 'move' the dma_mask with dma_pfn_offset when setting the mask
>> since it is not correct. The DMA can access in 32 bits range and we have
>> the peripherals under 0x8000 0000.
>>
>> I might be missing something, but it looks to me that the way we set the
>> dma_mask and the coherent_mask is the place where this can be fixed.
>>
>> Best regards,
>> - Péter
>>
>>>
>>> - Péter
>>>
>>>
>>>> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
>>>> index e822af0d9219..30b9c6786ce3 100644
>>>> --- a/arch/arm/mm/dma-mapping.c
>>>> +++ b/arch/arm/mm/dma-mapping.c
>>>> @@ -221,7 +221,8 @@ EXPORT_SYMBOL(arm_coherent_dma_ops);
>>>>     static int __dma_supported(struct device *dev, u64 mask, bool warn)
>>>>   {
>>>> -    unsigned long max_dma_pfn = min(max_pfn, arm_dma_pfn_limit);
>>>> +    unsigned long max_dma_pfn =
>>>> +        min_t(unsigned long, max_pfn, zone_dma_limit >> PAGE_SHIFT);
>>>>         /*
>>>>        * Translate the device's DMA mask to a PFN limit.  This
>>>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>>>> index 3ef204137e73..dd0e169a1bb1 100644
>>>> --- a/arch/arm/mm/init.c
>>>> +++ b/arch/arm/mm/init.c
>>>> @@ -19,6 +19,7 @@
>>>>   #include <linux/gfp.h>
>>>>   #include <linux/memblock.h>
>>>>   #include <linux/dma-contiguous.h>
>>>> +#include <linux/dma-direct.h>
>>>>   #include <linux/sizes.h>
>>>>   #include <linux/stop_machine.h>
>>>>   #include <linux/swiotlb.h>
>>>> @@ -84,15 +85,6 @@ static void __init find_limits(unsigned long
>>>> *min, unsigned long *max_low,
>>>>   phys_addr_t arm_dma_zone_size __read_mostly;
>>>>   EXPORT_SYMBOL(arm_dma_zone_size);
>>>>   -/*
>>>> - * The DMA mask corresponding to the maximum bus address allocatable
>>>> - * using GFP_DMA.  The default here places no restriction on DMA
>>>> - * allocations.  This must be the smallest DMA mask in the system,
>>>> - * so a successful GFP_DMA allocation will always satisfy this.
>>>> - */
>>>> -phys_addr_t arm_dma_limit;
>>>> -unsigned long arm_dma_pfn_limit;
>>>> -
>>>>   static void __init arm_adjust_dma_zone(unsigned long *size,
>>>> unsigned long *hole,
>>>>       unsigned long dma_size)
>>>>   {
>>>> @@ -108,14 +100,14 @@ static void __init
>>>> arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
>>>>     void __init setup_dma_zone(const struct machine_desc *mdesc)
>>>>   {
>>>> -#ifdef CONFIG_ZONE_DMA
>>>> -    if (mdesc->dma_zone_size) {
>>>> +    if (!IS_ENABLED(CONFIG_ZONE_DMA)) {
>>>> +        zone_dma_limit = ((phys_addr_t)~0);
>>>> +    } else if (mdesc->dma_zone_size) {
>>>>           arm_dma_zone_size = mdesc->dma_zone_size;
>>>> -        arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
>>>> -    } else
>>>> -        arm_dma_limit = 0xffffffff;
>>>> -    arm_dma_pfn_limit = arm_dma_limit >> PAGE_SHIFT;
>>>> -#endif
>>>> +        zone_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
>>>> +    } else {
>>>> +        zone_dma_limit = 0xffffffff;
>>>> +    }
>>>>   }
>>>>     static void __init zone_sizes_init(unsigned long min, unsigned
>>>> long max_low,
>>>> @@ -279,7 +271,7 @@ void __init arm_memblock_init(const struct
>>>> machine_desc *mdesc)
>>>>       early_init_fdt_scan_reserved_mem();
>>>>         /* reserve memory for DMA contiguous allocations */
>>>> -    dma_contiguous_reserve(arm_dma_limit);
>>>> +    dma_contiguous_reserve(zone_dma_limit);
>>>>         arm_memblock_steal_permitted = false;
>>>>       memblock_dump_all();
>>>> diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
>>>> index 88c121ac14b3..7dbd77554273 100644
>>>> --- a/arch/arm/mm/mm.h
>>>> +++ b/arch/arm/mm/mm.h
>>>> @@ -82,14 +82,6 @@ extern __init void add_static_vm_early(struct
>>>> static_vm *svm);
>>>>     #endif
>>>>   -#ifdef CONFIG_ZONE_DMA
>>>> -extern phys_addr_t arm_dma_limit;
>>>> -extern unsigned long arm_dma_pfn_limit;
>>>> -#else
>>>> -#define arm_dma_limit ((phys_addr_t)~0)
>>>> -#define arm_dma_pfn_limit (~0ul >> PAGE_SHIFT)
>>>> -#endif
>>>> -
>>>>   extern phys_addr_t arm_lowmem_limit;
>>>>     void __init bootmem_init(void);
>>>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>>>> index b65dffdfb201..7a7501acd763 100644
>>>> --- a/arch/arm64/mm/init.c
>>>> +++ b/arch/arm64/mm/init.c
>>>> @@ -441,7 +441,7 @@ void __init arm64_memblock_init(void)
>>>>       early_init_fdt_scan_reserved_mem();
>>>>         if (IS_ENABLED(CONFIG_ZONE_DMA)) {
>>>> -        zone_dma_bits = ARM64_ZONE_DMA_BITS;
>>>> +        zone_dma_limit = DMA_BIT_MASK(ARM64_ZONE_DMA_BITS);
>>>>           arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
>>>>       }
>>>>   diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>>> index 9488b63dfc87..337ace03d3f0 100644
>>>> --- a/arch/powerpc/mm/mem.c
>>>> +++ b/arch/powerpc/mm/mem.c
>>>> @@ -223,7 +223,7 @@ static int __init mark_nonram_nosave(void)
>>>>    * everything else. GFP_DMA32 page allocations automatically fall
>>>> back to
>>>>    * ZONE_DMA.
>>>>    *
>>>> - * By using 31-bit unconditionally, we can exploit zone_dma_bits to
>>>> inform the
>>>> + * By using 31-bit unconditionally, we can exploit zone_dma_limit
>>>> to inform the
>>>>    * generic DMA mapping code.  32-bit only devices (if not handled
>>>> by an IOMMU
>>>>    * anyway) will take a first dip into ZONE_NORMAL and get
>>>> otherwise served by
>>>>    * ZONE_DMA.
>>>> @@ -257,18 +257,20 @@ void __init paging_init(void)
>>>>       printk(KERN_DEBUG "Memory hole size: %ldMB\n",
>>>>              (long int)((top_of_ram - total_ram) >> 20));
>>>>   +#ifdef CONFIG_ZONE_DMA
>>>>       /*
>>>>        * Allow 30-bit DMA for very limited Broadcom wifi chips on many
>>>>        * powerbooks.
>>>>        */
>>>> -    if (IS_ENABLED(CONFIG_PPC32))
>>>> -        zone_dma_bits = 30;
>>>> -    else
>>>> -        zone_dma_bits = 31;
>>>> -
>>>> -#ifdef CONFIG_ZONE_DMA
>>>> -    max_zone_pfns[ZONE_DMA]    = min(max_low_pfn,
>>>> -                      1UL << (zone_dma_bits - PAGE_SHIFT));
>>>> +    if (IS_ENABLED(CONFIG_PPC32)) {
>>>> +        zone_dma_limit = DMA_BIT_MASK(30);
>>>> +        max_zone_pfns[ZONE_DMA]    = min(max_low_pfn,
>>>> +                          1UL << (30 - PAGE_SHIFT));
>>>> +    } else {
>>>> +        zone_dma_limit = DMA_BIT_MASK(31);
>>>> +        max_zone_pfns[ZONE_DMA]    = min(max_low_pfn,
>>>> +                          1UL << (31 - PAGE_SHIFT));
>>>> +    }
>>>>   #endif
>>>>       max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>>>>   #ifdef CONFIG_HIGHMEM
>>>> diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
>>>> index f0ce22220565..c403f61cb56b 100644
>>>> --- a/arch/s390/mm/init.c
>>>> +++ b/arch/s390/mm/init.c
>>>> @@ -118,7 +118,7 @@ void __init paging_init(void)
>>>>         sparse_memory_present_with_active_regions(MAX_NUMNODES);
>>>>       sparse_init();
>>>> -    zone_dma_bits = 31;
>>>> +    zone_dma_limit = DMA_BIT_MASK(31);
>>>>       memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
>>>>       max_zone_pfns[ZONE_DMA] = PFN_DOWN(MAX_DMA_ADDRESS);
>>>>       max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
>>>> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
>>>> index 24b8684aa21d..20d56d597506 100644
>>>> --- a/include/linux/dma-direct.h
>>>> +++ b/include/linux/dma-direct.h
>>>> @@ -6,7 +6,7 @@
>>>>   #include <linux/memblock.h> /* for min_low_pfn */
>>>>   #include <linux/mem_encrypt.h>
>>>>   -extern unsigned int zone_dma_bits;
>>>> +extern phys_addr_t zone_dma_limit;
>>>>     #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>>>>   #include <asm/dma-direct.h>
>>>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>>>> index 6af7ae83c4ad..5ea1bed2ba6f 100644
>>>> --- a/kernel/dma/direct.c
>>>> +++ b/kernel/dma/direct.c
>>>> @@ -21,7 +21,7 @@
>>>>    * it for entirely different regions. In that case the arch code
>>>> needs to
>>>>    * override the variable below for dma-direct to work properly.
>>>>    */
>>>> -unsigned int zone_dma_bits __ro_after_init = 24;
>>>> +phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
>>>>     static void report_addr(struct device *dev, dma_addr_t dma_addr,
>>>> size_t size)
>>>>   {
>>>> @@ -74,7 +74,7 @@ static gfp_t __dma_direct_optimal_gfp_mask(struct
>>>> device *dev, u64 dma_mask,
>>>>        * Note that GFP_DMA32 and GFP_DMA are no ops without the
>>>> corresponding
>>>>        * zones.
>>>>        */
>>>> -    if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
>>>> +    if (*phys_limit <= zone_dma_limit)
>>>>           return GFP_DMA;
>>>>       if (*phys_limit <= DMA_BIT_MASK(32))
>>>>           return GFP_DMA32;
>>>> @@ -483,7 +483,7 @@ int dma_direct_supported(struct device *dev, u64
>>>> mask)
>>>>       u64 min_mask;
>>>>         if (IS_ENABLED(CONFIG_ZONE_DMA))
>>>> -        min_mask = DMA_BIT_MASK(zone_dma_bits);
>>>> +        min_mask = zone_dma_limit;
>>>>       else
>>>>           min_mask = DMA_BIT_MASK(32);
>>>>  
>>>
>>

- Péter


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2020-01-08 14:00             ` Peter Ujfalusi
@ 2020-01-08 15:20               ` Robin Murphy
  2020-01-09 14:49                 ` Christoph Hellwig
  0 siblings, 1 reply; 30+ messages in thread
From: Robin Murphy @ 2020-01-08 15:20 UTC (permalink / raw)
  To: Peter Ujfalusi, Christoph Hellwig
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

On 08/01/2020 2:00 pm, Peter Ujfalusi wrote:
> Robin,
> 
> On 08/01/2020 14.21, Robin Murphy wrote:
>> On 08/01/2020 8:28 am, Peter Ujfalusi via iommu wrote:
>>> Hi Christoph,
>>>
>>> On 19/12/2019 17.20, Peter Ujfalusi wrote:
>>>> Hi Christoph,
>>>>
>>>> On 19/12/2019 17.02, Christoph Hellwig wrote:
>>>>> Hi Peter,
>>>>>
>>>>> can you try the patch below (it will need to be split into two):
>>>>
>>>> Thank you!
>>>>
>>>> Unfortunately it does not help:
>>>> [    0.596208] edma: probe of 2700000.edma failed with error -5
>>>> [    0.596626] edma: probe of 2728000.edma failed with error -5
>>>> ...
>>>> [    2.108602] sdhci-omap 23000000.mmc: Got CD GPIO
>>>> [    2.113899] mmc0: Failed to set 32-bit DMA mask.
>>>> [    2.118592] mmc0: No suitable DMA available - falling back to PIO
>>>> [    2.159038] mmc0: SDHCI controller on 23000000.mmc [23000000.mmc]
>>>> using PIO
>>>> [    2.167531] mmc1: Failed to set 32-bit DMA mask.
>>>> [    2.172192] mmc1: No suitable DMA available - falling back to PIO
>>>> [    2.213841] mmc1: SDHCI controller on 23100000.mmc [23100000.mmc]
>>>> using PIO
>>>
>>> Do you have an idea how to fix this in a proper way?
>>>
>>> IMHO when drivers are setting the dma_mask and coherent_mask the
>>> dma_pfn_offset should not be applied to the mask at all.
>>>
>>> If I understand it correctly for EDMA as example:
>>>
>>> I set dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
>>> since it can only address memory in this range.
>>>
>>> It does not matter if dma_pfn_offset is 0 or not 0 (like in k2g, where
>>> it is 0x780000) the EDMA still can only address within 32 bits.
>>>
>>> The dma_pfn_offset will tell us that the memory location's physical
>>> address is seen by the DMA at (phys_pfn - dma_pfn_offset) -> dma_pfn.
>>>
>>> The dma_mask should be checked against the dma_pfn.
>>
>> That's exactly what dma_direct_supported() does, though.
> 
> Yes, this is my understanding as well.
> 
>> What it doesn't
>> do, AFAICS, is account for weird cases where the DMA zone *starts* way,
>> way above 32 physical address bits because of an implicit assumption
>> that *all* devices have a dma_pfn_offset equal to min_low_pfn.
> 
> The problem - I think - is that the DMA_BIT_MASK(32) from
> dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) is treated as a physical
> address along the call path, so the dma_pfn_offset is applied to it and
> the check fails, reporting that DMA_BIT_MASK(32) cannot be supported.

But that's the thing - in isolation, that is entirely correct. 
Considering ZONE_DMA32 for simplicity, in general the zone is expected 
to cover the physical address range 0x0000_0000 - 0xffff_ffff (because 
DMA offsets are relatively rare), and a device with a dma_pfn_offset of 
more than (0x1_0000_0000 >> PAGE_SHIFT) *cannot* support that range with 
any mask, because the DMA address itself would have to be negative.

The problem is that platforms with esoteric memory maps have no right 
thing to do. If the base of RAM is at at 0x1_0000_0000 or higher, the 
"correct" ZONE_DMA32 would be empty while ZONE_NORMAL above it would 
not, and last time I looked that makes the page allocator break badly. 
So the standard bodge on such platforms is to make ZONE_DMA32 cover not 
the first 4GB of *PA space*, but the first 4GB of *RAM*, wherever that 
happens to be. That then brings different problems - now the page 
allocator is happy and successfully returns GFP_DMA32 allocations from 
the range 0x8_0000_0000 - 0x8_ffff_ffff that are utterly useless to 
32-bit devices with zero dma_pfn_offset - see the AMD Seattle SoC for 
the prime example of that. If on the other hand all devices are 
guaranteed to have a dma_pfn_offset that puts the base of RAM at DMA 
address 0 then GFP_DMA32 allocations do end up working as expected, but 
now the original assumption of where ZONE_DMA32 actually is is broken, 
so generic code unaware of the platform/architecture-specific bodge will 
be misled - that's the case you're running into.

Having thought this far, if there's a non-hacky way to reach in and grab 
ZONE_DMA{32} such that dma_direct_supported() could use zone_end_pfn() 
instead of trying to assume either way, that might be the most robust 
general solution.

Robin.

> Fyi, this is what I have gathered on k2g via prints:
> 
> dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> 
> Prints inside dma_direct_supported():
> sdhci-omap 23000000.mmc: max_pfn: 880000
> sdhci-omap 23000000.mmc: min_mask #1: ffffff
> sdhci-omap 23000000.mmc: min_mask #2: ffffff
> sdhci-omap 23000000.mmc: dev->dma_pfn_offset: 780000
> sdhci-omap 23000000.mmc: PAGE_SHIFT: 12
> sdhci-omap 23000000.mmc: __phys_to_dma(dev, min_mask): ff880ffffff
> sdhci-omap 23000000.mmc: mask: ffffffff
> 
> Print in dma_supported() after returning from dma_direct_supported():
> sdhci-omap 23000000.mmc: dma_is_direct, ret = 0
> 
> sdhci-omap 23100000.mmc: DMA is not supported for the device
> 
> keystone-k2g has this in its soc0 node:
> dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;
> 
> DDR starts at 0x8 0000 0000 (8G) and 2G is aliased at 0x8000 0000.
> 
> This gives the 0x780000 for dma_pfn_offset for all devices underneath it.
> 
> The DMA_BIT_MASK(24) is passed to __phys_to_dma() because
> CONFIG_ZONE_DMA is enabled.
> 
> SWIOTLB is enabled in kconfig.
> 
>> I'm not sure how possible it is to cope with that generically.
> 
> Me neither, but k2g is failing since v5.3-rc3 and the bisect pointed to
> this commit.
> 
>>
>> Robin.
>>
>>> We can not 'move' the dma_mask with dma_pfn_offset when setting the mask
>>> since it is not correct. The DMA can access in 32 bits range and we have
>>> the peripherals under 0x8000 0000.
>>>
>>> I might be missing something, but it looks to me that the way we set the
>>> dma_mask and the coherent_mask is the place where this can be fixed.
>>>
>>> Best regards,
>>> - Péter
>>>
>>>>
>>>> - Péter
>>>>
>>>> [quoted patch and mailing-list footers snipped; identical to the copy earlier in the thread]
> 
> - Péter
> 
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2020-01-08 15:20               ` Robin Murphy
@ 2020-01-09 14:49                 ` Christoph Hellwig
  2020-01-14 10:43                   ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2020-01-09 14:49 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, Peter Ujfalusi,
	iommu, Christoph Hellwig, linux-arm-kernel, Roger Quadros

On Wed, Jan 08, 2020 at 03:20:07PM +0000, Robin Murphy wrote:
>> The problem - I think - is that the DMA_BIT_MASK(32) from
>> dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) is treated as physical
>> address along the call path so the dma_pfn_offset is applied to it and
>> the check will fail, saying that DMA_BIT_MASK(32) can not be supported.
>
> But that's the thing - in isolation, that is entirely correct. Considering 
> ZONE_DMA32 for simplicity, in general the zone is expected to cover the 
> physical address range 0x0000_0000 - 0xffff_ffff (because DMA offsets are 
> relatively rare), and a device with a dma_pfn_offset of more than 
> (0x1_0000_0000 >> PAGE_SHIFT) *cannot* support that range with any mask, 
> because the DMA address itself would have to be negative.

Note that ZONE_DMA32 is irrelevant in this particular case, as we are
talking about arm32.  But with ZONE_DMA instead this roughly makes sense.

> The problem is that platforms with esoteric memory maps have no right thing 
> to do. If the base of RAM is at 0x1_0000_0000 or higher, the "correct" 
> ZONE_DMA32 would be empty while ZONE_NORMAL above it would not, and last 
> time I looked that makes the page allocator break badly. So the standard 
> bodge on such platforms is to make ZONE_DMA32 cover not the first 4GB of 
> *PA space*, but the first 4GB of *RAM*, wherever that happens to be. That 
> then brings different problems - now the page allocator is happy and 
> successfully returns GFP_DMA32 allocations from the range 0x8_0000_0000 - 
> 0x8_ffff_ffff that are utterly useless to 32-bit devices with zero 
> dma_pfn_offset - see the AMD Seattle SoC for the prime example of that. If 
> on the other hand all devices are guaranteed to have a dma_pfn_offset that 
> puts the base of RAM at DMA address 0 then GFP_DMA32 allocations do end up 
> working as expected, but now the original assumption of where ZONE_DMA32 
> actually is is broken, so generic code unaware of the 
> platform/architecture-specific bodge will be misled - that's the case 
> you're running into.
>
> Having thought this far, if there's a non-hacky way to reach in and grab 
> ZONE_DMA{32} such that dma_direct_supported() could use zone_end_pfn() 
> instead of trying to assume either way, that might be the most robust 
> general solution.

zone_dma_bits is our somewhat ugly way to try to poke into this
information, although the way it is done right now sucks pretty badly.

The patch I sent to Peter in December was trying to convey that
information in a way similar to what the arm32 legacy dma code does, but
it didn't work, so I'll need to find some time to sit down and figure out
why.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs
  2020-01-09 14:49                 ` Christoph Hellwig
@ 2020-01-14 10:43                   ` Peter Ujfalusi
  2020-01-14 16:43                     ` [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-14 10:43 UTC (permalink / raw)
  To: Christoph Hellwig, Robin Murphy
  Cc: Vignesh Raghavendra, Konrad Rzeszutek Wilk,
	Russell King - ARM Linux admin, linux-kernel, iommu,
	linux-arm-kernel, Roger Quadros

Christoph, Robin,

On 09/01/2020 16.49, Christoph Hellwig wrote:
> On Wed, Jan 08, 2020 at 03:20:07PM +0000, Robin Murphy wrote:
>>> The problem - I think - is that the DMA_BIT_MASK(32) from
>>> dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)) is treated as physical
>>> address along the call path so the dma_pfn_offset is applied to it and
>>> the check will fail, saying that DMA_BIT_MASK(32) can not be supported.
>>
>> But that's the thing - in isolation, that is entirely correct. Considering 
>> ZONE_DMA32 for simplicity, in general the zone is expected to cover the 
>> physical address range 0x0000_0000 - 0xffff_ffff (because DMA offsets are 
>> relatively rare), and a device with a dma_pfn_offset of more than 
>> (0x1_0000_0000 >> PAGE_SHIFT) *cannot* support that range with any mask, 
>> because the DMA address itself would have to be negative.
> 
> Note that ZONE_DMA32 is irrelevant in this particular case, as we are
> talking about arm32.  But with ZONE_DMA instead this roughly makes sense.
> 
>> The problem is that platforms with esoteric memory maps have no right thing 
>> to do. If the base of RAM is at 0x1_0000_0000 or higher, the "correct" 
>> ZONE_DMA32 would be empty while ZONE_NORMAL above it would not, and last 
>> time I looked that makes the page allocator break badly. So the standard 
>> bodge on such platforms is to make ZONE_DMA32 cover not the first 4GB of 
>> *PA space*, but the first 4GB of *RAM*, wherever that happens to be. That 
>> then brings different problems - now the page allocator is happy and 
>> successfully returns GFP_DMA32 allocations from the range 0x8_0000_0000 - 
>> 0x8_ffff_ffff that are utterly useless to 32-bit devices with zero 
>> dma_pfn_offset - see the AMD Seattle SoC for the prime example of that. If 
>> on the other hand all devices are guaranteed to have a dma_pfn_offset that 
>> puts the base of RAM at DMA address 0 then GFP_DMA32 allocations do end up 
>> working as expected, but now the original assumption of where ZONE_DMA32 
>> actually is is broken, so generic code unaware of the 
>> platform/architecture-specific bodge will be misled - that's the case 
>> you're running into.
>>
>> Having thought this far, if there's a non-hacky way to reach in and grab 
>> ZONE_DMA{32} such that dma_direct_supported() could use zone_end_pfn() 
>> instead of trying to assume either way, that might be the most robust 
>> general solution.
> 
> zone_dma_bits is our somewhat ugly way to try to poke into this
> information, although the way it is done right now sucks pretty badly.

In my view the handling of dma_pfn_offset is just incorrect, as it is applied to _any_ address.
According to the DT specification, dma-ranges is:
"Value type: <empty> or <prop-encoded-array> encoded as an arbitrary
number of (child-bus-address, parent-bus-address, length) triplets."

Yet in drivers/of/ we only take the _first_ triplet and ignore the rest.

The dma_pfn_offset should only be applied to a paddr in the range
parent-bus-address to parent-bus-address+length;
for anything outside of this the dma_pfn_offset is 0.

Conversion back from dma to paddr should consider the offset in the range
child-bus-address to child-bus-address+length,
and 0 for everything outside of this.

To correctly handle the dma-ranges we would need something like this in device.h:
+struct dma_ranges {
+       u64 paddr;
+       u64 dma_addr;
+       u64 size;
+	unsigned long pfn_offset;
+};
+

struct device {
	...
-	unsigned long	dma_pfn_offset;
+       struct dma_ranges *dma_ranges;
	int dma_ranges_cnt;
	...
};

Then when we currently use dma_pfn_offset we would have:

unsigned long __phys_to_dma_pfn_offset(struct device *dev, phys_addr_t paddr)
{
	int i;

	if (!dev->dma_ranges)
		return 0;

	for (i = 0; i < dev->dma_ranges_cnt; i++) {
		struct dma_ranges *range = &dev->dma_ranges[i];
		if (paddr >= range->paddr &&
		    paddr <= (range->paddr + range->size))
			return range->pfn_offset;
	}

	return 0;
}

unsigned long __dma_to_phys_pfn_offset(struct device *dev, dma_addr_t dma_addr)
{
	int i;

	for (i = 0; i < dev->dma_ranges_cnt; i++) {
		struct dma_ranges *range = &dev->dma_ranges[i];
		if (dma_addr >= range->dma_addr &&
		    dma_addr <= (range->dma_addr + range->size))
			return range->pfn_offset;
	}

	return 0;
}

For existing drivers/archs setting dma_pfn_offset we can:
if (dev->dma_ranges_cnt == 1 && dev->dma_ranges[0].pfn_offset && !dev->dma_ranges[0].size)
	return dev->dma_ranges[0].pfn_offset;

and they would have to set up one struct dma_ranges.

One issue with this is that the struct dma_ranges would need to be allocated for
all devices, so some clever way would need to be invented to use pointers
as much as we can.

> The patch I sent to Peter in December was trying to convey that
> information in a way similar to what the arm32 legacy dma code does, but
> it didn't work, so I'll need to find some time to sit down and figure out
> why.

But, until we get a proper solution, can we get the following patch in to fix the regression?
Basically we are falling back to what works (and was used before commit ad3c7b18c5b362be5dbd0f2c0bcf1fd5fd659315).

commit 8c3c36b377c139603a9dff5c58dac59865f1ac0f
Author: Peter Ujfalusi <peter.ujfalusi@ti.com>
Date:   Thu Dec 19 15:07:25 2019 +0200

    arm: mm: dma-mapping: Fix dma_supported() when dev->dma_pfn_offset is not 0
    
    When LPAE is enabled, we can only use direct mapping if the dma_pfn_offset
    is 0; otherwise valid dma_masks will be rejected and DMA support will be
    denied for peripherals and DMA drivers.
    
    Cc: Stable <stable@vger.kernel.org> #v5.3+
    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 9414d72f664b..e07ec1ea3865 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1100,15 +1100,6 @@ int arm_dma_supported(struct device *dev, u64 mask)
 
 static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
 {
-	/*
-	 * When CONFIG_ARM_LPAE is set, physical address can extend above
-	 * 32-bits, which then can't be addressed by devices that only support
-	 * 32-bit DMA.
-	 * Use the generic dma-direct / swiotlb ops code in that case, as that
-	 * handles bounce buffering for us.
-	 */
-	if (IS_ENABLED(CONFIG_ARM_LPAE))
-		return NULL;
 	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
 }
 
@@ -2313,6 +2304,15 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 
 	if (arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
 		dma_ops = arm_get_iommu_dma_map_ops(coherent);
+	else if (IS_ENABLED(CONFIG_ARM_LPAE) && !dev->dma_pfn_offset)
+		/*
+		 * When CONFIG_ARM_LPAE is set, physical address can extend
+		 * above 32-bits, which then can't be addressed by devices
+		 * that only support 32-bit DMA.
+		 * Use the generic dma-direct / swiotlb ops code in that case,
+		 * as that handles bounce buffering for us.
+		 */
+		dma_ops = NULL;
 	else
 		dma_ops = arm_get_dma_map_ops(coherent);

--- 
- Péter


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-14 10:43                   ` Peter Ujfalusi
@ 2020-01-14 16:43                     ` Peter Ujfalusi
  2020-01-14 18:19                       ` Robin Murphy
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-14 16:43 UTC (permalink / raw)
  To: hch, robin.murphy
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	linux-arm-kernel, rogerq

The dma_pfn_offset should only be applied to an address which is within the
dma-ranges range. Any address outside should have offset as 0.

This is a proof of concept patch which works on k2g where we have
dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;
for the SoC.

Without this patch everything which tries to set DMA_BIT_MASK(32) or less
fails -> DMA and peripherals with built in DMA (SD with ADMA) will not
probe or fall back to PIO mode.

With this patch EDMA probes, SD's ADMA is working.
Audio and dma-test is working just fine with EDMA, mmc accesses with ADMA
also operational.

The patch does not try to address the incomplete handling of dma-ranges
from DT, and it does not fix/update arch code or drivers which use
dma_pfn_offset.
Nor does it provide fallback support for a kernel setting only dma_pfn_offset
to an arbitrary number without paddr/dma_addr/size.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
Hi Christoph, Robin,

I know it is a bit more complicated, but with this patch k2g is working fine...

I wanted to test the concept I was describing and a patch speaks better than
words.

Kind regards,
Peter

 arch/arm/include/asm/dma-mapping.h | 25 ++++++++++++++++++++--
 drivers/of/device.c                |  7 ++++++-
 include/linux/device.h             |  8 ++++++++
 include/linux/dma-direct.h         | 33 ++++++++++++++++++++++++++++--
 kernel/dma/coherent.c              |  9 +++++---
 5 files changed, 74 insertions(+), 8 deletions(-)

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index bdd80ddbca34..9bff6ad2d8c8 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -33,10 +33,31 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
  * addresses. They must not be used by drivers.
  */
 #ifndef __arch_pfn_to_dma
+
+static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
+						     phys_addr_t paddr)
+{
+	if (paddr >= dev->dma_ranges.paddr &&
+	    paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
+		return dev->dma_ranges.pfn_offset;
+
+	return 0;
+}
+
+static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
+						     dma_addr_t dma_addr)
+{
+	if (dma_addr >= dev->dma_ranges.dma_addr &&
+	    dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
+		return dev->dma_ranges.pfn_offset;
+
+	return 0;
+}
+
 static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
 {
 	if (dev)
-		pfn -= dev->dma_pfn_offset;
+		pfn -= __phys_to_dma_pfn_offset(dev, __pfn_to_phys(pfn));
 	return (dma_addr_t)__pfn_to_bus(pfn);
 }
 
@@ -45,7 +66,7 @@ static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
 	unsigned long pfn = __bus_to_pfn(addr);
 
 	if (dev)
-		pfn += dev->dma_pfn_offset;
+		pfn += __dma_to_phys_pfn_offset(dev, addr);
 
 	return pfn;
 }
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 27203bfd0b22..07a8cc1a7d7f 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -105,7 +105,7 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
 		if (!force_dma)
 			return ret == -ENODEV ? 0 : ret;
 
-		dma_addr = offset = 0;
+		dma_addr = offset = paddr = 0;
 	} else {
 		offset = PFN_DOWN(paddr - dma_addr);
 
@@ -144,6 +144,11 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
 
 	dev->dma_pfn_offset = offset;
 
+	dev->dma_ranges.paddr = paddr;
+	dev->dma_ranges.dma_addr = dma_addr;
+	dev->dma_ranges.size = size;
+	dev->dma_ranges.pfn_offset = offset;
+
 	/*
 	 * Limit coherent and dma mask based on size and default mask
 	 * set by the driver.
diff --git a/include/linux/device.h b/include/linux/device.h
index ce6db68c3f29..57006b51a989 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -293,6 +293,13 @@ struct device_dma_parameters {
 	unsigned long segment_boundary_mask;
 };
 
+struct dma_ranges {
+	u64 paddr;
+	u64 dma_addr;
+	u64 size;
+	unsigned long pfn_offset;
+};
+
 /**
  * struct device_connection - Device Connection Descriptor
  * @fwnode: The device node of the connected device
@@ -581,6 +588,7 @@ struct device {
 					     allocations such descriptors. */
 	u64		bus_dma_limit;	/* upstream dma constraint */
 	unsigned long	dma_pfn_offset;
+	struct dma_ranges dma_ranges;
 
 	struct device_dma_parameters *dma_parms;
 
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 24b8684aa21d..4a46a15945ea 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -11,18 +11,47 @@ extern unsigned int zone_dma_bits;
 #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
 #include <asm/dma-direct.h>
 #else
+
+static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
+						     phys_addr_t paddr)
+{
+	if (!dev)
+		return 0;
+
+	if (paddr >= dev->dma_ranges.paddr &&
+	    paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
+		return dev->dma_ranges.pfn_offset;
+
+	return 0;
+}
+
+static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
+						     dma_addr_t dma_addr)
+{
+	if (!dev)
+		return 0;
+
+	if (dma_addr >= dev->dma_ranges.dma_addr &&
+	    dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
+		return dev->dma_ranges.pfn_offset;
+
+	return 0;
+}
+
 static inline dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
 	dma_addr_t dev_addr = (dma_addr_t)paddr;
+	unsigned long offset = __phys_to_dma_pfn_offset(dev, paddr);
 
-	return dev_addr - ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
+	return dev_addr - ((dma_addr_t)offset << PAGE_SHIFT);
 }
 
 static inline phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dev_addr)
 {
 	phys_addr_t paddr = (phys_addr_t)dev_addr;
+	unsigned long offset = __dma_to_phys_pfn_offset(dev, dev_addr);
 
-	return paddr + ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
+	return paddr + ((phys_addr_t)offset << PAGE_SHIFT);
 }
 #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */
 
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 551b0eb7028a..7a68fd09f5d0 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -31,10 +31,13 @@ static inline struct dma_coherent_mem *dev_get_coherent_memory(struct device *de
 static inline dma_addr_t dma_get_device_base(struct device *dev,
 					     struct dma_coherent_mem * mem)
 {
-	if (mem->use_dev_dma_pfn_offset)
-		return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
-	else
+	if (mem->use_dev_dma_pfn_offset) {
+		unsigned long offset = __phys_to_dma_pfn_offset(dev,
+						__pfn_to_phys(mem->pfn_base));
+		return (mem->pfn_base - offset) << PAGE_SHIFT;
+	} else {
 		return mem->device_base;
+	}
 }
 
 static int dma_init_coherent_memory(phys_addr_t phys_addr,
-- 
Peter


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-14 16:43                     ` [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid Peter Ujfalusi
@ 2020-01-14 18:19                       ` Robin Murphy
  2020-01-15 11:50                         ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Robin Murphy @ 2020-01-14 18:19 UTC (permalink / raw)
  To: Peter Ujfalusi, hch
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	linux-arm-kernel, rogerq

On 14/01/2020 4:43 pm, Peter Ujfalusi wrote:
> The dma_pfn_offset should only be applied to an address which is within the
> dma-ranges range. Any address outside should have offset as 0.

No, that's wrong. If a non-empty dma-ranges is present, then addresses 
which do not fall within any specified range are invalid altogether.

The current long-term plan is indeed to try to move to some sort of 
internal "DMA range descriptor" in order to properly cope with the kind 
of esoteric integrations which have multiple disjoint windows, 
potentially even with different offsets, but as you point out there are 
still many hurdles between now and that becoming reality. So although 
this patch does represent the "right" thing, it's for entirely the wrong 
reason. AFAICT for your case it basically just works out as a very 
baroque way to hack dma_direct_supported() again - we shouldn't need a 
special case to map a bogus physical address to valid DMA address, we 
should be fixing the source of the bogus PA in the first place.

> This is a proof of concept patch which works on k2g where we have
> dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;
> for the SoC.

TBH it's probably extra-confusing that you're on Keystone 2, where 
technically this ends up closer-to-OK than most, since IIRC the 0-2GB 
MMIO region is the same on all 3(?) interconnect maps. Thus the 100% 
honest description would really be:

dma-ranges = <0x0 0x0 0x0 0x80000000>,
	     <0x80000000 0x8 0x00000000 0x80000000>;

but yeah, that would just go horribly wrong with Linux today. The 
subtlety that dma_map_resource() ignores the pfn_offset happens to be a 
"feature" in this regard ;)

Robin.

> Without this patch everything which tries to set DMA_BIT_MASK(32) or less
> fails -> DMA and peripherals with built in DMA (SD with ADMA) will not
> probe or fall back to PIO mode.
> 
> With this patch EDMA probes, SD's ADMA is working.
> Audio and dma-test is working just fine with EDMA, mmc accesses with ADMA
> also operational.
> 
> The patch does not try to address the incomplete handling of dma-ranges
> from DT, and it does not fix/update arch code or drivers which use
> dma_pfn_offset.
> Nor does it provide fallback support for a kernel setting only dma_pfn_offset
> to an arbitrary number without paddr/dma_addr/size.
> 
> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> ---
> Hi Christoph, Robin,
> 
> I know it is a bit more complicated, but with this patch k2g is working fine...
> 
> I wanted to test the concept I was describing and a patch speaks better than
> words.
> 
> Kind regards,
> Peter
> 
>   arch/arm/include/asm/dma-mapping.h | 25 ++++++++++++++++++++--
>   drivers/of/device.c                |  7 ++++++-
>   include/linux/device.h             |  8 ++++++++
>   include/linux/dma-direct.h         | 33 ++++++++++++++++++++++++++++--
>   kernel/dma/coherent.c              |  9 +++++---
>   5 files changed, 74 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
> index bdd80ddbca34..9bff6ad2d8c8 100644
> --- a/arch/arm/include/asm/dma-mapping.h
> +++ b/arch/arm/include/asm/dma-mapping.h
> @@ -33,10 +33,31 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
>    * addresses. They must not be used by drivers.
>    */
>   #ifndef __arch_pfn_to_dma
> +
> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
> +						     phys_addr_t paddr)
> +{
> +	if (paddr >= dev->dma_ranges.paddr &&
> +	    paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
> +		return dev->dma_ranges.pfn_offset;
> +
> +	return 0;
> +}
> +
> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
> +						     dma_addr_t dma_addr)
> +{
> +	if (dma_addr >= dev->dma_ranges.dma_addr &&
> +	    dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
> +		return dev->dma_ranges.pfn_offset;
> +
> +	return 0;
> +}
> +
>   static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned long pfn)
>   {
>   	if (dev)
> -		pfn -= dev->dma_pfn_offset;
> +		pfn -= __phys_to_dma_pfn_offset(dev, __pfn_to_phys(pfn));
>   	return (dma_addr_t)__pfn_to_bus(pfn);
>   }
>   
> @@ -45,7 +66,7 @@ static inline unsigned long dma_to_pfn(struct device *dev, dma_addr_t addr)
>   	unsigned long pfn = __bus_to_pfn(addr);
>   
>   	if (dev)
> -		pfn += dev->dma_pfn_offset;
> +		pfn += __dma_to_phys_pfn_offset(dev, addr);
>   
>   	return pfn;
>   }
> diff --git a/drivers/of/device.c b/drivers/of/device.c
> index 27203bfd0b22..07a8cc1a7d7f 100644
> --- a/drivers/of/device.c
> +++ b/drivers/of/device.c
> @@ -105,7 +105,7 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
>   		if (!force_dma)
>   			return ret == -ENODEV ? 0 : ret;
>   
> -		dma_addr = offset = 0;
> +		dma_addr = offset = paddr = 0;
>   	} else {
>   		offset = PFN_DOWN(paddr - dma_addr);
>   
> @@ -144,6 +144,11 @@ int of_dma_configure(struct device *dev, struct device_node *np, bool force_dma)
>   
>   	dev->dma_pfn_offset = offset;
>   
> +	dev->dma_ranges.paddr = paddr;
> +	dev->dma_ranges.dma_addr = dma_addr;
> +	dev->dma_ranges.size = size;
> +	dev->dma_ranges.pfn_offset = offset;
> +
>   	/*
>   	 * Limit coherent and dma mask based on size and default mask
>   	 * set by the driver.
> diff --git a/include/linux/device.h b/include/linux/device.h
> index ce6db68c3f29..57006b51a989 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -293,6 +293,13 @@ struct device_dma_parameters {
>   	unsigned long segment_boundary_mask;
>   };
>   
> +struct dma_ranges {
> +	u64 paddr;
> +	u64 dma_addr;
> +	u64 size;
> +	unsigned long pfn_offset;
> +};
> +
>   /**
>    * struct device_connection - Device Connection Descriptor
>    * @fwnode: The device node of the connected device
> @@ -581,6 +588,7 @@ struct device {
>   					     allocations such descriptors. */
>   	u64		bus_dma_limit;	/* upstream dma constraint */
>   	unsigned long	dma_pfn_offset;
> +	struct dma_ranges dma_ranges;
>   
>   	struct device_dma_parameters *dma_parms;
>   
> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
> index 24b8684aa21d..4a46a15945ea 100644
> --- a/include/linux/dma-direct.h
> +++ b/include/linux/dma-direct.h
> @@ -11,18 +11,47 @@ extern unsigned int zone_dma_bits;
>   #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>   #include <asm/dma-direct.h>
>   #else
> +
> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
> +						     phys_addr_t paddr)
> +{
> +	if (!dev)
> +		return 0;
> +
> +	if (paddr >= dev->dma_ranges.paddr &&
> +	    paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
> +	return dev->dma_ranges.pfn_offset;
> +
> +	return 0;
> +}
> +
> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
> +						     dma_addr_t dma_addr)
> +{
> +	if (!dev)
> +		return 0;
> +
> +	if (dma_addr >= dev->dma_ranges.dma_addr &&
> +	    dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
> +	return dev->dma_ranges.pfn_offset;
> +
> +	return 0;
> +}
> +
>   static inline dma_addr_t __phys_to_dma(struct device *dev, phys_addr_t paddr)
>   {
>   	dma_addr_t dev_addr = (dma_addr_t)paddr;
> +	unsigned long offset = __phys_to_dma_pfn_offset(dev, paddr);
>   
> -	return dev_addr - ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
> +	return dev_addr - ((dma_addr_t)offset << PAGE_SHIFT);
>   }
>   
>   static inline phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dev_addr)
>   {
>   	phys_addr_t paddr = (phys_addr_t)dev_addr;
> +	unsigned long offset = __dma_to_phys_pfn_offset(dev, dev_addr);
>   
> -	return paddr + ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
> +	return paddr + ((phys_addr_t)offset << PAGE_SHIFT);
>   }
>   #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */
>   
> diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
> index 551b0eb7028a..7a68fd09f5d0 100644
> --- a/kernel/dma/coherent.c
> +++ b/kernel/dma/coherent.c
> @@ -31,10 +31,13 @@ static inline struct dma_coherent_mem *dev_get_coherent_memory(struct device *de
>   static inline dma_addr_t dma_get_device_base(struct device *dev,
>   					     struct dma_coherent_mem * mem)
>   {
> -	if (mem->use_dev_dma_pfn_offset)
> -		return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
> -	else
> +	if (mem->use_dev_dma_pfn_offset) {
> +		unsigned long offset = __phys_to_dma_pfn_offset(dev,
> +						__pfn_to_phys(mem->pfn_base));
> +		return (mem->pfn_base - offset) << PAGE_SHIFT;
> +	} else {
>   		return mem->device_base;
> +	}
>   }
>   
>   static int dma_init_coherent_memory(phys_addr_t phys_addr,
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-14 18:19                       ` Robin Murphy
@ 2020-01-15 11:50                         ` Peter Ujfalusi
  2020-01-16 19:13                           ` Robin Murphy
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-15 11:50 UTC (permalink / raw)
  To: Robin Murphy, hch
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	linux-arm-kernel, rogerq



On 14/01/2020 20.19, Robin Murphy wrote:
> On 14/01/2020 4:43 pm, Peter Ujfalusi wrote:
>> The dma_pfn_offset should only be applied to an address which is
>> within the
>> dma-ranges range. Any address outside should have offset as 0.
> 
> No, that's wrong. If a non-empty dma-ranges is present, then addresses
> which do not fall within any specified range are invalid altogether.

It is not explicitly stated by the specification, but it can be interpreted
like that, and from a certain point of view it does make sense to treat things
that way.

> The current long-term plan is indeed to try to move to some sort of
> internal "DMA range descriptor" in order to properly cope with the kind
> of esoteric integrations which have multiple disjoint windows,
> potentially even with different offsets, but as you point out there are
> still many hurdles between now and that becoming reality. So although
> this patch does represent the "right" thing, it's for entirely the wrong
> reason. AFAICT for your case it basically just works out as a very
> baroque way to hack dma_direct_supported() again - we shouldn't need a
> special case to map a bogus physical address to valid DMA address, we
> should be fixing the source of the bogus PA in the first place.

DMA_BIT_MASK(32) is pretty clear: The DMA can handle addresses within
32bit space. DMA_BIT_MASK(24) is also clear: The DMA can handle
addresses within 24bit space.

dma-ranges does not change that. The DMA can still address the same
space. What dma-ranges will tell is that a physical address range 'X'
can be accessed on the bus under range 'Y'.
For the DMA within the bus the physical address within 'X' does not
matter. What matters is the matching address within 'Y'

We should do the dma_pfn_offset conversion _only_ for the range it applies
to; outside of it, applying the offset is not valid. The dma API will check
(without applying dma_pfn_offset) addresses outside of any range (only
one currently in Linux), and if an address is not OK for the mask it will fail.

> 
>> This is a proof of concept patch which works on k2g where we have
>> dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;
>> for the SoC.
> 
> TBH it's probably extra-confusing that you're on Keystone 2, where
> technically this ends up closer-to-OK than most, since IIRC the 0-2GB
> MMIO region is the same on all 3(?) interconnect maps. Thus the 100%
> honest description would really be:
> 
> dma-ranges = <0x0 0x0 0x0 0x80000000>,
>          <0x80000000 0x8 0x00000000 0x80000000>;
> 
> but yeah, that would just go horribly wrong with Linux today.

It does ;) This was the first thing I tried.

> The
> subtelty that dma_map_resource() ignores the pfn_offset happens to be a
> "feature" in this regard ;)

Right, but Keystone 2 has been broken since 5.3-rc3 by commit
ad3c7b18c5b362be5dbd0f2c0bcf1fd5fd659315.

Can you propose a fix which we can use until things get sorted out?

Thanks,
- Péter

> 
> Robin.
> 
>> Without this patch everything which tries to set DMA_BIT_MASK(32) or less
>> fails -> DMA and peripherals with built-in DMA (SD with ADMA) will not
>> probe, or will fall back to PIO mode.
>>
>> With this patch EDMA probes and SD's ADMA is working.
>> Audio and dma-test are working just fine with EDMA; mmc accesses with ADMA
>> are also operational.
>>
>> The patch does not try to address the incomplete handling of dma-ranges
>> from DT, and it does not fix/update arch code or drivers which use
>> dma_pfn_offset.
>> Neither does it provide fallback support for a kernel setting only
>> dma_pfn_offset to an
>> arbitrary number without paddr/dma_addr/size.
>>
>> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
>> ---
>> Hi Christoph, Robin,
>>
>> I know it is a bit more complicated, but with this patch k2g is
>> working fine...
>>
>> I wanted to test the concept I was describing and a patch speaks
>> better than
>> words.
>>
>> Kind regards,
>> Peter
>>
>>   arch/arm/include/asm/dma-mapping.h | 25 ++++++++++++++++++++--
>>   drivers/of/device.c                |  7 ++++++-
>>   include/linux/device.h             |  8 ++++++++
>>   include/linux/dma-direct.h         | 33 ++++++++++++++++++++++++++++--
>>   kernel/dma/coherent.c              |  9 +++++---
>>   5 files changed, 74 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/dma-mapping.h
>> b/arch/arm/include/asm/dma-mapping.h
>> index bdd80ddbca34..9bff6ad2d8c8 100644
>> --- a/arch/arm/include/asm/dma-mapping.h
>> +++ b/arch/arm/include/asm/dma-mapping.h
>> @@ -33,10 +33,31 @@ static inline const struct dma_map_ops
>> *get_arch_dma_ops(struct bus_type *bus)
>>    * addresses. They must not be used by drivers.
>>    */
>>   #ifndef __arch_pfn_to_dma
>> +
>> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
>> +                             phys_addr_t paddr)
>> +{
>> +    if (paddr >= dev->dma_ranges.paddr &&
>> +        paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
>> +        return dev->dma_ranges.pfn_offset;
>> +
>> +    return 0;
>> +}
>> +
>> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
>> +                             dma_addr_t dma_addr)
>> +{
>> +    if (dma_addr >= dev->dma_ranges.dma_addr &&
>> +        dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
>> +        return dev->dma_ranges.pfn_offset;
>> +
>> +    return 0;
>> +}
>> +
>>   static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned
>> long pfn)
>>   {
>>       if (dev)
>> -        pfn -= dev->dma_pfn_offset;
>> +        pfn -= __phys_to_dma_pfn_offset(dev, __pfn_to_phys(pfn));
>>       return (dma_addr_t)__pfn_to_bus(pfn);
>>   }
>>   @@ -45,7 +66,7 @@ static inline unsigned long dma_to_pfn(struct
>> device *dev, dma_addr_t addr)
>>       unsigned long pfn = __bus_to_pfn(addr);
>>         if (dev)
>> -        pfn += dev->dma_pfn_offset;
>> +        pfn += __dma_to_phys_pfn_offset(dev, addr);
>>         return pfn;
>>   }
>> diff --git a/drivers/of/device.c b/drivers/of/device.c
>> index 27203bfd0b22..07a8cc1a7d7f 100644
>> --- a/drivers/of/device.c
>> +++ b/drivers/of/device.c
>> @@ -105,7 +105,7 @@ int of_dma_configure(struct device *dev, struct
>> device_node *np, bool force_dma)
>>           if (!force_dma)
>>               return ret == -ENODEV ? 0 : ret;
>>   -        dma_addr = offset = 0;
>> +        dma_addr = offset = paddr = 0;
>>       } else {
>>           offset = PFN_DOWN(paddr - dma_addr);
>>   @@ -144,6 +144,11 @@ int of_dma_configure(struct device *dev, struct
>> device_node *np, bool force_dma)
>>         dev->dma_pfn_offset = offset;
>>   +    dev->dma_ranges.paddr = paddr;
>> +    dev->dma_ranges.dma_addr = dma_addr;
>> +    dev->dma_ranges.size = size;
>> +    dev->dma_ranges.pfn_offset = offset;
>> +
>>       /*
>>        * Limit coherent and dma mask based on size and default mask
>>        * set by the driver.
>> diff --git a/include/linux/device.h b/include/linux/device.h
>> index ce6db68c3f29..57006b51a989 100644
>> --- a/include/linux/device.h
>> +++ b/include/linux/device.h
>> @@ -293,6 +293,13 @@ struct device_dma_parameters {
>>       unsigned long segment_boundary_mask;
>>   };
>>   +struct dma_ranges {
>> +    u64 paddr;
>> +    u64 dma_addr;
>> +    u64 size;
>> +    unsigned long pfn_offset;
>> +};
>> +
>>   /**
>>    * struct device_connection - Device Connection Descriptor
>>    * @fwnode: The device node of the connected device
>> @@ -581,6 +588,7 @@ struct device {
>>                            allocations such descriptors. */
>>       u64        bus_dma_limit;    /* upstream dma constraint */
>>       unsigned long    dma_pfn_offset;
>> +    struct dma_ranges dma_ranges;
>>         struct device_dma_parameters *dma_parms;
>>   diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
>> index 24b8684aa21d..4a46a15945ea 100644
>> --- a/include/linux/dma-direct.h
>> +++ b/include/linux/dma-direct.h
>> @@ -11,18 +11,47 @@ extern unsigned int zone_dma_bits;
>>   #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>>   #include <asm/dma-direct.h>
>>   #else
>> +
>> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
>> +                             phys_addr_t paddr)
>> +{
>> +    if (!dev)
>> +        return 0;
>> +
>> +    if (paddr >= dev->dma_ranges.paddr &&
>> +        paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
>> +        return dev->dma_ranges.pfn_offset;
>> +
>> +    return 0;
>> +}
>> +
>> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
>> +                             dma_addr_t dma_addr)
>> +{
>> +    if (!dev)
>> +        return 0;
>> +
>> +    if (dma_addr >= dev->dma_ranges.dma_addr &&
>> +        dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
>> +        return dev->dma_ranges.pfn_offset;
>> +
>> +    return 0;
>> +}
>> +
>>   static inline dma_addr_t __phys_to_dma(struct device *dev,
>> phys_addr_t paddr)
>>   {
>>       dma_addr_t dev_addr = (dma_addr_t)paddr;
>> +    unsigned long offset = __phys_to_dma_pfn_offset(dev, paddr);
>>   -    return dev_addr - ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
>> +    return dev_addr - ((dma_addr_t)offset << PAGE_SHIFT);
>>   }
>>     static inline phys_addr_t __dma_to_phys(struct device *dev,
>> dma_addr_t dev_addr)
>>   {
>>       phys_addr_t paddr = (phys_addr_t)dev_addr;
>> +    unsigned long offset = __dma_to_phys_pfn_offset(dev, dev_addr);
>>   -    return paddr + ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
>> +    return paddr + ((phys_addr_t)offset << PAGE_SHIFT);
>>   }
>>   #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */
>>   diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
>> index 551b0eb7028a..7a68fd09f5d0 100644
>> --- a/kernel/dma/coherent.c
>> +++ b/kernel/dma/coherent.c
>> @@ -31,10 +31,13 @@ static inline struct dma_coherent_mem
>> *dev_get_coherent_memory(struct device *de
>>   static inline dma_addr_t dma_get_device_base(struct device *dev,
>>                            struct dma_coherent_mem * mem)
>>   {
>> -    if (mem->use_dev_dma_pfn_offset)
>> -        return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
>> -    else
>> +    if (mem->use_dev_dma_pfn_offset) {
>> +        unsigned long offset = __phys_to_dma_pfn_offset(dev,
>> +                        __pfn_to_phys(mem->pfn_base));
>> +        return (mem->pfn_base - offset) << PAGE_SHIFT;
>> +    } else {
>>           return mem->device_base;
>> +    }
>>   }
>>     static int dma_init_coherent_memory(phys_addr_t phys_addr,
>>

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-15 11:50                         ` Peter Ujfalusi
@ 2020-01-16 19:13                           ` Robin Murphy
  2020-01-27 14:00                             ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Robin Murphy @ 2020-01-16 19:13 UTC (permalink / raw)
  To: Peter Ujfalusi, hch
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	linux-arm-kernel, rogerq

On 15/01/2020 11:50 am, Peter Ujfalusi wrote:
> 
> 
> On 14/01/2020 20.19, Robin Murphy wrote:
>> On 14/01/2020 4:43 pm, Peter Ujfalusi wrote:
>>> The dma_pfn_offset should only be applied to an address which is
>>> within the
>>> dma-ranges range. Any address outside should have offset as 0.
>>
>> No, that's wrong. If a non-empty dma-ranges is present, then addresses
>> which do not fall within any specified range are invalid altogether.
> 
It is not explicitly stated by the specification, but it can be interpreted
like that, and from a PoV it does make sense to treat things that way.

Yes, DTspec doesn't explicitly say so, but it does follow fairly 
logically from the definition of "ranges"/"dma-ranges" as a translation 
between address spaces that an address not matching any range cannot 
pass between those address spaces at all. Case in point being that an 
absent "ranges" property means "no translation at all" (sadly the ship 
sailed too long ago to treat "dma-ranges" similarly strictly, so we're 
stuck with the assumption that absent = empty in that direction)

>> The current long-term plan is indeed to try to move to some sort of
>> internal "DMA range descriptor" in order to properly cope with the kind
>> of esoteric integrations which have multiple disjoint windows,
>> potentially even with different offsets, but as you point out there are
>> still many hurdles between now and that becoming reality. So although
>> this patch does represent the "right" thing, it's for entirely the wrong
>> reason. AFAICT for your case it basically just works out as a very
>> baroque way to hack dma_direct_supported() again - we shouldn't need a
>> special case to map a bogus physical address to valid DMA address, we
>> should be fixing the source of the bogus PA in the first place.
> 
> DMA_BIT_MASK(32) is pretty clear: The DMA can handle addresses within
> 32bit space. DMA_BIT_MASK(24) is also clear: The DMA can handle
> addresses within 24bit space.

Careful there - DMA *masks* are about how wide an address the device may 
generate, but it's not necessarily true that the interconnect beyond 
will actually accept every possible address that that many bits can 
encode (see the aforementioned case of PCI host bridge windows, or the 
recent change of bus_dma_mask to a not-necessarily-power-of-two 
bus_dma_limit)...

> dma-ranges does not change that. The DMA can still address the same
> space.

...thus that assumption is incorrect. However it's not particularly 
important to the immediate problem at hand.

> What dma-ranges tells us is that a physical address range 'X'
> can be accessed on the bus under range 'Y'.
> For the DMA within the bus, the physical address within 'X' does not
> matter; what matters is the matching address within 'Y'.
> 
> We should do the dma_pfn_offset conversion _only_ for the range it applies
> to. Outside of it, it is not valid to apply it.

That much is agreed. For a physical address A in Y, phys_to_dma(A) 
should return the corresponding DMA address A' in X. What you're 
proposing is that for address B not in Y, phys_to_dma(B) should just 
return B, but my point is that even doing that is wrong, because there 
is no possible DMA address corresponding to B, so there is no valid 
value to return at all.

Nobody's disputing that the current dma_direct_supported() 
implementation is broken for the case where ZONE_DMA itself is offset 
from PA 0; the more pressing question is why Christoph's diff, which was 
trying to take that into account, still didn't work.

Robin.

> The DMA API will check
> (without applying dma_pfn_offset) addresses outside of any range (only
> one range currently in Linux), and if the address is not OK for the mask then it will fail.
> 
>>
>>> This is a proof of concept patch which works on k2g where we have
>>> dma-ranges = <0x80000000 0x8 0x00000000 0x80000000>;
>>> for the SoC.
>>
>> TBH it's probably extra-confusing that you're on Keystone 2, where
>> technically this ends up closer-to-OK than most, since IIRC the 0-2GB
>> MMIO region is the same on all 3(?) interconnect maps. Thus the 100%
>> honest description would really be:
>>
>> dma-ranges = <0x0 0x0 0x0 0x80000000>,
>>           <0x80000000 0x8 0x00000000 0x80000000>;
>>
>> but yeah, that would just go horribly wrong with Linux today.
> 
> It does ;) This was the first thing I have tried.
> 
>> The
>> subtlety that dma_map_resource() ignores the pfn_offset happens to be a
>> "feature" in this regard ;)
> 
> Right, but Keystone 2 is broken since 5.3-rc3 by commit
> ad3c7b18c5b362be5dbd0f2c0bcf1fd5fd659315.
> 
> Can you propose a fix which we can use until things get sorted out?
> 
> Thanks,
> - Péter
> 
>>
>> Robin.
>>
>>> Without this patch everything which tries to set DMA_BIT_MASK(32) or less
>>> fails -> DMA and peripherals with built-in DMA (SD with ADMA) will not
>>> probe, or will fall back to PIO mode.
>>>
>>> With this patch EDMA probes and SD's ADMA is working.
>>> Audio and dma-test are working just fine with EDMA; mmc accesses with ADMA
>>> are also operational.
>>>
>>> The patch does not try to address the incomplete handling of dma-ranges
>>> from DT, and it does not fix/update arch code or drivers which use
>>> dma_pfn_offset.
>>> Neither does it provide fallback support for a kernel setting only
>>> dma_pfn_offset to an
>>> arbitrary number without paddr/dma_addr/size.
>>>
>>> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
>>> ---
>>> Hi Christoph, Robin,
>>>
>>> I know it is a bit more complicated, but with this patch k2g is
>>> working fine...
>>>
>>> I wanted to test the concept I was describing and a patch speaks
>>> better than
>>> words.
>>>
>>> Kind regards,
>>> Peter
>>>
>>>    arch/arm/include/asm/dma-mapping.h | 25 ++++++++++++++++++++--
>>>    drivers/of/device.c                |  7 ++++++-
>>>    include/linux/device.h             |  8 ++++++++
>>>    include/linux/dma-direct.h         | 33 ++++++++++++++++++++++++++++--
>>>    kernel/dma/coherent.c              |  9 +++++---
>>>    5 files changed, 74 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/dma-mapping.h
>>> b/arch/arm/include/asm/dma-mapping.h
>>> index bdd80ddbca34..9bff6ad2d8c8 100644
>>> --- a/arch/arm/include/asm/dma-mapping.h
>>> +++ b/arch/arm/include/asm/dma-mapping.h
>>> @@ -33,10 +33,31 @@ static inline const struct dma_map_ops
>>> *get_arch_dma_ops(struct bus_type *bus)
>>>     * addresses. They must not be used by drivers.
>>>     */
>>>    #ifndef __arch_pfn_to_dma
>>> +
>>> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
>>> +                             phys_addr_t paddr)
>>> +{
>>> +    if (paddr >= dev->dma_ranges.paddr &&
>>> +        paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
>>> +        return dev->dma_ranges.pfn_offset;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
>>> +                             dma_addr_t dma_addr)
>>> +{
>>> +    if (dma_addr >= dev->dma_ranges.dma_addr &&
>>> +        dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
>>> +        return dev->dma_ranges.pfn_offset;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>    static inline dma_addr_t pfn_to_dma(struct device *dev, unsigned
>>> long pfn)
>>>    {
>>>        if (dev)
>>> -        pfn -= dev->dma_pfn_offset;
>>> +        pfn -= __phys_to_dma_pfn_offset(dev, __pfn_to_phys(pfn));
>>>        return (dma_addr_t)__pfn_to_bus(pfn);
>>>    }
>>>    @@ -45,7 +66,7 @@ static inline unsigned long dma_to_pfn(struct
>>> device *dev, dma_addr_t addr)
>>>        unsigned long pfn = __bus_to_pfn(addr);
>>>          if (dev)
>>> -        pfn += dev->dma_pfn_offset;
>>> +        pfn += __dma_to_phys_pfn_offset(dev, addr);
>>>          return pfn;
>>>    }
>>> diff --git a/drivers/of/device.c b/drivers/of/device.c
>>> index 27203bfd0b22..07a8cc1a7d7f 100644
>>> --- a/drivers/of/device.c
>>> +++ b/drivers/of/device.c
>>> @@ -105,7 +105,7 @@ int of_dma_configure(struct device *dev, struct
>>> device_node *np, bool force_dma)
>>>            if (!force_dma)
>>>                return ret == -ENODEV ? 0 : ret;
>>>    -        dma_addr = offset = 0;
>>> +        dma_addr = offset = paddr = 0;
>>>        } else {
>>>            offset = PFN_DOWN(paddr - dma_addr);
>>>    @@ -144,6 +144,11 @@ int of_dma_configure(struct device *dev, struct
>>> device_node *np, bool force_dma)
>>>          dev->dma_pfn_offset = offset;
>>>    +    dev->dma_ranges.paddr = paddr;
>>> +    dev->dma_ranges.dma_addr = dma_addr;
>>> +    dev->dma_ranges.size = size;
>>> +    dev->dma_ranges.pfn_offset = offset;
>>> +
>>>        /*
>>>         * Limit coherent and dma mask based on size and default mask
>>>         * set by the driver.
>>> diff --git a/include/linux/device.h b/include/linux/device.h
>>> index ce6db68c3f29..57006b51a989 100644
>>> --- a/include/linux/device.h
>>> +++ b/include/linux/device.h
>>> @@ -293,6 +293,13 @@ struct device_dma_parameters {
>>>        unsigned long segment_boundary_mask;
>>>    };
>>>    +struct dma_ranges {
>>> +    u64 paddr;
>>> +    u64 dma_addr;
>>> +    u64 size;
>>> +    unsigned long pfn_offset;
>>> +};
>>> +
>>>    /**
>>>     * struct device_connection - Device Connection Descriptor
>>>     * @fwnode: The device node of the connected device
>>> @@ -581,6 +588,7 @@ struct device {
>>>                             allocations such descriptors. */
>>>        u64        bus_dma_limit;    /* upstream dma constraint */
>>>        unsigned long    dma_pfn_offset;
>>> +    struct dma_ranges dma_ranges;
>>>          struct device_dma_parameters *dma_parms;
>>>    diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
>>> index 24b8684aa21d..4a46a15945ea 100644
>>> --- a/include/linux/dma-direct.h
>>> +++ b/include/linux/dma-direct.h
>>> @@ -11,18 +11,47 @@ extern unsigned int zone_dma_bits;
>>>    #ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
>>>    #include <asm/dma-direct.h>
>>>    #else
>>> +
>>> +static inline unsigned long __phys_to_dma_pfn_offset(struct device *dev,
>>> +                             phys_addr_t paddr)
>>> +{
>>> +    if (!dev)
>>> +        return 0;
>>> +
>>> +    if (paddr >= dev->dma_ranges.paddr &&
>>> +        paddr <= (dev->dma_ranges.paddr + dev->dma_ranges.size))
>>> +        return dev->dma_ranges.pfn_offset;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static inline unsigned long __dma_to_phys_pfn_offset(struct device *dev,
>>> +                             dma_addr_t dma_addr)
>>> +{
>>> +    if (!dev)
>>> +        return 0;
>>> +
>>> +    if (dma_addr >= dev->dma_ranges.dma_addr &&
>>> +        dma_addr <= (dev->dma_ranges.dma_addr + dev->dma_ranges.size))
>>> +        return dev->dma_ranges.pfn_offset;
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>    static inline dma_addr_t __phys_to_dma(struct device *dev,
>>> phys_addr_t paddr)
>>>    {
>>>        dma_addr_t dev_addr = (dma_addr_t)paddr;
>>> +    unsigned long offset = __phys_to_dma_pfn_offset(dev, paddr);
>>>    -    return dev_addr - ((dma_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
>>> +    return dev_addr - ((dma_addr_t)offset << PAGE_SHIFT);
>>>    }
>>>      static inline phys_addr_t __dma_to_phys(struct device *dev,
>>> dma_addr_t dev_addr)
>>>    {
>>>        phys_addr_t paddr = (phys_addr_t)dev_addr;
>>> +    unsigned long offset = __dma_to_phys_pfn_offset(dev, dev_addr);
>>>    -    return paddr + ((phys_addr_t)dev->dma_pfn_offset << PAGE_SHIFT);
>>> +    return paddr + ((phys_addr_t)offset << PAGE_SHIFT);
>>>    }
>>>    #endif /* !CONFIG_ARCH_HAS_PHYS_TO_DMA */
>>>    diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
>>> index 551b0eb7028a..7a68fd09f5d0 100644
>>> --- a/kernel/dma/coherent.c
>>> +++ b/kernel/dma/coherent.c
>>> @@ -31,10 +31,13 @@ static inline struct dma_coherent_mem
>>> *dev_get_coherent_memory(struct device *de
>>>    static inline dma_addr_t dma_get_device_base(struct device *dev,
>>>                             struct dma_coherent_mem * mem)
>>>    {
>>> -    if (mem->use_dev_dma_pfn_offset)
>>> -        return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
>>> -    else
>>> +    if (mem->use_dev_dma_pfn_offset) {
>>> +        unsigned long offset = __phys_to_dma_pfn_offset(dev,
>>> +                        __pfn_to_phys(mem->pfn_base));
>>> +        return (mem->pfn_base - offset) << PAGE_SHIFT;
>>> +    } else {
>>>            return mem->device_base;
>>> +    }
>>>    }
>>>      static int dma_init_coherent_memory(phys_addr_t phys_addr,
>>>
> 
> 



* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-16 19:13                           ` Robin Murphy
@ 2020-01-27 14:00                             ` Peter Ujfalusi
  2020-01-30  7:53                               ` Christoph Hellwig
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-27 14:00 UTC (permalink / raw)
  To: Robin Murphy, hch
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	linux-arm-kernel, rogerq



On 16/01/2020 21.13, Robin Murphy wrote:
> On 15/01/2020 11:50 am, Peter Ujfalusi wrote:
>>
>>
>> On 14/01/2020 20.19, Robin Murphy wrote:
>>> On 14/01/2020 4:43 pm, Peter Ujfalusi wrote:
>>>> The dma_pfn_offset should only be applied to an address which is
>>>> within the
>>>> dma-ranges range. Any address outside should have offset as 0.
>>>
>>> No, that's wrong. If a non-empty dma-ranges is present, then addresses
>>> which do not fall within any specified range are invalid altogether.
>>
>> It is not explicitly stated by the specification, but it can be interpreted
>> like that, and from a PoV it does make sense to treat things that way.
> 
> Yes, DTspec doesn't explicitly say so, but it does follow fairly
> logically from the definition of "ranges"/"dma-ranges" as a translation
> between address spaces that an address not matching any range cannot
> pass between those address spaces at all. Case in point being that an
> absent "ranges" property means "no translation at all" (sadly the ship
> sailed too long ago to treat "dma-ranges" similarly strictly, so we're
> stuck with the assumption that absent = empty in that direction)

Agree.

> 
>>> The current long-term plan is indeed to try to move to some sort of
>>> internal "DMA range descriptor" in order to properly cope with the kind
>>> of esoteric integrations which have multiple disjoint windows,
>>> potentially even with different offsets, but as you point out there are
>>> still many hurdles between now and that becoming reality. So although
>>> this patch does represent the "right" thing, it's for entirely the wrong
>>> reason. AFAICT for your case it basically just works out as a very
>>> baroque way to hack dma_direct_supported() again - we shouldn't need a
>>> special case to map a bogus physical address to valid DMA address, we
>>> should be fixing the source of the bogus PA in the first place.
>>
>> DMA_BIT_MASK(32) is pretty clear: The DMA can handle addresses within
>> 32bit space. DMA_BIT_MASK(24) is also clear: The DMA can handle
>> addresses within 24bit space.
> 
> Careful there - DMA *masks* are about how wide an address the device may
> generate

Which, in a simplified view, is the address range the DMA can address.
The DMA can not generate addresses beyond the mask.

> but it's not necessarily true that the interconnect beyond
> will actually accept every possible address that that many bits can
> encode

True.

> (see the aforementioned case of PCI host bridge windows, or the
> recent change of bus_dma_mask to a not-necessarily-power-of-two
> bus_dma_limit)...

I see.

>> dma-ranges does not change that. The DMA can still address the same
>> space.
> 
> ...thus that assumption is incorrect.

Hrm, why is it not correct?
The DMA can generate addresses up to 32 bits. Anything beyond that is
not accessible by DMA.
The interconnect makes part of a higher (not directly addressable) memory
space available within the 32-bit range, thus the DMA can address that.
We tell this via the dma-ranges.
I agree that a correct dma-ranges for k2g should be:
dma-ranges = <0x0 0x0 0x0 0x80000000>,
             <0x80000000 0x8 0x00000000 0x80000000>;

> However it's not particularly
> important to the immediate problem at hand.
> 
>> What dma-ranges tells us is that a physical address range 'X'
>> can be accessed on the bus under range 'Y'.
>> For the DMA within the bus, the physical address within 'X' does not
>> matter; what matters is the matching address within 'Y'.
>>
>> We should do the dma_pfn_offset conversion _only_ for the range it applies
>> to. Outside of it, it is not valid to apply it.
> 
> That much is agreed. For a physical address A in Y, phys_to_dma(A)
> should return the corresponding DMA address A' in X. What you're
> proposing is that for address B not in Y, phys_to_dma(B) should just
> return B, but my point is that even doing that is wrong, because there
> is no possible DMA address corresponding to B, so there is no valid
> value to return at all.

I agree on phys_to_dma(). It should fail for addresses which do
not fall into any of the ranges.
It is just that in Linux we don't have the concept of ranges atm; we
have only _one_ range which applies to every memory address.

I guess what I'm proposing is to _not_ apply the phys_to_dma() to the
DMA mask itself?

Or reverse-check the dma-ranges against the dma_mask?

0x0 - 0x8000 0000 : direct mapped w/o pfn_offset
0x8000 0000 - 0xFFFF FFFF : mapped from 0x8 0000 0000 w/ pfn_offset

and this is the whole address range the DMA can address.

> Nobody's disputing that the current dma_direct_supported()
> implementation is broken for the case where ZONE_DMA itself is offset
> from PA 0; the more pressing question is why Christoph's diff, which was
> trying to take that into account, still didn't work.

I understand that this is a bit more complex than my take on it, but
k2g is broken, and currently the simplest way to make it work is to
use the arm dma_ops in case the pfn_offset is not 0.
It will still be easy to test dma-direct changes addressing the issue
at hand, while allowing k2g to be usable at the same time.

A patch like I first proposed in:
https://lore.kernel.org/lkml/8eb68140-97b2-62ce-3e06-3761984aa5b1@ti.com/

- Péter




* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-27 14:00                             ` Peter Ujfalusi
@ 2020-01-30  7:53                               ` Christoph Hellwig
  2020-01-30 13:04                                 ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2020-01-30  7:53 UTC (permalink / raw)
  To: Peter Ujfalusi
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, hch, linux-arm-kernel, rogerq

[skipping the DT bits, as I'm everything but an expert on that..]

On Mon, Jan 27, 2020 at 04:00:30PM +0200, Peter Ujfalusi wrote:
> I agree on phys_to_dma(). It should fail for addresses which do
> not fall into any of the ranges.
> It is just that in Linux we don't have the concept of ranges atm; we
> have only _one_ range which applies to every memory address.

what does atm here mean?

We have needed multi-range support for quite a while, as common broadcom
SOCs do need it.  So patches for that are welcome at least from the
DMA layer perspective (kinda similar to your pseudo code earlier).

> > Nobody's disputing that the current dma_direct_supported()
> > implementation is broken for the case where ZONE_DMA itself is offset
> > from PA 0; the more pressing question is why Christoph's diff, which was
> > trying to take that into account, still didn't work.
> 
> I understand that this is a bit more complex than I interpret it, but
> the k2g is broken and currently the simplest way to make it work is to
> use the arm dma_ops in case the pfn_offset is not 0.
> It will be easy to test dma-direct changes trying to address the issue
> in hand, but will allow k2g to be usable at the same time.

Well, using the legacy arm dma ops means we can't use swiotlb if there
is an offset, which is also wrong for lots of common cases, including
the Rpi 4.  I'm still curious why my patch didn't work, as I thought
it should.  We'll need to find the minimum change to make it work
for now without switching ops, even if it isn't the correct one, and
then work from there.



* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-30  7:53                               ` Christoph Hellwig
@ 2020-01-30 13:04                                 ` Peter Ujfalusi
  2020-01-30 16:40                                   ` Christoph Hellwig
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-30 13:04 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, linux-arm-kernel, rogerq



On 30/01/2020 9.53, Christoph Hellwig wrote:
> [skipping the DT bits, as I'm everything but an expert on that..]
> 
> On Mon, Jan 27, 2020 at 04:00:30PM +0200, Peter Ujfalusi wrote:
>> I agree on phys_to_dma(). It should fail for addresses which do
>> not fall into any of the ranges.
>> It is just that in Linux we don't have the concept of ranges atm; we
>> have only _one_ range which applies to every memory address.
> 
> what does atm here mean?

struct device has only a single dma_pfn_offset; one can not have multiple
ranges defined. If we have more, then only the first is taken; the physical
address and dma address are discarded, and only the dma_pfn_offset is stored
and used.

> We have needed multi-range support for quite a while, as common broadcom
> SOCs do need it.  So patches for that are welcome at least from the
> DMA layer perspective (kinda similar to your pseudo code earlier)

But do they have dma_pfn_offset != 0?

>>> Nobody's disputing that the current dma_direct_supported()
>>> implementation is broken for the case where ZONE_DMA itself is offset
>>> from PA 0; the more pressing question is why Christoph's diff, which was
>>> trying to take that into account, still didn't work.
>>
>> I understand that this is a bit more complex than I interpret it, but
>> the k2g is broken and currently the simplest way to make it work is to
>> use the arm dma_ops in case the pfn_offset is not 0.
>> It will be easy to test dma-direct changes trying to address the issue
>> in hand, but will allow k2g to be usable at the same time.
> 
> Well, using the legacy arm dma ops means we can't use swiotlb if there
> is an offset, which is also wrong for lots of common cases, including
> the Rpi 4.  I'm still curious why my patch didn't work, as I thought
> it should.

The dma_pfn_offset is _still_ applied to the mask we are trying to set
(and validate) via dma-direct.

in dma_direct_supported:
mask == 0xffffffff // DMA_BIT_MASK(32)
dev->dma_pfn_offset == 0x780000 // Keystone 2
min_mask == 0xffffff

tmp_mask = __phys_to_dma(dev, min_mask);
tmp_mask == 0xff880ffffff

Within __phys_to_dma() the min_mask is converted to a pfn and
__pfn_to_dma() is called, which does:
if (dev)
	pfn -= dev->dma_pfn_offset;

The returned pfn is then converted back to an address.

The mask (0xffffffff) is well under tmp_mask (0xff880ffffff), so
dma_direct_supported() will tell us that DMA is not supported for
DMA_BIT_MASK(32), which is not true: the device does support 32-bit
DMA.

> We'll need to find the minimum change to make it work
> for now without switching ops, even if it isn't the correct one, and
> then work from there.

Sure, but can we fix the regression by reverting to the arm ops for
now, only when dma_pfn_offset is not 0? It used to work fine in the
past; at least it appeared to work on K2 platforms.

- Péter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-30 13:04                                 ` Peter Ujfalusi
@ 2020-01-30 16:40                                   ` Christoph Hellwig
  2020-01-31 13:59                                     ` Peter Ujfalusi
                                                       ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Christoph Hellwig @ 2020-01-30 16:40 UTC (permalink / raw)
  To: Peter Ujfalusi
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, Christoph Hellwig, linux-arm-kernel, rogerq

On Thu, Jan 30, 2020 at 03:04:37PM +0200, Peter Ujfalusi via iommu wrote:
> On 30/01/2020 9.53, Christoph Hellwig wrote:
> > [skipping the DT bits, as I'm everything but an expert on that..]
> > 
> > On Mon, Jan 27, 2020 at 04:00:30PM +0200, Peter Ujfalusi wrote:
> >> I agree on the phys_to_dma(). It should fail for addresses which does
> >> not fall into any of the ranges.
> >> It is just a that we in Linux don't have the concept atm for ranges, we
> >> have only _one_ range which applies to every memory address.
> > 
> > what does atm here mean?
> 
> struct device have only single dma_pfn_offset, one can not have multiple
> ranges defined. If we have then only the first is taken and the physical
> address and dma address is discarded, only the dma_pfn_offset is stored
> and used.
> 
> > We have needed multi-range support for quite a while, as common broadcom
> > SOCs do need it.  So patches for that are welcome at least from the
> > DMA layer perspective (kinda similar to your pseudo code earlier)
> 
> But do they have dma_pfn_offset != 0?

Well, with that I mean multiple ranges with different offsets.  Take
a look at arch/mips/bmips/dma.c:__phys_to_dma() and friends.  This
is an existing implementation for mips, but there are arm and arm64
SoCs on the market using the same logic as well, and we'll want to
support them eventually.

> The dma_pfn_offset is _still_ applied to the mask we are trying to set
> (and validate) via dma-direct.

And for the general case that is exactly the right thing to do, we
just need to deal with really odd ZONE_DMA placements like yours.

> > We'll need to find the minimum change to make it work
> > for now without switching ops, even if it isn't the correct one, and
> > then work from there.
> 
> Sure, but can we fix the regression by reverting to arm_ops for now only
> if dma_pfn_offset is not 0? It used to work fine in the past at least it
> appeared to work on K2 platforms.

But that will cause yet another regression in what we have just fixed
by using the generic direct ops, at which point it turns into who
screams louder.

For now I'm tempted to throw something like this in, which is a bit
of a hack, but actually 100% matches what various architectures have
historically done:


diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 6af7ae83c4ad..6ba9ee6e20bd 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -482,6 +482,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
 {
 	u64 min_mask;
 
+	if (mask >= DMA_BIT_MASK(32))
+		return 1;
+
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
 		min_mask = DMA_BIT_MASK(zone_dma_bits);
 	else


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-30 16:40                                   ` Christoph Hellwig
@ 2020-01-31 13:59                                     ` Peter Ujfalusi
  2020-01-31 14:00                                     ` Peter Ujfalusi
  2020-01-31 14:00                                     ` Peter Ujfalusi
  2 siblings, 0 replies; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-31 13:59 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, linux-arm-kernel, rogerq

Hi Christoph,

On 30/01/2020 18.40, Christoph Hellwig wrote:
> On Thu, Jan 30, 2020 at 03:04:37PM +0200, Peter Ujfalusi via iommu wrote:
>> On 30/01/2020 9.53, Christoph Hellwig wrote:
>>> [skipping the DT bits, as I'm everything but an expert on that..]
>>>
>>> On Mon, Jan 27, 2020 at 04:00:30PM +0200, Peter Ujfalusi wrote:
>>>> I agree on the phys_to_dma(). It should fail for addresses which does
>>>> not fall into any of the ranges.
>>>> It is just a that we in Linux don't have the concept atm for ranges, we
>>>> have only _one_ range which applies to every memory address.
>>>
>>> what does atm here mean?
>>
>> struct device have only single dma_pfn_offset, one can not have multiple
>> ranges defined. If we have then only the first is taken and the physical
>> address and dma address is discarded, only the dma_pfn_offset is stored
>> and used.
>>
>>> We have needed multi-range support for quite a while, as common broadcom
>>> SOCs do need it.  So patches for that are welcome at least from the
>>> DMA layer perspective (kinda similar to your pseudo code earlier)
>>
>> But do they have dma_pfn_offset != 0?
> 
> Well, with that I mean multiple ranges with different offsets.  Take
> a look at arch/mips/bmips/dma.c:__phys_to_dma() and friends.  This
> is an existing implementation for mips, but there are arm and arm64
> SOCs using the same logic on the market as well, and we'll want to
> support them eventually.

I see. My PoC patch was not too far off then ;)
So the plan is to have a generic implementation for all of the
architectures, right?

>> The dma_pfn_offset is _still_ applied to the mask we are trying to set
>> (and validate) via dma-direct.
> 
> And for the general case that is exactly the right thing to do, we
> just need to deal with really odd ZONE_DMA placements like yours.

I'm still not convinced; the point of the DMA mask, at least as I see
it, is to check that the dma address can be handled by the device (DMA
controller, peripheral with built-in DMA, etc.), not to check it
against a physical address.
Doing phys_to_dma() on the mask from dma_set_mask() is just wrong.

>>> We'll need to find the minimum change to make it work
>>> for now without switching ops, even if it isn't the correct one, and
>>> then work from there.
>>
>> Sure, but can we fix the regression by reverting to arm_ops for now only
>> if dma_pfn_offset is not 0? It used to work fine in the past at least it
>> appeared to work on K2 platforms.
> 
> But that will cause yet another regression in what we have just fixed
> with using the generic direct ops, at which points it turns into who
> screams louder.

Hehe, I see.
I'm genuinely curious why the k2 platform worked just fine with LPAE
(it needs it), while people had issues with LPAE on dra7/am5; the fix
for dra7/am5 then broke k2.
As far as I can see the main (only) difference is that k2 has
dma_pfn_offset = 0x780000, while dra7/am5 have it at 0 (a really
direct mapping).

> For now I'm tempted to throw something like this in, which is a bit
> of a hack, but actually 100% matches what various architectures have
> historically done:
> 
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..6ba9ee6e20bd 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -482,6 +482,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  {
>  	u64 min_mask;
>  
> +	if (mask >= DMA_BIT_MASK(32))
> +		return 1;
> +

Right, so we skip phys_to_dma() for the mask and trust that it will
work.

It does: audio and dmatest memcpy tests are just fine with this, and
MMC also probed with ADMA enabled.

As far as I can tell it works as well as falling back to the old arm
ops in the LPAE && dma_pfn_offset != 0 case.

Fwiw:
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>

Would you be comfortable sending this patch to mainline with
Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE
configs")
so it gets picked up for stable kernels as well?

>  	if (IS_ENABLED(CONFIG_ZONE_DMA))
>  		min_mask = DMA_BIT_MASK(zone_dma_bits);
>  	else
> 

Thank you,
- Péter


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-30 16:40                                   ` Christoph Hellwig
  2020-01-31 13:59                                     ` Peter Ujfalusi
  2020-01-31 14:00                                     ` Peter Ujfalusi
@ 2020-01-31 14:00                                     ` Peter Ujfalusi
  2020-02-03 17:08                                       ` Christoph Hellwig
  2 siblings, 1 reply; 30+ messages in thread
From: Peter Ujfalusi @ 2020-01-31 14:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, linux-arm-kernel, rogerq

Hi Christoph,

On 30/01/2020 18.40, Christoph Hellwig wrote:
> On Thu, Jan 30, 2020 at 03:04:37PM +0200, Peter Ujfalusi via iommu wrote:
>> On 30/01/2020 9.53, Christoph Hellwig wrote:
>>> [skipping the DT bits, as I'm everything but an expert on that..]
>>>
>>> On Mon, Jan 27, 2020 at 04:00:30PM +0200, Peter Ujfalusi wrote:
>>>> I agree on the phys_to_dma(). It should fail for addresses which does
>>>> not fall into any of the ranges.
>>>> It is just a that we in Linux don't have the concept atm for ranges, we
>>>> have only _one_ range which applies to every memory address.
>>>
>>> what does atm here mean?
>>
>> struct device have only single dma_pfn_offset, one can not have multiple
>> ranges defined. If we have then only the first is taken and the physical
>> address and dma address is discarded, only the dma_pfn_offset is stored
>> and used.
>>
>>> We have needed multi-range support for quite a while, as common broadcom
>>> SOCs do need it.  So patches for that are welcome at least from the
>>> DMA layer perspective (kinda similar to your pseudo code earlier)
>>
>> But do they have dma_pfn_offset != 0?
> 
> Well, with that I mean multiple ranges with different offsets.  Take
> a look at arch/mips/bmips/dma.c:__phys_to_dma() and friends.  This
> is an existing implementation for mips, but there are arm and arm64
> SOCs using the same logic on the market as well, and we'll want to
> support them eventually.

I see. My PoC patch was not too far off then ;)
So the plan is to have a generic implementation for all of the
architectures, right?

>> The dma_pfn_offset is _still_ applied to the mask we are trying to set
>> (and validate) via dma-direct.
> 
> And for the general case that is exactly the right thing to do, we
> just need to deal with really odd ZONE_DMA placements like yours.

I'm still not convinced; the point of the DMA mask, at least as I see
it, is to check that the dma address can be handled by the device (DMA
controller, peripheral with built-in DMA, etc.), not to check it
against a physical address.
Doing phys_to_dma() on the mask from dma_set_mask() is just wrong.

>>> We'll need to find the minimum change to make it work
>>> for now without switching ops, even if it isn't the correct one, and
>>> then work from there.
>>
>> Sure, but can we fix the regression by reverting to arm_ops for now only
>> if dma_pfn_offset is not 0? It used to work fine in the past at least it
>> appeared to work on K2 platforms.
> 
> But that will cause yet another regression in what we have just fixed
> with using the generic direct ops, at which points it turns into who
> screams louder.

Hehe, I see.
I'm genuinely curious why the k2 platform worked just fine with LPAE
(it needs it), while people had issues with LPAE on dra7/am5; the fix
for dra7/am5 then broke k2.
As far as I can see the main (only) difference is that k2 has
dma_pfn_offset = 0x780000, while dra7/am5 have it at 0 (a really
direct mapping).

> For now I'm tempted to throw something like this in, which is a bit
> of a hack, but actually 100% matches what various architectures have
> historically done:
> 
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..6ba9ee6e20bd 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -482,6 +482,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  {
>  	u64 min_mask;
>  
> +	if (mask >= DMA_BIT_MASK(32))
> +		return 1;
> +

Right, so we skip phys_to_dma() for the mask and trust that it will
work.

It does: audio and dmatest memcpy tests are just fine with this, and
MMC also probed with ADMA enabled.

As far as I can tell it works as well as falling back to the old arm
ops in the LPAE && dma_pfn_offset != 0 case.

Fwiw:
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>

Would you be comfortable sending this patch to mainline with
Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE
configs")
so it gets picked up for stable kernels as well?

>  	if (IS_ENABLED(CONFIG_ZONE_DMA))
>  		min_mask = DMA_BIT_MASK(zone_dma_bits);
>  	else
> 

Thank you,
- Péter


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-01-31 14:00                                     ` Peter Ujfalusi
@ 2020-02-03 17:08                                       ` Christoph Hellwig
  2020-02-05 10:19                                         ` Peter Ujfalusi
  0 siblings, 1 reply; 30+ messages in thread
From: Christoph Hellwig @ 2020-02-03 17:08 UTC (permalink / raw)
  To: Peter Ujfalusi
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, Christoph Hellwig, linux-arm-kernel, rogerq

On Fri, Jan 31, 2020 at 04:00:20PM +0200, Peter Ujfalusi wrote:
> I see. My PoC patch was not too off then ;)
> So the plan is to have a generic implementation for all of the
> architecture, right?

I don't know of a concrete plan, but that's definitely what I'd like
to see.

> >> The dma_pfn_offset is _still_ applied to the mask we are trying to set
> >> (and validate) via dma-direct.
> > 
> > And for the general case that is exactly the right thing to do, we
> > just need to deal with really odd ZONE_DMA placements like yours.
> 
> I'm still not convinced, the point of the DMA mask, at least how I see
> it, to check that the dma address can be handled by the device (DMA,
> peripheral with built in DMA, etc), it is not against physical address.
> Doing phys_to_dma() on the mask from the dma_set_mask() is just wrong.

We have a translation between the addresses that the device sees and
those that the CPU sees.  The device can address N bits of address
space as seen from the device.  The addresses encoded in max_pfn,
zone_dma_bits or the hardcoded 32 in the ZONE_DMA32 case are CPU
addresses.  So no, we can't blindly compare those.


> > But that will cause yet another regression in what we have just fixed
> > with using the generic direct ops, at which points it turns into who
> > screams louder.
> 
> Hehe, I see.
> I genuinely curious why k2 platform worked just fine with LPAE (it needs
> it), but guys had issues with LPAE on dra7/am5.
> The fix for dra7/am5 broke k2.
> As far as I can see the main (only) difference is that k2 have
> dma_pfn_offset = 0x780000, while dra7/am5 have it 0 (really direct mapping).

How much memory does the platform have?  Once you are above 32 bits'
worth of address space, devices with a 32-bit DMA mask can't address
all the memory.  Now if k2, for example, only had less than 4G of
memory, but at addresses over 4G, and the offset compensates for the
placement of the DRAM, it works without bounce buffering and thus
didn't need swiotlb.  But any platform that has DRAM that is not
addressable will need swiotlb.

> >  	u64 min_mask;
> >  
> > +	if (mask >= DMA_BIT_MASK(32))
> > +		return 1;
> > +
> 
> Right, so skipping phys_to_dma() for the mask and believing that it will
> work..
> 
> It does: audio and dmatest memcpy tests are just fine with this, MMC
> also probed with ADMA enabled.
> 
> As far as I can tell it works as well as falling back to the old arm ops
> in case of LPAE && dma_pfn_offset != 0
> 
> Fwiw:
> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> 
> Would you be comfortable to send this patch for mainline with
> Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE
> configs")

That is the big question.  I don't feel overly comfortable as I've been
trying to get this right, but so far it seems like the least bad option.
I'll send out a proper patch with updated comments and will see what
people think.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid
  2020-02-03 17:08                                       ` Christoph Hellwig
@ 2020-02-05 10:19                                         ` Peter Ujfalusi
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Ujfalusi @ 2020-02-05 10:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: robh, vigneshr, konrad.wilk, linux, linux-kernel, iommu,
	Robin Murphy, linux-arm-kernel, rogerq



On 03/02/2020 19.08, Christoph Hellwig wrote:
> On Fri, Jan 31, 2020 at 04:00:20PM +0200, Peter Ujfalusi wrote:
>> I see. My PoC patch was not too off then ;)
>> So the plan is to have a generic implementation for all of the
>> architecture, right?
> 
> І don't know of a concrete plan, but that's defintively what I'd like
> to see.
> 
>>>> The dma_pfn_offset is _still_ applied to the mask we are trying to set
>>>> (and validate) via dma-direct.
>>>
>>> And for the general case that is exactly the right thing to do, we
>>> just need to deal with really odd ZONE_DMA placements like yours.
>>
>> I'm still not convinced, the point of the DMA mask, at least how I see
>> it, to check that the dma address can be handled by the device (DMA,
>> peripheral with built in DMA, etc), it is not against physical address.
>> Doing phys_to_dma() on the mask from the dma_set_mask() is just wrong.
> 
> We have a translation between the addresses that the device sees, and
> those that the CPU sees.  The device can address N bits of address space
> as seen from the device.  The addresses encoded in max_pfn,
> zone_dma_bits or the harcoded 32 in the zone dma 32 case are CPU address.
> So no, we can't blindly compare those.

Right, thanks for the explanation.

>>> But that will cause yet another regression in what we have just fixed
>>> with using the generic direct ops, at which points it turns into who
>>> screams louder.
>>
>> Hehe, I see.
>> I genuinely curious why k2 platform worked just fine with LPAE (it needs
>> it), but guys had issues with LPAE on dra7/am5.
>> The fix for dra7/am5 broke k2.
>> As far as I can see the main (only) difference is that k2 have
>> dma_pfn_offset = 0x780000, while dra7/am5 have it 0 (really direct mapping).
> 
> How much memory does the platform have?

The boards that are bootable in mainline have a maximum of 2G; there
might be custom boards with more RAM, but I'm not aware of any.

> Once you are above 32-bits worth
> of address space devices with a 32-bit DMA mask can't address all the
> memory.  Now if k2 for example only had less than 4G of memory, but at
> addresses over 4G, and the offset compensates for the offset of the DRAM
> it works without bounce buffering and thus didn't need swiotlb.  But any
> platform that has DRAM that is not addressable will need swiotlb.

I see. Since we have a maximum of 2G, which is mirrored at 0x80000000
for devices, we never needed the assistance of swiotlb for bounce
buffering, and that's why the arm ops worked fine.

> 
>>>  	u64 min_mask;
>>>  
>>> +	if (mask >= DMA_BIT_MASK(32))
>>> +		return 1;
>>> +
>>
>> Right, so skipping phys_to_dma() for the mask and believing that it will
>> work..
>>
>> It does: audio and dmatest memcpy tests are just fine with this, MMC
>> also probed with ADMA enabled.
>>
>> As far as I can tell it works as well as falling back to the old arm ops
>> in case of LPAE && dma_pfn_offset != 0
>>
>> Fwiw:
>> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
>>
>> Would you be comfortable to send this patch for mainline with
>> Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE
>> configs")
> 
> That is the big question.  I don't feel overly comfortable as I've been
> trying to get this right, but so far it seems like the least bad option.
> I'll send out a proper patch with updated comments and will see what
> people think.

I understand, and thank you for the patch; it makes the k2 platform
work again!

- Péter


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2020-02-05 10:19 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-09 14:20 add swiotlb support to arm32 Christoph Hellwig
2019-07-09 14:20 ` [PATCH 1/2] dma-mapping check pfn validity in dma_common_{mmap, get_sgtable} Christoph Hellwig
2019-07-09 14:20 ` [PATCH 2/2] arm: use swiotlb for bounce buffer on LPAE configs Christoph Hellwig
2019-07-24 17:23   ` Nicolas Saenz Julienne
2019-07-24 17:55     ` Christoph Hellwig
2019-12-19 13:10   ` Peter Ujfalusi
2019-12-19 15:02     ` Christoph Hellwig
2019-12-19 15:20       ` Peter Ujfalusi
2020-01-08  8:28         ` Peter Ujfalusi
2020-01-08 12:21           ` Robin Murphy
2020-01-08 14:00             ` Peter Ujfalusi
2020-01-08 15:20               ` Robin Murphy
2020-01-09 14:49                 ` Christoph Hellwig
2020-01-14 10:43                   ` Peter Ujfalusi
2020-01-14 16:43                     ` [PoC] arm: dma-mapping: direct: Apply dma_pfn_offset only when it is valid Peter Ujfalusi
2020-01-14 18:19                       ` Robin Murphy
2020-01-15 11:50                         ` Peter Ujfalusi
2020-01-16 19:13                           ` Robin Murphy
2020-01-27 14:00                             ` Peter Ujfalusi
2020-01-30  7:53                               ` Christoph Hellwig
2020-01-30 13:04                                 ` Peter Ujfalusi
2020-01-30 16:40                                   ` Christoph Hellwig
2020-01-31 13:59                                     ` Peter Ujfalusi
2020-01-31 14:00                                     ` Peter Ujfalusi
2020-01-31 14:00                                     ` Peter Ujfalusi
2020-02-03 17:08                                       ` Christoph Hellwig
2020-02-05 10:19                                         ` Peter Ujfalusi
2019-07-17 13:21 ` add swiotlb support to arm32 Vignesh Raghavendra
2019-07-19 12:33   ` Christoph Hellwig
2019-07-24 15:37   ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).