* [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd, Joerg Roedel,
	Christian Borntraeger, Michal Nazarewicz, Marek Szyprowski,
	Alan Stern, Yoshinori Sato, Rich Felker, Roger Quadros,
	Rob Herring, Mark Rutland, Doug Ledford

It seems that the addition of cache support for M-class CPUs uncovered a
latent bug in DMA usage. The NOMMU memory model has been treated as always
consistent; however, for R- and M-class CPUs memory can be covered by an
MPU, which in turn might configure RAM as Normal, i.e. bufferable and
cacheable. This breaks dma_alloc_coherent() and friends, since data can
now get stuck in caches or be buffered.

This patch set tries to address the issue by providing a region of memory
suitable for consistent DMA operations. Such a region is expected to be
marked non-cacheable by the MPU. Robin suggested advertising such memory
as a reserved shared-dma-pool, rather than using a homebrew command-line
option, and extending dma-coherent to provide a default DMA area in a
similar way to what is done for CMA (PATCH 4/7). This lets us offload all
the bookkeeping onto the generic coherent DMA framework, and it might be
reused by other architectures such as c6x and blackfin.
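
From a driver's point of view nothing changes; as a rough, hedged sketch
(the device, driver name and size below are made up for illustration),
once such a region is declared the usual API is simply served from it:

	#include <linux/dma-mapping.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		dma_addr_t dma_handle;
		void *buf;

		/*
		 * With a default (or per-device) coherent pool declared,
		 * this allocation is expected to come from the reserved
		 * non-cacheable region; the driver code is unchanged.
		 */
		buf = dma_alloc_coherent(&pdev->dev, 4096, &dma_handle,
					 GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* ... program the device with dma_handle ... */

		dma_free_coherent(&pdev->dev, 4096, buf, dma_handle);
		return 0;
	}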

While reviewing/testing previous versions of the patch set it turned out
that dma-coherent does not take the "dma-ranges" device tree property into
account, so this is addressed in PATCH 3/7.

For ARM, a dedicated DMA region is required unless one of the following
holds:
 - MMU/MPU is off
 - the CPU is v7m without cache support
 - the device is coherent

If one of the above conditions is true, DMA operations are forced to be
coherent and wired up with dma_noop_ops.

To make life easier, NOMMU DMA operations are kept in a separate
compilation unit.

Since the issue was reported at the same time as Benjamin sent his patch
[1] to allow mmap for NOMMU, his case is also addressed in this series
(PATCH 1/7 and PATCH 2/7).

Thanks!

[1] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8633/1

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: Roger Quadros <rogerq@ti.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Doug Ledford <dledford@redhat.com>

Changelog:
	    v3 -> v4
	       - rebased on v4.11-rc7
	       - made CONFIG_ARM_DMA_MEM_BUFFERABLE optional for CPU_V7M
	       - added Arnd's Acked-by

	    v2 -> v3
	       - fixed warnings reported by Alexandre and kbuild robot

	    v1 -> v2
	       - rebased on v4.11-rc1
	       - added Robin's Reviewed-by
	       - dedicated flag is introduced to use dev->dma_pfn_offset
	         rather than mem->device_base in case memory region is
		 configured via device tree (so Tested-by discarded there)

	RFC v6 -> v1
	       - dropped RFC tag
	       - added Alexandre's Tested-by


Vladimir Murzin (7):
  dma: Take into account dma_pfn_offset
  dma: Add simple dma_noop_mmap
  drivers: dma-coherent: Account dma_pfn_offset when used with device
    tree
  drivers: dma-coherent: Introduce default DMA pool
  ARM: NOMMU: Introduce dma operations for noMMU
  ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  ARM: dma-mapping: Remove traces of NOMMU code

 .../bindings/reserved-memory/reserved-memory.txt   |   3 +
 arch/arm/Kconfig                                   |   1 +
 arch/arm/include/asm/dma-mapping.h                 |   2 +-
 arch/arm/mm/Kconfig                                |   4 +-
 arch/arm/mm/Makefile                               |   5 +-
 arch/arm/mm/dma-mapping-nommu.c                    | 253 +++++++++++++++++++++
 arch/arm/mm/dma-mapping.c                          |  29 +--
 drivers/base/dma-coherent.c                        |  74 +++++-
 lib/dma-noop.c                                     |  29 ++-
 9 files changed, 355 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm/mm/dma-mapping-nommu.c

-- 
2.0.0

* [PATCH v4 1/7] dma: Take into account dma_pfn_offset
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd, Joerg Roedel,
	Christian Borntraeger

Even though dma-noop-ops assumes a 1:1 memory mapping, the DMA memory
range can differ from RAM. For example, the ARM STM32F4 MCU offers the
possibility of remapping SDRAM from 0xc000_0000 to 0x0 to get a CPU
performance boost, while DMA continues to see SDRAM at 0xc000_0000. This
difference in mapping is described by the device-tree "dma-ranges"
property, which results in dev->dma_pfn_offset being set to a nonzero
value. To handle such cases, take dma_pfn_offset into account.
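
The conversion boils down to a single subtraction; here is a hedged,
illustrative-only sketch of it (the platform numbers and the helper name
are made up; the STM32F4 case relies on the same arithmetic):

	#include <linux/pfn.h>
	#include <linux/dma-mapping.h>

	/*
	 * Hypothetical platform: the CPU sees RAM at 0x80000000, the DMA
	 * master sees the same RAM at 0x00000000, so "dma-ranges" makes
	 * dev->dma_pfn_offset == 0x80000.
	 */
	static dma_addr_t foo_phys_to_bus(struct device *dev, phys_addr_t paddr)
	{
		/* e.g. 0x80001000 - PFN_PHYS(0x80000) == 0x00001000 */
		return (dma_addr_t)(paddr - PFN_PHYS(dev->dma_pfn_offset));
	}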

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Reported-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 lib/dma-noop.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/lib/dma-noop.c b/lib/dma-noop.c
index de26c8b..ff4ef5e 100644
--- a/lib/dma-noop.c
+++ b/lib/dma-noop.c
@@ -7,6 +7,7 @@
 #include <linux/mm.h>
 #include <linux/dma-mapping.h>
 #include <linux/scatterlist.h>
+#include <linux/pfn.h>
 
 static void *dma_noop_alloc(struct device *dev, size_t size,
 			    dma_addr_t *dma_handle, gfp_t gfp,
@@ -16,7 +17,8 @@ static void *dma_noop_alloc(struct device *dev, size_t size,
 
 	ret = (void *)__get_free_pages(gfp, get_order(size));
 	if (ret)
-		*dma_handle = virt_to_phys(ret);
+		*dma_handle = virt_to_phys(ret) - PFN_PHYS(dev->dma_pfn_offset);
+
 	return ret;
 }
 
@@ -32,7 +34,7 @@ static dma_addr_t dma_noop_map_page(struct device *dev, struct page *page,
 				      enum dma_data_direction dir,
 				      unsigned long attrs)
 {
-	return page_to_phys(page) + offset;
+	return page_to_phys(page) + offset - PFN_PHYS(dev->dma_pfn_offset);
 }
 
 static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
@@ -47,7 +49,7 @@ static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nent
 
 		BUG_ON(!sg_page(sg));
 		va = sg_virt(sg);
-		sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
+		sg_dma_address(sg) = (dma_addr_t)(virt_to_phys(va) - PFN_PHYS(dev->dma_pfn_offset));
 		sg_dma_len(sg) = sg->length;
 	}
 
-- 
2.0.0

* [PATCH v4 2/7] dma: Add simple dma_noop_mmap
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd, Joerg Roedel,
	Christian Borntraeger

This patch adds a simple implementation of mmap to dma_noop_ops.
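
As a usage illustration (a hedged sketch; the driver, struct and field
names below are hypothetical and not part of this patch), a driver can now
let userspace map a buffer obtained from dma_alloc_coherent():

	#include <linux/dma-mapping.h>
	#include <linux/fs.h>

	struct foo_dev {
		struct device *dev;
		void *buf;		/* from dma_alloc_coherent() */
		dma_addr_t buf_dma;
		size_t buf_size;
	};

	static int foo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct foo_dev *foo = file->private_data;

		/* ends up in dma_noop_mmap() when dma_noop_ops is in use */
		return dma_mmap_coherent(foo->dev, vma, foo->buf,
					 foo->buf_dma, foo->buf_size);
	}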

Cc: Joerg Roedel <jroedel@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Reported-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 lib/dma-noop.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/lib/dma-noop.c b/lib/dma-noop.c
index ff4ef5e..0acc3f6 100644
--- a/lib/dma-noop.c
+++ b/lib/dma-noop.c
@@ -66,6 +66,26 @@ static int dma_noop_supported(struct device *dev, u64 mask)
 	return 1;
 }
 
+static int dma_noop_mmap(struct device *dev, struct vm_area_struct *vma,
+			 void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			 unsigned long attrs)
+{
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
+	unsigned long off = vma->vm_pgoff;
+	int ret = -ENXIO;
+
+	if (off < count && user_count <= (count - off)) {
+		ret = remap_pfn_range(vma, vma->vm_start,
+				      pfn + off,
+				      user_count << PAGE_SHIFT,
+				      vma->vm_page_prot);
+	}
+
+	return ret;
+}
+
 const struct dma_map_ops dma_noop_ops = {
 	.alloc			= dma_noop_alloc,
 	.free			= dma_noop_free,
@@ -73,6 +93,7 @@ const struct dma_map_ops dma_noop_ops = {
 	.map_sg			= dma_noop_map_sg,
 	.mapping_error		= dma_noop_mapping_error,
 	.dma_supported		= dma_noop_supported,
+	.mmap			= dma_noop_mmap,
 };
 
 EXPORT_SYMBOL(dma_noop_ops);
-- 
2.0.0

* [PATCH v4 3/7] drivers: dma-coherent: Account dma_pfn_offset when used with device tree
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd, Michal Nazarewicz,
	Marek Szyprowski, Roger Quadros

dma_declare_coherent_memory() and friends are designed to account for a
difference between CPU and device addresses. However, when they are used
with reserved memory regions, the assumption is that the CPU and the
device have the same view of the address space. This assumption becomes
invalid when reserved memory for coherent DMA allocations is referenced by
a device with a non-empty "dma-ranges" property.

Simply deriving the device address as rmem->base + dev->dma_pfn_offset
would not work, because a reserved memory region can be shared; so this
patch expresses the device address in terms of the CPU address and the
device's dma_pfn_offset when the memory reservation has been done via the
device tree. Non-device-tree users continue to use the old scheme.
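
To illustrate why the base has to be computed per device (a hedged sketch;
the addresses, offsets and helper name are made up), a single shared pool
yields a different device-visible base for each master, which is what the
new dma_get_device_base() computes at allocation time:

	#include <linux/dma-mapping.h>

	/*
	 * Hypothetical: one shared pool, two DMA masters.
	 *   pool CPU base 0x60000000 (pfn_base 0x60000)
	 *   devA: dma_pfn_offset == 0       -> base 0x60000000
	 *   devB: dma_pfn_offset == 0x20000 -> base 0x40000000
	 */
	static dma_addr_t foo_device_base(struct device *dev,
					  unsigned long pfn_base)
	{
		return (dma_addr_t)(pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
	}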

Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Roger Quadros <rogerq@ti.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 drivers/base/dma-coherent.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 640a7e6..99c9695 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -16,8 +16,18 @@ struct dma_coherent_mem {
 	int		flags;
 	unsigned long	*bitmap;
 	spinlock_t	spinlock;
+	bool		use_dev_dma_pfn_offset;
 };
 
+static inline dma_addr_t dma_get_device_base(struct device *dev,
+					     struct dma_coherent_mem * mem)
+{
+	if (mem->use_dev_dma_pfn_offset)
+		return (mem->pfn_base - dev->dma_pfn_offset) << PAGE_SHIFT;
+	else
+		return mem->device_base;
+}
+
 static bool dma_init_coherent_memory(
 	phys_addr_t phys_addr, dma_addr_t device_addr, size_t size, int flags,
 	struct dma_coherent_mem **mem)
@@ -133,7 +143,7 @@ void *dma_mark_declared_memory_occupied(struct device *dev,
 		return ERR_PTR(-EINVAL);
 
 	spin_lock_irqsave(&mem->spinlock, flags);
-	pos = (device_addr - mem->device_base) >> PAGE_SHIFT;
+	pos = PFN_DOWN(device_addr - dma_get_device_base(dev, mem));
 	err = bitmap_allocate_region(mem->bitmap, pos, get_order(size));
 	spin_unlock_irqrestore(&mem->spinlock, flags);
 
@@ -186,7 +196,7 @@ int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 	/*
 	 * Memory was found in the per-device area.
 	 */
-	*dma_handle = mem->device_base + (pageno << PAGE_SHIFT);
+	*dma_handle = dma_get_device_base(dev, mem) + (pageno << PAGE_SHIFT);
 	*ret = mem->virt_base + (pageno << PAGE_SHIFT);
 	dma_memory_map = (mem->flags & DMA_MEMORY_MAP);
 	spin_unlock_irqrestore(&mem->spinlock, flags);
@@ -299,6 +309,7 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 			&rmem->base, (unsigned long)rmem->size / SZ_1M);
 		return -ENODEV;
 	}
+	mem->use_dev_dma_pfn_offset = true;
 	rmem->priv = mem;
 	dma_assign_coherent_memory(dev, mem);
 	return 0;
-- 
2.0.0

* [PATCH v4 4/7] drivers: dma-coherent: Introduce default DMA pool
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd, Michal Nazarewicz,
	Marek Szyprowski, Rob Herring, Mark Rutland

This patch introduces a default coherent DMA pool, similar to the default
CMA area concept. To keep other users safe, the code is kept under
CONFIG_ARM.

Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 .../bindings/reserved-memory/reserved-memory.txt   |  3 ++
 drivers/base/dma-coherent.c                        | 59 +++++++++++++++++++---
 2 files changed, 55 insertions(+), 7 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index 3da0ebd..16291f2 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -68,6 +68,9 @@ Linux implementation note:
 - If a "linux,cma-default" property is present, then Linux will use the
   region for the default pool of the contiguous memory allocator.
 
+- If a "linux,dma-default" property is present, then Linux will use the
+  region for the default pool of the consistent DMA allocator.
+
 Device node references to reserved memory
 -----------------------------------------
 Regions in the /reserved-memory node may be referenced by other device
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index 99c9695..2ae24c2 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -19,6 +19,15 @@ struct dma_coherent_mem {
 	bool		use_dev_dma_pfn_offset;
 };
 
+static struct dma_coherent_mem *dma_coherent_default_memory __ro_after_init;
+
+static inline struct dma_coherent_mem *dev_get_coherent_memory(struct device *dev)
+{
+	if (dev && dev->dma_mem)
+		return dev->dma_mem;
+	return dma_coherent_default_memory;
+}
+
 static inline dma_addr_t dma_get_device_base(struct device *dev,
 					     struct dma_coherent_mem * mem)
 {
@@ -93,6 +102,9 @@ static void dma_release_coherent_memory(struct dma_coherent_mem *mem)
 static int dma_assign_coherent_memory(struct device *dev,
 				      struct dma_coherent_mem *mem)
 {
+	if (!dev)
+		return -ENODEV;
+
 	if (dev->dma_mem)
 		return -EBUSY;
 
@@ -171,15 +183,12 @@ EXPORT_SYMBOL(dma_mark_declared_memory_occupied);
 int dma_alloc_from_coherent(struct device *dev, ssize_t size,
 				       dma_addr_t *dma_handle, void **ret)
 {
-	struct dma_coherent_mem *mem;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 	int order = get_order(size);
 	unsigned long flags;
 	int pageno;
 	int dma_memory_map;
 
-	if (!dev)
-		return 0;
-	mem = dev->dma_mem;
 	if (!mem)
 		return 0;
 
@@ -233,7 +242,7 @@ EXPORT_SYMBOL(dma_alloc_from_coherent);
  */
 int dma_release_from_coherent(struct device *dev, int order, void *vaddr)
 {
-	struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 
 	if (mem && vaddr >= mem->virt_base && vaddr <
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -267,7 +276,7 @@ EXPORT_SYMBOL(dma_release_from_coherent);
 int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 			   void *vaddr, size_t size, int *ret)
 {
-	struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
 
 	if (mem && vaddr >= mem->virt_base && vaddr + size <=
 		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
@@ -297,6 +306,8 @@ EXPORT_SYMBOL(dma_mmap_from_coherent);
 #include <linux/of_fdt.h>
 #include <linux/of_reserved_mem.h>
 
+static struct reserved_mem *dma_reserved_default_memory __initdata;
+
 static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 {
 	struct dma_coherent_mem *mem = rmem->priv;
@@ -318,7 +329,8 @@ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
 static void rmem_dma_device_release(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-	dev->dma_mem = NULL;
+	if (dev)
+		dev->dma_mem = NULL;
 }
 
 static const struct reserved_mem_ops rmem_dma_ops = {
@@ -338,6 +350,12 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
 		pr_err("Reserved memory: regions without no-map are not yet supported\n");
 		return -EINVAL;
 	}
+
+	if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) {
+		WARN(dma_reserved_default_memory,
+		     "Reserved memory: region for default DMA coherent area is redefined\n");
+		dma_reserved_default_memory = rmem;
+	}
 #endif
 
 	rmem->ops = &rmem_dma_ops;
@@ -345,5 +363,32 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
 		&rmem->base, (unsigned long)rmem->size / SZ_1M);
 	return 0;
 }
+
+static int __init dma_init_reserved_memory(void)
+{
+	const struct reserved_mem_ops *ops;
+	int ret;
+
+	if (!dma_reserved_default_memory)
+		return -ENOMEM;
+
+	ops = dma_reserved_default_memory->ops;
+
+	/*
+	 * We rely on rmem_dma_device_init() does not propagate error of
+	 * dma_assign_coherent_memory() for "NULL" device.
+	 */
+	ret = ops->device_init(dma_reserved_default_memory, NULL);
+
+	if (!ret) {
+		dma_coherent_default_memory = dma_reserved_default_memory->priv;
+		pr_info("DMA: default coherent area is set\n");
+	}
+
+	return ret;
+}
+
+core_initcall(dma_init_reserved_memory);
+
 RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);
 #endif
-- 
2.0.0

* [PATCH v4 5/7] ARM: NOMMU: Introduce dma operations for noMMU
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd

R- and M-class CPUs can have memory covered by an MPU, which in turn might
configure RAM as Normal, i.e. bufferable and cacheable. This breaks
dma_alloc_coherent() and friends, since data can now get stuck in caches
or be buffered.

This patch factors DMA support for the NOMMU configuration out into a
separate entity which provides dedicated dma_ops. Several cases have to be
handled there:
- configurations with an MMU/MPU set up
- configurations without an MMU/MPU set up
- the special case of M-class, since caches and the MPU are optional there

In general we rely on the default DMA area and/or per-device memory
reserves suitable for coherent DMA, so if such regions are set up,
coherent allocations come from there.

In case the MMU/MPU was not set up, we fall back to the normal page
allocator for DMA memory allocation.

On M-class CPUs without cache support (like Cortex-M3/M4), DMA operations
are forced to be coherent and wired up with dma-noop (this decision is
made based on the cacheid global variable); however, if caches are
detected and no coherent DMA region is given (either default or
per-device), DMA is disallowed even if the MPU is not set up - this is
because M-class CPUs implement a system memory map which defines part of
the address space as Normal memory.
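
As a usage illustration (a hedged sketch, not part of this patch; dev,
size, buf and dma_handle are assumed to exist in the surrounding driver
code): a driver that can manage coherency itself may still get memory when
no consistent region is available by asking for a non-consistent
allocation, which arm_nommu_dma_alloc() forwards to the generic allocator,
and by then using the streaming sync hooks these ops provide:

	buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
			      DMA_ATTR_NON_CONSISTENT);
	if (!buf)
		return -ENOMEM;

	/* CPU fills the buffer, then hands it over to the device */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_TO_DEVICE);

	/* ... after the device is done, make the data visible to the CPU */
	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

	dma_free_attrs(dev, size, buf, dma_handle, DMA_ATTR_NON_CONSISTENT);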

Reported-by: Alexandre Torgue <alexandre.torgue@st.com>
Reported-by: Andras Szemzo <sza@esh.hu>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/Kconfig                   |   1 +
 arch/arm/include/asm/dma-mapping.h |   2 +-
 arch/arm/mm/Makefile               |   5 +-
 arch/arm/mm/dma-mapping-nommu.c    | 253 +++++++++++++++++++++++++++++++++++++
 4 files changed, 257 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/mm/dma-mapping-nommu.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 6ab63fa..8f0b6ca 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -22,6 +22,7 @@ config ARM
 	select CLONE_BACKWARDS
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select DMA_NOOP_OPS if !MMU
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB
 	select GENERIC_ALLOCATOR
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 7166569..63270de 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -20,7 +20,7 @@ static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
 	if (dev && dev->dma_ops)
 		return dev->dma_ops;
-	return &arm_dma_ops;
+	return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_noop_ops;
 }
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 54857bc..ea80df7 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -2,9 +2,8 @@
 # Makefile for the linux arm-specific parts of the memory manager.
 #
 
-obj-y				:= dma-mapping.o extable.o fault.o init.o \
-				   iomap.o
-
+obj-y				:= extable.o fault.o init.o iomap.o
+obj-y				+= dma-mapping$(MMUEXT).o
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
 				   mmap.o pgd.o mmu.o pageattr.o
 
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
new file mode 100644
index 0000000..3ba3003
--- /dev/null
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -0,0 +1,253 @@
+/*
+ *  Based on linux/arch/arm/mm/dma-mapping.c
+ *
+ *  Copyright (C) 2000-2004 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+
+#include <asm/cachetype.h>
+#include <asm/cacheflush.h>
+#include <asm/outercache.h>
+#include <asm/cp15.h>
+
+#include "dma.h"
+
+/*
+ *  dma_noop_ops is used if
+ *   - MMU/MPU is off
+ *   - cpu is v7m w/o cache support
+ *   - device is coherent
+ *  otherwise arm_nommu_dma_ops is used.
+ *
+ *  arm_nommu_dma_ops rely on consistent DMA memory (please, refer to
+ *  [1] on how to declare such memory).
+ *
+ *  [1] Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+ */
+
+static void *arm_nommu_dma_alloc(struct device *dev, size_t size,
+				 dma_addr_t *dma_handle, gfp_t gfp,
+				 unsigned long attrs)
+
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+
+	/*
+	 * We are here because:
+	 * - no consistent DMA region has been defined, so we can't
+	 *   continue.
+	 * - there is no space left in consistent DMA region, so we
+	 *   only can fallback to generic allocator if we are
+	 *   advertised that consistency is not required.
+	 */
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		return ops->alloc(dev, size, dma_handle, gfp, attrs);
+
+	WARN_ON_ONCE(1);
+	return NULL;
+}
+
+static void arm_nommu_dma_free(struct device *dev, size_t size,
+			       void *cpu_addr, dma_addr_t dma_addr,
+			       unsigned long attrs)
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		ops->free(dev, size, cpu_addr, dma_addr, attrs);
+	else
+		WARN_ON_ONCE(1);
+
+	return;
+}
+
+static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			      void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			      unsigned long attrs)
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+	int ret;
+
+	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+
+	WARN_ON_ONCE(1);
+	return -ENXIO;
+}
+
+static void __dma_page_cpu_to_dev(phys_addr_t paddr, size_t size,
+				  enum dma_data_direction dir)
+{
+	dmac_map_area(__va(paddr), size, dir);
+
+	if (dir == DMA_FROM_DEVICE)
+		outer_inv_range(paddr, paddr + size);
+	else
+		outer_clean_range(paddr, paddr + size);
+}
+
+static void __dma_page_dev_to_cpu(phys_addr_t paddr, size_t size,
+				  enum dma_data_direction dir)
+{
+	if (dir != DMA_TO_DEVICE) {
+		outer_inv_range(paddr, paddr + size);
+		dmac_unmap_area(__va(paddr), size, dir);
+	}
+}
+
+static dma_addr_t arm_nommu_dma_map_page(struct device *dev, struct page *page,
+					 unsigned long offset, size_t size,
+					 enum dma_data_direction dir,
+					 unsigned long attrs)
+{
+	dma_addr_t handle = page_to_phys(page) + offset;
+
+	__dma_page_cpu_to_dev(handle, size, dir);
+
+	return handle;
+}
+
+static void arm_nommu_dma_unmap_page(struct device *dev, dma_addr_t handle,
+				     size_t size, enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	__dma_page_dev_to_cpu(handle, size, dir);
+}
+
+
+static int arm_nommu_dma_map_sg(struct device *dev, struct scatterlist *sgl,
+				int nents, enum dma_data_direction dir,
+				unsigned long attrs)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sgl, sg, nents, i) {
+		sg_dma_address(sg) = sg_phys(sg);
+		sg_dma_len(sg) = sg->length;
+		__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
+	}
+
+	return nents;
+}
+
+static void arm_nommu_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
+				   int nents, enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+static void arm_nommu_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	__dma_page_cpu_to_dev(handle, size, dir);
+}
+
+static void arm_nommu_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	__dma_page_cpu_to_dev(handle, size, dir);
+}
+
+static void arm_nommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
+					     int nents, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+static void arm_nommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
+					  int nents, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+const struct dma_map_ops arm_nommu_dma_ops = {
+	.alloc			= arm_nommu_dma_alloc,
+	.free			= arm_nommu_dma_free,
+	.mmap			= arm_nommu_dma_mmap,
+	.map_page		= arm_nommu_dma_map_page,
+	.unmap_page		= arm_nommu_dma_unmap_page,
+	.map_sg			= arm_nommu_dma_map_sg,
+	.unmap_sg		= arm_nommu_dma_unmap_sg,
+	.sync_single_for_device	= arm_nommu_dma_sync_single_for_device,
+	.sync_single_for_cpu	= arm_nommu_dma_sync_single_for_cpu,
+	.sync_sg_for_device	= arm_nommu_dma_sync_sg_for_device,
+	.sync_sg_for_cpu	= arm_nommu_dma_sync_sg_for_cpu,
+};
+EXPORT_SYMBOL(arm_nommu_dma_ops);
+
+static const struct dma_map_ops *arm_nommu_get_dma_map_ops(bool coherent)
+{
+	return coherent ? &dma_noop_ops : &arm_nommu_dma_ops;
+}
+
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			const struct iommu_ops *iommu, bool coherent)
+{
+	const struct dma_map_ops *dma_ops;
+
+	if (IS_ENABLED(CONFIG_CPU_V7M)) {
+		/*
+		 * Cache support for v7m is optional, so can be treated as
+		 * coherent if no cache has been detected. Note that it is not
+		 * enough to check if MPU is in use or not since in absense of
+		 * MPU system memory map is used.
+		 */
+		dev->archdata.dma_coherent = (cacheid) ? coherent : true;
+	} else {
+		/*
+		 * Assume coherent DMA in case MMU/MPU has not been set up.
+		 */
+		dev->archdata.dma_coherent = (get_cr() & CR_M) ? coherent : true;
+	}
+
+	dma_ops = arm_nommu_get_dma_map_ops(dev->archdata.dma_coherent);
+
+	set_dma_ops(dev, dma_ops);
+}
+
+void arch_teardown_dma_ops(struct device *dev)
+{
+}
+
+int dma_supported(struct device *dev, u64 mask)
+{
+	return 1;
+}
+
+EXPORT_SYMBOL(dma_supported);
+
+#define PREALLOC_DMA_DEBUG_ENTRIES	4096
+
+static int __init dma_debug_do_init(void)
+{
+	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
+	return 0;
+}
+core_initcall(dma_debug_do_init);
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 5/7] ARM: NOMMU: Introduce dma operations for noMMU
@ 2017-04-24 10:16   ` Vladimir Murzin
  0 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel

R/M classes of cpus can have memory covered by MPU which in turn might
configure RAM as Normal i.e. bufferable and cacheable. It breaks
dma_alloc_coherent() and friends, since data can stuck in caches now
or be buffered.

This patch factors out DMA support for NOMMU configuration into
separate entity which provides dedicated dma_ops. We have to handle
there several cases:
- configurations with MMU/MPU setup
- configurations without MMU/MPU setup
- special case for M-class, since caches and MPU there are optional

In general we rely on default DMA area for coherent allocations or/and
per-device memory reserves suitable for coherent DMA, so if such
regions are set coherent allocations go from there.

In case MMU/MPU was not setup we fallback to normal page allocator for
DMA memory allocation.

In case we run M-class cpus, for configuration without cache support
(like Cortex-M3/M4) dma operations are forced to be coherent and wired
with dma-noop (such decision is made based on cacheid global
variable); however, if caches are detected there and no DMA coherent
region is given (either default or per-device), dma is disallowed even
MPU is not set - it is because M-class implement system memory map
which defines part of address space as Normal memory.

Reported-by: Alexandre Torgue <alexandre.torgue@st.com>
Reported-by: Andras Szemzo <sza@esh.hu>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/Kconfig                   |   1 +
 arch/arm/include/asm/dma-mapping.h |   2 +-
 arch/arm/mm/Makefile               |   5 +-
 arch/arm/mm/dma-mapping-nommu.c    | 253 +++++++++++++++++++++++++++++++++++++
 4 files changed, 257 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm/mm/dma-mapping-nommu.c

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 6ab63fa..8f0b6ca 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -22,6 +22,7 @@ config ARM
 	select CLONE_BACKWARDS
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select DCACHE_WORD_ACCESS if HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select DMA_NOOP_OPS if !MMU
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB
 	select GENERIC_ALLOCATOR
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 7166569..63270de 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -20,7 +20,7 @@ static inline const struct dma_map_ops *__generic_dma_ops(struct device *dev)
 {
 	if (dev && dev->dma_ops)
 		return dev->dma_ops;
-	return &arm_dma_ops;
+	return IS_ENABLED(CONFIG_MMU) ? &arm_dma_ops : &dma_noop_ops;
 }
 
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 54857bc..ea80df7 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -2,9 +2,8 @@
 # Makefile for the linux arm-specific parts of the memory manager.
 #
 
-obj-y				:= dma-mapping.o extable.o fault.o init.o \
-				   iomap.o
-
+obj-y				:= extable.o fault.o init.o iomap.o
+obj-y				+= dma-mapping$(MMUEXT).o
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
 				   mmap.o pgd.o mmu.o pageattr.o
 
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
new file mode 100644
index 0000000..3ba3003
--- /dev/null
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -0,0 +1,253 @@
+/*
+ *  Based on linux/arch/arm/mm/dma-mapping.c
+ *
+ *  Copyright (C) 2000-2004 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+
+#include <asm/cachetype.h>
+#include <asm/cacheflush.h>
+#include <asm/outercache.h>
+#include <asm/cp15.h>
+
+#include "dma.h"
+
+/*
+ *  dma_noop_ops is used if
+ *   - MMU/MPU is off
+ *   - cpu is v7m w/o cache support
+ *   - device is coherent
+ *  otherwise arm_nommu_dma_ops is used.
+ *
+ *  arm_nommu_dma_ops rely on consistent DMA memory (please, refer to
+ *  [1] on how to declare such memory).
+ *
+ *  [1] Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+ */
+
+static void *arm_nommu_dma_alloc(struct device *dev, size_t size,
+				 dma_addr_t *dma_handle, gfp_t gfp,
+				 unsigned long attrs)
+
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+
+	/*
+	 * We are here because:
+	 * - no consistent DMA region has been defined, so we can't
+	 *   continue.
+	 * - there is no space left in consistent DMA region, so we
+	 *   only can fallback to generic allocator if we are
+	 *   advertised that consistency is not required.
+	 */
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		return ops->alloc(dev, size, dma_handle, gfp, attrs);
+
+	WARN_ON_ONCE(1);
+	return NULL;
+}
+
+static void arm_nommu_dma_free(struct device *dev, size_t size,
+			       void *cpu_addr, dma_addr_t dma_addr,
+			       unsigned long attrs)
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		ops->free(dev, size, cpu_addr, dma_addr, attrs);
+	else
+		WARN_ON_ONCE(1);
+
+	return;
+}
+
+static int arm_nommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			      void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			      unsigned long attrs)
+{
+	const struct dma_map_ops *ops = &dma_noop_ops;
+	int ret;
+
+	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (attrs & DMA_ATTR_NON_CONSISTENT)
+		return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+
+	WARN_ON_ONCE(1);
+	return -ENXIO;
+}
+
+static void __dma_page_cpu_to_dev(phys_addr_t paddr, size_t size,
+				  enum dma_data_direction dir)
+{
+	dmac_map_area(__va(paddr), size, dir);
+
+	if (dir == DMA_FROM_DEVICE)
+		outer_inv_range(paddr, paddr + size);
+	else
+		outer_clean_range(paddr, paddr + size);
+}
+
+static void __dma_page_dev_to_cpu(phys_addr_t paddr, size_t size,
+				  enum dma_data_direction dir)
+{
+	if (dir != DMA_TO_DEVICE) {
+		outer_inv_range(paddr, paddr + size);
+		dmac_unmap_area(__va(paddr), size, dir);
+	}
+}
+
+static dma_addr_t arm_nommu_dma_map_page(struct device *dev, struct page *page,
+					 unsigned long offset, size_t size,
+					 enum dma_data_direction dir,
+					 unsigned long attrs)
+{
+	dma_addr_t handle = page_to_phys(page) + offset;
+
+	__dma_page_cpu_to_dev(handle, size, dir);
+
+	return handle;
+}
+
+static void arm_nommu_dma_unmap_page(struct device *dev, dma_addr_t handle,
+				     size_t size, enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	__dma_page_dev_to_cpu(handle, size, dir);
+}
+
+
+static int arm_nommu_dma_map_sg(struct device *dev, struct scatterlist *sgl,
+				int nents, enum dma_data_direction dir,
+				unsigned long attrs)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sgl, sg, nents, i) {
+		sg_dma_address(sg) = sg_phys(sg);
+		sg_dma_len(sg) = sg->length;
+		__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
+	}
+
+	return nents;
+}
+
+static void arm_nommu_dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
+				   int nents, enum dma_data_direction dir,
+				   unsigned long attrs)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+static void arm_nommu_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	__dma_page_cpu_to_dev(handle, size, dir);
+}
+
+static void arm_nommu_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	__dma_page_cpu_to_dev(handle, size, dir);
+}
+
+static void arm_nommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
+					     int nents, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_cpu_to_dev(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+static void arm_nommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
+					  int nents, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_page_dev_to_cpu(sg_dma_address(sg), sg_dma_len(sg), dir);
+}
+
+const struct dma_map_ops arm_nommu_dma_ops = {
+	.alloc			= arm_nommu_dma_alloc,
+	.free			= arm_nommu_dma_free,
+	.mmap			= arm_nommu_dma_mmap,
+	.map_page		= arm_nommu_dma_map_page,
+	.unmap_page		= arm_nommu_dma_unmap_page,
+	.map_sg			= arm_nommu_dma_map_sg,
+	.unmap_sg		= arm_nommu_dma_unmap_sg,
+	.sync_single_for_device	= arm_nommu_dma_sync_single_for_device,
+	.sync_single_for_cpu	= arm_nommu_dma_sync_single_for_cpu,
+	.sync_sg_for_device	= arm_nommu_dma_sync_sg_for_device,
+	.sync_sg_for_cpu	= arm_nommu_dma_sync_sg_for_cpu,
+};
+EXPORT_SYMBOL(arm_nommu_dma_ops);
+
+static const struct dma_map_ops *arm_nommu_get_dma_map_ops(bool coherent)
+{
+	return coherent ? &dma_noop_ops : &arm_nommu_dma_ops;
+}
+
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+			const struct iommu_ops *iommu, bool coherent)
+{
+	const struct dma_map_ops *dma_ops;
+
+	if (IS_ENABLED(CONFIG_CPU_V7M)) {
+		/*
+		 * Cache support for v7m is optional, so the device can be
+		 * treated as coherent if no cache has been detected. Note that
+		 * it is not enough to check whether the MPU is in use, since
+		 * in the absence of an MPU the system memory map is used.
+		 */
+		dev->archdata.dma_coherent = (cacheid) ? coherent : true;
+	} else {
+		/*
+		 * Assume coherent DMA in case MMU/MPU has not been set up.
+		 */
+		dev->archdata.dma_coherent = (get_cr() & CR_M) ? coherent : true;
+	}
+
+	dma_ops = arm_nommu_get_dma_map_ops(dev->archdata.dma_coherent);
+
+	set_dma_ops(dev, dma_ops);
+}
+
+void arch_teardown_dma_ops(struct device *dev)
+{
+}
+
+int dma_supported(struct device *dev, u64 mask)
+{
+	return 1;
+}
+
+EXPORT_SYMBOL(dma_supported);
+
+#define PREALLOC_DMA_DEBUG_ENTRIES	4096
+
+static int __init dma_debug_do_init(void)
+{
+	dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
+	return 0;
+}
+core_initcall(dma_debug_do_init);
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread
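
For a rough picture of how these ops get exercised, below is a minimal
driver-side sketch; the device, buffer sizes and error-handling policy are
hypothetical, and only the DMA API calls themselves are the standard ones.
dma_alloc_coherent() is expected to be satisfied from the reserved
non-cacheable pool, while the streaming calls land in
arm_nommu_dma_map_page() and the sync hooks above, which perform the cache
maintenance.

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>

    #define EXAMPLE_DESC_BYTES	64	/* hypothetical descriptor area */
    #define EXAMPLE_BUF_BYTES	512	/* hypothetical data buffer */

    static int example_dma_io(struct device *dev, void *payload)
    {
    	void *desc;
    	dma_addr_t desc_dma, buf_dma;
    	int ret = 0;

    	/*
    	 * Consistent allocation: on NOMMU this is expected to come from
    	 * the reserved non-cacheable pool, so no cache maintenance is
    	 * needed while the CPU and the device share it.
    	 */
    	desc = dma_alloc_coherent(dev, EXAMPLE_DESC_BYTES, &desc_dma, GFP_KERNEL);
    	if (!desc)
    		return -ENOMEM;

    	/*
    	 * Streaming mapping of an ordinary (cacheable) buffer: this goes
    	 * through arm_nommu_dma_map_page(), which cleans or invalidates
    	 * the cache according to the transfer direction.
    	 */
    	buf_dma = dma_map_single(dev, payload, EXAMPLE_BUF_BYTES, DMA_FROM_DEVICE);
    	if (dma_mapping_error(dev, buf_dma)) {
    		ret = -ENOMEM;
    		goto free_desc;
    	}

    	/* ... hand desc_dma/buf_dma to the device and wait for completion ... */

    	/*
    	 * Make the device's writes visible before the CPU reads them;
    	 * dma_unmap_single() below would also do this, it is shown here
    	 * for the case where the mapping is kept and reused.
    	 */
    	dma_sync_single_for_cpu(dev, buf_dma, EXAMPLE_BUF_BYTES, DMA_FROM_DEVICE);

    	dma_unmap_single(dev, buf_dma, EXAMPLE_BUF_BYTES, DMA_FROM_DEVICE);
    free_desc:
    	dma_free_coherent(dev, EXAMPLE_DESC_BYTES, desc, desc_dma);
    	return ret;
    }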

* [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-04-24 10:16 ` Vladimir Murzin
@ 2017-04-24 10:16   ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd

Now we have a dedicated non-cacheable region for consistent DMA
operations. However, that region can still be marked as bufferable by
the MPU, so it is safer to have barriers by default.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm/mm/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index d731f28..7e357c6 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -1049,8 +1049,8 @@ config ARM_L1_CACHE_SHIFT
 	default 5
 
 config ARM_DMA_MEM_BUFFERABLE
-	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
-	default y if CPU_V6 || CPU_V6K || CPU_V7
+	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
+	default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
 	help
 	  Historically, the kernel has used strongly ordered mappings to
 	  provide DMA coherent memory.  With the advent of ARMv7, mapping
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread
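
To make the ordering hazard concrete, here is a sketch of the usual
descriptor-plus-doorbell pattern; the descriptor layout and register
offsets are invented. With the consistent region mapped as Normal
bufferable rather than strongly ordered, the CPU's descriptor writes can
still be sitting in the write buffer when the device is kicked, so a write
barrier is needed before the MMIO doorbell. writel() is meant to provide
that barrier when ARM_DMA_MEM_BUFFERABLE is enabled (which is what the new
default buys for cached V7M parts); the explicit wmb() keeps the sketch
correct either way.

    #include <linux/io.h>
    #include <linux/types.h>
    #include <asm/barrier.h>
    #include <asm/byteorder.h>

    /* hypothetical descriptor and doorbell layout */
    struct example_desc {
    	__le32 addr;
    	__le32 len;
    };

    #define EXAMPLE_DOORBELL_REG	0x10
    #define EXAMPLE_DOORBELL_KICK	0x1

    static void example_kick_device(struct example_desc *desc,
    				void __iomem *base,
    				dma_addr_t buf_dma, u32 len)
    {
    	/* the descriptor sits in dma_alloc_coherent() memory: Normal, bufferable */
    	desc->addr = cpu_to_le32(buf_dma);
    	desc->len = cpu_to_le32(len);

    	/*
    	 * Drain the write buffer so the device sees the descriptor
    	 * contents before it sees the doorbell write.
    	 */
    	wmb();
    	writel(EXAMPLE_DOORBELL_KICK, base + EXAMPLE_DOORBELL_REG);
    }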

* [PATCH v4 7/7] ARM: dma-mapping: Remove traces of NOMMU code
  2017-04-24 10:16 ` Vladimir Murzin
@ 2017-04-24 10:16   ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-04-24 10:16 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: sza, robin.murphy, alexandre.torgue, akpm, kbuild-all,
	linux-kernel, linux, gregkh, arnd

DMA operations for the NOMMU case have just been factored out into a
separate compilation unit, so don't keep the dead code around.

Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/mm/dma-mapping.c | 29 ++---------------------------
 1 file changed, 2 insertions(+), 27 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 475811f..cd90338 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -344,8 +344,6 @@ static void __dma_free_buffer(struct page *page, size_t size)
 	}
 }
 
-#ifdef CONFIG_MMU
-
 static void *__alloc_from_contiguous(struct device *dev, size_t size,
 				     pgprot_t prot, struct page **ret_page,
 				     const void *caller, bool want_vaddr,
@@ -647,22 +645,6 @@ static inline pgprot_t __get_dma_pgprot(unsigned long attrs, pgprot_t prot)
 	return prot;
 }
 
-#define nommu() 0
-
-#else	/* !CONFIG_MMU */
-
-#define nommu() 1
-
-#define __get_dma_pgprot(attrs, prot)				__pgprot(0)
-#define __alloc_remap_buffer(dev, size, gfp, prot, ret, c, wv)	NULL
-#define __alloc_from_pool(size, ret_page)			NULL
-#define __alloc_from_contiguous(dev, size, prot, ret, c, wv, coherent_flag, gfp)	NULL
-#define __free_from_pool(cpu_addr, size)			do { } while (0)
-#define __free_from_contiguous(dev, page, cpu_addr, size, wv)	do { } while (0)
-#define __dma_free_remap(cpu_addr, size)			do { } while (0)
-
-#endif	/* CONFIG_MMU */
-
 static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp,
 				   struct page **ret_page)
 {
@@ -805,7 +787,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 
 	if (cma)
 		buf->allocator = &cma_allocator;
-	else if (nommu() || is_coherent)
+	else if (is_coherent)
 		buf->allocator = &simple_allocator;
 	else if (allowblock)
 		buf->allocator = &remap_allocator;
@@ -854,8 +836,7 @@ static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-	int ret = -ENXIO;
-#ifdef CONFIG_MMU
+	int ret;
 	unsigned long nr_vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long pfn = dma_to_pfn(dev, dma_addr);
@@ -870,10 +851,6 @@ static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 				      vma->vm_end - vma->vm_start,
 				      vma->vm_page_prot);
 	}
-#else
-	ret = vm_iomap_memory(vma, vma->vm_start,
-			      (vma->vm_end - vma->vm_start));
-#endif	/* CONFIG_MMU */
 
 	return ret;
 }
@@ -892,9 +869,7 @@ int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-#ifdef CONFIG_MMU
 	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
-#endif	/* CONFIG_MMU */
 	return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
 }
 
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU
  2017-04-24 10:16 ` Vladimir Murzin
@ 2017-05-02  8:32   ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-05-02  8:32 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Roger Quadros, Joerg Roedel, sza, arnd,
	Yoshinori Sato, gregkh, linux-kernel, Michal Nazarewicz, linux,
	Christian Borntraeger, Doug Ledford, Rich Felker, Alan Stern,
	kbuild-all, Rob Herring, akpm, Marek Szyprowski, robin.murphy,
	alexandre.torgue

Gentle ping!

On 24/04/17 11:16, Vladimir Murzin wrote:
> It seem that addition of cache support for M-class CPUs uncovered
> latent bug in DMA usage. NOMMU memory model has been treated as being
> always consistent; however, for R/M CPU classes memory can be covered
> by MPU which in turn might configure RAM as Normal i.e. bufferable and
> cacheable. It breaks dma_alloc_coherent() and friends, since data can
> stuck in caches now or be buffered.
> 
> This patch set is trying to address the issue by providing region of
> memory suitable for consistent DMA operations. It is supposed that
> such region is marked by MPU as non-cacheable. Robin suggested to
> advertise such memory as reserved shared-dma-pool, rather then using
> homebrew command line option, and extend dma-coherent to provide
> default DMA area in the similar way as it is done for CMA (PATCH
> 4/7). It allows us to offload all bookkeeping on generic coherent DMA
> framework, and it seems that it might be reused by other architectures
> like c6x and blackfin.
> 
> While reviewing/testing previous vesrions of the patch set it turned
> out that dma-coherent does not take into account "dma-ranges" device
> tree property, so it is addressed in PATCH 3/7.
> 
> For ARM, dedicated DMA region is required for cases other than:
>  - MMU/MPU is off
>  - cpu is v7m w/o cache support
>  - device is coherent
> 
> In case one of the above conditions is true dma operations are forced
> to be coherent and wired with dma_noop_ops.
> 
> To make life easier NOMMU dma operations are kept in separate
> compilation unit.
> 
> Since the issue was reported in the same time as Benjamin sent his
> patch [1] to allow mmap for NOMMU, his case is also addressed in this
> series (PATCH 1/7 and PATCH 2/7).
> 
> Thanks!
> 
> [1] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8633/1
> 
> Cc: Joerg Roedel <jroedel@suse.de>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Michal Nazarewicz <mina86@mina86.com>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Alan Stern <stern@rowland.harvard.edu>
> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
> Cc: Rich Felker <dalias@libc.org>
> Cc: Roger Quadros <rogerq@ti.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Rob Herring <robh+dt@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Doug Ledford <dledford@redhat.com>
> 
> Changelog:
> 	    v3 -> v4
> 	       - rebased on v4.11-rc7
> 	       - made CONFIG_ARM_DMA_MEM_BUFFERABLE optional for CPU_V7M
> 	       - added Arnd's Acked-by
> 
> 	    v2 -> v3
> 	       - fixed warnings reported by Alexandre and kbuild robot
> 
> 	    v1 -> v2
> 	       - rebased on v4.11-rc1
> 	       - added Robin's Reviewed-by
> 	       - dedicated flag is introduced to use dev->dma_pfn_offset
> 	         rather than mem->device_base in case memory region is
> 		 configured via device tree (so Tested-by discarded there)
> 
> 	RFC v6 -> v1
> 	       - dropped RFC tag
> 	       - added Alexandre's Tested-by
> 
> 
> Vladimir Murzin (7):
>   dma: Take into account dma_pfn_offset
>   dma: Add simple dma_noop_mmap
>   drivers: dma-coherent: Account dma_pfn_offset when used with device
>     tree
>   drivers: dma-coherent: Introduce default DMA pool
>   ARM: NOMMU: Introduce dma operations for noMMU
>   ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
>   ARM: dma-mapping: Remove traces of NOMMU code
> 
>  .../bindings/reserved-memory/reserved-memory.txt   |   3 +
>  arch/arm/Kconfig                                   |   1 +
>  arch/arm/include/asm/dma-mapping.h                 |   2 +-
>  arch/arm/mm/Kconfig                                |   4 +-
>  arch/arm/mm/Makefile                               |   5 +-
>  arch/arm/mm/dma-mapping-nommu.c                    | 253 +++++++++++++++++++++
>  arch/arm/mm/dma-mapping.c                          |  29 +--
>  drivers/base/dma-coherent.c                        |  74 +++++-
>  lib/dma-noop.c                                     |  29 ++-
>  9 files changed, 355 insertions(+), 45 deletions(-)
>  create mode 100644 arch/arm/mm/dma-mapping-nommu.c
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU
  2017-05-02  8:32   ` Vladimir Murzin
@ 2017-05-15  8:52     ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-05-15  8:52 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Mark Rutland, Joerg Roedel, sza, Yoshinori Sato,
	alexandre.torgue, gregkh, linux-kernel, Michal Nazarewicz, linux,
	Christian Borntraeger, Doug Ledford, Rich Felker, arnd,
	kbuild-all, Rob Herring, Alan Stern, akpm, Marek Szyprowski,
	robin.murphy, Roger Quadros

Ping again...

On 02/05/17 09:32, Vladimir Murzin wrote:
> Gentle ping!
> 
> On 24/04/17 11:16, Vladimir Murzin wrote:
>> It seem that addition of cache support for M-class CPUs uncovered
>> latent bug in DMA usage. NOMMU memory model has been treated as being
>> always consistent; however, for R/M CPU classes memory can be covered
>> by MPU which in turn might configure RAM as Normal i.e. bufferable and
>> cacheable. It breaks dma_alloc_coherent() and friends, since data can
>> stuck in caches now or be buffered.
>>
>> This patch set is trying to address the issue by providing region of
>> memory suitable for consistent DMA operations. It is supposed that
>> such region is marked by MPU as non-cacheable. Robin suggested to
>> advertise such memory as reserved shared-dma-pool, rather then using
>> homebrew command line option, and extend dma-coherent to provide
>> default DMA area in the similar way as it is done for CMA (PATCH
>> 4/7). It allows us to offload all bookkeeping on generic coherent DMA
>> framework, and it seems that it might be reused by other architectures
>> like c6x and blackfin.
>>
>> While reviewing/testing previous vesrions of the patch set it turned
>> out that dma-coherent does not take into account "dma-ranges" device
>> tree property, so it is addressed in PATCH 3/7.
>>
>> For ARM, dedicated DMA region is required for cases other than:
>>  - MMU/MPU is off
>>  - cpu is v7m w/o cache support
>>  - device is coherent
>>
>> In case one of the above conditions is true dma operations are forced
>> to be coherent and wired with dma_noop_ops.
>>
>> To make life easier NOMMU dma operations are kept in separate
>> compilation unit.
>>
>> Since the issue was reported in the same time as Benjamin sent his
>> patch [1] to allow mmap for NOMMU, his case is also addressed in this
>> series (PATCH 1/7 and PATCH 2/7).
>>
>> Thanks!
>>
>> [1] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8633/1
>>
>> Cc: Joerg Roedel <jroedel@suse.de>
>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>> Cc: Michal Nazarewicz <mina86@mina86.com>
>> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
>> Cc: Alan Stern <stern@rowland.harvard.edu>
>> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
>> Cc: Rich Felker <dalias@libc.org>
>> Cc: Roger Quadros <rogerq@ti.com>
>> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>> Cc: Rob Herring <robh+dt@kernel.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Doug Ledford <dledford@redhat.com>
>>
>> Changelog:
>> 	    v3 -> v4
>> 	       - rebased on v4.11-rc7
>> 	       - made CONFIG_ARM_DMA_MEM_BUFFERABLE optional for CPU_V7M
>> 	       - added Arnd's Acked-by
>>
>> 	    v2 -> v3
>> 	       - fixed warnings reported by Alexandre and kbuild robot
>>
>> 	    v1 -> v2
>> 	       - rebased on v4.11-rc1
>> 	       - added Robin's Reviewed-by
>> 	       - dedicated flag is introduced to use dev->dma_pfn_offset
>> 	         rather than mem->device_base in case memory region is
>> 		 configured via device tree (so Tested-by discarded there)
>>
>> 	RFC v6 -> v1
>> 	       - dropped RFC tag
>> 	       - added Alexandre's Tested-by
>>
>>
>> Vladimir Murzin (7):
>>   dma: Take into account dma_pfn_offset
>>   dma: Add simple dma_noop_mmap
>>   drivers: dma-coherent: Account dma_pfn_offset when used with device
>>     tree
>>   drivers: dma-coherent: Introduce default DMA pool
>>   ARM: NOMMU: Introduce dma operations for noMMU
>>   ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
>>   ARM: dma-mapping: Remove traces of NOMMU code
>>
>>  .../bindings/reserved-memory/reserved-memory.txt   |   3 +
>>  arch/arm/Kconfig                                   |   1 +
>>  arch/arm/include/asm/dma-mapping.h                 |   2 +-
>>  arch/arm/mm/Kconfig                                |   4 +-
>>  arch/arm/mm/Makefile                               |   5 +-
>>  arch/arm/mm/dma-mapping-nommu.c                    | 253 +++++++++++++++++++++
>>  arch/arm/mm/dma-mapping.c                          |  29 +--
>>  drivers/base/dma-coherent.c                        |  74 +++++-
>>  lib/dma-noop.c                                     |  29 ++-
>>  9 files changed, 355 insertions(+), 45 deletions(-)
>>  create mode 100644 arch/arm/mm/dma-mapping-nommu.c
>>
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-04-24 10:16   ` Vladimir Murzin
@ 2017-05-23 20:01     ` Russell King - ARM Linux
  -1 siblings, 0 replies; 36+ messages in thread
From: Russell King - ARM Linux @ 2017-05-23 20:01 UTC (permalink / raw)
  To: Vladimir Murzin
  Cc: linux-arm-kernel, sza, robin.murphy, alexandre.torgue, akpm,
	kbuild-all, linux-kernel, gregkh, arnd

On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
> Now, we have dedicated non-cacheable region for consistent DMA
> operations. However, that region can still be marked as bufferable by
> MPU, so it'd be safer to have barriers by default.

What do you actually want here?  Your patch doesn't quite make sense,
the commit description seems to indicate that you require this option
to be set for V7M, but the patch says otherwise.

>  config ARM_DMA_MEM_BUFFERABLE
> -	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
> -	default y if CPU_V6 || CPU_V6K || CPU_V7
> +	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7

This "if" conditional conditionalises the visibility of the option,
it doesn't conditionalise the value.

> +	default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M

Taking both of these changes together what you end up with is an option
presented to the user for "Use non-cacheable memory for DMA" which
they can choose to disable.

If you require this option to be set, that's incorrect - your modification
to the default line is correct, but the first line is not.  To achieve
that, you want the if condition to evaluate false for V7M, thereby hiding
the option from the user.  In that case, the default value will always be
assigned to the option.

>  	help
>  	  Historically, the kernel has used strongly ordered mappings to
>  	  provide DMA coherent memory.  With the advent of ARMv7, mapping
> -- 
> 2.0.0
> 

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 36+ messages in thread
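
For comparison, a sketch of the hidden-prompt variant described above (not
what was posted): CPU_V7M stays out of the prompt condition, so the
question is never asked on v7-M builds and the "default y" is always
applied there.

    config ARM_DMA_MEM_BUFFERABLE
    	bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
    	default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M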

* Re: [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU
  2017-05-15  8:52     ` Vladimir Murzin
@ 2017-05-23 20:04       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 36+ messages in thread
From: Russell King - ARM Linux @ 2017-05-23 20:04 UTC (permalink / raw)
  To: Vladimir Murzin
  Cc: linux-arm-kernel, Mark Rutland, Joerg Roedel, sza,
	Yoshinori Sato, alexandre.torgue, gregkh, linux-kernel,
	Michal Nazarewicz, Christian Borntraeger, Doug Ledford,
	Rich Felker, arnd, kbuild-all, Rob Herring, Alan Stern, akpm,
	Marek Szyprowski, robin.murphy, Roger Quadros

On Mon, May 15, 2017 at 09:52:58AM +0100, Vladimir Murzin wrote:
> Ping again...

Apart from the one comment, the ARM bits look fine to me - but I'm
not saying anything about the lib/dma-noop.c or the drivers/base
changes.

What's the dependency between the ARM bits and those bits?

> On 02/05/17 09:32, Vladimir Murzin wrote:
> > Gentle ping!
> > 
> > On 24/04/17 11:16, Vladimir Murzin wrote:
> >> It seem that addition of cache support for M-class CPUs uncovered
> >> latent bug in DMA usage. NOMMU memory model has been treated as being
> >> always consistent; however, for R/M CPU classes memory can be covered
> >> by MPU which in turn might configure RAM as Normal i.e. bufferable and
> >> cacheable. It breaks dma_alloc_coherent() and friends, since data can
> >> stuck in caches now or be buffered.
> >>
> >> This patch set is trying to address the issue by providing region of
> >> memory suitable for consistent DMA operations. It is supposed that
> >> such region is marked by MPU as non-cacheable. Robin suggested to
> >> advertise such memory as reserved shared-dma-pool, rather then using
> >> homebrew command line option, and extend dma-coherent to provide
> >> default DMA area in the similar way as it is done for CMA (PATCH
> >> 4/7). It allows us to offload all bookkeeping on generic coherent DMA
> >> framework, and it seems that it might be reused by other architectures
> >> like c6x and blackfin.
> >>
> >> While reviewing/testing previous vesrions of the patch set it turned
> >> out that dma-coherent does not take into account "dma-ranges" device
> >> tree property, so it is addressed in PATCH 3/7.
> >>
> >> For ARM, dedicated DMA region is required for cases other than:
> >>  - MMU/MPU is off
> >>  - cpu is v7m w/o cache support
> >>  - device is coherent
> >>
> >> In case one of the above conditions is true dma operations are forced
> >> to be coherent and wired with dma_noop_ops.
> >>
> >> To make life easier NOMMU dma operations are kept in separate
> >> compilation unit.
> >>
> >> Since the issue was reported in the same time as Benjamin sent his
> >> patch [1] to allow mmap for NOMMU, his case is also addressed in this
> >> series (PATCH 1/7 and PATCH 2/7).
> >>
> >> Thanks!
> >>
> >> [1] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8633/1
> >>
> >> Cc: Joerg Roedel <jroedel@suse.de>
> >> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> >> Cc: Michal Nazarewicz <mina86@mina86.com>
> >> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> >> Cc: Alan Stern <stern@rowland.harvard.edu>
> >> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
> >> Cc: Rich Felker <dalias@libc.org>
> >> Cc: Roger Quadros <rogerq@ti.com>
> >> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> >> Cc: Rob Herring <robh+dt@kernel.org>
> >> Cc: Mark Rutland <mark.rutland@arm.com>
> >> Cc: Doug Ledford <dledford@redhat.com>
> >>
> >> Changelog:
> >> 	    v3 -> v4
> >> 	       - rebased on v4.11-rc7
> >> 	       - made CONFIG_ARM_DMA_MEM_BUFFERABLE optional for CPU_V7M
> >> 	       - added Arnd's Acked-by
> >>
> >> 	    v2 -> v3
> >> 	       - fixed warnings reported by Alexandre and kbuild robot
> >>
> >> 	    v1 -> v2
> >> 	       - rebased on v4.11-rc1
> >> 	       - added Robin's Reviewed-by
> >> 	       - dedicated flag is introduced to use dev->dma_pfn_offset
> >> 	         rather than mem->device_base in case memory region is
> >> 		 configured via device tree (so Tested-by discarded there)
> >>
> >> 	RFC v6 -> v1
> >> 	       - dropped RFC tag
> >> 	       - added Alexandre's Tested-by
> >>
> >>
> >> Vladimir Murzin (7):
> >>   dma: Take into account dma_pfn_offset
> >>   dma: Add simple dma_noop_mmap
> >>   drivers: dma-coherent: Account dma_pfn_offset when used with device
> >>     tree
> >>   drivers: dma-coherent: Introduce default DMA pool
> >>   ARM: NOMMU: Introduce dma operations for noMMU
> >>   ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
> >>   ARM: dma-mapping: Remove traces of NOMMU code
> >>
> >>  .../bindings/reserved-memory/reserved-memory.txt   |   3 +
> >>  arch/arm/Kconfig                                   |   1 +
> >>  arch/arm/include/asm/dma-mapping.h                 |   2 +-
> >>  arch/arm/mm/Kconfig                                |   4 +-
> >>  arch/arm/mm/Makefile                               |   5 +-
> >>  arch/arm/mm/dma-mapping-nommu.c                    | 253 +++++++++++++++++++++
> >>  arch/arm/mm/dma-mapping.c                          |  29 +--
> >>  drivers/base/dma-coherent.c                        |  74 +++++-
> >>  lib/dma-noop.c                                     |  29 ++-
> >>  9 files changed, 355 insertions(+), 45 deletions(-)
> >>  create mode 100644 arch/arm/mm/dma-mapping-nommu.c
> >>
> > 
> > 
> > _______________________________________________
> > linux-arm-kernel mailing list
> > linux-arm-kernel@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> > 
> 

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-05-23 20:01     ` Russell King - ARM Linux
@ 2017-05-23 20:33       ` Arnd Bergmann
  -1 siblings, 0 replies; 36+ messages in thread
From: Arnd Bergmann @ 2017-05-23 20:33 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Vladimir Murzin, Linux ARM, sza, Robin Murphy, Alexandre Torgue,
	Andrew Morton, kbuild-all, Linux Kernel Mailing List, gregkh

On Tue, May 23, 2017 at 10:01 PM, Russell King - ARM Linux
<linux@armlinux.org.uk> wrote:
> On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
>> Now, we have dedicated non-cacheable region for consistent DMA
>> operations. However, that region can still be marked as bufferable by
>> MPU, so it'd be safer to have barriers by default.
>
> What do you actually want here?  Your patch doesn't quite make sense,
> the commit description seems to indicate that you require this option
> to be set for V7M, but the patch says otherwise.
>
>>  config ARM_DMA_MEM_BUFFERABLE
>> -     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
>> -     default y if CPU_V6 || CPU_V6K || CPU_V7
>> +     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
>
> This "if" conditional conditionalises the visibility of the option,
> it doesn't conditionalise the value.
>
>> +     default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
>
> Taking both of these changes together what you end up with is an option
> presented to the user for "Use non-cacheable memory for DMA" which
> they can choose to disable.
>
> If you require this option to be set, that's incorrect - your modification
> to the default line is correct, but the first line is not.  To achieve
> that, you want the if condition to evaluate false for V7M, thereby hiding
> the option from the user.  In that case, the default value will always be
> assigned to the option.

I had the opposite comment in the previous version ;-)
https://lkml.org/lkml/2017/4/19/185

I think the current patch is correct, but the description could still be
clarified: On some of the beefier ARMv7-M machines (with DMA
and write buffers) we want this enabled, while those that didn't
need it until now also won't need it in the future.

        Arnd

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-05-23 20:33       ` Arnd Bergmann
@ 2017-05-24  8:31         ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-05-24  8:31 UTC (permalink / raw)
  To: Arnd Bergmann, Russell King - ARM Linux
  Cc: Linux ARM, sza, Robin Murphy, Alexandre Torgue, Andrew Morton,
	kbuild-all, Linux Kernel Mailing List, gregkh

On 23/05/17 21:33, Arnd Bergmann wrote:
> On Tue, May 23, 2017 at 10:01 PM, Russell King - ARM Linux
> <linux@armlinux.org.uk> wrote:
>> On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
>>> Now, we have dedicated non-cacheable region for consistent DMA
>>> operations. However, that region can still be marked as bufferable by
>>> MPU, so it'd be safer to have barriers by default.
>>
>> What do you actually want here?  Your patch doesn't quite make sense,
>> the commit description seems to indicate that you require this option
>> to be set for V7M, but the patch says otherwise.
>>
>>>  config ARM_DMA_MEM_BUFFERABLE
>>> -     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
>>> -     default y if CPU_V6 || CPU_V6K || CPU_V7
>>> +     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
>>
>> This "if" conditional conditionalises the visibility of the option,
>> it doesn't conditionalise the value.
>>
>>> +     default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
>>
>> Taking both of these changes together what you end up with is an option
>> presented to the user for "Use non-cacheable memory for DMA" which
>> they can choose to disable.
>>
>> If you require this option to be set, that's incorrect - your modification
>> to the default line is correct, but the first line is not.  To achieve
>> that, you want the if condition to evaluate false for V7M, thereby hiding
>> the option from the user.  In that case, the default value will always be
>> assigned to the option.
> 
> I had the opposite comment in the previous version ;-)
> https://lkml.org/lkml/2017/4/19/185
> 
> I think the current patch is correct, but the description could still be
> clarified: On some of the beefier ARMv7-M machines (with DMA
> and write buffers) we want this enabled, while those that didn't
> need it until now also won't need it in the future.

Ok. Do you want it to go into the commit message, the option description, or maybe both?

Thanks
Vladimir

> 
>         Arnd
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-05-24  8:31         ` Vladimir Murzin
@ 2017-05-24  8:36           ` Arnd Bergmann
  -1 siblings, 0 replies; 36+ messages in thread
From: Arnd Bergmann @ 2017-05-24  8:36 UTC (permalink / raw)
  To: Vladimir Murzin
  Cc: Russell King - ARM Linux, Linux ARM, sza, Robin Murphy,
	Alexandre Torgue, Andrew Morton, kbuild-all,
	Linux Kernel Mailing List, gregkh

On Wed, May 24, 2017 at 10:31 AM, Vladimir Murzin
<vladimir.murzin@arm.com> wrote:
> On 23/05/17 21:33, Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 10:01 PM, Russell King - ARM Linux
>> <linux@armlinux.org.uk> wrote:
>>> On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
>>>> Now, we have dedicated non-cacheable region for consistent DMA
>>>> operations. However, that region can still be marked as bufferable by
>>>> MPU, so it'd be safer to have barriers by default.
>>>
>>> What do you actually want here?  Your patch doesn't quite make sense,
>>> the commit description seems to indicate that you require this option
>>> to be set for V7M, but the patch says otherwise.
>>>
>>>>  config ARM_DMA_MEM_BUFFERABLE
>>>> -     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
>>>> -     default y if CPU_V6 || CPU_V6K || CPU_V7
>>>> +     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
>>>
>>> This "if" conditional conditionalises the visibility of the option,
>>> it doesn't conditionalise the value.
>>>
>>>> +     default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
>>>
>>> Taking both of these changes together what you end up with is an option
>>> presented to the user for "Use non-cacheable memory for DMA" which
>>> they can choose to disable.
>>>
>>> If you require this option to be set, that's incorrect - your modification
>>> to the default line is correct, but the first line is not.  To achieve
>>> that, you want the if condition to evaluate false for V7M, thereby hiding
>>> the option from the user.  In that case, the default value will always be
>>> assigned to the option.
>>
>> I had the opposite comment in the previous version ;-)
>> https://lkml.org/lkml/2017/4/19/185
>>
>> I think the current patch is correct, but the description could still be
>> clarified: On some of the beefier ARMv7-M machines (with DMA
>> and write buffers) we want this enabled, while those that didn't
>> need it until now also won't need it in the future.
>
> Ok. Do you want it to go into the commit message, the option description, or maybe both?

I'd say both. It would also be helpful to identify specifically which platforms
require this, and then add a 'select ARM_DMA_MEM_BUFFERABLE' from
the platform, as we do from Moxart.

      Arnd
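
A platform-level 'select' of the kind suggested here might look roughly like
the sketch below; MACH_FOO_V7M is a placeholder for whichever ARMv7-M platform
actually has DMA masters behind a write buffer, not an existing Kconfig symbol:

  # Hypothetical platform entry, e.g. in a mach-*/Kconfig fragment
  config MACH_FOO_V7M
          bool "Hypothetical ARMv7-M board with DMA-capable peripherals"
          select ARM_DMA_MEM_BUFFERABLE
          # further selects/depends as usual for the platform

With such a 'select' in place the option is forced on for that platform,
independent of whether the ARM_DMA_MEM_BUFFERABLE prompt is visible or what
its default happens to be.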

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU
  2017-05-23 20:04       ` Russell King - ARM Linux
@ 2017-05-24  8:49         ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-05-24  8:49 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-arm-kernel, Mark Rutland, Joerg Roedel, sza,
	Yoshinori Sato, alexandre.torgue, gregkh, linux-kernel,
	Michal Nazarewicz, Christian Borntraeger, Doug Ledford,
	Rich Felker, arnd, kbuild-all, Rob Herring, Alan Stern, akpm,
	Marek Szyprowski, robin.murphy, Roger Quadros

On 23/05/17 21:04, Russell King - ARM Linux wrote:
> On Mon, May 15, 2017 at 09:52:58AM +0100, Vladimir Murzin wrote:
>> Ping again...
> 
> Apart from the one comment, the ARM bits look fine to me - but I'm
> not saying anything about the lib/dma-noop.c or the drivers/base
> changes.
> 
> What's the dependency between the ARM bits and those bits?

If only the ARM bits were applied, machines without cache support would continue
to work, but on machines with cache support coherent DMA allocations would fail
(which is a bit better than silently corrupting data), and Benjamin's case would
be broken because PATCH 7/7 depends on PATCH 2/7. In general, the goal of the
patch set can't be achieved without the changes in dma-coherent.c and dma-noop.c,
but I'm open to any option that makes progress with this series.

Thanks
Vladimir


> 
>> On 02/05/17 09:32, Vladimir Murzin wrote:
>>> Gentle ping!
>>>
>>> On 24/04/17 11:16, Vladimir Murzin wrote:
>>>> It seem that addition of cache support for M-class CPUs uncovered
>>>> latent bug in DMA usage. NOMMU memory model has been treated as being
>>>> always consistent; however, for R/M CPU classes memory can be covered
>>>> by MPU which in turn might configure RAM as Normal i.e. bufferable and
>>>> cacheable. It breaks dma_alloc_coherent() and friends, since data can
>>>> stuck in caches now or be buffered.
>>>>
>>>> This patch set is trying to address the issue by providing region of
>>>> memory suitable for consistent DMA operations. It is supposed that
>>>> such region is marked by MPU as non-cacheable. Robin suggested to
>>>> advertise such memory as reserved shared-dma-pool, rather then using
>>>> homebrew command line option, and extend dma-coherent to provide
>>>> default DMA area in the similar way as it is done for CMA (PATCH
>>>> 4/7). It allows us to offload all bookkeeping on generic coherent DMA
>>>> framework, and it seems that it might be reused by other architectures
>>>> like c6x and blackfin.
>>>>
>>>> While reviewing/testing previous vesrions of the patch set it turned
>>>> out that dma-coherent does not take into account "dma-ranges" device
>>>> tree property, so it is addressed in PATCH 3/7.
>>>>
>>>> For ARM, dedicated DMA region is required for cases other than:
>>>>  - MMU/MPU is off
>>>>  - cpu is v7m w/o cache support
>>>>  - device is coherent
>>>>
>>>> In case one of the above conditions is true dma operations are forced
>>>> to be coherent and wired with dma_noop_ops.
>>>>
>>>> To make life easier NOMMU dma operations are kept in separate
>>>> compilation unit.
>>>>
>>>> Since the issue was reported in the same time as Benjamin sent his
>>>> patch [1] to allow mmap for NOMMU, his case is also addressed in this
>>>> series (PATCH 1/7 and PATCH 2/7).
>>>>
>>>> Thanks!
>>>>
>>>> [1] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8633/1
>>>>
>>>> Cc: Joerg Roedel <jroedel@suse.de>
>>>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>>>> Cc: Michal Nazarewicz <mina86@mina86.com>
>>>> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
>>>> Cc: Alan Stern <stern@rowland.harvard.edu>
>>>> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
>>>> Cc: Rich Felker <dalias@libc.org>
>>>> Cc: Roger Quadros <rogerq@ti.com>
>>>> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>>>> Cc: Rob Herring <robh+dt@kernel.org>
>>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>>> Cc: Doug Ledford <dledford@redhat.com>
>>>>
>>>> Changelog:
>>>> 	    v3 -> v4
>>>> 	       - rebased on v4.11-rc7
>>>> 	       - made CONFIG_ARM_DMA_MEM_BUFFERABLE optional for CPU_V7M
>>>> 	       - added Arnd's Acked-by
>>>>
>>>> 	    v2 -> v3
>>>> 	       - fixed warnings reported by Alexandre and kbuild robot
>>>>
>>>> 	    v1 -> v2
>>>> 	       - rebased on v4.11-rc1
>>>> 	       - added Robin's Reviewed-by
>>>> 	       - dedicated flag is introduced to use dev->dma_pfn_offset
>>>> 	         rather than mem->device_base in case memory region is
>>>> 		 configured via device tree (so Tested-by discarded there)
>>>>
>>>> 	RFC v6 -> v1
>>>> 	       - dropped RFC tag
>>>> 	       - added Alexandre's Tested-by
>>>>
>>>>
>>>> Vladimir Murzin (7):
>>>>   dma: Take into account dma_pfn_offset
>>>>   dma: Add simple dma_noop_mmap
>>>>   drivers: dma-coherent: Account dma_pfn_offset when used with device
>>>>     tree
>>>>   drivers: dma-coherent: Introduce default DMA pool
>>>>   ARM: NOMMU: Introduce dma operations for noMMU
>>>>   ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
>>>>   ARM: dma-mapping: Remove traces of NOMMU code
>>>>
>>>>  .../bindings/reserved-memory/reserved-memory.txt   |   3 +
>>>>  arch/arm/Kconfig                                   |   1 +
>>>>  arch/arm/include/asm/dma-mapping.h                 |   2 +-
>>>>  arch/arm/mm/Kconfig                                |   4 +-
>>>>  arch/arm/mm/Makefile                               |   5 +-
>>>>  arch/arm/mm/dma-mapping-nommu.c                    | 253 +++++++++++++++++++++
>>>>  arch/arm/mm/dma-mapping.c                          |  29 +--
>>>>  drivers/base/dma-coherent.c                        |  74 +++++-
>>>>  lib/dma-noop.c                                     |  29 ++-
>>>>  9 files changed, 355 insertions(+), 45 deletions(-)
>>>>  create mode 100644 arch/arm/mm/dma-mapping-nommu.c
>>>>
>>>
>>>
>>> _______________________________________________
>>> linux-arm-kernel mailing list
>>> linux-arm-kernel@lists.infradead.org
>>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
>>>
>>
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-05-24  8:36           ` Arnd Bergmann
@ 2017-05-24  9:08             ` Vladimir Murzin
  -1 siblings, 0 replies; 36+ messages in thread
From: Vladimir Murzin @ 2017-05-24  9:08 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Russell King - ARM Linux, Linux ARM, sza, Robin Murphy,
	Alexandre Torgue, Andrew Morton, kbuild-all,
	Linux Kernel Mailing List, gregkh

On 24/05/17 09:36, Arnd Bergmann wrote:
> On Wed, May 24, 2017 at 10:31 AM, Vladimir Murzin
> <vladimir.murzin@arm.com> wrote:
>> On 23/05/17 21:33, Arnd Bergmann wrote:
>>> On Tue, May 23, 2017 at 10:01 PM, Russell King - ARM Linux
>>> <linux@armlinux.org.uk> wrote:
>>>> On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
>>>>> Now, we have dedicated non-cacheable region for consistent DMA
>>>>> operations. However, that region can still be marked as bufferable by
>>>>> MPU, so it'd be safer to have barriers by default.
>>>>
>>>> What do you actually want here?  Your patch doesn't quite make sense,
>>>> the commit description seems to indicate that you require this option
>>>> to be set for V7M, but the patch says otherwise.
>>>>
>>>>>  config ARM_DMA_MEM_BUFFERABLE
>>>>> -     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
>>>>> -     default y if CPU_V6 || CPU_V6K || CPU_V7
>>>>> +     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
>>>>
>>>> This "if" conditional conditionalises the visibility of the option,
>>>> it doesn't conditionalise the value.
>>>>
>>>>> +     default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
>>>>
>>>> Taking both of these changes together what you end up with is an option
>>>> presented to the user for "Use non-cacheable memory for DMA" which
>>>> they can choose to disable.
>>>>
>>>> If you require this option to be set, that's incorrect - your modification
>>>> to the default line is correct, but the first line is not.  To achieve
>>>> that, you want the if condition to evaluate false for V7M, thereby hiding
>>>> the option from the user.  In that case, the default value will always be
>>>> assigned to the option.
>>>
>>> I had the opposite comment in the previous version ;-)
>>> https://lkml.org/lkml/2017/4/19/185
>>>
>>> I think the current patch is correct, but the description could still be
>>> clarified: On some of the beefier ARMv7-M machines (with DMA
>>> and write buffers) we want this enabled, while those that didn't
>>> need it until now also won't need it in the future.
>>
>> Ok. Do you want it to go into the commit message, the option description, or maybe both?
> 
> I'd say both. It would also be helpful to identify specifically which platforms
> require this, and then add a 'select ARM_DMA_MEM_BUFFERABLE' from
> the platform, as we do from Moxart.
> 

I'm a bit confused here. If we want to control it at the platform level via
'select ARM_DMA_MEM_BUFFERABLE', wouldn't we need 'default n' for CPU_V7M?
IIUC, Moxart needs to select this option because it is neither CPU_V6(K) nor
CPU_V7.

Thanks
Vladimir

>       Arnd
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus
  2017-05-24  9:08             ` Vladimir Murzin
@ 2017-05-24  9:20               ` Arnd Bergmann
  -1 siblings, 0 replies; 36+ messages in thread
From: Arnd Bergmann @ 2017-05-24  9:20 UTC (permalink / raw)
  To: Vladimir Murzin
  Cc: Russell King - ARM Linux, Linux ARM, sza, Robin Murphy,
	Alexandre Torgue, Andrew Morton, kbuild-all,
	Linux Kernel Mailing List, gregkh

On Wed, May 24, 2017 at 11:08 AM, Vladimir Murzin
<vladimir.murzin@arm.com> wrote:
> On 24/05/17 09:36, Arnd Bergmann wrote:
>> On Wed, May 24, 2017 at 10:31 AM, Vladimir Murzin
>> <vladimir.murzin@arm.com> wrote:
>>> On 23/05/17 21:33, Arnd Bergmann wrote:
>>>> On Tue, May 23, 2017 at 10:01 PM, Russell King - ARM Linux
>>>> <linux@armlinux.org.uk> wrote:
>>>>> On Mon, Apr 24, 2017 at 11:16:56AM +0100, Vladimir Murzin wrote:
>>>>>> Now, we have dedicated non-cacheable region for consistent DMA
>>>>>> operations. However, that region can still be marked as bufferable by
>>>>>> MPU, so it'd be safer to have barriers by default.
>>>>>
>>>>> What do you actually want here?  Your patch doesn't quite make sense,
>>>>> the commit description seems to indicate that you require this option
>>>>> to be set for V7M, but the patch says otherwise.
>>>>>
>>>>>>  config ARM_DMA_MEM_BUFFERABLE
>>>>>> -     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K) && !CPU_V7
>>>>>> -     default y if CPU_V6 || CPU_V6K || CPU_V7
>>>>>> +     bool "Use non-cacheable memory for DMA" if (CPU_V6 || CPU_V6K || CPU_V7M) && !CPU_V7
>>>>>
>>>>> This "if" conditional conditionalises the visibility of the option,
>>>>> it doesn't conditionalise the value.
>>>>>
>>>>>> +     default y if CPU_V6 || CPU_V6K || CPU_V7 || CPU_V7M
>>>>>
>>>>> Taking both of these changes together what you end up with is an option
>>>>> presented to the user for "Use non-cacheable memory for DMA" which
>>>>> they can choose to disable.
>>>>>
>>>>> If you require this option to be set, that's incorrect - your modification
>>>>> to the default line is correct, but the first line is not.  To achieve
>>>>> that, you want the if condition to evaluate false for V7M, thereby hiding
>>>>> the option from the user.  In that case, the default value will always be
>>>>> assigned to the option.
>>>>
>>>> I had the opposite comment in the previous version ;-)
>>>> https://lkml.org/lkml/2017/4/19/185
>>>>
>>>> I think the current patch is correct, but the description could still be
>>>> clarified: On some of the beefier ARMv7-M machines (with DMA
>>>> and write buffers) we want this enabled, while those that didn't
>>>> need it until now also won't need it in the future.
>>>
>>> Ok. Do you want it to go into the commit message, the option description, or maybe both?
>>
>> I'd say both. It would also be helpful to identify specifically which platforms
>> require this, and then add a 'select ARM_DMA_MEM_BUFFERABLE' from
>> the platform, as we do from Moxart.
>>
>
> I'm a bit confused here. If we want to control it at the platform level via
> 'select ARM_DMA_MEM_BUFFERABLE', wouldn't we need 'default n' for CPU_V7M?
> IIUC, Moxart needs to select this option because it is neither CPU_V6(K) nor
> CPU_V7.

It depends: if we want to control it purely per platform, then we don't need
to patch anything here, just add the 'select' and be done with it, leaving
the default at 'n' for ARMv7-M.

If there are platforms on which it is reasonable to make the option
user-visible (e.g. you only need it when you actually want to use one of
the DMA masters, but most configurations don't), then having the 'default y'
might still be appropriate: it defaults to the safe setting on platforms that
don't have the 'select', while letting users turn it off when they know what
they are doing.

      Arnd
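
One practical consequence of keeping the prompt visible with 'default y', as
in the patch as posted: a configuration that knows it has no DMA masters can
still opt out explicitly. In a (hypothetical) defconfig that override would
simply read:

  # CONFIG_ARM_DMA_MEM_BUFFERABLE is not set

whereas a platform that selects the symbol pins it on and takes the choice
away from the user.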

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2017-05-24  9:20 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-24 10:16 [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU Vladimir Murzin
2017-04-24 10:16 ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 1/7] dma: Take into account dma_pfn_offset Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 2/7] dma: Add simple dma_noop_mmap Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 3/7] drivers: dma-coherent: Account dma_pfn_offset when used with device tree Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 4/7] drivers: dma-coherent: Introduce default DMA pool Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 5/7] ARM: NOMMU: Introduce dma operations for noMMU Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-04-24 10:16 ` [PATCH v4 6/7] ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-05-23 20:01   ` Russell King - ARM Linux
2017-05-23 20:01     ` Russell King - ARM Linux
2017-05-23 20:33     ` Arnd Bergmann
2017-05-23 20:33       ` Arnd Bergmann
2017-05-24  8:31       ` Vladimir Murzin
2017-05-24  8:31         ` Vladimir Murzin
2017-05-24  8:36         ` Arnd Bergmann
2017-05-24  8:36           ` Arnd Bergmann
2017-05-24  9:08           ` Vladimir Murzin
2017-05-24  9:08             ` Vladimir Murzin
2017-05-24  9:20             ` Arnd Bergmann
2017-05-24  9:20               ` Arnd Bergmann
2017-04-24 10:16 ` [PATCH v4 7/7] ARM: dma-mapping: Remove traces of NOMMU code Vladimir Murzin
2017-04-24 10:16   ` Vladimir Murzin
2017-05-02  8:32 ` [PATCH v4 0/7] ARM: Fix dma_alloc_coherent() and friends for NOMMU Vladimir Murzin
2017-05-02  8:32   ` Vladimir Murzin
2017-05-15  8:52   ` Vladimir Murzin
2017-05-15  8:52     ` Vladimir Murzin
2017-05-23 20:04     ` Russell King - ARM Linux
2017-05-23 20:04       ` Russell King - ARM Linux
2017-05-24  8:49       ` Vladimir Murzin
2017-05-24  8:49         ` Vladimir Murzin
