* [PATCH v2 0/4] devres: dma-mapping: Introducing new functions
@ 2014-06-01  7:01 Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single() Eli Billauer
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Eli Billauer @ 2014-06-01  7:01 UTC (permalink / raw)
  To: tj
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, shuah.kh,
	iommu, discuss, Eli Billauer

This patchset adds new functions to the managed device resource API,
followed by a patch for the Xillybus driver, which is my motivation for
this work and what I tested with.

This is a resubmission after changing the API slightly.

Rationale: While migrating the staging/xillybus driver to rely completely on
managed resources, some functionality turned out to be missing, hence the
following additions:

* dmam_map_single()
* dmam_unmap_single()
* pcim_map_single()
* pcim_unmap_single()
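
For example, a PCI driver would map a buffer with the new calls roughly
like this (hypothetical code, just to illustrate the intended usage; the
actual API is in patches #1 and #3):

    dma_addr_t dma_handle;
    int rc;

    rc = pcim_map_single(pdev, buf, bufsize, PCI_DMA_FROMDEVICE,
                         &dma_handle);
    if (rc)
        return rc; /* -EIO or -ENOMEM */

    /* dma_handle is valid here; the mapping is released automatically
       on driver detach, or explicitly with pcim_unmap_single(). */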

Tejun suggested that dma_map_single_attrs() should have a managed version as
well. The second patch in this set turns dmam_map_single() into
dmam_map_single_attrs(), and reimplements the former as a macro. Functions added:

* dmam_map_single_attrs()
* dmam_unmap_single_attrs()

The Xillybus driver works with or without this patch, as it depends only on
patches #1 and #3.

Thanks,
   Eli

Eli Billauer (4):
  dma-mapping: Add devm_ interface for dma_map_single()
  dma-mapping: Add devm_ interface for dma_map_single_attrs()
  dma-mapping: pci: Add devm_ interface for pci_map_single
  staging: xillybus: Use devm_ API for memory allocation and DMA
    mapping

 Documentation/driver-model/devres.txt    |    6 +
 drivers/base/dma-mapping.c               |  106 +++++++++++++++++
 drivers/staging/xillybus/xillybus.h      |   38 +------
 drivers/staging/xillybus/xillybus_core.c |  186 +++++++++---------------------
 drivers/staging/xillybus/xillybus_of.c   |   61 +---------
 drivers/staging/xillybus/xillybus_pcie.c |   54 ++--------
 include/asm-generic/dma-mapping-common.h |    3 +
 include/asm-generic/pci-dma-compat.h     |   18 +++
 include/linux/dma-mapping.h              |    8 +-
 9 files changed, 214 insertions(+), 266 deletions(-)

-- 
1.7.2.3



* [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-01  7:01 [PATCH v2 0/4] devres: dma-mapping: Introducing new functions Eli Billauer
@ 2014-06-01  7:01 ` Eli Billauer
  2014-06-03 21:24   ` Shuah Khan
  2014-06-01  7:01 ` [PATCH v2 2/4] dma-mapping: Add devm_ interface for dma_map_single_attrs() Eli Billauer
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 17+ messages in thread
From: Eli Billauer @ 2014-06-01  7:01 UTC (permalink / raw)
  To: tj
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, shuah.kh,
	iommu, discuss, Eli Billauer

dmam_map_single() and dmam_unmap_single() are the managed counterparts
for the respective dma_* functions.

Note that dmam_map_single() returns a status value rather than the DMA handle.
The DMA handle is returned to the caller through a pointer argument.

The reason for this API change is that dmam_map_single() allocates memory,
which may fail even if the DMA mapping itself is successful. At the same
time, there seems to be no platform-independent value for dma_handle that
could be used to indicate an error.

This leaves dmam_map_single() with no safe way to tell its caller that the
memory allocation has failed, other than returning a status as an int.
Staying close to the non-devres API could also tempt programmers into using
dma_mapping_error() on the dmam_* functions, which would work incorrectly on
platforms where dma_mapping_error() ignores its argument and always
returns success.
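
For illustration, a caller is expected to use the new function roughly
like this (hypothetical driver code, names made up):

    dma_addr_t dma_handle;
    int rc;

    rc = dmam_map_single(dev, buf, size, DMA_TO_DEVICE, &dma_handle);
    if (rc)
        return rc; /* -EIO or -ENOMEM; don't call dma_mapping_error() */

    /* dma_handle is valid; the mapping is unmapped automatically on
       driver detach, or earlier with dmam_unmap_single(). */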

Thanks to Tejun Heo for suggesting this API.

Signed-off-by: Eli Billauer <eli.billauer@gmail.com>
---
 Documentation/driver-model/devres.txt |    2 +
 drivers/base/dma-mapping.c            |   86 +++++++++++++++++++++++++++++++++
 include/linux/dma-mapping.h           |    6 ++-
 3 files changed, 93 insertions(+), 1 deletions(-)

diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index e1a2707..13b8be0 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -266,6 +266,8 @@ DMA
   dmam_declare_coherent_memory()
   dmam_pool_create()
   dmam_pool_destroy()
+  dmam_map_single()
+  dmam_unmap_single()
 
 PCI
   pcim_enable_device()	: after success, all PCI ops become managed
diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 0ce39a3..f76ea0f 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -19,6 +19,7 @@ struct dma_devres {
 	size_t		size;
 	void		*vaddr;
 	dma_addr_t	dma_handle;
+	enum dma_data_direction direction;
 };
 
 static void dmam_coherent_release(struct device *dev, void *res)
@@ -267,3 +268,88 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	return ret;
 }
 EXPORT_SYMBOL(dma_common_mmap);
+
+static int dmam_map_match(struct device *dev, void *res, void *match_data)
+{
+	struct dma_devres *this = res, *match = match_data;
+
+	if (this->dma_handle == match->dma_handle) {
+		WARN_ON(this->size != match->size ||
+			this->direction != match->direction);
+		return 1;
+	}
+	return 0;
+}
+
+static void dmam_map_single_release(struct device *dev, void *res)
+{
+	struct dma_devres *this = res;
+
+	dma_unmap_single(dev, this->dma_handle, this->size, this->direction);
+}
+
+/**
+ * dmam_map_single - Managed dma_map_single()
+ * @dev: Device to map DMA region for
+ * @ptr: Pointer to region
+ * @size: Size to map
+ * @direction: The mapping's direction
+ * @ret_dma_handle: Pointer to return the value of the DMA handle
+ *
+ * Managed dma_map_single().  The region mapped using this
+ * function will be automatically unmapped on driver detach.
+ *
+ * RETURNED VALUE:
+ * 0 is returned on success
+ * -EIO if the DMA mapping failed
+ * -ENOMEM on failure to allocate memory for internal data structure
+ */
+int dmam_map_single(struct device *dev, void *ptr, size_t size,
+		    enum dma_data_direction direction,
+		    dma_addr_t *ret_dma_handle)
+
+{
+	struct dma_devres *dr;
+	dma_addr_t dma_handle;
+
+	dr = devres_alloc(dmam_map_single_release, sizeof(*dr), GFP_KERNEL);
+	if (!dr)
+		return -ENOMEM;
+
+	dma_handle = dma_map_single(dev, ptr, size, direction);
+	if (dma_mapping_error(dev, dma_handle)) {
+		devres_free(dr);
+		return -EIO;
+	}
+
+	*ret_dma_handle = dma_handle;
+
+	dr->vaddr = ptr;
+	dr->dma_handle = dma_handle;
+	dr->size = size;
+	dr->direction = direction;
+
+	devres_add(dev, dr);
+
+	return 0; /* Success */
+}
+EXPORT_SYMBOL(dmam_map_single);
+
+/**
+ * dmam_unmap_single - Managed dma_unmap_single()
+ * @dev: Device to map DMA region for
+ * @dma_handle: DMA handle of the region to unmap
+ * @size: Size to unmap
+ * @direction: The mapping's direction
+ *
+ * Managed dma_unmap_single().
+ */
+void dmam_unmap_single(struct device *dev, dma_addr_t dma_handle,
+		       size_t size, enum dma_data_direction direction)
+{
+	struct dma_devres match_data = { size, NULL, dma_handle, direction };
+
+	WARN_ON(devres_release(dev, dmam_map_single_release, dmam_map_match,
+			       &match_data));
+}
+EXPORT_SYMBOL(dmam_unmap_single);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fd4aee2..cad63de 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -233,7 +233,11 @@ static inline void dmam_release_declared_memory(struct device *dev)
 {
 }
 #endif /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
-
+int dmam_map_single(struct device *dev, void *ptr, size_t size,
+		    enum dma_data_direction direction,
+		    dma_addr_t *ret_dma_handle);
+void dmam_unmap_single(struct device *dev, dma_addr_t dma_handle,
+		       size_t size, enum dma_data_direction direction);
 #ifndef CONFIG_HAVE_DMA_ATTRS
 struct dma_attrs;
 
-- 
1.7.2.3



* [PATCH v2 2/4] dma-mapping: Add devm_ interface for dma_map_single_attrs()
  2014-06-01  7:01 [PATCH v2 0/4] devres: dma-mapping: Introducing new functions Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single() Eli Billauer
@ 2014-06-01  7:01 ` Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 3/4] dma-mapping: pci: Add devm_ interface for pci_map_single Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 4/4] staging: xillybus: Use devm_ API for memory allocation and DMA mapping Eli Billauer
  3 siblings, 0 replies; 17+ messages in thread
From: Eli Billauer @ 2014-06-01  7:01 UTC (permalink / raw)
  To: tj
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, shuah.kh,
	iommu, discuss, Eli Billauer

dmam_map_single_attrs() and dmam_unmap_single_attrs() replace the non-*_attrs
functions, which are now implemented as macros instead.

The case of a non-NULL @attrs parameter has not been tested.
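
For reference, a caller passing a non-NULL @attrs would look roughly like
this (hypothetical and, as noted above, untested):

    struct dma_attrs attrs;
    dma_addr_t dma_handle;
    int rc;

    init_dma_attrs(&attrs);
    dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);

    rc = dmam_map_single_attrs(dev, buf, size, DMA_TO_DEVICE, &attrs,
                               &dma_handle);
    if (rc)
        return rc;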

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Eli Billauer <eli.billauer@gmail.com>
---
 Documentation/driver-model/devres.txt    |    2 +
 drivers/base/dma-mapping.c               |   46 +++++++++++++++++++++--------
 include/asm-generic/dma-mapping-common.h |    3 ++
 include/linux/dma-mapping.h              |   12 ++++---
 4 files changed, 45 insertions(+), 18 deletions(-)

diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 13b8be0..2112a00 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -268,6 +268,8 @@ DMA
   dmam_pool_destroy()
   dmam_map_single()
   dmam_unmap_single()
+  dmam_map_single_attrs()
+  dmam_unmap_single_attrs()
 
 PCI
   pcim_enable_device()	: after success, all PCI ops become managed
diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index f76ea0f..b24ae17 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -11,6 +11,7 @@
 #include <linux/export.h>
 #include <linux/gfp.h>
 #include <asm-generic/dma-coherent.h>
+#include <linux/slab.h>
 
 /*
  * Managed DMA API
@@ -20,6 +21,7 @@ struct dma_devres {
 	void		*vaddr;
 	dma_addr_t	dma_handle;
 	enum dma_data_direction direction;
+	struct dma_attrs *attrs;
 };
 
 static void dmam_coherent_release(struct device *dev, void *res)
@@ -285,18 +287,22 @@ static void dmam_map_single_release(struct device *dev, void *res)
 {
 	struct dma_devres *this = res;
 
-	dma_unmap_single(dev, this->dma_handle, this->size, this->direction);
+	dma_unmap_single_attrs(dev, this->dma_handle, this->size,
+			       this->direction, this->attrs);
+
+	kfree(this->attrs);
 }
 
 /**
- * dmam_map_single - Managed dma_map_single()
+ * dmam_map_single_attrs - Managed dma_map_single_attrs()
  * @dev: Device to map DMA region for
  * @ptr: Pointer to region
  * @size: Size to map
  * @direction: The mapping's direction
+ * @attrs: Attributes associated with the DMA mapping
  * @ret_dma_handle: Pointer to return the value of the DMA handle
  *
- * Managed dma_map_single().  The region mapped using this
+ * Managed dma_map_single_attrs().  The region mapped using this
  * function will be automatically unmapped on driver detach.
  *
  * RETURNED VALUE:
@@ -304,9 +310,11 @@ static void dmam_map_single_release(struct device *dev, void *res)
  * -EIO if the DMA mapping failed
  * -ENOMEM on failure to allocate memory for internal data structure
  */
-int dmam_map_single(struct device *dev, void *ptr, size_t size,
-		    enum dma_data_direction direction,
-		    dma_addr_t *ret_dma_handle)
+
+int dmam_map_single_attrs(struct device *dev, void *ptr, size_t size,
+			  enum dma_data_direction direction,
+			  struct dma_attrs *attrs,
+			  dma_addr_t *ret_dma_handle)
 
 {
 	struct dma_devres *dr;
@@ -316,8 +324,18 @@ int dmam_map_single(struct device *dev, void *ptr, size_t size,
 	if (!dr)
 		return -ENOMEM;
 
-	dma_handle = dma_map_single(dev, ptr, size, direction);
+	if (attrs) {
+		dr->attrs = kmemdup(attrs, sizeof(*attrs), GFP_KERNEL);
+		if (!dr->attrs) {
+			devres_free(dr);
+			return -ENOMEM;
+		}
+	} else
+		dr->attrs = NULL;
+
+	dma_handle = dma_map_single_attrs(dev, ptr, size, direction, attrs);
 	if (dma_mapping_error(dev, dma_handle)) {
+		kfree(dr->attrs);
 		devres_free(dr);
 		return -EIO;
 	}
@@ -333,23 +351,25 @@ int dmam_map_single(struct device *dev, void *ptr, size_t size,
 
 	return 0; /* Success */
 }
-EXPORT_SYMBOL(dmam_map_single);
+EXPORT_SYMBOL(dmam_map_single_attrs);
 
 /**
- * dmam_unmap_single - Managed dma_unmap_single()
+ * dmam_unmap_single_attrs - Managed dma_unmap_single_attrs()
  * @dev: Device to map DMA region for
  * @dma_handle: DMA handle of the region to unmap
  * @size: Size to unmap
  * @direction: The mapping's direction
+ * @attrs: Attributes associated with the DMA mapping.
  *
- * Managed dma_unmap_single().
+ * Managed dma_unmap_single_attrs().
  */
-void dmam_unmap_single(struct device *dev, dma_addr_t dma_handle,
-		       size_t size, enum dma_data_direction direction)
+void dmam_unmap_single_attrs(struct device *dev, dma_addr_t dma_handle,
+			     size_t size, enum dma_data_direction direction,
+			     struct dma_attrs *attrs)
 {
 	struct dma_devres match_data = { size, NULL, dma_handle, direction };
 
 	WARN_ON(devres_release(dev, dmam_map_single_release, dmam_map_match,
 			       &match_data));
 }
-EXPORT_SYMBOL(dmam_unmap_single);
+EXPORT_SYMBOL(dmam_unmap_single_attrs);
diff --git a/include/asm-generic/dma-mapping-common.h b/include/asm-generic/dma-mapping-common.h
index de8bf89..1af27b3 100644
--- a/include/asm-generic/dma-mapping-common.h
+++ b/include/asm-generic/dma-mapping-common.h
@@ -175,6 +175,9 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
 #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, NULL)
 #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL)
 #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL)
+#define dmam_map_single(d, a, s, r, p) \
+	dmam_map_single_attrs(d, a, s, r, NULL, p)
+#define dmam_unmap_single(d, a, s, r) dmam_unmap_single_attrs(d, a, s, r, NULL)
 
 extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index cad63de..4ca9134 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -233,11 +233,13 @@ static inline void dmam_release_declared_memory(struct device *dev)
 {
 }
 #endif /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
-int dmam_map_single(struct device *dev, void *ptr, size_t size,
-		    enum dma_data_direction direction,
-		    dma_addr_t *ret_dma_handle);
-void dmam_unmap_single(struct device *dev, dma_addr_t dma_handle,
-		       size_t size, enum dma_data_direction direction);
+int dmam_map_single_attrs(struct device *dev, void *ptr, size_t size,
+			  enum dma_data_direction direction,
+			  struct dma_attrs *attrs,
+			  dma_addr_t *ret_dma_handle);
+void dmam_unmap_single_attrs(struct device *dev, dma_addr_t dma_handle,
+			     size_t size, enum dma_data_direction direction,
+			     struct dma_attrs *attrs);
 #ifndef CONFIG_HAVE_DMA_ATTRS
 struct dma_attrs;
 
-- 
1.7.2.3



* [PATCH v2 3/4] dma-mapping: pci: Add devm_ interface for pci_map_single
  2014-06-01  7:01 [PATCH v2 0/4] devres: dma-mapping: Introducing new functions Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single() Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 2/4] dma-mapping: Add devm_ interface for dma_map_single_attrs() Eli Billauer
@ 2014-06-01  7:01 ` Eli Billauer
  2014-06-01  7:01 ` [PATCH v2 4/4] staging: xillybus: Use devm_ API for memory allocation and DMA mapping Eli Billauer
  3 siblings, 0 replies; 17+ messages in thread
From: Eli Billauer @ 2014-06-01  7:01 UTC (permalink / raw)
  To: tj
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, shuah.kh,
	iommu, discuss, Eli Billauer


Signed-off-by: Eli Billauer <eli.billauer@gmail.com>
---
 Documentation/driver-model/devres.txt |    2 ++
 include/asm-generic/pci-dma-compat.h  |   18 ++++++++++++++++++
 2 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 2112a00..fea2c69 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -274,6 +274,8 @@ DMA
 PCI
   pcim_enable_device()	: after success, all PCI ops become managed
   pcim_pin_device()	: keep PCI device enabled after release
+  pcim_map_single()
+  pcim_unmap_single()
 
 IOMAP
   devm_ioport_map()
diff --git a/include/asm-generic/pci-dma-compat.h b/include/asm-generic/pci-dma-compat.h
index 1437b7d..796a892 100644
--- a/include/asm-generic/pci-dma-compat.h
+++ b/include/asm-generic/pci-dma-compat.h
@@ -113,4 +113,22 @@ static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
 }
 #endif
 
+/*
+ * Managed DMA API
+ */
+
+static inline int
+pcim_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction,
+		dma_addr_t *ret_dma_handle)
+{
+	return dmam_map_single(hwdev == NULL ? NULL : &hwdev->dev, ptr, size, (enum dma_data_direction)direction, ret_dma_handle);
+}
+
+static inline void
+pcim_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
+		 size_t size, int direction)
+{
+	dmam_unmap_single(hwdev == NULL ? NULL : &hwdev->dev, dma_addr, size, (enum dma_data_direction)direction);
+}
+
 #endif
-- 
1.7.2.3



* [PATCH v2 4/4] staging: xillybus: Use devm_ API for memory allocation and DMA mapping
  2014-06-01  7:01 [PATCH v2 0/4] devres: dma-mapping: Introducing new functions Eli Billauer
                   ` (2 preceding siblings ...)
  2014-06-01  7:01 ` [PATCH v2 3/4] dma-mapping: pci: Add devm_ interface for pci_map_single Eli Billauer
@ 2014-06-01  7:01 ` Eli Billauer
  3 siblings, 0 replies; 17+ messages in thread
From: Eli Billauer @ 2014-06-01  7:01 UTC (permalink / raw)
  To: tj
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, shuah.kh,
	iommu, discuss, Eli Billauer

The managed device resource API replaces code that reinvented it for memory
allocation, page allocation and DMA mapping.
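
In particular, the temporary bootstrap buffers now live in a devres group,
along these lines (a rough sketch of the pattern used in
xillybus_endpoint_discovery() below):

    bootstrap_resources = devres_open_group(dev, NULL, GFP_KERNEL);
    if (!bootstrap_resources)
        return -ENOMEM;

    /* devm_/dmam_ allocations made from here on belong to the group */

    devres_close_group(dev, bootstrap_resources);

    /* ... later, the bootstrap resources are freed without waiting
       for driver detach: */
    devres_release_group(dev, bootstrap_resources);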

Suggested-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Eli Billauer <eli.billauer@gmail.com>
---
 drivers/staging/xillybus/xillybus.h      |   38 +------
 drivers/staging/xillybus/xillybus_core.c |  186 +++++++++---------------------
 drivers/staging/xillybus/xillybus_of.c   |   61 +---------
 drivers/staging/xillybus/xillybus_pcie.c |   54 ++--------
 4 files changed, 74 insertions(+), 265 deletions(-)

diff --git a/drivers/staging/xillybus/xillybus.h b/drivers/staging/xillybus/xillybus.h
index 78a749a..d88e458 100644
--- a/drivers/staging/xillybus/xillybus.h
+++ b/drivers/staging/xillybus/xillybus.h
@@ -25,33 +25,12 @@
 
 struct xilly_endpoint_hardware;
 
-struct xilly_page {
-	struct list_head node;
-	unsigned long addr;
-	unsigned int order;
-};
-
-struct xilly_dma {
-	struct list_head node;
-	struct pci_dev *pdev;
-	struct device *dev;
-	dma_addr_t dma_addr;
-	size_t size;
-	int direction;
-};
-
 struct xilly_buffer {
 	void *addr;
 	dma_addr_t dma_addr;
 	int end_offset; /* Counting elements, not bytes */
 };
 
-struct xilly_cleanup {
-	struct list_head to_kfree;
-	struct list_head to_pagefree;
-	struct list_head to_unmap;
-};
-
 struct xilly_idt_handle {
 	unsigned char *chandesc;
 	unsigned char *idt;
@@ -126,9 +105,6 @@ struct xilly_endpoint {
 	struct mutex register_mutex;
 	wait_queue_head_t ep_wait;
 
-	/* List of memory allocations, to make release easy */
-	struct xilly_cleanup cleanup;
-
 	/* Channels and message handling */
 	struct cdev cdev;
 
@@ -156,19 +132,15 @@ struct xilly_endpoint_hardware {
 				       dma_addr_t,
 				       size_t,
 				       int);
-	dma_addr_t (*map_single)(struct xilly_cleanup *,
-				 struct xilly_endpoint *,
-				 void *,
-				 size_t,
-				 int);
-	void (*unmap_single)(struct xilly_dma *entry);
+	int (*map_single)(struct xilly_endpoint *,
+			  void *,
+			  size_t,
+			  int,
+			  dma_addr_t *);
 };
 
 irqreturn_t xillybus_isr(int irq, void *data);
 
-void xillybus_do_cleanup(struct xilly_cleanup *mem,
-			 struct xilly_endpoint *endpoint);
-
 struct xilly_endpoint *xillybus_init_endpoint(struct pci_dev *pdev,
 					      struct device *dev,
 					      struct xilly_endpoint_hardware
diff --git a/drivers/staging/xillybus/xillybus_core.c b/drivers/staging/xillybus/xillybus_core.c
index fe8f9d2..5fca58e 100644
--- a/drivers/staging/xillybus/xillybus_core.c
+++ b/drivers/staging/xillybus/xillybus_core.c
@@ -311,85 +311,14 @@ EXPORT_SYMBOL(xillybus_isr);
  * no locks are applied!
  */
 
-void xillybus_do_cleanup(struct xilly_cleanup *mem,
-			 struct xilly_endpoint *endpoint)
-{
-	struct list_head *this, *next;
-
-	list_for_each_safe(this, next, &mem->to_unmap) {
-		struct xilly_dma *entry =
-			list_entry(this, struct xilly_dma, node);
-
-		endpoint->ephw->unmap_single(entry);
-		kfree(entry);
-	}
-
-	INIT_LIST_HEAD(&mem->to_unmap);
-
-	list_for_each_safe(this, next, &mem->to_kfree)
-		kfree(this);
-
-	INIT_LIST_HEAD(&mem->to_kfree);
-
-	list_for_each_safe(this, next, &mem->to_pagefree) {
-		struct xilly_page *entry =
-			list_entry(this, struct xilly_page, node);
-
-		free_pages(entry->addr, entry->order);
-		kfree(entry);
-	}
-	INIT_LIST_HEAD(&mem->to_pagefree);
-}
-EXPORT_SYMBOL(xillybus_do_cleanup);
-
-static void *xilly_malloc(struct xilly_cleanup *mem, size_t size)
-{
-	void *ptr;
-
-	ptr = kzalloc(sizeof(struct list_head) + size, GFP_KERNEL);
-
-	if (!ptr)
-		return ptr;
-
-	list_add_tail((struct list_head *) ptr, &mem->to_kfree);
-
-	return ptr + sizeof(struct list_head);
-}
-
-static unsigned long xilly_pagealloc(struct xilly_cleanup *mem,
-				     unsigned long order)
-{
-	unsigned long addr;
-	struct xilly_page *this;
-
-	this = kmalloc(sizeof(struct xilly_page), GFP_KERNEL);
-	if (!this)
-		return 0;
-
-	addr =  __get_free_pages(GFP_KERNEL | __GFP_DMA32 | __GFP_ZERO, order);
-
-	if (!addr) {
-		kfree(this);
-		return 0;
-	}
-
-	this->addr = addr;
-	this->order = order;
-
-	list_add_tail(&this->node, &mem->to_pagefree);
-
-	return addr;
-}
-
-
 static void xillybus_autoflush(struct work_struct *work);
 
 static int xilly_setupchannels(struct xilly_endpoint *ep,
-			       struct xilly_cleanup *mem,
 			       unsigned char *chandesc,
 			       int entries
 	)
 {
+	struct device *dev = ep->dev;
 	int i, entry, wr_nbuffer, rd_nbuffer;
 	struct xilly_channel *channel;
 	int channelnum, bufnum, bufsize, format, is_writebuf;
@@ -402,17 +331,20 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 	int left_of_rd_salami = 0;
 	dma_addr_t dma_addr;
 	int msg_buf_done = 0;
+	const gfp_t gfp_mask = GFP_KERNEL | __GFP_DMA32 | __GFP_ZERO;
 
 	struct xilly_buffer *this_buffer = NULL; /* Init to silence warning */
+	int rc = 0;
 
-	channel = xilly_malloc(mem, ep->num_channels *
-			       sizeof(struct xilly_channel));
+	channel = devm_kzalloc(dev, ep->num_channels *
+			       sizeof(struct xilly_channel), GFP_KERNEL);
 
 	if (!channel)
 		goto memfail;
 
-	ep->channels = xilly_malloc(mem, (ep->num_channels + 1) *
-				    sizeof(struct xilly_channel *));
+	ep->channels = devm_kzalloc(dev, (ep->num_channels + 1) *
+				    sizeof(struct xilly_channel *),
+				    GFP_KERNEL);
 
 	if (!ep->channels)
 		goto memfail;
@@ -501,16 +433,16 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 			channel->rd_exclusive_open = exclusive_open;
 			channel->seekable = seekable;
 
-			channel->rd_buffers = xilly_malloc(
-				mem,
-				bufnum * sizeof(struct xilly_buffer *));
+			channel->rd_buffers = devm_kzalloc(dev,
+				bufnum * sizeof(struct xilly_buffer *),
+				GFP_KERNEL);
 
 			if (!channel->rd_buffers)
 				goto memfail;
 
-			this_buffer = xilly_malloc(
-				mem,
-				bufnum * sizeof(struct xilly_buffer));
+			this_buffer = devm_kzalloc(dev,
+				bufnum * sizeof(struct xilly_buffer),
+				GFP_KERNEL);
 
 			if (!this_buffer)
 				goto memfail;
@@ -530,16 +462,16 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 			channel->wr_synchronous = synchronous;
 			channel->wr_exclusive_open = exclusive_open;
 
-			channel->wr_buffers = xilly_malloc(
-				mem,
-				bufnum * sizeof(struct xilly_buffer *));
+			channel->wr_buffers = devm_kzalloc(dev,
+				bufnum * sizeof(struct xilly_buffer *),
+				GFP_KERNEL);
 
 			if (!channel->wr_buffers)
 				goto memfail;
 
-			this_buffer = xilly_malloc(
-				mem,
-				bufnum * sizeof(struct xilly_buffer));
+			this_buffer = devm_kzalloc(dev,
+				bufnum * sizeof(struct xilly_buffer),
+				GFP_KERNEL);
 
 			if (!this_buffer)
 				goto memfail;
@@ -576,21 +508,21 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 					}
 
 					wr_salami = (void *)
-						xilly_pagealloc(mem,
-								allocorder);
+						devm_get_free_pages(
+							dev, gfp_mask,
+							allocorder);
+
 					if (!wr_salami)
 						goto memfail;
 					left_of_wr_salami = allocsize;
 				}
 
-				dma_addr = ep->ephw->map_single(
-					mem,
-					ep,
-					wr_salami,
-					bytebufsize,
-					DMA_FROM_DEVICE);
+				rc = ep->ephw->map_single(ep, wr_salami,
+							  bytebufsize,
+							  DMA_FROM_DEVICE,
+							  &dma_addr);
 
-				if (!dma_addr)
+				if (rc)
 					goto dmafail;
 
 				iowrite32(
@@ -654,8 +586,8 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 					}
 
 					rd_salami = (void *)
-						xilly_pagealloc(
-							mem,
+						devm_get_free_pages(
+							dev, gfp_mask,
 							allocorder);
 
 					if (!rd_salami)
@@ -663,14 +595,13 @@ static int xilly_setupchannels(struct xilly_endpoint *ep,
 					left_of_rd_salami = allocsize;
 				}
 
-				dma_addr = ep->ephw->map_single(
-					mem,
-					ep,
-					rd_salami,
-					bytebufsize,
-					DMA_TO_DEVICE);
 
-				if (!dma_addr)
+				rc = ep->ephw->map_single(ep, rd_salami,
+							  bytebufsize,
+							  DMA_TO_DEVICE,
+							  &dma_addr);
+
+				if (rc)
 					goto dmafail;
 
 				iowrite32(
@@ -712,7 +643,7 @@ memfail:
 	return -ENOMEM;
 dmafail:
 	dev_err(ep->dev, "Failed to map DMA memory!. Aborting.\n");
-	return -ENOMEM;
+	return rc;
 }
 
 static void xilly_scan_idt(struct xilly_endpoint *endpoint,
@@ -2103,9 +2034,6 @@ struct xilly_endpoint *xillybus_init_endpoint(struct pci_dev *pdev,
 	endpoint->pdev = pdev;
 	endpoint->dev = dev;
 	endpoint->ephw = ephw;
-	INIT_LIST_HEAD(&endpoint->cleanup.to_kfree);
-	INIT_LIST_HEAD(&endpoint->cleanup.to_pagefree);
-	INIT_LIST_HEAD(&endpoint->cleanup.to_unmap);
 	endpoint->msg_counter = 0x0b;
 	endpoint->failed_messages = 0;
 	endpoint->fatal_error = 0;
@@ -2131,7 +2059,7 @@ static int xilly_quiesce(struct xilly_endpoint *endpoint)
 
 	if (endpoint->idtlen < 0) {
 		dev_err(endpoint->dev,
-			"Failed to quiesce the device on exit. Quitting while leaving a mess.\n");
+			"Failed to quiesce the device on exit.\n");
 		return -ENODEV;
 	}
 	return 0; /* Success */
@@ -2141,8 +2069,9 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 {
 	int rc = 0;
 
-	struct xilly_cleanup tmpmem;
+	void *bootstrap_resources;
 	int idtbuffersize = (1 << PAGE_SHIFT);
+	struct device *dev = endpoint->dev;
 
 	/*
 	 * The bogus IDT is used during bootstrap for allocating the initial
@@ -2155,10 +2084,6 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 				       3, 192, PAGE_SHIFT, 0 };
 	struct xilly_idt_handle idt_handle;
 
-	INIT_LIST_HEAD(&tmpmem.to_kfree);
-	INIT_LIST_HEAD(&tmpmem.to_pagefree);
-	INIT_LIST_HEAD(&tmpmem.to_unmap);
-
 	/*
 	 * Writing the value 0x00000001 to Endianness register signals which
 	 * endianness this processor is using, so the FPGA can swap words as
@@ -2170,12 +2095,16 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 
 	/* Bootstrap phase I: Allocate temporary message buffer */
 
+	bootstrap_resources = devres_open_group(dev, NULL, GFP_KERNEL);
+	if (!bootstrap_resources)
+		return -ENOMEM;
+
 	endpoint->num_channels = 0;
 
-	rc = xilly_setupchannels(endpoint, &tmpmem, bogus_idt, 1);
+	rc = xilly_setupchannels(endpoint, bogus_idt, 1);
 
 	if (rc)
-		goto failed_buffers;
+		return rc;
 
 	/* Clear the message subsystem (and counter in particular) */
 	iowrite32(0x04, &endpoint->registers[fpga_msg_ctrl_reg]);
@@ -2199,8 +2128,7 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 
 	if (endpoint->idtlen < 0) {
 		dev_err(endpoint->dev, "No response from FPGA. Aborting.\n");
-		rc = -ENODEV;
-		goto failed_quiesce;
+		return -ENODEV;
 	}
 
 	/* Enable DMA */
@@ -2216,7 +2144,7 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 
 	endpoint->num_channels = 1;
 
-	rc = xilly_setupchannels(endpoint, &tmpmem, bogus_idt, 2);
+	rc = xilly_setupchannels(endpoint, bogus_idt, 2);
 
 	if (rc)
 		goto failed_idt;
@@ -2234,10 +2162,12 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 		rc = -ENODEV;
 		goto failed_idt;
 	}
+
+	devres_close_group(dev, bootstrap_resources);
+
 	/* Bootstrap phase III: Allocate buffers according to IDT */
 
 	rc = xilly_setupchannels(endpoint,
-				 &endpoint->cleanup,
 				 idt_handle.chandesc,
 				 idt_handle.entries);
 
@@ -2260,7 +2190,7 @@ int xillybus_endpoint_discovery(struct xilly_endpoint *endpoint)
 	if (rc)
 		goto failed_chrdevs;
 
-	xillybus_do_cleanup(&tmpmem, endpoint);
+	devres_release_group(dev, bootstrap_resources);
 
 	return 0;
 
@@ -2270,16 +2200,8 @@ failed_chrdevs:
 	mutex_unlock(&ep_list_lock);
 
 failed_idt:
-	/* Quiesce the device. Now it's serious to do it */
-	rc = xilly_quiesce(endpoint);
-
-	if (rc)
-		return rc; /* FPGA may still DMA, so no release */
-
+	xilly_quiesce(endpoint);
 	flush_workqueue(xillybus_wq);
-failed_quiesce:
-failed_buffers:
-	xillybus_do_cleanup(&tmpmem, endpoint);
 
 	return rc;
 }
diff --git a/drivers/staging/xillybus/xillybus_of.c b/drivers/staging/xillybus/xillybus_of.c
index 46ea010..7c40cf7 100644
--- a/drivers/staging/xillybus/xillybus_of.c
+++ b/drivers/staging/xillybus/xillybus_of.c
@@ -62,44 +62,14 @@ static void xilly_dma_sync_single_nop(struct xilly_endpoint *ep,
 {
 }
 
-static dma_addr_t xilly_map_single_of(struct xilly_cleanup *mem,
-				      struct xilly_endpoint *ep,
-				      void *ptr,
-				      size_t size,
-				      int direction
+static int xilly_map_single_of(struct xilly_endpoint *ep,
+			       void *ptr,
+			       size_t size,
+			       int direction,
+			       dma_addr_t *ret_dma_handle
 	)
 {
-
-	dma_addr_t addr = 0;
-	struct xilly_dma *this;
-
-	this = kmalloc(sizeof(struct xilly_dma), GFP_KERNEL);
-	if (!this)
-		return 0;
-
-	addr = dma_map_single(ep->dev, ptr, size, direction);
-	this->direction = direction;
-
-	if (dma_mapping_error(ep->dev, addr)) {
-		kfree(this);
-		return 0;
-	}
-
-	this->dma_addr = addr;
-	this->dev = ep->dev;
-	this->size = size;
-
-	list_add_tail(&this->node, &mem->to_unmap);
-
-	return addr;
-}
-
-static void xilly_unmap_single_of(struct xilly_dma *entry)
-{
-	dma_unmap_single(entry->dev,
-			 entry->dma_addr,
-			 entry->size,
-			 entry->direction);
+	return dmam_map_single(ep->dev, ptr, size, direction, ret_dma_handle);
 }
 
 static struct xilly_endpoint_hardware of_hw = {
@@ -107,7 +77,6 @@ static struct xilly_endpoint_hardware of_hw = {
 	.hw_sync_sgl_for_cpu = xilly_dma_sync_single_for_cpu_of,
 	.hw_sync_sgl_for_device = xilly_dma_sync_single_for_device_of,
 	.map_single = xilly_map_single_of,
-	.unmap_single = xilly_unmap_single_of
 };
 
 static struct xilly_endpoint_hardware of_hw_coherent = {
@@ -115,7 +84,6 @@ static struct xilly_endpoint_hardware of_hw_coherent = {
 	.hw_sync_sgl_for_cpu = xilly_dma_sync_single_nop,
 	.hw_sync_sgl_for_device = xilly_dma_sync_single_nop,
 	.map_single = xilly_map_single_of,
-	.unmap_single = xilly_unmap_single_of
 };
 
 static int xilly_drv_probe(struct platform_device *op)
@@ -138,12 +106,6 @@ static int xilly_drv_probe(struct platform_device *op)
 	dev_set_drvdata(dev, endpoint);
 
 	rc = of_address_to_resource(dev->of_node, 0, &res);
-	if (rc) {
-		dev_warn(endpoint->dev,
-			 "Failed to obtain device tree resource\n");
-		return rc;
-	}
-
 	endpoint->registers = devm_ioremap_resource(dev, &res);
 
 	if (IS_ERR(endpoint->registers))
@@ -159,14 +121,7 @@ static int xilly_drv_probe(struct platform_device *op)
 		return -ENODEV;
 	}
 
-	rc = xillybus_endpoint_discovery(endpoint);
-
-	if (!rc)
-		return 0;
-
-	xillybus_do_cleanup(&endpoint->cleanup, endpoint);
-
-	return rc;
+	return xillybus_endpoint_discovery(endpoint);
 }
 
 static int xilly_drv_remove(struct platform_device *op)
@@ -176,8 +131,6 @@ static int xilly_drv_remove(struct platform_device *op)
 
 	xillybus_endpoint_remove(endpoint);
 
-	xillybus_do_cleanup(&endpoint->cleanup, endpoint);
-
 	return 0;
 }
 
diff --git a/drivers/staging/xillybus/xillybus_pcie.c b/drivers/staging/xillybus/xillybus_pcie.c
index a4fe51c..e03f890 100644
--- a/drivers/staging/xillybus/xillybus_pcie.c
+++ b/drivers/staging/xillybus/xillybus_pcie.c
@@ -78,46 +78,18 @@ static void xilly_dma_sync_single_for_device_pci(struct xilly_endpoint *ep,
  * but that can change.
  */
 
-static dma_addr_t xilly_map_single_pci(struct xilly_cleanup *mem,
-				       struct xilly_endpoint *ep,
-				       void *ptr,
-				       size_t size,
-				       int direction
+static int xilly_map_single_pci(struct xilly_endpoint *ep,
+				void *ptr,
+				size_t size,
+				int direction,
+				dma_addr_t *ret_dma_handle
 	)
 {
-
-	dma_addr_t addr = 0;
-	struct xilly_dma *this;
 	int pci_direction;
 
-	this = kmalloc(sizeof(struct xilly_dma), GFP_KERNEL);
-	if (!this)
-		return 0;
-
 	pci_direction = xilly_pci_direction(direction);
-	addr = pci_map_single(ep->pdev, ptr, size, pci_direction);
-	this->direction = pci_direction;
-
-	if (pci_dma_mapping_error(ep->pdev, addr)) {
-		kfree(this);
-		return 0;
-	}
-
-	this->dma_addr = addr;
-	this->pdev = ep->pdev;
-	this->size = size;
-
-	list_add_tail(&this->node, &mem->to_unmap);
-
-	return addr;
-}
-
-static void xilly_unmap_single_pci(struct xilly_dma *entry)
-{
-	pci_unmap_single(entry->pdev,
-			 entry->dma_addr,
-			 entry->size,
-			 entry->direction);
+	return pcim_map_single(ep->pdev, ptr, size, pci_direction,
+			       ret_dma_handle);
 }
 
 static struct xilly_endpoint_hardware pci_hw = {
@@ -125,7 +97,6 @@ static struct xilly_endpoint_hardware pci_hw = {
 	.hw_sync_sgl_for_cpu = xilly_dma_sync_single_for_cpu_pci,
 	.hw_sync_sgl_for_device = xilly_dma_sync_single_for_device_pci,
 	.map_single = xilly_map_single_pci,
-	.unmap_single = xilly_unmap_single_pci
 };
 
 static int xilly_probe(struct pci_dev *pdev,
@@ -199,14 +170,7 @@ static int xilly_probe(struct pci_dev *pdev,
 		return -ENODEV;
 	}
 
-	rc = xillybus_endpoint_discovery(endpoint);
-
-	if (!rc)
-		return 0;
-
-	xillybus_do_cleanup(&endpoint->cleanup, endpoint);
-
-	return rc;
+	return xillybus_endpoint_discovery(endpoint);
 }
 
 static void xilly_remove(struct pci_dev *pdev)
@@ -214,8 +178,6 @@ static void xilly_remove(struct pci_dev *pdev)
 	struct xilly_endpoint *endpoint = pci_get_drvdata(pdev);
 
 	xillybus_endpoint_remove(endpoint);
-
-	xillybus_do_cleanup(&endpoint->cleanup, endpoint);
 }
 
 MODULE_DEVICE_TABLE(pci, xillyids);
-- 
1.7.2.3



* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-01  7:01 ` [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single() Eli Billauer
@ 2014-06-03 21:24   ` Shuah Khan
  2014-06-03 23:39     ` Joerg Roedel
  0 siblings, 1 reply; 17+ messages in thread
From: Shuah Khan @ 2014-06-03 21:24 UTC (permalink / raw)
  To: Eli Billauer, tj, Joerg Roedel
  Cc: devel, gregkh, bhelgaas, linux-kernel, linux-pci, iommu, discuss,
	Shuah Khan

On 06/01/2014 01:01 AM, Eli Billauer wrote:
> dmam_map_single() and dmam_unmap_single() are the managed counterparts
> for the respective dma_* functions.
>
> Note that dmam_map_single() returns a status value rather than the DMA handle.
> The DMA handle is passed to the caller through a pointer in the arguments.
>
> The reason for this API change is that dmam_map_single() allocates memory,
> which may fail even if the DMA mapping is successful. On the other hand,
> it seems like there's no platform-independent value for dma_handle that can
> be used to indicate an error.
>
> This leaves dmam_map_single() with no safe way to tell its caller that the
> memory allocation has failed, except for returning a status as an int. Trying
> to keep close to the non-devres API could also tempt programmers into using
> dma_mapping_error() on the dmam_* functions. This would work incorrectly on
> platforms, for which dma_mapping_error() ignores its argument, and always
> returns success.
>

I see the value of this interface in the unmap case: this kind of wrapper
can release DMA buffers that drivers neglected to release, which would
otherwise be left dangling.

However, driver writers should give it some thought before switching
from the conventional dma_map_*() interfaces to these proposed managed
DMA mapping interfaces. These new interfaces shouldn't be treated as
drop-in replacements for the dma_map_*() interfaces.

The reasons are:

1. This interface adds the overhead of allocating memory for devres,
    compared to the other dma_map_* wrappers. Users need to be aware of
    that. This would be okay in cases where a driver allocates several
    buffers at init time and keeps using them. However, several drivers
    allocate during run-time and release the buffer as soon as it is no
    longer needed. This overhead is going to be in the performance path.

2. It adds a failure case even when DMA buffers are available, i.e. if
    the devres allocation fails, it will return failure even if the DMA
    mapping could have succeeded. This is a new failure case for the
    DMA-API.

    The reason dma_map_*() routines fail now is that there are no
    buffers available. Drivers handle this error as a retry case.

    Drivers using dmam_map_single() will have to handle the failure
    cases differently.

    Since the return values are different for dmam_map_*(), that is one
    more reason these interfaces can't be drop-in replacements for the
    dma_map_*() interfaces.

3. Similarly, it adds the overhead of releasing devres memory, compared
    to the other dma_unmap_* wrappers. Users need to be aware of that.
    This overhead is going to be in the performance path when drivers
    unmap buffers during run-time.

Adding Joerg Roedel to the thread.

-- Shuah

-- 
Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
shuah.kh@samsung.com | (970) 672-0658


* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-03 21:24   ` Shuah Khan
@ 2014-06-03 23:39     ` Joerg Roedel
  2014-06-04 14:04       ` Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Joerg Roedel @ 2014-06-03 23:39 UTC (permalink / raw)
  To: Shuah Khan
  Cc: Eli Billauer, tj, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

On Tue, Jun 03, 2014 at 03:24:20PM -0600, Shuah Khan wrote:
> On 06/01/2014 01:01 AM, Eli Billauer wrote:
> I see the value of this interface in unmap case, this type of wrapper
> can release dma buffers, drivers neglected to release leaving dangling
> buffers.
> 
> However, driver writers should give it some thought before switching
> from conventional dma_map_*() interfaces to these proposed managed
> dma mapping interfaces. These new interfaces shouldn't be treated as
> drop in replacements to dma_map_*() interfaces.
> 
> The reasons are:
> 
> 1. This interface adds an overhead in allocation memory for devres to
>    compared to other dma_map_* wrappers. Users need to be aware of that.
>    This would be okay in the cases where a driver allocates several
>    buffers at init time and uses them. However, several drivers allocate
>    during run-time and release as soon as it is no longer needed. This
>    overhead is going to be in the performance path.
> 
> 2. It adds a failure case even when dma buffers are available. i.e if
>    if devres alloc fails, it will return failure even if dma map could
>    have succeeded. This is a new failure case for DMA-API.
> 
>    The reason dma_map_*() routines fail now is because there are no
>    buffers available. Drivers handle this error as a retry case.
> 
>    Drivers using dmam_map_single() will have to handle the failure
>    cases differently.
> 
>    Since the return values are different for dmam_map_*(), that is
>    plus that these interfaces can't be drop in replacements to the
>    dma_map_*() interfaces.
> 
> 3. Similarly, it adds an overhead in releasing memory for devres to
>    compared to other dma_unmap_* wrappers. Users need to be aware of
>    that. This overhead is going to be in the performance path when
>    drivers unmap buffers during run-time.

I fully agree with the points Shuah brought up here. I don't think it is
a good idea to add this kind of resource management to runtime-allocated
(and de-allocated) resources of device drivers.

Also DMA handles are not something that could be garbage collected at
driver unload time. They are a limited resource that may be used up at
some point. And the whole point of a devm-API is that code can be
simpler because we don't need to de-allocate everything on the
error-path or at unload time, no?

Besides that, we already have DMA-API debug to find drivers that do not
release all their DMA buffers.


	Joerg




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-03 23:39     ` Joerg Roedel
@ 2014-06-04 14:04       ` Tejun Heo
  2014-06-04 14:12         ` Joerg Roedel
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2014-06-04 14:04 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Shuah Khan, Eli Billauer, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

Hello,

On Wed, Jun 04, 2014 at 01:39:07AM +0200, Joerg Roedel wrote:
> I fully agree with the points Shuah brought up here. I don't think it is
> a good idea to add this kind of resource management to runtime-allocated
> (and de-allocated) resources of device drivers.
>
> Also DMA handles are not something that could be garbage collected at
> driver unload time. They are a limited resource that may be used up at
> some point. And the whole point of a devm-API is that code can be
> simpler because we don't need to de-allocate everything on the
> error-path or at unload time, no?

Hmmm?  Don't we have drivers which map dma buffers on device init and
release them on exit?  For dynamic usages, its usefulness is limited,
especially given that dynamic tracking of buffers usually would
involve tracking other information too, in addition to the dma buffer
pointers themselves.  If alloc on init and free on exit is a very rare
usage pattern, I have no objection to not adding a devm interface
for dma mappings.

Thanks.

-- 
tejun


* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-04 14:04       ` Tejun Heo
@ 2014-06-04 14:12         ` Joerg Roedel
  2014-06-04 14:14           ` Tejun Heo
  0 siblings, 1 reply; 17+ messages in thread
From: Joerg Roedel @ 2014-06-04 14:12 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Shuah Khan, Eli Billauer, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

On Wed, Jun 04, 2014 at 10:04:08AM -0400, Tejun Heo wrote:
> Hmmm?  Don't we have drivers which map dma buffers on device init and
> release them on exit?  For dynamic usages, its usefulness is limited
> especially given that dynamic tracking of buffers usually would
> involve tracking of other information too in addition to dma buffer
> pointer themselves.  If alloc on init and free on exit is a very rare
> usage pattern, I have no objection against not adding devm interface
> for dma mappings.

Yes, but those drivers usually get DMA buffers at init time with the
dma_alloc_* interfaces. The dma_map_* interfaces discussed here belong
to the streaming DMA-API, so they are usually used for only one DMA
transaction before dma_unmap_* is called on them.

A devm interface for the dma_alloc_* family of functions would
actually make sense, but not for the dma_map_* functions.


	Joerg




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-04 14:12         ` Joerg Roedel
@ 2014-06-04 14:14           ` Tejun Heo
  2014-06-04 15:03             ` Eli Billauer
  0 siblings, 1 reply; 17+ messages in thread
From: Tejun Heo @ 2014-06-04 14:14 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Shuah Khan, Eli Billauer, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

Hello,

On Wed, Jun 04, 2014 at 04:12:11PM +0200, Joerg Roedel wrote:
> Yes, but those drivers usually get DMA buffers at init time with the
> dma_alloc_* interfaces. The dma_map_* interfaces discussed here belong
> to the streaming DMA-API, so they are usually used for only one DMA
> transaction before dma_unmap_* is called on them.
> 
> A devm interface for the dma_alloc_* family of functions would
> actually make sense, but not for the dma_map_* functions.

Ah, okay.  Fair enough.

Thanks.

-- 
tejun


* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-04 14:14           ` Tejun Heo
@ 2014-06-04 15:03             ` Eli Billauer
  2014-06-04 21:25               ` Joerg Roedel
  0 siblings, 1 reply; 17+ messages in thread
From: Eli Billauer @ 2014-06-04 15:03 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Joerg Roedel, Shuah Khan, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

Hi,

I believe that I need a managed dma_map_single() in my own driver, which 
doesn't fall into the single-use case: The driver allocates its 
buffers with __get_free_pages() (or the to-be managed version of it). 
Then it cuts the allocated memory into smaller buffers (in some cases 
with certain alignment rules), and then calls dma_map_single() to 
do the DMA mapping for each. The buffers are held throughout the driver's 
lifetime, with the DMA sync API juggling control back and forth to the 
hardware. Note that the DMA is noncoherent.
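
Roughly, the pattern is (names made up, error handling omitted):

    buf = (void *) __get_free_pages(GFP_KERNEL | __GFP_DMA32 | __GFP_ZERO,
                                    allocorder);

    for (i = 0; i < nbuffers; i++) {
        void *ptr = buf + i * bufsize; /* real code has alignment rules */

        dma_addr[i] = dma_map_single(dev, ptr, bufsize, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, dma_addr[i]))
            goto fail;
    }

    /* Later, around each hardware access to a buffer: */
    dma_sync_single_for_cpu(dev, dma_addr[i], bufsize, DMA_FROM_DEVICE);
    /* ... CPU reads the data ... */
    dma_sync_single_for_device(dev, dma_addr[i], bufsize, DMA_FROM_DEVICE);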

One could argue that, since dma_alloc_noncoherent() calls 
__get_free_pages() under the hood, I could request the larger chunk from 
dma_alloc_noncoherent() and cut it into smaller DMA buffers. My concern 
is that the dma_sync_single_*() functions expect a DMA handle, not a 
physical address I've made up with my own buffer splitting. I don't see 
any problem with this trick on the platforms I've worked with, but I'm not 
sure it's the proper way to go. dma_map_single(), on the other 
hand, returns a DMA handle.

The DMA pool functions could be interesting, but I understand that they 
would supply me with coherent memory only.

Anything I missed?

Thanks,
    Eli

On 04/06/14 17:14, Tejun Heo wrote:
> Hello,
>
> On Wed, Jun 04, 2014 at 04:12:11PM +0200, Joerg Roedel wrote:
>    
>> Yes, but those drivers usually get DMA buffers at init time with the
>> dma_alloc_* interfaces. The dma_map_* interfaces discussed here belong
>> to the streaming DMA-API, so they are usually used for only one DMA
>> transaction before dma_unmap_* is called on them.
>>
>> A devm interface for the dma_alloc_* family of functions would
>> actually make sense, but not for the dma_map_* functions.
>>      
> Ah, okay.  Fair enough.
>
> Thanks.
>
>    




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-04 15:03             ` Eli Billauer
@ 2014-06-04 21:25               ` Joerg Roedel
  2014-06-06 11:45                 ` Eli Billauer
  0 siblings, 1 reply; 17+ messages in thread
From: Joerg Roedel @ 2014-06-04 21:25 UTC (permalink / raw)
  To: Eli Billauer
  Cc: Tejun Heo, Shuah Khan, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

Hi,

On Wed, Jun 04, 2014 at 06:03:36PM +0300, Eli Billauer wrote:
> I believe that I need a managed dma_map_single() my own driver,
> which doesn't fall in the case of a single use: The driver allocates
> its buffers with __get_free_pages() (or the to-be managed version of
> it). Then it cuts the allocated memory into smaller buffers (in some
> cases, and with certain alignment rules), and  then calls
> dma_map_single() to do the DMA mapping for each. The buffers are
> held along the driver's lifetime, with DMA sync API juggling control
> back and forth to the hardware. Note that the DMA is noncoherent.

What you are trying to do should work with dma_alloc_noncoherent(). The
API allows partial syncs on this memory, so you should be fine.
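
Something like this (just a sketch, untested):

    buf = dma_alloc_noncoherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* Sync only the chunk the CPU actually touches */
    dma_sync_single_range_for_cpu(dev, dma_handle, offset, chunk,
                                  DMA_FROM_DEVICE);
    /* ... */
    dma_sync_single_range_for_device(dev, dma_handle, offset, chunk,
                                     DMA_FROM_DEVICE);

    /* On teardown: */
    dma_free_noncoherent(dev, size, buf, dma_handle);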

The problem with a devm variant of dma_map_* is that it is too easy to
misuse or to use it wrong so that a driver could eat up all available
DMA handles on some platforms.


	Joerg




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-04 21:25               ` Joerg Roedel
@ 2014-06-06 11:45                 ` Eli Billauer
  2014-06-06 16:01                   ` Greg KH
  0 siblings, 1 reply; 17+ messages in thread
From: Eli Billauer @ 2014-06-06 11:45 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Tejun Heo, Shuah Khan, devel, gregkh, bhelgaas, linux-kernel,
	linux-pci, iommu, discuss

Hello Joerg.


On 05/06/14 00:25, Joerg Roedel wrote:
>
> What you are trying to do should work with dma_alloc_noncoherent(). The
> API allows partial syncs on this memory, so you should be fine.
>    
Please try to put yourself in my position: I have a driver that I care 
about, which has worked fine for a few years. It's based upon 
dma_map_single(), which seems to be the common way to get non-coherent 
memory, even for the driver's entire lifespan. I realize that 
dma_alloc_* was the intended way to do it, but the fact is that dma_map_* 
has become the common choice.

Now I need to switch to dma_alloc_noncoherent(), which isn't used by 
many drivers, it seems. It should work the same, but there's always the 
worry whether I'll cover all the corners. So would I take the risk of a 
nasty DMA bug on some esoteric platform, just to cut some lines of code?

And if I choose to keep the unmanaged dma_map_single(), maybe I'll mess 
up if I convert other allocations to the managed API? Hmmm, maybe it's 
best to forget all about it.
> The problem with a devm variant of dma_map_* is that it is too easy to
> misuse or to use it wrong so that a driver could eat up all available
> DMA handles on some platforms.
>    
I suppose you mean that those who use dma_map_* for a one-off DMA 
session will drop the unmapping, thinking that it's "managed anyhow"...? 
Well, you could say that about any of the managed functions. For example, 
when devm_kzalloc() was introduced, someone may have argued that people 
would drop kfree()s where they shouldn't, causing memory leaks.

So I think it boils down to whether devres is a good idea or not. 
Someone who thinks it's bad will reject any new API, citing 
memory efficiency, additional causes of failure and the risk of 
misleading the herd.

But if devres is to become commonly used in the future, I think drop-in 
replacements are necessary. In my opinion, telling people to adopt 
another methodology (e.g. dma_alloc_noncoherent vs. mapping), even if 
functionally equivalent, is a good way to make sure devres is never adopted.

Regards,
    Eli
> 	Joerg
>
>
>    




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-06 11:45                 ` Eli Billauer
@ 2014-06-06 16:01                   ` Greg KH
  2014-06-06 16:21                     ` Eli Billauer
  0 siblings, 1 reply; 17+ messages in thread
From: Greg KH @ 2014-06-06 16:01 UTC (permalink / raw)
  To: Eli Billauer
  Cc: Joerg Roedel, devel, Shuah Khan, discuss, linux-kernel, bhelgaas,
	iommu, linux-pci, Tejun Heo

On Fri, Jun 06, 2014 at 02:45:06PM +0300, Eli Billauer wrote:
> Hello Joerg.
> 
> 
> On 05/06/14 00:25, Joerg Roedel wrote:
> >
> >What you are trying to do should work with dma_alloc_noncoherent(). The
> >API allows partial syncs on this memory, so you should be fine.
> Please try to put yourself in my position: I have a driver that I care
> about, which works fine for a few years. It's based upon dma_map_single(),
> which seems to be the common way to get non-coherent memory, even for the
> driver's entire lifespan. I realize that dma_alloc_* was the intended way to
> do it, but fact is that dma_map_* has become the common choice.

Is your driver in the kernel tree?  If not, you really are on your own :(

greg k-h


* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-06 16:01                   ` Greg KH
@ 2014-06-06 16:21                     ` Eli Billauer
  2014-06-06 17:02                       ` Shuah Khan
  0 siblings, 1 reply; 17+ messages in thread
From: Eli Billauer @ 2014-06-06 16:21 UTC (permalink / raw)
  To: Greg KH
  Cc: Joerg Roedel, devel, Shuah Khan, discuss, linux-kernel, bhelgaas,
	iommu, linux-pci, Tejun Heo

On 06/06/14 19:01, Greg KH wrote:
>> Please try to put yourself in my position: I have a driver that I care
>> >  about, which works fine for a few years. It's based upon dma_map_single(),
>> >  which seems to be the common way to get non-coherent memory, even for the
>> >  driver's entire lifespan. I realize that dma_alloc_* was the intended way to
>> >  do it, but fact is that dma_map_* has become the common choice.
>>
> Is your driver in the kernel tree?  If not, you really are on your own :(
>
It's the Xillybus driver in the staging area. I don't know if this 
counts as being in the kernel tree...

The suggested patchset would allow replacing my use of dma_map_single() 
with a managed version of that function. This would clean up the driver's 
code considerably.

But I think that the discussion here is whether it's valid to use 
dma_map_single() for a device-permanent DMA mapping, or whether 
dma_alloc_noncoherent() is the only way. If it's not valid, there's 
quite obviously no point in a devres version of that function.

Regards,
    Eli
> greg k-h
>
>




* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-06 16:21                     ` Eli Billauer
@ 2014-06-06 17:02                       ` Shuah Khan
  2014-06-07 11:23                         ` Eli Billauer
  0 siblings, 1 reply; 17+ messages in thread
From: Shuah Khan @ 2014-06-06 17:02 UTC (permalink / raw)
  To: Eli Billauer, Greg KH
  Cc: Joerg Roedel, devel, discuss, linux-kernel, bhelgaas, iommu,
	linux-pci, Tejun Heo, Shuah Khan

On 06/06/2014 10:21 AM, Eli Billauer wrote:
> On 06/06/14 19:01, Greg KH wrote:
>>> Please try to put yourself in my position: I have a driver that I care
>>> >  about, which works fine for a few years. It's based upon
>>> dma_map_single(),
>>> >  which seems to be the common way to get non-coherent memory, even
>>> for the
>>> >  driver's entire lifespan. I realize that dma_alloc_* was the
>>> intended way to
>>> >  do it, but fact is that dma_map_* has become the common choice.
>>>
>> Is your driver in the kernel tree?  If not, you really are on your own :(
>>
> It's the Xillybus driver in the staging area. I don't know if this
> counts for being in the kernel tree...
>
> The suggested patchset would allow replacing my use of dma_map_single()
> with a managed version of that function. This will clean the driver's
> code considerably.
>
> But I think that the discussion here is if it's valid to use
> dma_map_single() for a device-permanent DMA mapping, or if
> dma_alloc_noncoherent() is the only way. If the answer is no, there's
> quite obviously no point in a devres version for that function.
>

Eli,

dma_map_single() and dma_unmap_single() are streaming DMA APIs. These
are intended for one DMA transfer, with the buffer unmapped right after
it is done.

DMA buffers are limited resources for streaming that are shared by
several drivers. Hence the need for a use-and-release model.

Please refer to the Using Streaming DMA mappings in DMA-API-HOWTO.txt

"- Streaming DMA mappings which are usually mapped for one DMA
   transfer, unmapped right after it (unless you use dma_sync_* below)
   and for which hardware can optimize for sequential accesses.

   This of "streaming" as "asynchronous" or "outside the coherency
   domain".

   Good examples of what to use streaming mappings for are:

         - Networking buffers transmitted/received by a device.
         - Filesystem buffers written/read by a SCSI device."


If I understand your intended usage correctly, you are looking to
allocate and hold the buffers for the lifetime of the driver. For
such cases, dma_alloc_*() interfaces are the ones to use.

Please also refer to DMA-API.txt as well. Hope this helps.

-- Shuah


-- 
Shuah Khan
Senior Linux Kernel Developer - Open Source Group
Samsung Research America(Silicon Valley)
shuah.kh@samsung.com | (970) 672-0658


* Re: [PATCH v2 1/4] dma-mapping: Add devm_ interface for dma_map_single()
  2014-06-06 17:02                       ` Shuah Khan
@ 2014-06-07 11:23                         ` Eli Billauer
  0 siblings, 0 replies; 17+ messages in thread
From: Eli Billauer @ 2014-06-07 11:23 UTC (permalink / raw)
  To: shuah.kh
  Cc: Greg KH, Joerg Roedel, devel, discuss, linux-kernel, bhelgaas,
	iommu, linux-pci, Tejun Heo

Hello Shuah,


We agree that the streaming API was originally *intended* for short 
map-unmap DMA sessions, and that dma_alloc_noncoherent() was the 
*intended* API for those who want to hold the DMA mapping for a device's 
lifetime.

We also agree that on some platforms, DMA mappings are precious, and 
therefore any driver should unmap a region as soon as it's not needed 
anymore.

But if we stick to the citation you gave, it says "...unmapped right 
after it (unless you use dma_sync_* below)". So even in the streaming 
API's definition, there's an understanding that the "streaming" DMA 
buffer can be held for more than a single session. And a good sync tool 
for that is made available.

Cross-referencing the Linux kernel code, I get a strong impression that 
dma_alloc_NONcoherent() is pretty much unused (I counted 8 drivers). The 
streaming API's sync functions are heavily used, on the other hand. So 
one gets a hunch that there's a lot of use of the streaming API in the 
kernel tree for long-term DMA mappings.

This wasn't the original intention -- we agree on that. But why is it 
wrong? Assuming that a driver needs to hold a DMA mapping for a long 
while, why does it matter if it was done with dma_alloc_noncoherent() or 
with dma_map_*()? They are equally wasteful, aren't they?

Why maintain two API sets doing the same thing? Or is there a subtle 
functional difference I'm not aware of?

Thanks,
   Eli



On 06/06/14 20:02, Shuah Khan wrote:
>
> dma_map_single() and dma_unmap_single() are streaming DMA APIs. These
> are intended for one DMA transfer and unmapped right after it is done.
>
> dma buffers are limited shared resources for streaming that are
> shared by several drivers. Hence the need for use and release
> model.
>
> Please refer to the Using Streaming DMA mappings in DMA-API-HOWTO.txt
>
> "- Streaming DMA mappings which are usually mapped for one DMA
>   transfer, unmapped right after it (unless you use dma_sync_* below)
>   and for which hardware can optimize for sequential accesses.
>
>   This of "streaming" as "asynchronous" or "outside the coherency
>   domain".
>
>   Good examples of what to use streaming mappings for are:
>
>         - Networking buffers transmitted/received by a device.
>         - Filesystem buffers written/read by a SCSI device."
>
>
> If I understand your intended usage correctly, you are looking to
> allocate and hold the buffers for the lifetime of the driver. For
> such cases, dma_alloc_*() interfaces are the ones to use.
>
> Please also refer to DMA-API.txt as well. Hope this helps.
>
> -- Shuah
>
>


