* [PATCH] dmaengine: add dmanegine slave map api's
@ 2012-09-14  9:33 Vinod Koul
  2012-09-14  9:39 ` [RFC] " Vinod Koul
  2012-09-14  9:41 ` [PATCH] " Russell King - ARM Linux
  0 siblings, 2 replies; 13+ messages in thread
From: Vinod Koul @ 2012-09-14  9:33 UTC (permalink / raw)
  To: linux-kernel, Dan Williams
  Cc: Russell King, Arnd Bergmann, linus.walleij, Jon Hunter,
	Stephen Warren, Benoit Cousson, Shawn Guo, Guennadi Liakhovetski,
	Sascha Hauer, Kukjin Kim, viresh kumar, Paul Mundt, Vinod Koul

When allocating a channel, dmaengine finds the first channel that matches the
capability mask and calls the filter function.
In the slave dmaengine model a mapping already exists, either hardwired in the
SoC or through a configurable mux. So we typically need a channel from
controller X and, in some cases, a particular channel Y.

Add new APIs which allow adding a channel map for the client-DMA relationship.
This mapping needs to be registered with dmaengine by platform-specific code
which is aware of the mapping, and it needs to be added _before_ any client
tries to request a channel.

Then, in order for slave devices to get a channel based on the above mapping,
add new slave-specific dmaengine channel request/free APIs.

Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
---
this is the first attempt to do the slave DMA mapping; we have been discussing
this and we all agree on the need for it. This is the same patch which was
shown at KS and attempts to build the mapping information within dmaengine in
a platform- and arch-agnostic way.

The arch code (DT/board files/SFI etc.) can use these APIs to build the
dmaengine mapping, and dmaengine then filters channels based on this mapping.
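For instance, the intended usage might look roughly like this (untested
sketch; the controller/client names, request lines and the SPI context are
purely illustrative):

	/* platform code, registered before any client requests a channel */
	static struct dmaengine_slave_map board_dma_map[] = {
		{ .dma = "dma0", .client = "spi0", .ch = -1, .slave_id = 5 },
		{ .dma = "dma0", .client = "mmc0", .ch = 2,  .slave_id = 7 },
	};

	dmaengine_add_channel_map(board_dma_map, ARRAY_SIZE(board_dma_map));

	/* client side, e.g. in the spi0 driver's probe() */
	struct dma_slave_config cfg = {
		.direction	= DMA_MEM_TO_DEV,
		.dst_addr	= spi_fifo_phys,	/* illustrative */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	chan = dmaengine_request_slave_channel("spi0", &cfg, &mask, NULL, NULL);
	if (!chan)
		return -ENODEV;	/* or fall back to PIO */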

Now credits:
the original idea is from Linus Walleij and has been discussed in a similar
form before.
---
 drivers/dma/dmaengine.c   |  152 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dmaengine.h |   50 +++++++++++++++
 2 files changed, 202 insertions(+)

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 3491654..293dfd0 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -1066,6 +1066,158 @@ void dma_run_dependencies(struct dma_async_tx_descriptor *tx)
 }
 EXPORT_SYMBOL_GPL(dma_run_dependencies);
 
+/* slave channel mapping code */
+static LIST_HEAD(dma_slave_map);
+
+/* called with dma_list_mutex held */
+static struct dma_chan *dmaengine_get_slave_channel(char *requestor, dma_cap_mask_t *mask)
+{
+
+	struct dma_device *device, *_d;
+	struct dma_chan *chan = NULL;
+	int err;
+
+	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
+		if (strcmp(requestor, dev_name(device->dev)))
+			continue;
+
+		chan = private_candidate(mask, device, NULL, NULL);
+		if (chan) {
+			/* Found a suitable channel, try to grab, prep, and
+			 * return it.  We first set DMA_PRIVATE to disable
+			 * balance_ref_count as this channel will not be
+			 * published in the general-purpose allocator
+			 */
+			dma_cap_set(DMA_PRIVATE, device->cap_mask);
+			device->privatecnt++;
+			err = dma_chan_get(chan);
+
+			if (err == -ENODEV) {
+				pr_debug("%s: %s module removed\n",
+					 __func__, dma_chan_name(chan));
+				list_del_rcu(&device->global_node);
+			} else if (err)
+				pr_debug("%s: failed to get %s: (%d)\n",
+					 __func__, dma_chan_name(chan), err);
+			else
+				break;
+			if (--device->privatecnt == 0)
+				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
+			chan = NULL;
+		}
+	}
+	return chan;
+}
+
+/**
+ * dmaengine_request_slave_channel - request a slave channel for which dmaengine
+ * knows the channel mapping
+ *
+ * This is a generic API which should work on all archs and does not assume any
+ * arch-specific way of constructing the map. Arch code should call the map APIs
+ * to build the channel map first.
+ *
+ * @requestor: client name
+ * @config: dma_slave_config, applied after channel allocation
+ * @mask: capabilities that the channel must satisfy
+ * @fn: optional filter function, typically unused
+ * @fn_param: parameters for the filter function
+ */
+struct dma_chan *dmaengine_request_slave_channel(
+		char *requestor, struct dma_slave_config *config,
+		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
+{
+	struct dma_chan *slave = NULL;
+	struct dmaengine_slave_map_entries *entry, *_e;
+
+	/* perhaps we don't need the mask arg, it should be slave anyway */
+	if (!__dma_has_cap(DMA_SLAVE, mask))
+		return NULL;
+
+	mutex_lock(&dma_list_mutex);
+	/* find a channel which maps to this request */
+	list_for_each_entry_safe(entry, _e, &dma_slave_map, node) {
+		if (strcmp(requestor, entry->map->client))
+			continue;
+
+		/* have we already allocated this mapping */
+		if (entry->used == true)
+			continue;
+
+		/* now we hit an entry in map,
+		 * see if we can get a channel in controller */
+		slave = dmaengine_get_slave_channel(requestor, mask);
+		if (!slave)
+			continue;
+
+		/* check if the client is requesting a specific slave channel */
+		if ((entry->map->ch != -1) && (entry->map->ch != slave->chan_id))
+			continue;
+
+		/* now call filter fn but it should not be used anyway */
+		if (fn && !fn(slave, fn_param))
+			continue;
+
+		entry->used = true;
+		mutex_unlock(&dma_list_mutex);
+
+		/* pick slave id from map if not set */
+		if (!config->slave_id)
+			config->slave_id = entry->map->slave_id;
+
+		/* now let's set the slave params */
+		dmaengine_slave_config(slave, config);
+
+		return slave;
+	}
+	mutex_unlock(&dma_list_mutex);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(dmaengine_request_slave_channel);
+
+int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
+		unsigned int num_entries)
+{
+	int i = 0;
+	struct dmaengine_slave_map_entries *entry;
+
+	entry = kzalloc(sizeof(struct dmaengine_slave_map) * num_entries, GFP_KERNEL);
+	if (entry == NULL)
+		return -ENOMEM;
+	entry->num_entries = num_entries;
+	mutex_lock(&dma_list_mutex);
+	for (i = 0; i < num_entries; i++) {
+		entry->map = map;
+		entry->used = false;
+		entry->map->entry = entry;
+		list_add_tail(&entry->node, &dma_slave_map);
+		entry++;
+		map++;
+	}
+	mutex_unlock(&dma_list_mutex);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dmaengine_add_channel_map);
+
+void dmaengine_free_channel_map(struct dmaengine_slave_map *map)
+{
+	struct dmaengine_slave_map_entries *entry;
+	unsigned int i, num_entries;
+
+	BUG_ON(map);
+	mutex_lock(&dma_list_mutex);
+	entry = map->entry;
+	num_entries = entry->num_entries;
+	for (i = 0; i < num_entries; i++) {
+		list_del(&entry->node);
+		entry++;
+	}
+	kfree(entry);
+	mutex_unlock(&dma_list_mutex);
+	return;
+}
+EXPORT_SYMBOL_GPL(dmaengine_free_channel_map);
+
 static int __init dma_bus_init(void)
 {
 	return class_register(&dma_devclass);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index 9c02a45..54dd1d5 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -371,6 +371,35 @@ struct dma_slave_config {
 	unsigned int slave_id;
 };
 
+struct dmaengine_slave_map_entries;
+/* dmaengine slave channel map
+ *
+ * @dma: dma controller, mandatory
+ * @client: client associated with the controller, mandatory
+ * @ch: channel index, desired but optional
+ * @slave_id: slave id for programming the mux
+ *
+ * The platform needs to pass the channel mapping information to the
+ * dmaengine core through this structure.
+ * This is used by dmaengine to find the channel to be allocated during the
+ * dma_request_channel process for slave channels.
+ */
+struct dmaengine_slave_map {
+	char *dma;
+	char *client;
+	int ch;
+	int slave_id;
+	struct dmaengine_slave_map_entries *entry;
+};
+
+struct dmaengine_slave_map_entries {
+	struct dmaengine_slave_map *map;
+	bool used;
+	struct list_head node;
+	unsigned int num_entries;
+};
+
+
 static inline const char *dma_chan_name(struct dma_chan *chan)
 {
 	return dev_name(&chan->dev->device);
@@ -974,6 +1003,12 @@ enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
 void dma_issue_pending_all(void);
 struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
 void dma_release_channel(struct dma_chan *chan);
+struct dma_chan *dmaengine_request_slave_channel(
+		char *requestor, struct dma_slave_config *config,
+		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
+int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
+		unsigned int num_entries);
+void dmaengine_free_channel_map(struct dmaengine_slave_map *map);
 #else
 static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
 {
@@ -990,6 +1025,21 @@ static inline struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask,
 static inline void dma_release_channel(struct dma_chan *chan)
 {
 }
+struct dma_chan *dmaengine_request_slave_channel(
+		char *requestor, struct dma_slave_config *config,
+		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
+{
+	return NULL;
+}
+static inline int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
+		unsigned int num_entries)
+{
+	return -EIO;
+}
+void dmaengine_free_channel_map(struct dmaengine_slave_map *map)
+{
+	return;
+}
 #endif
 
 /* --- DMA device --- */
-- 
1.7.9.5



* Re: [RFC] dmaengine: add dmanegine slave map api's
  2012-09-14  9:33 [PATCH] dmaengine: add dmanegine slave map api's Vinod Koul
@ 2012-09-14  9:39 ` Vinod Koul
  2012-09-14  9:41 ` [PATCH] " Russell King - ARM Linux
  1 sibling, 0 replies; 13+ messages in thread
From: Vinod Koul @ 2012-09-14  9:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dan Williams, Russell King, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Fri, 2012-09-14 at 15:03 +0530, Vinod Koul wrote:

Sorry, this is an RFC, not a PATCH yet.

> when allocating a channel the dmaengine finds first channel that matches the
> mask and calls filter function
> In slave dmaengine model, there already exists a mapping, either hardwired in
> SoC or thru a configurable mux. So we typically need channel from controller X
> and in same cases a partcular channel Y.
> 
> Add new APIs which allow adding channel map for client-dma relationship.
> This mapping needs to be registered with dmaengine by platform specfic code
> which is aware of this mapping. This mapping needs to be added _before_ any
> client tries to access a channel.
> 
> Then in order for slave devices to get a channel based on above mapping, add new
> slave specfic dmanengine channel request/free APIs
> 
> Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
> ---
> this is the first attempt to do the slave dma mapping, we have been discussing
> this and we all agree to need for this. This is the same patch which was shown
> at KS and attempts to build the mapping information within dmaengine which is
> platform and arch agnostic.
> 
> The arch code like DT/board-files/SFI etc can use these APIs to build dmaengine
> mapping and then dmaengine filters channels based on this mapping.
> 
> Now Credits:
> the orginal idea was from Linus Walleij and has been discussed in similar
> format.
> ---
>  drivers/dma/dmaengine.c   |  152 +++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/dmaengine.h |   50 +++++++++++++++
>  2 files changed, 202 insertions(+)
> 
> diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
> index 3491654..293dfd0 100644
> --- a/drivers/dma/dmaengine.c
> +++ b/drivers/dma/dmaengine.c
> @@ -1066,6 +1066,158 @@ void dma_run_dependencies(struct dma_async_tx_descriptor *tx)
>  }
>  EXPORT_SYMBOL_GPL(dma_run_dependencies);
>  
> +/* slave channel mapping code*/
> +static LIST_HEAD(dma_slave_map);
> +
> +/*called under lock held */
> +static struct dma_chan *dmaengine_get_slave_channel(char *requestor, dma_cap_mask_t *mask)
> +{
> +
> +	struct dma_device *device, *_d;
> +	struct dma_chan *chan = NULL;
> +	int err;
> +
> +	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
> +		if (strcmp(requestor, dev_name(device->dev)))
> +			continue;
> +
> +		chan = private_candidate(mask, device, NULL, NULL);
> +		if (chan) {
> +			/* Found a suitable channel, try to grab, prep, and
> +			 * return it.  We first set DMA_PRIVATE to disable
> +			 * balance_ref_count as this channel will not be
> +			 * published in the general-purpose allocator
> +			 */
> +			dma_cap_set(DMA_PRIVATE, device->cap_mask);
> +			device->privatecnt++;
> +			err = dma_chan_get(chan);
> +
> +			if (err == -ENODEV) {
> +				pr_debug("%s: %s module removed\n",
> +					 __func__, dma_chan_name(chan));
> +				list_del_rcu(&device->global_node);
> +			} else if (err)
> +				pr_debug("%s: failed to get %s: (%d)\n",
> +					 __func__, dma_chan_name(chan), err);
> +			else
> +				break;
> +			if (--device->privatecnt == 0)
> +				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
> +			chan = NULL;
> +		}
> +	}
> +	return chan;
> +}
> +
> +/**
> + * dmaengine_request_slave_channel - request a slave channel for which damengine
> + * knows the channel mapping
> + *
> + * This is generic API which should work on all arch and doesnt assume any arch
> + * implementation of how map is constructed. Arch code should call map apis to
> + * build the channel map first
> + *
> + * @requestor: client name
> + * @config: dma_slave_config, this is set post channle allocation
> + * @mask: capabilities that the channel must satisfy
> + * @fn: optional filter function should be unused typically
> + * @fn_param: filter fn params
> + */
> +struct dma_chan *dmaengine_request_slave_channel(
> +		char *requestor, struct dma_slave_config *config,
> +		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
> +{
> +	struct dma_chan *slave = NULL;
> +	struct dmaengine_slave_map_entries *entry, *_e;
> +
> +	/* perhpas we dont need mask arg, it should be slave anyway */
> +	if (!__dma_has_cap(DMA_SLAVE, mask))
> +		return NULL;
> +
> +	mutex_lock(&dma_list_mutex);
> +	/* find a channel which maps to this request */
> +	list_for_each_entry_safe(entry, _e, &dma_slave_map, node) {
> +		if (strcmp(requestor, entry->map->client))
> +			continue;
> +
> +		/* have we already allocated this mapping */
> +		if (entry->used == true)
> +			continue;
> +
> +		/* now we hit an entry in map,
> +		 * see if we can get a channel in controller */
> +		slave = dmaengine_get_slave_channel(requestor, mask);
> +		if (!slave)
> +			continue;
> +
> +		/* check if client is requesting some specfic slave channel */
> +		if ((entry->map->ch != -1) && (entry->map->ch != slave->chan_id))
> +			continue;
> +
> +		/* now call filter fn but it should not be used anyway */
> +		if (fn && !fn(slave, fn_param))
> +			continue;
> +
> +		entry->used = true;
> +		mutex_unlock(&dma_list_mutex);
> +
> +		/* pick slave id from map if not set */
> +		if (!config->slave_id)
> +			config->slave_id = entry->map->slave_id;
> +
> +		/*now lets set slave params */
> +		dmaengine_slave_config(slave, config);
> +
> +		return slave;
> +	}
> +	mutex_unlock(&dma_list_mutex);
> +	return NULL;
> +}
> +EXPORT_SYMBOL_GPL(dmaengine_request_slave_channel);
> +
> +int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
> +		unsigned int num_entries)
> +{
> +	int i = 0;
> +	struct dmaengine_slave_map_entries *entry;
> +
> +	entry = kzalloc(sizeof(struct dmaengine_slave_map) * num_entries, GFP_KERNEL);
> +	if (entry == NULL)
> +		return -ENOMEM;
> +	entry->num_entries = num_entries;
> +	mutex_lock(&dma_list_mutex);
> +	for (i = 0; i < num_entries; i++) {
> +		entry->map = map;
> +		entry->used = false;
> +		entry->map->entry = entry;
> +		list_add_tail(&entry->node, &dma_slave_map);
> +		entry++;
> +		map++;
> +	}
> +	mutex_unlock(&dma_list_mutex);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(dmaengine_add_channel_map);
> +
> +void dmaengine_free_channel_map(struct dmaengine_slave_map *map)
> +{
> +	struct dmaengine_slave_map_entries *entry;
> +	unsigned int i, num_entries;
> +
> +	BUG_ON(map);
> +	mutex_lock(&dma_list_mutex);
> +	entry = map->entry;
> +	num_entries = entry->num_entries;
> +	for (i = 0; i < num_entries; i++) {
> +		list_del(&entry->node);
> +		entry++;
> +	}
> +	kfree(entry);
> +	mutex_unlock(&dma_list_mutex);
> +	return;
> +}
> +EXPORT_SYMBOL_GPL(dmaengine_free_channel_map);
> +
>  static int __init dma_bus_init(void)
>  {
>  	return class_register(&dma_devclass);
> diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
> index 9c02a45..54dd1d5 100644
> --- a/include/linux/dmaengine.h
> +++ b/include/linux/dmaengine.h
> @@ -371,6 +371,35 @@ struct dma_slave_config {
>  	unsigned int slave_id;
>  };
>  
> +struct dmaengine_slave_map_entries;
> +/* dmaengine slave channel map
> + *
> + * @dma: dma controller, mandatory
> + * @client: clinet associated with the controller, mandatory
> + * @ch: channel index, desired but optional
> + * @slave_id: slave id for programming mux
> + *
> + * the platform needs to tell the channel mapping information to the dmaengine
> + * code thru this structure.
> + * This is used by dmanengine to find the channel to be allocated during
> + * dma_rquest_channel process for slave channels
> + */
> +struct dmaengine_slave_map {
> +	char *dma;
> +	char *client;
> +	int ch;
> +	int slave_id;
> +	struct dmaengine_slave_map_entries *entry;
> +};
> +
> +struct dmaengine_slave_map_entries {
> +	struct dmaengine_slave_map *map;
> +	bool used;
> +	struct list_head node;
> +	unsigned int num_entries;
> +};
> +
> +
>  static inline const char *dma_chan_name(struct dma_chan *chan)
>  {
>  	return dev_name(&chan->dev->device);
> @@ -974,6 +1003,12 @@ enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx);
>  void dma_issue_pending_all(void);
>  struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
>  void dma_release_channel(struct dma_chan *chan);
> +struct dma_chan *dmaengine_request_slave_channel(
> +		char *requestor, struct dma_slave_config *config,
> +		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param);
> +int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
> +		unsigned int num_entries);
> +void dmaengine_free_channel_map(struct dmaengine_slave_map *map);
>  #else
>  static inline enum dma_status dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
>  {
> @@ -990,6 +1025,21 @@ static inline struct dma_chan *__dma_request_channel(dma_cap_mask_t *mask,
>  static inline void dma_release_channel(struct dma_chan *chan)
>  {
>  }
> +struct dma_chan *dmaengine_request_slave_channel(
> +		char *requestor, struct dma_slave_config *config,
> +		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
> +{
> +	return NULL;
> +}
> +static inline int dmaengine_add_channel_map(struct dmaengine_slave_map *map,
> +		unsigned int num_entries)
> +{
> +	return -EIO;
> +}
> +void dmaengine_free_channel_map(struct dmaengine_slave_map *map)
> +{
> +	return;
> +}
>  #endif
>  
>  /* --- DMA device --- */


-- 
~Vinod



* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-14  9:33 [PATCH] dmaengine: add dmanegine slave map api's Vinod Koul
  2012-09-14  9:39 ` [RFC] " Vinod Koul
@ 2012-09-14  9:41 ` Russell King - ARM Linux
  2012-09-14 10:51   ` Vinod Koul
  1 sibling, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2012-09-14  9:41 UTC (permalink / raw)
  To: Vinod Koul
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Fri, Sep 14, 2012 at 03:03:09PM +0530, Vinod Koul wrote:
> +/*called under lock held */
> +static struct dma_chan *dmaengine_get_slave_channel(char *requestor, dma_cap_mask_t *mask)
> +{
> +
> +	struct dma_device *device, *_d;
> +	struct dma_chan *chan = NULL;
> +	int err;
> +
> +	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
> +		if (strcmp(requestor, dev_name(device->dev)))
> +			continue;

So, this finds a DMA device where the struct device's name corresponds
with the name passed in as 'requestor' (note that requestor should be
const.)

> +
> +		chan = private_candidate(mask, device, NULL, NULL);
> +		if (chan) {

And this finds _any_ channel on the device.

> +			/* Found a suitable channel, try to grab, prep, and
> +			 * return it.  We first set DMA_PRIVATE to disable
> +			 * balance_ref_count as this channel will not be
> +			 * published in the general-purpose allocator
> +			 */
> +			dma_cap_set(DMA_PRIVATE, device->cap_mask);
> +			device->privatecnt++;
> +			err = dma_chan_get(chan);
> +
> +			if (err == -ENODEV) {
> +				pr_debug("%s: %s module removed\n",
> +					 __func__, dma_chan_name(chan));
> +				list_del_rcu(&device->global_node);
> +			} else if (err)
> +				pr_debug("%s: failed to get %s: (%d)\n",
> +					 __func__, dma_chan_name(chan), err);
> +			else
> +				break;
> +			if (--device->privatecnt == 0)
> +				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
> +			chan = NULL;
> +		}
> +	}
> +	return chan;

and returns it...

> +struct dma_chan *dmaengine_request_slave_channel(
> +		char *requestor, struct dma_slave_config *config,
> +		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
> +{
> +	struct dma_chan *slave = NULL;
> +	struct dmaengine_slave_map_entries *entry, *_e;
> +
> +	/* perhpas we dont need mask arg, it should be slave anyway */
> +	if (!__dma_has_cap(DMA_SLAVE, mask))
> +		return NULL;
> +
> +	mutex_lock(&dma_list_mutex);
> +	/* find a channel which maps to this request */
> +	list_for_each_entry_safe(entry, _e, &dma_slave_map, node) {
> +		if (strcmp(requestor, entry->map->client))
> +			continue;
> +
> +		/* have we already allocated this mapping */
> +		if (entry->used == true)
> +			continue;
> +
> +		/* now we hit an entry in map,
> +		 * see if we can get a channel in controller */
> +		slave = dmaengine_get_slave_channel(requestor, mask);
> +		if (!slave)
> +			continue;

Ok, so here we've got a channel - any old channel what so ever on the
device described by 'requestor'.  And we're holding a reference to it.

> +
> +		/* check if client is requesting some specfic slave channel */
> +		if ((entry->map->ch != -1) && (entry->map->ch != slave->chan_id))
> +			continue;

If the channel number isn't what we wanted, where do we drop this
reference?  Also note that this 'continue' causes us to move to the
next entry in the map list - whereas what may have happened is that
the channel on the DMA engine is free, but it simply wasn't the first
free channel.  I don't see how this can work sanely.
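
(Even just dropping the reference correctly would need something along these
lines before the continue - an untested sketch mirroring what
dma_release_channel() does - and that still leaves the flow problem above:)

		if (entry->map->ch != -1 && entry->map->ch != slave->chan_id) {
			/* not the channel the map asked for: put it back */
			dma_chan_put(slave);
			if (--slave->device->privatecnt == 0)
				dma_cap_clear(DMA_PRIVATE,
					      slave->device->cap_mask);
			slave = NULL;
			continue;
		}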

> +		/* now call filter fn but it should not be used anyway */
> +		if (fn && !fn(slave, fn_param))
> +			continue;
> +
> +		entry->used = true;
> +		mutex_unlock(&dma_list_mutex);
> +
> +		/* pick slave id from map if not set */
> +		if (!config->slave_id)
> +			config->slave_id = entry->map->slave_id;

So, with this level, we're talking about tying DMA channels to slave IDs.
What about the case where you may have a DMA engine with just two channels
but 16 request signals?  (as is the case with PL081).

We're still into having 'virtual' DMA channels in DMA engines, this just
adds an additional layer of complexity on top.

Maybe a better idea would be to pass the map entry down to the DMA engine
driver itself, and let it return the appropriate struct dma_chan?
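
(i.e. something like this hypothetical hook, names invented for illustration:)

	/* in struct dma_device */
	struct dma_chan *(*device_request_slave_chan)(struct dma_device *dev,
					struct dmaengine_slave_map *map);

	/* in the core, instead of private_candidate() picking any channel */
	if (device->device_request_slave_chan)
		chan = device->device_request_slave_chan(device, entry->map);
	else
		chan = private_candidate(mask, device, NULL, NULL);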


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-14  9:41 ` [PATCH] " Russell King - ARM Linux
@ 2012-09-14 10:51   ` Vinod Koul
  2012-09-14 11:18     ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Vinod Koul @ 2012-09-14 10:51 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Fri, 2012-09-14 at 10:41 +0100, Russell King - ARM Linux wrote:
> On Fri, Sep 14, 2012 at 03:03:09PM +0530, Vinod Koul wrote:
> > +/*called under lock held */
> > +static struct dma_chan *dmaengine_get_slave_channel(char *requestor, dma_cap_mask_t *mask)
> > +{
> > +
> > +	struct dma_device *device, *_d;
> > +	struct dma_chan *chan = NULL;
> > +	int err;
> > +
> > +	list_for_each_entry_safe(device, _d, &dma_device_list, global_node) {
> > +		if (strcmp(requestor, dev_name(device->dev)))
> > +			continue;
> 
> So, this finds a DMA device where the struct device's name corresponds
> with the name passed in as 'requestor' (note that requestor should be
> const.)
Yes, that is the idea for the search.

> 
> > +
> > +		chan = private_candidate(mask, device, NULL, NULL);
> > +		if (chan) {
> 
> And this finds _any_ channel on the device.
Initially yes, and then we further narrow down based on whether _any_ channel
will suffice or we need a specific channel (in cases where we have a mux and
not hardwired SoC wiring).
> 
> > +			/* Found a suitable channel, try to grab, prep, and
> > +			 * return it.  We first set DMA_PRIVATE to disable
> > +			 * balance_ref_count as this channel will not be
> > +			 * published in the general-purpose allocator
> > +			 */
> > +			dma_cap_set(DMA_PRIVATE, device->cap_mask);
> > +			device->privatecnt++;
> > +			err = dma_chan_get(chan);
> > +
> > +			if (err == -ENODEV) {
> > +				pr_debug("%s: %s module removed\n",
> > +					 __func__, dma_chan_name(chan));
> > +				list_del_rcu(&device->global_node);
> > +			} else if (err)
> > +				pr_debug("%s: failed to get %s: (%d)\n",
> > +					 __func__, dma_chan_name(chan), err);
> > +			else
> > +				break;
> > +			if (--device->privatecnt == 0)
> > +				dma_cap_clear(DMA_PRIVATE, device->cap_mask);
> > +			chan = NULL;
> > +		}
> > +	}
> > +	return chan;
> 
> and returns it...
> 
> > +struct dma_chan *dmaengine_request_slave_channel(
> > +		char *requestor, struct dma_slave_config *config,
> > +		dma_cap_mask_t *mask, dma_filter_fn fn, void *fn_param)
> > +{
> > +	struct dma_chan *slave = NULL;
> > +	struct dmaengine_slave_map_entries *entry, *_e;
> > +
> > +	/* perhpas we dont need mask arg, it should be slave anyway */
> > +	if (!__dma_has_cap(DMA_SLAVE, mask))
> > +		return NULL;
> > +
> > +	mutex_lock(&dma_list_mutex);
> > +	/* find a channel which maps to this request */
> > +	list_for_each_entry_safe(entry, _e, &dma_slave_map, node) {
> > +		if (strcmp(requestor, entry->map->client))
> > +			continue;
> > +
> > +		/* have we already allocated this mapping */
> > +		if (entry->used == true)
> > +			continue;
> > +
> > +		/* now we hit an entry in map,
> > +		 * see if we can get a channel in controller */
> > +		slave = dmaengine_get_slave_channel(requestor, mask);
> > +		if (!slave)
> > +			continue;
> 
> Ok, so here we've got a channel - any old channel what so ever on the
> device described by 'requestor'.  And we're holding a reference to it.
> 
> > +
> > +		/* check if client is requesting some specfic slave channel */
> > +		if ((entry->map->ch != -1) && (entry->map->ch != slave->chan_id))
> > +			continue;
> 
> If the channel number isn't what we wanted, where do we drop this
> reference?
Yes, that needs to be taken care of :)

>   Also note that this 'continue' causes us to move to the
> next entry in the map list - whereas what may have happened is that
> the channel on the DMA engine is free, but it simply wasn't the first
> free channel.  I don't see how this can work sanely.
But that is not the controller we are interested in.
Think of a platform where we have 3 DMA controllers and 3 peripherals:
SPI, MMC and I2S. Only the third DMAC can be used by the I2S controller, so
even though we have free channels in the first two controllers we are not
interested in those.
So if someone wants a specific controller they use this API.
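
(i.e. a map roughly like this, with made-up controller/client names:)

	static struct dmaengine_slave_map board_map[] = {
		{ .dma = "dma0", .client = "spi0", .ch = -1, .slave_id = 1 },
		{ .dma = "dma1", .client = "mmc0", .ch = -1, .slave_id = 2 },
		{ .dma = "dma2", .client = "i2s0", .ch = -1, .slave_id = 3 },
	};

so a request from i2s0 should only ever be satisfied from dma2, no matter
what is free on dma0 or dma1.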

> 
> > +		/* now call filter fn but it should not be used anyway */
> > +		if (fn && !fn(slave, fn_param))
> > +			continue;
> > +
> > +		entry->used = true;
> > +		mutex_unlock(&dma_list_mutex);
> > +
> > +		/* pick slave id from map if not set */
> > +		if (!config->slave_id)
> > +			config->slave_id = entry->map->slave_id;
> 
> So, with this level, we're talking about tying DMA channels to slave IDs.
> What about the case where you may have a DMA engine with just two channels
> but 16 request signals?  (as is the case with PL081).

The idea is that we also keep the slave_id in the map, use the map to find
out the slave_id, and pass it to the DMAC to configure.
In the above case, when the PL081 talks to, say, I2S, it would use request
line 3, so the map would carry that value.

> We're still into having 'virtual' DMA channels in DMA engines, this just
> adds an additional layer of complexity on top.
> Maybe a better idea would be to pass the map entry down to the DMA engine
> driver itself, and let it return the appropriate struct dma_chan?
In dmaengine today you find a channel and allocate it. So we are doing the
same thing here, just adding additional code to find the right channel.

Note that if you have a configurable mux you should not put a channel number
in the mapping info; you would allocate the first free channel in the
controller.

-- 
~Vinod



* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-14 10:51   ` Vinod Koul
@ 2012-09-14 11:18     ` Russell King - ARM Linux
  2012-09-17  3:40       ` Vinod Koul
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2012-09-14 11:18 UTC (permalink / raw)
  To: Vinod Koul
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Fri, Sep 14, 2012 at 04:21:08PM +0530, Vinod Koul wrote:
> On Fri, 2012-09-14 at 10:41 +0100, Russell King - ARM Linux wrote:
> > On Fri, Sep 14, 2012 at 03:03:09PM +0530, Vinod Koul wrote:
> > > +		/* now we hit an entry in map,
> > > +		 * see if we can get a channel in controller */
> > > +		slave = dmaengine_get_slave_channel(requestor, mask);
> > > +		if (!slave)
> > > +			continue;
> > 
> > Ok, so here we've got a channel - any old channel what so ever on the
> > device described by 'requestor'.  And we're holding a reference to it.
> > 
> > > +
> > > +		/* check if client is requesting some specfic slave channel */
> > > +		if ((entry->map->ch != -1) && (entry->map->ch != slave->chan_id))
> > > +			continue;
> > 
> > If the channel number isn't what we wanted, where do we drop this
> > reference?
> Yes that needs to be taken care :)
> 
> >   Also note that this 'continue' causes us to move to the
> > next entry in the map list - whereas what may have happened is that
> > the channel on the DMA engine is free, but it simply wasn't the first
> > free channel.  I don't see how this can work sanely.
> But that is not the controller we are interested in.
> Think of platform where we have 3 dma controllers and 3 peripherals,
> spi, mmc, i2s. Only third dmac can be used by i2s controller so even
> though we have free channel in first two controllers we are not
> interested in that.

What you're comparing here is not the DMA controller, but the DMA channel
number.  You're actually doing nothing to select a particular dma_device
(a DMA engine device driver may register more than one dma_device per
struct device.)

So, your comments do not tie up with the code you've shown - so I'm not
sure you actually understand the code you've published...

> > So, with this level, we're talking about tying DMA channels to slave IDs.
> > What about the case where you may have a DMA engine with just two channels
> > but 16 request signals?  (as is the case with PL081).
> 
> The idea is that we also keep the slave_id in the map. And use the map
> to find out slave_id and pass it to dmac to configure.
> In the above case for pl081 when it talks to say i2s it would use
> request line 3, so in map we would have this value in map.

I'm not saying take the slave_id out of the map.  I'm saying, let the
DMA engine driver itself figure out what dma_chan to return.

> > We're still into having 'virtual' DMA channels in DMA engines, this just
> > adds an additional layer of complexity on top.
> > Maybe a better idea would be to pass the map entry down to the DMA engine
> > driver itself, and let it return the appropriate struct dma_chan?
> 
> At dmaengine today you find a channel and allocate that. So we are doing
> same thing here, but just adding additional code to find the right
> channel required. 

No you're most certainly not.  You're finding the _first_ channel on a
DMA device, and matching its chan_id against slave_id (if slave_id is not
-1) and returning that.  If the slave ID doesn't match, you move to the
next DMA device.  This is insane.

It's also insane that you think that each DMA channel is equal.  It isn't.

At the moment, NAK against your patch, it can't work as it stands.


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-14 11:18     ` Russell King - ARM Linux
@ 2012-09-17  3:40       ` Vinod Koul
  2012-09-17  8:36         ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Vinod Koul @ 2012-09-17  3:40 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Fri, 2012-09-14 at 12:18 +0100, Russell King - ARM Linux wrote:
> I'm not saying take the slave_id out of the map.  I'm saying, let the
> DMA engine driver itself figure out what dma_chan to return. 
But won't that assume the DMA controller knows which channel to allocate?
And how would it know this information? This can be problematic for hardwired
muxes, but can easily be done for controllers which have a programmable mux.

-- 
~Vinod



* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-17  3:40       ` Vinod Koul
@ 2012-09-17  8:36         ` Russell King - ARM Linux
  2012-09-17  9:50           ` Vinod Koul
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2012-09-17  8:36 UTC (permalink / raw)
  To: Vinod Koul
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Mon, Sep 17, 2012 at 09:10:23AM +0530, Vinod Koul wrote:
> On Fri, 2012-09-14 at 12:18 +0100, Russell King - ARM Linux wrote:
> > I'm not saying take the slave_id out of the map.  I'm saying, let the
> > DMA engine driver itself figure out what dma_chan to return. 
> But wont that assume the dma controller knows which channel to allocate.
> And how would it know this information? This can be problematic for hard
> wired muxes, but can be easily done for controller which have
> programmable mux.

Well, as I have already said, at the moment you're returning the _first_
_free_ _channel_ on a DMA device, which almost certainly will always be
the wrong one.


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-17  8:36         ` Russell King - ARM Linux
@ 2012-09-17  9:50           ` Vinod Koul
  2012-09-17 21:57             ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Vinod Koul @ 2012-09-17  9:50 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Mon, 2012-09-17 at 09:36 +0100, Russell King - ARM Linux wrote:
> > > I'm not saying take the slave_id out of the map.  I'm saying, let the
> > > DMA engine driver itself figure out what dma_chan to return. 
> > But wont that assume the dma controller knows which channel to allocate.
> > And how would it know this information? This can be problematic for hard
> > wired muxes, but can be easily done for controller which have
> > programmable mux.
> 
> Well, as I have already said, at the moment you're returning the _first_
> _free_ _channel_ on a DMA device, which almost certainly will always be
> the wrong one. 
Yes, I overlooked that; the continue is wrong. It needs to move to the next
available channel. I have fixed it.

Now, on the question of whether we should let the dmaengine core select the
channel or let the DMA engine driver do that: I don't see how that helps for
hardwired muxes, where the DMA engine driver doesn't know anything about the
mapping.

For your OMAP DMA this won't matter, as I think you have a programmable mux,
so you may be able to use any channel with a correctly programmed mux, right?

-- 
~Vinod



* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-17  9:50           ` Vinod Koul
@ 2012-09-17 21:57             ` Russell King - ARM Linux
  2012-09-18  3:18               ` Vinod Koul
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2012-09-17 21:57 UTC (permalink / raw)
  To: Vinod Koul
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Mon, Sep 17, 2012 at 03:20:29PM +0530, Vinod Koul wrote:
> On Mon, 2012-09-17 at 09:36 +0100, Russell King - ARM Linux wrote:
> > > > I'm not saying take the slave_id out of the map.  I'm saying, let the
> > > > DMA engine driver itself figure out what dma_chan to return. 
> > > But wont that assume the dma controller knows which channel to allocate.
> > > And how would it know this information? This can be problematic for hard
> > > wired muxes, but can be easily done for controller which have
> > > programmable mux.
> > 
> > Well, as I have already said, at the moment you're returning the _first_
> > _free_ _channel_ on a DMA device, which almost certainly will always be
> > the wrong one. 
> Yes I overlooked, the continue is wrong. It needs to move to next
> available channel. I have fixed it.
> 
> Now on the question if we should allow dmaengine to select channel or
> let dma engine driver do that, I don't see how that helps for hard wired
> muxes where dma engine driver doesn't know anything of mapping.
> 
> For your OMAP dma this wont matter as I think you have a programmable
> mux so you maybe able to use any channel with rightly programmed mux,
> right?

Except that we expose one 'channel' per mux setting, so as far as DMA
engine goes, the mux number _is_ the channel number - which is the same
approach taken by the PL08x and sa11x0 DMA engine drivers.  It is the
only sane approach to dealing with N hardware channels vs >>N clients.


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-17 21:57             ` Russell King - ARM Linux
@ 2012-09-18  3:18               ` Vinod Koul
  2012-09-18 12:20                 ` Linus Walleij
  0 siblings, 1 reply; 13+ messages in thread
From: Vinod Koul @ 2012-09-18  3:18 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: linux-kernel, Dan Williams, Arnd Bergmann, linus.walleij,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt

On Mon, 2012-09-17 at 22:57 +0100, Russell King - ARM Linux wrote:
> On Mon, Sep 17, 2012 at 03:20:29PM +0530, Vinod Koul wrote:
> > On Mon, 2012-09-17 at 09:36 +0100, Russell King - ARM Linux wrote:
> > > > > I'm not saying take the slave_id out of the map.  I'm saying, let the
> > > > > DMA engine driver itself figure out what dma_chan to return. 
> > > > But wont that assume the dma controller knows which channel to allocate.
> > > > And how would it know this information? This can be problematic for hard
> > > > wired muxes, but can be easily done for controller which have
> > > > programmable mux.
> > > 
> > > Well, as I have already said, at the moment you're returning the _first_
> > > _free_ _channel_ on a DMA device, which almost certainly will always be
> > > the wrong one. 
> > Yes I overlooked, the continue is wrong. It needs to move to next
> > available channel. I have fixed it.
> > 
> > Now on the question if we should allow dmaengine to select channel or
> > let dma engine driver do that, I don't see how that helps for hard wired
> > muxes where dma engine driver doesn't know anything of mapping.
> > 
> > For your OMAP dma this wont matter as I think you have a programmable
> > mux so you maybe able to use any channel with rightly programmed mux,
> > right?
> 
> Except that we expose one 'channel' per mux setting, so as far as DMA
> engine goes, the mux number _is_ the channel number - which is the same
> approach taken by the PL08x and sa11x0 DMA engine drivers.  It is the
> only sane approach to dealing with N hardware channels vs >>N clients.
Sure, that makes things easy in the dmaengine code.

So we allocate the channel number given by the slave id (if set) or the first
free channel. This would also mean people need to update their drivers to use
virtual channels, which indeed is a good idea :)

-- 
~Vinod



* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-18  3:18               ` Vinod Koul
@ 2012-09-18 12:20                 ` Linus Walleij
  2012-09-18 20:52                   ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Linus Walleij @ 2012-09-18 12:20 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Russell King - ARM Linux, linux-kernel, Dan Williams,
	Arnd Bergmann, Jon Hunter, Stephen Warren, Benoit Cousson,
	Shawn Guo, Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim,
	viresh kumar, Paul Mundt, Per Forlin, Rabin VINCENT

On Tue, Sep 18, 2012 at 5:18 AM, Vinod Koul <vinod.koul@linux.intel.com> wrote:
> On Mon, 2012-09-17 at 22:57 +0100, Russell King - ARM Linux wrote:
>>
>> Except that we expose one 'channel' per mux setting, so as far as DMA
>> engine goes, the mux number _is_ the channel number - which is the same
>> approach taken by the PL08x and sa11x0 DMA engine drivers.  It is the
>> only sane approach to dealing with N hardware channels vs >>N clients.
> Sure that makes things easy in dmaengine code.
>
> So we allocate channel number given by slave id (if set) or first free
> channel. This would also mean people need to update their drivers to use
> virtual channel which indeed is a good idea :)

Yeah, then just to complicate the world, the DMA40 has "logical channels",
which are basically hardware-virtual channels multiplexed over physical
channels (so the hardware does some round-robin and queueing).

So we're actually a bit like:
clients = logical HW channels > physical channels
...

And since using logical channels has a performance impact we
have hacks in place to lock down a physical channel for a certain
client like MMC.

But we're probably the odd exception here so nevermind.
One day we may try ripping out the logical channel handling and using
Russell's virtual channel lib to run the show, as an experiment.

Yours,
Linus Walleij


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-18 12:20                 ` Linus Walleij
@ 2012-09-18 20:52                   ` Russell King - ARM Linux
  2012-09-18 21:04                     ` Linus Walleij
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2012-09-18 20:52 UTC (permalink / raw)
  To: Linus Walleij
  Cc: Vinod Koul, linux-kernel, Dan Williams, Arnd Bergmann,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt, Per Forlin, Rabin VINCENT

On Tue, Sep 18, 2012 at 02:20:53PM +0200, Linus Walleij wrote:
> But we're probably the odd exception here so nevermind.
> One day we may test to rip out the logical channel handling and
> use Russell's virtual channel lib to run the show as an experiment.

Note that I'm beginning to add support for the async_tx API stuff into
the virtual channel DMA support, so it can handle the inter-dependencies
between descriptors that async_tx needs.

I'm currently trying to get the design of this right, as the async_tx
needs yet-another-list of descriptors which have been submitted, may
have been processed and completed, but for whatever reason have not
been acknowledged.  Such descriptors can not be freed because the
async_tx API may hold a reference to them, and may dereference them
at any moment to check the dependency situation.

There's a clue in that paragraph about how the DMA engine TX descriptors
_should_ be handled.  "hold a reference" is the clue.  Or another way to
say it, a kref should be embedded in the structure, providing us with
proper reference counting - and descriptors should only be 'freed'
(whether that means actually freeing them or placing them into a free
list) when the last reference is dropped.  That's _much_ better to
understand than this DMA_CTRL_ACK business...
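
(A minimal sketch of that shape, purely illustrative and not tied to any
particular driver:)

	struct foo_desc {
		struct dma_async_tx_descriptor txd;
		struct kref kref;		/* from <linux/kref.h> */
		/* driver-specific fields ... */
	};

	static void foo_desc_release(struct kref *kref)
	{
		struct foo_desc *d = container_of(kref, struct foo_desc, kref);

		/* really free it, or put it back on the driver's free list */
		kfree(d);
	}

	/* async_tx (or a dependent descriptor) takes a reference ... */
	kref_get(&d->kref);
	/* ... and drops it when done, instead of juggling DMA_CTRL_ACK */
	kref_put(&d->kref, foo_desc_release);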

I think switching stuff over to that may simplify things, but it's
going to be absolute hell modifying all the existing DMA engine drivers,
many of which I have no way to test (and probably as Dan has left Intel,
we have a pile of totally untestable drivers now.)


* Re: [PATCH] dmaengine: add dmanegine slave map api's
  2012-09-18 20:52                   ` Russell King - ARM Linux
@ 2012-09-18 21:04                     ` Linus Walleij
  0 siblings, 0 replies; 13+ messages in thread
From: Linus Walleij @ 2012-09-18 21:04 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Vinod Koul, linux-kernel, Dan Williams, Arnd Bergmann,
	Jon Hunter, Stephen Warren, Benoit Cousson, Shawn Guo,
	Guennadi Liakhovetski, Sascha Hauer, Kukjin Kim, viresh kumar,
	Paul Mundt, Per Forlin, Rabin VINCENT

On Tue, Sep 18, 2012 at 10:52 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:

> There's a clue in that paragraph about how the DMA engine TX descriptors
> _should_ be handled.  "hold a reference" is the clue.  Or another way to
> say it, a kref should be embedded in the structure, providing us with
> proper reference counting - and descriptors should only be 'freed'
> (whether that means actually freeing them or placing them into a free
> list) when the last reference is dropped.  That's _much_ better to
> understand than this DMA_CTRL_ACK business...

This indeed sounds like a more robust approach by far.
Why didn't we do that from the beginning ...

Yours,
Linus Walleij

