* [PATCH 0/7] extract a simple dmaengine library from shdma.c
@ 2012-01-18 10:22 Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 1/7] dma: add a simple dma library Guennadi Liakhovetski
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro, linux-mmc,
	alsa-devel, linux-serial

This patch series has been triggered by a recent series by Shimoda-san
http://www.spinics.net/lists/linux-sh/index.html#09895
In a follow-up to that thread it has been decided to extract a common
library from the current shdma driver, to be re-used by other dmaengine
drivers. Primarily this library provides flexible transfer-descriptor
management for hardware that does not support scatter-gather lists
natively. In such cases the library can be used to split transfer
requests into smaller chunks, up to the size supported by the specific
hardware, enter them onto a linked list, track their completion and
call user callbacks. This is a first shot, aimed to be used by the
SUDMAC driver. Once it is confirmed that this version can be
conveniently used with it, I'll do some cosmetic clean-up, notably add /
improve documentation :-)
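
To illustrate the core idea, the chunking boils down to a loop like the
following (a minimal sketch only; the add_chunk() helper is hypothetical
and not part of this series):

	/* Illustrative only: split one request into hardware-sized chunks */
	size_t done = 0;

	while (done < len) {
		size_t chunk = min(len - done, schan->max_xfer_len);

		/* hypothetical helper: set up and queue one chunk */
		add_chunk(schan, src + done, dst + done, chunk);
		done += chunk;
	}

The real code additionally links the chunks into a chain, so that only
the completion of the last chunk invokes the user callback.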

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* [PATCH 1/7] dma: add a simple dma library
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 12:34   ` Paul Mundt
  2012-01-18 10:22 ` [PATCH 2/7] dma: shdma: prepare for simple DMA conversion Guennadi Liakhovetski
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro

This patch adds a library of functions that help to implement dmaengine
drivers for hardware unable to handle scatter-gather lists natively.
This first version of the library only supports memcpy and slave DMA
operations.
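
To give an idea of the intended use, a driver embeds the library objects
in its own structures and lets the library index them via the
embedded_desc() callback, roughly as below (a hedged sketch; the my_*
names are hypothetical):

	struct my_desc {
		struct dma_simple_desc simple_desc;	/* library part */
		struct my_hw_desc hw;			/* controller-specific part */
	};

	static struct dma_simple_desc *my_embedded_desc(void *buf, int i)
	{
		return &((struct my_desc *)buf)[i].simple_desc;
	}

with sdev->desc_size set to sizeof(struct my_desc), so that the library
can allocate the descriptor array itself.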

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/Kconfig        |    3 +
 drivers/dma/Makefile       |    1 +
 drivers/dma/dma-simple.c   |  841 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-simple.h |  114 ++++++
 4 files changed, 959 insertions(+), 0 deletions(-)
 create mode 100644 drivers/dma/dma-simple.c
 create mode 100644 include/linux/dma-simple.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index ab8f469..79093d9 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -149,6 +149,9 @@ config TXX9_DMAC
 	  Support the TXx9 SoC internal DMA controller.  This can be
 	  integrated in chips such as the Toshiba TX4927/38/39.
 
+config DMA_SIMPLE
+	tristate
+
 config SH_DMAE
 	tristate "Renesas SuperH DMAC support"
 	depends on (SUPERH && SH_DMA) || (ARM && ARCH_SHMOBILE)
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 30cf3b1..9968a6e 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -2,6 +2,7 @@ ccflags-$(CONFIG_DMADEVICES_DEBUG)  := -DDEBUG
 ccflags-$(CONFIG_DMADEVICES_VDEBUG) += -DVERBOSE_DEBUG
 
 obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
+obj-$(CONFIG_DMA_SIMPLE) += dma-simple.o
 obj-$(CONFIG_NET_DMA) += iovlock.o
 obj-$(CONFIG_INTEL_MID_DMAC) += intel_mid_dma.o
 obj-$(CONFIG_DMATEST) += dmatest.o
diff --git a/drivers/dma/dma-simple.c b/drivers/dma/dma-simple.c
new file mode 100644
index 0000000..92d65db
--- /dev/null
+++ b/drivers/dma/dma-simple.c
@@ -0,0 +1,841 @@
+/*
+ * Simple dmaengine driver library
+ *
+ * extracted from shdma.c
+ *
+ * Copyright (C) 2011-2012 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
+ * Copyright (C) 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
+ * Copyright (C) 2009 Renesas Solutions, Inc. All rights reserved.
+ * Copyright (C) 2007 Freescale Semiconductor, Inc. All rights reserved.
+ *
+ * This is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-simple.h>
+#include <linux/dmaengine.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+/* DMA descriptor control */
+enum simple_desc_status {
+	DESC_IDLE,
+	DESC_PREPARED,
+	DESC_SUBMITTED,
+	DESC_COMPLETED,	/* completed, have to call callback */
+	DESC_WAITING,	/* callback called, waiting for ack / re-submit */
+};
+
+#define NR_DESCS_PER_CHANNEL 32
+
+#define to_simple_chan(c) container_of(c, struct dma_simple_chan, dma_chan)
+#define to_simple_dev(d) container_of(d, struct dma_simple_dev, dma_dev)
+
+/*
+ * For slave DMA we assume that there is a finite number of DMA slaves in the
+ * system, and that each such slave can only use a finite number of channels.
+ * We use slave channel IDs to make sure that no such slave channel ID is
+ * allocated more than once.
+ */
+static unsigned int slave_num = 256;
+module_param(slave_num, uint, 0444);
+
+/* A bitmask with slave_num bits */
+static unsigned long *simple_slave_used;
+
+/* Called under spin_lock_irq(&schan->chan_lock) */
+static void simple_chan_xfer_ld_queue(struct dma_simple_chan *schan)
+{
+	struct dma_simple_dev *sdev = to_simple_dev(schan->dma_chan.device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	struct dma_simple_desc *sdesc;
+
+	/* DMA work check */
+	if (ops->channel_busy(schan))
+		return;
+
+	/* Find the first not yet transferred descriptor */
+	list_for_each_entry(sdesc, &schan->ld_queue, node)
+		if (sdesc->mark == DESC_SUBMITTED) {
+			ops->start_xfer(schan, sdesc);
+			break;
+		}
+}
+
+static dma_cookie_t simple_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct dma_simple_desc *chunk, *c, *desc =
+		container_of(tx, struct dma_simple_desc, async_tx),
+		*last = desc;
+	struct dma_simple_chan *schan = to_simple_chan(tx->chan);
+	struct dma_simple_slave *slave = tx->chan->private;
+	dma_async_tx_callback callback = tx->callback;
+	dma_cookie_t cookie;
+	bool power_up;
+
+	spin_lock_irq(&schan->chan_lock);
+
+	if (list_empty(&schan->ld_queue))
+		power_up = true;
+	else
+		power_up = false;
+
+	cookie = schan->dma_chan.cookie + 1;
+	if (cookie < 0)
+		cookie = 1;
+
+	schan->dma_chan.cookie = cookie;
+	tx->cookie = cookie;
+
+	/* Mark all chunks of this descriptor as submitted, move to the queue */
+	list_for_each_entry_safe(chunk, c, desc->node.prev, node) {
+		/*
+		 * All chunks are on the global ld_free, so, we have to find
+		 * the end of the chain ourselves
+		 */
+		if (chunk != desc && (chunk->mark == DESC_IDLE ||
+				      chunk->async_tx.cookie > 0 ||
+				      chunk->async_tx.cookie == -EBUSY ||
+				      &chunk->node == &schan->ld_free))
+			break;
+		chunk->mark = DESC_SUBMITTED;
+		/* Callback goes to the last chunk */
+		chunk->async_tx.callback = NULL;
+		chunk->cookie = cookie;
+		list_move_tail(&chunk->node, &schan->ld_queue);
+		last = chunk;
+
+		dev_dbg(schan->dev, "submit #%d@%p on %d\n",
+			tx->cookie, &last->async_tx, schan->id);
+	}
+
+	last->async_tx.callback = callback;
+	last->async_tx.callback_param = tx->callback_param;
+
+	if (power_up) {
+		int ret;
+		schan->pm_state = DMA_SIMPLE_PM_BUSY;
+
+		ret = pm_runtime_get(schan->dev);
+
+		spin_unlock_irq(&schan->chan_lock);
+		if (ret < 0)
+			dev_err(schan->dev, "%s(): GET = %d\n", __func__, ret);
+
+		pm_runtime_barrier(schan->dev);
+
+		spin_lock_irq(&schan->chan_lock);
+
+		/* Have we been reset while waiting? */
+		if (schan->pm_state != DMA_SIMPLE_PM_ESTABLISHED) {
+			struct dma_simple_dev *sdev =
+				to_simple_dev(schan->dma_chan.device);
+			const struct dma_simple_ops *ops = sdev->ops;
+			dev_dbg(schan->dev, "Bring up channel %d\n",
+				schan->id);
+			/*
+			 * TODO: .setup_xfer() might fail on some platforms.
+			 * Make it return int then, and on error remove the
+			 * chunks from the queue again
+			 */
+			ops->setup_xfer(schan, slave);
+
+			if (schan->pm_state == DMA_SIMPLE_PM_PENDING)
+				simple_chan_xfer_ld_queue(schan);
+			schan->pm_state = DMA_SIMPLE_PM_ESTABLISHED;
+		}
+	} else {
+		schan->pm_state = DMA_SIMPLE_PM_PENDING;
+	}
+
+	spin_unlock_irq(&schan->chan_lock);
+
+	return cookie;
+}
+
+/* Called with schan->chan_lock held */
+static struct dma_simple_desc *simple_get_desc(struct dma_simple_chan *schan)
+{
+	struct dma_simple_desc *sdesc;
+
+	list_for_each_entry(sdesc, &schan->ld_free, node)
+		if (sdesc->mark != DESC_PREPARED) {
+			BUG_ON(sdesc->mark != DESC_IDLE);
+			list_del(&sdesc->node);
+			return sdesc;
+		}
+
+	return NULL;
+}
+
+static int simple_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	struct dma_simple_dev *sdev = to_simple_dev(schan->dma_chan.device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	struct dma_simple_desc *desc;
+	struct dma_simple_slave *slave = chan->private;
+	int ret, i;
+
+	/*
+	 * This relies on the guarantee from dmaengine that alloc_chan_resources
+	 * never runs concurrently with itself or free_chan_resources.
+	 */
+	if (slave) {
+		if (test_and_set_bit(slave->slave_id, simple_slave_used)) {
+			ret = -EBUSY;
+			goto etestused;
+		}
+
+		ret = ops->set_slave(schan, slave);
+		if (ret < 0)
+			goto esetslave;
+	}
+
+	schan->desc = kcalloc(NR_DESCS_PER_CHANNEL,
+			      sdev->desc_size, GFP_KERNEL);
+	if (!schan->desc) {
+		ret = -ENOMEM;
+		goto edescalloc;
+	}
+	schan->desc_num = NR_DESCS_PER_CHANNEL;
+
+	for (i = 0; i < NR_DESCS_PER_CHANNEL; i++) {
+		desc = ops->embedded_desc(schan->desc, i);
+		dma_async_tx_descriptor_init(&desc->async_tx,
+					     &schan->dma_chan);
+		desc->async_tx.tx_submit = simple_tx_submit;
+		desc->mark = DESC_IDLE;
+
+		list_add(&desc->node, &schan->ld_free);
+	}
+
+	return NR_DESCS_PER_CHANNEL;
+
+edescalloc:
+	if (slave)
+esetslave:
+		clear_bit(slave->slave_id, simple_slave_used);
+etestused:
+	chan->private = NULL;
+	return ret;
+}
+
+static dma_async_tx_callback __ld_cleanup(struct dma_simple_chan *schan, bool all)
+{
+	struct dma_simple_desc *desc, *_desc;
+	/* Is the "exposed" head of a chain acked? */
+	bool head_acked = false;
+	dma_cookie_t cookie = 0;
+	dma_async_tx_callback callback = NULL;
+	void *param = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&schan->chan_lock, flags);
+	list_for_each_entry_safe(desc, _desc, &schan->ld_queue, node) {
+		struct dma_async_tx_descriptor *tx = &desc->async_tx;
+
+		BUG_ON(tx->cookie > 0 && tx->cookie != desc->cookie);
+		BUG_ON(desc->mark != DESC_SUBMITTED &&
+		       desc->mark != DESC_COMPLETED &&
+		       desc->mark != DESC_WAITING);
+
+		/*
+		 * queue is ordered, and we use this loop to (1) clean up all
+		 * completed descriptors, and to (2) update descriptor flags of
+		 * any chunks in a (partially) completed chain
+		 */
+		if (!all && desc->mark == DESC_SUBMITTED &&
+		    desc->cookie != cookie)
+			break;
+
+		if (tx->cookie > 0)
+			cookie = tx->cookie;
+
+		if (desc->mark == DESC_COMPLETED && desc->chunks == 1) {
+			if (schan->completed_cookie != desc->cookie - 1)
+				dev_dbg(schan->dev,
+					"Completing cookie %d, expected %d\n",
+					desc->cookie,
+					schan->completed_cookie + 1);
+			schan->completed_cookie = desc->cookie;
+		}
+
+		/* Call callback on the last chunk */
+		if (desc->mark == DESC_COMPLETED && tx->callback) {
+			desc->mark = DESC_WAITING;
+			callback = tx->callback;
+			param = tx->callback_param;
+			dev_dbg(schan->dev, "descriptor #%d@%p on %d callback\n",
+				tx->cookie, tx, schan->id);
+			BUG_ON(desc->chunks != 1);
+			break;
+		}
+
+		if (tx->cookie > 0 || tx->cookie == -EBUSY) {
+			if (desc->mark == DESC_COMPLETED) {
+				BUG_ON(tx->cookie < 0);
+				desc->mark = DESC_WAITING;
+			}
+			head_acked = async_tx_test_ack(tx);
+		} else {
+			switch (desc->mark) {
+			case DESC_COMPLETED:
+				desc->mark = DESC_WAITING;
+				/* Fall through */
+			case DESC_WAITING:
+				if (head_acked)
+					async_tx_ack(&desc->async_tx);
+			}
+		}
+
+		dev_dbg(schan->dev, "descriptor %p #%d completed.\n",
+			tx, tx->cookie);
+
+		if (((desc->mark == DESC_COMPLETED ||
+		      desc->mark == DESC_WAITING) &&
+		     async_tx_test_ack(&desc->async_tx)) || all) {
+			/* Remove from ld_queue list */
+			desc->mark = DESC_IDLE;
+
+			list_move(&desc->node, &schan->ld_free);
+
+			if (list_empty(&schan->ld_queue)) {
+				dev_dbg(schan->dev, "Bring down channel %d\n", schan->id);
+				pm_runtime_put(schan->dev);
+			}
+		}
+	}
+
+	if (all && !callback)
+		/*
+		 * Terminating and the loop completed normally: forgive
+		 * uncompleted cookies
+		 */
+		schan->completed_cookie = schan->dma_chan.cookie;
+
+	spin_unlock_irqrestore(&schan->chan_lock, flags);
+
+	if (callback)
+		callback(param);
+
+	return callback;
+}
+
+/*
+ * simple_chan_ld_cleanup - Clean up link descriptors
+ *
+ * Clean up the ld_queue of a DMA channel.
+ */
+static void simple_chan_ld_cleanup(struct dma_simple_chan *schan, bool all)
+{
+	while (__ld_cleanup(schan, all))
+		;
+}
+
+/*
+ * simple_free_chan_resources - Free all resources of the channel.
+ */
+static void simple_free_chan_resources(struct dma_chan *chan)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	struct dma_simple_dev *sdev = to_simple_dev(chan->device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	LIST_HEAD(list);
+
+	/* Protect against ISR */
+	spin_lock_irq(&schan->chan_lock);
+	ops->halt_channel(schan);
+	spin_unlock_irq(&schan->chan_lock);
+
+	/* Now no new interrupts will occur */
+
+	/* Prepared and not submitted descriptors can still be on the queue */
+	if (!list_empty(&schan->ld_queue))
+		simple_chan_ld_cleanup(schan, true);
+
+	if (chan->private) {
+		/* The caller is holding dma_list_mutex */
+		struct dma_simple_slave *slave = chan->private;
+		clear_bit(slave->slave_id, simple_slave_used);
+		chan->private = NULL;
+	}
+
+	spin_lock_irq(&schan->chan_lock);
+
+	list_splice_init(&schan->ld_free, &list);
+	schan->desc_num = 0;
+
+	spin_unlock_irq(&schan->chan_lock);
+
+	kfree(schan->desc);
+}
+
+/**
+ * simple_add_desc - get, set up and return one transfer descriptor
+ * @schan:	DMA channel
+ * @flags:	DMA transfer flags
+ * @dst:	destination DMA address, incremented when direction equals
+ *		DMA_FROM_DEVICE or DMA_BIDIRECTIONAL
+ * @src:	source DMA address, incremented when direction equals
+ *		DMA_TO_DEVICE or DMA_BIDIRECTIONAL
+ * @len:	DMA transfer length
+ * @first:	if NULL, set to the current descriptor and cookie set to -EBUSY
+ * @direction:	needed for slave DMA to decide which address to keep constant,
+ *		equals DMA_BIDIRECTIONAL for MEMCPY
+ * Returns the new descriptor or NULL on failure
+ * Locks: called with chan_lock held
+ */
+static struct dma_simple_desc *simple_add_desc(struct dma_simple_chan *schan,
+	unsigned long flags, dma_addr_t *dst, dma_addr_t *src, size_t *len,
+	struct dma_simple_desc **first, enum dma_data_direction direction)
+{
+	struct dma_simple_dev *sdev = to_simple_dev(schan->dma_chan.device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	struct dma_simple_desc *new;
+	size_t copy_size = *len;
+
+	if (!copy_size)
+		return NULL;
+
+	/* Allocate the link descriptor from the free list */
+	new = simple_get_desc(schan);
+	if (!new) {
+		dev_err(schan->dev, "No free link descriptor available\n");
+		return NULL;
+	}
+
+	ops->desc_setup(schan, new, *src, *dst, &copy_size);
+
+	if (!*first) {
+		/* First desc */
+		new->async_tx.cookie = -EBUSY;
+		*first = new;
+	} else {
+		/* Other desc - invisible to the user */
+		new->async_tx.cookie = -EINVAL;
+	}
+
+	dev_dbg(schan->dev,
+		"chaining (%u/%u)@%x -> %x with %p, cookie %d\n",
+		copy_size, *len, *src, *dst, &new->async_tx,
+		new->async_tx.cookie);
+
+	new->mark = DESC_PREPARED;
+	new->async_tx.flags = flags;
+	new->direction = direction;
+
+	*len -= copy_size;
+	if (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE)
+		*src += copy_size;
+	if (direction == DMA_BIDIRECTIONAL || direction == DMA_FROM_DEVICE)
+		*dst += copy_size;
+
+	return new;
+}
+
+/*
+ * simple_prep_sg - prepare transfer descriptors from an SG list
+ *
+ * Common routine for public (MEMCPY) and slave DMA. The MEMCPY case is also
+ * converted to scatter-gather to guarantee consistent locking and correct
+ * list manipulation. For slave DMA, direction carries the usual meaning and,
+ * logically, the SG list is RAM and the addr variable contains the slave
+ * address, e.g., the FIFO I/O register. For MEMCPY, direction equals
+ * DMA_BIDIRECTIONAL and the single-element SG list points at the source buffer.
+ */
+static struct dma_async_tx_descriptor *simple_prep_sg(struct dma_simple_chan *schan,
+	struct scatterlist *sgl, unsigned int sg_len, dma_addr_t *addr,
+	enum dma_data_direction direction, unsigned long flags)
+{
+	struct scatterlist *sg;
+	struct dma_simple_desc *first = NULL, *new = NULL /* compiler... */;
+	LIST_HEAD(tx_list);
+	int chunks = 0;
+	unsigned long irq_flags;
+	int i;
+
+	for_each_sg(sgl, sg, sg_len, i)
+		chunks += DIV_ROUND_UP(sg_dma_len(sg), schan->max_xfer_len);
+
+	/* Have to lock the whole loop to protect against concurrent release */
+	spin_lock_irqsave(&schan->chan_lock, irq_flags);
+
+	/*
+	 * Chaining:
+	 * first descriptor is what user is dealing with in all API calls, its
+	 *	cookie is at first set to -EBUSY, at tx-submit to a positive
+	 *	number
+	 * if more than one chunk is needed further chunks have cookie = -EINVAL
+	 * the last chunk, if not equal to the first, has cookie = -ENOSPC
+	 * all chunks are linked onto the tx_list head with their .node heads
+	 *	only during this function, then they are immediately spliced
+	 *	back onto the free list in form of a chain
+	 */
+	for_each_sg(sgl, sg, sg_len, i) {
+		dma_addr_t sg_addr = sg_dma_address(sg);
+		size_t len = sg_dma_len(sg);
+
+		if (!len)
+			goto err_get_desc;
+
+		do {
+			dev_dbg(schan->dev, "Add SG #%d@%p[%d], dma %llx\n",
+				i, sg, len, (unsigned long long)sg_addr);
+
+			if (direction == DMA_FROM_DEVICE)
+				new = simple_add_desc(schan, flags,
+						&sg_addr, addr, &len, &first,
+						direction);
+			else
+				new = simple_add_desc(schan, flags,
+						addr, &sg_addr, &len, &first,
+						direction);
+			if (!new)
+				goto err_get_desc;
+
+			new->chunks = chunks--;
+			list_add_tail(&new->node, &tx_list);
+		} while (len);
+	}
+
+	if (new != first)
+		new->async_tx.cookie = -ENOSPC;
+
+	/* Put them back on the free list, so they don't get lost */
+	list_splice_tail(&tx_list, &schan->ld_free);
+
+	spin_unlock_irqrestore(&schan->chan_lock, irq_flags);
+
+	return &first->async_tx;
+
+err_get_desc:
+	list_for_each_entry(new, &tx_list, node)
+		new->mark = DESC_IDLE;
+	list_splice(&tx_list, &schan->ld_free);
+
+	spin_unlock_irqrestore(&schan->chan_lock, irq_flags);
+
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *simple_prep_memcpy(
+	struct dma_chan *chan, dma_addr_t dma_dest, dma_addr_t dma_src,
+	size_t len, unsigned long flags)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	struct scatterlist sg;
+
+	if (!chan || !len)
+		return NULL;
+
+	BUG_ON(!schan->desc_num);
+
+	sg_init_table(&sg, 1);
+	sg_set_page(&sg, pfn_to_page(PFN_DOWN(dma_src)), len,
+		    offset_in_page(dma_src));
+	sg_dma_address(&sg) = dma_src;
+	sg_dma_len(&sg) = len;
+
+	return simple_prep_sg(schan, &sg, 1, &dma_dest,
+			      DMA_BIDIRECTIONAL, flags);
+}
+
+static struct dma_async_tx_descriptor *simple_prep_slave_sg(
+	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_data_direction direction, unsigned long flags)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	struct dma_simple_dev *sdev = to_simple_dev(schan->dma_chan.device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	struct dma_simple_slave *slave = chan->private;
+	dma_addr_t slave_addr;
+
+	if (!chan)
+		return NULL;
+
+	BUG_ON(!schan->desc_num);
+
+	/* Someone calling slave DMA on a generic channel? */
+	if (!slave || !sg_len) {
+		dev_warn(schan->dev, "%s: bad parameter: %p, %d, %d\n",
+			 __func__, slave, sg_len, slave ? slave->slave_id : -1);
+		return NULL;
+	}
+
+	slave_addr = ops->slave_addr(schan);
+
+	return simple_prep_sg(schan, sgl, sg_len, &slave_addr,
+			      direction, flags);
+}
+
+static int simple_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
+			  unsigned long arg)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	struct dma_simple_dev *sdev = to_simple_dev(chan->device);
+	const struct dma_simple_ops *ops = sdev->ops;
+	unsigned long flags;
+
+	/* Only supports DMA_TERMINATE_ALL */
+	if (cmd != DMA_TERMINATE_ALL)
+		return -ENXIO;
+
+	if (!chan)
+		return -EINVAL;
+
+	spin_lock_irqsave(&schan->chan_lock, flags);
+
+	ops->halt_channel(schan);
+	ops->clear_channel(schan);
+
+	spin_unlock_irqrestore(&schan->chan_lock, flags);
+
+	simple_chan_ld_cleanup(schan, true);
+
+	return 0;
+}
+
+static void simple_memcpy_issue_pending(struct dma_chan *chan)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+
+	spin_lock_irq(&schan->chan_lock);
+	if (schan->pm_state == DMA_SIMPLE_PM_ESTABLISHED)
+		simple_chan_xfer_ld_queue(schan);
+	else
+		schan->pm_state = DMA_SIMPLE_PM_PENDING;
+	spin_unlock_irq(&schan->chan_lock);
+}
+
+static enum dma_status simple_tx_status(struct dma_chan *chan,
+					dma_cookie_t cookie,
+					struct dma_tx_state *txstate)
+{
+	struct dma_simple_chan *schan = to_simple_chan(chan);
+	dma_cookie_t last_used;
+	dma_cookie_t last_complete;
+	enum dma_status status;
+	unsigned long flags;
+
+	simple_chan_ld_cleanup(schan, false);
+
+	/* First read completed cookie to avoid a skew */
+	last_complete = schan->completed_cookie;
+	rmb();
+	last_used = chan->cookie;
+	BUG_ON(last_complete < 0);
+	dma_set_tx_state(txstate, last_complete, last_used, 0);
+
+	spin_lock_irqsave(&schan->chan_lock, flags);
+
+	status = dma_async_is_complete(cookie, last_complete, last_used);
+
+	/*
+	 * If we don't find the cookie on the queue, it has been aborted and we
+	 * have to report an error
+	 */
+	if (status != DMA_SUCCESS) {
+		struct dma_simple_desc *sdesc;
+		status = DMA_ERROR;
+		list_for_each_entry(sdesc, &schan->ld_queue, node)
+			if (sdesc->cookie == cookie) {
+				status = DMA_IN_PROGRESS;
+				break;
+			}
+	}
+
+	spin_unlock_irqrestore(&schan->chan_lock, flags);
+
+	return status;
+}
+
+/* Called from error IRQ or NMI */
+bool dma_simple_reset(struct dma_simple_dev *sdev)
+{
+	const struct dma_simple_ops *ops = sdev->ops;
+	struct dma_simple_chan *schan;
+	unsigned int handled = 0;
+	int i;
+
+	/* Reset all channels */
+	dma_simple_for_each_chan(schan, sdev, i) {
+		struct dma_simple_desc *sdesc;
+		LIST_HEAD(dl);
+
+		if (!schan)
+			continue;
+
+		spin_lock(&schan->chan_lock);
+
+		/* Stop the channel */
+		ops->halt_channel(schan);
+
+		list_splice_init(&schan->ld_queue, &dl);
+
+		if (!list_empty(&dl)) {
+			dev_dbg(schan->dev, "Bring down channel %d\n", schan->id);
+			pm_runtime_put(schan->dev);
+		}
+		schan->pm_state = DMA_SIMPLE_PM_ESTABLISHED;
+
+		spin_unlock(&schan->chan_lock);
+
+		/* Complete all descriptors */
+		list_for_each_entry(sdesc, &dl, node) {
+			struct dma_async_tx_descriptor *tx = &sdesc->async_tx;
+			sdesc->mark = DESC_IDLE;
+			if (tx->callback)
+				tx->callback(tx->callback_param);
+		}
+
+		spin_lock(&schan->chan_lock);
+		list_splice(&dl, &schan->ld_free);
+		spin_unlock(&schan->chan_lock);
+
+		handled++;
+	}
+
+	return !!handled;
+}
+EXPORT_SYMBOL(dma_simple_reset);
+
+static void simple_do_tasklet(unsigned long data)
+{
+	struct dma_simple_chan *schan = (struct dma_simple_chan *)data;
+	const struct dma_simple_ops *ops =
+		to_simple_dev(schan->dma_chan.device)->ops;
+	struct dma_simple_desc *sdesc;
+
+	spin_lock_irq(&schan->chan_lock);
+	list_for_each_entry(sdesc, &schan->ld_queue, node) {
+		if (sdesc->mark == DESC_SUBMITTED &&
+		    ops->desc_completed(schan, sdesc)) {
+			dev_dbg(schan->dev, "done #%d@%p\n",
+				sdesc->async_tx.cookie, &sdesc->async_tx);
+			sdesc->mark = DESC_COMPLETED;
+			break;
+		}
+	}
+	/* Next desc */
+	simple_chan_xfer_ld_queue(schan);
+	spin_unlock_irq(&schan->chan_lock);
+
+	simple_chan_ld_cleanup(schan, false);
+}
+
+void __devinit dma_simple_chan_probe(struct dma_simple_dev *sdev,
+				     struct dma_simple_chan *schan, int id)
+{
+	schan->pm_state = DMA_SIMPLE_PM_ESTABLISHED;
+
+	/* reference struct dma_device */
+	schan->dma_chan.device = &sdev->dma_dev;
+
+	schan->dev = sdev->dma_dev.dev;
+	schan->id = id;
+
+	if (!schan->max_xfer_len)
+		schan->max_xfer_len = PAGE_SIZE;
+
+	/* Init DMA tasklet */
+	tasklet_init(&schan->tasklet, simple_do_tasklet, (unsigned long)schan);
+
+	spin_lock_init(&schan->chan_lock);
+
+	/* Init descriptor management lists */
+	INIT_LIST_HEAD(&schan->ld_queue);
+	INIT_LIST_HEAD(&schan->ld_free);
+
+	/* Add the channel to DMA device channel list */
+	list_add_tail(&schan->dma_chan.device_node,
+			&sdev->dma_dev.channels);
+	sdev->schan[sdev->dma_dev.chancnt++] = schan;
+}
+EXPORT_SYMBOL(dma_simple_chan_probe);
+
+void dma_simple_chan_remove(struct dma_simple_chan *schan)
+{
+	list_del(&schan->dma_chan.device_node);
+}
+EXPORT_SYMBOL(dma_simple_chan_remove);
+
+int __devinit dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
+			      int chan_num)
+{
+	struct dma_device *dma_dev = &sdev->dma_dev;
+
+	/*
+	 * Require all callbacks for now; they can trivially be made optional
+	 * later, as required
+	 */
+	if (!sdev->ops ||
+	    !sdev->desc_size ||
+	    !sdev->ops->embedded_desc ||
+	    !sdev->ops->start_xfer ||
+	    !sdev->ops->setup_xfer ||
+	    !sdev->ops->set_slave ||
+	    !sdev->ops->desc_setup ||
+	    !sdev->ops->slave_addr ||
+	    !sdev->ops->channel_busy ||
+	    !sdev->ops->clear_channel ||
+	    !sdev->ops->halt_channel ||
+	    !sdev->ops->desc_completed)
+		return -EINVAL;
+
+	sdev->schan = kcalloc(chan_num, sizeof(*sdev->schan), GFP_KERNEL);
+	if (!sdev->schan)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&dma_dev->channels);
+
+	/* Common and MEMCPY operations */
+	dma_dev->device_alloc_chan_resources
+		= simple_alloc_chan_resources;
+	dma_dev->device_free_chan_resources = simple_free_chan_resources;
+	dma_dev->device_prep_dma_memcpy = simple_prep_memcpy;
+	dma_dev->device_tx_status = simple_tx_status;
+	dma_dev->device_issue_pending = simple_memcpy_issue_pending;
+
+	/* Fields compulsory for DMA_SLAVE */
+	dma_dev->device_prep_slave_sg = simple_prep_slave_sg;
+	dma_dev->device_control = simple_control;
+
+	dma_dev->dev = dev;
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_simple_init);
+
+void __devexit dma_simple_cleanup(struct dma_simple_dev *sdev)
+{
+	kfree(sdev->schan);
+}
+EXPORT_SYMBOL(dma_simple_cleanup);
+
+static int __init dma_simple_enter(void)
+{
+	simple_slave_used = kzalloc(DIV_ROUND_UP(slave_num, BITS_PER_BYTE),
+				    GFP_KERNEL);
+	if (!simple_slave_used)
+		return -ENOMEM;
+	return 0;
+}
+module_init(dma_simple_enter);
+
+static void __exit dma_simple_exit(void)
+{
+	kfree(simple_slave_used);
+}
+module_exit(dma_simple_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Simple dmaengine driver library");
+MODULE_AUTHOR("Guennadi Liakhovetski <g.liakhovetski@gmx.de>");
diff --git a/include/linux/dma-simple.h b/include/linux/dma-simple.h
new file mode 100644
index 0000000..6091200
--- /dev/null
+++ b/include/linux/dma-simple.h
@@ -0,0 +1,114 @@
+/*
+ * Simple dmaengine driver library
+ *
+ * extracted from shdma.c and headers
+ *
+ * Copyright (C) 2011-2012 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
+ * Copyright (C) 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
+ * Copyright (C) 2009 Renesas Solutions, Inc. All rights reserved.
+ * Copyright (C) 2007 Freescale Semiconductor, Inc. All rights reserved.
+ *
+ * This is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef DMA_SIMPLE_H
+#define DMA_SIMPLE_H
+
+#include <linux/dma-direction.h>
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/types.h>
+
+enum dma_simple_pm_state {
+	DMA_SIMPLE_PM_ESTABLISHED,
+	DMA_SIMPLE_PM_BUSY,
+	DMA_SIMPLE_PM_PENDING,
+};
+
+struct device;
+
+/*
+ * Drivers using this library are expected to embed struct dma_simple_dev,
+ * struct dma_simple_chan, struct dma_simple_desc, and struct dma_simple_slave
+ * in their respective device, channel, descriptor and slave objects.
+ */
+
+struct dma_simple_slave {
+	unsigned int slave_id;
+};
+
+struct dma_simple_desc {
+	struct list_head node;
+	struct dma_async_tx_descriptor async_tx;
+	enum dma_data_direction direction;
+	dma_cookie_t cookie;
+	int chunks;
+	int mark;
+};
+
+struct dma_simple_chan {
+	dma_cookie_t completed_cookie;	/* The maximum cookie completed */
+	spinlock_t chan_lock;		/* Channel operation lock */
+	struct list_head ld_queue;	/* Link descriptors queue */
+	struct list_head ld_free;	/* Free link descriptors */
+	struct dma_chan dma_chan;	/* DMA channel */
+	struct device *dev;		/* Channel device */
+	struct tasklet_struct tasklet;	/* Complete / submit tasklet */
+	void *desc;			/* buffer for descriptor array */
+	int desc_num;			/* desc count */
+	size_t max_xfer_len;		/* max transfer length */
+	int id;				/* Raw id of this channel */
+	enum dma_simple_pm_state pm_state;
+};
+
+struct dma_simple_ops {
+	bool (*desc_completed)(struct dma_simple_chan *, struct dma_simple_desc *);
+	void (*halt_channel)(struct dma_simple_chan *);
+	void (*clear_channel)(struct dma_simple_chan *);
+	bool (*channel_busy)(struct dma_simple_chan *);
+	dma_addr_t (*slave_addr)(struct dma_simple_chan *);
+	int (*desc_setup)(struct dma_simple_chan *, struct dma_simple_desc *,
+			  dma_addr_t, dma_addr_t, size_t *);
+	int (*set_slave)(struct dma_simple_chan *, struct dma_simple_slave *);
+	void (*setup_xfer)(struct dma_simple_chan *, struct dma_simple_slave *);
+	void (*start_xfer)(struct dma_simple_chan *, struct dma_simple_desc *);
+	struct dma_simple_desc *(*embedded_desc)(void *, int);
+};
+
+struct dma_simple_dev {
+	struct dma_device dma_dev;
+	struct dma_simple_chan **schan;
+	const struct dma_simple_ops *ops;
+	size_t desc_size;
+};
+
+#define dma_simple_for_each_chan(c, d, i) for (i = 0, c = (d)->schan[0]; \
+				i < (d)->dma_dev.chancnt; c = (d)->schan[++i])
+
+static inline void dma_simple_lock(struct dma_simple_chan *schan)
+{
+	spin_lock(&schan->chan_lock);
+}
+
+static inline void dma_simple_unlock(struct dma_simple_chan *schan)
+{
+	spin_unlock(&schan->chan_lock);
+}
+
+static inline void dma_simple_reload(struct dma_simple_chan *schan)
+{
+	tasklet_schedule(&schan->tasklet);
+}
+
+bool dma_simple_reset(struct dma_simple_dev *sdev);
+void dma_simple_chan_probe(struct dma_simple_dev *sdev,
+			struct dma_simple_chan *schan, int id) __devinit;
+void dma_simple_chan_remove(struct dma_simple_chan *schan);
+int dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
+		    int chan_num) __devinit;
+void dma_simple_cleanup(struct dma_simple_dev *sdev) __devexit;
+
+#endif
-- 
1.7.2.5


* [PATCH 2/7] dma: shdma: prepare for simple DMA conversion
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 1/7] dma: add a simple dma library Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 3/7] mmc: sh_mmcif: remove unneeded struct sh_mmcif_dma, prepare for simple DMA Guennadi Liakhovetski
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro

By placing an anonymous union at the top of struct sh_dmae_slave we can
transparently prepare all drivers for the upcoming simple DMA conversion.
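
For illustration (not part of the patch): with the anonymous union in
place both spellings below name the same storage, so existing users keep
compiling unchanged, while new code can hand the embedded struct
dma_simple_slave to the library:

	struct sh_dmae_slave slave;

	slave.slave_id = 1;			/* legacy access */
	slave.simple_slave.slave_id = 1;	/* equivalent, via the union */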

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 include/linux/sh_dma.h |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/sh_dma.h b/include/linux/sh_dma.h
index db637b9..1b14cf4 100644
--- a/include/linux/sh_dma.h
+++ b/include/linux/sh_dma.h
@@ -10,12 +10,16 @@
 #ifndef SH_DMA_H
 #define SH_DMA_H
 
-#include <linux/list.h>
+#include <linux/dma-simple.h>
 #include <linux/dmaengine.h>
+#include <linux/list.h>
 
 /* Used by slave DMA clients to request DMA to/from a specific peripheral */
 struct sh_dmae_slave {
-	unsigned int			slave_id; /* Set by the platform */
+	union {
+		unsigned int		slave_id; /* Set by the platform */
+		struct dma_simple_slave	simple_slave;
+	};
 	struct device			*dma_dev; /* Set by the platform */
 	const struct sh_dmae_slave_config	*config;  /* Set by the driver */
 };
-- 
1.7.2.5


* [PATCH 3/7] mmc: sh_mmcif: remove unneeded struct sh_mmcif_dma, prepare for simple DMA
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 1/7] dma: add a simple dma library Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 2/7] dma: shdma: prepare for simple DMA conversion Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 4/7] mmc: sh_mobile_sdhi: prepare for conversion to " Guennadi Liakhovetski
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro, linux-mmc

Now that all users have been updated to use the DMA slave IDs embedded
in struct sh_mmcif_plat_data, struct sh_mmcif_dma is no longer needed
and can be removed. This also makes the preparation for the simple DMA
conversion easier.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---

Note that this patch also requires the recent
http://article.gmane.org/gmane.linux.ports.sh.devel/13337
to be applied first to avoid breakage.

 drivers/mmc/host/sh_mmcif.c  |   24 ++++++++++--------------
 include/linux/mmc/sh_mmcif.h |    8 +-------
 2 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
index c4bfbb0..497e6f4 100644
--- a/drivers/mmc/host/sh_mmcif.c
+++ b/drivers/mmc/host/sh_mmcif.c
@@ -383,31 +383,27 @@ static void sh_mmcif_request_dma(struct sh_mmcif_host *host,
 	host->dma_active = false;
 
 	/* We can only either use DMA for both Tx and Rx or not use it at all */
-	if (pdata->dma) {
-		dev_warn(&host->pd->dev,
-			 "Update your platform to use embedded DMA slave IDs\n");
-		tx = &pdata->dma->chan_priv_tx;
-		rx = &pdata->dma->chan_priv_rx;
-	} else {
-		tx = &host->dma_slave_tx;
-		tx->slave_id = pdata->slave_id_tx;
-		rx = &host->dma_slave_rx;
-		rx->slave_id = pdata->slave_id_rx;
-	}
-	if (tx->slave_id > 0 && rx->slave_id > 0) {
+	tx = &host->dma_slave_tx;
+	tx->simple_slave.slave_id = pdata->slave_id_tx;
+	rx = &host->dma_slave_rx;
+	rx->simple_slave.slave_id = pdata->slave_id_rx;
+
+	if (tx->simple_slave.slave_id > 0 && rx->simple_slave.slave_id > 0) {
 		dma_cap_mask_t mask;
 
 		dma_cap_zero(mask);
 		dma_cap_set(DMA_SLAVE, mask);
 
-		host->chan_tx = dma_request_channel(mask, sh_mmcif_filter, tx);
+		host->chan_tx = dma_request_channel(mask, sh_mmcif_filter,
+						    &tx->simple_slave);
 		dev_dbg(&host->pd->dev, "%s: TX: got channel %p\n", __func__,
 			host->chan_tx);
 
 		if (!host->chan_tx)
 			return;
 
-		host->chan_rx = dma_request_channel(mask, sh_mmcif_filter, rx);
+		host->chan_rx = dma_request_channel(mask, sh_mmcif_filter,
+						    &rx->simple_slave);
 		dev_dbg(&host->pd->dev, "%s: RX: got channel %p\n", __func__,
 			host->chan_rx);
 
diff --git a/include/linux/mmc/sh_mmcif.h b/include/linux/mmc/sh_mmcif.h
index 04ff452..b36caa9 100644
--- a/include/linux/mmc/sh_mmcif.h
+++ b/include/linux/mmc/sh_mmcif.h
@@ -32,17 +32,11 @@
  * 1111 : Peripheral clock (sup_pclk set '1')
  */
 
-struct sh_mmcif_dma {
-	struct sh_dmae_slave chan_priv_tx;
-	struct sh_dmae_slave chan_priv_rx;
-};
-
 struct sh_mmcif_plat_data {
 	void (*set_pwr)(struct platform_device *pdev, int state);
 	void (*down_pwr)(struct platform_device *pdev);
 	int (*get_cd)(struct platform_device *pdef);
-	struct sh_mmcif_dma	*dma;		/* Deprecated. Instead */
-	unsigned int		slave_id_tx;	/* use embedded slave_id_[tr]x */
+	unsigned int		slave_id_tx;	/* embedded slave_id_[tr]x */
 	unsigned int		slave_id_rx;
 	u8			sup_pclk;	/* 1 :SH7757, 0: SH7724/SH7372 */
 	unsigned long		caps;
-- 
1.7.2.5


* [PATCH 4/7] mmc: sh_mobile_sdhi: prepare for conversion to simple DMA
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
                   ` (2 preceding siblings ...)
  2012-01-18 10:22 ` [PATCH 3/7] mmc: sh_mmcif: remove unneeded struct sh_mmcif_dma, prepare for simple DMA Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 5/7] serial: sh-sci: " Guennadi Liakhovetski
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro, linux-mmc

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/mmc/host/sh_mobile_sdhi.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/sh_mobile_sdhi.c b/drivers/mmc/host/sh_mobile_sdhi.c
index bf45359..1f6f67a 100644
--- a/drivers/mmc/host/sh_mobile_sdhi.c
+++ b/drivers/mmc/host/sh_mobile_sdhi.c
@@ -142,10 +142,10 @@ static int __devinit sh_mobile_sdhi_probe(struct platform_device *pdev)
 		mmc_data->cd_flags = p->cd_flags;
 
 		if (p->dma_slave_tx > 0 && p->dma_slave_rx > 0) {
-			priv->param_tx.slave_id = p->dma_slave_tx;
-			priv->param_rx.slave_id = p->dma_slave_rx;
-			priv->dma_priv.chan_priv_tx = &priv->param_tx;
-			priv->dma_priv.chan_priv_rx = &priv->param_rx;
+			priv->param_tx.simple_slave.slave_id = p->dma_slave_tx;
+			priv->param_rx.simple_slave.slave_id = p->dma_slave_rx;
+			priv->dma_priv.chan_priv_tx = &priv->param_tx.simple_slave;
+			priv->dma_priv.chan_priv_rx = &priv->param_rx.simple_slave;
 			priv->dma_priv.alignment_shift = 1; /* 2-byte alignment */
 			mmc_data->dma = &priv->dma_priv;
 		}
-- 
1.7.2.5


* [PATCH 5/7] serial: sh-sci: prepare for conversion to simple DMA
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
                   ` (3 preceding siblings ...)
  2012-01-18 10:22 ` [PATCH 4/7] mmc: sh_mobile_sdhi: prepare for conversion to " Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 6/7] ASoC: SIU: " Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 7/7] dma: shdma: convert to the simple DMA library Guennadi Liakhovetski
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro, linux-serial

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/tty/serial/sh-sci.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index aff9d61..2a8ffda 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1463,9 +1463,9 @@ static bool filter(struct dma_chan *chan, void *slave)
 	struct sh_dmae_slave *param = slave;
 
 	dev_dbg(chan->device->dev, "%s: slave ID %d\n", __func__,
-		param->slave_id);
+		param->simple_slave.slave_id);
 
-	chan->private = param;
+	chan->private = &param->simple_slave;
 	return true;
 }
 
@@ -1504,7 +1504,7 @@ static void sci_request_dma(struct uart_port *port)
 	param = &s->param_tx;
 
 	/* Slave ID, e.g., SHDMA_SLAVE_SCIF0_TX */
-	param->slave_id = s->cfg->dma_slave_tx;
+	param->simple_slave.slave_id = s->cfg->dma_slave_tx;
 
 	s->cookie_tx = -EINVAL;
 	chan = dma_request_channel(mask, filter, param);
@@ -1532,7 +1532,7 @@ static void sci_request_dma(struct uart_port *port)
 	param = &s->param_rx;
 
 	/* Slave ID, e.g., SHDMA_SLAVE_SCIF0_RX */
-	param->slave_id = s->cfg->dma_slave_rx;
+	param->simple_slave.slave_id = s->cfg->dma_slave_rx;
 
 	chan = dma_request_channel(mask, filter, param);
 	dev_dbg(port->dev, "%s: RX: got channel %p\n", __func__, chan);
-- 
1.7.2.5


* [PATCH 6/7] ASoC: SIU: prepare for conversion to simple DMA
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
                   ` (4 preceding siblings ...)
  2012-01-18 10:22 ` [PATCH 5/7] serial: sh-sci: " Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  2012-01-18 10:22 ` [PATCH 7/7] dma: shdma: convert to the simple DMA library Guennadi Liakhovetski
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro, alsa-devel

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 sound/soc/sh/siu_pcm.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/sound/soc/sh/siu_pcm.c b/sound/soc/sh/siu_pcm.c
index f8f6816..6390fec 100644
--- a/sound/soc/sh/siu_pcm.c
+++ b/sound/soc/sh/siu_pcm.c
@@ -330,12 +330,12 @@ static bool filter(struct dma_chan *chan, void *slave)
 {
 	struct sh_dmae_slave *param = slave;
 
-	pr_debug("%s: slave ID %d\n", __func__, param->slave_id);
+	pr_debug("%s: slave ID %d\n", __func__, param->simple_slave.slave_id);
 
 	if (unlikely(param->dma_dev != chan->device->dev))
 		return false;
 
-	chan->private = param;
+	chan->private = &param->simple_slave;
 	return true;
 }
 
@@ -360,12 +360,12 @@ static int siu_pcm_open(struct snd_pcm_substream *ss)
 	if (ss->stream == SNDRV_PCM_STREAM_PLAYBACK) {
 		siu_stream = &port_info->playback;
 		param = &siu_stream->param;
-		param->slave_id = port ? pdata->dma_slave_tx_b :
+		param->simple_slave.slave_id = port ? pdata->dma_slave_tx_b :
 			pdata->dma_slave_tx_a;
 	} else {
 		siu_stream = &port_info->capture;
 		param = &siu_stream->param;
-		param->slave_id = port ? pdata->dma_slave_rx_b :
+		param->simple_slave.slave_id = port ? pdata->dma_slave_rx_b :
 			pdata->dma_slave_rx_a;
 	}
 
-- 
1.7.2.5


* [PATCH 7/7] dma: shdma: convert to the simple DMA library
  2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
                   ` (5 preceding siblings ...)
  2012-01-18 10:22 ` [PATCH 6/7] ASoC: SIU: " Guennadi Liakhovetski
@ 2012-01-18 10:22 ` Guennadi Liakhovetski
  6 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 10:22 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro

The simple DMA library was originally extracted from the shdma driver,
which can now be converted to actually use it.

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
---
 drivers/dma/Kconfig    |    1 +
 drivers/dma/shdma.c    | 1126 ++++++++++++------------------------------------
 drivers/dma/shdma.h    |   44 +-
 include/linux/sh_dma.h |   39 +--
 4 files changed, 311 insertions(+), 899 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 79093d9..614b549 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -157,6 +157,7 @@ config SH_DMAE
 	depends on (SUPERH && SH_DMA) || (ARM && ARCH_SHMOBILE)
 	depends on !SH_DMA_API
 	select DMA_ENGINE
+	select DMA_SIMPLE
 	help
 	  Enable support for the Renesas SuperH DMA controllers.
 
diff --git a/drivers/dma/shdma.c b/drivers/dma/shdma.c
index e4ed4da..08b390e 100644
--- a/drivers/dma/shdma.c
+++ b/drivers/dma/shdma.c
@@ -3,6 +3,7 @@
  *
  * base is drivers/dma/flsdma.c
  *
+ * Copyright (C) 2011-2012 Guennadi Liakhovetski <g.liakhovetski@gmx.de>
  * Copyright (C) 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
  * Copyright (C) 2009 Renesas Solutions, Inc. All rights reserved.
  * Copyright (C) 2007 Freescale Semiconductor, Inc. All rights reserved.
@@ -33,18 +34,12 @@
 #include <linux/rculist.h>
 #include "shdma.h"
 
-/* DMA descriptor control */
-enum sh_dmae_desc_status {
-	DESC_IDLE,
-	DESC_PREPARED,
-	DESC_SUBMITTED,
-	DESC_COMPLETED,	/* completed, have to call callback */
-	DESC_WAITING,	/* callback called, waiting for ack / re-submit */
-};
+#define SH_DMAE_DRV_NAME "sh-dma-engine"
 
-#define NR_DESCS_PER_CHANNEL 32
 /* Default MEMCPY transfer size = 2^2 = 4 bytes */
 #define LOG2_DEFAULT_XFER_SIZE	2
+#define SH_DMA_SLAVE_NUMBER 256
+#define SH_DMA_TCR_MAX (16 * 1024 * 1024 - 1)
 
 /*
  * Used for write-side mutual exclusion for the global device list,
@@ -53,18 +48,12 @@ enum sh_dmae_desc_status {
 static DEFINE_SPINLOCK(sh_dmae_lock);
 static LIST_HEAD(sh_dmae_devices);
 
-/* A bitmask with bits enough for enum sh_dmae_slave_chan_id */
-static unsigned long sh_dmae_slave_used[BITS_TO_LONGS(SH_DMA_SLAVE_NUMBER)];
-
-static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all);
-static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan);
-
 static void chclr_write(struct sh_dmae_chan *sh_dc, u32 data)
 {
 	struct sh_dmae_device *shdev = to_sh_dev(sh_dc);
 
 	__raw_writel(data, shdev->chan_reg +
-		     shdev->pdata->channel[sh_dc->id].chclr_offset);
+		     shdev->pdata->channel[sh_dc->simple_chan.id].chclr_offset);
 }
 
 static void sh_dmae_writel(struct sh_dmae_chan *sh_dc, u32 data, u32 reg)
@@ -154,11 +143,11 @@ static int sh_dmae_rst(struct sh_dmae_device *shdev)
 	spin_unlock_irqrestore(&sh_dmae_lock, flags);
 
 	if (dmaor & (DMAOR_AE | DMAOR_NMIF)) {
-		dev_warn(shdev->common.dev, "Can't initialize DMAOR.\n");
+		dev_warn(shdev->simple_dev.dma_dev.dev, "Can't initialize DMAOR.\n");
 		return -EIO;
 	}
 	if (shdev->pdata->dmaor_init & ~dmaor)
-		dev_warn(shdev->common.dev,
+		dev_warn(shdev->simple_dev.dma_dev.dev,
 			 "DMAOR=0x%x hasn't latched the initial value 0x%x.\n",
 			 dmaor, shdev->pdata->dmaor_init);
 	return 0;
@@ -223,15 +212,6 @@ static void dmae_start(struct sh_dmae_chan *sh_chan)
 	chcr_write(sh_chan, chcr & ~CHCR_TE);
 }
 
-static void dmae_halt(struct sh_dmae_chan *sh_chan)
-{
-	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
-	u32 chcr = chcr_read(sh_chan);
-
-	chcr &= ~(CHCR_DE | CHCR_TE | shdev->chcr_ie_bit);
-	chcr_write(sh_chan, chcr);
-}
-
 static void dmae_init(struct sh_dmae_chan *sh_chan)
 {
 	/*
@@ -260,7 +240,7 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
 {
 	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
 	struct sh_dmae_pdata *pdata = shdev->pdata;
-	const struct sh_dmae_channel *chan_pdata = &pdata->channel[sh_chan->id];
+	const struct sh_dmae_channel *chan_pdata = &pdata->channel[sh_chan->simple_chan.id];
 	u16 __iomem *addr = shdev->dmars;
 	unsigned int shift = chan_pdata->dmars_bit;
 
@@ -281,648 +261,122 @@ static int dmae_set_dmars(struct sh_dmae_chan *sh_chan, u16 val)
 	return 0;
 }
 
-static dma_cookie_t sh_dmae_tx_submit(struct dma_async_tx_descriptor *tx)
+static void sh_dmae_start_xfer(struct dma_simple_chan *schan,
+			       struct dma_simple_desc *sdesc)
 {
-	struct sh_desc *desc = tx_to_sh_desc(tx), *chunk, *last = desc, *c;
-	struct sh_dmae_chan *sh_chan = to_sh_chan(tx->chan);
-	struct sh_dmae_slave *param = tx->chan->private;
-	dma_async_tx_callback callback = tx->callback;
-	dma_cookie_t cookie;
-	bool power_up;
-
-	spin_lock_irq(&sh_chan->desc_lock);
-
-	if (list_empty(&sh_chan->ld_queue))
-		power_up = true;
-	else
-		power_up = false;
-
-	cookie = sh_chan->common.cookie;
-	cookie++;
-	if (cookie < 0)
-		cookie = 1;
-
-	sh_chan->common.cookie = cookie;
-	tx->cookie = cookie;
-
-	/* Mark all chunks of this descriptor as submitted, move to the queue */
-	list_for_each_entry_safe(chunk, c, desc->node.prev, node) {
-		/*
-		 * All chunks are on the global ld_free, so, we have to find
-		 * the end of the chain ourselves
-		 */
-		if (chunk != desc && (chunk->mark == DESC_IDLE ||
-				      chunk->async_tx.cookie > 0 ||
-				      chunk->async_tx.cookie == -EBUSY ||
-				      &chunk->node == &sh_chan->ld_free))
-			break;
-		chunk->mark = DESC_SUBMITTED;
-		/* Callback goes to the last chunk */
-		chunk->async_tx.callback = NULL;
-		chunk->cookie = cookie;
-		list_move_tail(&chunk->node, &sh_chan->ld_queue);
-		last = chunk;
-	}
-
-	last->async_tx.callback = callback;
-	last->async_tx.callback_param = tx->callback_param;
-
-	dev_dbg(sh_chan->dev, "submit #%d@%p on %d: %x[%d] -> %x\n",
-		tx->cookie, &last->async_tx, sh_chan->id,
-		desc->hw.sar, desc->hw.tcr, desc->hw.dar);
-
-	if (power_up) {
-		sh_chan->pm_state = DMAE_PM_BUSY;
-
-		pm_runtime_get(sh_chan->dev);
-
-		spin_unlock_irq(&sh_chan->desc_lock);
-
-		pm_runtime_barrier(sh_chan->dev);
-
-		spin_lock_irq(&sh_chan->desc_lock);
-
-		/* Have we been reset, while waiting? */
-		if (sh_chan->pm_state != DMAE_PM_ESTABLISHED) {
-			dev_dbg(sh_chan->dev, "Bring up channel %d\n",
-				sh_chan->id);
-			if (param) {
-				const struct sh_dmae_slave_config *cfg =
-					param->config;
-
-				dmae_set_dmars(sh_chan, cfg->mid_rid);
-				dmae_set_chcr(sh_chan, cfg->chcr);
-			} else {
-				dmae_init(sh_chan);
-			}
-
-			if (sh_chan->pm_state == DMAE_PM_PENDING)
-				sh_chan_xfer_ld_queue(sh_chan);
-			sh_chan->pm_state = DMAE_PM_ESTABLISHED;
-		}
-	} else {
-		sh_chan->pm_state = DMAE_PM_PENDING;
-	}
-
-	spin_unlock_irq(&sh_chan->desc_lock);
-
-	return cookie;
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
+	struct sh_dmae_desc *sh_desc = container_of(sdesc,
+					struct sh_dmae_desc, simple_desc);
+	dev_dbg(sh_chan->simple_chan.dev, "Queue #%d to %d: %u@%x -> %x\n",
+		sdesc->async_tx.cookie, sh_chan->simple_chan.id,
+		sh_desc->hw.tcr, sh_desc->hw.sar, sh_desc->hw.dar);
+	/* Get the ld start address from ld_queue */
+	dmae_set_reg(sh_chan, &sh_desc->hw);
+	dmae_start(sh_chan);
 }
 
-/* Called with desc_lock held */
-static struct sh_desc *sh_dmae_get_desc(struct sh_dmae_chan *sh_chan)
+static bool sh_dmae_channel_busy(struct dma_simple_chan *schan)
 {
-	struct sh_desc *desc;
-
-	list_for_each_entry(desc, &sh_chan->ld_free, node)
-		if (desc->mark != DESC_PREPARED) {
-			BUG_ON(desc->mark != DESC_IDLE);
-			list_del(&desc->node);
-			return desc;
-		}
-
-	return NULL;
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
+	return dmae_is_busy(sh_chan);
 }
 
-static const struct sh_dmae_slave_config *sh_dmae_find_slave(
-	struct sh_dmae_chan *sh_chan, struct sh_dmae_slave *param)
+static void sh_dmae_setup_xfer(struct dma_simple_chan *schan,
+			       struct dma_simple_slave *sslave)
 {
-	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
-	struct sh_dmae_pdata *pdata = shdev->pdata;
-	int i;
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
 
-	if (param->slave_id >= SH_DMA_SLAVE_NUMBER)
-		return NULL;
-
-	for (i = 0; i < pdata->slave_num; i++)
-		if (pdata->slave[i].slave_id == param->slave_id)
-			return pdata->slave + i;
-
-	return NULL;
-}
+	if (sslave) {
+		struct sh_dmae_slave *slave = container_of(sslave,
+					struct sh_dmae_slave, simple_slave);
+		const struct sh_dmae_slave_config *cfg =
+			slave->config;
 
-static int sh_dmae_alloc_chan_resources(struct dma_chan *chan)
-{
-	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
-	struct sh_desc *desc;
-	struct sh_dmae_slave *param = chan->private;
-	int ret;
-
-	/*
-	 * This relies on the guarantee from dmaengine that alloc_chan_resources
-	 * never runs concurrently with itself or free_chan_resources.
-	 */
-	if (param) {
-		const struct sh_dmae_slave_config *cfg;
-
-		cfg = sh_dmae_find_slave(sh_chan, param);
-		if (!cfg) {
-			ret = -EINVAL;
-			goto efindslave;
-		}
-
-		if (test_and_set_bit(param->slave_id, sh_dmae_slave_used)) {
-			ret = -EBUSY;
-			goto etestused;
-		}
-
-		param->config = cfg;
-	}
-
-	while (sh_chan->descs_allocated < NR_DESCS_PER_CHANNEL) {
-		desc = kzalloc(sizeof(struct sh_desc), GFP_KERNEL);
-		if (!desc)
-			break;
-		dma_async_tx_descriptor_init(&desc->async_tx,
-					&sh_chan->common);
-		desc->async_tx.tx_submit = sh_dmae_tx_submit;
-		desc->mark = DESC_IDLE;
-
-		list_add(&desc->node, &sh_chan->ld_free);
-		sh_chan->descs_allocated++;
-	}
-
-	if (!sh_chan->descs_allocated) {
-		ret = -ENOMEM;
-		goto edescalloc;
-	}
-
-	return sh_chan->descs_allocated;
-
-edescalloc:
-	if (param)
-		clear_bit(param->slave_id, sh_dmae_slave_used);
-etestused:
-efindslave:
-	chan->private = NULL;
-	return ret;
-}
-
-/*
- * sh_dma_free_chan_resources - Free all resources of the channel.
- */
-static void sh_dmae_free_chan_resources(struct dma_chan *chan)
-{
-	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
-	struct sh_desc *desc, *_desc;
-	LIST_HEAD(list);
-
-	/* Protect against ISR */
-	spin_lock_irq(&sh_chan->desc_lock);
-	dmae_halt(sh_chan);
-	spin_unlock_irq(&sh_chan->desc_lock);
-
-	/* Now no new interrupts will occur */
-
-	/* Prepared and not submitted descriptors can still be on the queue */
-	if (!list_empty(&sh_chan->ld_queue))
-		sh_dmae_chan_ld_cleanup(sh_chan, true);
-
-	if (chan->private) {
-		/* The caller is holding dma_list_mutex */
-		struct sh_dmae_slave *param = chan->private;
-		clear_bit(param->slave_id, sh_dmae_slave_used);
-		chan->private = NULL;
-	}
-
-	spin_lock_irq(&sh_chan->desc_lock);
-
-	list_splice_init(&sh_chan->ld_free, &list);
-	sh_chan->descs_allocated = 0;
-
-	spin_unlock_irq(&sh_chan->desc_lock);
-
-	list_for_each_entry_safe(desc, _desc, &list, node)
-		kfree(desc);
-}
-
-/**
- * sh_dmae_add_desc - get, set up and return one transfer descriptor
- * @sh_chan:	DMA channel
- * @flags:	DMA transfer flags
- * @dest:	destination DMA address, incremented when direction equals
- *		DMA_FROM_DEVICE or DMA_BIDIRECTIONAL
- * @src:	source DMA address, incremented when direction equals
- *		DMA_TO_DEVICE or DMA_BIDIRECTIONAL
- * @len:	DMA transfer length
- * @first:	if NULL, set to the current descriptor and cookie set to -EBUSY
- * @direction:	needed for slave DMA to decide which address to keep constant,
- *		equals DMA_BIDIRECTIONAL for MEMCPY
- * Returns 0 or an error
- * Locks: called with desc_lock held
- */
-static struct sh_desc *sh_dmae_add_desc(struct sh_dmae_chan *sh_chan,
-	unsigned long flags, dma_addr_t *dest, dma_addr_t *src, size_t *len,
-	struct sh_desc **first, enum dma_data_direction direction)
-{
-	struct sh_desc *new;
-	size_t copy_size;
-
-	if (!*len)
-		return NULL;
-
-	/* Allocate the link descriptor from the free list */
-	new = sh_dmae_get_desc(sh_chan);
-	if (!new) {
-		dev_err(sh_chan->dev, "No free link descriptor available\n");
-		return NULL;
-	}
-
-	copy_size = min(*len, (size_t)SH_DMA_TCR_MAX + 1);
-
-	new->hw.sar = *src;
-	new->hw.dar = *dest;
-	new->hw.tcr = copy_size;
-
-	if (!*first) {
-		/* First desc */
-		new->async_tx.cookie = -EBUSY;
-		*first = new;
+		dmae_set_dmars(sh_chan, cfg->mid_rid);
+		dmae_set_chcr(sh_chan, cfg->chcr);
 	} else {
-		/* Other desc - invisible to the user */
-		new->async_tx.cookie = -EINVAL;
+		dmae_init(sh_chan);
 	}
-
-	dev_dbg(sh_chan->dev,
-		"chaining (%u/%u)@%x -> %x with %p, cookie %d, shift %d\n",
-		copy_size, *len, *src, *dest, &new->async_tx,
-		new->async_tx.cookie, sh_chan->xmit_shift);
-
-	new->mark = DESC_PREPARED;
-	new->async_tx.flags = flags;
-	new->direction = direction;
-
-	*len -= copy_size;
-	if (direction == DMA_BIDIRECTIONAL || direction == DMA_TO_DEVICE)
-		*src += copy_size;
-	if (direction == DMA_BIDIRECTIONAL || direction == DMA_FROM_DEVICE)
-		*dest += copy_size;
-
-	return new;
 }
 
-/*
- * sh_dmae_prep_sg - prepare transfer descriptors from an SG list
- *
- * Common routine for public (MEMCPY) and slave DMA. The MEMCPY case is also
- * converted to scatter-gather to guarantee consistent locking and a correct
- * list manipulation. For slave DMA direction carries the usual meaning, and,
- * logically, the SG list is RAM and the addr variable contains slave address,
- * e.g., the FIFO I/O register. For MEMCPY direction equals DMA_BIDIRECTIONAL
- * and the SG list contains only one element and points at the source buffer.
- */
-static struct dma_async_tx_descriptor *sh_dmae_prep_sg(struct sh_dmae_chan *sh_chan,
-	struct scatterlist *sgl, unsigned int sg_len, dma_addr_t *addr,
-	enum dma_data_direction direction, unsigned long flags)
+static const struct sh_dmae_slave_config *dmae_find_slave(
+	struct sh_dmae_chan *sh_chan, struct sh_dmae_slave *slave)
 {
-	struct scatterlist *sg;
-	struct sh_desc *first = NULL, *new = NULL /* compiler... */;
-	LIST_HEAD(tx_list);
-	int chunks = 0;
-	unsigned long irq_flags;
+	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
+	struct sh_dmae_pdata *pdata = shdev->pdata;
+	const struct sh_dmae_slave_config *cfg;
 	int i;
 
-	if (!sg_len)
+	if (slave->simple_slave.slave_id >= SH_DMA_SLAVE_NUMBER)
 		return NULL;
 
-	for_each_sg(sgl, sg, sg_len, i)
-		chunks += (sg_dma_len(sg) + SH_DMA_TCR_MAX) /
-			(SH_DMA_TCR_MAX + 1);
-
-	/* Have to lock the whole loop to protect against concurrent release */
-	spin_lock_irqsave(&sh_chan->desc_lock, irq_flags);
-
-	/*
-	 * Chaining:
-	 * first descriptor is what user is dealing with in all API calls, its
-	 *	cookie is at first set to -EBUSY, at tx-submit to a positive
-	 *	number
-	 * if more than one chunk is needed further chunks have cookie = -EINVAL
-	 * the last chunk, if not equal to the first, has cookie = -ENOSPC
-	 * all chunks are linked onto the tx_list head with their .node heads
-	 *	only during this function, then they are immediately spliced
-	 *	back onto the free list in form of a chain
-	 */
-	for_each_sg(sgl, sg, sg_len, i) {
-		dma_addr_t sg_addr = sg_dma_address(sg);
-		size_t len = sg_dma_len(sg);
-
-		if (!len)
-			goto err_get_desc;
-
-		do {
-			dev_dbg(sh_chan->dev, "Add SG #%d@%p[%d], dma %llx\n",
-				i, sg, len, (unsigned long long)sg_addr);
-
-			if (direction == DMA_FROM_DEVICE)
-				new = sh_dmae_add_desc(sh_chan, flags,
-						&sg_addr, addr, &len, &first,
-						direction);
-			else
-				new = sh_dmae_add_desc(sh_chan, flags,
-						addr, &sg_addr, &len, &first,
-						direction);
-			if (!new)
-				goto err_get_desc;
-
-			new->chunks = chunks--;
-			list_add_tail(&new->node, &tx_list);
-		} while (len);
-	}
-
-	if (new != first)
-		new->async_tx.cookie = -ENOSPC;
-
-	/* Put them back on the free list, so, they don't get lost */
-	list_splice_tail(&tx_list, &sh_chan->ld_free);
-
-	spin_unlock_irqrestore(&sh_chan->desc_lock, irq_flags);
-
-	return &first->async_tx;
-
-err_get_desc:
-	list_for_each_entry(new, &tx_list, node)
-		new->mark = DESC_IDLE;
-	list_splice(&tx_list, &sh_chan->ld_free);
-
-	spin_unlock_irqrestore(&sh_chan->desc_lock, irq_flags);
+	for (i = 0, cfg = pdata->slave; i < pdata->slave_num; i++, cfg++)
+		if (cfg->slave_id == slave->simple_slave.slave_id)
+			return cfg;
 
 	return NULL;
 }
 
-static struct dma_async_tx_descriptor *sh_dmae_prep_memcpy(
-	struct dma_chan *chan, dma_addr_t dma_dest, dma_addr_t dma_src,
-	size_t len, unsigned long flags)
-{
-	struct sh_dmae_chan *sh_chan;
-	struct scatterlist sg;
-
-	if (!chan || !len)
-		return NULL;
-
-	sh_chan = to_sh_chan(chan);
-
-	sg_init_table(&sg, 1);
-	sg_set_page(&sg, pfn_to_page(PFN_DOWN(dma_src)), len,
-		    offset_in_page(dma_src));
-	sg_dma_address(&sg) = dma_src;
-	sg_dma_len(&sg) = len;
-
-	return sh_dmae_prep_sg(sh_chan, &sg, 1, &dma_dest, DMA_BIDIRECTIONAL,
-			       flags);
-}
-
-static struct dma_async_tx_descriptor *sh_dmae_prep_slave_sg(
-	struct dma_chan *chan, struct scatterlist *sgl, unsigned int sg_len,
-	enum dma_data_direction direction, unsigned long flags)
+static int sh_dmae_set_slave(struct dma_simple_chan *schan,
+			     struct dma_simple_slave *sslave)
 {
-	struct sh_dmae_slave *param;
-	struct sh_dmae_chan *sh_chan;
-	dma_addr_t slave_addr;
-
-	if (!chan)
-		return NULL;
-
-	sh_chan = to_sh_chan(chan);
-	param = chan->private;
-
-	/* Someone calling slave DMA on a public channel? */
-	if (!param || !sg_len) {
-		dev_warn(sh_chan->dev, "%s: bad parameter: %p, %d, %d\n",
-			 __func__, param, sg_len, param ? param->slave_id : -1);
-		return NULL;
-	}
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
+	struct sh_dmae_slave *slave = container_of(sslave, struct sh_dmae_slave,
+						   simple_slave);
+	const struct sh_dmae_slave_config *cfg = dmae_find_slave(sh_chan, slave);
+	if (!cfg)
+		return -ENODEV;
 
-	slave_addr = param->config->addr;
+	slave->config = cfg;
 
-	/*
-	 * if (param != NULL), this is a successfully requested slave channel,
-	 * therefore param->config != NULL too.
-	 */
-	return sh_dmae_prep_sg(sh_chan, sgl, sg_len, &slave_addr,
-			       direction, flags);
+	return 0;
 }
 
-static int sh_dmae_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
-			   unsigned long arg)
+static void dmae_halt(struct sh_dmae_chan *sh_chan)
 {
-	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
 	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
-	unsigned long flags;
-
-	/* Only supports DMA_TERMINATE_ALL */
-	if (cmd != DMA_TERMINATE_ALL)
-		return -ENXIO;
-
-	if (!chan)
-		return -EINVAL;
-
-	spin_lock_irqsave(&sh_chan->desc_lock, flags);
-	dmae_halt(sh_chan);
-
-	if (shdev->pdata->chclr_present)
-		chclr_write(sh_chan, 0);
-
-	if (!list_empty(&sh_chan->ld_queue)) {
-		/* Record partial transfer */
-		struct sh_desc *desc = list_entry(sh_chan->ld_queue.next,
-						  struct sh_desc, node);
-		desc->partial = (desc->hw.tcr - sh_dmae_readl(sh_chan, TCR)) <<
-			sh_chan->xmit_shift;
-	}
-	spin_unlock_irqrestore(&sh_chan->desc_lock, flags);
-
-	sh_dmae_chan_ld_cleanup(sh_chan, true);
+	u32 chcr = chcr_read(sh_chan);
 
-	return 0;
+	chcr &= ~(CHCR_DE | CHCR_TE | shdev->chcr_ie_bit);
+	chcr_write(sh_chan, chcr);
 }
 
-static dma_async_tx_callback __ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
+static int sh_dmae_desc_setup(struct dma_simple_chan *schan,
+			      struct dma_simple_desc *sdesc,
+			      dma_addr_t src, dma_addr_t dst, size_t *len)
 {
-	struct sh_desc *desc, *_desc;
-	/* Is the "exposed" head of a chain acked? */
-	bool head_acked = false;
-	dma_cookie_t cookie = 0;
-	dma_async_tx_callback callback = NULL;
-	void *param = NULL;
-	unsigned long flags;
-
-	spin_lock_irqsave(&sh_chan->desc_lock, flags);
-	list_for_each_entry_safe(desc, _desc, &sh_chan->ld_queue, node) {
-		struct dma_async_tx_descriptor *tx = &desc->async_tx;
-
-		BUG_ON(tx->cookie > 0 && tx->cookie != desc->cookie);
-		BUG_ON(desc->mark != DESC_SUBMITTED &&
-		       desc->mark != DESC_COMPLETED &&
-		       desc->mark != DESC_WAITING);
-
-		/*
-		 * queue is ordered, and we use this loop to (1) clean up all
-		 * completed descriptors, and to (2) update descriptor flags of
-		 * any chunks in a (partially) completed chain
-		 */
-		if (!all && desc->mark == DESC_SUBMITTED &&
-		    desc->cookie != cookie)
-			break;
-
-		if (tx->cookie > 0)
-			cookie = tx->cookie;
-
-		if (desc->mark == DESC_COMPLETED && desc->chunks == 1) {
-			if (sh_chan->completed_cookie != desc->cookie - 1)
-				dev_dbg(sh_chan->dev,
-					"Completing cookie %d, expected %d\n",
-					desc->cookie,
-					sh_chan->completed_cookie + 1);
-			sh_chan->completed_cookie = desc->cookie;
-		}
+	struct sh_dmae_desc *sh_desc = container_of(sdesc,
+					struct sh_dmae_desc, simple_desc);
 
-		/* Call callback on the last chunk */
-		if (desc->mark == DESC_COMPLETED && tx->callback) {
-			desc->mark = DESC_WAITING;
-			callback = tx->callback;
-			param = tx->callback_param;
-			dev_dbg(sh_chan->dev, "descriptor #%d@%p on %d callback\n",
-				tx->cookie, tx, sh_chan->id);
-			BUG_ON(desc->chunks != 1);
-			break;
-		}
+	if (*len > schan->max_xfer_len)
+		*len = schan->max_xfer_len;
 
-		if (tx->cookie > 0 || tx->cookie == -EBUSY) {
-			if (desc->mark == DESC_COMPLETED) {
-				BUG_ON(tx->cookie < 0);
-				desc->mark = DESC_WAITING;
-			}
-			head_acked = async_tx_test_ack(tx);
-		} else {
-			switch (desc->mark) {
-			case DESC_COMPLETED:
-				desc->mark = DESC_WAITING;
-				/* Fall through */
-			case DESC_WAITING:
-				if (head_acked)
-					async_tx_ack(&desc->async_tx);
-			}
-		}
+	sh_desc->hw.sar = src;
+	sh_desc->hw.dar = dst;
+	sh_desc->hw.tcr = *len;
 
-		dev_dbg(sh_chan->dev, "descriptor %p #%d completed.\n",
-			tx, tx->cookie);
-
-		if (((desc->mark == DESC_COMPLETED ||
-		      desc->mark == DESC_WAITING) &&
-		     async_tx_test_ack(&desc->async_tx)) || all) {
-			/* Remove from ld_queue list */
-			desc->mark = DESC_IDLE;
-
-			list_move(&desc->node, &sh_chan->ld_free);
-
-			if (list_empty(&sh_chan->ld_queue)) {
-				dev_dbg(sh_chan->dev, "Bring down channel %d\n", sh_chan->id);
-				pm_runtime_put(sh_chan->dev);
-			}
-		}
-	}
-
-	if (all && !callback)
-		/*
-		 * Terminating and the loop completed normally: forgive
-		 * uncompleted cookies
-		 */
-		sh_chan->completed_cookie = sh_chan->common.cookie;
-
-	spin_unlock_irqrestore(&sh_chan->desc_lock, flags);
-
-	if (callback)
-		callback(param);
-
-	return callback;
-}
-
-/*
- * sh_chan_ld_cleanup - Clean up link descriptors
- *
- * This function cleans up the ld_queue of DMA channel.
- */
-static void sh_dmae_chan_ld_cleanup(struct sh_dmae_chan *sh_chan, bool all)
-{
-	while (__ld_cleanup(sh_chan, all))
-		;
-}
-
-/* Called under spin_lock_irq(&sh_chan->desc_lock) */
-static void sh_chan_xfer_ld_queue(struct sh_dmae_chan *sh_chan)
-{
-	struct sh_desc *desc;
-
-	/* DMA work check */
-	if (dmae_is_busy(sh_chan))
-		return;
-
-	/* Find the first not transferred descriptor */
-	list_for_each_entry(desc, &sh_chan->ld_queue, node)
-		if (desc->mark == DESC_SUBMITTED) {
-			dev_dbg(sh_chan->dev, "Queue #%d to %d: %u@%x -> %x\n",
-				desc->async_tx.cookie, sh_chan->id,
-				desc->hw.tcr, desc->hw.sar, desc->hw.dar);
-			/* Get the ld start address from ld_queue */
-			dmae_set_reg(sh_chan, &desc->hw);
-			dmae_start(sh_chan);
-			break;
-		}
+	return 0;
 }
 
-static void sh_dmae_memcpy_issue_pending(struct dma_chan *chan)
+static void sh_dmae_halt(struct dma_simple_chan *schan)
 {
-	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
-
-	spin_lock_irq(&sh_chan->desc_lock);
-	if (sh_chan->pm_state == DMAE_PM_ESTABLISHED)
-		sh_chan_xfer_ld_queue(sh_chan);
-	else
-		sh_chan->pm_state = DMAE_PM_PENDING;
-	spin_unlock_irq(&sh_chan->desc_lock);
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
+	dmae_halt(sh_chan);
 }
 
-static enum dma_status sh_dmae_tx_status(struct dma_chan *chan,
-					dma_cookie_t cookie,
-					struct dma_tx_state *txstate)
+static void sh_dmae_clear(struct dma_simple_chan *schan)
 {
-	struct sh_dmae_chan *sh_chan = to_sh_chan(chan);
-	dma_cookie_t last_used;
-	dma_cookie_t last_complete;
-	enum dma_status status;
-	unsigned long flags;
-
-	sh_dmae_chan_ld_cleanup(sh_chan, false);
-
-	/* First read completed cookie to avoid a skew */
-	last_complete = sh_chan->completed_cookie;
-	rmb();
-	last_used = chan->cookie;
-	BUG_ON(last_complete < 0);
-	dma_set_tx_state(txstate, last_complete, last_used, 0);
-
-	spin_lock_irqsave(&sh_chan->desc_lock, flags);
-
-	status = dma_async_is_complete(cookie, last_complete, last_used);
-
-	/*
-	 * If we don't find cookie on the queue, it has been aborted and we have
-	 * to report error
-	 */
-	if (status != DMA_SUCCESS) {
-		struct sh_desc *desc;
-		status = DMA_ERROR;
-		list_for_each_entry(desc, &sh_chan->ld_queue, node)
-			if (desc->cookie == cookie) {
-				status = DMA_IN_PROGRESS;
-				break;
-			}
-	}
-
-	spin_unlock_irqrestore(&sh_chan->desc_lock, flags);
+	struct sh_dmae_chan *sh_chan = container_of(schan, struct sh_dmae_chan,
+						    simple_chan);
+	struct sh_dmae_device *shdev = to_sh_dev(sh_chan);
 
-	return status;
+	if (shdev->pdata->chclr_present)
+		chclr_write(sh_chan, 0);
 }
 
 static irqreturn_t sh_dmae_interrupt(int irq, void *data)
@@ -931,7 +385,7 @@ static irqreturn_t sh_dmae_interrupt(int irq, void *data)
 	struct sh_dmae_chan *sh_chan = data;
 	u32 chcr;
 
-	spin_lock(&sh_chan->desc_lock);
+	dma_simple_lock(&sh_chan->simple_chan);
 
 	chcr = chcr_read(sh_chan);
 
@@ -940,10 +394,11 @@ static irqreturn_t sh_dmae_interrupt(int irq, void *data)
 		dmae_halt(sh_chan);
 
 		ret = IRQ_HANDLED;
-		tasklet_schedule(&sh_chan->tasklet);
+		/* clean up completed, submit new transfers */
+		dma_simple_reload(&sh_chan->simple_chan);
 	}
 
-	spin_unlock(&sh_chan->desc_lock);
+	dma_simple_unlock(&sh_chan->simple_chan);
 
 	return ret;
 }
@@ -951,54 +406,17 @@ static irqreturn_t sh_dmae_interrupt(int irq, void *data)
 /* Called from error IRQ or NMI */
 static bool sh_dmae_reset(struct sh_dmae_device *shdev)
 {
-	unsigned int handled = 0;
-	int i;
+	bool ret;
 
 	/* halt the dma controller */
 	sh_dmae_ctl_stop(shdev);
 
 	/* We cannot detect, which channel caused the error, have to reset all */
-	for (i = 0; i < SH_DMAC_MAX_CHANNELS; i++) {
-		struct sh_dmae_chan *sh_chan = shdev->chan[i];
-		struct sh_desc *desc;
-		LIST_HEAD(dl);
-
-		if (!sh_chan)
-			continue;
-
-		spin_lock(&sh_chan->desc_lock);
-
-		/* Stop the channel */
-		dmae_halt(sh_chan);
-
-		list_splice_init(&sh_chan->ld_queue, &dl);
-
-		if (!list_empty(&dl)) {
-			dev_dbg(sh_chan->dev, "Bring down channel %d\n", sh_chan->id);
-			pm_runtime_put(sh_chan->dev);
-		}
-		sh_chan->pm_state = DMAE_PM_ESTABLISHED;
-
-		spin_unlock(&sh_chan->desc_lock);
-
-		/* Complete all  */
-		list_for_each_entry(desc, &dl, node) {
-			struct dma_async_tx_descriptor *tx = &desc->async_tx;
-			desc->mark = DESC_IDLE;
-			if (tx->callback)
-				tx->callback(tx->callback_param);
-		}
-
-		spin_lock(&sh_chan->desc_lock);
-		list_splice(&dl, &sh_chan->ld_free);
-		spin_unlock(&sh_chan->desc_lock);
-
-		handled++;
-	}
+	ret = dma_simple_reset(&shdev->simple_dev);
 
 	sh_dmae_rst(shdev);
 
-	return !!handled;
+	return ret;
 }
 
 static irqreturn_t sh_dmae_err(int irq, void *data)
@@ -1008,35 +426,24 @@ static irqreturn_t sh_dmae_err(int irq, void *data)
 	if (!(dmaor_read(shdev) & DMAOR_AE))
 		return IRQ_NONE;
 
-	sh_dmae_reset(data);
+	sh_dmae_reset(shdev);
 	return IRQ_HANDLED;
 }
 
-static void dmae_do_tasklet(unsigned long data)
+static bool sh_dmae_desc_completed(struct dma_simple_chan *schan,
+				   struct dma_simple_desc *sdesc)
 {
-	struct sh_dmae_chan *sh_chan = (struct sh_dmae_chan *)data;
-	struct sh_desc *desc;
+	struct sh_dmae_chan *sh_chan = container_of(schan,
+					struct sh_dmae_chan, simple_chan);
+	struct sh_dmae_desc *sh_desc = container_of(sdesc,
+					struct sh_dmae_desc, simple_desc);
 	u32 sar_buf = sh_dmae_readl(sh_chan, SAR);
 	u32 dar_buf = sh_dmae_readl(sh_chan, DAR);
 
-	spin_lock_irq(&sh_chan->desc_lock);
-	list_for_each_entry(desc, &sh_chan->ld_queue, node) {
-		if (desc->mark == DESC_SUBMITTED &&
-		    ((desc->direction == DMA_FROM_DEVICE &&
-		      (desc->hw.dar + desc->hw.tcr) == dar_buf) ||
-		     (desc->hw.sar + desc->hw.tcr) == sar_buf)) {
-			dev_dbg(sh_chan->dev, "done #%d@%p dst %u\n",
-				desc->async_tx.cookie, &desc->async_tx,
-				desc->hw.dar);
-			desc->mark = DESC_COMPLETED;
-			break;
-		}
-	}
-	/* Next desc */
-	sh_chan_xfer_ld_queue(sh_chan);
-	spin_unlock_irq(&sh_chan->desc_lock);
-
-	sh_dmae_chan_ld_cleanup(sh_chan, false);
+	return	(sdesc->direction == DMA_FROM_DEVICE &&
+		 (sh_desc->hw.dar + sh_desc->hw.tcr) == dar_buf) ||
+		(sdesc->direction != DMA_FROM_DEVICE &&
+		 (sh_desc->hw.sar + sh_desc->hw.tcr) == sar_buf);
 }
 
 static bool sh_dmae_nmi_notify(struct sh_dmae_device *shdev)
@@ -1090,96 +497,176 @@ static struct notifier_block sh_dmae_nmi_notifier __read_mostly = {
 static int __devinit sh_dmae_chan_probe(struct sh_dmae_device *shdev, int id,
 					int irq, unsigned long flags)
 {
-	int err;
 	const struct sh_dmae_channel *chan_pdata = &shdev->pdata->channel[id];
-	struct platform_device *pdev = to_platform_device(shdev->common.dev);
-	struct sh_dmae_chan *new_sh_chan;
+	struct dma_simple_dev *sdev = &shdev->simple_dev;
+	struct platform_device *pdev = to_platform_device(sdev->dma_dev.dev);
+	struct sh_dmae_chan *sh_chan;
+	struct dma_simple_chan *schan;
+	int err;
 
-	/* alloc channel */
-	new_sh_chan = kzalloc(sizeof(struct sh_dmae_chan), GFP_KERNEL);
-	if (!new_sh_chan) {
-		dev_err(shdev->common.dev,
+	sh_chan = kzalloc(sizeof(struct sh_dmae_chan), GFP_KERNEL);
+	if (!sh_chan) {
+		dev_err(sdev->dma_dev.dev,
 			"No free memory for allocating dma channels!\n");
 		return -ENOMEM;
 	}
 
-	new_sh_chan->pm_state = DMAE_PM_ESTABLISHED;
-
-	/* reference struct dma_device */
-	new_sh_chan->common.device = &shdev->common;
-
-	new_sh_chan->dev = shdev->common.dev;
-	new_sh_chan->id = id;
-	new_sh_chan->irq = irq;
-	new_sh_chan->base = shdev->chan_reg + chan_pdata->offset / sizeof(u32);
+	schan = &sh_chan->simple_chan;
+	schan->max_xfer_len = SH_DMA_TCR_MAX + 1;
 
-	/* Init DMA tasklet */
-	tasklet_init(&new_sh_chan->tasklet, dmae_do_tasklet,
-			(unsigned long)new_sh_chan);
+	dma_simple_chan_probe(sdev, schan, id);
 
-	spin_lock_init(&new_sh_chan->desc_lock);
-
-	/* Init descripter manage list */
-	INIT_LIST_HEAD(&new_sh_chan->ld_queue);
-	INIT_LIST_HEAD(&new_sh_chan->ld_free);
-
-	/* Add the channel to DMA device channel list */
-	list_add_tail(&new_sh_chan->common.device_node,
-			&shdev->common.channels);
-	shdev->common.chancnt++;
+	sh_chan->irq = irq;
+	sh_chan->base = shdev->chan_reg + chan_pdata->offset / sizeof(u32);
 
+	/* set up channel irq */
 	if (pdev->id >= 0)
-		snprintf(new_sh_chan->dev_id, sizeof(new_sh_chan->dev_id),
-			 "sh-dmae%d.%d", pdev->id, new_sh_chan->id);
+		snprintf(sh_chan->dev_id, sizeof(sh_chan->dev_id),
+			 "sh-dmae%d.%d", pdev->id, id);
 	else
-		snprintf(new_sh_chan->dev_id, sizeof(new_sh_chan->dev_id),
-			 "sh-dma%d", new_sh_chan->id);
+		snprintf(sh_chan->dev_id, sizeof(sh_chan->dev_id),
+			 "sh-dma%d", id);
 
-	/* set up channel irq */
 	err = request_irq(irq, &sh_dmae_interrupt, flags,
-			  new_sh_chan->dev_id, new_sh_chan);
+			  sh_chan->dev_id, sh_chan);
 	if (err) {
-		dev_err(shdev->common.dev, "DMA channel %d request_irq error "
-			"with return %d\n", id, err);
+		dev_err(sdev->dma_dev.dev,
+			"DMA channel %d request_irq error %d\n",
+			id, err);
 		goto err_no_irq;
 	}
 
-	shdev->chan[id] = new_sh_chan;
+	shdev->chan[id] = sh_chan;
 	return 0;
 
 err_no_irq:
 	/* remove from dmaengine device node */
-	list_del(&new_sh_chan->common.device_node);
-	kfree(new_sh_chan);
+	dma_simple_chan_remove(schan);
+	kfree(sh_chan);
 	return err;
 }
 
 static void sh_dmae_chan_remove(struct sh_dmae_device *shdev)
 {
+	struct dma_device *dma_dev = &shdev->simple_dev.dma_dev;
+	struct dma_simple_chan *schan;
 	int i;
 
-	for (i = shdev->common.chancnt - 1 ; i >= 0 ; i--) {
-		if (shdev->chan[i]) {
-			struct sh_dmae_chan *sh_chan = shdev->chan[i];
+	dma_simple_for_each_chan(schan, &shdev->simple_dev, i) {
+		struct sh_dmae_chan *sh_chan = container_of(schan,
+					struct sh_dmae_chan, simple_chan);
+		BUG_ON(!schan);
+
+		free_irq(sh_chan->irq, sh_chan);
+
+		dma_simple_chan_remove(schan);
+		kfree(sh_chan);
+	}
+	dma_dev->chancnt = 0;
+}
+
+static void sh_dmae_shutdown(struct platform_device *pdev)
+{
+	struct sh_dmae_device *shdev = platform_get_drvdata(pdev);
+	sh_dmae_ctl_stop(shdev);
+}
+
+static int sh_dmae_runtime_suspend(struct device *dev)
+{
+	return 0;
+}
+
+static int sh_dmae_runtime_resume(struct device *dev)
+{
+	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
+
+	return sh_dmae_rst(shdev);
+}
 
-			free_irq(sh_chan->irq, sh_chan);
+#ifdef CONFIG_PM
+static int sh_dmae_suspend(struct device *dev)
+{
+	return 0;
+}
 
-			list_del(&sh_chan->common.device_node);
-			kfree(sh_chan);
-			shdev->chan[i] = NULL;
+static int sh_dmae_resume(struct device *dev)
+{
+	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
+	int i, ret;
+
+	ret = sh_dmae_rst(shdev);
+	if (ret < 0)
+		dev_err(dev, "Failed to reset!\n");
+
+	for (i = 0; i < shdev->pdata->channel_num; i++) {
+		struct sh_dmae_chan *sh_chan = shdev->chan[i];
+		struct sh_dmae_slave *param = sh_chan->simple_chan.dma_chan.private;
+
+		if (!sh_chan->simple_chan.desc_num)
+			continue;
+
+		if (param) {
+			const struct sh_dmae_slave_config *cfg = param->config;
+			dmae_set_dmars(sh_chan, cfg->mid_rid);
+			dmae_set_chcr(sh_chan, cfg->chcr);
+		} else {
+			dmae_init(sh_chan);
 		}
 	}
-	shdev->common.chancnt = 0;
+
+	return 0;
+}
+#else
+#define sh_dmae_suspend NULL
+#define sh_dmae_resume NULL
+#endif
+
+const struct dev_pm_ops sh_dmae_pm = {
+	.suspend		= sh_dmae_suspend,
+	.resume			= sh_dmae_resume,
+	.runtime_suspend	= sh_dmae_runtime_suspend,
+	.runtime_resume		= sh_dmae_runtime_resume,
+};
+
+static dma_addr_t sh_dmae_slave_addr(struct dma_simple_chan *schan)
+{
+	struct sh_dmae_slave *param = schan->dma_chan.private;
+
+	/*
+	 * Implicit BUG_ON(!param)
+	 * if (param != NULL), this is a successfully requested slave channel,
+	 * therefore param->config != NULL too.
+	 */
+	return param->config->addr;
+}
+
+static struct dma_simple_desc *sh_dmae_embedded_desc(void *buf, int i)
+{
+	return &((struct sh_dmae_desc *)buf)[i].simple_desc;
 }
 
-static int __init sh_dmae_probe(struct platform_device *pdev)
+static const struct dma_simple_ops sh_dmae_simple_ops = {
+	.desc_completed = sh_dmae_desc_completed,
+	.halt_channel = sh_dmae_halt,
+	.clear_channel = sh_dmae_clear,
+	.channel_busy = sh_dmae_channel_busy,
+	.slave_addr = sh_dmae_slave_addr,
+	.desc_setup = sh_dmae_desc_setup,
+	.set_slave = sh_dmae_set_slave,
+	.setup_xfer = sh_dmae_setup_xfer,
+	.start_xfer = sh_dmae_start_xfer,
+	.embedded_desc = sh_dmae_embedded_desc,
+};
+
+static int __devinit sh_dmae_probe(struct platform_device *pdev)
 {
 	struct sh_dmae_pdata *pdata = pdev->dev.platform_data;
 	unsigned long irqflags = IRQF_DISABLED,
-		chan_flag[SH_DMAC_MAX_CHANNELS] = {};
-	int errirq, chan_irq[SH_DMAC_MAX_CHANNELS];
+		chan_flag[SH_DMAE_MAX_CHANNELS] = {};
+	int errirq, chan_irq[SH_DMAE_MAX_CHANNELS];
 	int err, i, irq_cnt = 0, irqres = 0, irq_cap = 0;
 	struct sh_dmae_device *shdev;
+	struct dma_device *dma_dev;
 	struct resource *chan, *dmars, *errirq_res, *chanirq_res;
 
 	/* get platform data */
@@ -1227,6 +714,8 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 		goto ealloc;
 	}
 
+	dma_dev = &shdev->simple_dev.dma_dev;
+
 	shdev->chan_reg = ioremap(chan->start, resource_size(chan));
 	if (!shdev->chan_reg)
 		goto emapchan;
@@ -1236,8 +725,23 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 			goto emapdmars;
 	}
 
+	if (!pdata->slave_only)
+		dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
+	if (pdata->slave && pdata->slave_num)
+		dma_cap_set(DMA_SLAVE, dma_dev->cap_mask);
+
+	/* Default transfer size of 32 bytes requires 32-byte alignment */
+	dma_dev->copy_align = LOG2_DEFAULT_XFER_SIZE;
+
+	shdev->simple_dev.ops = &sh_dmae_simple_ops;
+	shdev->simple_dev.desc_size = sizeof(struct sh_dmae_desc);
+	err = dma_simple_init(&pdev->dev, &shdev->simple_dev,
+			      pdata->channel_num);
+	if (err < 0)
+		goto esimple;
+
 	/* platform data */
-	shdev->pdata = pdata;
+	shdev->pdata = pdev->dev.platform_data;
 
 	if (pdata->chcr_offset)
 		shdev->chcr_offset = pdata->chcr_offset;
@@ -1251,10 +755,10 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, shdev);
 
-	shdev->common.dev = &pdev->dev;
-
 	pm_runtime_enable(&pdev->dev);
-	pm_runtime_get_sync(&pdev->dev);
+	err = pm_runtime_get_sync(&pdev->dev);
+	if (err < 0)
+		dev_err(&pdev->dev, "%s(): GET = %d\n", __func__, err);
 
 	spin_lock_irq(&sh_dmae_lock);
 	list_add_tail_rcu(&shdev->node, &sh_dmae_devices);
@@ -1265,27 +769,6 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	if (err)
 		goto rst_err;
 
-	INIT_LIST_HEAD(&shdev->common.channels);
-
-	if (!pdata->slave_only)
-		dma_cap_set(DMA_MEMCPY, shdev->common.cap_mask);
-	if (pdata->slave && pdata->slave_num)
-		dma_cap_set(DMA_SLAVE, shdev->common.cap_mask);
-
-	shdev->common.device_alloc_chan_resources
-		= sh_dmae_alloc_chan_resources;
-	shdev->common.device_free_chan_resources = sh_dmae_free_chan_resources;
-	shdev->common.device_prep_dma_memcpy = sh_dmae_prep_memcpy;
-	shdev->common.device_tx_status = sh_dmae_tx_status;
-	shdev->common.device_issue_pending = sh_dmae_memcpy_issue_pending;
-
-	/* Compulsory for DMA_SLAVE fields */
-	shdev->common.device_prep_slave_sg = sh_dmae_prep_slave_sg;
-	shdev->common.device_control = sh_dmae_control;
-
-	/* Default transfer size of 32 bytes requires 32-byte alignment */
-	shdev->common.copy_align = LOG2_DEFAULT_XFER_SIZE;
-
 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_ARCH_SHMOBILE)
 	chanirq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 1);
 
@@ -1317,7 +800,7 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	    !platform_get_resource(pdev, IORESOURCE_IRQ, 1)) {
 		/* Special case - all multiplexed */
 		for (; irq_cnt < pdata->channel_num; irq_cnt++) {
-			if (irq_cnt < SH_DMAC_MAX_CHANNELS) {
+			if (irq_cnt < SH_DMAE_MAX_CHANNELS) {
 				chan_irq[irq_cnt] = chanirq_res->start;
 				chan_flag[irq_cnt] = IRQF_SHARED;
 			} else {
@@ -1328,7 +811,7 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	} else {
 		do {
 			for (i = chanirq_res->start; i <= chanirq_res->end; i++) {
-				if (irq_cnt >= SH_DMAC_MAX_CHANNELS) {
+				if (irq_cnt >= SH_DMAE_MAX_CHANNELS) {
 					irq_cap = 1;
 					break;
 				}
@@ -1344,7 +827,7 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 				chan_irq[irq_cnt++] = i;
 			}
 
-			if (irq_cnt >= SH_DMAC_MAX_CHANNELS)
+			if (irq_cnt >= SH_DMAE_MAX_CHANNELS)
 				break;
 
 			chanirq_res = platform_get_resource(pdev,
@@ -1362,14 +845,19 @@ static int __init sh_dmae_probe(struct platform_device *pdev)
 	if (irq_cap)
 		dev_notice(&pdev->dev, "Attempting to register %d DMA "
 			   "channels when a maximum of %d are supported.\n",
-			   pdata->channel_num, SH_DMAC_MAX_CHANNELS);
+			   pdata->channel_num, SH_DMAE_MAX_CHANNELS);
 
 	pm_runtime_put(&pdev->dev);
 
-	dma_async_device_register(&shdev->common);
+	err = dma_async_device_register(&shdev->simple_dev.dma_dev);
+	if (err < 0)
+		goto edmadevreg;
 
 	return err;
 
+edmadevreg:
+	pm_runtime_get(&pdev->dev);
+
 chan_probe_err:
 	sh_dmae_chan_remove(shdev);
 
@@ -1385,10 +873,11 @@ rst_err:
 	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 
+	platform_set_drvdata(pdev, NULL);
+	dma_simple_cleanup(&shdev->simple_dev);
+esimple:
 	if (dmars)
 		iounmap(shdev->dmars);
-
-	platform_set_drvdata(pdev, NULL);
 emapdmars:
 	iounmap(shdev->chan_reg);
 	synchronize_rcu();
@@ -1403,13 +892,14 @@ ermrdmars:
 	return err;
 }
 
-static int __exit sh_dmae_remove(struct platform_device *pdev)
+static int __devexit sh_dmae_remove(struct platform_device *pdev)
 {
 	struct sh_dmae_device *shdev = platform_get_drvdata(pdev);
+	struct dma_device *dma_dev = &shdev->simple_dev.dma_dev;
 	struct resource *res;
 	int errirq = platform_get_irq(pdev, 0);
 
-	dma_async_device_unregister(&shdev->common);
+	dma_async_device_unregister(dma_dev);
 
 	if (errirq > 0)
 		free_irq(errirq, shdev);
@@ -1418,11 +908,11 @@ static int __exit sh_dmae_remove(struct platform_device *pdev)
 	list_del_rcu(&shdev->node);
 	spin_unlock_irq(&sh_dmae_lock);
 
-	/* channel data remove */
-	sh_dmae_chan_remove(shdev);
-
 	pm_runtime_disable(&pdev->dev);
 
+	sh_dmae_chan_remove(shdev);
+	dma_simple_cleanup(&shdev->simple_dev);
+
 	if (shdev->dmars)
 		iounmap(shdev->dmars);
 	iounmap(shdev->chan_reg);
@@ -1442,77 +932,14 @@ static int __exit sh_dmae_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static void sh_dmae_shutdown(struct platform_device *pdev)
-{
-	struct sh_dmae_device *shdev = platform_get_drvdata(pdev);
-	sh_dmae_ctl_stop(shdev);
-}
-
-static int sh_dmae_runtime_suspend(struct device *dev)
-{
-	return 0;
-}
-
-static int sh_dmae_runtime_resume(struct device *dev)
-{
-	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
-
-	return sh_dmae_rst(shdev);
-}
-
-#ifdef CONFIG_PM
-static int sh_dmae_suspend(struct device *dev)
-{
-	return 0;
-}
-
-static int sh_dmae_resume(struct device *dev)
-{
-	struct sh_dmae_device *shdev = dev_get_drvdata(dev);
-	int i, ret;
-
-	ret = sh_dmae_rst(shdev);
-	if (ret < 0)
-		dev_err(dev, "Failed to reset!\n");
-
-	for (i = 0; i < shdev->pdata->channel_num; i++) {
-		struct sh_dmae_chan *sh_chan = shdev->chan[i];
-		struct sh_dmae_slave *param = sh_chan->common.private;
-
-		if (!sh_chan->descs_allocated)
-			continue;
-
-		if (param) {
-			const struct sh_dmae_slave_config *cfg = param->config;
-			dmae_set_dmars(sh_chan, cfg->mid_rid);
-			dmae_set_chcr(sh_chan, cfg->chcr);
-		} else {
-			dmae_init(sh_chan);
-		}
-	}
-
-	return 0;
-}
-#else
-#define sh_dmae_suspend NULL
-#define sh_dmae_resume NULL
-#endif
-
-const struct dev_pm_ops sh_dmae_pm = {
-	.suspend		= sh_dmae_suspend,
-	.resume			= sh_dmae_resume,
-	.runtime_suspend	= sh_dmae_runtime_suspend,
-	.runtime_resume		= sh_dmae_runtime_resume,
-};
-
 static struct platform_driver sh_dmae_driver = {
-	.remove		= __exit_p(sh_dmae_remove),
-	.shutdown	= sh_dmae_shutdown,
-	.driver = {
+	.driver		= {
 		.owner	= THIS_MODULE,
-		.name	= "sh-dma-engine",
 		.pm	= &sh_dmae_pm,
+		.name	= SH_DMAE_DRV_NAME,
 	},
+	.remove		= __devexit_p(sh_dmae_remove),
+	.shutdown	= sh_dmae_shutdown,
 };
 
 static int __init sh_dmae_init(void)
@@ -1536,5 +963,6 @@ module_exit(sh_dmae_exit);
 
 MODULE_AUTHOR("Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>");
 MODULE_DESCRIPTION("Renesas SH DMA Engine driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:sh-dma-engine");
+MODULE_LICENSE("GPL v2");
+MODULE_VERSION("0.1.0");
+MODULE_ALIAS("platform:" SH_DMAE_DRV_NAME);
diff --git a/drivers/dma/shdma.h b/drivers/dma/shdma.h
index 2b55a27..8c7c5b7 100644
--- a/drivers/dma/shdma.h
+++ b/drivers/dma/shdma.h
@@ -13,43 +13,28 @@
 #ifndef __DMA_SHDMA_H
 #define __DMA_SHDMA_H
 
+#include <linux/dma-simple.h>
 #include <linux/dmaengine.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
 
-#define SH_DMAC_MAX_CHANNELS 20
-#define SH_DMA_SLAVE_NUMBER 256
-#define SH_DMA_TCR_MAX 0x00FFFFFF	/* 16MB */
+#define SH_DMAE_MAX_CHANNELS 20
+#define SH_DMAE_TCR_MAX 0x00FFFFFF	/* 16MB */
 
 struct device;
 
-enum dmae_pm_state {
-	DMAE_PM_ESTABLISHED,
-	DMAE_PM_BUSY,
-	DMAE_PM_PENDING,
-};
-
 struct sh_dmae_chan {
-	dma_cookie_t completed_cookie;	/* The maximum cookie completed */
-	spinlock_t desc_lock;		/* Descriptor operation lock */
-	struct list_head ld_queue;	/* Link descriptors queue */
-	struct list_head ld_free;	/* Link descriptors free */
-	struct dma_chan common;		/* DMA common channel */
-	struct device *dev;		/* Channel device */
-	struct tasklet_struct tasklet;	/* Tasklet */
-	int descs_allocated;		/* desc count */
+	struct dma_simple_chan simple_chan;
 	int xmit_shift;			/* log_2(bytes_per_xfer) */
 	int irq;
-	int id;				/* Raw id of this channel */
 	u32 __iomem *base;
 	char dev_id[16];		/* unique name per DMAC of channel */
 	int pm_error;
-	enum dmae_pm_state pm_state;
 };
 
 struct sh_dmae_device {
-	struct dma_device common;
-	struct sh_dmae_chan *chan[SH_DMAC_MAX_CHANNELS];
+	struct dma_simple_dev simple_dev;
+	struct sh_dmae_chan *chan[SH_DMAE_MAX_CHANNELS];
 	struct sh_dmae_pdata *pdata;
 	struct list_head node;
 	u32 __iomem *chan_reg;
@@ -58,10 +43,21 @@ struct sh_dmae_device {
 	u32 chcr_ie_bit;
 };
 
-#define to_sh_chan(chan) container_of(chan, struct sh_dmae_chan, common)
+struct sh_dmae_regs {
+	u32 sar; /* SAR / source address */
+	u32 dar; /* DAR / destination address */
+	u32 tcr; /* TCR / transfer count */
+};
+
+struct sh_dmae_desc {
+	struct sh_dmae_regs hw;
+	struct dma_simple_desc simple_desc;
+};
+
+#define to_sh_chan(chan) container_of(chan, struct sh_dmae_chan, simple_chan)
 #define to_sh_desc(lh) container_of(lh, struct sh_desc, node)
 #define tx_to_sh_desc(tx) container_of(tx, struct sh_desc, async_tx)
-#define to_sh_dev(chan) container_of(chan->common.device,\
-				     struct sh_dmae_device, common)
+#define to_sh_dev(chan) container_of(chan->simple_chan.dma_chan.device,\
+				     struct sh_dmae_device, simple_dev.dma_dev)
 
 #endif	/* __DMA_SHDMA_H */
diff --git a/include/linux/sh_dma.h b/include/linux/sh_dma.h
index 1b14cf4..4f22466 100644
--- a/include/linux/sh_dma.h
+++ b/include/linux/sh_dma.h
@@ -13,34 +13,14 @@
 #include <linux/dma-simple.h>
 #include <linux/dmaengine.h>
 #include <linux/list.h>
+#include <linux/types.h>
 
-/* Used by slave DMA clients to request DMA to/from a specific peripheral */
-struct sh_dmae_slave {
-	union {
-		unsigned int		slave_id; /* Set by the platform */
-		struct dma_simple_slave	simple_slave;
-	};
-	struct device			*dma_dev; /* Set by the platform */
-	const struct sh_dmae_slave_config	*config;  /* Set by the driver */
-};
-
-struct sh_dmae_regs {
-	u32 sar; /* SAR / source address */
-	u32 dar; /* DAR / destination address */
-	u32 tcr; /* TCR / transfer count */
-};
-
-struct sh_desc {
-	struct sh_dmae_regs hw;
-	struct list_head node;
-	struct dma_async_tx_descriptor async_tx;
-	enum dma_data_direction direction;
-	dma_cookie_t cookie;
-	size_t partial;
-	int chunks;
-	int mark;
-};
+struct device;
 
+/*
+ * Supplied by platforms to specify how a DMA channel has to be configured for
+ * a certain peripheral
+ */
 struct sh_dmae_slave_config {
 	unsigned int			slave_id;
 	dma_addr_t			addr;
@@ -48,6 +28,13 @@ struct sh_dmae_slave_config {
 	char				mid_rid;
 };
 
+/* Used by slave DMA clients to request DMA to/from a specific peripheral */
+struct sh_dmae_slave {
+	struct dma_simple_slave		simple_slave;	/* Set by the platform */
+	struct device			*dma_dev;	/* Set by the platform */
+	const struct sh_dmae_slave_config *config;	/* Set by the driver */
+};
+
 struct sh_dmae_channel {
 	unsigned int	offset;
 	unsigned int	dmars;
-- 
1.7.2.5


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/7] dma: add a simple dma library
  2012-01-18 10:22 ` [PATCH 1/7] dma: add a simple dma library Guennadi Liakhovetski
@ 2012-01-18 12:34   ` Paul Mundt
  2012-01-18 14:07     ` Guennadi Liakhovetski
  0 siblings, 1 reply; 10+ messages in thread
From: Paul Mundt @ 2012-01-18 12:34 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: linux-kernel, linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro

On Wed, Jan 18, 2012 at 11:22:18AM +0100, Guennadi Liakhovetski wrote:
> This patch adds a library of functions, helping to implement dmaengine
> drivers for hardware, unable to handle scatter-gather lists natively.
> The first version of this driver only supports memcpy and slave DMA
> operation.
> 
> Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>

While I like the direction this is going there are just a few minor
things that have popped up.

> +/*
> + * For slave DMA we assume, that there is a finate number of DMA slaves in the
> + * system, and that each such slave can only use a finate number of channels.
> + * We use slave channel IDs to make sure, that no such slave channel ID is
> + * allocated more than once.
> + */
Presumably you meant finite.

> +static unsigned int slave_num = 256;
> +module_param(slave_num, uint, 0444);
> +
> +/* A bitmask with slave_num bits */
> +static unsigned long *simple_slave_used;
> +
The shdma code explicitly used BITS_TO_LONGS here, which requires a
defined (or max) size, but this could all presumably be buried under
DECLARE_BITMAP().
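
Something like this, say (just a sketch - the fixed bound and its name
are made up here):

	#define SIMPLE_SLAVE_MAX	256

	static DECLARE_BITMAP(simple_slave_used, SIMPLE_SLAVE_MAX);

with allocation then being a plain test_and_set_bit(slave_id,
simple_slave_used).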

> +		BUG_ON(tx->cookie > 0 && tx->cookie != desc->cookie);
> +		BUG_ON(desc->mark != DESC_SUBMITTED &&
> +		       desc->mark != DESC_COMPLETED &&
> +		       desc->mark != DESC_WAITING);
> +
..
> +			BUG_ON(desc->chunks != 1);
..
> +				BUG_ON(tx->cookie < 0);

This seems to depend on BUG_ON() as its primary assert/sanity check path,
which is a bit excessive. Drivers get fed garbage all the time;
some effort to provide meaningful debug output or less fatal error
handling would be nice. As it is, one misbehaving user can pretty much
grind everything else to a halt for things that aren't really that big of
a deal.
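
E.g. downgrading them to something that keeps the system alive (just a
sketch, using only fields visible in the quoted code):

	if (WARN_ON_ONCE(tx->cookie > 0 && tx->cookie != desc->cookie))
		break;

would still flag the internal inconsistency without taking the machine
down with it.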

> +/*
> + * simple_chan_ld_cleanup - Clean up link descriptors
> + *
> + * Clean up the ld_queue of DMA channel.
> + */
> +static void simple_chan_ld_cleanup(struct dma_simple_chan *schan, bool all)
> +{
> +	while (__ld_cleanup(schan, all))
> +		;
> +}
> +
cpu_relax()
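
I.e. the loop would simply become (sketch):

	while (__ld_cleanup(schan, all))
		cpu_relax();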

> +/*
> + * simple_free_chan_resources - Free all resources of the channel.
> + */
> +static void simple_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct dma_simple_chan *schan = to_simple_chan(chan);
> +	struct dma_simple_dev *sdev = to_simple_dev(chan->device);
> +	const struct dma_simple_ops *ops = sdev->ops;
> +	LIST_HEAD(list);
> +
> +	/* Protect against ISR */
> +	spin_lock_irq(&schan->chan_lock);
> +	ops->halt_channel(schan);
> +	spin_unlock_irq(&schan->chan_lock);
> +
> +	/* Now no new interrupts will occur */
> +
> +	/* Prepared and not submitted descriptors can still be on the queue */
> +	if (!list_empty(&schan->ld_queue))
> +		simple_chan_ld_cleanup(schan, true);
> +
Why doesn't simple_chan_ld_cleanup() do a list_empty test instead?

> +int __devinit dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
> +			      int chan_num)
> +{
..
> +}
> +EXPORT_SYMBOL(dma_simple_init);
> +
> +void __devexit dma_simple_cleanup(struct dma_simple_dev *sdev)
> +{
..
> +}
> +EXPORT_SYMBOL(dma_simple_cleanup);
> +
__devinit/exit on exported symbols? You could try __init_or_module, but I
think it's more trouble than it is worth.

> +static int __init dma_simple_enter(void)
> +{
> +	simple_slave_used = kzalloc(DIV_ROUND_UP(slave_num, BITS_PER_BYTE),
> +				    GFP_KERNEL);
> +	if (!simple_slave_used)
> +		return -ENOMEM;

Pretty weird way to do BITS_TO_LONGS, and as noted above this can just be
handled by DECLARE_BITMAP() if the max case isn't too outlandish.

> +module_init(dma_simple_enter);
> +
> +static void __exit dma_simple_exit(void)
> +{
> +	kfree(simple_slave_used);
> +}
> +module_exit(dma_simple_exit);
> +
At which point these can probably just go away completely.

> +static inline void dma_simple_lock(struct dma_simple_chan *schan)
> +{
> +	spin_lock(&schan->chan_lock);
> +}
> +
> +static inline void dma_simple_unlock(struct dma_simple_chan *schan)
> +{
> +	spin_unlock(&schan->chan_lock);
> +}
> +
These deserve to die a violent death; it's ok for driver writers to
handle a spinlock on their own, or so one hopes.

> +void dma_simple_chan_remove(struct dma_simple_chan *schan);
> +int dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
> +		    int chan_num) __devinit;
> +void dma_simple_cleanup(struct dma_simple_dev *sdev) __devexit;
> +
Not really convinced about the section annotations here, either.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH 1/7] dma: add a simple dma library
  2012-01-18 12:34   ` Paul Mundt
@ 2012-01-18 14:07     ` Guennadi Liakhovetski
  0 siblings, 0 replies; 10+ messages in thread
From: Guennadi Liakhovetski @ 2012-01-18 14:07 UTC (permalink / raw)
  To: Paul Mundt
  Cc: linux-kernel, linux-sh, Vinod Koul, Magnus Damm, Shimoda, Yoshihiro

Hi Paul

Thanks for the review!

On Wed, 18 Jan 2012, Paul Mundt wrote:

> On Wed, Jan 18, 2012 at 11:22:18AM +0100, Guennadi Liakhovetski wrote:
> > This patch adds a library of functions, helping to implement dmaengine
> > drivers for hardware, unable to handle scatter-gather lists natively.
> > The first version of this driver only supports memcpy and slave DMA
> > operation.
> > 
> > Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
> 
> While I like the direction this is going there are just a few minor
> things that have popped up.
> 
> > +/*
> > + * For slave DMA we assume, that there is a finate number of DMA slaves in the
> > + * system, and that each such slave can only use a finate number of channels.
> > + * We use slave channel IDs to make sure, that no such slave channel ID is
> > + * allocated more than once.
> > + */
> Presumably you meant finite.

I think so too:)

> > +static unsigned int slave_num = 256;
> > +module_param(slave_num, uint, 0444);
> > +
> > +/* A bitmask with slave_num bits */
> > +static unsigned long *simple_slave_used;
> > +
> The shdma code explicitly used BITS_TO_LONGS here, which requires a
> defined (or max) size, but this could all presumably be buried under
> DECLARE_BITMAP().

I didn't want to put any fixed limit on the bitmap size. Do you really 
think we should use one?

> > +		BUG_ON(tx->cookie > 0 && tx->cookie != desc->cookie);
> > +		BUG_ON(desc->mark != DESC_SUBMITTED &&
> > +		       desc->mark != DESC_COMPLETED &&
> > +		       desc->mark != DESC_WAITING);
> > +
> ..
> > +			BUG_ON(desc->chunks != 1);
> ..
> > +				BUG_ON(tx->cookie < 0);
> 
> This seems to depend on BUG_ON() as its primary assert/sanity check path,
> which is a bit excessive. Drivers get fed garbage all the time;
> some effort to provide meaningful debug output or less fatal error
> handling would be nice. As it is, one misbehaving user can pretty much
> grind everything else to a halt for things that aren't really that big of
> a deal.

These have certainly been brought over from the shdma.c driver; their
purpose is to catch really bad _internal_ _algorithmic_ errors, not API
abuses. If any of these triggers, it means that the driver has hit an
internal bug. But ok, I'll revisit them and see whether all of them are
really reasonable.

> > +/*
> > + * simple_chan_ld_cleanup - Clean up link descriptors
> > + *
> > + * Clean up the ld_queue of DMA channel.
> > + */
> > +static void simple_chan_ld_cleanup(struct dma_simple_chan *schan, bool all)
> > +{
> > +	while (__ld_cleanup(schan, all))
> > +		;
> > +}
> > +
> cpu_relax()

Well, I thought there was enough relaxing going on in __ld_cleanup();
why do we need cpu_relax() here?

> > +/*
> > + * simple_free_chan_resources - Free all resources of the channel.
> > + */
> > +static void simple_free_chan_resources(struct dma_chan *chan)
> > +{
> > +	struct dma_simple_chan *schan = to_simple_chan(chan);
> > +	struct dma_simple_dev *sdev = to_simple_dev(chan->device);
> > +	const struct dma_simple_ops *ops = sdev->ops;
> > +	LIST_HEAD(list);
> > +
> > +	/* Protect against ISR */
> > +	spin_lock_irq(&schan->chan_lock);
> > +	ops->halt_channel(schan);
> > +	spin_unlock_irq(&schan->chan_lock);
> > +
> > +	/* Now no new interrupts will occur */
> > +
> > +	/* Prepared and not submitted descriptors can still be on the queue */
> > +	if (!list_empty(&schan->ld_queue))
> > +		simple_chan_ld_cleanup(schan, true);
> > +
> Why doesn't simple_chan_ld_cleanup() do a list_empty test instead?

Because simple_chan_ld_cleanup() removes entries from the list.

> > +int __devinit dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
> > +			      int chan_num)
> > +{
> ..
> > +}
> > +EXPORT_SYMBOL(dma_simple_init);
> > +
> > +void __devexit dma_simple_cleanup(struct dma_simple_dev *sdev)
> > +{
> ..
> > +}
> > +EXPORT_SYMBOL(dma_simple_cleanup);
> > +
> __devinit/exit on exported symbols? You could try __init_or_module, but I
> think it's more trouble than it is worth.

Well, these routines are supposed to be called from client driver .probe() 
and .remove() methods... But you're probably right, better to drop them.

> > +static int __init dma_simple_enter(void)
> > +{
> > +	simple_slave_used = kzalloc(DIV_ROUND_UP(slave_num, BITS_PER_BYTE),
> > +				    GFP_KERNEL);
> > +	if (!simple_slave_used)
> > +		return -ENOMEM;
> 
> Pretty weird way to do BITS_TO_LONGS, and as noted above this can just be
> handled by DECLARE_BITMAP() if the max case isn't too outlandish.

I don't think it's weird - for dynamic allocation you need bytes anyway, 
or you'll have to use BITS_TO_LONGS (or DECLARE_BITMAP()) and sizeof(long) 
somewhere. But the above is actually buggy - it only rounds up to the next 
byte, whereas it should round up to the next long. Will fix.
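
Probably just (untested):

	simple_slave_used = kzalloc(BITS_TO_LONGS(slave_num) * sizeof(long),
				    GFP_KERNEL);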

> > +module_init(dma_simple_enter);
> > +
> > +static void __exit dma_simple_exit(void)
> > +{
> > +	kfree(simple_slave_used);
> > +}
> > +module_exit(dma_simple_exit);
> > +
> At which point these can probably just go away completely.

You really think we should use a fixed-size bitmask?

> > +static inline void dma_simple_lock(struct dma_simple_chan *schan)
> > +{
> > +	spin_lock(&schan->chan_lock);
> > +}
> > +
> > +static inline void dma_simple_unlock(struct dma_simple_chan *schan)
> > +{
> > +	spin_unlock(&schan->chan_lock);
> > +}
> > +
> These deserve to die a violent death; it's ok for driver writers to
> handle a spinlock on their own, or so one hopes.

Well, it's the same spinlock as the one used internally by the
dma-simple driver... I'll see if I can come up with a better idea.
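
One option would be to drop the wrappers and let drivers take the
embedded lock directly (sketch, assuming chan_lock stays a plain
spinlock in struct dma_simple_chan):

	spin_lock(&sh_chan->simple_chan.chan_lock);
	/* ... handle the channel interrupt ... */
	spin_unlock(&sh_chan->simple_chan.chan_lock);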

> > +void dma_simple_chan_remove(struct dma_simple_chan *schan);
> > +int dma_simple_init(struct device *dev, struct dma_simple_dev *sdev,
> > +		    int chan_num) __devinit;
> > +void dma_simple_cleanup(struct dma_simple_dev *sdev) __devexit;
> > +
> Not really convinced about the section annotations here, either.

These are the same functions, as above.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2012-01-18 14:07 UTC | newest]

Thread overview: 10+ messages
2012-01-18 10:22 [PATCH 0/7] extract a simple dmaengine library from shdma.c Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 1/7] dma: add a simple dma library Guennadi Liakhovetski
2012-01-18 12:34   ` Paul Mundt
2012-01-18 14:07     ` Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 2/7] dma: shdma: prepare for simple DMA conversion Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 3/7] mmc: sh_mmcif: remove unneeded struct sh_mmcif_dma, prepare for simple DMA Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 4/7] mmc: sh_mobile_sdhi: prepare for conversion to " Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 5/7] serial: sh-sci: " Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 6/7] ASoC: SIU: " Guennadi Liakhovetski
2012-01-18 10:22 ` [PATCH 7/7] dma: shdma: convert to the simple DMA library Guennadi Liakhovetski
