linux-kernel.vger.kernel.org archive mirror
* [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
@ 2015-06-09  6:35 Kedareswara rao Appana
  2015-06-16 19:19 ` Nicolae Rosia
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Kedareswara rao Appana @ 2015-06-09  6:35 UTC (permalink / raw)
  To: vinod.koul, dan.j.williams, michal.simek, soren.brinkmann,
	appanad, anirudh, punnaia
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Srikanth Thokala

This is the driver for the AXI Direct Memory Access (AXI DMA)
core, which is a soft Xilinx IP core that provides high-
bandwidth direct memory access between memory and AXI4-Stream
type target peripherals.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
The device tree binding doc was already applied in the slave-dmaengine.git tree.

This patch is rebased on the commit
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

- Still need to address the review comment "not making efficient use of DMA's
coalesce capability"; will send a patch for that soon.

Changes in v7:
- Updated the license in the driver as suggested by Paul.
- Corrected the return value in the is_idle function.

Changes in v6:
- Fixed odd indentation in the Kconfig.
- Used GFP_NOWAIT instead of GFP_KERNEL during descriptor allocation.
- Calculated residue in the tx_status instead of complete_descriptor.
- Updated the copyright to 2015.
- Modified spin_lock handling: moved the locking into the appropriate functions
  (xilinx_dma_issue_pending instead of xilinx_dma_start_transfer).
- Updated device_control and the slave caps declaration as per the newer APIs.
Changes in v5:
- Moved the xilinx_dma.h header to include/linux/dma/xilinx_dma.h.
Changes in v4:
- Add direction field to DMA descriptor structure and removed from
  channel structure to avoid duplication.
- Check for DMA idle condition before changing the configuration.
- Residue is being calculated in complete_descriptor() and is reported
  to slave driver.
Changes in v3:
- Rebased on 3.16-rc7
Changes in v2:
- Simplified the logic to set SOP and APP words in prep_slave_sg().
- Corrected function description comments to match the return type.
- Addressed minor review comments from Andy.

 drivers/dma/Kconfig             |   13 +
 drivers/dma/xilinx/Makefile     |    1 +
 drivers/dma/xilinx/xilinx_dma.c | 1196 +++++++++++++++++++++++++++++++++++++++
 include/linux/dma/xilinx_dma.h  |   14 +
 4 files changed, 1224 insertions(+)
 create mode 100644 drivers/dma/xilinx/xilinx_dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index bda2cb0..cb4fa57 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -492,4 +492,17 @@ config QCOM_BAM_DMA
 	  Enable support for the QCOM BAM DMA controller.  This controller
 	  provides DMA capabilities for a variety of on-chip devices.
 
+config XILINX_DMA
+        tristate "Xilinx AXI DMA Engine"
+        depends on (ARCH_ZYNQ || MICROBLAZE)
+        select DMA_ENGINE
+        help
+          Enable support for Xilinx AXI DMA Soft IP.
+
+          This engine provides high-bandwidth direct memory access
+          between memory and AXI4-Stream type target peripherals.
+          It has two stream interfaces/channels, Memory Mapped to
+          Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
+          data transfers.
+
 endif
diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
index 3c4e9f2..6224a49 100644
--- a/drivers/dma/xilinx/Makefile
+++ b/drivers/dma/xilinx/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
+obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
new file mode 100644
index 0000000..53582a7
--- /dev/null
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -0,0 +1,1196 @@
+/*
+ * DMA driver for Xilinx DMA Engine
+ *
+ * Copyright (C) 2010 - 2015 Xilinx, Inc. All rights reserved.
+ *
+ * Based on the Freescale DMA driver.
+ *
+ * Description:
+ *  The AXI DMA is a soft IP that provides high-bandwidth Direct Memory
+ *  Access between memory and AXI4-Stream-type target peripherals. It can be
+ *  configured with one or two channels; when configured with two channels,
+ *  one transmits data from memory to a device and the other receives data
+ *  from a device.
+ *
+ * This is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/bitops.h>
+#include <linux/dma/xilinx_dma.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/slab.h>
+
+#include "../dmaengine.h"
+
+/* Register Offsets */
+#define XILINX_DMA_REG_CONTROL		0x00
+#define XILINX_DMA_REG_STATUS		0x04
+#define XILINX_DMA_REG_CURDESC		0x08
+#define XILINX_DMA_REG_TAILDESC		0x10
+#define XILINX_DMA_REG_SRCADDR		0x18
+#define XILINX_DMA_REG_DSTADDR		0x20
+#define XILINX_DMA_REG_BTT		0x28
+
+/* Channel/Descriptor Offsets */
+#define XILINX_DMA_MM2S_CTRL_OFFSET	0x00
+#define XILINX_DMA_S2MM_CTRL_OFFSET	0x30
+
+/* General register bits definitions */
+#define XILINX_DMA_CR_RUNSTOP_MASK	BIT(0)
+#define XILINX_DMA_CR_RESET_MASK	BIT(2)
+
+#define XILINX_DMA_CR_DELAY_SHIFT	24
+#define XILINX_DMA_CR_COALESCE_SHIFT	16
+
+#define XILINX_DMA_CR_DELAY_MAX		GENMASK(7, 0)
+#define XILINX_DMA_CR_COALESCE_MAX	GENMASK(7, 0)
+
+#define XILINX_DMA_SR_HALTED_MASK	BIT(0)
+#define XILINX_DMA_SR_IDLE_MASK		BIT(1)
+
+#define XILINX_DMA_XR_IRQ_IOC_MASK	BIT(12)
+#define XILINX_DMA_XR_IRQ_DELAY_MASK	BIT(13)
+#define XILINX_DMA_XR_IRQ_ERROR_MASK	BIT(14)
+#define XILINX_DMA_XR_IRQ_ALL_MASK	GENMASK(14, 12)
+
+/* BD definitions */
+#define XILINX_DMA_BD_STS_ALL_MASK	GENMASK(31, 28)
+#define XILINX_DMA_BD_SOP		BIT(27)
+#define XILINX_DMA_BD_EOP		BIT(26)
+
+/* Hw specific definitions */
+#define XILINX_DMA_MAX_CHANS_PER_DEVICE	0x2
+#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
+
+/* Delay loop counter to prevent hardware failure */
+#define XILINX_DMA_LOOP_COUNT		1000000
+
+/* Maximum number of Descriptors */
+#define XILINX_DMA_NUM_DESCS		64
+#define XILINX_DMA_NUM_APP_WORDS	5
+
+/**
+ * struct xilinx_dma_desc_hw - Hardware Descriptor
+ * @next_desc: Next Descriptor Pointer @0x00
+ * @pad1: Reserved @0x04
+ * @buf_addr: Buffer address @0x08
+ * @pad2: Reserved @0x0C
+ * @pad3: Reserved @0x10
+ * @pad4: Reserved @0x14
+ * @control: Control field @0x18
+ * @status: Status field @0x1C
+ * @app: APP Fields @0x20 - 0x30
+ */
+struct xilinx_dma_desc_hw {
+	u32 next_desc;
+	u32 pad1;
+	u32 buf_addr;
+	u32 pad2;
+	u32 pad3;
+	u32 pad4;
+	u32 control;
+	u32 status;
+	u32 app[XILINX_DMA_NUM_APP_WORDS];
+} __aligned(64);
+
+/**
+ * struct xilinx_dma_tx_segment - Descriptor segment
+ * @hw: Hardware descriptor
+ * @node: Node in the descriptor segments list
+ * @phys: Physical address of segment
+ */
+struct xilinx_dma_tx_segment {
+	struct xilinx_dma_desc_hw hw;
+	struct list_head node;
+	dma_addr_t phys;
+} __aligned(64);
+
+/**
+ * struct xilinx_dma_tx_descriptor - Per Transaction structure
+ * @async_tx: Async transaction descriptor
+ * @segments: TX segments list
+ * @node: Node in the channel descriptors list
+ * @direction: Transfer direction
+ */
+struct xilinx_dma_tx_descriptor {
+	struct dma_async_tx_descriptor async_tx;
+	struct list_head segments;
+	struct list_head node;
+	enum dma_transfer_direction direction;
+};
+
+/**
+ * struct xilinx_dma_chan - Driver specific DMA channel structure
+ * @xdev: Driver specific device structure
+ * @ctrl_offset: Control registers offset
+ * @lock: Descriptor operation lock
+ * @pending_list: Descriptors waiting
+ * @active_desc: Active descriptor
+ * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
+ * @common: DMA common channel
+ * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
+ * @dev: The dma device
+ * @irq: Channel IRQ
+ * @id: Channel ID
+ * @has_sg: Support scatter transfers
+ * @err: Channel has errors
+ * @idle: Channel status
+ * @tasklet: Cleanup work after irq
+ * @residue: Residue
+ */
+struct xilinx_dma_chan {
+	struct xilinx_dma_device *xdev;
+	u32 ctrl_offset;
+	spinlock_t lock;
+	struct list_head pending_list;
+	struct xilinx_dma_tx_descriptor *active_desc;
+	struct list_head done_list;
+	struct list_head free_seg_list;
+	struct dma_chan common;
+	struct xilinx_dma_tx_segment *seg_v;
+	dma_addr_t seg_p;
+	struct device *dev;
+	int irq;
+	int id;
+	bool has_sg;
+	int err;
+	bool idle;
+	struct tasklet_struct tasklet;
+	u32 residue;
+};
+
+/**
+ * struct xilinx_dma_device - DMA device structure
+ * @regs: I/O mapped base address
+ * @dev: Device Structure
+ * @common: DMA device structure
+ * @chan: Driver specific DMA channel
+ * @has_sg: Specifies whether Scatter-Gather is present or not
+ */
+struct xilinx_dma_device {
+	void __iomem *regs;
+	struct device *dev;
+	struct dma_device common;
+	struct xilinx_dma_chan *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
+	bool has_sg;
+};
+
+/* Macros */
+#define to_xilinx_chan(chan) \
+	container_of(chan, struct xilinx_dma_chan, common)
+#define to_dma_tx_descriptor(tx) \
+	container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
+
+/* IO accessors */
+static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg, u32 value)
+{
+	iowrite32(value, chan->xdev->regs + reg);
+}
+
+static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg)
+{
+	return ioread32(chan->xdev->regs + reg);
+}
+
+static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32 reg)
+{
+	return dma_read(chan, chan->ctrl_offset + reg);
+}
+
+static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
+				  u32 value)
+{
+	dma_write(chan, chan->ctrl_offset + reg, value);
+}
+
+static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32 reg, u32 clr)
+{
+	dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr);
+}
+
+static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32 reg, u32 set)
+{
+	dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set);
+}
+
+/* -----------------------------------------------------------------------------
+ * Descriptors and segments alloc and free
+ */
+
+/**
+ * xilinx_dma_alloc_tx_segment - Allocate transaction segment
+ * @chan: Driver specific dma channel
+ *
+ * Return: The allocated segment on success and NULL on failure.
+ */
+static struct xilinx_dma_tx_segment *
+xilinx_dma_alloc_tx_segment(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_segment *segment = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (!list_empty(&chan->free_seg_list)) {
+		segment = list_first_entry(&chan->free_seg_list,
+					   struct xilinx_dma_tx_segment,
+					   node);
+		list_del(&segment->node);
+	}
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return segment;
+}
+
+/**
+ * xilinx_dma_clean_hw_desc - Clean hardware descriptor
+ * @hw: HW descriptor to clean
+ */
+static void xilinx_dma_clean_hw_desc(struct xilinx_dma_desc_hw *hw)
+{
+	u32 next_desc = hw->next_desc;
+
+	memset(hw, 0, sizeof(struct xilinx_dma_desc_hw));
+
+	hw->next_desc = next_desc;
+}
+
+/**
+ * xilinx_dma_free_tx_segment - Free transaction segment
+ * @chan: Driver specific dma channel
+ * @segment: dma transaction segment
+ */
+static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
+				       struct xilinx_dma_tx_segment *segment)
+{
+	xilinx_dma_clean_hw_desc(&segment->hw);
+
+	list_add_tail(&segment->node, &chan->free_seg_list);
+}
+
+/**
+ * xilinx_dma_alloc_tx_descriptor - Allocate transaction descriptor
+ * @chan: Driver specific dma channel
+ *
+ * Return: The allocated descriptor on success and NULL on failure.
+ */
+static struct xilinx_dma_tx_descriptor *
+xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+
+	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+	if (!desc)
+		return NULL;
+
+	INIT_LIST_HEAD(&desc->segments);
+
+	return desc;
+}
+
+/**
+ * xilinx_dma_free_tx_descriptor - Free transaction descriptor
+ * @chan: Driver specific dma channel
+ * @desc: dma transaction descriptor
+ */
+static void
+xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
+			      struct xilinx_dma_tx_descriptor *desc)
+{
+	struct xilinx_dma_tx_segment *segment, *next;
+
+	if (!desc)
+		return;
+
+	list_for_each_entry_safe(segment, next, &desc->segments, node) {
+		list_del(&segment->node);
+		xilinx_dma_free_tx_segment(chan, segment);
+	}
+
+	kfree(desc);
+}
+
+/**
+ * xilinx_dma_alloc_chan_resources - Allocate channel resources
+ * @dchan: DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	int i;
+
+	/* Allocate the buffer descriptors. */
+	chan->seg_v = dma_zalloc_coherent(chan->dev,
+					  sizeof(*chan->seg_v) *
+					  XILINX_DMA_NUM_DESCS,
+					  &chan->seg_p, GFP_KERNEL);
+	if (!chan->seg_v) {
+		dev_err(chan->dev,
+			"unable to allocate channel %d descriptors\n",
+			chan->id);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+		chan->seg_v[i].hw.next_desc =
+				chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS);
+		chan->seg_v[i].phys =
+				chan->seg_p + sizeof(*chan->seg_v) * i;
+		list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
+	}
+
+	dma_cookie_init(dchan);
+	return 0;
+}
+
+/**
+ * xilinx_dma_free_desc_list - Free descriptors list
+ * @chan: Driver specific dma channel
+ * @list: List to parse and delete the descriptor
+ */
+static void xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
+				      struct list_head *list)
+{
+	struct xilinx_dma_tx_descriptor *desc, *next;
+
+	list_for_each_entry_safe(desc, next, list, node) {
+		list_del(&desc->node);
+		xilinx_dma_free_tx_descriptor(chan, desc);
+	}
+}
+
+/**
+ * xilinx_dma_free_descriptors - Free channel descriptors
+ * @chan: Driver specific dma channel
+ */
+static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	xilinx_dma_free_desc_list(chan, &chan->pending_list);
+	xilinx_dma_free_desc_list(chan, &chan->done_list);
+
+	xilinx_dma_free_tx_descriptor(chan, chan->active_desc);
+	chan->active_desc = NULL;
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_free_chan_resources - Free channel resources
+ * @dchan: DMA channel
+ */
+static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+
+	xilinx_dma_free_descriptors(chan);
+
+	dma_free_coherent(chan->dev,
+			  sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
+			  chan->seg_v, chan->seg_p);
+}
+
+/**
+ * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
+ * @chan: Driver specific dma channel
+ */
+static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc, *next;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	list_for_each_entry_safe(desc, next, &chan->done_list, node) {
+		dma_async_tx_callback callback;
+		void *callback_param;
+
+		/* Remove from the list of running transactions */
+		list_del(&desc->node);
+
+		/* Run the link descriptor callback function */
+		callback = desc->async_tx.callback;
+		callback_param = desc->async_tx.callback_param;
+		if (callback) {
+			spin_unlock_irqrestore(&chan->lock, flags);
+			callback(callback_param);
+			spin_lock_irqsave(&chan->lock, flags);
+		}
+
+		/* Run any dependencies, then free the descriptor */
+		dma_run_dependencies(&desc->async_tx);
+		xilinx_dma_free_tx_descriptor(chan, desc);
+	}
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_tx_status - Get dma transaction status
+ * @dchan: DMA channel
+ * @cookie: Transaction identifier
+ * @txstate: Transaction state
+ *
+ * Return: DMA transaction status
+ */
+static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
+					    dma_cookie_t cookie,
+					    struct dma_tx_state *txstate)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *segment;
+	struct xilinx_dma_desc_hw *hw;
+	enum dma_status ret;
+	unsigned long flags;
+	u32 residue;
+
+	ret = dma_cookie_status(dchan, cookie, txstate);
+	if (ret == DMA_COMPLETE || !txstate)
+		return ret;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (chan->has_sg) {
+		while (!list_empty(&desc->segments)) {
+			segment = list_first_entry(&desc->segments,
+					struct xilinx_dma_tx_segment, node);
+			hw = &segment->hw;
+			residue += (hw->control - hw->status) &
+				   XILINX_DMA_MAX_TRANS_LEN;
+		}
+	}
+
+	chan->residue = residue;
+	dma_set_residue(txstate, chan->residue);
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return ret;
+}
+
+/**
+ * xilinx_dma_is_running - Check if DMA channel is running
+ * @chan: Driver specific DMA channel
+ *
+ * Return: 'true' if running, 'false' if not.
+ */
+static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
+{
+	return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		 XILINX_DMA_SR_HALTED_MASK) &&
+		(dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+		 XILINX_DMA_CR_RUNSTOP_MASK);
+}
+
+/**
+ * xilinx_dma_is_idle - Check if DMA channel is idle
+ * @chan: Driver specific DMA channel
+ *
+ * Return: 'true' if idle, 'false' if not.
+ */
+static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
+{
+	return !!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		XILINX_DMA_SR_IDLE_MASK);
+}
+
+/**
+ * xilinx_dma_halt - Halt DMA channel
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+
+	dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RUNSTOP_MASK);
+
+	/* Wait for the hardware to halt */
+	do {
+		if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+			XILINX_DMA_SR_HALTED_MASK)
+			break;
+	} while (loop--);
+
+	if (!loop) {
+		dev_err(chan->dev, "Cannot stop channel %p: %x\n",
+			chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		chan->err = true;
+	}
+}
+
+/**
+ * xilinx_dma_start - Start DMA channel
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_start(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RUNSTOP_MASK);
+
+	/* Wait for the hardware to start */
+	do {
+		if (!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		      XILINX_DMA_SR_HALTED_MASK))
+			break;
+	} while (loop--);
+
+	if (!loop) {
+		dev_err(chan->dev, "Cannot start channel %p: %x\n",
+			 chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		chan->err = true;
+	}
+}
+
+/**
+ * xilinx_dma_start_transfer - Starts DMA transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *head, *tail = NULL;
+
+	if (chan->err)
+		return;
+
+	if (list_empty(&chan->pending_list))
+		return;
+
+	if (!chan->idle)
+		return;
+
+	desc = list_first_entry(&chan->pending_list,
+				struct xilinx_dma_tx_descriptor, node);
+
+	if (chan->has_sg && xilinx_dma_is_running(chan) &&
+	    !xilinx_dma_is_idle(chan)) {
+		tail = list_entry(desc->segments.prev,
+				  struct xilinx_dma_tx_segment, node);
+		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
+		goto out_free_desc;
+	}
+
+	if (chan->has_sg) {
+		head = list_first_entry(&desc->segments,
+					struct xilinx_dma_tx_segment, node);
+		tail = list_entry(desc->segments.prev,
+				  struct xilinx_dma_tx_segment, node);
+		dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
+	}
+
+	/* Enable interrupts */
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	xilinx_dma_start(chan);
+	if (chan->err)
+		return;
+
+	/* Start the transfer */
+	if (chan->has_sg) {
+		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
+	} else {
+		struct xilinx_dma_tx_segment *segment;
+		struct xilinx_dma_desc_hw *hw;
+
+		segment = list_first_entry(&desc->segments,
+					   struct xilinx_dma_tx_segment, node);
+		hw = &segment->hw;
+
+		if (desc->direction == DMA_MEM_TO_DEV)
+			dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
+				       hw->buf_addr);
+		else
+			dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
+				       hw->buf_addr);
+
+		/* Start the transfer */
+		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+			       hw->control & XILINX_DMA_MAX_TRANS_LEN);
+	}
+
+out_free_desc:
+	list_del(&desc->node);
+	chan->idle = false;
+	chan->active_desc = desc;
+}
+
+/**
+ * xilinx_dma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel
+ */
+static void xilinx_dma_issue_pending(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	xilinx_dma_start_transfer(chan);
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
+ * @chan : xilinx DMA channel
+ */
+static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+
+	desc = chan->active_desc;
+	if (!desc) {
+		dev_dbg(chan->dev, "no running descriptors\n");
+		return;
+	}
+
+	dma_cookie_complete(&desc->async_tx);
+	list_add_tail(&desc->node, &chan->done_list);
+
+	chan->active_desc = NULL;
+
+}
+
+/**
+ * xilinx_dma_chan_reset - Reset DMA channel
+ * @chan: Driver specific DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+	u32 tmp;
+
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RESET_MASK);
+
+	tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+	      XILINX_DMA_CR_RESET_MASK;
+
+	/* Wait for the hardware to finish reset */
+	do {
+		tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+		      XILINX_DMA_CR_RESET_MASK;
+	} while (loop-- && tmp);
+
+	if (!loop) {
+		dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
+			dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
+			dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		return -EBUSY;
+	}
+
+	chan->err = false;
+
+	return 0;
+}
+
+/**
+ * xilinx_dma_irq_handler - DMA Interrupt handler
+ * @irq: IRQ number
+ * @data: Pointer to the Xilinx DMA channel structure
+ *
+ * Return: IRQ_HANDLED/IRQ_NONE
+ */
+static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
+{
+	struct xilinx_dma_chan *chan = data;
+	u32 status;
+
+	/* Read the status and ack the interrupts. */
+	status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
+	if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
+		return IRQ_NONE;
+
+	dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
+		       status & XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
+		dev_err(chan->dev,
+			"Channel %p has errors %x, cdr %x tdr %x\n",
+			chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
+			dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
+			dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
+		chan->err = true;
+	}
+
+	/*
+	 * Device takes too long to do the transfer when user requires
+	 * responsiveness
+	 */
+	if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
+		dev_dbg(chan->dev, "Inter-packet latency too long\n");
+
+	if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
+		spin_lock(&chan->lock);
+		xilinx_dma_complete_descriptor(chan);
+		chan->idle = true;
+		xilinx_dma_start_transfer(chan);
+		spin_unlock(&chan->lock);
+	}
+
+	tasklet_schedule(&chan->tasklet);
+	return IRQ_HANDLED;
+}
+
+/**
+ * xilinx_dma_do_tasklet - Tasklet handler for descriptor cleanup
+ * @data: Pointer to the Xilinx dma channel structure
+ */
+static void xilinx_dma_do_tasklet(unsigned long data)
+{
+	struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
+
+	xilinx_dma_chan_desc_cleanup(chan);
+}
+
+/**
+ * xilinx_dma_tx_submit - Submit DMA transaction
+ * @tx: Async transaction descriptor
+ *
+ * Return: cookie value on success and failure value on error
+ */
+static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
+	struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
+	dma_cookie_t cookie;
+	unsigned long flags;
+	int err;
+
+	if (chan->err) {
+		/*
+		 * If reset fails, need to hard reset the system.
+		 * Channel is no longer functional
+		 */
+		err = xilinx_dma_chan_reset(chan);
+		if (err < 0)
+			return err;
+	}
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	cookie = dma_cookie_assign(tx);
+
+	/* Append the transaction to the pending transactions queue. */
+	list_add_tail(&desc->node, &chan->pending_list);
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return cookie;
+}
+
+/**
+ * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
+ * @dchan: DMA channel
+ * @sgl: scatterlist to transfer to/from
+ * @sg_len: number of entries in @sgl
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ * @context: APP words of the descriptor
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
+	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *segment;
+	struct xilinx_dma_desc_hw *hw;
+	u32 *app_w = (u32 *)context;
+	struct scatterlist *sg;
+	size_t copy, sg_used;
+	int i;
+
+	if (!is_slave_direction(direction))
+		return NULL;
+
+	/* Allocate a transaction descriptor. */
+	desc = xilinx_dma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	desc->direction = direction;
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+
+	/* Build transactions using information in the scatter gather list */
+	for_each_sg(sgl, sg, sg_len, i) {
+		sg_used = 0;
+
+		/* Loop until the entire scatterlist entry is used */
+		while (sg_used < sg_dma_len(sg)) {
+
+			/* Get a free segment */
+			segment = xilinx_dma_alloc_tx_segment(chan);
+			if (!segment)
+				goto error;
+
+			/*
+			 * Calculate the maximum number of bytes to transfer,
+			 * making sure it is less than the hw limit
+			 */
+			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
+				     XILINX_DMA_MAX_TRANS_LEN);
+			hw = &segment->hw;
+
+			/* Fill in the descriptor */
+			hw->buf_addr = sg_dma_address(sg) + sg_used;
+
+			hw->control = copy;
+
+			if (direction == DMA_MEM_TO_DEV) {
+				if (app_w)
+					memcpy(hw->app, app_w, sizeof(u32) *
+					       XILINX_DMA_NUM_APP_WORDS);
+
+				/*
+				 * For the first DMA_MEM_TO_DEV transfer,
+				 * set SOP
+				 */
+				if (!i)
+					hw->control |= XILINX_DMA_BD_SOP;
+			}
+
+			sg_used += copy;
+
+			/*
+			 * Insert the segment into the descriptor segments
+			 * list.
+			 */
+			list_add_tail(&segment->node, &desc->segments);
+		}
+	}
+
+	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
+	if (direction == DMA_MEM_TO_DEV) {
+		segment = list_last_entry(&desc->segments,
+					  struct xilinx_dma_tx_segment,
+					  node);
+		segment->hw.control |= XILINX_DMA_BD_EOP;
+	}
+
+	return &desc->async_tx;
+
+error:
+	xilinx_dma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
+/**
+ * xilinx_dma_terminate_all - Halt the channel and free descriptors
+ * @dchan: DMA Channel pointer
+ *
+ * Return: '0' always
+ */
+static int xilinx_dma_terminate_all(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+
+	/* Halt the DMA engine */
+	xilinx_dma_halt(chan);
+
+	/* Remove and free all of the descriptors in the lists */
+	xilinx_dma_free_descriptors(chan);
+
+	return 0;
+}
+
+/**
+ * xilinx_dma_channel_set_config - Configure DMA channel
+ * @dchan: DMA channel
+ * @cfg: DMA device configuration pointer
+ * Return: '0' on success and failure value on error
+ */
+int xilinx_dma_channel_set_config(struct dma_chan *dchan,
+				  struct xilinx_dma_config *cfg)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
+
+	if (!xilinx_dma_is_idle(chan))
+		return -EBUSY;
+
+	if (cfg->reset)
+		return xilinx_dma_chan_reset(chan);
+
+	if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
+		reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
+
+	if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
+		reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
+
+	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
+
+	return 0;
+}
+EXPORT_SYMBOL(xilinx_dma_channel_set_config);
+
+/**
+ * xilinx_dma_chan_remove - Per Channel remove function
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
+{
+	/* Disable interrupts */
+	dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL, XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	if (chan->irq > 0)
+		free_irq(chan->irq, chan);
+
+	tasklet_kill(&chan->tasklet);
+
+	list_del(&chan->common.device_node);
+}
+
+/**
+ * xilinx_dma_chan_probe - Per Channel Probing
+ * It gets channel features from the device tree entry and
+ * initializes special channel handling routines.
+ *
+ * @xdev: Driver specific device structure
+ * @node: Device node
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+				 struct device_node *node)
+{
+	struct xilinx_dma_chan *chan;
+	int err;
+	bool has_dre;
+	u32 value, width = 0;
+
+	/* alloc channel */
+	chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+
+	chan->dev = xdev->dev;
+	chan->xdev = xdev;
+	chan->has_sg = xdev->has_sg;
+
+	has_dre = of_property_read_bool(node, "xlnx,include-dre");
+
+	err = of_property_read_u32(node, "xlnx,datawidth", &value);
+	if (err) {
+		dev_err(xdev->dev, "unable to read datawidth property");
+		return err;
+	}
+
+	width = value >> 3; /* Convert bits to bytes */
+
+	/* If data width is greater than 8 bytes, DRE is not in hw */
+	if (width > 8)
+		has_dre = false;
+
+	if (!has_dre)
+		xdev->common.copy_align = fls(width - 1);
+
+	if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
+		chan->id = 0;
+		chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
+	} else if (of_device_is_compatible(node, "xlnx,axi-dma-s2mm-channel")) {
+		chan->id = 1;
+		chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
+	} else {
+		dev_err(xdev->dev, "Invalid channel compatible node\n");
+		return -EINVAL;
+	}
+
+	xdev->chan[chan->id] = chan;
+
+	/* Initialize the channel */
+	err = xilinx_dma_chan_reset(chan);
+	if (err) {
+		dev_err(xdev->dev, "Reset channel failed\n");
+		return err;
+	}
+
+	spin_lock_init(&chan->lock);
+	INIT_LIST_HEAD(&chan->pending_list);
+	INIT_LIST_HEAD(&chan->done_list);
+	INIT_LIST_HEAD(&chan->free_seg_list);
+
+	chan->common.device = &xdev->common;
+
+	/* find the IRQ line, if it exists in the device tree */
+	chan->irq = irq_of_parse_and_map(node, 0);
+	err = request_irq(chan->irq, xilinx_dma_irq_handler,
+			  IRQF_SHARED,
+			  "xilinx-dma-controller", chan);
+	if (err) {
+		dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
+		return err;
+	}
+
+	/* Initialize the tasklet */
+	tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
+		     (unsigned long)chan);
+
+	/* Add the channel to DMA device channel list */
+	list_add_tail(&chan->common.device_node, &xdev->common.channels);
+
+	chan->idle = true;
+
+	return 0;
+}
+
+/**
+ * of_dma_xilinx_xlate - Translation function
+ * @dma_spec: Pointer to DMA specifier as found in the device tree
+ * @ofdma: Pointer to DMA controller data
+ *
+ * Return: DMA channel pointer on success and NULL on error
+ */
+static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
+					    struct of_dma *ofdma)
+{
+	struct xilinx_dma_device *xdev = ofdma->of_dma_data;
+	int chan_id = dma_spec->args[0];
+
+	if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE)
+		return NULL;
+
+	return dma_get_slave_channel(&xdev->chan[chan_id]->common);
+}
+
+/**
+ * xilinx_dma_probe - Driver probe function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_probe(struct platform_device *pdev)
+{
+	struct xilinx_dma_device *xdev;
+	struct device_node *child, *node;
+	struct resource *res;
+	int i, ret;
+
+	xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
+	if (!xdev)
+		return -ENOMEM;
+
+	xdev->dev = &(pdev->dev);
+	INIT_LIST_HEAD(&xdev->common.channels);
+
+	node = pdev->dev.of_node;
+
+	/* Map the registers */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	xdev->regs = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(xdev->regs))
+		return PTR_ERR(xdev->regs);
+
+	/* Check if SG is enabled */
+	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
+
+	/* Axi DMA only do slave transfers */
+	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
+	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
+	xdev->common.device_terminate_all = xilinx_dma_terminate_all;
+	xdev->common.device_issue_pending = xilinx_dma_issue_pending;
+	xdev->common.device_alloc_chan_resources =
+		xilinx_dma_alloc_chan_resources;
+	xdev->common.device_free_chan_resources =
+		xilinx_dma_free_chan_resources;
+	xdev->common.device_tx_status = xilinx_dma_tx_status;
+	xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	xdev->common.dev = &pdev->dev;
+
+	platform_set_drvdata(pdev, xdev);
+
+	for_each_child_of_node(node, child) {
+		ret = xilinx_dma_chan_probe(xdev, child);
+		if (ret) {
+			dev_err(&pdev->dev, "Probing channels failed\n");
+			goto free_chan_resources;
+		}
+	}
+
+	dma_async_device_register(&xdev->common);
+
+	ret = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
+	if (ret) {
+		dev_err(&pdev->dev, "Unable to register DMA to DT\n");
+		dma_async_device_unregister(&xdev->common);
+		goto free_chan_resources;
+	}
+
+	dev_info(&pdev->dev, "Xilinx AXI DMA Engine driver Probed!!\n");
+
+	return 0;
+
+free_chan_resources:
+	for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
+		if (xdev->chan[i])
+			xilinx_dma_chan_remove(xdev->chan[i]);
+
+	return ret;
+}
+
+/**
+ * xilinx_dma_remove - Driver remove function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: Always '0'
+ */
+static int xilinx_dma_remove(struct platform_device *pdev)
+{
+	struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
+	int i;
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&xdev->common);
+
+	for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
+		if (xdev->chan[i])
+			xilinx_dma_chan_remove(xdev->chan[i]);
+
+	return 0;
+}
+
+static const struct of_device_id xilinx_dma_of_match[] = {
+	{ .compatible = "xlnx,axi-dma-1.00.a",},
+	{}
+};
+MODULE_DEVICE_TABLE(of, xilinx_dma_of_match);
+
+static struct platform_driver xilinx_dma_driver = {
+	.driver = {
+		.name = "xilinx-dma",
+		.of_match_table = xilinx_dma_of_match,
+	},
+	.probe = xilinx_dma_probe,
+	.remove = xilinx_dma_remove,
+};
+
+module_platform_driver(xilinx_dma_driver);
+
+MODULE_AUTHOR("Xilinx, Inc.");
+MODULE_DESCRIPTION("Xilinx DMA driver");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/dma/xilinx_dma.h b/include/linux/dma/xilinx_dma.h
index 34b98f2..de38599 100644
--- a/include/linux/dma/xilinx_dma.h
+++ b/include/linux/dma/xilinx_dma.h
@@ -41,7 +41,21 @@ struct xilinx_vdma_config {
 	int ext_fsync;
 };
 
+/**
+ * struct xilinx_dma_config - DMA Configuration structure
+ * @coalesc: Interrupt coalescing threshold
+ * @delay: Delay counter
+ * @reset: Reset Channel
+ */
+struct xilinx_dma_config {
+	int coalesc;
+	int delay;
+	int reset;
+};
+
 int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
 					struct xilinx_vdma_config *cfg);
+int xilinx_dma_channel_set_config(struct dma_chan *dchan,
+					struct xilinx_dma_config *cfg);
 
 #endif
-- 
2.1.2



* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-09  6:35 [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
@ 2015-06-16 19:19 ` Nicolae Rosia
  2015-06-18 10:16   ` Appana Durga Kedareswara Rao
  2015-06-19 16:49 ` Jeremy Trimble
  2015-06-22 10:49 ` Vinod Koul
  2 siblings, 1 reply; 12+ messages in thread
From: Nicolae Rosia @ 2015-06-16 19:19 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: vinod.koul, dan.j.williams, Michal Simek, Sören Brinkmann,
	appanad, Anirudha Sarangi, punnaia, dmaengine, linux-kernel,
	linux-arm-kernel, Srikanth Thokala

Hi,

How do I match a channel from this driver? The old one, from the Xilinx git
tree, set the private field so I could use a filter function.
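
For what it's worth, since the patch registers of_dma_xilinx_xlate, I assume
the intended way is a "dmas"/"dma-names" pair in the client's device tree node
plus dma_request_slave_channel(); a rough sketch of what I would expect (the
channel name is an assumption):

	struct dma_chan *tx_chan;

	/* requests the MM2S channel named "tx_channel" in the client's DT node */
	tx_chan = dma_request_slave_channel(&pdev->dev, "tx_channel");
	if (!tx_chan)
		return -ENODEV;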

On Tue, Jun 9, 2015 at 9:35 AM, Kedareswara rao Appana
<appana.durga.rao@xilinx.com> wrote:
> This is the driver for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
>
> Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
> The device tree binding doc was already applied in the slave-dmaengine.git tree.
>
> This patch is rebased on the commit
> Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
>
> - Still need to address the review comment "not making efficient use of DMA's
> coalesce capability"; will send a patch for that soon.
This can be done afterwards. I can send a patch if you want.
[...]
> +/**
> + * xilinx_dma_tx_status - Get dma transaction status
> + * @dchan: DMA channel
> + * @cookie: Transaction identifier
> + * @txstate: Transaction state
> + *
> + * Return: DMA transaction status
> + */
> +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> +                                           dma_cookie_t cookie,
> +                                           struct dma_tx_state *txstate)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *segment;
> +       struct xilinx_dma_desc_hw *hw;
> +       enum dma_status ret;
> +       unsigned long flags;
> +       u32 residue;
> +
> +       ret = dma_cookie_status(dchan, cookie, txstate);
> +       if (ret == DMA_COMPLETE || !txstate)
> +               return ret;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +       if (chan->has_sg) {
> +               while (!list_empty(&desc->segments)) {
desc is not initialized
> +                       segment = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +                       hw = &segment->hw;
> +                       residue += (hw->control - hw->status) &
> +                                  XILINX_DMA_MAX_TRANS_LEN;
residue is not initialized
> +               }
> +       }
> +
> +       chan->residue = residue;
residue is not initialized
> +       dma_set_residue(txstate, chan->residue);
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return ret;
> +}
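
Right, a sketch of how that walk could look so it neither consumes the list
nor uses uninitialised locals (assuming the residue should come from the
still-active descriptor; this is not the posted code):

	residue = 0;

	spin_lock_irqsave(&chan->lock, flags);
	desc = chan->active_desc;
	if (chan->has_sg && desc) {
		list_for_each_entry(segment, &desc->segments, node) {
			hw = &segment->hw;
			residue += (hw->control - hw->status) &
				   XILINX_DMA_MAX_TRANS_LEN;
		}
	}
	chan->residue = residue;
	dma_set_residue(txstate, chan->residue);
	spin_unlock_irqrestore(&chan->lock, flags);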

> +/**
> + * xilinx_dma_start_transfer - Starts DMA transfer
> + * @chan: Driver specific channel struct pointer
> + */
> +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> +
> +       if (chan->err)
> +               return;
> +
> +       if (list_empty(&chan->pending_list))
> +               return;
> +
> +       if (!chan->idle)
> +               return;
> +
> +       desc = list_first_entry(&chan->pending_list,
> +                               struct xilinx_dma_tx_descriptor, node);
> +
> +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> +           !xilinx_dma_is_idle(chan)) {
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +               goto out_free_desc;
> +       }
> +
> +       if (chan->has_sg) {
> +               head = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
Even though we have SG, we are using it on a per-descriptor basis.
If the user queues 10 descriptors, each with one segment, this will
work like a DMA without SG.
Why don't we set the tail to the last segment of the last descriptor queued?
(A sketch of what I mean follows the quoted function below.)
> +       }
> +
> +       /* Enable interrupts */
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       xilinx_dma_start(chan);
> +       if (chan->err)
> +               return;
> +
> +       /* Start the transfer */
> +       if (chan->has_sg) {
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +       } else {
> +               struct xilinx_dma_tx_segment *segment;
> +               struct xilinx_dma_desc_hw *hw;
> +
> +               segment = list_first_entry(&desc->segments,
> +                                          struct xilinx_dma_tx_segment, node);
> +               hw = &segment->hw;
> +
> +               if (desc->direction == DMA_MEM_TO_DEV)
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> +                                      hw->buf_addr);
> +               else
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> +                                      hw->buf_addr);
> +
> +               /* Start the transfer */
> +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> +       }
> +
> +out_free_desc:
> +       list_del(&desc->node);
> +       chan->idle = false;
> +       chan->active_desc = desc;
> +}
> +
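
Roughly what I have in mind is this (sketch only, assuming the pending
descriptors are already chained through next_desc):

	desc = list_last_entry(&chan->pending_list,
			       struct xilinx_dma_tx_descriptor, node);
	tail = list_last_entry(&desc->segments,
			       struct xilinx_dma_tx_segment, node);
	dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
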
We should cache the XILINX_DMA_REG_CONTROL register value instead of reading
it again and again, especially on hot paths, since the AXI4 bus runs at low
speeds.
I have addressed all these comments at [0], based on the Xilinx DMA driver,
but it is not ready for mainlining (it still has commented-out/dead code,
missing comments, etc.) and I don't have time to work on it anymore. You can
cherry-pick parts of it and adapt them here.
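
The kind of shadow-register caching I mean looks roughly like this (the
cr_cache field and the helper names are assumptions, not something in this
patch):

	/*
	 * Keep a shadow copy of the control register in the channel structure
	 * so hot paths avoid extra reads over the slow AXI4-Lite interface.
	 */
	static inline void dma_cr_set(struct xilinx_dma_chan *chan, u32 set)
	{
		chan->cr_cache |= set;		/* assumed new field */
		dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, chan->cr_cache);
	}

	static inline void dma_cr_clr(struct xilinx_dma_chan *chan, u32 clr)
	{
		chan->cr_cache &= ~clr;
		dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, chan->cr_cache);
	}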

Anyway, I'll try to use this driver for my use case (thousands of
requests, very high load and high IRQ activity) and report back, but I
don't expect much since I had these problems with the original
implementation.

[0] http://pastebin.com/0fQAJ0qm

> +/**
> + * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
> + * @chan: Driver specific dma channel
> + */
> +static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc, *next;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       list_for_each_entry_safe(desc, next, &chan->done_list, node) {
> +               dma_async_tx_callback callback;
> +               void *callback_param;
> +
> +               /* Remove from the list of running transactions */
> +               list_del(&desc->node);
> +
> +               /* Run the link descriptor callback function */
> +               callback = desc->async_tx.callback;
> +               callback_param = desc->async_tx.callback_param;
> +               if (callback) {
> +                       spin_unlock_irqrestore(&chan->lock, flags);
> +                       callback(callback_param);
> +                       spin_lock_irqsave(&chan->lock, flags);
> +               }
> +
> +               /* Run any dependencies, then free the descriptor */
> +               dma_run_dependencies(&desc->async_tx);
> +               xilinx_dma_free_tx_descriptor(chan, desc);
> +       }
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_tx_status - Get dma transaction status
> + * @dchan: DMA channel
> + * @cookie: Transaction identifier
> + * @txstate: Transaction state
> + *
> + * Return: DMA transaction status
> + */
> +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> +                                           dma_cookie_t cookie,
> +                                           struct dma_tx_state *txstate)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *segment;
> +       struct xilinx_dma_desc_hw *hw;
> +       enum dma_status ret;
> +       unsigned long flags;
> +       u32 residue;
> +
> +       ret = dma_cookie_status(dchan, cookie, txstate);
> +       if (ret == DMA_COMPLETE || !txstate)
> +               return ret;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +       if (chan->has_sg) {
> +               while (!list_empty(&desc->segments)) {
> +                       segment = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +                       hw = &segment->hw;
> +                       residue += (hw->control - hw->status) &
> +                                  XILINX_DMA_MAX_TRANS_LEN;
> +               }
> +       }
> +
> +       chan->residue = residue;
> +       dma_set_residue(txstate, chan->residue);
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return ret;
> +}
> +
> +/**
> + * xilinx_dma_is_running - Check if DMA channel is running
> + * @chan: Driver specific DMA channel
> + *
> + * Return: 'true' if running, 'false' if not.
> + */
> +static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
> +{
> +       return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                XILINX_DMA_SR_HALTED_MASK) &&
> +               (dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +                XILINX_DMA_CR_RUNSTOP_MASK);
> +}
> +
> +/**
> + * xilinx_dma_is_idle - Check if DMA channel is idle
> + * @chan: Driver specific DMA channel
> + *
> + * Return: 'true' if idle, 'false' if not.
> + */
> +static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
> +{
> +       return !!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +               XILINX_DMA_SR_IDLE_MASK);
> +}
> +
> +/**
> + * xilinx_dma_halt - Halt DMA channel
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +
> +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RUNSTOP_MASK);
> +
> +       /* Wait for the hardware to halt */
> +       do {
> +               if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                       XILINX_DMA_SR_HALTED_MASK)
> +                       break;
> +       } while (loop--);
> +
> +       if (!loop) {
> +               dev_err(chan->dev, "Cannot stop channel %p: %x\n",
> +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               chan->err = true;
> +       }
> +}
> +
> +/**
> + * xilinx_dma_start - Start DMA channel
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_start(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RUNSTOP_MASK);
> +
> +       /* Wait for the hardware to start */
> +       do {
> +               if (!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                     XILINX_DMA_SR_HALTED_MASK))
> +                       break;
> +       } while (loop--);
> +
> +       if (!loop) {
> +               dev_err(chan->dev, "Cannot start channel %p: %x\n",
> +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               chan->err = true;
> +       }
> +}
> +
> +/**
> + * xilinx_dma_start_transfer - Starts DMA transfer
> + * @chan: Driver specific channel struct pointer
> + */
> +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> +
> +       if (chan->err)
> +               return;
> +
> +       if (list_empty(&chan->pending_list))
> +               return;
> +
> +       if (!chan->idle)
> +               return;
> +
> +       desc = list_first_entry(&chan->pending_list,
> +                               struct xilinx_dma_tx_descriptor, node);
> +
> +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> +           !xilinx_dma_is_idle(chan)) {
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +               goto out_free_desc;
> +       }
> +
> +       if (chan->has_sg) {
> +               head = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> +       }
> +
> +       /* Enable interrupts */
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       xilinx_dma_start(chan);
> +       if (chan->err)
> +               return;
> +
> +       /* Start the transfer */
> +       if (chan->has_sg) {
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +       } else {
> +               struct xilinx_dma_tx_segment *segment;
> +               struct xilinx_dma_desc_hw *hw;
> +
> +               segment = list_first_entry(&desc->segments,
> +                                          struct xilinx_dma_tx_segment, node);
> +               hw = &segment->hw;
> +
> +               if (desc->direction == DMA_MEM_TO_DEV)
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> +                                      hw->buf_addr);
> +               else
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> +                                      hw->buf_addr);
> +
> +               /* Start the transfer */
> +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> +       }
> +
> +out_free_desc:
> +       list_del(&desc->node);
> +       chan->idle = false;
> +       chan->active_desc = desc;
> +}
> +
> +/**
> + * xilinx_dma_issue_pending - Issue pending transactions
> + * @dchan: DMA channel
> + */
> +static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +       xilinx_dma_start_transfer(chan);
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
> + * @chan: Xilinx DMA channel
> + */
> +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +
> +       desc = chan->active_desc;
> +       if (!desc) {
> +               dev_dbg(chan->dev, "no running descriptors\n");
> +               return;
> +       }
> +
> +       dma_cookie_complete(&desc->async_tx);
> +       list_add_tail(&desc->node, &chan->done_list);
> +
> +       chan->active_desc = NULL;
> +
> +}
> +
> +/**
> + * xilinx_dma_chan_reset - Reset DMA channel
> + * @chan: Driver specific DMA channel
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +       u32 tmp;
> +
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RESET_MASK);
> +
> +       tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +             XILINX_DMA_CR_RESET_MASK;
> +
> +       /* Wait for the hardware to finish reset */
> +       do {
> +               tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +                     XILINX_DMA_CR_RESET_MASK;
> +       } while (loop-- && tmp);
> +
> +       if (!loop) {
> +               dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               return -EBUSY;
> +       }
> +
> +       chan->err = false;
> +
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_irq_handler - DMA Interrupt handler
> + * @irq: IRQ number
> + * @data: Pointer to the Xilinx DMA channel structure
> + *
> + * Return: IRQ_HANDLED/IRQ_NONE
> + */
> +static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
> +{
> +       struct xilinx_dma_chan *chan = data;
> +       u32 status;
> +
> +       /* Read the status and ack the interrupts. */
> +       status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
> +       if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
> +               return IRQ_NONE;
> +
> +       dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
> +                      status & XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
> +               dev_err(chan->dev,
> +                       "Channel %p has errors %x, cdr %x tdr %x\n",
> +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
> +               chan->err = true;
> +       }
> +
> +       /*
> +        * Device takes too long to do the transfer when user requires
> +        * responsiveness
> +        */
> +       if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
> +               dev_dbg(chan->dev, "Inter-packet latency too long\n");
> +
> +       if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
> +               spin_lock(&chan->lock);
> +               xilinx_dma_complete_descriptor(chan);
> +               chan->idle = true;
> +               xilinx_dma_start_transfer(chan);
> +               spin_unlock(&chan->lock);
> +       }
> +
> +       tasklet_schedule(&chan->tasklet);
> +       return IRQ_HANDLED;
> +}
> +
> +/**
> + * xilinx_dma_do_tasklet - Tasklet handler that cleans up completed descriptors
> + * @data: Pointer to the Xilinx dma channel structure
> + */
> +static void xilinx_dma_do_tasklet(unsigned long data)
> +{
> +       struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
> +
> +       xilinx_dma_chan_desc_cleanup(chan);
> +}
> +
> +/**
> + * xilinx_dma_tx_submit - Submit DMA transaction
> + * @tx: Async transaction descriptor
> + *
> + * Return: cookie value on success and failure value on error
> + */
> +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
> +{
> +       struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +       int err;
> +
> +       if (chan->err) {
> +               /*
> +                * If reset fails, need to hard reset the system.
> +                * Channel is no longer functional
> +                */
> +               err = xilinx_dma_chan_reset(chan);
> +               if (err < 0)
> +                       return err;
> +       }
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       cookie = dma_cookie_assign(tx);
> +
> +       /* Append the transaction to the pending transactions queue. */
> +       list_add_tail(&desc->node, &chan->pending_list);
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return cookie;
> +}
> +
> +/**
> + * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
> + * @dchan: DMA channel
> + * @sgl: scatterlist to transfer to/from
> + * @sg_len: number of entries in @sgl
> + * @direction: DMA direction
> + * @flags: transfer ack flags
> + * @context: APP words of the descriptor
> + *
> + * Return: Async transaction descriptor on success and NULL on failure
> + */
> +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> +       struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> +       enum dma_transfer_direction direction, unsigned long flags,
> +       void *context)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *segment;
> +       struct xilinx_dma_desc_hw *hw;
> +       u32 *app_w = (u32 *)context;
> +       struct scatterlist *sg;
> +       size_t copy, sg_used;
> +       int i;
> +
> +       if (!is_slave_direction(direction))
> +               return NULL;
> +
> +       /* Allocate a transaction descriptor. */
> +       desc = xilinx_dma_alloc_tx_descriptor(chan);
> +       if (!desc)
> +               return NULL;
> +
> +       desc->direction = direction;
> +       dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +       desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> +
> +       /* Build transactions using information in the scatter gather list */
> +       for_each_sg(sgl, sg, sg_len, i) {
> +               sg_used = 0;
> +
> +               /* Loop until the entire scatterlist entry is used */
> +               while (sg_used < sg_dma_len(sg)) {
> +
> +                       /* Get a free segment */
> +                       segment = xilinx_dma_alloc_tx_segment(chan);
> +                       if (!segment)
> +                               goto error;
> +
> +                       /*
> +                        * Calculate the maximum number of bytes to transfer,
> +                        * making sure it is less than the hw limit
> +                        */
> +                       copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> +                                    XILINX_DMA_MAX_TRANS_LEN);
> +                       hw = &segment->hw;
> +
> +                       /* Fill in the descriptor */
> +                       hw->buf_addr = sg_dma_address(sg) + sg_used;
> +
> +                       hw->control = copy;
> +
> +                       if (direction == DMA_MEM_TO_DEV) {
> +                               if (app_w)
> +                                       memcpy(hw->app, app_w, sizeof(u32) *
> +                                              XILINX_DMA_NUM_APP_WORDS);
> +
> +                               /*
> +                                * For the first DMA_MEM_TO_DEV transfer,
> +                                * set SOP
> +                                */
> +                               if (!i)
> +                                       hw->control |= XILINX_DMA_BD_SOP;
> +                       }
> +
> +                       sg_used += copy;
> +
> +                       /*
> +                        * Insert the segment into the descriptor segments
> +                        * list.
> +                        */
> +                       list_add_tail(&segment->node, &desc->segments);
> +               }
> +       }
> +
> +       /* For the last DMA_MEM_TO_DEV transfer, set EOP */
> +       if (direction == DMA_MEM_TO_DEV) {
> +               segment = list_last_entry(&desc->segments,
> +                                         struct xilinx_dma_tx_segment,
> +                                         node);
> +               segment->hw.control |= XILINX_DMA_BD_EOP;
> +       }
> +
> +       return &desc->async_tx;
> +
> +error:
> +       xilinx_dma_free_tx_descriptor(chan, desc);
> +       return NULL;
> +}
> +
> +/**
> + * xilinx_dma_terminate_all - Halt the channel and free descriptors
> + * @dchan: DMA Channel pointer
> + *
> + * Return: '0' always
> + */
> +static int xilinx_dma_terminate_all(struct dma_chan *dchan)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +
> +       /* Halt the DMA engine */
> +       xilinx_dma_halt(chan);
> +
> +       /* Remove and free all of the descriptors in the lists */
> +       xilinx_dma_free_descriptors(chan);
> +
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_channel_set_config - Configure DMA channel
> + * @dchan: DMA channel
> + * @cfg: DMA device configuration pointer
> + * Return: '0' on success and failure value on error
> + */
> +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> +                                 struct xilinx_dma_config *cfg)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
> +
> +       if (!xilinx_dma_is_idle(chan))
> +               return -EBUSY;
> +
> +       if (cfg->reset)
> +               return xilinx_dma_chan_reset(chan);
> +
> +       if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
> +               reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
> +
> +       if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
> +               reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
> +
> +       dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(xilinx_dma_channel_set_config);
> +
> +/**
> + * xilinx_dma_chan_remove - Per Channel remove function
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
> +{
> +       /* Disable interrupts */
> +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL, XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       if (chan->irq > 0)
> +               free_irq(chan->irq, chan);
> +
> +       tasklet_kill(&chan->tasklet);
> +
> +       list_del(&chan->common.device_node);
> +}
> +
> +/**
> + * xilinx_dma_chan_probe - Per Channel Probing
> + * It gets channel features from the device tree entry and
> + * initializes special channel handling routines
> + *
> + * @xdev: Driver specific device structure
> + * @node: Device node
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
> +                                struct device_node *node)
> +{
> +       struct xilinx_dma_chan *chan;
> +       int err;
> +       bool has_dre;
> +       u32 value, width = 0;
> +
> +       /* alloc channel */
> +       chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
> +       if (!chan)
> +               return -ENOMEM;
> +
> +       chan->dev = xdev->dev;
> +       chan->xdev = xdev;
> +       chan->has_sg = xdev->has_sg;
> +
> +       has_dre = of_property_read_bool(node, "xlnx,include-dre");
> +
> +       err = of_property_read_u32(node, "xlnx,datawidth", &value);
> +       if (err) {
> +               dev_err(xdev->dev, "unable to read datawidth property");
> +               return err;
> +       }
> +
> +       width = value >> 3; /* Convert bits to bytes */
> +
> +       /* If data width is greater than 8 bytes, DRE is not in hw */
> +       if (width > 8)
> +               has_dre = false;
> +
> +       if (!has_dre)
> +               xdev->common.copy_align = fls(width - 1);
> +
> +       if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
> +               chan->id = 0;
> +               chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
> +       } else if (of_device_is_compatible(node, "xlnx,axi-dma-s2mm-channel")) {
> +               chan->id = 1;
> +               chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
> +       } else {
> +               dev_err(xdev->dev, "Invalid channel compatible node\n");
> +               return -EINVAL;
> +       }
> +
> +       xdev->chan[chan->id] = chan;
> +
> +       /* Initialize the channel */
> +       err = xilinx_dma_chan_reset(chan);
> +       if (err) {
> +               dev_err(xdev->dev, "Reset channel failed\n");
> +               return err;
> +       }
> +
> +       spin_lock_init(&chan->lock);
> +       INIT_LIST_HEAD(&chan->pending_list);
> +       INIT_LIST_HEAD(&chan->done_list);
> +       INIT_LIST_HEAD(&chan->free_seg_list);
> +
> +       chan->common.device = &xdev->common;
> +
> +       /* find the IRQ line, if it exists in the device tree */
> +       chan->irq = irq_of_parse_and_map(node, 0);
> +       err = request_irq(chan->irq, xilinx_dma_irq_handler,
> +                         IRQF_SHARED,
> +                         "xilinx-dma-controller", chan);
> +       if (err) {
> +               dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
> +               return err;
> +       }
> +
> +       /* Initialize the tasklet */
> +       tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
> +                    (unsigned long)chan);
> +
> +       /* Add the channel to DMA device channel list */
> +       list_add_tail(&chan->common.device_node, &xdev->common.channels);
> +
> +       chan->idle = true;
> +
> +       return 0;
> +}
> +
> +/**
> + * of_dma_xilinx_xlate - Translation function
> + * @dma_spec: Pointer to DMA specifier as found in the device tree
> + * @ofdma: Pointer to DMA controller data
> + *
> + * Return: DMA channel pointer on success and NULL on error
> + */
> +static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
> +                                           struct of_dma *ofdma)
> +{
> +       struct xilinx_dma_device *xdev = ofdma->of_dma_data;
> +       int chan_id = dma_spec->args[0];
> +
> +       if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE)
> +               return NULL;
> +
> +       return dma_get_slave_channel(&xdev->chan[chan_id]->common);
> +}
> +
> +/**
> + * xilinx_dma_probe - Driver probe function
> + * @pdev: Pointer to the platform_device structure
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_probe(struct platform_device *pdev)
> +{
> +       struct xilinx_dma_device *xdev;
> +       struct device_node *child, *node;
> +       struct resource *res;
> +       int i, ret;
> +
> +       xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
> +       if (!xdev)
> +               return -ENOMEM;
> +
> +       xdev->dev = &(pdev->dev);
> +       INIT_LIST_HEAD(&xdev->common.channels);
> +
> +       node = pdev->dev.of_node;
> +
> +       /* Map the registers */
> +       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +       xdev->regs = devm_ioremap_resource(&pdev->dev, res);
> +       if (IS_ERR(xdev->regs))
> +               return PTR_ERR(xdev->regs);
> +
> +       /* Check if SG is enabled */
> +       xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> +
> +       /* Axi DMA only do slave transfers */
> +       dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
> +       dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
> +       xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> +       xdev->common.device_terminate_all = xilinx_dma_terminate_all;
> +       xdev->common.device_issue_pending = xilinx_dma_issue_pending;
> +       xdev->common.device_alloc_chan_resources =
> +               xilinx_dma_alloc_chan_resources;
> +       xdev->common.device_free_chan_resources =
> +               xilinx_dma_free_chan_resources;
> +       xdev->common.device_tx_status = xilinx_dma_tx_status;
> +       xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +       xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +       xdev->common.dev = &pdev->dev;
> +
> +       platform_set_drvdata(pdev, xdev);
> +
> +       for_each_child_of_node(node, child) {
> +               ret = xilinx_dma_chan_probe(xdev, child);
> +               if (ret) {
> +                       dev_err(&pdev->dev, "Probing channels failed\n");
> +                       goto free_chan_resources;
> +               }
> +       }
> +
> +       dma_async_device_register(&xdev->common);
> +
> +       ret = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
> +       if (ret) {
> +               dev_err(&pdev->dev, "Unable to register DMA to DT\n");
> +               dma_async_device_unregister(&xdev->common);
> +               goto free_chan_resources;
> +       }
> +
> +       dev_info(&pdev->dev, "Xilinx AXI DMA Engine driver Probed!!\n");
> +
> +       return 0;
> +
> +free_chan_resources:
> +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> +               if (xdev->chan[i])
> +                       xilinx_dma_chan_remove(xdev->chan[i]);
> +
> +       return ret;
> +}
> +
> +/**
> + * xilinx_dma_remove - Driver remove function
> + * @pdev: Pointer to the platform_device structure
> + *
> + * Return: Always '0'
> + */
> +static int xilinx_dma_remove(struct platform_device *pdev)
> +{
> +       struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> +       int i;
> +
> +       of_dma_controller_free(pdev->dev.of_node);
> +       dma_async_device_unregister(&xdev->common);
> +
> +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> +               if (xdev->chan[i])
> +                       xilinx_dma_chan_remove(xdev->chan[i]);
> +
> +       return 0;
> +}
> +
> +static const struct of_device_id xilinx_dma_of_match[] = {
> +       { .compatible = "xlnx,axi-dma-1.00.a",},
> +       {}
> +};
> +MODULE_DEVICE_TABLE(of, xilinx_dma_of_match);
> +
> +static struct platform_driver xilinx_dma_driver = {
> +       .driver = {
> +               .name = "xilinx-dma",
> +               .of_match_table = xilinx_dma_of_match,
> +       },
> +       .probe = xilinx_dma_probe,
> +       .remove = xilinx_dma_remove,
> +};
> +
> +module_platform_driver(xilinx_dma_driver);
> +
> +MODULE_AUTHOR("Xilinx, Inc.");
> +MODULE_DESCRIPTION("Xilinx DMA driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/linux/dma/xilinx_dma.h b/include/linux/dma/xilinx_dma.h
> index 34b98f2..de38599 100644
> --- a/include/linux/dma/xilinx_dma.h
> +++ b/include/linux/dma/xilinx_dma.h
> @@ -41,7 +41,21 @@ struct xilinx_vdma_config {
>         int ext_fsync;
>  };
>
> +/**
> + * struct xilinx_dma_config - DMA Configuration structure
> + * @coalesc: Interrupt coalescing threshold
> + * @delay: Delay counter
> + * @reset: Reset Channel
> + */
> +struct xilinx_dma_config {
> +       int coalesc;
> +       int delay;
> +       int reset;
> +};
> +
>  int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
>                                         struct xilinx_vdma_config *cfg);
> +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> +                                       struct xilinx_dma_config *cfg);
>
>  #endif
> --
> 2.1.2
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-16 19:19 ` Nicolae Rosia
@ 2015-06-18 10:16   ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 12+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-06-18 10:16 UTC (permalink / raw)
  To: Nicolae Rosia
  Cc: vinod.koul, dan.j.williams, Michal Simek, Soren Brinkmann,
	Anirudha Sarangi, Punnaiah Choudary Kalluri, dmaengine,
	linux-kernel, linux-arm-kernel, Srikanth Thokala

[-- Attachment #1: Type: text/plain, Size: 56856 bytes --]

Hi,

Thanks for the comments.

> -----Original Message-----
> From: Nicolae Rosia [mailto:nicolae.rosia@gmail.com]
> Sent: Wednesday, June 17, 2015 12:50 AM
> To: Appana Durga Kedareswara Rao
> Cc: vinod.koul@intel.com; dan.j.williams@intel.com; Michal Simek; Soren
> Brinkmann; Appana Durga Kedareswara Rao; Anirudha Sarangi; Punnaiah
> Choudary Kalluri; dmaengine@vger.kernel.org; linux-
> kernel@vger.kernel.org; linux-arm-kernel@lists.infradead.org; Srikanth
> Thokala
> Subject: Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> Hi,
>
> How do I match the driver? The old one, from Xilinx git, was setting the
> private field so I could use a filter function.

There is no need to use a filter function anymore. You can use the attached patches to test with the test client driver.
In the dts file you need to add a child node for the test client as well.
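
For reference, a minimal sketch of the client side with the of_dma xlate in
place (the node label and the "tx" name below are only examples, they are not
taken from the actual test client):

/* client DT node carries e.g. dmas = <&axi_dma_0 0>; dma-names = "tx"; */
#include <linux/dmaengine.h>
#include <linux/platform_device.h>

static int client_probe(struct platform_device *pdev)
{
	struct dma_chan *tx_chan;

	/* "tx" must match an entry in the client node's dma-names */
	tx_chan = dma_request_slave_channel(&pdev->dev, "tx");
	if (!tx_chan)
		return -EPROBE_DEFER;

	/* ... prepare and submit transfers on tx_chan ... */

	dma_release_channel(tx_chan);
	return 0;
}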

>
> On Tue, Jun 9, 2015 at 9:35 AM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> > The deivce tree doc got applied in the slave-dmaengine.git.
> >
> > This patch is rebased on the commit
> > Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> >
> > - Need's to work on the comment "not making efficient use of DMA's
> > coalesce capability" will send the patch for that soon.
> This can be done afterwards. I can send a patch if you want.
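
For now a client can already raise the interrupt threshold by hand through the
exported xilinx_dma_channel_set_config(); a rough sketch, with made-up values:

#include <linux/dma/xilinx_dma.h>

	struct xilinx_dma_config cfg = {
		.coalesc = 8,	/* raise an interrupt once per 8 completed BDs */
		.delay = 0,	/* no inter-packet delay timeout */
		.reset = 0,	/* do not reset the channel */
	};
	int err;

	/* chan is a channel obtained from dma_request_slave_channel();
	 * the helper returns -EBUSY if the channel is not idle */
	err = xilinx_dma_channel_set_config(chan, &cfg);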
> [...]
> > +/**
> > + * xilinx_dma_tx_status - Get dma transaction status
> > + * @dchan: DMA channel
> > + * @cookie: Transaction identifier
> > + * @txstate: Transaction state
> > + *
> > + * Return: DMA transaction status
> > + */
> > +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> > +                                           dma_cookie_t cookie,
> > +                                           struct dma_tx_state
> > +*txstate) {
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *segment;
> > +       struct xilinx_dma_desc_hw *hw;
> > +       enum dma_status ret;
> > +       unsigned long flags;
> > +       u32 residue;
> > +
> > +       ret = dma_cookie_status(dchan, cookie, txstate);
> > +       if (ret == DMA_COMPLETE || !txstate)
> > +               return ret;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       if (chan->has_sg) {
> > +               while (!list_empty(&desc->segments)) {
> desc is not initialized
Ok will fix

> > +                       segment = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +                       hw = &segment->hw;
> > +                       residue += (hw->control - hw->status) &
> > +                                  XILINX_DMA_MAX_TRANS_LEN;
> residue is not initialized

Ok will fix

> > +               }
> > +       }
> > +
> > +       chan->residue = residue;
> residue is not initialized

Ok will fix
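
Roughly along these lines for the body of tx_status (only a sketch of the
direction: zero-init the residue and walk the segments of the channel's active
descriptor under the lock, using the variables already declared there):

	spin_lock_irqsave(&chan->lock, flags);

	desc = chan->active_desc;
	residue = 0;
	if (chan->has_sg && desc) {
		list_for_each_entry(segment, &desc->segments, node) {
			hw = &segment->hw;
			residue += (hw->control - hw->status) &
				   XILINX_DMA_MAX_TRANS_LEN;
		}
	}

	chan->residue = residue;
	dma_set_residue(txstate, chan->residue);
	spin_unlock_irqrestore(&chan->lock, flags);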

> > +       dma_set_residue(txstate, chan->residue);
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       return ret;
> > +}
>
> > +/**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer  */ static void
> > +xilinx_dma_start_transfer(struct xilinx_dma_chan *chan) {
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> > +
> > +       if (chan->err)
> > +               return;
> > +
> > +       if (list_empty(&chan->pending_list))
> > +               return;
> > +
> > +       if (!chan->idle)
> > +               return;
> > +
> > +       desc = list_first_entry(&chan->pending_list,
> > +                               struct xilinx_dma_tx_descriptor,
> > + node);
> > +
> > +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> > +           !xilinx_dma_is_idle(chan)) {
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +               goto out_free_desc;
> > +       }
> > +
> > +       if (chan->has_sg) {
> > +               head = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC,
> > + head->phys);
> Even though we have SG, we are using it on a descriptor basis.
> If the user queues 10 descriptors, each with one segment, this will work like a
> DMA without SG.
> Why don't we set the tail to the last segment of the last descriptor queued?

Ok will work on it.
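
Something like the below is what I understand you mean (rough idea only; it
assumes the pending descriptors' BDs are already chained via next_desc and
that the single active_desc grows into a proper active list):

	struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
	struct xilinx_dma_tx_segment *head_segment, *tail_segment;

	head_desc = list_first_entry(&chan->pending_list,
				     struct xilinx_dma_tx_descriptor, node);
	tail_desc = list_last_entry(&chan->pending_list,
				    struct xilinx_dma_tx_descriptor, node);
	head_segment = list_first_entry(&head_desc->segments,
					struct xilinx_dma_tx_segment, node);
	tail_segment = list_last_entry(&tail_desc->segments,
				       struct xilinx_dma_tx_segment, node);

	/* CURDESC points at the first BD of the first pending descriptor */
	dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head_segment->phys);

	xilinx_dma_start(chan);
	if (chan->err)
		return;

	/*
	 * TAILDESC points at the last BD of the *last* pending descriptor,
	 * so the engine keeps fetching BDs across descriptor boundaries.
	 */
	dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail_segment->phys);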

> > +       }
> > +
> > +       /* Enable interrupts */
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       xilinx_dma_start(chan);
> > +       if (chan->err)
> > +               return;
> > +
> > +       /* Start the transfer */
> > +       if (chan->has_sg) {
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +       } else {
> > +               struct xilinx_dma_tx_segment *segment;
> > +               struct xilinx_dma_desc_hw *hw;
> > +
> > +               segment = list_first_entry(&desc->segments,
> > +                                          struct xilinx_dma_tx_segment, node);
> > +               hw = &segment->hw;
> > +
> > +               if (desc->direction == DMA_MEM_TO_DEV)
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> > +                                      hw->buf_addr);
> > +               else
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> > +                                      hw->buf_addr);
> > +
> > +               /* Start the transfer */
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> > +       }
> > +
> > +out_free_desc:
> > +       list_del(&desc->node);
> > +       chan->idle = false;
> > +       chan->active_desc = desc;
> > +}
> > +
> We should cache the XILINX_DMA_REG_CONTROL register value instead of reading
> it again and again, especially on hot paths, since the AXI4 bus is running at
> low speeds.
> I have addressed all these comments at [0], based on the Xilinx DMA driver,
> but it is not ready for mainlining (it still has commented-out/dead code and
> no comments, etc.) and I don't have time to work on it anymore.
> You can cherry pick parts from it and adapt them here.
>
> Anyway, I'll try to use this driver in my use case (thousands of requests, very
> high load and high IRQ activity) and report back, but I don't expect much
> since I had these problems with the original implementation.
>
> [0] http://pastebin.com/0fQAJ0qm

Ok will look into it and will improve the SG handling in the next version of the patch.
Thanks for the pointers.
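
For the control register caching, something along these lines is what I would
try (the ctrl_cache field and the helper names are hypothetical, they are not
in this patch):

/* shadow copy of XILINX_DMA_REG_CONTROL kept in struct xilinx_dma_chan */
static inline void dma_cr_set(struct xilinx_dma_chan *chan, u32 set)
{
	chan->ctrl_cache |= set;
	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, chan->ctrl_cache);
}

static inline void dma_cr_clr(struct xilinx_dma_chan *chan, u32 clr)
{
	chan->ctrl_cache &= ~clr;
	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, chan->ctrl_cache);
}

/*
 * Re-sync the cache wherever the hardware can change CR behind our back,
 * e.g. right after xilinx_dma_chan_reset():
 *	chan->ctrl_cache = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
 */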

Thanks,
Kedar.

>
> On Tue, Jun 9, 2015 at 9:35 AM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> > The deivce tree doc got applied in the slave-dmaengine.git.
> >
> > This patch is rebased on the commit
> > Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> >
> > - Need's to work on the comment "not making efficient use of DMA's
> > coalesce capability" will send the patch for that soon.
> >
> > Changes in v7:
> > - updated license in the driver as suggested by Paul.
> > - Corrected return value in is_idle funtion.
> >
> > Changes in v6:
> > - Fixed Odd indention in the Kconfig.
> > - used GFP_NOWAIT instead of GFP_KERNEL during the desc allocation
> > - Calculated residue in the tx_status instead of complete_descriptor.
> > - Update copy right to 2015.
> > - Modified spin_lock handling moved the spin_lock to the appropriate
> > functions (instead of xilinx_dma_start_transfer doing it
> xilinx_dma_issue_pending api).
> > - device_control and declare slave caps updated as per newer APi's.
> > Changes in v5:
> > - Modified the xilinx_dma.h header file location to the
> >   include/linux/dma/xilinx_dma.h
> > Changes in v4:
> > - Add direction field to DMA descriptor structure and removed from
> >   channel structure to avoid duplication.
> > - Check for DMA idle condition before changing the configuration.
> > - Residue is being calculated in complete_descriptor() and is reported
> >   to slave driver.
> > Changes in v3:
> > - Rebased on 3.16-rc7
> > Changes in v2:
> > - Simplified the logic to set SOP and APP words in prep_slave_sg().
> > - Corrected function description comments to match the return type.
> > - Fixed some minor comments as suggested by Andy.
> >
> >  drivers/dma/Kconfig             |   13 +
> >  drivers/dma/xilinx/Makefile     |    1 +
> >  drivers/dma/xilinx/xilinx_dma.c | 1196
> +++++++++++++++++++++++++++++++++++++++
> >  include/linux/dma/xilinx_dma.h  |   14 +
> >  4 files changed, 1224 insertions(+)
> >  create mode 100644 drivers/dma/xilinx/xilinx_dma.c
> >
> > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index
> > bda2cb0..cb4fa57 100644
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -492,4 +492,17 @@ config QCOM_BAM_DMA
> >           Enable support for the QCOM BAM DMA controller.  This controller
> >           provides DMA capabilities for a variety of on-chip devices.
> >
> > +config XILINX_DMA
> > +        tristate "Xilinx AXI DMA Engine"
> > +        depends on (ARCH_ZYNQ || MICROBLAZE)
> > +        select DMA_ENGINE
> > +        help
> > +          Enable support for Xilinx AXI DMA Soft IP.
> > +
> > +          This engine provides high-bandwidth direct memory access
> > +          between memory and AXI4-Stream type target peripherals.
> > +          It has two stream interfaces/channels, Memory Mapped to
> > +          Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> > +          data transfers.
> > +
> >  endif
> > diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
> > index 3c4e9f2..6224a49 100644
> > --- a/drivers/dma/xilinx/Makefile
> > +++ b/drivers/dma/xilinx/Makefile
> > @@ -1 +1,2 @@
> >  obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
> > +obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c
> > b/drivers/dma/xilinx/xilinx_dma.c new file mode 100644 index
> > 0000000..53582a7
> > --- /dev/null
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -0,0 +1,1196 @@
> > +/*
> > + * DMA driver for Xilinx DMA Engine
> > + *
> > + * Copyright (C) 2010 - 2015 Xilinx, Inc. All rights reserved.
> > + *
> > + * Based on the Freescale DMA driver.
> > + *
> > + * Description:
> > + *  The AXI DMA, is a soft IP, which provides high-bandwidth Direct
> > +Memory
> > + *  Access between memory and AXI4-Stream-type target peripherals. It
> > +can be
> > + *  configured to have one channel or two channels and if configured
> > +as two
> > + *  channels, one is to transmit data from memory to a device and
> > +another is
> > + *  to receive from a device.
> > + *
> > + * This is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + */
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/dma/xilinx_dma.h>
> > +#include <linux/init.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/module.h>
> > +#include <linux/of_address.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/of_irq.h>
> > +#include <linux/of_platform.h>
> > +#include <linux/slab.h>
> > +
> > +#include "../dmaengine.h"
> > +
> > +/* Register Offsets */
> > +#define XILINX_DMA_REG_CONTROL         0x00
> > +#define XILINX_DMA_REG_STATUS          0x04
> > +#define XILINX_DMA_REG_CURDESC         0x08
> > +#define XILINX_DMA_REG_TAILDESC                0x10
> > +#define XILINX_DMA_REG_SRCADDR         0x18
> > +#define XILINX_DMA_REG_DSTADDR         0x20
> > +#define XILINX_DMA_REG_BTT             0x28
> > +
> > +/* Channel/Descriptor Offsets */
> > +#define XILINX_DMA_MM2S_CTRL_OFFSET    0x00
> > +#define XILINX_DMA_S2MM_CTRL_OFFSET    0x30
> > +
> > +/* General register bits definitions */
> > +#define XILINX_DMA_CR_RUNSTOP_MASK     BIT(0)
> > +#define XILINX_DMA_CR_RESET_MASK       BIT(2)
> > +
> > +#define XILINX_DMA_CR_DELAY_SHIFT      24
> > +#define XILINX_DMA_CR_COALESCE_SHIFT   16
> > +
> > +#define XILINX_DMA_CR_DELAY_MAX                GENMASK(7, 0)
> > +#define XILINX_DMA_CR_COALESCE_MAX     GENMASK(7, 0)
> > +
> > +#define XILINX_DMA_SR_HALTED_MASK      BIT(0)
> > +#define XILINX_DMA_SR_IDLE_MASK                BIT(1)
> > +
> > +#define XILINX_DMA_XR_IRQ_IOC_MASK     BIT(12)
> > +#define XILINX_DMA_XR_IRQ_DELAY_MASK   BIT(13)
> > +#define XILINX_DMA_XR_IRQ_ERROR_MASK   BIT(14)
> > +#define XILINX_DMA_XR_IRQ_ALL_MASK     GENMASK(14, 12)
> > +
> > +/* BD definitions */
> > +#define XILINX_DMA_BD_STS_ALL_MASK     GENMASK(31, 28)
> > +#define XILINX_DMA_BD_SOP              BIT(27)
> > +#define XILINX_DMA_BD_EOP              BIT(26)
> > +
> > +/* Hw specific definitions */
> > +#define XILINX_DMA_MAX_CHANS_PER_DEVICE        0x2
> > +#define XILINX_DMA_MAX_TRANS_LEN       GENMASK(22, 0)
> > +
> > +/* Delay loop counter to prevent hardware failure */
> > +#define XILINX_DMA_LOOP_COUNT          1000000
> > +
> > +/* Maximum number of Descriptors */
> > +#define XILINX_DMA_NUM_DESCS           64
> > +#define XILINX_DMA_NUM_APP_WORDS       5
> > +
> > +/**
> > + * struct xilinx_dma_desc_hw - Hardware Descriptor
> > + * @next_desc: Next Descriptor Pointer @0x00
> > + * @pad1: Reserved @0x04
> > + * @buf_addr: Buffer address @0x08
> > + * @pad2: Reserved @0x0C
> > + * @pad3: Reserved @0x10
> > + * @pad4: Reserved @0x14
> > + * @control: Control field @0x18
> > + * @status: Status field @0x1C
> > + * @app: APP Fields @0x20 - 0x30
> > + */
> > +struct xilinx_dma_desc_hw {
> > +       u32 next_desc;
> > +       u32 pad1;
> > +       u32 buf_addr;
> > +       u32 pad2;
> > +       u32 pad3;
> > +       u32 pad4;
> > +       u32 control;
> > +       u32 status;
> > +       u32 app[XILINX_DMA_NUM_APP_WORDS]; } __aligned(64);
> > +
> > +/**
> > + * struct xilinx_dma_tx_segment - Descriptor segment
> > + * @hw: Hardware descriptor
> > + * @node: Node in the descriptor segments list
> > + * @phys: Physical address of segment  */ struct
> > +xilinx_dma_tx_segment {
> > +       struct xilinx_dma_desc_hw hw;
> > +       struct list_head node;
> > +       dma_addr_t phys;
> > +} __aligned(64);
> > +
> > +/**
> > + * struct xilinx_dma_tx_descriptor - Per Transaction structure
> > + * @async_tx: Async transaction descriptor
> > + * @segments: TX segments list
> > + * @node: Node in the channel descriptors list
> > + * @direction: Transfer direction
> > + */
> > +struct xilinx_dma_tx_descriptor {
> > +       struct dma_async_tx_descriptor async_tx;
> > +       struct list_head segments;
> > +       struct list_head node;
> > +       enum dma_transfer_direction direction; };
> > +
> > +/**
> > + * struct xilinx_dma_chan - Driver specific DMA channel structure
> > + * @xdev: Driver specific device structure
> > + * @ctrl_offset: Control registers offset
> > + * @lock: Descriptor operation lock
> > + * @pending_list: Descriptors waiting
> > + * @active_desc: Active descriptor
> > + * @done_list: Complete descriptors
> > + * @free_seg_list: Free descriptors
> > + * @common: DMA common channel
> > + * @seg_v: Statically allocated segments base
> > + * @seg_p: Physical allocated segments base
> > + * @dev: The dma device
> > + * @irq: Channel IRQ
> > + * @id: Channel ID
> > + * @has_sg: Support scatter transfers
> > + * @err: Channel has errors
> > + * @idle: Channel status
> > + * @tasklet: Cleanup work after irq
> > + * @residue: Residue
> > + */
> > +struct xilinx_dma_chan {
> > +       struct xilinx_dma_device *xdev;
> > +       u32 ctrl_offset;
> > +       spinlock_t lock;
> > +       struct list_head pending_list;
> > +       struct xilinx_dma_tx_descriptor *active_desc;
> > +       struct list_head done_list;
> > +       struct list_head free_seg_list;
> > +       struct dma_chan common;
> > +       struct xilinx_dma_tx_segment *seg_v;
> > +       dma_addr_t seg_p;
> > +       struct device *dev;
> > +       int irq;
> > +       int id;
> > +       bool has_sg;
> > +       int err;
> > +       bool idle;
> > +       struct tasklet_struct tasklet;
> > +       u32 residue;
> > +};
> > +
> > +/**
> > + * struct xilinx_dma_device - DMA device structure
> > + * @regs: I/O mapped base address
> > + * @dev: Device Structure
> > + * @common: DMA device structure
> > + * @chan: Driver specific DMA channel
> > + * @has_sg: Specifies whether Scatter-Gather is present or not  */
> > +struct xilinx_dma_device {
> > +       void __iomem *regs;
> > +       struct device *dev;
> > +       struct dma_device common;
> > +       struct xilinx_dma_chan
> *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
> > +       bool has_sg;
> > +};
> > +
> > +/* Macros */
> > +#define to_xilinx_chan(chan) \
> > +       container_of(chan, struct xilinx_dma_chan, common) #define
> > +to_dma_tx_descriptor(tx) \
> > +       container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
> > +
> > +/* IO accessors */
> > +static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg,
> > +u32 value) {
> > +       iowrite32(value, chan->xdev->regs + reg); }
> > +
> > +static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg) {
> > +       return ioread32(chan->xdev->regs + reg); }
> > +
> > +static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32
> > +reg) {
> > +       return dma_read(chan, chan->ctrl_offset + reg); }
> > +
> > +static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
> > +                                 u32 value) {
> > +       dma_write(chan, chan->ctrl_offset + reg, value); }
> > +
> > +static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32
> > +reg, u32 clr) {
> > +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr); }
> > +
> > +static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32
> > +reg, u32 set) {
> > +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set); }
> > +
> > +/*
> > +---------------------------------------------------------------------
> > +--------
> > + * Descriptors and segments alloc and free  */
> > +
> > +/**
> > + * xilinx_dma_alloc_tx_segment - Allocate transaction segment
> > + * @chan: Driver specific dma channel
> > + *
> > + * Return: The allocated segment on success and NULL on failure.
> > + */
> > +static struct xilinx_dma_tx_segment *
> > +xilinx_dma_alloc_tx_segment(struct xilinx_dma_chan *chan) {
> > +       struct xilinx_dma_tx_segment *segment = NULL;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       if (!list_empty(&chan->free_seg_list)) {
> > +               segment = list_first_entry(&chan->free_seg_list,
> > +                                          struct xilinx_dma_tx_segment,
> > +                                          node);
> > +               list_del(&segment->node);
> > +       }
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       return segment;
> > +}
> > +
> > +/**
> > + * xilinx_dma_clean_hw_desc - Clean hardware descriptor
> > + * @hw: HW descriptor to clean
> > + */
> > +static void xilinx_dma_clean_hw_desc(struct xilinx_dma_desc_hw *hw) {
> > +       u32 next_desc = hw->next_desc;
> > +
> > +       memset(hw, 0, sizeof(struct xilinx_dma_desc_hw));
> > +
> > +       hw->next_desc = next_desc;
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_tx_segment - Free transaction segment
> > + * @chan: Driver specific dma channel
> > + * @segment: dma transaction segment
> > + */
> > +static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
> > +                                      struct xilinx_dma_tx_segment
> > +*segment) {
> > +       xilinx_dma_clean_hw_desc(&segment->hw);
> > +
> > +       list_add_tail(&segment->node, &chan->free_seg_list); }
> > +
> > +/**
> > + * xilinx_dma_tx_descriptor - Allocate transaction descriptor
> > + * @chan: Driver specific dma channel
> > + *
> > + * Return: The allocated descriptor on success and NULL on failure.
> > + */
> > +static struct xilinx_dma_tx_descriptor *
> > +xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan) {
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +
> > +       desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
> > +       if (!desc)
> > +               return NULL;
> > +
> > +       INIT_LIST_HEAD(&desc->segments);
> > +
> > +       return desc;
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_tx_descriptor - Free transaction descriptor
> > + * @chan: Driver specific dma channel
> > + * @desc: dma transaction descriptor
> > + */
> > +static void
> > +xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
> > +                             struct xilinx_dma_tx_descriptor *desc) {
> > +       struct xilinx_dma_tx_segment *segment, *next;
> > +
> > +       if (!desc)
> > +               return;
> > +
> > +       list_for_each_entry_safe(segment, next, &desc->segments, node) {
> > +               list_del(&segment->node);
> > +               xilinx_dma_free_tx_segment(chan, segment);
> > +       }
> > +
> > +       kfree(desc);
> > +}
> > +
> > +/**
> > + * xilinx_dma_alloc_chan_resources - Allocate channel resources
> > + * @dchan: DMA channel
> > + *
> > + * Return: '0' on success and failure value on error  */ static int
> > +xilinx_dma_alloc_chan_resources(struct dma_chan *dchan) {
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       int i;
> > +
> > +       /* Allocate the buffer descriptors. */
> > +       chan->seg_v = dma_zalloc_coherent(chan->dev,
> > +                                         sizeof(*chan->seg_v) *
> > +                                         XILINX_DMA_NUM_DESCS,
> > +                                         &chan->seg_p, GFP_KERNEL);
> > +       if (!chan->seg_v) {
> > +               dev_err(chan->dev,
> > +                       "unable to allocate channel %d descriptors\n",
> > +                       chan->id);
> > +               return -ENOMEM;
> > +       }
> > +
> > +       for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
> > +               chan->seg_v[i].hw.next_desc =
> > +                               chan->seg_p + sizeof(*chan->seg_v) *
> > +                               ((i + 1) % XILINX_DMA_NUM_DESCS);
> > +               chan->seg_v[i].phys =
> > +                               chan->seg_p + sizeof(*chan->seg_v) * i;
> > +               list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
> > +       }
> > +
> > +       dma_cookie_init(dchan);
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_desc_list - Free descriptors list
> > + * @chan: Driver specific dma channel
> > + * @list: List to parse and delete the descriptor  */ static void
> > +xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
> > +                                     struct list_head *list) {
> > +       struct xilinx_dma_tx_descriptor *desc, *next;
> > +
> > +       list_for_each_entry_safe(desc, next, list, node) {
> > +               list_del(&desc->node);
> > +               xilinx_dma_free_tx_descriptor(chan, desc);
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_descriptors - Free channel descriptors
> > + * @chan: Driver specific dma channel  */ static void
> > +xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan) {
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       xilinx_dma_free_desc_list(chan, &chan->pending_list);
> > +       xilinx_dma_free_desc_list(chan, &chan->done_list);
> > +
> > +       xilinx_dma_free_tx_descriptor(chan, chan->active_desc);
> > +       chan->active_desc = NULL;
> > +
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_chan_resources - Free channel resources
> > + * @dchan: DMA channel
> > + */
> > +static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +
> > +       xilinx_dma_free_descriptors(chan);
> > +
> > +       dma_free_coherent(chan->dev,
> > +                         sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
> > +                         chan->seg_v, chan->seg_p);
> > +}
> > +
> > +/**
> > + * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
> > + * @chan: Driver specific dma channel
> > + */
> > +static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc, *next;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       list_for_each_entry_safe(desc, next, &chan->done_list, node) {
> > +               dma_async_tx_callback callback;
> > +               void *callback_param;
> > +
> > +               /* Remove from the list of running transactions */
> > +               list_del(&desc->node);
> > +
> > +               /* Run the link descriptor callback function */
> > +               callback = desc->async_tx.callback;
> > +               callback_param = desc->async_tx.callback_param;
> > +               if (callback) {
> > +                       spin_unlock_irqrestore(&chan->lock, flags);
> > +                       callback(callback_param);
> > +                       spin_lock_irqsave(&chan->lock, flags);
> > +               }
> > +
> > +               /* Run any dependencies, then free the descriptor */
> > +               dma_run_dependencies(&desc->async_tx);
> > +               xilinx_dma_free_tx_descriptor(chan, desc);
> > +       }
> > +
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_tx_status - Get dma transaction status
> > + * @dchan: DMA channel
> > + * @cookie: Transaction identifier
> > + * @txstate: Transaction state
> > + *
> > + * Return: DMA transaction status
> > + */
> > +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> > +                                           dma_cookie_t cookie,
> > +                                           struct dma_tx_state *txstate)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *segment;
> > +       struct xilinx_dma_desc_hw *hw;
> > +       enum dma_status ret;
> > +       unsigned long flags;
> > +       u32 residue;
> > +
> > +       ret = dma_cookie_status(dchan, cookie, txstate);
> > +       if (ret == DMA_COMPLETE || !txstate)
> > +               return ret;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       if (chan->has_sg) {
> > +               while (!list_empty(&desc->segments)) {
> > +                       segment = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +                       hw = &segment->hw;
> > +                       residue += (hw->control - hw->status) &
> > +                                  XILINX_DMA_MAX_TRANS_LEN;
> > +               }
> > +       }
> > +
> > +       chan->residue = residue;
> > +       dma_set_residue(txstate, chan->residue);
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       return ret;
> > +}
> > +
> > +/**
> > + * xilinx_dma_is_running - Check if DMA channel is running
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: 'true' if running, 'false' if not.
> > + */
> > +static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
> > +{
> > +       return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                XILINX_DMA_SR_HALTED_MASK) &&
> > +               (dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +                XILINX_DMA_CR_RUNSTOP_MASK);
> > +}
> > +
> > +/**
> > + * xilinx_dma_is_idle - Check if DMA channel is idle
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: 'true' if idle, 'false' if not.
> > + */
> > +static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
> > +{
> > +       return !!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +               XILINX_DMA_SR_IDLE_MASK);
> > +}
> > +
> > +/**
> > + * xilinx_dma_halt - Halt DMA channel
> > + * @chan: Driver specific DMA channel
> > + */
> > +static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +
> > +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RUNSTOP_MASK);
> > +
> > +       /* Wait for the hardware to halt */
> > +       do {
> > +               if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                       XILINX_DMA_SR_HALTED_MASK)
> > +                       break;
> > +       } while (loop--);
> > +
> > +       if (!loop) {
> > +               dev_err(chan->dev, "Cannot stop channel %p: %x\n",
> > +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               chan->err = true;
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_start - Start DMA channel
> > + * @chan: Driver specific DMA channel
> > + */
> > +static void xilinx_dma_start(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RUNSTOP_MASK);
> > +
> > +       /* Wait for the hardware to start */
> > +       do {
> > +               if (!dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                       XILINX_DMA_SR_HALTED_MASK)
> > +                       break;
> > +       } while (loop--);
> > +
> > +       if (!loop) {
> > +               dev_err(chan->dev, "Cannot start channel %p: %x\n",
> > +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               chan->err = true;
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer
> > + */
> > +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> > +
> > +       if (chan->err)
> > +               return;
> > +
> > +       if (list_empty(&chan->pending_list))
> > +               return;
> > +
> > +       if (!chan->idle)
> > +               return;
> > +
> > +       desc = list_first_entry(&chan->pending_list,
> > +                               struct xilinx_dma_tx_descriptor, node);
> > +
> > +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> > +           !xilinx_dma_is_idle(chan)) {
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +               goto out_free_desc;
> > +       }
> > +
> > +       if (chan->has_sg) {
> > +               head = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> > +       }
> > +
> > +       /* Enable interrupts */
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       xilinx_dma_start(chan);
> > +       if (chan->err)
> > +               return;
> > +
> > +       /* Start the transfer */
> > +       if (chan->has_sg) {
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +       } else {
> > +               struct xilinx_dma_tx_segment *segment;
> > +               struct xilinx_dma_desc_hw *hw;
> > +
> > +               segment = list_first_entry(&desc->segments,
> > +                                          struct xilinx_dma_tx_segment, node);
> > +               hw = &segment->hw;
> > +
> > +               if (desc->direction == DMA_MEM_TO_DEV)
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> > +                                      hw->buf_addr);
> > +               else
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> > +                                      hw->buf_addr);
> > +
> > +               /* Start the transfer */
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> > +       }
> > +
> > +out_free_desc:
> > +       list_del(&desc->node);
> > +       chan->idle = false;
> > +       chan->active_desc = desc;
> > +}
> > +
> > +/**
> > + * xilinx_dma_issue_pending - Issue pending transactions
> > + * @dchan: DMA channel
> > + */
> > +static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       xilinx_dma_start_transfer(chan);
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
> > + * @chan : xilinx DMA channel
> > + */
> > +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +
> > +       desc = chan->active_desc;
> > +       if (!desc) {
> > +               dev_dbg(chan->dev, "no running descriptors\n");
> > +               return;
> > +       }
> > +
> > +       dma_cookie_complete(&desc->async_tx);
> > +       list_add_tail(&desc->node, &chan->done_list);
> > +
> > +       chan->active_desc = NULL;
> > +
> > +}
> > +
> > +/**
> > + * xilinx_dma_reset - Reset DMA channel
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: '0' on success and failure value on error
> > + */
> > +static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +       u32 tmp;
> > +
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RESET_MASK);
> > +
> > +       tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +             XILINX_DMA_CR_RESET_MASK;
> > +
> > +       /* Wait for the hardware to finish reset */
> > +       do {
> > +               tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +                     XILINX_DMA_CR_RESET_MASK;
> > +       } while (loop-- && tmp);
> > +
> > +       if (!loop) {
> > +               dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               return -EBUSY;
> > +       }
> > +
> > +       chan->err = false;
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_irq_handler - DMA Interrupt handler
> > + * @irq: IRQ number
> > + * @data: Pointer to the Xilinx DMA channel structure
> > + *
> > + * Return: IRQ_HANDLED/IRQ_NONE
> > + */
> > +static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
> > +{
> > +       struct xilinx_dma_chan *chan = data;
> > +       u32 status;
> > +
> > +       /* Read the status and ack the interrupts. */
> > +       status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
> > +       if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
> > +               return IRQ_NONE;
> > +
> > +       dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
> > +                      status & XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
> > +               dev_err(chan->dev,
> > +                       "Channel %p has errors %x, cdr %x tdr %x\n",
> > +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
> > +               chan->err = true;
> > +       }
> > +
> > +       /*
> > +        * Device takes too long to do the transfer when user requires
> > +        * responsiveness
> > +        */
> > +       if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
> > +               dev_dbg(chan->dev, "Inter-packet latency too long\n");
> > +
> > +       if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
> > +               spin_lock(&chan->lock);
> > +               xilinx_dma_complete_descriptor(chan);
> > +               chan->idle = true;
> > +               xilinx_dma_start_transfer(chan);
> > +               spin_unlock(&chan->lock);
> > +       }
> > +
> > +       tasklet_schedule(&chan->tasklet);
> > +       return IRQ_HANDLED;
> > +}
> > +
> > +/**
> > + * xilinx_dma_do_tasklet - Schedule completion tasklet
> > + * @data: Pointer to the Xilinx dma channel structure
> > + */
> > +static void xilinx_dma_do_tasklet(unsigned long data)
> > +{
> > +       struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
> > +
> > +       xilinx_dma_chan_desc_cleanup(chan);
> > +}
> > +
> > +/**
> > + * xilinx_dma_tx_submit - Submit DMA transaction
> > + * @tx: Async transaction descriptor
> > + *
> > + * Return: cookie value on success and failure value on error
> > + */
> > +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
> > +       dma_cookie_t cookie;
> > +       unsigned long flags;
> > +       int err;
> > +
> > +       if (chan->err) {
> > +               /*
> > +                * If reset fails, need to hard reset the system.
> > +                * Channel is no longer functional
> > +                */
> > +               err = xilinx_dma_chan_reset(chan);
> > +               if (err < 0)
> > +                       return err;
> > +       }
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       cookie = dma_cookie_assign(tx);
> > +
> > +       /* Append the transaction to the pending transactions queue. */
> > +       list_add_tail(&desc->node, &chan->pending_list);
> > +
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       return cookie;
> > +}
> > +
> > +/**
> > + * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
> > + * @dchan: DMA channel
> > + * @sgl: scatterlist to transfer to/from
> > + * @sg_len: number of entries in @scatterlist
> > + * @direction: DMA direction
> > + * @flags: transfer ack flags
> > + * @context: APP words of the descriptor
> > + *
> > + * Return: Async transaction descriptor on success and NULL on failure
> > + */
> > +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> > +       struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> > +       enum dma_transfer_direction direction, unsigned long flags,
> > +       void *context)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *segment;
> > +       struct xilinx_dma_desc_hw *hw;
> > +       u32 *app_w = (u32 *)context;
> > +       struct scatterlist *sg;
> > +       size_t copy, sg_used;
> > +       int i;
> > +
> > +       if (!is_slave_direction(direction))
> > +               return NULL;
> > +
> > +       /* Allocate a transaction descriptor. */
> > +       desc = xilinx_dma_alloc_tx_descriptor(chan);
> > +       if (!desc)
> > +               return NULL;
> > +
> > +       desc->direction = direction;
> > +       dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> > +       desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> > +
> > +       /* Build transactions using information in the scatter gather list */
> > +       for_each_sg(sgl, sg, sg_len, i) {
> > +               sg_used = 0;
> > +
> > +               /* Loop until the entire scatterlist entry is used */
> > +               while (sg_used < sg_dma_len(sg)) {
> > +
> > +                       /* Get a free segment */
> > +                       segment = xilinx_dma_alloc_tx_segment(chan);
> > +                       if (!segment)
> > +                               goto error;
> > +
> > +                       /*
> > +                        * Calculate the maximum number of bytes to transfer,
> > +                        * making sure it is less than the hw limit
> > +                        */
> > +                       copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> > +                                    XILINX_DMA_MAX_TRANS_LEN);
> > +                       hw = &segment->hw;
> > +
> > +                       /* Fill in the descriptor */
> > +                       hw->buf_addr = sg_dma_address(sg) + sg_used;
> > +
> > +                       hw->control = copy;
> > +
> > +                       if (direction == DMA_MEM_TO_DEV) {
> > +                               if (app_w)
> > +                                       memcpy(hw->app, app_w, sizeof(u32) *
> > +                                              XILINX_DMA_NUM_APP_WORDS);
> > +
> > +                               /*
> > +                                * For the first DMA_MEM_TO_DEV transfer,
> > +                                * set SOP
> > +                                */
> > +                               if (!i)
> > +                                       hw->control |= XILINX_DMA_BD_SOP;
> > +                       }
> > +
> > +                       sg_used += copy;
> > +
> > +                       /*
> > +                        * Insert the segment into the descriptor segments
> > +                        * list.
> > +                        */
> > +                       list_add_tail(&segment->node, &desc->segments);
> > +               }
> > +       }
> > +
> > +       /* For the last DMA_MEM_TO_DEV transfer, set EOP */
> > +       if (direction == DMA_MEM_TO_DEV) {
> > +               segment = list_last_entry(&desc->segments,
> > +                                         struct xilinx_dma_tx_segment,
> > +                                         node);
> > +               segment->hw.control |= XILINX_DMA_BD_EOP;
> > +       }
> > +
> > +       return &desc->async_tx;
> > +
> > +error:
> > +       xilinx_dma_free_tx_descriptor(chan, desc);
> > +       return NULL;
> > +}
> > +
> > +/**
> > + * xilinx_dma_terminate_all - Halt the channel and free descriptors
> > + * @dchan: DMA Channel pointer
> > + *
> > + * Return: '0' always
> > + */
> > +static int xilinx_dma_terminate_all(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +
> > +       /* Halt the DMA engine */
> > +       xilinx_dma_halt(chan);
> > +
> > +       /* Remove and free all of the descriptors in the lists */
> > +       xilinx_dma_free_descriptors(chan);
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_channel_set_config - Configure DMA channel
> > + * @dchan: DMA channel
> > + * @cfg: DMA device configuration pointer
> > + * Return: '0' on success and failure value on error
> > + */
> > +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> > +                                 struct xilinx_dma_config *cfg)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
> > +
> > +       if (!xilinx_dma_is_idle(chan))
> > +               return -EBUSY;
> > +
> > +       if (cfg->reset)
> > +               return xilinx_dma_chan_reset(chan);
> > +
> > +       if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
> > +               reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
> > +
> > +       if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
> > +               reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
> > +
> > +       dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
> > +
> > +       return 0;
> > +}
> > +EXPORT_SYMBOL(xilinx_dma_channel_set_config);
> > +
> > +/**
> > + * xilinx_dma_chan_remove - Per Channel remove function
> > + * @chan: Driver specific DMA channel
> > + */
> > +static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
> > +{
> > +       /* Disable interrupts */
> > +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       if (chan->irq > 0)
> > +               free_irq(chan->irq, chan);
> > +
> > +       tasklet_kill(&chan->tasklet);
> > +
> > +       list_del(&chan->common.device_node);
> > +}
> > +
> > +/**
> > + * xilinx_dma_chan_probe - Per Channel Probing
> > + * It get channel features from the device tree entry and
> > + * initialize special channel handling routines
> > + *
> > + * @xdev: Driver specific device structure
> > + * @node: Device node
> > + *
> > + * Return: '0' on success and failure value on error
> > + */
> > +static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
> > +                                struct device_node *node)
> > +{
> > +       struct xilinx_dma_chan *chan;
> > +       int err;
> > +       bool has_dre;
> > +       u32 value, width = 0;
> > +
> > +       /* alloc channel */
> > +       chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
> > +       if (!chan)
> > +               return -ENOMEM;
> > +
> > +       chan->dev = xdev->dev;
> > +       chan->xdev = xdev;
> > +       chan->has_sg = xdev->has_sg;
> > +
> > +       has_dre = of_property_read_bool(node, "xlnx,include-dre");
> > +
> > +       err = of_property_read_u32(node, "xlnx,datawidth", &value);
> > +       if (err) {
> > +               dev_err(xdev->dev, "unable to read datawidth property");
> > +               return err;
> > +       }
> > +
> > +       width = value >> 3; /* Convert bits to bytes */
> > +
> > +       /* If data width is greater than 8 bytes, DRE is not in hw */
> > +       if (width > 8)
> > +               has_dre = false;
> > +
> > +       if (!has_dre)
> > +               xdev->common.copy_align = fls(width - 1);
> > +
> > +       if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
> > +               chan->id = 0;
> > +               chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
> > +       } else if (of_device_is_compatible(node, "xlnx,axi-dma-s2mm-channel")) {
> > +               chan->id = 1;
> > +               chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
> > +       } else {
> > +               dev_err(xdev->dev, "Invalid channel compatible node\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       xdev->chan[chan->id] = chan;
> > +
> > +       /* Initialize the channel */
> > +       err = xilinx_dma_chan_reset(chan);
> > +       if (err) {
> > +               dev_err(xdev->dev, "Reset channel failed\n");
> > +               return err;
> > +       }
> > +
> > +       spin_lock_init(&chan->lock);
> > +       INIT_LIST_HEAD(&chan->pending_list);
> > +       INIT_LIST_HEAD(&chan->done_list);
> > +       INIT_LIST_HEAD(&chan->free_seg_list);
> > +
> > +       chan->common.device = &xdev->common;
> > +
> > +       /* find the IRQ line, if it exists in the device tree */
> > +       chan->irq = irq_of_parse_and_map(node, 0);
> > +       err = request_irq(chan->irq, xilinx_dma_irq_handler,
> > +                         IRQF_SHARED,
> > +                         "xilinx-dma-controller", chan);
> > +       if (err) {
> > +               dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
> > +               return err;
> > +       }
> > +
> > +       /* Initialize the tasklet */
> > +       tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
> > +                    (unsigned long)chan);
> > +
> > +       /* Add the channel to DMA device channel list */
> > +       list_add_tail(&chan->common.device_node, &xdev->common.channels);
> > +
> > +       chan->idle = true;
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * of_dma_xilinx_xlate - Translation function
> > + * @dma_spec: Pointer to DMA specifier as found in the device tree
> > + * @ofdma: Pointer to DMA controller data
> > + *
> > + * Return: DMA channel pointer on success and NULL on error
> > + */
> > +static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
> > +                                           struct of_dma *ofdma)
> > +{
> > +       struct xilinx_dma_device *xdev = ofdma->of_dma_data;
> > +       int chan_id = dma_spec->args[0];
> > +
> > +       if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE)
> > +               return NULL;
> > +
> > +       return dma_get_slave_channel(&xdev->chan[chan_id]->common);
> > +}
> > +
> > +/**
> > + * xilinx_dma_probe - Driver probe function
> > + * @pdev: Pointer to the platform_device structure
> > + *
> > + * Return: '0' on success and failure value on error
> > + */
> > +static int xilinx_dma_probe(struct platform_device *pdev)
> > +{
> > +       struct xilinx_dma_device *xdev;
> > +       struct device_node *child, *node;
> > +       struct resource *res;
> > +       int i, ret;
> > +
> > +       xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
> > +       if (!xdev)
> > +               return -ENOMEM;
> > +
> > +       xdev->dev = &(pdev->dev);
> > +       INIT_LIST_HEAD(&xdev->common.channels);
> > +
> > +       node = pdev->dev.of_node;
> > +
> > +       /* Map the registers */
> > +       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > +       xdev->regs = devm_ioremap_resource(&pdev->dev, res);
> > +       if (IS_ERR(xdev->regs))
> > +               return PTR_ERR(xdev->regs);
> > +
> > +       /* Check if SG is enabled */
> > +       xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> > +
> > +       /* Axi DMA only do slave transfers */
> > +       dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
> > +       dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
> > +       xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> > +       xdev->common.device_terminate_all = xilinx_dma_terminate_all;
> > +       xdev->common.device_issue_pending = xilinx_dma_issue_pending;
> > +       xdev->common.device_alloc_chan_resources =
> > +               xilinx_dma_alloc_chan_resources;
> > +       xdev->common.device_free_chan_resources =
> > +               xilinx_dma_free_chan_resources;
> > +       xdev->common.device_tx_status = xilinx_dma_tx_status;
> > +       xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > +       xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +       xdev->common.dev = &pdev->dev;
> > +
> > +       platform_set_drvdata(pdev, xdev);
> > +
> > +       for_each_child_of_node(node, child) {
> > +               ret = xilinx_dma_chan_probe(xdev, child);
> > +               if (ret) {
> > +                       dev_err(&pdev->dev, "Probing channels failed\n");
> > +                       goto free_chan_resources;
> > +               }
> > +       }
> > +
> > +       dma_async_device_register(&xdev->common);
> > +
> > +       ret = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
> > +       if (ret) {
> > +               dev_err(&pdev->dev, "Unable to register DMA to DT\n");
> > +               dma_async_device_unregister(&xdev->common);
> > +               goto free_chan_resources;
> > +       }
> > +
> > +       dev_info(&pdev->dev, "Xilinx AXI DMA Engine driver Probed!!\n");
> > +
> > +       return 0;
> > +
> > +free_chan_resources:
> > +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> > +               if (xdev->chan[i])
> > +                       xilinx_dma_chan_remove(xdev->chan[i]);
> > +
> > +       return ret;
> > +}
> > +
> > +/**
> > + * xilinx_dma_remove - Driver remove function
> > + * @pdev: Pointer to the platform_device structure
> > + *
> > + * Return: Always '0'
> > + */
> > +static int xilinx_dma_remove(struct platform_device *pdev)
> > +{
> > +       struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> > +       int i;
> > +
> > +       of_dma_controller_free(pdev->dev.of_node);
> > +       dma_async_device_unregister(&xdev->common);
> > +
> > +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> > +               if (xdev->chan[i])
> > +                       xilinx_dma_chan_remove(xdev->chan[i]);
> > +
> > +       return 0;
> > +}
> > +
> > +static const struct of_device_id xilinx_dma_of_match[] = {
> > +       { .compatible = "xlnx,axi-dma-1.00.a",},
> > +       {}
> > +};
> > +MODULE_DEVICE_TABLE(of, xilinx_dma_of_match);
> > +
> > +static struct platform_driver xilinx_dma_driver = {
> > +       .driver = {
> > +               .name = "xilinx-dma",
> > +               .of_match_table = xilinx_dma_of_match,
> > +       },
> > +       .probe = xilinx_dma_probe,
> > +       .remove = xilinx_dma_remove,
> > +};
> > +
> > +module_platform_driver(xilinx_dma_driver);
> > +
> > +MODULE_AUTHOR("Xilinx, Inc.");
> > +MODULE_DESCRIPTION("Xilinx DMA driver");
> > +MODULE_LICENSE("GPL v2");
> > diff --git a/include/linux/dma/xilinx_dma.h b/include/linux/dma/xilinx_dma.h
> > index 34b98f2..de38599 100644
> > --- a/include/linux/dma/xilinx_dma.h
> > +++ b/include/linux/dma/xilinx_dma.h
> > @@ -41,7 +41,21 @@ struct xilinx_vdma_config {
> >         int ext_fsync;
> >  };
> >
> > +/**
> > + * struct xilinx_dma_config - DMA Configuration structure
> > + * @coalesc: Interrupt coalescing threshold
> > + * @delay: Delay counter
> > + * @reset: Reset Channel
> > + */
> > +struct xilinx_dma_config {
> > +       int coalesc;
> > +       int delay;
> > +       int reset;
> > +};
> > +
> >  int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
> >                                         struct xilinx_vdma_config *cfg);
> > +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> > +                                       struct xilinx_dma_config *cfg);
> >
> >  #endif
> > --
> > 2.1.2
> >
> >
> > _______________________________________________
> > linux-arm-kernel mailing list
> > linux-arm-kernel@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-arm-kernel




[-- Attachment #2: 0002-dma-xilinx-Fix-axidmatest-driver-crashing-when-unloa.patch --]
[-- Type: application/octet-stream, Size: 4192 bytes --]

From 77cfcbc662a9cc95725292d06ba47abe25e4b3c4 Mon Sep 17 00:00:00 2001
From: Kedareswara rao Appana <appanad@xilinx.com>
Date: Sun, 7 Jun 2015 16:44:00 +0530
Subject: [LINUX PATCH 2/2] dma: xilinx: Fix axidmatest driver crashing when
 unloading as a module

This patch fixes a crash in the axidmatest
driver when it is unloaded as a module.

get_task_struct() and put_task_struct() manage a thread's reference count.
After get_task_struct(), it is always safe to access that memory until we
release it. If we call put_task_struct() and we hold the last reference,
the task structure is cleaned up and freed.
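
The pattern the fix relies on looks, in isolation, roughly like the sketch
below (a minimal illustration with assumed names such as example_worker and
example_fn; it is not the axidmatest code itself):

/*
 * Minimal sketch of the get_task_struct()/put_task_struct() pattern:
 * take a reference when the kthread pointer is stored, and drop it only
 * after kthread_stop(), so the task_struct cannot be freed while we
 * still dereference it.
 */
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

struct example_worker {
	struct task_struct *task;
};

static int example_fn(void *data)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}

static int example_start(struct example_worker *w)
{
	w->task = kthread_run(example_fn, NULL, "example-worker");
	if (IS_ERR(w->task))
		return PTR_ERR(w->task);

	/* hold a reference for as long as we keep the pointer around */
	get_task_struct(w->task);
	return 0;
}

static void example_stop(struct example_worker *w)
{
	kthread_stop(w->task);
	/* drop our reference; the task_struct may be freed after this */
	put_task_struct(w->task);
}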

crash dump:
 rmmod axidmatest
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = 7e368000
[00000000] *pgd=3e1a8831, *pte=00000000, *ppte=00000000
Internal error: Oops - BUG: 17 [#1] PREEMPT SMP ARM
Modules linked in: axidmatest(-)
CPU: 1 PID: 632 Comm: rmmod Not tainted 4.0.0-xilinx-25047-g52a84bf-dirty #32
Hardware name: Xilinx Zynq Platform
task: 7d421180 ti: 7dbae000 task.ti: 7dbae000
PC is at exit_creds+0x14/0x84
LR is at __put_task_struct+0x10/0x58
pc : [<400375d0>]    lr : [<4001ead4>]    psr: 60000013
sp : 7dbafec8  ip : 00000000  fp : 3f001784
r10: 7e2547f8  r9 : 00200200  r8 : 00100100
r7 : 7e0f6e0c  r6 : 00000000  r5 : 7e07b188  r4 : 7e07b180
r3 : 00000000  r2 : 00000000  r1 : 60000013  r0 : 00000000
Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 18c5387d  Table: 3e36804a  DAC: 00000015
Process rmmod (pid: 632, stack limit = 0x7dbae210)
Stack: (0x7dbafec8 to 0x7dbb0000)
fec0:                   7e07b180 4001ead4 7e07b180 40036448 7e0f6e0c 7e0f6e00
fee0: 7e0adec0 7e0adf00 7e0f6e0c 3f0004f0 7e129a00 7e0f6e0c 00000000 7e129a10
ff00: 3f0017b4 7e129a44 00000081 4000df64 7dbae000 00000000 00000000 40270458
ff20: 40270440 7e129a10 3f0017b4 4026ece8 3f0017b4 7e129a10 3f0017b4 4026f320
ff40: 3f0017b4 00000880 3e9b3f66 4026ea84 3f0017f8 4006f2a4 7dc38770 64697861
ff60: 6574616d 55007473 00000020 00000000 7d421180 7d421588 00000000 00000006
ff80: 7d421180 7d421588 000f2040 40034da4 7dbae000 00baffb0 3e9b3f66 00000000
ffa0: 00000001 4000dde0 3e9b3f66 00000000 3e9b3f66 00000880 000fa0e0 00000000
ffc0: 3e9b3f66 00000000 00000001 00000081 00000001 00000000 3e9b3e8c 00000000
ffe0: 000fa0e0 3e9b3af0 0002cd74 36e0da3c 60000010 3e9b3f66 00000000 00000000
[<400375d0>] (exit_creds) from [<4001ead4>] (__put_task_struct+0x10/0x58)
[<4001ead4>] (__put_task_struct) from [<40036448>] (kthread_stop+0x90/0x98)
[<40036448>] (kthread_stop) from [<3f0004f0>] (xilinx_axidmatest_remove+0x54/0xcc [axidmatest])
[<3f0004f0>] (xilinx_axidmatest_remove [axidmatest]) from [<40270458>] (platform_drv_remove+0x18/0x30)
[<40270458>] (platform_drv_remove) from [<4026ece8>] (__device_release_driver+0x7c/0xc0)
[<4026ece8>] (__device_release_driver) from [<4026f320>] (driver_detach+0x84/0xac)
[<4026f320>] (driver_detach) from [<4026ea84>] (bus_remove_driver+0x64/0x8c)
[<4026f320>] (driver_detach) from [<4026ea84>] (bus_remove_driver+0x64/0x8c)
[<4026ea84>] (bus_remove_driver) from [<4006f2a4>] (SyS_delete_module+0x154/0x1d0)
[<4006f2a4>] (SyS_delete_module) from [<4000dde0>] (ret_fast_syscall+0x0/0x34)
Code: e1a04000 e5903304 e3a02000 e5900300 (e5933000)
---[ end trace 565795e6963d5010 ]---
Segmentation fault

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/axidmatest.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dma/xilinx/axidmatest.c b/drivers/dma/xilinx/axidmatest.c
index 3d570b1..a7627dc 100644
--- a/drivers/dma/xilinx/axidmatest.c
+++ b/drivers/dma/xilinx/axidmatest.c
@@ -509,6 +509,7 @@ static void dmatest_cleanup_channel(struct dmatest_chan *dtc)
 		pr_debug("dmatest: thread %s exited with status %d\n",
 				thread->task->comm, ret);
 		list_del(&thread->node);
+		put_task_struct(thread->task);
 		kfree(thread);
 	}
 	kfree(dtc);
@@ -542,7 +543,7 @@ static int dmatest_add_slave_threads(struct dmatest_chan *tx_dtc,
 	}
 
 	/* srcbuf and dstbuf are allocated by the thread itself */
-
+	get_task_struct(thread->task);
 	list_add_tail(&thread->node, &tx_dtc->threads);
 
 	/* Added one thread with 2 channels */
-- 
2.1.2


[-- Attachment #3: 0001-dma-xilinx-Use-of_dma-framework-in-dma-test-client.patch --]
[-- Type: application/octet-stream, Size: 7075 bytes --]

From 08b6ffc54a4b29e2072f7b4dc619ba3a2a5602c0 Mon Sep 17 00:00:00 2001
From: Kedareswara rao Appana <appanad@xilinx.com>
Date: Sun, 7 Jun 2015 16:30:08 +0530
Subject: [LINUX PATCH 1/2] dma: xilinx: Use of_dma framework in dma test
 client

This patch does the following:
- Uses the of_dma framework in the dma test client.
- Documents the device node for the dma test client.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 .../devicetree/bindings/dma/xilinx/axi-dma.txt     |  46 ++++-----
 drivers/dma/xilinx/axidmatest.c                    | 114 ++++++++++-----------
 2 files changed, 76 insertions(+), 84 deletions(-)

diff --git a/Documentation/devicetree/bindings/dma/xilinx/axi-dma.txt b/Documentation/devicetree/bindings/dma/xilinx/axi-dma.txt
index 80c4633..74199cc 100644
--- a/Documentation/devicetree/bindings/dma/xilinx/axi-dma.txt
+++ b/Documentation/devicetree/bindings/dma/xilinx/axi-dma.txt
@@ -1,46 +1,38 @@
-Xilinx AXI DMA engine, it does transfers between memory and device. It can be
-configured to have one channel or two channels. If configured as two
-channels, one is to transmit to device and another is to receive from
-device.
+* Xilinx AXI DMA Test client
 
 Required properties:
-- compatible: Should be "xlnx,axi-dma"
-- reg: Should contain DMA registers location and length.
+- compatible: Should be "xlnx,axi-dma-test-1.00.a"
+- dmas: a list of <[DMA device phandle] [Channel ID]> pairs,
+	where Channel ID is '0' for write/tx and '1' for read/rx
+	channel.
+- dma-names: a list of DMA channel names, one per "dmas" entry
 
-Option node properties:
-- xlnx,include-sg: Tells whether configured for Scatter-mode in
-	the hardware.
+Example:
+++++++++
 
-Required child node properties:
-- compatible: It should be either "xlnx,axi-dma-mm2s-channel" or
-	"xlnx,axi-dma-s2mm-channel". It depends on the hardware design and it
-	can also have both channels.
-- interrupts: Should contain per channel DMA interrupts.
-- xlnx,data-width: Should contain the stream data width, take values
-	{32,64...1024}.
-- xlnx,device-id: Should contain device number in each channel. It should be
-	{0,1,2...so on} to the number of DMA devices configured in hardware.
+dmatest_0: dmatest@0 {
+	compatible ="xlnx,axi-dma-test-1.00.a";
+	dmas = <&axi_dma_0 0
+		&axi_dma_0 1>;
+	dma-names = "axidma0", "axidma1";
+} ;
 
-Optional child node properties:
-- xlnx,include-dre: Tells whether hardware is configured for Data
-	Realignment Engine.
 
-Example:
-++++++++
+Xilinx AXI DMA Device Node Example
+++++++++++++++++++++++++++++++++++++
 
 axi_dma_0: axidma@40400000 {
-	compatible = "xlnx,axi-dma";
+	compatible = "xlnx,axi-dma-1.00.a";
+	#dma_cells = <1>;
 	reg = < 0x40400000 0x10000 >;
 	dma-channel@40400000 {
 		compatible = "xlnx,axi-dma-mm2s-channel";
 		interrupts = < 0 59 4 >;
 		xlnx,datawidth = <0x40>;
-		xlnx,device-id = <0x0>;
 	} ;
-	dma-channel@40030030 {
+	dma-channel@40400030 {
 		compatible = "xlnx,axi-dma-s2mm-channel";
 		interrupts = < 0 58 4 >;
 		xlnx,datawidth = <0x40>;
-		xlnx,device-id = <0x0>;
 	} ;
 } ;
diff --git a/drivers/dma/xilinx/axidmatest.c b/drivers/dma/xilinx/axidmatest.c
index bbc88d7..3d570b1 100644
--- a/drivers/dma/xilinx/axidmatest.c
+++ b/drivers/dma/xilinx/axidmatest.c
@@ -14,6 +14,8 @@
 #include <linux/init.h>
 #include <linux/kthread.h>
 #include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
 #include <linux/random.h>
 #include <linux/slab.h>
 #include <linux/wait.h>
@@ -589,71 +591,41 @@ static int dmatest_add_slave_channels(struct dma_chan *tx_chan,
 	return 0;
 }
 
-static bool xdma_filter(struct dma_chan *chan, void *param)
+static int xilinx_axidmatest_probe(struct platform_device *pdev)
 {
-	pr_debug("dmatest: Private is %x\n", *((int *)chan->private));
+	struct dma_chan *chan, *rx_chan;
+	int err;
 
-	if (*((int *)chan->private) == *(int *)param)
-		return true;
+	chan = dma_request_slave_channel(&pdev->dev, "axidma0");
+	if (IS_ERR(chan)) {
+		pr_err("xilinx_dmatest: No Tx channel\n");
+		return PTR_ERR(chan);
+	}
 
-	return false;
-}
+	rx_chan = dma_request_slave_channel(&pdev->dev, "axidma1");
+	if (IS_ERR(rx_chan)) {
+		err = PTR_ERR(rx_chan);
+		pr_err("xilinx_dmatest: No Rx channel\n");
+		goto free_tx;
+	}
 
-static int __init dmatest_init(void)
-{
-	dma_cap_mask_t mask;
-	struct dma_chan *chan;
-	int err = 0;
+	err = dmatest_add_slave_channels(chan, rx_chan);
+	if (err) {
+		pr_err("xilinx_dmatest: Unable to add channels\n");
+		goto free_rx;
+	}
 
-	/* JZ for slave transfer channels */
-	enum dma_data_direction direction;
-	struct dma_chan *rx_chan;
-	u32 match, device_id = 0;
-
-	dma_cap_zero(mask);
-	dma_cap_set(DMA_SLAVE | DMA_PRIVATE, mask);
-
-	for (;;) {
-		direction = DMA_MEM_TO_DEV;
-		match = (direction & 0xFF) | XILINX_DMA_IP_DMA |
-				(device_id << XILINX_DMA_DEVICE_ID_SHIFT);
-		pr_debug("dmatest: match is %x\n", match);
-
-		chan = dma_request_channel(mask, xdma_filter, (void *)&match);
-
-		if (chan)
-			pr_debug("dmatest: Found tx device\n");
-		else
-			pr_debug("dmatest: No more tx channels available\n");
-
-		direction = DMA_DEV_TO_MEM;
-		match = (direction & 0xFF) | XILINX_DMA_IP_DMA |
-				(device_id << XILINX_DMA_DEVICE_ID_SHIFT);
-		rx_chan = dma_request_channel(mask, xdma_filter, &match);
-
-		if (rx_chan)
-			pr_debug("dmatest: Found rx device\n");
-		else
-			pr_debug("dmatest: No more rx channels available\n");
-
-		if (chan && rx_chan) {
-			err = dmatest_add_slave_channels(chan, rx_chan);
-			if (err) {
-				dma_release_channel(chan);
-				dma_release_channel(rx_chan);
-			}
-		} else
-			break;
+	return 0;
 
-		device_id++;
-	}
+free_rx:
+	dma_release_channel(rx_chan);
+free_tx:
+	dma_release_channel(chan);
 
 	return err;
 }
-/* when compiled-in wait for drivers to load first */
-late_initcall(dmatest_init);
 
-static void __exit dmatest_exit(void)
+static int xilinx_axidmatest_remove(struct platform_device *pdev)
 {
 	struct dmatest_chan *dtc, *_dtc;
 	struct dma_chan *chan;
@@ -662,12 +634,40 @@ static void __exit dmatest_exit(void)
 		list_del(&dtc->node);
 		chan = dtc->chan;
 		dmatest_cleanup_channel(dtc);
-		pr_debug("dmatest: dropped channel %s\n",
+		pr_info("xilinx_dmatest: dropped channel %s\n",
 			dma_chan_name(chan));
 		dma_release_channel(chan);
 	}
+	return 0;
+}
+
+static const struct of_device_id xilinx_axidmatest_of_ids[] = {
+	{ .compatible = "xlnx,axi-dma-test-1.00.a",},
+	{}
+};
+
+static struct platform_driver xilinx_axidmatest_driver = {
+	.driver = {
+		.name = "xilinx_axidmatest",
+		.owner = THIS_MODULE,
+		.of_match_table = xilinx_axidmatest_of_ids,
+	},
+	.probe = xilinx_axidmatest_probe,
+	.remove = xilinx_axidmatest_remove,
+};
+
+static int __init axidma_init(void)
+{
+	return platform_driver_register(&xilinx_axidmatest_driver);
+
+}
+late_initcall(axidma_init);
+
+static void __exit axidma_exit(void)
+{
+	platform_driver_unregister(&xilinx_axidmatest_driver);
 }
-module_exit(dmatest_exit);
+module_exit(axidma_exit)
 
 MODULE_AUTHOR("Xilinx, Inc.");
 MODULE_DESCRIPTION("Xilinx AXI DMA Test Client");
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-09  6:35 [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
  2015-06-16 19:19 ` Nicolae Rosia
@ 2015-06-19 16:49 ` Jeremy Trimble
  2015-06-24 17:12   ` Appana Durga Kedareswara Rao
  2015-06-22 10:49 ` Vinod Koul
  2 siblings, 1 reply; 12+ messages in thread
From: Jeremy Trimble @ 2015-06-19 16:49 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: Vinod Koul, dan.j.williams, michal.simek, soren.brinkmann,
	appanad, anirudh, punnaia, dmaengine, linux-arm-kernel,
	linux-kernel, Srikanth Thokala

> +/**
> + * xilinx_dma_start_transfer - Starts DMA transfer
> + * @chan: Driver specific channel struct pointer
> + */
> +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> +
> +       if (chan->err)
> +               return;
> +
> +       if (list_empty(&chan->pending_list))
> +               return;
> +
> +       if (!chan->idle)
> +               return;
> +
> +       desc = list_first_entry(&chan->pending_list,
> +                               struct xilinx_dma_tx_descriptor, node);
> +
> +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> +           !xilinx_dma_is_idle(chan)) {
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +               goto out_free_desc;
> +       }
> +
> +       if (chan->has_sg) {
> +               head = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> +       }
> +
> +       /* Enable interrupts */
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       xilinx_dma_start(chan);
> +       if (chan->err)
> +               return;
> +
> +       /* Start the transfer */
> +       if (chan->has_sg) {
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +       } else {
> +               struct xilinx_dma_tx_segment *segment;
> +               struct xilinx_dma_desc_hw *hw;
> +
> +               segment = list_first_entry(&desc->segments,
> +                                          struct xilinx_dma_tx_segment, node);
> +               hw = &segment->hw;
> +
> +               if (desc->direction == DMA_MEM_TO_DEV)
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> +                                      hw->buf_addr);
> +               else
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> +                                      hw->buf_addr);
> +
> +               /* Start the transfer */
> +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> +       }
> +
> +out_free_desc:
> +       list_del(&desc->node);
> +       chan->idle = false;
> +       chan->active_desc = desc;
> +}

What prevents chan->active_desc from being overwritten before the
previous descriptor is transferred to done_list? For instance, if two
transfers are queued with issue_pending() in quick succession (such
that xilinx_dma_start_transfer() is called twice before the interrupt
for the first transfer occurs), won't the first descriptor be
overwritten and lost?
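
One way to avoid that (a hedged sketch only, not code from this posting) is
to keep in-flight descriptors on a list instead of a single active_desc
pointer; chan->active_list below is an assumed member:

/*
 * Hedged sketch: chan->active_list is an assumed member, not part of the
 * quoted v7 patch. Tracking in-flight descriptors on a list means
 * back-to-back submissions cannot overwrite a descriptor still in flight.
 */
static void example_mark_active(struct xilinx_dma_chan *chan,
				struct xilinx_dma_tx_descriptor *desc)
{
	/* called with chan->lock held, instead of "chan->active_desc = desc" */
	list_del(&desc->node);
	list_add_tail(&desc->node, &chan->active_list);
}

static void example_complete_active(struct xilinx_dma_chan *chan)
{
	struct xilinx_dma_tx_descriptor *desc, *next;

	/* move everything the hardware has finished over to done_list */
	list_for_each_entry_safe(desc, next, &chan->active_list, node) {
		list_del(&desc->node);
		dma_cookie_complete(&desc->async_tx);
		list_add_tail(&desc->node, &chan->done_list);
	}
}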

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-09  6:35 [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
  2015-06-16 19:19 ` Nicolae Rosia
  2015-06-19 16:49 ` Jeremy Trimble
@ 2015-06-22 10:49 ` Vinod Koul
  2015-06-24 17:12   ` Appana Durga Kedareswara Rao
  2 siblings, 1 reply; 12+ messages in thread
From: Vinod Koul @ 2015-06-22 10:49 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, michal.simek, soren.brinkmann, appanad, anirudh,
	punnaia, dmaengine, linux-arm-kernel, linux-kernel,
	Srikanth Thokala

On Tue, Jun 09, 2015 at 12:05:36PM +0530, Kedareswara rao Appana wrote:
> This is the driver for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
> 
> Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
> The deivce tree doc got applied in the slave-dmaengine.git.
> 
> This patch is rebased on the commit
> Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
same stuff everywhere, sigh

> +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> +	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> +	enum dma_transfer_direction direction, unsigned long flags,
> +	void *context)
> +{
> +	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +	struct xilinx_dma_tx_descriptor *desc;
> +	struct xilinx_dma_tx_segment *segment;
> +	struct xilinx_dma_desc_hw *hw;
> +	u32 *app_w = (u32 *)context;
> +	struct scatterlist *sg;
> +	size_t copy, sg_used;
> +	int i;
> +
> +	if (!is_slave_direction(direction))
> +		return NULL;
> +
> +	/* Allocate a transaction descriptor. */
> +	desc = xilinx_dma_alloc_tx_descriptor(chan);
> +	if (!desc)
> +		return NULL;
> +
> +	desc->direction = direction;
> +	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +	desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> +
> +	/* Build transactions using information in the scatter gather list */
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		sg_used = 0;
> +
> +		/* Loop until the entire scatterlist entry is used */
> +		while (sg_used < sg_dma_len(sg)) {
> +
> +			/* Get a free segment */
> +			segment = xilinx_dma_alloc_tx_segment(chan);
> +			if (!segment)
> +				goto error;
> +
> +			/*
> +			 * Calculate the maximum number of bytes to transfer,
> +			 * making sure it is less than the hw limit
> +			 */
> +			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> +				     XILINX_DMA_MAX_TRANS_LEN);
> +			hw = &segment->hw;
> +
> +			/* Fill in the descriptor */
> +			hw->buf_addr = sg_dma_address(sg) + sg_used;
> +
> +			hw->control = copy;
> +
> +			if (direction == DMA_MEM_TO_DEV) {
> +				if (app_w)
> +					memcpy(hw->app, app_w, sizeof(u32) *
> +					       XILINX_DMA_NUM_APP_WORDS);
> +
> +				/*
> +				 * For the first DMA_MEM_TO_DEV transfer,
> +				 * set SOP
> +				 */
> +				if (!i)
> +					hw->control |= XILINX_DMA_BD_SOP;
> +			}
> +
> +			sg_used += copy;
> +
> +			/*
> +			 * Insert the segment into the descriptor segments
> +			 * list.
> +			 */
> +			list_add_tail(&segment->node, &desc->segments);
> +		}
> +	}
> +
> +	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
> +	if (direction == DMA_MEM_TO_DEV) {
> +		segment = list_last_entry(&desc->segments,
> +					  struct xilinx_dma_tx_segment,
> +					  node);
> +		segment->hw.control |= XILINX_DMA_BD_EOP;
> +	}
Where is the hardware address programmed? I can see you are using the sg
list passed in to program one side of the transfer; where is the other
side programmed?

> +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> +				  struct xilinx_dma_config *cfg)
> +{
> +	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +	u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
> +
> +	if (!xilinx_dma_is_idle(chan))
> +		return -EBUSY;
> +
> +	if (cfg->reset)
> +		return xilinx_dma_chan_reset(chan);
> +
> +	if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
> +		reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
> +
> +	if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
> +		reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
> +
> +	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(xilinx_dma_channel_set_config);
Same question here as the other driver, why reset, why not _GPL here etc
etc.
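
For context, a client driver would drive the exported helper roughly as in
the sketch below (hedged; the coalescing and delay values are illustrative
assumptions only):

#include <linux/dma/xilinx_dma.h>

/*
 * Hedged sketch of a slave driver tuning its channel through the exported
 * helper; the values chosen here are illustrative assumptions.
 */
static int example_tune_channel(struct dma_chan *chan)
{
	struct xilinx_dma_config cfg = {
		.coalesc = 4,	/* raise the IOC interrupt every 4 completed BDs */
		.delay   = 0,	/* no inter-packet delay interrupt */
		.reset   = 0,	/* do not reset the channel */
	};

	/* returns -EBUSY if the channel is not idle */
	return xilinx_dma_channel_set_config(chan, &cfg);
}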

Also, what is the difference between these two drivers? Why can't we have
one driver for both?

> +static int xilinx_dma_probe(struct platform_device *pdev)
> +{
> +	struct xilinx_dma_device *xdev;
> +	struct device_node *child, *node;
> +	struct resource *res;
> +	int i, ret;
> +
> +	xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
> +	if (!xdev)
> +		return -ENOMEM;
> +
> +	xdev->dev = &(pdev->dev);
> +	INIT_LIST_HEAD(&xdev->common.channels);
> +
> +	node = pdev->dev.of_node;
> +
> +	/* Map the registers */
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	xdev->regs = devm_ioremap_resource(&pdev->dev, res);
> +	if (IS_ERR(xdev->regs))
> +		return PTR_ERR(xdev->regs);
> +
> +	/* Check if SG is enabled */
> +	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> +
> +	/* Axi DMA only do slave transfers */
> +	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
> +	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
> +	xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> +	xdev->common.device_terminate_all = xilinx_dma_terminate_all;
> +	xdev->common.device_issue_pending = xilinx_dma_issue_pending;
> +	xdev->common.device_alloc_chan_resources =
> +		xilinx_dma_alloc_chan_resources;
> +	xdev->common.device_free_chan_resources =
> +		xilinx_dma_free_chan_resources;
> +	xdev->common.device_tx_status = xilinx_dma_tx_status;
> +	xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +	xdev->common.dev = &pdev->dev;
no dma_slave_config handler?
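
A minimal device_config hook could be wired in roughly as sketched below
(hedged; the chan->cfg member is an assumption and is not part of the
posted patch):

/*
 * Hedged sketch of a dma_slave_config handler; chan->cfg is an assumed
 * member that prep/start code would later consult, it is not in the
 * posted v7 patch.
 */
static int xilinx_dma_device_config(struct dma_chan *dchan,
				    struct dma_slave_config *config)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);

	chan->cfg = *config;	/* cache peripheral address, widths, etc. */

	return 0;
}

/* and in probe, next to the other callbacks: */
/* xdev->common.device_config = xilinx_dma_device_config; */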

 

-- 
~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-19 16:49 ` Jeremy Trimble
@ 2015-06-24 17:12   ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 12+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-06-24 17:12 UTC (permalink / raw)
  To: Jeremy Trimble
  Cc: Vinod Koul, dan.j.williams, Michal Simek, Soren Brinkmann,
	Anirudha Sarangi, dmaengine, linux-arm-kernel, linux-kernel

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset="utf-8", Size: 4422 bytes --]

Hi Jeremy Trimble,

> -----Original Message-----
> From: Jeremy Trimble [mailto:jeremy.trimble@gmail.com]
> Sent: Friday, June 19, 2015 10:19 PM
> To: Appana Durga Kedareswara Rao
> Cc: Vinod Koul; dan.j.williams@intel.com; Michal Simek; Soren Brinkmann;
> Appana Durga Kedareswara Rao; Anirudha Sarangi; Punnaiah Choudary
> Kalluri; dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; Srikanth Thokala
> Subject: Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> > +/**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer
> > + */
> > +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> > +
> > +       if (chan->err)
> > +               return;
> > +
> > +       if (list_empty(&chan->pending_list))
> > +               return;
> > +
> > +       if (!chan->idle)
> > +               return;
> > +
> > +       desc = list_first_entry(&chan->pending_list,
> > +                               struct xilinx_dma_tx_descriptor, node);
> > +
> > +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> > +           !xilinx_dma_is_idle(chan)) {
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +               goto out_free_desc;
> > +       }
> > +
> > +       if (chan->has_sg) {
> > +               head = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> > +       }
> > +
> > +       /* Enable interrupts */
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       xilinx_dma_start(chan);
> > +       if (chan->err)
> > +               return;
> > +
> > +       /* Start the transfer */
> > +       if (chan->has_sg) {
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +       } else {
> > +               struct xilinx_dma_tx_segment *segment;
> > +               struct xilinx_dma_desc_hw *hw;
> > +
> > +               segment = list_first_entry(&desc->segments,
> > +                                          struct xilinx_dma_tx_segment, node);
> > +               hw = &segment->hw;
> > +
> > +               if (desc->direction == DMA_MEM_TO_DEV)
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> > +                                      hw->buf_addr);
> > +               else
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> > +                                      hw->buf_addr);
> > +
> > +               /* Start the transfer */
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> > +       }
> > +
> > +out_free_desc:
> > +       list_del(&desc->node);
> > +       chan->idle = false;
> > +       chan->active_desc = desc;
> > +}
>
> What prevents chan->active_desc from being overwritten before the
> previous descriptor is transferred to done_list.  For instance, if two transfers
> are queued with issue_pending() in quick succession (such that
> xilinx_dma_start_transfer() is called twice before the interrupt for the first
> transfer occurs), won't the first descriptor be overwritten and lost?

Yes, there are some flaws in this implementation. I will fix them in the next version of the patch.
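
One possible direction (just a rough sketch here, not the actual fix for the next
version; the active_list field and the helper name are only illustrative) is to keep
started descriptors on a per-channel list instead of in the single chan->active_desc
pointer. start_transfer() would then move the descriptor from pending_list to
active_list, and the completion path would drain active_list into done_list:

static void xilinx_dma_chan_desc_done(struct xilinx_dma_chan *chan)
{
        struct xilinx_dma_tx_descriptor *desc, *next;

        /* Called with chan->lock held from the interrupt/tasklet path. */
        list_for_each_entry_safe(desc, next, &chan->active_list, node) {
                dma_cookie_complete(&desc->async_tx);
                list_move_tail(&desc->node, &chan->done_list);
        }
}

That way a second call to xilinx_dma_start_transfer() appends to active_list
instead of overwriting the descriptor that is still in flight.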

Regards,
Kedar.



^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-22 10:49 ` Vinod Koul
@ 2015-06-24 17:12   ` Appana Durga Kedareswara Rao
  2015-06-27 14:40     ` Vinod Koul
  0 siblings, 1 reply; 12+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-06-24 17:12 UTC (permalink / raw)
  To: Vinod Koul
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, Anirudha Sarangi,
	Punnaiah Choudary Kalluri, dmaengine, linux-arm-kernel,
	linux-kernel, Srikanth Thokala

Hi Vinod,

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@intel.com]
> Sent: Monday, June 22, 2015 4:20 PM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; Michal Simek; Soren Brinkmann; Appana Durga
> Kedareswara Rao; Anirudha Sarangi; Punnaiah Choudary Kalluri;
> dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
> kernel@vger.kernel.org; Srikanth Thokala
> Subject: Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> On Tue, Jun 09, 2015 at 12:05:36PM +0530, Kedareswara rao Appana wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> > The deivce tree doc got applied in the slave-dmaengine.git.
> >
> > This patch is rebased on the commit
> > Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
> same stuff everywhere, sigh

Ok will fix it in the next version of the patch.

>
> > +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> > +   struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> > +   enum dma_transfer_direction direction, unsigned long flags,
> > +   void *context)
> > +{
> > +   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +   struct xilinx_dma_tx_descriptor *desc;
> > +   struct xilinx_dma_tx_segment *segment;
> > +   struct xilinx_dma_desc_hw *hw;
> > +   u32 *app_w = (u32 *)context;
> > +   struct scatterlist *sg;
> > +   size_t copy, sg_used;
> > +   int i;
> > +
> > +   if (!is_slave_direction(direction))
> > +           return NULL;
> > +
> > +   /* Allocate a transaction descriptor. */
> > +   desc = xilinx_dma_alloc_tx_descriptor(chan);
> > +   if (!desc)
> > +           return NULL;
> > +
> > +   desc->direction = direction;
> > +   dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> > +   desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> > +
> > +   /* Build transactions using information in the scatter gather list */
> > +   for_each_sg(sgl, sg, sg_len, i) {
> > +           sg_used = 0;
> > +
> > +           /* Loop until the entire scatterlist entry is used */
> > +           while (sg_used < sg_dma_len(sg)) {
> > +
> > +                   /* Get a free segment */
> > +                   segment = xilinx_dma_alloc_tx_segment(chan);
> > +                   if (!segment)
> > +                           goto error;
> > +
> > +                   /*
> > +                    * Calculate the maximum number of bytes to
> transfer,
> > +                    * making sure it is less than the hw limit
> > +                    */
> > +                   copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> > +                                XILINX_DMA_MAX_TRANS_LEN);
> > +                   hw = &segment->hw;
> > +
> > +                   /* Fill in the descriptor */
> > +                   hw->buf_addr = sg_dma_address(sg) + sg_used;
> > +
> > +                   hw->control = copy;
> > +
> > +                   if (direction == DMA_MEM_TO_DEV) {
> > +                           if (app_w)
> > +                                   memcpy(hw->app, app_w,
> sizeof(u32) *
> > +
> XILINX_DMA_NUM_APP_WORDS);
> > +
> > +                           /*
> > +                            * For the first DMA_MEM_TO_DEV transfer,
> > +                            * set SOP
> > +                            */
> > +                           if (!i)
> > +                                   hw->control |=
> XILINX_DMA_BD_SOP;
> > +                   }
> > +
> > +                   sg_used += copy;
> > +
> > +                   /*
> > +                    * Insert the segment into the descriptor segments
> > +                    * list.
> > +                    */
> > +                   list_add_tail(&segment->node, &desc->segments);
> > +           }
> > +   }
> > +
> > +   /* For the last DMA_MEM_TO_DEV transfer, set EOP */
> > +   if (direction == DMA_MEM_TO_DEV) {
> > +           segment = list_last_entry(&desc->segments,
> > +                                     struct xilinx_dma_tx_segment,
> > +                                     node);
> > +           segment->hw.control |= XILINX_DMA_BD_EOP;
> > +   }
> where is the hardware addr programmed? I can see you are using the sg list
> passed for programming one side of a transfer; where is the other side
> programmed?

The actual programming happens in start_transfer() (i.e. from the issue_pending API).
There are two modes, and all the h/w addresses are configured in the start_transfer API.

In simple transfer mode the write below triggers the transfer:
dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
               hw->control & XILINX_DMA_MAX_TRANS_LEN);

In SG mode the write below triggers the transfer:
dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);

There are two channels, MM2S (memory to device) and S2MM (device to memory).
--> In the MM2S case we need to set SOF (start of frame) on the first BD and EOF (end of frame) on the last BD.
--> In the S2MM case there is no need to set SOF/EOF. Once we get the IOC interrupt we mark the cookie as
complete and call the user callback; the user then checks the data there.

Please let me know if this is still not clear.
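
For completeness, the slave/client side drives this through the standard dmaengine
calls; a minimal MM2S sketch (the function name and error codes are only
illustrative, not part of the patch):

static int example_mm2s_send(struct dma_chan *tx_chan,
                             struct scatterlist *sgl, unsigned int sg_len)
{
        struct dma_async_tx_descriptor *txd;
        dma_cookie_t cookie;

        /*
         * prep_slave_sg() builds the BD chain and, for DMA_MEM_TO_DEV,
         * sets SOP on the first BD and EOP on the last one.
         */
        txd = dmaengine_prep_slave_sg(tx_chan, sgl, sg_len,
                                      DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
        if (!txd)
                return -ENOMEM;

        cookie = dmaengine_submit(txd);
        if (dma_submit_error(cookie))
                return -EINVAL;

        /* Ends up in start_transfer(), which writes CURDESC/TAILDESC or BTT. */
        dma_async_issue_pending(tx_chan);
        return 0;
}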

>
> > +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> > +                             struct xilinx_dma_config *cfg)
> > +{
> > +   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +   u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
> > +
> > +   if (!xilinx_dma_is_idle(chan))
> > +           return -EBUSY;
> > +
> > +   if (cfg->reset)
> > +           return xilinx_dma_chan_reset(chan);
> > +
> > +   if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
> > +           reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
> > +
> > +   if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
> > +           reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
> > +
> > +   dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
> > +
> > +   return 0;
> > +}
> > +EXPORT_SYMBOL(xilinx_dma_channel_set_config);
> Same question here as the other driver, why reset, why not _GPL here etc
> etc.

OK, I will address the comments in the next version of the patch.
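
For context, a client is expected to call the exported helper roughly like this
(illustrative sketch only; the wrapper function and the values are made up, and
the fields are the ones used in the code quoted above):

#include <linux/dmaengine.h>
#include <linux/dma/xilinx_dma.h>

/* Sketch: raise interrupt coalescing on an idle channel. */
static int example_tune_channel(struct dma_chan *chan)
{
        struct xilinx_dma_config cfg = {
                .coalesc = 4,   /* interrupt once every 4 BDs */
                .delay   = 0,   /* no delay timer */
                .reset   = 0,   /* do not reset the channel */
        };

        return xilinx_dma_channel_set_config(chan, &cfg);
}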

>
> Also what is differenace betwene these two drivers, why cant we have one
> driver for both?

I agree with you; initially we did have a common driver with an implementation similar to
what you are suggesting. Later on, these being soft IPs, new features were added and the IPs
diversified. As an example, this driver has residue calculation, which is not applicable to
the other driver, and the way interrupts are handled is completely different.
Briefly, they are two completely different IPs with different register sets and descriptor formats.
Eventually it became too complex to maintain the common driver, as the code became messy
with lots of conditionals. The validation effort is the main concern, since every change
to either IP forces us to re-test the complete feature set of both IPs. So we were convinced
that separating the drivers is the better approach, and it only costs a few additional lines of
common code.

>
> > +static int xilinx_dma_probe(struct platform_device *pdev) {
> > +   struct xilinx_dma_device *xdev;
> > +   struct device_node *child, *node;
> > +   struct resource *res;
> > +   int i, ret;
> > +
> > +   xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
> > +   if (!xdev)
> > +           return -ENOMEM;
> > +
> > +   xdev->dev = &(pdev->dev);
> > +   INIT_LIST_HEAD(&xdev->common.channels);
> > +
> > +   node = pdev->dev.of_node;
> > +
> > +   /* Map the registers */
> > +   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> > +   xdev->regs = devm_ioremap_resource(&pdev->dev, res);
> > +   if (IS_ERR(xdev->regs))
> > +           return PTR_ERR(xdev->regs);
> > +
> > +   /* Check if SG is enabled */
> > +   xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> > +
> > +   /* Axi DMA only do slave transfers */
> > +   dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
> > +   dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
> > +   xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> > +   xdev->common.device_terminate_all = xilinx_dma_terminate_all;
> > +   xdev->common.device_issue_pending = xilinx_dma_issue_pending;
> > +   xdev->common.device_alloc_chan_resources =
> > +           xilinx_dma_alloc_chan_resources;
> > +   xdev->common.device_free_chan_resources =
> > +           xilinx_dma_free_chan_resources;
> > +   xdev->common.device_tx_status = xilinx_dma_tx_status;
> > +   xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
> BIT(DMA_MEM_TO_DEV);
> > +   xdev->common.residue_granularity =
> DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +   xdev->common.dev = &pdev->dev;
> no dma_slave_config handler?

There is no need for this callback. Earlier we were doing terminate_all in the dma_slave_config path,
but now there is a separate API for that, so this callback is not needed.

Thanks for the comments.

Regards,
Kedar.

>
>
>
> --
> ~Vinod



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-24 17:12   ` Appana Durga Kedareswara Rao
@ 2015-06-27 14:40     ` Vinod Koul
  2015-06-27 14:44       ` Nicolae Rosia
  2015-07-07 15:31       ` Appana Durga Kedareswara Rao
  0 siblings, 2 replies; 12+ messages in thread
From: Vinod Koul @ 2015-06-27 14:40 UTC (permalink / raw)
  To: Appana Durga Kedareswara Rao
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, Anirudha Sarangi,
	Punnaiah Choudary Kalluri, dmaengine, linux-arm-kernel,
	linux-kernel, Srikanth Thokala

On Wed, Jun 24, 2015 at 05:12:13PM +0000, Appana Durga Kedareswara Rao wrote:
> > where is the hardware addr programmed? I can see you are using sg list
> > passed for porgramming one side of a transfer where is other side
> > programmed?
> 
> The actual programming happens in the start_transfer(I mean in issue_pending) API
> There are two modes
> 
> All the h/w addresses are configured in the start_transfer API.
> 
> In simple transfer Mode the below write triggers the transfer
> dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
>                                hw->control & XILINX_DMA_MAX_TRANS_LEN);
> 
> In SG Mode the below write triggers the transfer.
> dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> 
> There are two Channels MM2S (Memory to device) and S2MM (Device to Memory) channel.
> --> In MM2S case we need to configure the SOF (Start of frame) for the first BD and we need to set EOF(end of frame) for the last BD
> --> For S2MM case no need to configure SOF and EOF. Once we got the IOC interrupt will call mark the cookie as complete and will
> Call the user callback. There users checks for the data.
> 
> Please let me know if you are not clear.
No, sorry, I am not...

I asked how the device address is configured. For both MM2S and S2MM you are
using the sg list for the memory address; where are you getting the device address?
Are you assuming/hardcoding it or getting it somehow, and if so how?

> > no dma_slave_config handler?
> No need of this callback earlier in the dma_slave_config we are doing terminate_all
> Now we have a separate API for that so no need to have this call back.

The question was on parameters

-- 
~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-27 14:40     ` Vinod Koul
@ 2015-06-27 14:44       ` Nicolae Rosia
  2015-06-28 14:45         ` Vinod Koul
  2015-07-07 15:31       ` Appana Durga Kedareswara Rao
  1 sibling, 1 reply; 12+ messages in thread
From: Nicolae Rosia @ 2015-06-27 14:44 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Appana Durga Kedareswara Rao, linux-kernel, Srikanth Thokala,
	Michal Simek, Soren Brinkmann, Anirudha Sarangi, dmaengine,
	Punnaiah Choudary Kalluri, dan.j.williams, linux-arm-kernel

On Sat, Jun 27, 2015 at 5:40 PM, Vinod Koul <vinod.koul@intel.com> wrote:
[...]
>> Please let me know if you are not clear.
> No sorry am not...
>
> I asked how the device address in configured. For both MM2S S2MM you are
> using sg for memory address, where are you getting device adress, are you
> assuming/hardcoding or getting somehow, if so how?
As the name says, one end is memory (MM) and the other end is an AXI4
Stream Bus (S) which has no concept of memory address.
So yes, it is hardcoded at design time.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-27 14:44       ` Nicolae Rosia
@ 2015-06-28 14:45         ` Vinod Koul
  2015-06-28 15:06           ` Nicolae Rosia
  0 siblings, 1 reply; 12+ messages in thread
From: Vinod Koul @ 2015-06-28 14:45 UTC (permalink / raw)
  To: Nicolae Rosia
  Cc: Appana Durga Kedareswara Rao, linux-kernel, Srikanth Thokala,
	Michal Simek, Soren Brinkmann, Anirudha Sarangi, dmaengine,
	Punnaiah Choudary Kalluri, dan.j.williams, linux-arm-kernel

On Sat, Jun 27, 2015 at 05:44:38PM +0300, Nicolae Rosia wrote:
> On Sat, Jun 27, 2015 at 5:40 PM, Vinod Koul <vinod.koul@intel.com> wrote:
> [...]
> >> Please let me know if you are not clear.
> > No sorry am not...
> >
> > I asked how the device address in configured. For both MM2S S2MM you are
> > using sg for memory address, where are you getting device adress, are you
> > assuming/hardcoding or getting somehow, if so how?
> As the name says, one end is memory (MM) and the other end is an AXI4
> Stream Bus (S) which has no concept of memory address.
> So yes, it is hardcoded at design time.
So where does the data go at the end of the stream bus, and who configures that?
Shouldn't all this be at least documented...

-- 
~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-28 14:45         ` Vinod Koul
@ 2015-06-28 15:06           ` Nicolae Rosia
  0 siblings, 0 replies; 12+ messages in thread
From: Nicolae Rosia @ 2015-06-28 15:06 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Appana Durga Kedareswara Rao, linux-kernel, Srikanth Thokala,
	Michal Simek, Soren Brinkmann, Anirudha Sarangi, dmaengine,
	Punnaiah Choudary Kalluri, dan.j.williams, linux-arm-kernel

Hi,
On Sun, Jun 28, 2015 at 5:45 PM, Vinod Koul <vinod.koul@intel.com> wrote:
[...]
>> > I asked how the device address in configured. For both MM2S S2MM you are
>> > using sg for memory address, where are you getting device adress, are you
>> > assuming/hardcoding or getting somehow, if so how?
>> As the name says, one end is memory (MM) and the other end is an AXI4
>> Stream Bus (S) which has no concept of memory address.
>> So yes, it is hardcoded at design time.
> So where does the data go at the end of stream bus, who configures that?
> Shouldnt all this be at least documented...

You make the connection at design time. In the Zynq-7000 case, there is a
dual-core Cortex-A9 coupled with an FPGA.
While designing the FPGA part, you instantiate a Xilinx AXI DMA; on one side
you connect it to an AXI4-Lite bus (which is memory mapped), and on the other
side you connect your custom peripheral using an AXI4-Stream bus, which has no
concept of addresses.
Here's a picture which depicts all of this [0].
Does this clear things up?

[0] http://www.fpgadeveloper.com/wp-content/uploads/2014/08/fpga_developer_20140806_130447.png

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-06-27 14:40     ` Vinod Koul
  2015-06-27 14:44       ` Nicolae Rosia
@ 2015-07-07 15:31       ` Appana Durga Kedareswara Rao
  1 sibling, 0 replies; 12+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-07-07 15:31 UTC (permalink / raw)
  To: Vinod Koul
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, Anirudha Sarangi,
	Punnaiah Choudary Kalluri, dmaengine, linux-arm-kernel,
	linux-kernel, Srikanth Thokala,
	Nicolae Rosia (nicolae.rosia@gmail.com)

Hi Vinod,

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@intel.com]
> Sent: Saturday, June 27, 2015 8:11 PM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; Michal Simek; Soren Brinkmann; Anirudha
> Sarangi; Punnaiah Choudary Kalluri; dmaengine@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Srikanth Thokala
> Subject: Re: [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
> 
> On Wed, Jun 24, 2015 at 05:12:13PM +0000, Appana Durga Kedareswara Rao
> wrote:
> > > where is the hardware addr programmed? I can see you are using sg
> > > list passed for porgramming one side of a transfer where is other
> > > side programmed?
> >
> > The actual programming happens in the start_transfer(I mean in
> > issue_pending) API There are two modes
> >
> > All the h/w addresses are configured in the start_transfer API.
> >
> > In simple transfer Mode the below write triggers the transfer
> > dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> >                                hw->control &
> > XILINX_DMA_MAX_TRANS_LEN);
> >
> > In SG Mode the below write triggers the transfer.
> > dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> >
> > There are two Channels MM2S (Memory to device) and S2MM (Device to
> Memory) channel.
> > --> In MM2S case we need to configure the SOF (Start of frame) for the
> > --> first BD and we need to set EOF(end of frame) for the last BD For
> > --> S2MM case no need to configure SOF and EOF. Once we got the IOC
> > --> interrupt will call mark the cookie as complete and will
> > Call the user callback. There users checks for the data.
> >
> > Please let me know if you are not clear.
> No sorry am not...
> 
> I asked how the device address in configured. For both MM2S S2MM you are
> using sg for memory address, where are you getting device adress, are you
> assuming/hardcoding or getting somehow, if so how?

As Nicolae Rosia explained, there is no concept of a device address for this DMA;
the connections are made at design time.
http://www.fpgadeveloper.com/wp-content/uploads/2014/08/fpga_developer_20140806_130447.png


> 
> > > no dma_slave_config handler?
> > No need of this callback earlier in the dma_slave_config we are doing
> > terminate_all Now we have a separate API for that so no need to have this
> call back.
> 
> The question was on parameters

There are no address-related parameters to configure for this DMA.
That's why there is no need for a dma_slave_config handler.

Regards,
Kedar.

> 
> --
> ~Vinod

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2015-07-07 15:46 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-09  6:35 [PATCH v7] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
2015-06-16 19:19 ` Nicolae Rosia
2015-06-18 10:16   ` Appana Durga Kedareswara Rao
2015-06-19 16:49 ` Jeremy Trimble
2015-06-24 17:12   ` Appana Durga Kedareswara Rao
2015-06-22 10:49 ` Vinod Koul
2015-06-24 17:12   ` Appana Durga Kedareswara Rao
2015-06-27 14:40     ` Vinod Koul
2015-06-27 14:44       ` Nicolae Rosia
2015-06-28 14:45         ` Vinod Koul
2015-06-28 15:06           ` Nicolae Rosia
2015-07-07 15:31       ` Appana Durga Kedareswara Rao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).