linux-kernel.vger.kernel.org archive mirror
* [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
@ 2015-03-02 17:55 Kedareswara rao Appana
  2015-03-02 18:41 ` Josh Cartwright
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Kedareswara rao Appana @ 2015-03-02 17:55 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann
  Cc: dmaengine, linux-arm-kernel, linux-kernel, appanad, anirudh,
	svemula, Srikanth Thokala

This is the driver for the AXI Direct Memory Access (AXI DMA)
core, which is a soft Xilinx IP core that provides high-
bandwidth direct memory access between memory and AXI4-Stream
type target peripherals.

Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
This patch is rebased on top of "dma: xilinx-dma: move header file
to common location".

The device tree doc has already been applied to slave-dmaengine.git.

Changes in v5:
- Moved the xilinx_dma.h header file to include/linux/dma/xilinx_dma.h.
Changes in v4:
- Added a direction field to the DMA descriptor structure and removed it
  from the channel structure to avoid duplication.
- Check for the DMA idle condition before changing the configuration.
- Residue is now calculated in complete_descriptor() and reported to the
  slave driver.

Changes in v3:
- Rebased on 3.16-rc7

Changes in v2:
- Simplified the logic to set SOP and APP words in prep_slave_sg().
- Corrected function description comments to match the return type.
- Fixed some minor comments as suggested by Andy, Thanks.

 drivers/dma/Kconfig             |   13 +
 drivers/dma/xilinx/Makefile     |    1 +
 drivers/dma/xilinx/xilinx_dma.c | 1242 +++++++++++++++++++++++++++++++++++++++
 include/linux/dma/xilinx_dma.h  |   14 +
 4 files changed, 1270 insertions(+)
 create mode 100644 drivers/dma/xilinx/xilinx_dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index a874b6e..3271f47 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -425,6 +425,19 @@ config IMG_MDC_DMA
 	help
 	  Enable support for the IMG multi-threaded DMA controller (MDC).
 
+config XILINX_DMA
+	tristate "Xilinx AXI DMA Engine"
+	depends on (ARCH_ZYNQ || MICROBLAZE)
+	select DMA_ENGINE
+	help
+	  Enable support for Xilinx AXI DMA Soft IP.
+
+	This engine provides high-bandwidth direct memory access
+	between memory and AXI4-Stream type target peripherals.
+	It has two stream interfaces/channels, Memory Mapped to
+	Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
+	data transfers.
+
 config DMA_ENGINE
 	bool
 
diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
index 3c4e9f2..6224a49 100644
--- a/drivers/dma/xilinx/Makefile
+++ b/drivers/dma/xilinx/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
+obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
new file mode 100644
index 0000000..fdf2d54
--- /dev/null
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -0,0 +1,1242 @@
+/*
+ * DMA driver for Xilinx DMA Engine
+ *
+ * Copyright (C) 2010 - 2014 Xilinx, Inc. All rights reserved.
+ *
+ * Based on the Freescale DMA driver.
+ *
+ * Description:
+ *  The AXI DMA is a soft IP that provides high-bandwidth Direct Memory
+ *  Access between memory and AXI4-Stream-type target peripherals. It can be
+ *  configured with one or two channels; when configured with two channels,
+ *  one transmits data from memory to a device and the other receives data
+ *  from a device.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/bitops.h>
+#include <linux/dma/xilinx_dma.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/slab.h>
+
+#include "../dmaengine.h"
+
+/* Register Offsets */
+#define XILINX_DMA_REG_CONTROL		0x00
+#define XILINX_DMA_REG_STATUS		0x04
+#define XILINX_DMA_REG_CURDESC		0x08
+#define XILINX_DMA_REG_TAILDESC		0x10
+#define XILINX_DMA_REG_SRCADDR		0x18
+#define XILINX_DMA_REG_DSTADDR		0x20
+#define XILINX_DMA_REG_BTT		0x28
+
+/* Channel/Descriptor Offsets */
+#define XILINX_DMA_MM2S_CTRL_OFFSET	0x00
+#define XILINX_DMA_S2MM_CTRL_OFFSET	0x30
+
+/* General register bits definitions */
+#define XILINX_DMA_CR_RUNSTOP_MASK	BIT(0)
+#define XILINX_DMA_CR_RESET_MASK	BIT(2)
+
+#define XILINX_DMA_CR_DELAY_SHIFT	24
+#define XILINX_DMA_CR_COALESCE_SHIFT	16
+
+#define XILINX_DMA_CR_DELAY_MAX		GENMASK(7, 0)
+#define XILINX_DMA_CR_COALESCE_MAX	GENMASK(7, 0)
+
+#define XILINX_DMA_SR_HALTED_MASK	BIT(0)
+#define XILINX_DMA_SR_IDLE_MASK		BIT(1)
+
+#define XILINX_DMA_XR_IRQ_IOC_MASK	BIT(12)
+#define XILINX_DMA_XR_IRQ_DELAY_MASK	BIT(13)
+#define XILINX_DMA_XR_IRQ_ERROR_MASK	BIT(14)
+#define XILINX_DMA_XR_IRQ_ALL_MASK	GENMASK(14, 12)
+
+/* BD definitions */
+#define XILINX_DMA_BD_STS_ALL_MASK	GENMASK(31, 28)
+#define XILINX_DMA_BD_SOP		BIT(27)
+#define XILINX_DMA_BD_EOP		BIT(26)
+
+/* Hw specific definitions */
+#define XILINX_DMA_MAX_CHANS_PER_DEVICE	0x2
+#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
+
+/* Delay loop counter to prevent hardware failure */
+#define XILINX_DMA_LOOP_COUNT		1000000
+
+/* Maximum number of Descriptors */
+#define XILINX_DMA_NUM_DESCS		64
+#define XILINX_DMA_NUM_APP_WORDS	5
+
+/**
+ * struct xilinx_dma_desc_hw - Hardware Descriptor
+ * @next_desc: Next Descriptor Pointer @0x00
+ * @pad1: Reserved @0x04
+ * @buf_addr: Buffer address @0x08
+ * @pad2: Reserved @0x0C
+ * @pad3: Reserved @0x10
+ * @pad4: Reserved @0x14
+ * @control: Control field @0x18
+ * @status: Status field @0x1C
+ * @app: APP Fields @0x20 - 0x30
+ */
+struct xilinx_dma_desc_hw {
+	u32 next_desc;
+	u32 pad1;
+	u32 buf_addr;
+	u32 pad2;
+	u32 pad3;
+	u32 pad4;
+	u32 control;
+	u32 status;
+	u32 app[XILINX_DMA_NUM_APP_WORDS];
+} __aligned(64);
+
+/**
+ * struct xilinx_dma_tx_segment - Descriptor segment
+ * @hw: Hardware descriptor
+ * @node: Node in the descriptor segments list
+ * @phys: Physical address of segment
+ */
+struct xilinx_dma_tx_segment {
+	struct xilinx_dma_desc_hw hw;
+	struct list_head node;
+	dma_addr_t phys;
+} __aligned(64);
+
+/**
+ * struct xilinx_dma_tx_descriptor - Per Transaction structure
+ * @async_tx: Async transaction descriptor
+ * @segments: TX segments list
+ * @node: Node in the channel descriptors list
+ * @direction: Transfer direction
+ */
+struct xilinx_dma_tx_descriptor {
+	struct dma_async_tx_descriptor async_tx;
+	struct list_head segments;
+	struct list_head node;
+	enum dma_transfer_direction direction;
+};
+
+/**
+ * struct xilinx_dma_chan - Driver specific DMA channel structure
+ * @xdev: Driver specific device structure
+ * @ctrl_offset: Control registers offset
+ * @lock: Descriptor operation lock
+ * @pending_list: Descriptors waiting
+ * @active_desc: Active descriptor
+ * @allocated_desc: Allocated descriptor
+ * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
+ * @common: DMA common channel
+ * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
+ * @dev: The dma device
+ * @irq: Channel IRQ
+ * @id: Channel ID
+ * @has_sg: Support scatter transfers
+ * @err: Channel has errors
+ * @tasklet: Cleanup work after irq
+ * @residue: Residue
+ */
+struct xilinx_dma_chan {
+	struct xilinx_dma_device *xdev;
+	u32 ctrl_offset;
+	spinlock_t lock;
+	struct list_head pending_list;
+	struct xilinx_dma_tx_descriptor *active_desc;
+	struct xilinx_dma_tx_descriptor *allocated_desc;
+	struct list_head done_list;
+	struct list_head free_seg_list;
+	struct dma_chan common;
+	struct xilinx_dma_tx_segment *seg_v;
+	dma_addr_t seg_p;
+	struct device *dev;
+	int irq;
+	int id;
+	bool has_sg;
+	int err;
+	struct tasklet_struct tasklet;
+	u32 residue;
+};
+
+/**
+ * struct xilinx_dma_device - DMA device structure
+ * @regs: I/O mapped base address
+ * @dev: Device Structure
+ * @common: DMA device structure
+ * @chan: Driver specific DMA channel
+ * @has_sg: Specifies whether Scatter-Gather is present or not
+ */
+struct xilinx_dma_device {
+	void __iomem *regs;
+	struct device *dev;
+	struct dma_device common;
+	struct xilinx_dma_chan *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
+	bool has_sg;
+};
+
+/* Macros */
+#define to_xilinx_chan(chan) \
+	container_of(chan, struct xilinx_dma_chan, common)
+#define to_dma_tx_descriptor(tx) \
+	container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
+
+/* IO accessors */
+static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg)
+{
+	return ioread32(chan->xdev->regs + reg);
+}
+
+static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg, u32 value)
+{
+	iowrite32(value, chan->xdev->regs + reg);
+}
+
+static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32 reg)
+{
+	return dma_read(chan, chan->ctrl_offset + reg);
+}
+
+static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
+				  u32 value)
+{
+	dma_write(chan, chan->ctrl_offset + reg, value);
+}
+
+static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32 reg, u32 clr)
+{
+	dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr);
+}
+
+static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32 reg, u32 set)
+{
+	dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set);
+}
+
+/* -----------------------------------------------------------------------------
+ * Descriptors and segments alloc and free
+ */
+
+/**
+ * xilinx_dma_alloc_tx_segment - Allocate transaction segment
+ * @chan: Driver specific dma channel
+ *
+ * Return: The allocated segment on success and NULL on failure.
+ */
+static struct xilinx_dma_tx_segment *
+xilinx_dma_alloc_tx_segment(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_segment *segment = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	if (!list_empty(&chan->free_seg_list)) {
+		segment = list_first_entry(&chan->free_seg_list,
+					   struct xilinx_dma_tx_segment,
+					   node);
+		list_del(&segment->node);
+	}
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return segment;
+}
+
+/**
+ * xilinx_dma_clean_hw_desc - Clean hardware descriptor
+ * @hw: HW descriptor to clean
+ */
+static void xilinx_dma_clean_hw_desc(struct xilinx_dma_desc_hw *hw)
+{
+	u32 next_desc = hw->next_desc;
+
+	memset(hw, 0, sizeof(struct xilinx_dma_desc_hw));
+
+	hw->next_desc = next_desc;
+}
+
+/**
+ * xilinx_dma_free_tx_segment - Free transaction segment
+ * @chan: Driver specific dma channel
+ * @segment: dma transaction segment
+ */
+static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
+				       struct xilinx_dma_tx_segment *segment)
+{
+	xilinx_dma_clean_hw_desc(&segment->hw);
+
+	list_add_tail(&segment->node, &chan->free_seg_list);
+}
+
+/**
+ * xilinx_dma_alloc_tx_descriptor - Allocate transaction descriptor
+ * @chan: Driver specific dma channel
+ *
+ * Return: The allocated descriptor on success and NULL on failure.
+ */
+static struct xilinx_dma_tx_descriptor *
+xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+	unsigned long flags;
+
+	if (chan->allocated_desc)
+		return chan->allocated_desc;
+
+	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return NULL;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	chan->allocated_desc = desc;
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	INIT_LIST_HEAD(&desc->segments);
+
+	return desc;
+}
+
+/**
+ * xilinx_dma_free_tx_descriptor - Free transaction descriptor
+ * @chan: Driver specific dma channel
+ * @desc: dma transaction descriptor
+ */
+static void
+xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
+			      struct xilinx_dma_tx_descriptor *desc)
+{
+	struct xilinx_dma_tx_segment *segment, *next;
+
+	if (!desc)
+		return;
+
+	list_for_each_entry_safe(segment, next, &desc->segments, node) {
+		list_del(&segment->node);
+		xilinx_dma_free_tx_segment(chan, segment);
+	}
+
+	kfree(desc);
+}
+
+/**
+ * xilinx_dma_free_desc_list - Free descriptors list
+ * @chan: Driver specific dma channel
+ * @list: List to parse and delete the descriptor
+ */
+static void xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
+				      struct list_head *list)
+{
+	struct xilinx_dma_tx_descriptor *desc, *next;
+
+	list_for_each_entry_safe(desc, next, list, node) {
+		list_del(&desc->node);
+		xilinx_dma_free_tx_descriptor(chan, desc);
+	}
+}
+
+/**
+ * xilinx_dma_free_descriptors - Free channel descriptors
+ * @chan: Driver specific dma channel
+ */
+static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	xilinx_dma_free_desc_list(chan, &chan->pending_list);
+	xilinx_dma_free_desc_list(chan, &chan->done_list);
+
+	xilinx_dma_free_tx_descriptor(chan, chan->active_desc);
+	chan->active_desc = NULL;
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_free_chan_resources - Free channel resources
+ * @dchan: DMA channel
+ */
+static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+
+	xilinx_dma_free_descriptors(chan);
+
+	dma_free_coherent(chan->dev,
+			  sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
+			  chan->seg_v, chan->seg_p);
+}
+
+/**
+ * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
+ * @chan: Driver specific dma channel
+ */
+static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc, *next;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	list_for_each_entry_safe(desc, next, &chan->done_list, node) {
+		dma_async_tx_callback callback;
+		void *callback_param;
+
+		/* Remove from the list of running transactions */
+		list_del(&desc->node);
+
+		/* Run the link descriptor callback function */
+		callback = desc->async_tx.callback;
+		callback_param = desc->async_tx.callback_param;
+		if (callback) {
+			spin_unlock_irqrestore(&chan->lock, flags);
+			callback(callback_param);
+			spin_lock_irqsave(&chan->lock, flags);
+		}
+
+		/* Run any dependencies, then free the descriptor */
+		dma_run_dependencies(&desc->async_tx);
+		xilinx_dma_free_tx_descriptor(chan, desc);
+	}
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_do_tasklet - Schedule completion tasklet
+ * @data: Pointer to the Xilinx dma channel structure
+ */
+static void xilinx_dma_do_tasklet(unsigned long data)
+{
+	struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
+
+	xilinx_dma_chan_desc_cleanup(chan);
+}
+
+/**
+ * xilinx_dma_alloc_chan_resources - Allocate channel resources
+ * @dchan: DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	int i;
+
+	/* Allocate the buffer descriptors. */
+	chan->seg_v = dma_zalloc_coherent(chan->dev,
+					  sizeof(*chan->seg_v) *
+					  XILINX_DMA_NUM_DESCS,
+					  &chan->seg_p, GFP_KERNEL);
+	if (!chan->seg_v) {
+		dev_err(chan->dev,
+			"unable to allocate channel %d descriptors\n",
+			chan->id);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+		chan->seg_v[i].hw.next_desc =
+				chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS);
+		chan->seg_v[i].phys =
+				chan->seg_p + sizeof(*chan->seg_v) * i;
+		list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
+	}
+
+	dma_cookie_init(dchan);
+	return 0;
+}
+
+/**
+ * xilinx_dma_tx_status - Get dma transaction status
+ * @dchan: DMA channel
+ * @cookie: Transaction identifier
+ * @txstate: Transaction state
+ *
+ * Return: DMA transaction status
+ */
+static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
+					    dma_cookie_t cookie,
+					    struct dma_tx_state *txstate)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	ret = dma_cookie_status(dchan, cookie, txstate);
+	if (ret != DMA_COMPLETE) {
+		spin_lock_irqsave(&chan->lock, flags);
+		dma_set_residue(txstate, chan->residue);
+		spin_unlock_irqrestore(&chan->lock, flags);
+	}
+
+	return ret;
+}
+
+/**
+ * xilinx_dma_is_running - Check if DMA channel is running
+ * @chan: Driver specific DMA channel
+ *
+ * Return: 'true' if running, 'false' if not.
+ */
+static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
+{
+	return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		 XILINX_DMA_SR_HALTED_MASK) &&
+		(dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+		 XILINX_DMA_CR_RUNSTOP_MASK);
+}
+
+/**
+ * xilinx_dma_is_idle - Check if DMA channel is idle
+ * @chan: Driver specific DMA channel
+ *
+ * Return: 'true' if idle, 'false' if not.
+ */
+static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
+{
+	return dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		XILINX_DMA_SR_IDLE_MASK;
+}
+
+/**
+ * xilinx_dma_halt - Halt DMA channel
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+
+	dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RUNSTOP_MASK);
+
+	/* Wait for the hardware to halt */
+	do {
+		if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+			XILINX_DMA_SR_HALTED_MASK)
+			break;
+	} while (loop--);
+
+	if (!loop) {
+		pr_debug("Cannot stop channel %p: %x\n",
+			 chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		chan->err = true;
+	}
+}
+
+/**
+ * xilinx_dma_start - Start DMA channel
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_start(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RUNSTOP_MASK);
+
+	/* Wait for the hardware to start */
+	do {
+		if (!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
+		      XILINX_DMA_SR_HALTED_MASK))
+			break;
+	} while (loop--);
+
+	if (!loop) {
+		pr_debug("Cannot start channel %p: %x\n",
+			 chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		chan->err = true;
+	}
+}
+
+/**
+ * xilinx_dma_device_slave_caps - Slave channel capabilities
+ * @dchan: DMA channel
+ * @caps: Slave capabilities to set
+ *
+ * Return: Always '0'
+ */
+static int xilinx_dma_device_slave_caps(struct dma_chan *dchan,
+					struct dma_slave_caps *caps)
+{
+	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	caps->cmd_terminate = true;
+	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+
+	return 0;
+}
+
+/**
+ * xilinx_dma_start_transfer - Starts DMA transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *head, *tail = NULL;
+	unsigned long flags;
+
+	if (chan->err)
+		return;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	/* There's already an active descriptor, bail out. */
+	if (chan->active_desc)
+		goto out_unlock;
+
+	if (list_empty(&chan->pending_list))
+		goto out_unlock;
+
+	desc = list_first_entry(&chan->pending_list,
+				struct xilinx_dma_tx_descriptor, node);
+
+	if (chan->has_sg && xilinx_dma_is_running(chan) &&
+	    !xilinx_dma_is_idle(chan)) {
+		tail = list_entry(desc->segments.prev,
+				  struct xilinx_dma_tx_segment, node);
+		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
+		goto out_free_desc;
+	}
+
+	if (chan->has_sg) {
+		head = list_first_entry(&desc->segments,
+					struct xilinx_dma_tx_segment, node);
+		tail = list_entry(desc->segments.prev,
+				  struct xilinx_dma_tx_segment, node);
+		dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
+	}
+
+	xilinx_dma_start(chan);
+	if (chan->err)
+		goto out_unlock;
+
+	/* Enable interrupts */
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	/* Start the transfer */
+	if (chan->has_sg) {
+		dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
+	} else {
+		struct xilinx_dma_tx_segment *segment;
+		struct xilinx_dma_desc_hw *hw;
+
+		segment = list_first_entry(&desc->segments,
+					   struct xilinx_dma_tx_segment, node);
+		hw = &segment->hw;
+
+		if (desc->direction == DMA_MEM_TO_DEV)
+			dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
+				       hw->buf_addr);
+		else
+			dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
+				       hw->buf_addr);
+
+		/* Start the transfer */
+		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+			       hw->control & XILINX_DMA_MAX_TRANS_LEN);
+	}
+
+out_free_desc:
+	list_del(&desc->node);
+	chan->active_desc = desc;
+
+out_unlock:
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel
+ */
+static void xilinx_dma_issue_pending(struct dma_chan *dchan)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+
+	xilinx_dma_start_transfer(chan);
+}
+
+/**
+ * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
+ * @chan : xilinx DMA channel
+ */
+static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
+{
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *segment, *next;
+	struct xilinx_dma_desc_hw *hw;
+	unsigned long flags;
+	u32 residue = 0;
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	desc = chan->active_desc;
+	if (!desc) {
+		dev_dbg(chan->dev, "no running descriptors\n");
+		goto out_unlock;
+	}
+
+	if (chan->has_sg) {
+		list_for_each_entry_safe(segment, next, &desc->segments, node) {
+			hw = &segment->hw;
+			residue += (hw->control - hw->status) &
+				   XILINX_DMA_MAX_TRANS_LEN;
+		}
+	}
+
+	chan->residue = residue;
+	dma_cookie_complete(&desc->async_tx);
+	list_add_tail(&desc->node, &chan->done_list);
+
+	chan->active_desc = NULL;
+
+out_unlock:
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
+ * xilinx_dma_reset - Reset DMA channel
+ * @chan: Driver specific DMA channel
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_reset(struct xilinx_dma_chan *chan)
+{
+	int loop = XILINX_DMA_LOOP_COUNT;
+	u32 tmp;
+
+	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
+		     XILINX_DMA_CR_RESET_MASK);
+
+	tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+	      XILINX_DMA_CR_RESET_MASK;
+
+	/* Wait for the hardware to finish reset */
+	do {
+		tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
+		      XILINX_DMA_CR_RESET_MASK;
+	} while (loop-- && tmp);
+
+	if (!loop) {
+		dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
+			dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
+			dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
+		return -EBUSY;
+	}
+
+	chan->err = false;
+
+	return 0;
+}
+
+/**
+ * xilinx_dma_irq_handler - DMA Interrupt handler
+ * @irq: IRQ number
+ * @data: Pointer to the Xilinx DMA channel structure
+ *
+ * Return: IRQ_HANDLED/IRQ_NONE
+ */
+static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
+{
+	struct xilinx_dma_chan *chan = data;
+	u32 status;
+
+	/* Read the status and ack the interrupts. */
+	status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
+	if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
+		return IRQ_NONE;
+
+	dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
+		       status & XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
+		dev_err(chan->dev,
+			"Channel %p has errors %x, cdr %x tdr %x\n",
+			chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
+			dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
+			dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
+		chan->err = true;
+	}
+
+	/*
+	 * The device takes too long to do the transfer when the user
+	 * requires responsiveness.
+	 */
+	if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
+		dev_dbg(chan->dev, "Inter-packet latency too long\n");
+
+	if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
+		xilinx_dma_complete_descriptor(chan);
+		xilinx_dma_start_transfer(chan);
+	}
+
+	tasklet_schedule(&chan->tasklet);
+	return IRQ_HANDLED;
+}
+
+/**
+ * xilinx_dma_tx_submit - Submit DMA transaction
+ * @tx: Async transaction descriptor
+ *
+ * Return: cookie value on success and failure value on error
+ */
+static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
+{
+	struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
+	struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
+	dma_cookie_t cookie;
+	unsigned long flags;
+	int err;
+
+	if (chan->err) {
+		/*
+		 * If reset fails, the channel is no longer functional
+		 * and the system needs a hard reset.
+		 */
+		err = xilinx_dma_reset(chan);
+		if (err < 0)
+			return err;
+	}
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	cookie = dma_cookie_assign(tx);
+
+	/* Append the transaction to the pending transactions queue. */
+	list_add_tail(&desc->node, &chan->pending_list);
+
+	/* Free the allocated desc */
+	chan->allocated_desc = NULL;
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return cookie;
+}
+
+/**
+ * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
+ * @dchan: DMA channel
+ * @sgl: scatterlist to transfer to/from
+ * @sg_len: number of entries in @sgl
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ * @context: APP words of the descriptor
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
+	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_dma_tx_descriptor *desc;
+	struct xilinx_dma_tx_segment *segment;
+	struct xilinx_dma_desc_hw *hw;
+	u32 *app_w = (u32 *)context;
+	struct scatterlist *sg;
+	size_t copy, sg_used;
+	int i;
+
+	if (!is_slave_direction(direction))
+		return NULL;
+
+	/* Allocate a transaction descriptor. */
+	desc = xilinx_dma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	desc->direction = direction;
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+	desc->async_tx.cookie = 0;
+	async_tx_ack(&desc->async_tx);
+
+	/* Build transactions using information in the scatter gather list */
+	for_each_sg(sgl, sg, sg_len, i) {
+		sg_used = 0;
+
+		/* Loop until the entire scatterlist entry is used */
+		while (sg_used < sg_dma_len(sg)) {
+
+			/* Get a free segment */
+			segment = xilinx_dma_alloc_tx_segment(chan);
+			if (!segment)
+				goto error;
+
+			/*
+			 * Calculate the maximum number of bytes to transfer,
+			 * making sure it is less than the hw limit
+			 */
+			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
+				     XILINX_DMA_MAX_TRANS_LEN);
+			hw = &segment->hw;
+
+			/* Fill in the descriptor */
+			hw->buf_addr = sg_dma_address(sg) + sg_used;
+
+			hw->control = copy;
+
+			if (direction == DMA_MEM_TO_DEV) {
+				if (app_w)
+					memcpy(hw->app, app_w, sizeof(u32) *
+					       XILINX_DMA_NUM_APP_WORDS);
+
+				/*
+				 * For the first DMA_MEM_TO_DEV transfer,
+				 * set SOP
+				 */
+				if (!i)
+					hw->control |= XILINX_DMA_BD_SOP;
+			}
+
+			sg_used += copy;
+
+			/*
+			 * Insert the segment into the descriptor segments
+			 * list.
+			 */
+			list_add_tail(&segment->node, &desc->segments);
+		}
+	}
+
+	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
+	if (direction == DMA_MEM_TO_DEV) {
+		segment = list_last_entry(&desc->segments,
+					  struct xilinx_dma_tx_segment,
+					  node);
+		segment->hw.control |= XILINX_DMA_BD_EOP;
+	}
+
+	return &desc->async_tx;
+
+error:
+	xilinx_dma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
+/**
+ * xilinx_dma_device_control - Configure DMA channel of the device
+ * @dchan: DMA Channel pointer
+ * @cmd: DMA control command
+ * @arg: Channel configuration
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_device_control(struct dma_chan *dchan,
+				     enum dma_ctrl_cmd cmd, unsigned long arg)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
+
+	if (cmd != DMA_TERMINATE_ALL)
+		return -ENXIO;
+
+	/* Halt the DMA engine */
+	xilinx_dma_halt(chan);
+
+	spin_lock_irqsave(&chan->lock, flags);
+
+	/* Remove and free all of the descriptors in the lists */
+	xilinx_dma_free_desc_list(chan, &chan->pending_list);
+	xilinx_dma_free_desc_list(chan, &chan->done_list);
+
+	spin_unlock_irqrestore(&chan->lock, flags);
+
+	return 0;
+}
+
+/**
+ * xilinx_dma_channel_set_config - Configure DMA channel
+ * @dchan: DMA channel
+ * @cfg: DMA device configuration pointer
+ * Return: '0' on success and failure value on error
+ */
+int xilinx_dma_channel_set_config(struct dma_chan *dchan,
+				  struct xilinx_dma_config *cfg)
+{
+	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
+
+	if (!xilinx_dma_is_idle(chan))
+		return -EBUSY;
+
+	if (cfg->reset)
+		return xilinx_dma_reset(chan);
+
+	if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
+		reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
+
+	if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
+		reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
+
+	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
+
+	return 0;
+}
+EXPORT_SYMBOL(xilinx_dma_channel_set_config);
+
+/**
+ * xilinx_dma_chan_remove - Per Channel remove function
+ * @chan: Driver specific DMA channel
+ */
+static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
+{
+	/* Disable interrupts */
+	dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL, XILINX_DMA_XR_IRQ_ALL_MASK);
+
+	if (chan->irq > 0)
+		free_irq(chan->irq, chan);
+
+	tasklet_kill(&chan->tasklet);
+
+	list_del(&chan->common.device_node);
+}
+
+/**
+ * xilinx_dma_chan_probe - Per Channel Probing
+ * It gets channel features from the device tree entry and
+ * initializes special channel handling routines.
+ *
+ * @xdev: Driver specific device structure
+ * @node: Device node
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+				 struct device_node *node)
+{
+	struct xilinx_dma_chan *chan;
+	int err;
+	bool has_dre;
+	u32 value, width = 0;
+
+	/* Allocate a channel */
+	chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
+	if (!chan)
+		return -ENOMEM;
+
+	chan->dev = xdev->dev;
+	chan->xdev = xdev;
+	chan->has_sg = xdev->has_sg;
+
+	spin_lock_init(&chan->lock);
+	INIT_LIST_HEAD(&chan->pending_list);
+	INIT_LIST_HEAD(&chan->done_list);
+	INIT_LIST_HEAD(&chan->free_seg_list);
+
+	/* Get the DT properties */
+	has_dre = of_property_read_bool(node, "xlnx,include-dre");
+
+	err = of_property_read_u32(node, "xlnx,datawidth", &value);
+	if (err) {
+		dev_err(xdev->dev, "unable to read datawidth property");
+		return err;
+	}
+
+	width = value >> 3; /* Convert bits to bytes */
+
+	/* If data width is greater than 8 bytes, DRE is not in hw */
+	if (width > 8)
+		has_dre = false;
+
+	if (!has_dre)
+		xdev->common.copy_align = fls(width - 1);
+
+	if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
+		chan->id = 0;
+		chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
+	} else if (of_device_is_compatible(node,
+					   "xlnx,axi-dma-s2mm-channel")) {
+		chan->id = 1;
+		chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
+	} else {
+		dev_err(xdev->dev, "Invalid channel compatible node\n");
+		return -EINVAL;
+	}
+
+	/* Find the IRQ line, if it exists in the device tree */
+	chan->irq = irq_of_parse_and_map(node, 0);
+	err = request_irq(chan->irq, xilinx_dma_irq_handler,
+			  IRQF_SHARED,
+			  "xilinx-dma-controller", chan);
+	if (err) {
+		dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
+		return err;
+	}
+
+	/* Initialize the tasklet */
+	tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
+		     (unsigned long)chan);
+
+	/*
+	 * Initialize the DMA channel and add it to the DMA engine channels
+	 * list.
+	 */
+	chan->common.device = &xdev->common;
+
+	list_add_tail(&chan->common.device_node, &xdev->common.channels);
+	xdev->chan[chan->id] = chan;
+
+	/* Reset the channel */
+	err = xilinx_dma_reset(chan);
+	if (err) {
+		dev_err(xdev->dev, "Reset channel failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+/**
+ * of_dma_xilinx_xlate - Translation function
+ * @dma_spec: Pointer to DMA specifier as found in the device tree
+ * @ofdma: Pointer to DMA controller data
+ *
+ * Return: DMA channel pointer on success and NULL on error
+ */
+static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
+					    struct of_dma *ofdma)
+{
+	struct xilinx_dma_device *xdev = ofdma->of_dma_data;
+	int chan_id = dma_spec->args[0];
+
+	if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE)
+		return NULL;
+
+	return dma_get_slave_channel(&xdev->chan[chan_id]->common);
+}
+
+/**
+ * xilinx_dma_probe - Driver probe function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: '0' on success and failure value on error
+ */
+static int xilinx_dma_probe(struct platform_device *pdev)
+{
+	struct xilinx_dma_device *xdev;
+	struct device_node *child, *node;
+	struct resource *res;
+	int i, ret;
+
+	xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
+	if (!xdev)
+		return -ENOMEM;
+
+	xdev->dev = &(pdev->dev);
+	INIT_LIST_HEAD(&xdev->common.channels);
+
+	node = pdev->dev.of_node;
+
+	/* Map the registers */
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	xdev->regs = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(xdev->regs))
+		return PTR_ERR(xdev->regs);
+
+	/* Check if SG is enabled */
+	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
+
+	/* The AXI DMA only does slave transfers */
+	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
+	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
+	xdev->common.device_control = xilinx_dma_device_control;
+	xdev->common.device_issue_pending = xilinx_dma_issue_pending;
+	xdev->common.device_alloc_chan_resources =
+		xilinx_dma_alloc_chan_resources;
+	xdev->common.device_free_chan_resources =
+		xilinx_dma_free_chan_resources;
+	xdev->common.device_tx_status = xilinx_dma_tx_status;
+	xdev->common.device_slave_caps = xilinx_dma_device_slave_caps;
+	xdev->common.dev = &pdev->dev;
+
+	platform_set_drvdata(pdev, xdev);
+
+	for_each_child_of_node(node, child) {
+		ret = xilinx_dma_chan_probe(xdev, child);
+		if (ret) {
+			dev_err(&pdev->dev, "Probing channels failed\n");
+			goto free_chan_resources;
+		}
+	}
+
+	dma_async_device_register(&xdev->common);
+
+	ret = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
+	if (ret) {
+		dev_err(&pdev->dev, "Unable to register DMA to DT\n");
+		dma_async_device_unregister(&xdev->common);
+		goto free_chan_resources;
+	}
+
+	dev_info(&pdev->dev, "Xilinx AXI DMA Engine driver Probed!!\n");
+
+	return 0;
+
+free_chan_resources:
+	for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
+		if (xdev->chan[i])
+			xilinx_dma_chan_remove(xdev->chan[i]);
+
+	return ret;
+}
+
+/**
+ * xilinx_dma_remove - Driver remove function
+ * @pdev: Pointer to the platform_device structure
+ *
+ * Return: Always '0'
+ */
+static int xilinx_dma_remove(struct platform_device *pdev)
+{
+	struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
+	int i;
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&xdev->common);
+
+	for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
+		if (xdev->chan[i])
+			xilinx_dma_chan_remove(xdev->chan[i]);
+
+	return 0;
+}
+
+static const struct of_device_id xilinx_dma_of_match[] = {
+	{ .compatible = "xlnx,axi-dma-1.00.a",},
+	{}
+};
+MODULE_DEVICE_TABLE(of, xilinx_dma_of_match);
+
+static struct platform_driver xilinx_dma_driver = {
+	.driver = {
+		.name = "xilinx-dma",
+		.owner = THIS_MODULE,
+		.of_match_table = xilinx_dma_of_match,
+	},
+	.probe = xilinx_dma_probe,
+	.remove = xilinx_dma_remove,
+};
+
+module_platform_driver(xilinx_dma_driver);
+
+MODULE_AUTHOR("Xilinx, Inc.");
+MODULE_DESCRIPTION("Xilinx DMA driver");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/dma/xilinx_dma.h b/include/linux/dma/xilinx_dma.h
index 34b98f2..de38599 100644
--- a/include/linux/dma/xilinx_dma.h
+++ b/include/linux/dma/xilinx_dma.h
@@ -41,7 +41,21 @@ struct xilinx_vdma_config {
 	int ext_fsync;
 };
 
+/**
+ * struct xilinx_dma_config - DMA Configuration structure
+ * @coalesc: Interrupt coalescing threshold
+ * @delay: Delay counter
+ * @reset: Reset Channel
+ */
+struct xilinx_dma_config {
+	int coalesc;
+	int delay;
+	int reset;
+};
+
 int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
 					struct xilinx_vdma_config *cfg);
+int xilinx_dma_channel_set_config(struct dma_chan *dchan,
+					struct xilinx_dma_config *cfg);
 
 #endif
-- 
2.1.2



* Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 17:55 [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
@ 2015-03-02 18:41 ` Josh Cartwright
  2015-03-05  9:34   ` Appana Durga Kedareswara Rao
  2015-03-02 18:59 ` Nicolae Rosia
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Josh Cartwright @ 2015-03-02 18:41 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	svemula, linux-kernel, Srikanth Thokala, anirudh, dmaengine,
	appanad, linux-arm-kernel

Hello!

I looked through your driver and have some comments.

On Mon, Mar 02, 2015 at 11:25:11PM +0530, Kedareswara rao Appana wrote:
> This is the driver for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
> 
> Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
[..]
> +++ b/drivers/dma/Kconfig
> @@ -425,6 +425,19 @@ config IMG_MDC_DMA
>  	help
>  	  Enable support for the IMG multi-threaded DMA controller (MDC).
>  
> +config XILINX_DMA
> +	tristate "Xilinx AXI DMA Engine"
> +	depends on (ARCH_ZYNQ || MICROBLAZE)

Why do you need this dependency?  I'm assuming this IP has usefulness
outside of microblaze and Zynq.
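
If the intent is just to limit where the option is offered by default,
one alternative (only a suggestion, untested) is to keep the arch list
but still allow compile coverage elsewhere:

config XILINX_DMA
	tristate "Xilinx AXI DMA Engine"
	# COMPILE_TEST lets other architectures at least build-test it
	depends on ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST
	select DMA_ENGINE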

> +	select DMA_ENGINE
> +	help
> +	  Enable support for Xilinx AXI DMA Soft IP.
> +
> +	This engine provides high-bandwidth direct memory access
> +	between memory and AXI4-Stream type target peripherals.
> +	It has two stream interfaces/channels, Memory Mapped to
> +	Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> +	data transfers.

Odd indention here.  At least indent this paragraph like the sentence
above.
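
Something like this, with the whole help body indented by a tab plus
two spaces:

	help
	  Enable support for Xilinx AXI DMA Soft IP.

	  This engine provides high-bandwidth direct memory access
	  between memory and AXI4-Stream type target peripherals.
	  It has two stream interfaces/channels, Memory Mapped to
	  Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
	  data transfers.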

[..]
> +++ b/drivers/dma/xilinx/xilinx_dma.c
[..]
> +/**
> + * xilinx_dma_halt - Halt DMA channel
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
> +{
> +	int loop = XILINX_DMA_LOOP_COUNT;
> +
> +	dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> +		     XILINX_DMA_CR_RUNSTOP_MASK);
> +
> +	/* Wait for the hardware to halt */
> +	do {
> +		if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +			XILINX_DMA_SR_HALTED_MASK)
> +			break;
> +	} while (loop--);
> +
> +	if (!loop) {

Looks like a very subtle off-by-one error here.  (And elsewhere you use
this pattern).
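
To spell it out: when the loop runs out, 'loop' ends up at -1 here (the
post-decrement in the while condition has already run), so the timeout
is never reported; and if the break happens on the pass where 'loop' is
0, a timeout is reported even though the channel did halt in time.  One
way to avoid both (untested sketch, reusing the driver's own names) is
to decide based on the status bit rather than on the counter:

	u32 sr;
	int loop = XILINX_DMA_LOOP_COUNT;

	do {
		sr = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
		if (sr & XILINX_DMA_SR_HALTED_MASK)
			break;
	} while (--loop);

	/* Report the timeout based on what the hardware says, not on
	 * where the counter happened to stop.
	 */
	if (!(sr & XILINX_DMA_SR_HALTED_MASK)) {
		pr_debug("Cannot stop channel %p: %x\n", chan, sr);
		chan->err = true;
	}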

  Josh


* Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 17:55 [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
  2015-03-02 18:41 ` Josh Cartwright
@ 2015-03-02 18:59 ` Nicolae Rosia
  2015-03-05  9:35   ` Appana Durga Kedareswara Rao
  2015-03-02 22:01 ` Paul Bolle
  2015-03-17 11:07 ` Vinod Koul
  3 siblings, 1 reply; 10+ messages in thread
From: Nicolae Rosia @ 2015-03-02 18:59 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, Sören Brinkmann,
	Srikanth Vemula, linux-kernel, Srikanth Thokala,
	Anirudha Sarangi, dmaengine, appanad, linux-arm-kernel

Hello,

Here are my comments:
You are not making efficient use of the DMA's coalescing and chaining
capabilities: you could keep a list of pending descriptors, chain them,
and process them with a single IRQ by setting the coalesce field equal
to the number of End Of Frame bits set in the entire list. This works
very well with scatter-gather.
You can also cache the Control register to avoid re-reading it.

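Roughly what I mean, as an untested sketch (the 'cached_cr' field and
the 'frames' count are hypothetical, the rest comes from this patch):
count the descriptors with EOP set in the whole batch, program that
number into the coalesce field once, and keep a cached copy of the
Control register so it does not have to be read back every time:

/*
 * 'frames' = number of segments with XILINX_DMA_BD_EOP set across the
 * pending list; 'cached_cr' would be a new field in struct
 * xilinx_dma_chan holding the last value written to
 * XILINX_DMA_REG_CONTROL.
 */
static void xilinx_dma_set_coalesce(struct xilinx_dma_chan *chan, u32 frames)
{
	u32 cr = chan->cached_cr;

	if (frames > XILINX_DMA_CR_COALESCE_MAX)
		frames = XILINX_DMA_CR_COALESCE_MAX;

	cr &= ~(XILINX_DMA_CR_COALESCE_MAX << XILINX_DMA_CR_COALESCE_SHIFT);
	cr |= frames << XILINX_DMA_CR_COALESCE_SHIFT;

	chan->cached_cr = cr;
	dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, cr);
}

That way a whole chained batch completes with a single IOC interrupt
instead of one interrupt per descriptor.
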
More comments are inline.

I have a version in which I've addressed all of the issues I've
presented if you are interested.

Best regards,
Nicolae Rosia

On Mon, Mar 2, 2015 at 7:55 PM, Kedareswara rao Appana
<appana.durga.rao@xilinx.com> wrote:
> This is the driver for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
>
> Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
> This patch is rebased on top of "dma: xilinx-dma: move header file
> to common location".
>
> The device tree doc has already been applied to slave-dmaengine.git.
>
> Changes in v5:
> - Moved the xilinx_dma.h header file to include/linux/dma/xilinx_dma.h.
> Changes in v4:
> - Added a direction field to the DMA descriptor structure and removed it
>   from the channel structure to avoid duplication.
> - Check for the DMA idle condition before changing the configuration.
> - Residue is now calculated in complete_descriptor() and reported to the
>   slave driver.
>
> Changes in v3:
> - Rebased on 3.16-rc7
>
> Changes in v2:
> - Simplified the logic to set SOP and APP words in prep_slave_sg().
> - Corrected function description comments to match the return type.
> - Fixed some minor comments as suggested by Andy, Thanks.
>
>  drivers/dma/Kconfig             |   13 +
>  drivers/dma/xilinx/Makefile     |    1 +
>  drivers/dma/xilinx/xilinx_dma.c | 1242 +++++++++++++++++++++++++++++++++++++++
>  include/linux/dma/xilinx_dma.h  |   14 +
>  4 files changed, 1270 insertions(+)
>  create mode 100644 drivers/dma/xilinx/xilinx_dma.c
>
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index a874b6e..3271f47 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -425,6 +425,19 @@ config IMG_MDC_DMA
>         help
>           Enable support for the IMG multi-threaded DMA controller (MDC).
>
> +config XILINX_DMA
> +       tristate "Xilinx AXI DMA Engine"
> +       depends on (ARCH_ZYNQ || MICROBLAZE)
> +       select DMA_ENGINE
> +       help
> +         Enable support for Xilinx AXI DMA Soft IP.
> +
> +       This engine provides high-bandwidth direct memory access
> +       between memory and AXI4-Stream type target peripherals.
> +       It has two stream interfaces/channels, Memory Mapped to
> +       Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> +       data transfers.
> +
>  config DMA_ENGINE
>         bool
>
> diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
> index 3c4e9f2..6224a49 100644
> --- a/drivers/dma/xilinx/Makefile
> +++ b/drivers/dma/xilinx/Makefile
> @@ -1 +1,2 @@
>  obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
> +obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> new file mode 100644
> index 0000000..fdf2d54
> --- /dev/null
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -0,0 +1,1242 @@
> +/*
> + * DMA driver for Xilinx DMA Engine
> + *
> + * Copyright (C) 2010 - 2014 Xilinx, Inc. All rights reserved.
> + *
> + * Based on the Freescale DMA driver.
> + *
> + * Description:
> + *  The AXI DMA is a soft IP that provides high-bandwidth Direct Memory
> + *  Access between memory and AXI4-Stream-type target peripherals. It can be
> + *  configured with one or two channels; when configured with two channels,
> + *  one transmits data from memory to a device and the other receives data
> + *  from a device.
> + *
> + * This program is free software: you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation, either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +
> +#include <linux/bitops.h>
> +#include <linux/dma/xilinx_dma.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/module.h>
> +#include <linux/of_address.h>
> +#include <linux/of_dma.h>
> +#include <linux/of_irq.h>
> +#include <linux/of_platform.h>
> +#include <linux/slab.h>
> +
> +#include "../dmaengine.h"
> +
> +/* Register Offsets */
> +#define XILINX_DMA_REG_CONTROL         0x00
> +#define XILINX_DMA_REG_STATUS          0x04
> +#define XILINX_DMA_REG_CURDESC         0x08
> +#define XILINX_DMA_REG_TAILDESC                0x10
> +#define XILINX_DMA_REG_SRCADDR         0x18
> +#define XILINX_DMA_REG_DSTADDR         0x20
> +#define XILINX_DMA_REG_BTT             0x28
> +
> +/* Channel/Descriptor Offsets */
> +#define XILINX_DMA_MM2S_CTRL_OFFSET    0x00
> +#define XILINX_DMA_S2MM_CTRL_OFFSET    0x30
> +
> +/* General register bits definitions */
> +#define XILINX_DMA_CR_RUNSTOP_MASK     BIT(0)
> +#define XILINX_DMA_CR_RESET_MASK       BIT(2)
> +
> +#define XILINX_DMA_CR_DELAY_SHIFT      24
> +#define XILINX_DMA_CR_COALESCE_SHIFT   16
> +
> +#define XILINX_DMA_CR_DELAY_MAX                GENMASK(7, 0)
> +#define XILINX_DMA_CR_COALESCE_MAX     GENMASK(7, 0)
> +
> +#define XILINX_DMA_SR_HALTED_MASK      BIT(0)
> +#define XILINX_DMA_SR_IDLE_MASK                BIT(1)
> +
> +#define XILINX_DMA_XR_IRQ_IOC_MASK     BIT(12)
> +#define XILINX_DMA_XR_IRQ_DELAY_MASK   BIT(13)
> +#define XILINX_DMA_XR_IRQ_ERROR_MASK   BIT(14)
> +#define XILINX_DMA_XR_IRQ_ALL_MASK     GENMASK(14, 12)
> +
> +/* BD definitions */
> +#define XILINX_DMA_BD_STS_ALL_MASK     GENMASK(31, 28)
> +#define XILINX_DMA_BD_SOP              BIT(27)
> +#define XILINX_DMA_BD_EOP              BIT(26)
> +
> +/* Hw specific definitions */
> +#define XILINX_DMA_MAX_CHANS_PER_DEVICE        0x2
> +#define XILINX_DMA_MAX_TRANS_LEN       GENMASK(22, 0)
> +
> +/* Delay loop counter to prevent hardware failure */
> +#define XILINX_DMA_LOOP_COUNT          1000000
> +
> +/* Maximum number of Descriptors */
> +#define XILINX_DMA_NUM_DESCS           64
> +#define XILINX_DMA_NUM_APP_WORDS       5
> +
> +/**
> + * struct xilinx_dma_desc_hw - Hardware Descriptor
> + * @next_desc: Next Descriptor Pointer @0x00
> + * @pad1: Reserved @0x04
> + * @buf_addr: Buffer address @0x08
> + * @pad2: Reserved @0x0C
> + * @pad3: Reserved @0x10
> + * @pad4: Reserved @0x14
> + * @control: Control field @0x18
> + * @status: Status field @0x1C
> + * @app: APP Fields @0x20 - 0x30
> + */
> +struct xilinx_dma_desc_hw {
> +       u32 next_desc;
> +       u32 pad1;
> +       u32 buf_addr;
> +       u32 pad2;
> +       u32 pad3;
> +       u32 pad4;
> +       u32 control;
> +       u32 status;
> +       u32 app[XILINX_DMA_NUM_APP_WORDS];
> +} __aligned(64);
> +
> +/**
> + * struct xilinx_dma_tx_segment - Descriptor segment
> + * @hw: Hardware descriptor
> + * @node: Node in the descriptor segments list
> + * @phys: Physical address of segment
> + */
> +struct xilinx_dma_tx_segment {
> +       struct xilinx_dma_desc_hw hw;
> +       struct list_head node;
> +       dma_addr_t phys;
> +} __aligned(64);
> +
> +/**
> + * struct xilinx_dma_tx_descriptor - Per Transaction structure
> + * @async_tx: Async transaction descriptor
> + * @segments: TX segments list
> + * @node: Node in the channel descriptors list
> + * @direction: Transfer direction
> + */
> +struct xilinx_dma_tx_descriptor {
> +       struct dma_async_tx_descriptor async_tx;
> +       struct list_head segments;
> +       struct list_head node;
> +       enum dma_transfer_direction direction;
> +};
> +
> +/**
> + * struct xilinx_dma_chan - Driver specific DMA channel structure
> + * @xdev: Driver specific device structure
> + * @ctrl_offset: Control registers offset
> + * @lock: Descriptor operation lock
> + * @pending_list: Descriptors waiting
> + * @active_desc: Active descriptor
> + * @allocated_desc: Allocated descriptor
> + * @done_list: Complete descriptors
> + * @free_seg_list: Free descriptors
> + * @common: DMA common channel
> + * @seg_v: Statically allocated segments base
> + * @seg_p: Physical allocated segments base
> + * @dev: The dma device
> + * @irq: Channel IRQ
> + * @id: Channel ID
> + * @has_sg: Support scatter transfers
> + * @err: Channel has errors
> + * @tasklet: Cleanup work after irq
> + * @residue: Residue
> + */
> +struct xilinx_dma_chan {
> +       struct xilinx_dma_device *xdev;
> +       u32 ctrl_offset;
> +       spinlock_t lock;
> +       struct list_head pending_list;
> +       struct xilinx_dma_tx_descriptor *active_desc;
> +       struct xilinx_dma_tx_descriptor *allocated_desc;
> +       struct list_head done_list;
> +       struct list_head free_seg_list;
> +       struct dma_chan common;
> +       struct xilinx_dma_tx_segment *seg_v;
> +       dma_addr_t seg_p;
> +       struct device *dev;
> +       int irq;
> +       int id;
> +       bool has_sg;
> +       int err;
> +       struct tasklet_struct tasklet;
> +       u32 residue;
> +};
> +
> +/**
> + * struct xilinx_dma_device - DMA device structure
> + * @regs: I/O mapped base address
> + * @dev: Device Structure
> + * @common: DMA device structure
> + * @chan: Driver specific DMA channel
> + * @has_sg: Specifies whether Scatter-Gather is present or not
> + */
> +struct xilinx_dma_device {
> +       void __iomem *regs;
> +       struct device *dev;
> +       struct dma_device common;
> +       struct xilinx_dma_chan *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
> +       bool has_sg;
> +};
> +
> +/* Macros */
> +#define to_xilinx_chan(chan) \
> +       container_of(chan, struct xilinx_dma_chan, common)
> +#define to_dma_tx_descriptor(tx) \
> +       container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
> +
> +/* IO accessors */
> +static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg)
> +{
> +       return ioread32(chan->xdev->regs + reg);
> +}
> +
> +static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg, u32 value)
> +{
> +       iowrite32(value, chan->xdev->regs + reg);
> +}
> +
> +static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32 reg)
> +{
> +       return dma_read(chan, chan->ctrl_offset + reg);
> +}
> +
> +static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
> +                                 u32 value)
> +{
> +       dma_write(chan, chan->ctrl_offset + reg, value);
> +}
> +
> +static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32 reg, u32 clr)
> +{
> +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr);
> +}
> +
> +static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32 reg, u32 set)
> +{
> +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set);
> +}
> +
> +/* -----------------------------------------------------------------------------
> + * Descriptors and segments alloc and free
> + */
> +
> +/**
> + * xilinx_dma_alloc_tx_segment - Allocate transaction segment
> + * @chan: Driver specific dma channel
> + *
> + * Return: The allocated segment on success and NULL on failure.
> + */
> +static struct xilinx_dma_tx_segment *
> +xilinx_dma_alloc_tx_segment(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_segment *segment = NULL;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +       if (!list_empty(&chan->free_seg_list)) {
> +               segment = list_first_entry(&chan->free_seg_list,
> +                                          struct xilinx_dma_tx_segment,
> +                                          node);
> +               list_del(&segment->node);
> +       }
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return segment;
> +}
> +
> +/**
> + * xilinx_dma_clean_hw_desc - Clean hardware descriptor
> + * @hw: HW descriptor to clean
> + */
> +static void xilinx_dma_clean_hw_desc(struct xilinx_dma_desc_hw *hw)
> +{
> +       u32 next_desc = hw->next_desc;
> +
> +       memset(hw, 0, sizeof(struct xilinx_dma_desc_hw));
> +
> +       hw->next_desc = next_desc;
> +}
> +
> +/**
> + * xilinx_dma_free_tx_segment - Free transaction segment
> + * @chan: Driver specific dma channel
> + * @segment: dma transaction segment
> + */
> +static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
> +                                      struct xilinx_dma_tx_segment *segment)
> +{
> +       xilinx_dma_clean_hw_desc(&segment->hw);
> +
> +       list_add_tail(&segment->node, &chan->free_seg_list);
> +}
> +
> +/**
> + * xilinx_dma_alloc_tx_descriptor - Allocate transaction descriptor
> + * @chan: Driver specific dma channel
> + *
> + * Return: The allocated descriptor on success and NULL on failure.
> + */
> +static struct xilinx_dma_tx_descriptor *
> +xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       unsigned long flags;
> +
> +       if (chan->allocated_desc)
> +               return chan->allocated_desc;
> +
> +       desc = kzalloc(sizeof(*desc), GFP_KERNEL);
> +       if (!desc)
> +               return NULL;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +       chan->allocated_desc = desc;
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       INIT_LIST_HEAD(&desc->segments);
> +
> +       return desc;
> +}
> +
> +/**
> + * xilinx_dma_free_tx_descriptor - Free transaction descriptor
> + * @chan: Driver specific dma channel
> + * @desc: dma transaction descriptor
> + */
> +static void
> +xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
> +                             struct xilinx_dma_tx_descriptor *desc)
> +{
> +       struct xilinx_dma_tx_segment *segment, *next;
> +
> +       if (!desc)
> +               return;
> +
> +       list_for_each_entry_safe(segment, next, &desc->segments, node) {
> +               list_del(&segment->node);
> +               xilinx_dma_free_tx_segment(chan, segment);
> +       }
> +
> +       kfree(desc);
> +}
> +
> +/**
> + * xilinx_dma_free_desc_list - Free descriptors list
> + * @chan: Driver specific dma channel
> + * @list: List to parse and delete the descriptor
> + */
> +static void xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
> +                                     struct list_head *list)
> +{
> +       struct xilinx_dma_tx_descriptor *desc, *next;
> +
> +       list_for_each_entry_safe(desc, next, list, node) {
> +               list_del(&desc->node);
> +               xilinx_dma_free_tx_descriptor(chan, desc);
> +       }
> +}
> +
> +/**
> + * xilinx_dma_free_descriptors - Free channel descriptors
> + * @chan: Driver specific dma channel
> + */
> +static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
> +{
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       xilinx_dma_free_desc_list(chan, &chan->pending_list);
> +       xilinx_dma_free_desc_list(chan, &chan->done_list);
> +
> +       xilinx_dma_free_tx_descriptor(chan, chan->active_desc);
> +       chan->active_desc = NULL;
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_free_chan_resources - Free channel resources
> + * @dchan: DMA channel
> + */
> +static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +
> +       xilinx_dma_free_descriptors(chan);
> +
> +       dma_free_coherent(chan->dev,
> +                         sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
> +                         chan->seg_v, chan->seg_p);
> +}
> +
> +/**
> + * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
> + * @chan: Driver specific dma channel
> + */
> +static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc, *next;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       list_for_each_entry_safe(desc, next, &chan->done_list, node) {
> +               dma_async_tx_callback callback;
> +               void *callback_param;
> +
> +               /* Remove from the list of running transactions */
> +               list_del(&desc->node);
> +
> +               /* Run the link descriptor callback function */
> +               callback = desc->async_tx.callback;
> +               callback_param = desc->async_tx.callback_param;
> +               if (callback) {
> +                       spin_unlock_irqrestore(&chan->lock, flags);
> +                       callback(callback_param);
> +                       spin_lock_irqsave(&chan->lock, flags);
> +               }
> +
> +               /* Run any dependencies, then free the descriptor */
> +               dma_run_dependencies(&desc->async_tx);
> +               xilinx_dma_free_tx_descriptor(chan, desc);
> +       }
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_do_tasklet - Schedule completion tasklet
> + * @data: Pointer to the Xilinx dma channel structure
> + */
> +static void xilinx_dma_do_tasklet(unsigned long data)
> +{
> +       struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
> +
> +       xilinx_dma_chan_desc_cleanup(chan);
> +}
> +
> +/**
> + * xilinx_dma_alloc_chan_resources - Allocate channel resources
> + * @dchan: DMA channel
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       int i;
> +
> +       /* Allocate the buffer descriptors. */
> +       chan->seg_v = dma_zalloc_coherent(chan->dev,
> +                                         sizeof(*chan->seg_v) *
> +                                         XILINX_DMA_NUM_DESCS,
> +                                         &chan->seg_p, GFP_KERNEL);
> +       if (!chan->seg_v) {
> +               dev_err(chan->dev,
> +                       "unable to allocate channel %d descriptors\n",
> +                       chan->id);
> +               return -ENOMEM;
> +       }
> +
> +       for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
> +               chan->seg_v[i].hw.next_desc =
> +                               chan->seg_p + sizeof(*chan->seg_v) *
> +                               ((i + 1) % XILINX_DMA_NUM_DESCS);
> +               chan->seg_v[i].phys =
> +                               chan->seg_p + sizeof(*chan->seg_v) * i;
> +               list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
> +       }
> +
> +       dma_cookie_init(dchan);
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_tx_status - Get dma transaction status
> + * @dchan: DMA channel
> + * @cookie: Transaction identifier
> + * @txstate: Transaction state
> + *
> + * Return: DMA transaction status
> + */
> +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> +                                           dma_cookie_t cookie,
> +                                           struct dma_tx_state *txstate)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       enum dma_status ret;
> +       unsigned long flags;
> +
> +       ret = dma_cookie_status(dchan, cookie, txstate);
> +       if (ret != DMA_COMPLETE) {
> +               spin_lock_irqsave(&chan->lock, flags);
> +               dma_set_residue(txstate, chan->residue);
> +               spin_unlock_irqrestore(&chan->lock, flags);
> +       }
> +
> +       return ret;
> +}
> +
> +/**
> + * xilinx_dma_is_running - Check if DMA channel is running
> + * @chan: Driver specific DMA channel
> + *
> + * Return: 'true' if running, 'false' if not.
> + */
> +static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
> +{
> +       return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                XILINX_DMA_SR_HALTED_MASK) &&
> +               (dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +                XILINX_DMA_CR_RUNSTOP_MASK);
> +}
> +
> +/**
> + * xilinx_dma_is_idle - Check if DMA channel is idle
> + * @chan: Driver specific DMA channel
> + *
> + * Return: 'true' if idle, 'false' if not.
> + */
> +static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
> +{
> +       return dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +               XILINX_DMA_SR_IDLE_MASK;
> +}
> +
> +/**
> + * xilinx_dma_halt - Halt DMA channel
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +
> +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RUNSTOP_MASK);
> +
> +       /* Wait for the hardware to halt */
> +       do {
> +               if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                       XILINX_DMA_SR_HALTED_MASK)
> +                       break;
> +       } while (loop--);
> +
> +       if (!loop) {
> +               pr_debug("Cannot stop channel %p: %x\n",
> +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               chan->err = true;
> +       }
> +}
> +
> +/**
> + * xilinx_dma_start - Start DMA channel
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_start(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RUNSTOP_MASK);
> +
> +       /* Wait for the hardware to start */
> +       do {
> +               if (!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> +                       XILINX_DMA_SR_HALTED_MASK))
> +                       break;
> +       } while (loop--);
> +
> +       if (!loop) {
> +               pr_debug("Cannot start channel %p: %x\n",
> +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               chan->err = true;
> +       }
> +}
> +
> +/**
> + * xilinx_dma_device_slave_caps - Slave channel capabilities
> + * @dchan: DMA channel
> + * @caps: Slave capabilities to set
> + *
> + * Return: Always '0'
> + */
> +static int xilinx_dma_device_slave_caps(struct dma_chan *dchan,
> +                                       struct dma_slave_caps *caps)
> +{
> +       caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +       caps->cmd_terminate = true;
> +       caps->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_start_transfer - Starts DMA transfer
> + * @chan: Driver specific channel struct pointer
> + */
> +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> +       unsigned long flags;
> +
> +       if (chan->err)
> +               return;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       /* There's already an active descriptor, bail out. */
> +       if (chan->active_desc)
> +               goto out_unlock;
> +
> +       if (list_empty(&chan->pending_list))
> +               goto out_unlock;
> +
> +       desc = list_first_entry(&chan->pending_list,
> +                               struct xilinx_dma_tx_descriptor, node);
> +
> +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> +           !xilinx_dma_is_idle(chan)) {
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +               goto out_free_desc;
> +       }
> +
> +       if (chan->has_sg) {
> +               head = list_first_entry(&desc->segments,
> +                                       struct xilinx_dma_tx_segment, node);
> +               tail = list_entry(desc->segments.prev,
> +                                 struct xilinx_dma_tx_segment, node);
> +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> +       }
> +
> +       xilinx_dma_start(chan);
> +       if (chan->err)
> +               goto out_unlock;
> +
> +       /* Enable interrupts */
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> +
There is no reason to enable the interrupts on every transfer. We can
do it only once.
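
For example (untested sketch reusing the patch's own helpers;
xilinx_dma_chan_reset() is a hypothetical wrapper that could replace the
direct xilinx_dma_reset() call in xilinx_dma_chan_probe()):

/* Reset the channel and enable its interrupts a single time */
static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
{
        int err = xilinx_dma_reset(chan);

        if (err)
                return err;

        /* IOC, delay and error interrupts stay enabled from here on */
        dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
                     XILINX_DMA_XR_IRQ_ALL_MASK);

        return 0;
}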

> +       /* Start the transfer */
> +       if (chan->has_sg) {
> +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> +       } else {
> +               struct xilinx_dma_tx_segment *segment;
> +               struct xilinx_dma_desc_hw *hw;
> +
> +               segment = list_first_entry(&desc->segments,
> +                                          struct xilinx_dma_tx_segment, node);
> +               hw = &segment->hw;
> +
> +               if (desc->direction == DMA_MEM_TO_DEV)
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> +                                      hw->buf_addr);
> +               else
> +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> +                                      hw->buf_addr);
> +
> +               /* Start the transfer */
> +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> +       }
> +
> +out_free_desc:
> +       list_del(&desc->node);
> +       chan->active_desc = desc;
> +
> +out_unlock:
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_issue_pending - Issue pending transactions
> + * @dchan: DMA channel
> + */
> +static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +
> +       xilinx_dma_start_transfer(chan);
You could take the lock here instead of inside the function, so that the
IRQ path only has to acquire it once.
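
Something like this, for example (untested sketch; it assumes
xilinx_dma_start_transfer() is reworked to drop its own locking and to be
called with chan->lock held):

static void xilinx_dma_issue_pending(struct dma_chan *dchan)
{
        struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
        unsigned long flags;

        spin_lock_irqsave(&chan->lock, flags);
        /* start_transfer now expects chan->lock to be held */
        xilinx_dma_start_transfer(chan);
        spin_unlock_irqrestore(&chan->lock, flags);
}
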
> +}
> +
> +/**
> + * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
> + * @chan: Xilinx DMA channel
> + */
> +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> +{
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *segment, *next;
> +       struct xilinx_dma_desc_hw *hw;
> +       unsigned long flags;
> +       u32 residue = 0;
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       desc = chan->active_desc;
> +       if (!desc) {
> +               dev_dbg(chan->dev, "no running descriptors\n");
> +               goto out_unlock;
> +       }
> +
> +       if (chan->has_sg) {
> +               list_for_each_entry_safe(segment, next, &desc->segments, node) {
> +                       hw = &segment->hw;
> +                       residue += (hw->control - hw->status) &
> +                                  XILINX_DMA_MAX_TRANS_LEN;
> +               }
> +       }
> +
> +       chan->residue = residue;
> +       dma_cookie_complete(&desc->async_tx);
> +       list_add_tail(&desc->node, &chan->done_list);
> +
> +       chan->active_desc = NULL;
> +
> +out_unlock:
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
> + * xilinx_dma_reset - Reset DMA channel
> + * @chan: Driver specific DMA channel
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_reset(struct xilinx_dma_chan *chan)
> +{
> +       int loop = XILINX_DMA_LOOP_COUNT;
> +       u32 tmp;
> +
> +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> +                    XILINX_DMA_CR_RESET_MASK);
> +
> +       tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +             XILINX_DMA_CR_RESET_MASK;
> +
> +       /* Wait for the hardware to finish reset */
> +       do {
> +               tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> +                     XILINX_DMA_CR_RESET_MASK;
> +       } while (loop-- && tmp);
> +
> +       if (!loop) {
> +               dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> +               return -EBUSY;
> +       }
> +
> +       chan->err = false;
> +
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_irq_handler - DMA Interrupt handler
> + * @irq: IRQ number
> + * @data: Pointer to the Xilinx DMA channel structure
> + *
> + * Return: IRQ_HANDLED/IRQ_NONE
> + */
> +static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
> +{
> +       struct xilinx_dma_chan *chan = data;
> +       u32 status;
> +
> +       /* Read the status and ack the interrupts. */
> +       status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
> +       if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
> +               return IRQ_NONE;
> +
> +       dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
> +                      status & XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
> +               dev_err(chan->dev,
> +                       "Channel %p has errors %x, cdr %x tdr %x\n",
> +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
> +                       dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
> +               chan->err = true;
> +       }
> +
> +       /*
> +        * Device takes too long to do the transfer when user requires
> +        * responsiveness
> +        */
> +       if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
> +               dev_dbg(chan->dev, "Inter-packet latency too long\n");
> +
> +       if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
> +               xilinx_dma_complete_descriptor(chan);
This call disables/restores the IRQ state, but we are already in IRQ context.

> +               xilinx_dma_start_transfer(chan);
This one does the same thing.
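
One way to avoid the redundant save/restore (rough sketch, assuming both
helpers are changed to expect chan->lock held by the caller, as suggested
above for issue_pending()):

        if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
                /* Hard IRQ context, so a plain spin_lock() is enough */
                spin_lock(&chan->lock);
                xilinx_dma_complete_descriptor(chan);
                xilinx_dma_start_transfer(chan);
                spin_unlock(&chan->lock);
        }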

> +       }
> +
> +       tasklet_schedule(&chan->tasklet);
> +       return IRQ_HANDLED;
> +}
> +
> +/**
> + * xilinx_dma_tx_submit - Submit DMA transaction
> + * @tx: Async transaction descriptor
> + *
> + * Return: cookie value on success and failure value on error
> + */
> +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
> +{
> +       struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +       int err;
> +
> +       if (chan->err) {
> +               /*
> +                * If reset fails, need to hard reset the system.
> +                * Channel is no longer functional
> +                */
> +               err = xilinx_dma_reset(chan);
> +               if (err < 0)
> +                       return err;
> +       }
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       cookie = dma_cookie_assign(tx);
> +
> +       /* Append the transaction to the pending transactions queue. */
> +       list_add_tail(&desc->node, &chan->pending_list);
> +
> +       /* Free the allocated desc */
> +       chan->allocated_desc = NULL;
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return cookie;
> +}
> +
> +/**
> + * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
> + * @dchan: DMA channel
> + * @sgl: scatterlist to transfer to/from
> + * @sg_len: number of entries in @sgl
> + * @direction: DMA direction
> + * @flags: transfer ack flags
> + * @context: APP words of the descriptor
> + *
> + * Return: Async transaction descriptor on success and NULL on failure
> + */
> +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> +       struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> +       enum dma_transfer_direction direction, unsigned long flags,
> +       void *context)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       struct xilinx_dma_tx_descriptor *desc;
> +       struct xilinx_dma_tx_segment *segment;
> +       struct xilinx_dma_desc_hw *hw;
> +       u32 *app_w = (u32 *)context;
> +       struct scatterlist *sg;
> +       size_t copy, sg_used;
> +       int i;
> +
> +       if (!is_slave_direction(direction))
> +               return NULL;
> +
> +       /* Allocate a transaction descriptor. */
> +       desc = xilinx_dma_alloc_tx_descriptor(chan);
> +       if (!desc)
> +               return NULL;
> +
> +       desc->direction = direction;
> +       dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +       desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> +       desc->async_tx.cookie = 0;
> +       async_tx_ack(&desc->async_tx);
> +
> +       /* Build transactions using information in the scatter gather list */
> +       for_each_sg(sgl, sg, sg_len, i) {
> +               sg_used = 0;
> +
> +               /* Loop until the entire scatterlist entry is used */
> +               while (sg_used < sg_dma_len(sg)) {
> +
> +                       /* Get a free segment */
> +                       segment = xilinx_dma_alloc_tx_segment(chan);
> +                       if (!segment)
> +                               goto error;
> +
> +                       /*
> +                        * Calculate the maximum number of bytes to transfer,
> +                        * making sure it is less than the hw limit
> +                        */
> +                       copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> +                                    XILINX_DMA_MAX_TRANS_LEN);
> +                       hw = &segment->hw;
> +
> +                       /* Fill in the descriptor */
> +                       hw->buf_addr = sg_dma_address(sg) + sg_used;
> +
> +                       hw->control = copy;
> +
> +                       if (direction == DMA_MEM_TO_DEV) {
> +                               if (app_w)
> +                                       memcpy(hw->app, app_w, sizeof(u32) *
> +                                              XILINX_DMA_NUM_APP_WORDS);
> +
> +                               /*
> +                                * For the first DMA_MEM_TO_DEV transfer,
> +                                * set SOP
> +                                */
> +                               if (!i)
> +                                       hw->control |= XILINX_DMA_BD_SOP;
> +                       }
> +
> +                       sg_used += copy;
> +
> +                       /*
> +                        * Insert the segment into the descriptor segments
> +                        * list.
> +                        */
> +                       list_add_tail(&segment->node, &desc->segments);
> +               }
> +       }
> +
> +       /* For the last DMA_MEM_TO_DEV transfer, set EOP */
> +       if (direction == DMA_MEM_TO_DEV) {
> +               segment = list_last_entry(&desc->segments,
> +                                         struct xilinx_dma_tx_segment,
> +                                         node);
> +               segment->hw.control |= XILINX_DMA_BD_EOP;
> +       }
> +
> +       return &desc->async_tx;
> +
> +error:
> +       xilinx_dma_free_tx_descriptor(chan, desc);
> +       return NULL;
> +}
> +
> +/**
> + * xilinx_dma_device_control - Configure DMA channel of the device
> + * @dchan: DMA Channel pointer
> + * @cmd: DMA control command
> + * @arg: Channel configuration
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_device_control(struct dma_chan *dchan,
> +                                    enum dma_ctrl_cmd cmd, unsigned long arg)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       unsigned long flags;
> +
> +       if (cmd != DMA_TERMINATE_ALL)
> +               return -ENXIO;
> +
> +       /* Halt the DMA engine */
> +       xilinx_dma_halt(chan);
> +
> +       spin_lock_irqsave(&chan->lock, flags);
> +
> +       /* Remove and free all of the descriptors in the lists */
> +       xilinx_dma_free_desc_list(chan, &chan->pending_list);
> +       xilinx_dma_free_desc_list(chan, &chan->done_list);
> +
> +       spin_unlock_irqrestore(&chan->lock, flags);
> +
> +       return 0;
> +}
> +
> +/**
> + * xilinx_dma_channel_set_config - Configure DMA channel
> + * @dchan: DMA channel
> + * @cfg: DMA device configuration pointer
> + * Return: '0' on success and failure value on error
> + */
> +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> +                                 struct xilinx_dma_config *cfg)
> +{
> +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +       u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);
> +
> +       if (!xilinx_dma_is_idle(chan))
> +               return -EBUSY;
> +
> +       if (cfg->reset)
> +               return xilinx_dma_reset(chan);
> +
> +       if (cfg->coalesc <= XILINX_DMA_CR_COALESCE_MAX)
> +               reg |= cfg->coalesc << XILINX_DMA_CR_COALESCE_SHIFT;
> +
> +       if (cfg->delay <= XILINX_DMA_CR_DELAY_MAX)
> +               reg |= cfg->delay << XILINX_DMA_CR_DELAY_SHIFT;
> +
> +       dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(xilinx_dma_channel_set_config);
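
As a usage note (hypothetical client code, not part of the patch): a slave
driver that has already requested a channel could tune the coalescing
roughly like this, where 'chan' and 'dev' belong to the client:

        struct xilinx_dma_config cfg = {
                .coalesc = 8,   /* one interrupt per 8 completed frames */
                .delay = 0,     /* leave the delay timeout alone */
                .reset = 0,     /* reprogram only, no reset */
        };
        int err = xilinx_dma_channel_set_config(chan, &cfg);

        if (err)
                dev_warn(dev, "failed to set DMA coalescing: %d\n", err);
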
> +
> +/**
> + * xilinx_dma_chan_remove - Per Channel remove function
> + * @chan: Driver specific DMA channel
> + */
> +static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
> +{
> +       /* Disable interrupts */
> +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL, XILINX_DMA_XR_IRQ_ALL_MASK);
> +
> +       if (chan->irq > 0)
> +               free_irq(chan->irq, chan);
> +
> +       tasklet_kill(&chan->tasklet);
> +
> +       list_del(&chan->common.device_node);
> +}
> +
> +/**
> + * xilinx_dma_chan_probe - Per Channel Probing
> + * It gets channel features from the device tree entry and
> + * initializes special channel handling routines
> + *
> + * @xdev: Driver specific device structure
> + * @node: Device node
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
> +                                struct device_node *node)
> +{
> +       struct xilinx_dma_chan *chan;
> +       int err;
> +       bool has_dre;
> +       u32 value, width = 0;
> +
> +       /* Allocate a channel */
> +       chan = devm_kzalloc(xdev->dev, sizeof(*chan), GFP_KERNEL);
> +       if (!chan)
> +               return -ENOMEM;
> +
> +       chan->dev = xdev->dev;
> +       chan->xdev = xdev;
> +       chan->has_sg = xdev->has_sg;
> +
> +       spin_lock_init(&chan->lock);
> +       INIT_LIST_HEAD(&chan->pending_list);
> +       INIT_LIST_HEAD(&chan->done_list);
> +       INIT_LIST_HEAD(&chan->free_seg_list);
> +
> +       /* Get the DT properties */
> +       has_dre = of_property_read_bool(node, "xlnx,include-dre");
> +
> +       err = of_property_read_u32(node, "xlnx,datawidth", &value);
> +       if (err) {
> +               dev_err(xdev->dev, "unable to read datawidth property");
> +               return err;
> +       }
> +
> +       width = value >> 3; /* Convert bits to bytes */
> +
> +       /* If data width is greater than 8 bytes, DRE is not in hw */
> +       if (width > 8)
> +               has_dre = false;
> +
> +       if (!has_dre)
> +               xdev->common.copy_align = fls(width - 1);
> +
> +       if (of_device_is_compatible(node, "xlnx,axi-dma-mm2s-channel")) {
> +               chan->id = 0;
> +               chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
> +       } else if (of_device_is_compatible(node,
> +                                          "xlnx,axi-dma-s2mm-channel")) {
> +               chan->id = 1;
> +               chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
> +       } else {
> +               dev_err(xdev->dev, "Invalid channel compatible node\n");
> +               return -EINVAL;
> +       }
> +
> +       /* Find the IRQ line, if it exists in the device tree */
> +       chan->irq = irq_of_parse_and_map(node, 0);
> +       err = request_irq(chan->irq, xilinx_dma_irq_handler,
> +                         IRQF_SHARED,
> +                         "xilinx-dma-controller", chan);
> +       if (err) {
> +               dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
> +               return err;
> +       }
> +
> +       /* Initialize the tasklet */
> +       tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
> +                    (unsigned long)chan);
> +
> +       /*
> +        * Initialize the DMA channel and add it to the DMA engine channels
> +        * list.
> +        */
> +       chan->common.device = &xdev->common;
> +
> +       list_add_tail(&chan->common.device_node, &xdev->common.channels);
> +       xdev->chan[chan->id] = chan;
> +
> +       /* Reset the channel */
> +       err = xilinx_dma_reset(chan);
> +       if (err) {
> +               dev_err(xdev->dev, "Reset channel failed\n");
> +               return err;
> +       }
> +
> +       return 0;
> +}
> +
> +/**
> + * of_dma_xilinx_xlate - Translation function
> + * @dma_spec: Pointer to DMA specifier as found in the device tree
> + * @ofdma: Pointer to DMA controller data
> + *
> + * Return: DMA channel pointer on success and NULL on error
> + */
> +static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
> +                                           struct of_dma *ofdma)
> +{
> +       struct xilinx_dma_device *xdev = ofdma->of_dma_data;
> +       int chan_id = dma_spec->args[0];
> +
> +       if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE)
> +               return NULL;
> +
> +       return dma_get_slave_channel(&xdev->chan[chan_id]->common);
> +}
> +
> +/**
> + * xilinx_dma_probe - Driver probe function
> + * @pdev: Pointer to the platform_device structure
> + *
> + * Return: '0' on success and failure value on error
> + */
> +static int xilinx_dma_probe(struct platform_device *pdev)
> +{
> +       struct xilinx_dma_device *xdev;
> +       struct device_node *child, *node;
> +       struct resource *res;
> +       int i, ret;
> +
> +       xdev = devm_kzalloc(&pdev->dev, sizeof(*xdev), GFP_KERNEL);
> +       if (!xdev)
> +               return -ENOMEM;
> +
> +       xdev->dev = &(pdev->dev);
> +       INIT_LIST_HEAD(&xdev->common.channels);
> +
> +       node = pdev->dev.of_node;
> +
> +       /* Map the registers */
> +       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +       xdev->regs = devm_ioremap_resource(&pdev->dev, res);
> +       if (IS_ERR(xdev->regs))
> +               return PTR_ERR(xdev->regs);
> +
> +       /* Check if SG is enabled */
> +       xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> +
> +       /* AXI DMA only does slave transfers */
> +       dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
> +       dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
> +       xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> +       xdev->common.device_control = xilinx_dma_device_control;
> +       xdev->common.device_issue_pending = xilinx_dma_issue_pending;
> +       xdev->common.device_alloc_chan_resources =
> +               xilinx_dma_alloc_chan_resources;
> +       xdev->common.device_free_chan_resources =
> +               xilinx_dma_free_chan_resources;
> +       xdev->common.device_tx_status = xilinx_dma_tx_status;
> +       xdev->common.device_slave_caps = xilinx_dma_device_slave_caps;
> +       xdev->common.dev = &pdev->dev;
> +
> +       platform_set_drvdata(pdev, xdev);
> +
> +       for_each_child_of_node(node, child) {
> +               ret = xilinx_dma_chan_probe(xdev, child);
> +               if (ret) {
> +                       dev_err(&pdev->dev, "Probing channels failed\n");
> +                       goto free_chan_resources;
> +               }
> +       }
> +
> +       dma_async_device_register(&xdev->common);
> +
> +       ret = of_dma_controller_register(node, of_dma_xilinx_xlate, xdev);
> +       if (ret) {
> +               dev_err(&pdev->dev, "Unable to register DMA to DT\n");
> +               dma_async_device_unregister(&xdev->common);
> +               goto free_chan_resources;
> +       }
> +
> +       dev_info(&pdev->dev, "Xilinx AXI DMA Engine driver Probed!!\n");
> +
> +       return 0;
> +
> +free_chan_resources:
> +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> +               if (xdev->chan[i])
> +                       xilinx_dma_chan_remove(xdev->chan[i]);
> +
> +       return ret;
> +}
> +
> +/**
> + * xilinx_dma_remove - Driver remove function
> + * @pdev: Pointer to the platform_device structure
> + *
> + * Return: Always '0'
> + */
> +static int xilinx_dma_remove(struct platform_device *pdev)
> +{
> +       struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> +       int i;
> +
> +       of_dma_controller_free(pdev->dev.of_node);
> +       dma_async_device_unregister(&xdev->common);
> +
> +       for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> +               if (xdev->chan[i])
> +                       xilinx_dma_chan_remove(xdev->chan[i]);
> +
> +       return 0;
> +}
> +
> +static const struct of_device_id xilinx_dma_of_match[] = {
> +       { .compatible = "xlnx,axi-dma-1.00.a",},
> +       {}
> +};
> +MODULE_DEVICE_TABLE(of, xilinx_dma_of_match);
> +
> +static struct platform_driver xilinx_dma_driver = {
> +       .driver = {
> +               .name = "xilinx-dma",
> +               .owner = THIS_MODULE,
> +               .of_match_table = xilinx_dma_of_match,
> +       },
> +       .probe = xilinx_dma_probe,
> +       .remove = xilinx_dma_remove,
> +};
> +
> +module_platform_driver(xilinx_dma_driver);
> +
> +MODULE_AUTHOR("Xilinx, Inc.");
> +MODULE_DESCRIPTION("Xilinx DMA driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/include/linux/dma/xilinx_dma.h b/include/linux/dma/xilinx_dma.h
> index 34b98f2..de38599 100644
> --- a/include/linux/dma/xilinx_dma.h
> +++ b/include/linux/dma/xilinx_dma.h
> @@ -41,7 +41,21 @@ struct xilinx_vdma_config {
>         int ext_fsync;
>  };
>
> +/**
> + * struct xilinx_dma_config - DMA Configuration structure
> + * @coalesc: Interrupt coalescing threshold
> + * @delay: Delay counter
> + * @reset: Reset Channel
> + */
> +struct xilinx_dma_config {
> +       int coalesc;
> +       int delay;
> +       int reset;
> +};
> +
>  int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
>                                         struct xilinx_vdma_config *cfg);
> +int xilinx_dma_channel_set_config(struct dma_chan *dchan,
> +                                       struct xilinx_dma_config *cfg);
>
>  #endif
> --
> 2.1.2
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 17:55 [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
  2015-03-02 18:41 ` Josh Cartwright
  2015-03-02 18:59 ` Nicolae Rosia
@ 2015-03-02 22:01 ` Paul Bolle
  2015-03-05  9:34   ` Appana Durga Kedareswara Rao
  2015-03-17 11:07 ` Vinod Koul
  3 siblings, 1 reply; 10+ messages in thread
From: Paul Bolle @ 2015-03-02 22:01 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	dmaengine, linux-arm-kernel, linux-kernel, appanad, anirudh,
	svemula, Srikanth Thokala

On Mon, 2015-03-02 at 23:25 +0530, Kedareswara rao Appana wrote:
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -425,6 +425,19 @@ config IMG_MDC_DMA
>  	help
>  	  Enable support for the IMG multi-threaded DMA controller (MDC).
>  
> +config XILINX_DMA
> +	tristate "Xilinx AXI DMA Engine"
> +	depends on (ARCH_ZYNQ || MICROBLAZE)
> +	select DMA_ENGINE
> +	help
> +	  Enable support for Xilinx AXI DMA Soft IP.
> +
> +	This engine provides high-bandwidth direct memory access
> +	between memory and AXI4-Stream type target peripherals.
> +	It has two stream interfaces/channels, Memory Mapped to
> +	Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> +	data transfers.
> +

How did you test this patch? On next-20150302, running x86_64, I got:

$ make ARCH=microblaze menuconfig
  HOSTCC  scripts/basic/fixdep
  HOSTCC  scripts/kconfig/mconf.o
  SHIPPED scripts/kconfig/zconf.tab.c
  SHIPPED scripts/kconfig/zconf.lex.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTCC  scripts/kconfig/lxdialog/checklist.o
  HOSTCC  scripts/kconfig/lxdialog/util.o
  HOSTCC  scripts/kconfig/lxdialog/inputbox.o
  HOSTCC  scripts/kconfig/lxdialog/textbox.o
  HOSTCC  scripts/kconfig/lxdialog/yesno.o
  HOSTCC  scripts/kconfig/lxdialog/menubox.o
  HOSTLD  scripts/kconfig/mconf
scripts/kconfig/mconf Kconfig
drivers/dma/Kconfig:436: syntax error
drivers/dma/Kconfig:435: unknown option "This"
drivers/dma/Kconfig:436: unknown option "between"
drivers/dma/Kconfig:437: unknown option "It"
drivers/dma/Kconfig:438: unknown option "Stream"
drivers/dma/Kconfig:439: unknown option "data"
make[1]: *** [menuconfig] Error 1
make: *** [menuconfig] Error 2

Caused by the invalid indentation used here. You should add two spaces
after the initial tab in lines 436 through 439.


Paul Bolle


^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 18:41 ` Josh Cartwright
@ 2015-03-05  9:34   ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 10+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-03-05  9:34 UTC (permalink / raw)
  To: Josh Cartwright
  Cc: dan.j.williams, vinod.koul, Michal Simek, Soren Brinkmann,
	Srikanth Vemula, linux-kernel, Srikanth Thokala,
	Anirudha Sarangi, dmaengine, linux-arm-kernel

Hi Josh,

        Thanks for reviewing the patch

> -----Original Message-----
> From: Josh Cartwright [mailto:joshc@ni.com]
> Sent: Tuesday, March 03, 2015 12:12 AM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; vinod.koul@intel.com; Michal Simek; Soren
> Brinkmann; Srikanth Vemula; linux-kernel@vger.kernel.org; Srikanth
> Thokala; Anirudha Sarangi; dmaengine@vger.kernel.org; Appana Durga
> Kedareswara Rao; linux-arm-kernel@lists.infradead.org
> Subject: Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> Hello!
>
> I looked through your driver and have some comments.
>
> On Mon, Mar 02, 2015 at 11:25:11PM +0530, Kedareswara rao Appana wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> [..]
> > +++ b/drivers/dma/Kconfig
> > @@ -425,6 +425,19 @@ config IMG_MDC_DMA
> >     help
> >       Enable support for the IMG multi-threaded DMA controller (MDC).
> >
> > +config XILINX_DMA
> > +   tristate "Xilinx AXI DMA Engine"
> > +   depends on (ARCH_ZYNQ || MICROBLAZE)
>
> Why do you need this dependency?  I'm assuming this IP has usefulness
> outside of microblaze and Zynq.
>
> > +   select DMA_ENGINE
> > +   help
> > +     Enable support for Xilinx AXI DMA Soft IP.
> > +
> > +   This engine provides high-bandwidth direct memory access
> > +   between memory and AXI4-Stream type target peripherals.
> > +   It has two stream interfaces/channels, Memory Mapped to
> > +   Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> > +   data transfers.
>
> Odd indentation here.  At least indent this paragraph like the sentence above.

Ok, will do.

>
> [..]
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> [..]
> > +/**
> > + * xilinx_dma_halt - Halt DMA channel
> > + * @chan: Driver specific DMA channel  */ static void
> > +xilinx_dma_halt(struct xilinx_dma_chan *chan) {
> > +   int loop = XILINX_DMA_LOOP_COUNT;
> > +
> > +   dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> > +                XILINX_DMA_CR_RUNSTOP_MASK);
> > +
> > +   /* Wait for the hardware to halt */
> > +   do {
> > +           if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                   XILINX_DMA_SR_HALTED_MASK)
> > +                   break;
> > +   } while (loop--);
> > +
> > +   if (!loop) {
>
> Looks like a very subtle off-by-one error here.  (And elsewhere you use this
> pattern).
>

Ok, will do this change in the next version of the patch.
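
Right: on timeout the post-decrement leaves 'loop' at -1, so the
'if (!loop)' error path is never taken, and a break on the very last pass
would be reported as a failure. For the halt path the fixed wait could look
roughly like this (untested sketch):

        int loop = XILINX_DMA_LOOP_COUNT;

        dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
                     XILINX_DMA_CR_RUNSTOP_MASK);

        /* Wait for the hardware to halt */
        do {
                if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
                    XILINX_DMA_SR_HALTED_MASK)
                        return;         /* halted in time */
        } while (--loop);

        /* The loop counter expired without seeing the HALTED bit */
        pr_debug("Cannot stop channel %p: %x\n",
                 chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
        chan->err = true;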

Regards,
Kedar.

>   Josh




^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 22:01 ` Paul Bolle
@ 2015-03-05  9:34   ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 10+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-03-05  9:34 UTC (permalink / raw)
  To: Paul Bolle
  Cc: dan.j.williams, vinod.koul, Michal Simek, Soren Brinkmann,
	dmaengine, linux-arm-kernel, linux-kernel, Anirudha Sarangi,
	Srikanth Vemula, Srikanth Thokala

Hi Paul Bolle,

Thanks for reviewing the patch

> -----Original Message-----
> From: Paul Bolle [mailto:pebolle@tiscali.nl]
> Sent: Tuesday, March 03, 2015 3:31 AM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; vinod.koul@intel.com; Michal Simek; Soren
> Brinkmann; dmaengine@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Appana Durga
> Kedareswara Rao; Anirudha Sarangi; Srikanth Vemula; Srikanth Thokala
> Subject: Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> On Mon, 2015-03-02 at 23:25 +0530, Kedareswara rao Appana wrote:
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -425,6 +425,19 @@ config IMG_MDC_DMA
> >     help
> >       Enable support for the IMG multi-threaded DMA controller (MDC).
> >
> > +config XILINX_DMA
> > +   tristate "Xilinx AXI DMA Engine"
> > +   depends on (ARCH_ZYNQ || MICROBLAZE)
> > +   select DMA_ENGINE
> > +   help
> > +     Enable support for Xilinx AXI DMA Soft IP.
> > +
> > +   This engine provides high-bandwidth direct memory access
> > +   between memory and AXI4-Stream type target peripherals.
> > +   It has two stream interfaces/channels, Memory Mapped to
> > +   Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> > +   data transfers.
> > +
>
> How did you test this patch? On next-20150302, running x86_64, I got:
>
> $ make ARCH=microblaze menuconfig
>   HOSTCC  scripts/basic/fixdep
>   HOSTCC  scripts/kconfig/mconf.o
>   SHIPPED scripts/kconfig/zconf.tab.c
>   SHIPPED scripts/kconfig/zconf.lex.c
>   HOSTCC  scripts/kconfig/zconf.tab.o
>   HOSTCC  scripts/kconfig/lxdialog/checklist.o
>   HOSTCC  scripts/kconfig/lxdialog/util.o
>   HOSTCC  scripts/kconfig/lxdialog/inputbox.o
>   HOSTCC  scripts/kconfig/lxdialog/textbox.o
>   HOSTCC  scripts/kconfig/lxdialog/yesno.o
>   HOSTCC  scripts/kconfig/lxdialog/menubox.o
>   HOSTLD  scripts/kconfig/mconf
> scripts/kconfig/mconf Kconfig
> drivers/dma/Kconfig:436: syntax error
> drivers/dma/Kconfig:435: unknown option "This"
> drivers/dma/Kconfig:436: unknown option "between"
> drivers/dma/Kconfig:437: unknown option "It"
> drivers/dma/Kconfig:438: unknown option "Stream"
> drivers/dma/Kconfig:439: unknown option "data"
> make[1]: *** [menuconfig] Error 1
> make: *** [menuconfig] Error 2
>
> Caused by the invalid indentation used here. You should add two spaces
> after the initial tab in lines 436 through 439.
>

My bad, I forgot to compile the patch before sending it to the list.
Will fix this in the next version of the patch.

Regards,
Kedar.

>
> Paul Bolle




^ permalink raw reply	[flat|nested] 10+ messages in thread

* RE: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 18:59 ` Nicolae Rosia
@ 2015-03-05  9:35   ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 10+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-03-05  9:35 UTC (permalink / raw)
  To: Nicolae Rosia
  Cc: dan.j.williams, vinod.koul, Michal Simek, Soren Brinkmann,
	Srikanth Vemula, linux-kernel, Srikanth Thokala,
	Anirudha Sarangi, dmaengine, linux-arm-kernel

Hi Nicolae Rosia,

        Thanks for reviewing the patch.

> -----Original Message-----
> From: Nicolae Rosia [mailto:nicolae.rosia@gmail.com]
> Sent: Tuesday, March 03, 2015 12:29 AM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; vinod.koul@intel.com; Michal Simek; Soren
> Brinkmann; Srikanth Vemula; linux-kernel@vger.kernel.org; Srikanth
> Thokala; Anirudha Sarangi; dmaengine@vger.kernel.org; Appana Durga
> Kedareswara Rao; linux-arm-kernel@lists.infradead.org
> Subject: Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> Hello,
>
> Here are my comments:
> You are not making efficient use of the DMA's coalesce capability and
> chaining: you could keep a list of pending descriptors, chain them, and
> process them with a single IRQ by setting the coalesce field equal to the
> number of End Of Frame bits set in the entire list. This works very well
> with scatter-gather. You can also cache the Control register to avoid
> re-reading it.

Ok will work on this.
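
For reference, the coalescing part of the suggestion would boil down to
something like this (rough, untested sketch built on the patch's own
register macros; xilinx_dma_set_coalesce() is a hypothetical helper and the
frame count would come from the pending descriptor list):

/* Interrupt once per 'num_frames' completed frames instead of per frame */
static void xilinx_dma_set_coalesce(struct xilinx_dma_chan *chan,
                                    u32 num_frames)
{
        u32 reg = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL);

        reg &= ~(XILINX_DMA_CR_COALESCE_MAX <<
                 XILINX_DMA_CR_COALESCE_SHIFT);
        reg |= min_t(u32, num_frames, XILINX_DMA_CR_COALESCE_MAX) <<
               XILINX_DMA_CR_COALESCE_SHIFT;
        dma_ctrl_write(chan, XILINX_DMA_REG_CONTROL, reg);
}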

>
> More comments are inline.
>
> I have a version in which I've addressed all of the issues I've presented if you
> are interested.

Yep, sure. Please send me the changes if you have any; I will review them and add them to the next version of the patch.

>
> Best regards,
> Nicolae Rosia
>
> On Mon, Mar 2, 2015 at 7:55 PM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> > This patch is rebased on top of dma: xilinx-dma: move header file to
> > common location.
> >
> > The deivce tree doc got applied in the slave-dmaengine.git.
> >
> > Changes in v5:
> > - Modified the xilinx_dma.h header file location to the
> >   include/linux/dma/xilinx_dma.h
> > Changes in v4:
> > - Add direction field to DMA descriptor structure and removed from
> >   channel structure to avoid duplication.
> > - Check for DMA idle condition before changing the configuration.
> > - Residue is being calculated in complete_descriptor() and is reported
> >   to slave driver.
> >
> > Changes in v3:
> > - Rebased on 3.16-rc7
> >
> > Changes in v2:
> > - Simplified the logic to set SOP and APP words in prep_slave_sg().
> > - Corrected function description comments to match the return type.
> > - Fixed some minor comments as suggested by Andy, Thanks.
> >
> >  drivers/dma/Kconfig             |   13 +
> >  drivers/dma/xilinx/Makefile     |    1 +
> >  drivers/dma/xilinx/xilinx_dma.c | 1242
> +++++++++++++++++++++++++++++++++++++++
> >  include/linux/dma/xilinx_dma.h  |   14 +
> >  4 files changed, 1270 insertions(+)
> >  create mode 100644 drivers/dma/xilinx/xilinx_dma.c
> >
> > diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig index
> > a874b6e..3271f47 100644
> > --- a/drivers/dma/Kconfig
> > +++ b/drivers/dma/Kconfig
> > @@ -425,6 +425,19 @@ config IMG_MDC_DMA
> >         help
> >           Enable support for the IMG multi-threaded DMA controller (MDC).
> >
> > +config XILINX_DMA
> > +       tristate "Xilinx AXI DMA Engine"
> > +       depends on (ARCH_ZYNQ || MICROBLAZE)
> > +       select DMA_ENGINE
> > +       help
> > +         Enable support for Xilinx AXI DMA Soft IP.
> > +
> > +       This engine provides high-bandwidth direct memory access
> > +       between memory and AXI4-Stream type target peripherals.
> > +       It has two stream interfaces/channels, Memory Mapped to
> > +       Stream (MM2S) and Stream to Memory Mapped (S2MM) for the
> > +       data transfers.
> > +
> >  config DMA_ENGINE
> >         bool
> >
> > diff --git a/drivers/dma/xilinx/Makefile b/drivers/dma/xilinx/Makefile
> > index 3c4e9f2..6224a49 100644
> > --- a/drivers/dma/xilinx/Makefile
> > +++ b/drivers/dma/xilinx/Makefile
> > @@ -1 +1,2 @@
> >  obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
> > +obj-$(CONFIG_XILINX_DMA) += xilinx_dma.o
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c
> > b/drivers/dma/xilinx/xilinx_dma.c new file mode 100644 index
> > 0000000..fdf2d54
> > --- /dev/null
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -0,0 +1,1242 @@
> > +/*
> > + * DMA driver for Xilinx DMA Engine
> > + *
> > + * Copyright (C) 2010 - 2014 Xilinx, Inc. All rights reserved.
> > + *
> > + * Based on the Freescale DMA driver.
> > + *
> > + * Description:
> > + *  The AXI DMA, is a soft IP, which provides high-bandwidth Direct
> > +Memory
> > + *  Access between memory and AXI4-Stream-type target peripherals. It
> > +can be
> > + *  configured to have one channel or two channels and if configured
> > +as two
> > + *  channels, one is to transmit data from memory to a device and
> > +another is
> > + *  to receive from a device.
> > + *
> > + * This program is free software: you can redistribute it and/or
> > +modify
> > + * it under the terms of the GNU General Public License as published
> > +by
> > + * the Free Software Foundation, either version 2 of the License, or
> > + * (at your option) any later version.
> > + */
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/dma/xilinx_dma.h>
> > +#include <linux/init.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/module.h>
> > +#include <linux/of_address.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/of_irq.h>
> > +#include <linux/of_platform.h>
> > +#include <linux/slab.h>
> > +
> > +#include "../dmaengine.h"
> > +
> > +/* Register Offsets */
> > +#define XILINX_DMA_REG_CONTROL         0x00
> > +#define XILINX_DMA_REG_STATUS          0x04
> > +#define XILINX_DMA_REG_CURDESC         0x08
> > +#define XILINX_DMA_REG_TAILDESC                0x10
> > +#define XILINX_DMA_REG_SRCADDR         0x18
> > +#define XILINX_DMA_REG_DSTADDR         0x20
> > +#define XILINX_DMA_REG_BTT             0x28
> > +
> > +/* Channel/Descriptor Offsets */
> > +#define XILINX_DMA_MM2S_CTRL_OFFSET    0x00
> > +#define XILINX_DMA_S2MM_CTRL_OFFSET    0x30
> > +
> > +/* General register bits definitions */
> > +#define XILINX_DMA_CR_RUNSTOP_MASK     BIT(0)
> > +#define XILINX_DMA_CR_RESET_MASK       BIT(2)
> > +
> > +#define XILINX_DMA_CR_DELAY_SHIFT      24
> > +#define XILINX_DMA_CR_COALESCE_SHIFT   16
> > +
> > +#define XILINX_DMA_CR_DELAY_MAX                GENMASK(7, 0)
> > +#define XILINX_DMA_CR_COALESCE_MAX     GENMASK(7, 0)
> > +
> > +#define XILINX_DMA_SR_HALTED_MASK      BIT(0)
> > +#define XILINX_DMA_SR_IDLE_MASK                BIT(1)
> > +
> > +#define XILINX_DMA_XR_IRQ_IOC_MASK     BIT(12)
> > +#define XILINX_DMA_XR_IRQ_DELAY_MASK   BIT(13)
> > +#define XILINX_DMA_XR_IRQ_ERROR_MASK   BIT(14)
> > +#define XILINX_DMA_XR_IRQ_ALL_MASK     GENMASK(14, 12)
> > +
> > +/* BD definitions */
> > +#define XILINX_DMA_BD_STS_ALL_MASK     GENMASK(31, 28)
> > +#define XILINX_DMA_BD_SOP              BIT(27)
> > +#define XILINX_DMA_BD_EOP              BIT(26)
> > +
> > +/* Hw specific definitions */
> > +#define XILINX_DMA_MAX_CHANS_PER_DEVICE        0x2
> > +#define XILINX_DMA_MAX_TRANS_LEN       GENMASK(22, 0)
> > +
> > +/* Delay loop counter to prevent hardware failure */
> > +#define XILINX_DMA_LOOP_COUNT          1000000
> > +
> > +/* Maximum number of Descriptors */
> > +#define XILINX_DMA_NUM_DESCS           64
> > +#define XILINX_DMA_NUM_APP_WORDS       5
> > +
> > +/**
> > + * struct xilinx_dma_desc_hw - Hardware Descriptor
> > + * @next_desc: Next Descriptor Pointer @0x00
> > + * @pad1: Reserved @0x04
> > + * @buf_addr: Buffer address @0x08
> > + * @pad2: Reserved @0x0C
> > + * @pad3: Reserved @0x10
> > + * @pad4: Reserved @0x14
> > + * @control: Control field @0x18
> > + * @status: Status field @0x1C
> > + * @app: APP Fields @0x20 - 0x30
> > + */
> > +struct xilinx_dma_desc_hw {
> > +       u32 next_desc;
> > +       u32 pad1;
> > +       u32 buf_addr;
> > +       u32 pad2;
> > +       u32 pad3;
> > +       u32 pad4;
> > +       u32 control;
> > +       u32 status;
> > +       u32 app[XILINX_DMA_NUM_APP_WORDS]; } __aligned(64);
> > +
> > +/**
> > + * struct xilinx_dma_tx_segment - Descriptor segment
> > + * @hw: Hardware descriptor
> > + * @node: Node in the descriptor segments list
> > + * @phys: Physical address of segment  */ struct
> > +xilinx_dma_tx_segment {
> > +       struct xilinx_dma_desc_hw hw;
> > +       struct list_head node;
> > +       dma_addr_t phys;
> > +} __aligned(64);
> > +
> > +/**
> > + * struct xilinx_dma_tx_descriptor - Per Transaction structure
> > + * @async_tx: Async transaction descriptor
> > + * @segments: TX segments list
> > + * @node: Node in the channel descriptors list
> > + * @direction: Transfer direction
> > + */
> > +struct xilinx_dma_tx_descriptor {
> > +       struct dma_async_tx_descriptor async_tx;
> > +       struct list_head segments;
> > +       struct list_head node;
> > +       enum dma_transfer_direction direction; };
> > +
> > +/**
> > + * struct xilinx_dma_chan - Driver specific DMA channel structure
> > + * @xdev: Driver specific device structure
> > + * @ctrl_offset: Control registers offset
> > + * @lock: Descriptor operation lock
> > + * @pending_list: Descriptors waiting
> > + * @active_desc: Active descriptor
> > + * @allocated_desc: Allocated descriptor
> > + * @done_list: Complete descriptors
> > + * @free_seg_list: Free descriptors
> > + * @common: DMA common channel
> > + * @seg_v: Statically allocated segments base
> > + * @seg_p: Physical allocated segments base
> > + * @dev: The dma device
> > + * @irq: Channel IRQ
> > + * @id: Channel ID
> > + * @has_sg: Support scatter transfers
> > + * @err: Channel has errors
> > + * @tasklet: Cleanup work after irq
> > + * @residue: Residue
> > + */
> > +struct xilinx_dma_chan {
> > +       struct xilinx_dma_device *xdev;
> > +       u32 ctrl_offset;
> > +       spinlock_t lock;
> > +       struct list_head pending_list;
> > +       struct xilinx_dma_tx_descriptor *active_desc;
> > +       struct xilinx_dma_tx_descriptor *allocated_desc;
> > +       struct list_head done_list;
> > +       struct list_head free_seg_list;
> > +       struct dma_chan common;
> > +       struct xilinx_dma_tx_segment *seg_v;
> > +       dma_addr_t seg_p;
> > +       struct device *dev;
> > +       int irq;
> > +       int id;
> > +       bool has_sg;
> > +       int err;
> > +       struct tasklet_struct tasklet;
> > +       u32 residue;
> > +};
> > +
> > +/**
> > + * struct xilinx_dma_device - DMA device structure
> > + * @regs: I/O mapped base address
> > + * @dev: Device Structure
> > + * @common: DMA device structure
> > + * @chan: Driver specific DMA channel
> > + * @has_sg: Specifies whether Scatter-Gather is present or not  */
> > +struct xilinx_dma_device {
> > +       void __iomem *regs;
> > +       struct device *dev;
> > +       struct dma_device common;
> > +       struct xilinx_dma_chan
> *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
> > +       bool has_sg;
> > +};
> > +
> > +/* Macros */
> > +#define to_xilinx_chan(chan) \
> > +       container_of(chan, struct xilinx_dma_chan, common) #define
> > +to_dma_tx_descriptor(tx) \
> > +       container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
> > +
> > +/* IO accessors */
> > +static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg) {
> > +       return ioread32(chan->xdev->regs + reg); }
> > +
> > +static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg,
> > +u32 value) {
> > +       iowrite32(value, chan->xdev->regs + reg); }
> > +
> > +static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32
> > +reg) {
> > +       return dma_read(chan, chan->ctrl_offset + reg); }
> > +
> > +static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
> > +                                 u32 value) {
> > +       dma_write(chan, chan->ctrl_offset + reg, value); }
> > +
> > +static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32
> > +reg, u32 clr) {
> > +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr); }
> > +
> > +static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32
> > +reg, u32 set) {
> > +       dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set); }
> > +
> > +/*
> > +---------------------------------------------------------------------
> > +--------
> > + * Descriptors and segments alloc and free  */
> > +
> > +/**
> > + * xilinx_dma_alloc_tx_segment - Allocate transaction segment
> > + * @chan: Driver specific dma channel
> > + *
> > + * Return: The allocated segment on success and NULL on failure.
> > + */
> > +static struct xilinx_dma_tx_segment *
> > +xilinx_dma_alloc_tx_segment(struct xilinx_dma_chan *chan) {
> > +       struct xilinx_dma_tx_segment *segment = NULL;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       if (!list_empty(&chan->free_seg_list)) {
> > +               segment = list_first_entry(&chan->free_seg_list,
> > +                                          struct xilinx_dma_tx_segment,
> > +                                          node);
> > +               list_del(&segment->node);
> > +       }
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       return segment;
> > +}
> > +
> > +/**
> > + * xilinx_dma_clean_hw_desc - Clean hardware descriptor
> > + * @hw: HW descriptor to clean
> > + */
> > +static void xilinx_dma_clean_hw_desc(struct xilinx_dma_desc_hw *hw) {
> > +       u32 next_desc = hw->next_desc;
> > +
> > +       memset(hw, 0, sizeof(struct xilinx_dma_desc_hw));
> > +
> > +       hw->next_desc = next_desc;
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_tx_segment - Free transaction segment
> > + * @chan: Driver specific dma channel
> > + * @segment: dma transaction segment
> > + */
> > +static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
> > +                                      struct xilinx_dma_tx_segment
> > +*segment) {
> > +       xilinx_dma_clean_hw_desc(&segment->hw);
> > +
> > +       list_add_tail(&segment->node, &chan->free_seg_list); }
> > +
> > +/**
> > + * xilinx_dma_tx_descriptor - Allocate transaction descriptor
> > + * @chan: Driver specific dma channel
> > + *
> > + * Return: The allocated descriptor on success and NULL on failure.
> > + */
> > +static struct xilinx_dma_tx_descriptor *
> > +xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan) {
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       unsigned long flags;
> > +
> > +       if (chan->allocated_desc)
> > +               return chan->allocated_desc;
> > +
> > +       desc = kzalloc(sizeof(*desc), GFP_KERNEL);
> > +       if (!desc)
> > +               return NULL;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +       chan->allocated_desc = desc;
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +       INIT_LIST_HEAD(&desc->segments);
> > +
> > +       return desc;
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_tx_descriptor - Free transaction descriptor
> > + * @chan: Driver specific dma channel
> > + * @desc: dma transaction descriptor
> > + */
> > +static void
> > +xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
> > +                             struct xilinx_dma_tx_descriptor *desc) {
> > +       struct xilinx_dma_tx_segment *segment, *next;
> > +
> > +       if (!desc)
> > +               return;
> > +
> > +       list_for_each_entry_safe(segment, next, &desc->segments, node) {
> > +               list_del(&segment->node);
> > +               xilinx_dma_free_tx_segment(chan, segment);
> > +       }
> > +
> > +       kfree(desc);
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_desc_list - Free descriptors list
> > + * @chan: Driver specific dma channel
> > + * @list: List to parse and delete the descriptor
> > + */
> > +static void xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
> > +                                     struct list_head *list)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc, *next;
> > +
> > +       list_for_each_entry_safe(desc, next, list, node) {
> > +               list_del(&desc->node);
> > +               xilinx_dma_free_tx_descriptor(chan, desc);
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_descriptors - Free channel descriptors
> > + * @chan: Driver specific dma channel
> > + */
> > +static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
> > +{
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       xilinx_dma_free_desc_list(chan, &chan->pending_list);
> > +       xilinx_dma_free_desc_list(chan, &chan->done_list);
> > +
> > +       xilinx_dma_free_tx_descriptor(chan, chan->active_desc);
> > +       chan->active_desc = NULL;
> > +
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_free_chan_resources - Free channel resources
> > + * @dchan: DMA channel
> > + */
> > +static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +
> > +       xilinx_dma_free_descriptors(chan);
> > +
> > +       dma_free_coherent(chan->dev,
> > +                         sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS,
> > +                         chan->seg_v, chan->seg_p);
> > +}
> > +
> > +/**
> > + * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
> > + * @chan: Driver specific dma channel
> > + */
> > +static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc, *next;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       list_for_each_entry_safe(desc, next, &chan->done_list, node) {
> > +               dma_async_tx_callback callback;
> > +               void *callback_param;
> > +
> > +               /* Remove from the list of running transactions */
> > +               list_del(&desc->node);
> > +
> > +               /* Run the link descriptor callback function */
> > +               callback = desc->async_tx.callback;
> > +               callback_param = desc->async_tx.callback_param;
> > +               if (callback) {
> > +                       spin_unlock_irqrestore(&chan->lock, flags);
> > +                       callback(callback_param);
> > +                       spin_lock_irqsave(&chan->lock, flags);
> > +               }
> > +
> > +               /* Run any dependencies, then free the descriptor */
> > +               dma_run_dependencies(&desc->async_tx);
> > +               xilinx_dma_free_tx_descriptor(chan, desc);
> > +       }
> > +
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_do_tasklet - Schedule completion tasklet
> > + * @data: Pointer to the Xilinx dma channel structure
> > + */
> > +static void xilinx_dma_do_tasklet(unsigned long data)
> > +{
> > +       struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
> > +
> > +       xilinx_dma_chan_desc_cleanup(chan);
> > +}
> > +
> > +/**
> > + * xilinx_dma_alloc_chan_resources - Allocate channel resources
> > + * @dchan: DMA channel
> > + *
> > + * Return: '0' on success and failure value on error
> > + */
> > +static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       int i;
> > +
> > +       /* Allocate the buffer descriptors. */
> > +       chan->seg_v = dma_zalloc_coherent(chan->dev,
> > +                                         sizeof(*chan->seg_v) *
> > +                                         XILINX_DMA_NUM_DESCS,
> > +                                         &chan->seg_p, GFP_KERNEL);
> > +       if (!chan->seg_v) {
> > +               dev_err(chan->dev,
> > +                       "unable to allocate channel %d descriptors\n",
> > +                       chan->id);
> > +               return -ENOMEM;
> > +       }
> > +
> > +       for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
> > +               chan->seg_v[i].hw.next_desc =
> > +                               chan->seg_p + sizeof(*chan->seg_v) *
> > +                               ((i + 1) % XILINX_DMA_NUM_DESCS);
> > +               chan->seg_v[i].phys =
> > +                               chan->seg_p + sizeof(*chan->seg_v) * i;
> > +               list_add_tail(&chan->seg_v[i].node, &chan->free_seg_list);
> > +       }
> > +
> > +       dma_cookie_init(dchan);
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_tx_status - Get dma transaction status
> > + * @dchan: DMA channel
> > + * @cookie: Transaction identifier
> > + * @txstate: Transaction state
> > + *
> > + * Return: DMA transaction status
> > + */
> > +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> > +                                           dma_cookie_t cookie,
> > +                                           struct dma_tx_state *txstate)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +       enum dma_status ret;
> > +       unsigned long flags;
> > +
> > +       ret = dma_cookie_status(dchan, cookie, txstate);
> > +       if (ret != DMA_COMPLETE) {
> > +               spin_lock_irqsave(&chan->lock, flags);
> > +               dma_set_residue(txstate, chan->residue);
> > +               spin_unlock_irqrestore(&chan->lock, flags);
> > +       }
> > +
> > +       return ret;
> > +}
> > +
> > +/**
> > + * xilinx_dma_is_running - Check if DMA channel is running
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: 'true' if running, 'false' if not.
> > + */
> > +static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
> > +{
> > +       return !(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                XILINX_DMA_SR_HALTED_MASK) &&
> > +               (dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +                XILINX_DMA_CR_RUNSTOP_MASK);
> > +}
> > +
> > +/**
> > + * xilinx_dma_is_idle - Check if DMA channel is idle
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: 'true' if idle, 'false' if not.
> > + */
> > +static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
> > +{
> > +       return dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +               XILINX_DMA_SR_IDLE_MASK;
> > +}
> > +
> > +/**
> > + * xilinx_dma_halt - Halt DMA channel
> > + * @chan: Driver specific DMA channel
> > + */
> > +static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +
> > +       dma_ctrl_clr(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RUNSTOP_MASK);
> > +
> > +       /* Wait for the hardware to halt */
> > +       do {
> > +               if (dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                       XILINX_DMA_SR_HALTED_MASK)
> > +                       break;
> > +       } while (loop--);
> > +
> > +       if (!loop) {
> > +               pr_debug("Cannot stop channel %p: %x\n",
> > +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               chan->err = true;
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_start - Start DMA channel
> > + * @chan: Driver specific DMA channel
> > + */
> > +static void xilinx_dma_start(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RUNSTOP_MASK);
> > +
> > +       /* Wait for the hardware to start */
> > +       do {
> > +               if (!(dma_ctrl_read(chan, XILINX_DMA_REG_STATUS) &
> > +                     XILINX_DMA_SR_HALTED_MASK))
> > +                       break;
> > +       } while (loop--);
> > +
> > +       if (!loop) {
> > +               pr_debug("Cannot start channel %p: %x\n",
> > +                        chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               chan->err = true;
> > +       }
> > +}
> > +
> > +/**
> > + * xilinx_dma_device_slave_caps - Slave channel capabilities
> > + * @dchan: DMA channel
> > + * @caps: Slave capabilities to set
> > + *
> > + * Return: Always '0'
> > + */
> > +static int xilinx_dma_device_slave_caps(struct dma_chan *dchan,
> > +                                       struct dma_slave_caps *caps)
> > +{
> > +       caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > +       caps->cmd_terminate = true;
> > +       caps->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer
> > + */
> > +static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *head, *tail = NULL;
> > +       unsigned long flags;
> > +
> > +       if (chan->err)
> > +               return;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       /* There's already an active descriptor, bail out. */
> > +       if (chan->active_desc)
> > +               goto out_unlock;
> > +
> > +       if (list_empty(&chan->pending_list))
> > +               goto out_unlock;
> > +
> > +       desc = list_first_entry(&chan->pending_list,
> > +                               struct xilinx_dma_tx_descriptor, node);
> > +
> > +       if (chan->has_sg && xilinx_dma_is_running(chan) &&
> > +           !xilinx_dma_is_idle(chan)) {
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +               goto out_free_desc;
> > +       }
> > +
> > +       if (chan->has_sg) {
> > +               head = list_first_entry(&desc->segments,
> > +                                       struct xilinx_dma_tx_segment, node);
> > +               tail = list_entry(desc->segments.prev,
> > +                                 struct xilinx_dma_tx_segment, node);
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC, head->phys);
> > +       }
> > +
> > +       xilinx_dma_start(chan);
> > +       if (chan->err)
> > +               goto out_unlock;
> > +
> > +       /* Enable interrupts */
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> There is no reason to enable the interrupts on every transfer. We can do it
> only once.

Ok Will do this change in the next version of the patch.
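
Roughly something like this, I think (sketch only, not tested; the enable would
move out of xilinx_dma_start_transfer() and into channel setup):

static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);

	/* ... buffer descriptor pool allocation as in the patch ... */

	dma_cookie_init(dchan);

	/* Enable interrupts once, at channel setup time */
	dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
		     XILINX_DMA_XR_IRQ_ALL_MASK);

	return 0;
}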

>
> > +       /* Start the transfer */
> > +       if (chan->has_sg) {
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC, tail->phys);
> > +       } else {
> > +               struct xilinx_dma_tx_segment *segment;
> > +               struct xilinx_dma_desc_hw *hw;
> > +
> > +               segment = list_first_entry(&desc->segments,
> > +                                          struct xilinx_dma_tx_segment, node);
> > +               hw = &segment->hw;
> > +
> > +               if (desc->direction == DMA_MEM_TO_DEV)
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_SRCADDR,
> > +                                      hw->buf_addr);
> > +               else
> > +                       dma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
> > +                                      hw->buf_addr);
> > +
> > +               /* Start the transfer */
> > +               dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +                              hw->control & XILINX_DMA_MAX_TRANS_LEN);
> > +       }
> > +
> > +out_free_desc:
> > +       list_del(&desc->node);
> > +       chan->active_desc = desc;
> > +
> > +out_unlock:
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_issue_pending - Issue pending transactions
> > + * @dchan: DMA channel
> > + */
> > +static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> > +{
> > +       struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +
> > +       xilinx_dma_start_transfer(chan);
> You can lock here instead of locking inside the function in order to acquire
> the lock a single time in IRQ.

Ok Will do.
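
Something like this, I suppose (sketch only; xilinx_dma_start_transfer() would
then expect the caller to hold chan->lock):

/* Caller must hold chan->lock */
static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
{
	/* ... same body as in the patch, minus spin_lock_irqsave/restore ... */
}

static void xilinx_dma_issue_pending(struct dma_chan *dchan)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
	unsigned long flags;

	spin_lock_irqsave(&chan->lock, flags);
	xilinx_dma_start_transfer(chan);
	spin_unlock_irqrestore(&chan->lock, flags);
}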

> > +}
> > +
> > +/**
> > + * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
> > + * @chan: Xilinx DMA channel
> > + */
> > +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> > +{
> > +       struct xilinx_dma_tx_descriptor *desc;
> > +       struct xilinx_dma_tx_segment *segment, *next;
> > +       struct xilinx_dma_desc_hw *hw;
> > +       unsigned long flags;
> > +       u32 residue = 0;
> > +
> > +       spin_lock_irqsave(&chan->lock, flags);
> > +
> > +       desc = chan->active_desc;
> > +       if (!desc) {
> > +               dev_dbg(chan->dev, "no running descriptors\n");
> > +               goto out_unlock;
> > +       }
> > +
> > +       if (chan->has_sg) {
> > +               list_for_each_entry_safe(segment, next, &desc->segments, node) {
> > +                       hw = &segment->hw;
> > +                       residue += (hw->control - hw->status) &
> > +                                  XILINX_DMA_MAX_TRANS_LEN;
> > +               }
> > +       }
> > +
> > +       chan->residue = residue;
> > +       dma_cookie_complete(&desc->async_tx);
> > +       list_add_tail(&desc->node, &chan->done_list);
> > +
> > +       chan->active_desc = NULL;
> > +
> > +out_unlock:
> > +       spin_unlock_irqrestore(&chan->lock, flags);
> > +}
> > +
> > +/**
> > + * xilinx_dma_reset - Reset DMA channel
> > + * @chan: Driver specific DMA channel
> > + *
> > + * Return: '0' on success and failure value on error
> > + */
> > +static int xilinx_dma_reset(struct xilinx_dma_chan *chan)
> > +{
> > +       int loop = XILINX_DMA_LOOP_COUNT;
> > +       u32 tmp;
> > +
> > +       dma_ctrl_set(chan, XILINX_DMA_REG_CONTROL,
> > +                    XILINX_DMA_CR_RESET_MASK);
> > +
> > +       tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +             XILINX_DMA_CR_RESET_MASK;
> > +
> > +       /* Wait for the hardware to finish reset */
> > +       do {
> > +               tmp = dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL) &
> > +                     XILINX_DMA_CR_RESET_MASK;
> > +       } while (loop-- && tmp);
> > +
> > +       if (!loop) {
> > +               dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_CONTROL),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_STATUS));
> > +               return -EBUSY;
> > +       }
> > +
> > +       chan->err = false;
> > +
> > +       return 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_irq_handler - DMA Interrupt handler
> > + * @irq: IRQ number
> > + * @data: Pointer to the Xilinx DMA channel structure
> > + *
> > + * Return: IRQ_HANDLED/IRQ_NONE
> > + */
> > +static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
> > +{
> > +       struct xilinx_dma_chan *chan = data;
> > +       u32 status;
> > +
> > +       /* Read the status and ack the interrupts. */
> > +       status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
> > +       if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
> > +               return IRQ_NONE;
> > +
> > +       dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
> > +                      status & XILINX_DMA_XR_IRQ_ALL_MASK);
> > +
> > +       if (status & XILINX_DMA_XR_IRQ_ERROR_MASK) {
> > +               dev_err(chan->dev,
> > +                       "Channel %p has errors %x, cdr %x tdr %x\n",
> > +                       chan, dma_ctrl_read(chan, XILINX_DMA_REG_STATUS),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
> > +                       dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
> > +               chan->err = true;
> > +       }
> > +
> > +       /*
> > +        * Device takes too long to do the transfer when user requires
> > +        * responsiveness
> > +        */
> > +       if (status & XILINX_DMA_XR_IRQ_DELAY_MASK)
> > +               dev_dbg(chan->dev, "Inter-packet latency too long\n");
> > +
> > +       if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
> > +               xilinx_dma_complete_descriptor(chan);
> this call disables/restores IRQ state but we are already in IRQ.

Ok will cross check.

>
> > +               xilinx_dma_start_transfer(chan);
> this one does the same thing.

Ok will modify
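
For reference, the IRQ path could then look roughly like this (sketch; it
assumes the complete/start helpers no longer take the lock themselves, and
uses plain spin_lock since we are already in hard IRQ context):

static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
{
	struct xilinx_dma_chan *chan = data;
	u32 status;

	status = dma_ctrl_read(chan, XILINX_DMA_REG_STATUS);
	if (!(status & XILINX_DMA_XR_IRQ_ALL_MASK))
		return IRQ_NONE;

	dma_ctrl_write(chan, XILINX_DMA_REG_STATUS,
		       status & XILINX_DMA_XR_IRQ_ALL_MASK);

	/* ... error and delay-interrupt handling as in the patch ... */

	if (status & XILINX_DMA_XR_IRQ_IOC_MASK) {
		spin_lock(&chan->lock);
		xilinx_dma_complete_descriptor(chan);
		xilinx_dma_start_transfer(chan);
		spin_unlock(&chan->lock);
	}

	/* ... rest of the handler as in the patch ... */
	return IRQ_HANDLED;
}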

Regards,
Kedar.



* Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-02 17:55 [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support Kedareswara rao Appana
                   ` (2 preceding siblings ...)
  2015-03-02 22:01 ` Paul Bolle
@ 2015-03-17 11:07 ` Vinod Koul
  2015-03-23 16:24   ` Appana Durga Kedareswara Rao
  3 siblings, 1 reply; 10+ messages in thread
From: Vinod Koul @ 2015-03-17 11:07 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, michal.simek, soren.brinkmann, dmaengine,
	linux-arm-kernel, linux-kernel, appanad, anirudh, svemula,
	Srikanth Thokala

On Mon, Mar 02, 2015 at 11:25:11PM +0530, Kedareswara rao Appana wrote:
> This is the driver for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
> 
> Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
> This patch is rebased on top of dma: xilinx-dma: move header file
> to common location.
but not on slave-dma next; some API update is required.


> +/*
> + * DMA driver for Xilinx DMA Engine
> + *
> + * Copyright (C) 2010 - 2014 Xilinx, Inc. All rights reserved.
2015?

> +static struct xilinx_dma_tx_descriptor *
> +xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
> +{
> +	struct xilinx_dma_tx_descriptor *desc;
> +	unsigned long flags;
> +
> +	if (chan->allocated_desc)
> +		return chan->allocated_desc;
> +
> +	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
GFP_NOWAIT

> +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> +					    dma_cookie_t cookie,
> +					    struct dma_tx_state *txstate)
> +{
> +	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +	enum dma_status ret;
> +	unsigned long flags;
> +
> +	ret = dma_cookie_status(dchan, cookie, txstate);
> +	if (ret != DMA_COMPLETE) {
txstate can be null

> +		spin_lock_irqsave(&chan->lock, flags);
> +		dma_set_residue(txstate, chan->residue);
the queried descriptor may not be submitted to HW. Also the expectations are
that you will read current value from HW and calculate residue

> +static int xilinx_dma_device_slave_caps(struct dma_chan *dchan,
> +					struct dma_slave_caps *caps)
> +{
> +	caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	caps->cmd_terminate = true;
> +	caps->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +
> +	return 0;
> +}
this is based on older API, pls update


> +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> +{
> +	struct xilinx_dma_tx_descriptor *desc;
> +	struct xilinx_dma_tx_segment *segment, *next;
> +	struct xilinx_dma_desc_hw *hw;
> +	unsigned long flags;
> +	u32 residue = 0;
> +
> +	spin_lock_irqsave(&chan->lock, flags);
> +
> +	desc = chan->active_desc;
> +	if (!desc) {
> +		dev_dbg(chan->dev, "no running descriptors\n");
> +		goto out_unlock;
> +	}
> +
> +	if (chan->has_sg) {
> +		list_for_each_entry_safe(segment, next, &desc->segments, node) {
> +			hw = &segment->hw;
> +			residue += (hw->control - hw->status) &
> +				   XILINX_DMA_MAX_TRANS_LEN;
> +		}
why are we calculating residue here?
> +	}
> +
> +	chan->residue = residue;
and this is used in status call, so completely wrong!

> +static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
> +{
> +	struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
> +	struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
> +	dma_cookie_t cookie;
> +	unsigned long flags;
> +	int err;
> +
> +	if (chan->err) {
> +		/*
> +		 * If reset fails, need to hard reset the system.
> +		 * Channel is no longer functional
> +		 */
> +		err = xilinx_dma_reset(chan);
> +		if (err < 0)
> +			return err;
> +	}
> +
> +	spin_lock_irqsave(&chan->lock, flags);
> +
> +	cookie = dma_cookie_assign(tx);
> +
> +	/* Append the transaction to the pending transactions queue. */
> +	list_add_tail(&desc->node, &chan->pending_list);
> +
> +	/* Free the allocated desc */
> +	chan->allocated_desc = NULL;
this bit is confusing, can you explain what is going on?

> +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> +	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> +	enum dma_transfer_direction direction, unsigned long flags,
> +	void *context)
> +{
> +	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> +	struct xilinx_dma_tx_descriptor *desc;
> +	struct xilinx_dma_tx_segment *segment;
> +	struct xilinx_dma_desc_hw *hw;
> +	u32 *app_w = (u32 *)context;
> +	struct scatterlist *sg;
> +	size_t copy, sg_used;
> +	int i;
> +
> +	if (!is_slave_direction(direction))
> +		return NULL;
> +
> +	/* Allocate a transaction descriptor. */
> +	desc = xilinx_dma_alloc_tx_descriptor(chan);
> +	if (!desc)
> +		return NULL;
> +
> +	desc->direction = direction;
> +	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +	desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> +	desc->async_tx.cookie = 0;
?

> +	async_tx_ack(&desc->async_tx);
why?

> +
> +	/* Build transactions using information in the scatter gather list */
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		sg_used = 0;
> +
> +		/* Loop until the entire scatterlist entry is used */
> +		while (sg_used < sg_dma_len(sg)) {
> +
> +			/* Get a free segment */
> +			segment = xilinx_dma_alloc_tx_segment(chan);
> +			if (!segment)
> +				goto error;
> +
> +			/*
> +			 * Calculate the maximum number of bytes to transfer,
> +			 * making sure it is less than the hw limit
> +			 */
> +			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> +				     XILINX_DMA_MAX_TRANS_LEN);
> +			hw = &segment->hw;
> +
> +			/* Fill in the descriptor */
> +			hw->buf_addr = sg_dma_address(sg) + sg_used;
> +
> +			hw->control = copy;
> +
> +			if (direction == DMA_MEM_TO_DEV) {
> +				if (app_w)
> +					memcpy(hw->app, app_w, sizeof(u32) *
> +					       XILINX_DMA_NUM_APP_WORDS);
> +
> +				/*
> +				 * For the first DMA_MEM_TO_DEV transfer,
> +				 * set SOP
> +				 */
> +				if (!i)
> +					hw->control |= XILINX_DMA_BD_SOP;
> +			}
no else?

> +static int xilinx_dma_remove(struct platform_device *pdev)
> +{
> +	struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> +	int i;
> +
> +	of_dma_controller_free(pdev->dev.of_node);
> +	dma_async_device_unregister(&xdev->common);
> +
> +	for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> +		if (xdev->chan[i])
> +			xilinx_dma_chan_remove(xdev->chan[i]);
> +
at this point your irq is active and the tasklet can still be scheduled


-- 
~Vinod


* RE: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-17 11:07 ` Vinod Koul
@ 2015-03-23 16:24   ` Appana Durga Kedareswara Rao
  2015-03-24 16:28     ` Vinod Koul
  0 siblings, 1 reply; 10+ messages in thread
From: Appana Durga Kedareswara Rao @ 2015-03-23 16:24 UTC (permalink / raw)
  To: Vinod Koul
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, dmaengine,
	linux-arm-kernel, linux-kernel, Anirudha Sarangi,
	Srikanth Vemula, Srikanth Thokala

Hi Vinod,

Sorry for the delay in reply. Answers for the comments inline.

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@intel.com]
> Sent: Tuesday, March 17, 2015 4:38 PM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; Michal Simek; Soren Brinkmann;
> dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
> kernel@vger.kernel.org; Appana Durga Kedareswara Rao; Anirudha Sarangi;
> Srikanth Vemula; Srikanth Thokala
> Subject: Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine
> driver support
>
> On Mon, Mar 02, 2015 at 11:25:11PM +0530, Kedareswara rao Appana wrote:
> > This is the driver for the AXI Direct Memory Access (AXI DMA) core,
> > which is a soft Xilinx IP core that provides high- bandwidth direct
> > memory access between memory and AXI4-Stream type target
> peripherals.
> >
> > Signed-off-by: Srikanth Thokala <sthokal@xilinx.com>
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> > This patch is rebased on top of dma: xilinx-dma: move header file to
> > common location.
> but not on slave-dma next; some API update is required.

OK, will make the changes for the API updates and will send the next patch
on top of slave-dma next.

>
>
> > +/*
> > + * DMA driver for Xilinx DMA Engine
> > + *
> > + * Copyright (C) 2010 - 2014 Xilinx, Inc. All rights reserved.
> 2015?

Ok will modify.

>
> > +static struct xilinx_dma_tx_descriptor *
> > +xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan) {
> > +   struct xilinx_dma_tx_descriptor *desc;
> > +   unsigned long flags;
> > +
> > +   if (chan->allocated_desc)
> > +           return chan->allocated_desc;
> > +
> > +   desc = kzalloc(sizeof(*desc), GFP_KERNEL);
> GFP_NOWAIT

Ok
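i.e. something like this (sketch):

	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);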

>
> > +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> > +                                       dma_cookie_t cookie,
> > +                                       struct dma_tx_state *txstate) {
> > +   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +   enum dma_status ret;
> > +   unsigned long flags;
> > +
> > +   ret = dma_cookie_status(dchan, cookie, txstate);
> > +   if (ret != DMA_COMPLETE) {
> txstate can be null

Ok will modify.
It will be something like below

ret = dma_cookie_status(dchan, cookie, txstate);
if (ret == DMA_COMPLETE || !txstate)
        return ret;
Calculate residue.

Please correct me if I am wrong


>
> > +           spin_lock_irqsave(&chan->lock, flags);
> > +           dma_set_residue(txstate, chan->residue);
> the queried descriptor may not be submitted to HW. Also the expectations
> are that you will read current value from HW and calculate residue

OK, will modify; will calculate the residue here (in tx_status) instead.
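
Something along these lines, perhaps (sketch only; it walks the active
descriptor's segments under the lock when status is queried, rather than
relying on a precomputed chan->residue):

static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
					    dma_cookie_t cookie,
					    struct dma_tx_state *txstate)
{
	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
	struct xilinx_dma_tx_segment *segment;
	struct xilinx_dma_desc_hw *hw;
	enum dma_status ret;
	unsigned long flags;
	u32 residue = 0;

	ret = dma_cookie_status(dchan, cookie, txstate);
	if (ret == DMA_COMPLETE || !txstate)
		return ret;

	spin_lock_irqsave(&chan->lock, flags);
	if (chan->has_sg && chan->active_desc) {
		list_for_each_entry(segment, &chan->active_desc->segments, node) {
			hw = &segment->hw;
			residue += (hw->control - hw->status) &
				   XILINX_DMA_MAX_TRANS_LEN;
		}
	}
	spin_unlock_irqrestore(&chan->lock, flags);

	dma_set_residue(txstate, residue);

	return ret;
}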

>
> > +static int xilinx_dma_device_slave_caps(struct dma_chan *dchan,
> > +                                   struct dma_slave_caps *caps)
> > +{
> > +   caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > +   caps->cmd_terminate = true;
> > +   caps->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +
> > +   return 0;
> > +}
> this is based on older API, pls update

Ok Will update.
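
For example (sketch; this assumes the capability fields the dmaengine core now
exposes on struct dma_device, filled in at probe time instead of the
device_slave_caps callback; the terminate_all handler name is only illustrative):

	/* in the probe function, replacing .device_slave_caps */
	xdev->common.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
	xdev->common.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
	xdev->common.device_terminate_all = xilinx_dma_terminate_all;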

>
>
> > +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan
> > +*chan) {
> > +   struct xilinx_dma_tx_descriptor *desc;
> > +   struct xilinx_dma_tx_segment *segment, *next;
> > +   struct xilinx_dma_desc_hw *hw;
> > +   unsigned long flags;
> > +   u32 residue = 0;
> > +
> > +   spin_lock_irqsave(&chan->lock, flags);
> > +
> > +   desc = chan->active_desc;
> > +   if (!desc) {
> > +           dev_dbg(chan->dev, "no running descriptors\n");
> > +           goto out_unlock;
> > +   }
> > +
> > +   if (chan->has_sg) {
> > +           list_for_each_entry_safe(segment, next, &desc->segments, node) {
> > +                   hw = &segment->hw;
> > +                   residue += (hw->control - hw->status) &
> > +                              XILINX_DMA_MAX_TRANS_LEN;
> > +           }
> why are we calculating residue here?


This API is called from the interrupt handler when the BD (buffer descriptor)
has been successfully transmitted.
The thought was that calculating the residue here is more accurate than
calculating it in tx_status.


> > +   }
> > +
> > +   chan->residue = residue;
> and this is used in status call, so completely wrong!

 OK

>
> > +static dma_cookie_t xilinx_dma_tx_submit(struct
> > +dma_async_tx_descriptor *tx) {
> > +   struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
> > +   struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
> > +   dma_cookie_t cookie;
> > +   unsigned long flags;
> > +   int err;
> > +
> > +   if (chan->err) {
> > +           /*
> > +            * If reset fails, need to hard reset the system.
> > +            * Channel is no longer functional
> > +            */
> > +           err = xilinx_dma_reset(chan);
> > +           if (err < 0)
> > +                   return err;
> > +   }
> > +
> > +   spin_lock_irqsave(&chan->lock, flags);
> > +
> > +   cookie = dma_cookie_assign(tx);
> > +
> > +   /* Append the transaction to the pending transactions queue. */
> > +   list_add_tail(&desc->node, &chan->pending_list);
> > +
> > +   /* Free the allocated desc */
> > +   chan->allocated_desc = NULL;
> this bit is confusing, can you explain what is going on?

This allows queueing up multiple segments onto a single transaction descriptor.
The user submits this single descriptor, and in issue_pending() we decode the
multiple segments and submit them to the SG HW engine. We clear allocated_desc
once the descriptor has been submitted to the HW.

Please let me know if my explanation is not clear.
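
In other words, the intended flow is roughly (illustration only, abridged from
the code above):

	desc = xilinx_dma_alloc_tx_descriptor(chan); /* reused across prep calls */
	/* prep_slave_sg() keeps appending segments to desc->segments */
	/* tx_submit() moves desc to pending_list and clears chan->allocated_desc */
	/* issue_pending() -> xilinx_dma_start_transfer() programs the whole chain */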

>
> > +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> > +   struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> > +   enum dma_transfer_direction direction, unsigned long flags,
> > +   void *context)
> > +{
> > +   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > +   struct xilinx_dma_tx_descriptor *desc;
> > +   struct xilinx_dma_tx_segment *segment;
> > +   struct xilinx_dma_desc_hw *hw;
> > +   u32 *app_w = (u32 *)context;
> > +   struct scatterlist *sg;
> > +   size_t copy, sg_used;
> > +   int i;
> > +
> > +   if (!is_slave_direction(direction))
> > +           return NULL;
> > +
> > +   /* Allocate a transaction descriptor. */
> > +   desc = xilinx_dma_alloc_tx_descriptor(chan);
> > +   if (!desc)
> > +           return NULL;
> > +
> > +   desc->direction = direction;
> > +   dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> > +   desc->async_tx.tx_submit = xilinx_dma_tx_submit;
> > +   desc->async_tx.cookie = 0;
> ?

This is while preparing the descs, but I see your point.  I will fix
it in my next version of the patch.

>
> > +   async_tx_ack(&desc->async_tx);
> why?

Is it not required?
As far as I know we should ack the descriptor for the slave DMA case, right?
( https://www.kernel.org/doc/Documentation/crypto/async-tx-api.txt )

>
> > +
> > +   /* Build transactions using information in the scatter gather list */
> > +   for_each_sg(sgl, sg, sg_len, i) {
> > +           sg_used = 0;
> > +
> > +           /* Loop until the entire scatterlist entry is used */
> > +           while (sg_used < sg_dma_len(sg)) {
> > +
> > +                   /* Get a free segment */
> > +                   segment = xilinx_dma_alloc_tx_segment(chan);
> > +                   if (!segment)
> > +                           goto error;
> > +
> > +                   /*
> > +                    * Calculate the maximum number of bytes to transfer,
> > +                    * making sure it is less than the hw limit
> > +                    */
> > +                   copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> > +                                XILINX_DMA_MAX_TRANS_LEN);
> > +                   hw = &segment->hw;
> > +
> > +                   /* Fill in the descriptor */
> > +                   hw->buf_addr = sg_dma_address(sg) + sg_used;
> > +
> > +                   hw->control = copy;
> > +
> > +                   if (direction == DMA_MEM_TO_DEV) {
> > +                           if (app_w)
> > +                                   memcpy(hw->app, app_w, sizeof(u32) *
> > +                                          XILINX_DMA_NUM_APP_WORDS);
> > +
> > +                           /*
> > +                            * For the first DMA_MEM_TO_DEV transfer,
> > +                            * set SOP
> > +                            */
> > +                           if (!i)
> > +                                   hw->control |= XILINX_DMA_BD_SOP;
> > +                   }
> no else?

Not required.

>
> > +static int xilinx_dma_remove(struct platform_device *pdev) {
> > +   struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> > +   int i;
> > +
> > +   of_dma_controller_free(pdev->dev.of_node);
> > +   dma_async_device_unregister(&xdev->common);
> > +
> > +   for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> > +           if (xdev->chan[i])
> > +                   xilinx_dma_chan_remove(xdev->chan[i]);
> > +
> at this point your irq is active and the tasklet can still be scheduled

We are freeing the IRQ and killing the tasklet in the chan_remove API, so why would the IRQ still be active at this point?
I didn't get you.
Could you please explain a bit?

Regards,
Kedar.

>
>
> --
> ~Vinod





* Re: [PATCH v5] dma: Add Xilinx AXI Direct Memory Access Engine driver support
  2015-03-23 16:24   ` Appana Durga Kedareswara Rao
@ 2015-03-24 16:28     ` Vinod Koul
  0 siblings, 0 replies; 10+ messages in thread
From: Vinod Koul @ 2015-03-24 16:28 UTC (permalink / raw)
  To: Appana Durga Kedareswara Rao
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, dmaengine,
	linux-arm-kernel, linux-kernel, Anirudha Sarangi,
	Srikanth Vemula, Srikanth Thokala

On Mon, Mar 23, 2015 at 04:24:26PM +0000, Appana Durga Kedareswara Rao wrote:
> > > +static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
> > > +                                       dma_cookie_t cookie,
> > > +                                       struct dma_tx_state *txstate) {
> > > +   struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
> > > +   enum dma_status ret;
> > > +   unsigned long flags;
> > > +
> > > +   ret = dma_cookie_status(dchan, cookie, txstate);
> > > +   if (ret != DMA_COMPLETE) {
> > txstate can be null
> 
> Ok will modify.
> It will be something like below
> 
> ret = dma_cookie_status(dchan, cookie, txstate);
> if (ret == DMA_COMPLETE || !txstate)
>         return ret;
> Calculate residue.
> 
> Please correct me if I am wrong
Thats right

> > > +static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan
> > > +*chan) {
> > > +   struct xilinx_dma_tx_descriptor *desc;
> > > +   struct xilinx_dma_tx_segment *segment, *next;
> > > +   struct xilinx_dma_desc_hw *hw;
> > > +   unsigned long flags;
> > > +   u32 residue = 0;
> > > +
> > > +   spin_lock_irqsave(&chan->lock, flags);
> > > +
> > > +   desc = chan->active_desc;
> > > +   if (!desc) {
> > > +           dev_dbg(chan->dev, "no running descriptors\n");
> > > +           goto out_unlock;
> > > +   }
> > > +
> > > +   if (chan->has_sg) {
> > > +           list_for_each_entry_safe(segment, next, &desc->segments, node) {
> > > +                   hw = &segment->hw;
> > > +                   residue += (hw->control - hw->status) &
> > > +                              XILINX_DMA_MAX_TRANS_LEN;
> > > +           }
> > why are we calculating residue here?
> 
> 
> This API is called from Interrupt handler when the BD(Buffer Descriptor) is
> Successfully transmitted.
> Thought of calculating residue here is more accurate than calculating the residue in the
> tx_status.
Nope, you need to report the residue when asked, not precompute it!

> This is while preparing the descs, but I see your point.  I will fix
> it in my next version of the patch.
> 
> >
> > > +   async_tx_ack(&desc->async_tx);
> > why?
> 
> It is not required?
> As far as I know we should ack the descriptor for Slave dma case right?
> ( https://www.kernel.org/doc/Documentation/crypto/async-tx-api.txt )
No you don't; for slave DMA you need to read Documentation/dmaengine/

> > > +static int xilinx_dma_remove(struct platform_device *pdev) {
> > > +   struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
> > > +   int i;
> > > +
> > > +   of_dma_controller_free(pdev->dev.of_node);
> > > +   dma_async_device_unregister(&xdev->common);
> > > +
> > > +   for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
> > > +           if (xdev->chan[i])
> > > +                   xilinx_dma_chan_remove(xdev->chan[i]);
> > > +
> > at this point your irq is active and the tasklet can still be scheduled
> 
> We are freeing the IRQ and killing the tasklet in the chan_remove API, so why would the IRQ still be active at this point?
I missed this bit; if you are freeing the irq explicitly then it should be okay.

> I didn't get you.
> Could you please explain a bit?

-- 
~Vinod


