linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
@ 2016-03-15 17:23 Kedareswara rao Appana
  2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
                   ` (7 more replies)
  0 siblings, 8 replies; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch series does some enhancments to the VDMA driver
which includes
--> Adding support for AXI DMA IP.
--> Adding support for AXI CDMA IP.
--> Fixing checkpatch warnings.

Kedareswara rao Appana (7):
  dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
  dmaengine: xilinx_vdma: Add quirks support to differentiate differnet
    IP cores
  dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
    Access Engine
  dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
    Access Engine
  dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding
    doc
  dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
    Memory Access Engine
  dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
    Memory Access Engine

 .../devicetree/bindings/dma/xilinx/xilinx_dma.txt  |  65 ---
 .../devicetree/bindings/dma/xilinx/xilinx_vdma.txt |  37 +-
 drivers/dma/xilinx/xilinx_vdma.c                   | 593 +++++++++++++++++++--
 3 files changed, 574 insertions(+), 121 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt

-- 
2.1.2

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-15 17:50   ` Moritz Fischer
  2016-03-17  7:00   ` Laurent Pinchart
  2016-03-15 17:23 ` [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores Kedareswara rao Appana
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch fixes the below checkpatch.pl warnings.

WARNING: void function return statements are not generally useful
+	return;
+}

WARNING: void function return statements are not generally useful
+	return;
+}

WARNING: Missing a blank line after declarations
+		u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_vdma.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index 0ee0321..7ab6793 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -569,8 +569,6 @@ static void xilinx_vdma_halt(struct xilinx_vdma_chan *chan)
 			chan, vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR));
 		chan->err = true;
 	}
-
-	return;
 }
 
 /**
@@ -595,8 +593,6 @@ static void xilinx_vdma_start(struct xilinx_vdma_chan *chan)
 
 		chan->err = true;
 	}
-
-	return;
 }
 
 /**
@@ -829,6 +825,7 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
 		 * make sure not to write to other error bits to 1.
 		 */
 		u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
+
 		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
 				errors & XILINX_VDMA_DMASR_ERR_RECOVER_MASK);
 
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
  2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-16  3:13   ` Vinod Koul
  2016-03-17  7:19   ` Laurent Pinchart
  2016-03-15 17:23 ` [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine Kedareswara rao Appana
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch adds quirks support in the driver to differentiate differnet IP cores.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_vdma.c | 36 ++++++++++++++++++++++++++++++------
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index 7ab6793..f682bef 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -139,6 +139,8 @@
 /* Delay loop counter to prevent hardware failure */
 #define XILINX_VDMA_LOOP_COUNT		1000000
 
+#define AXIVDMA_SUPPORT		BIT(0)
+
 /**
  * struct xilinx_vdma_desc_hw - Hardware Descriptor
  * @next_desc: Next Descriptor Pointer @0x00
@@ -240,6 +242,7 @@ struct xilinx_vdma_chan {
  * @chan: Driver specific VDMA channel
  * @has_sg: Specifies whether Scatter-Gather is present or not
  * @flush_on_fsync: Flush on frame sync
+ * @quirks: Needed for different IP cores
  */
 struct xilinx_vdma_device {
 	void __iomem *regs;
@@ -248,6 +251,15 @@ struct xilinx_vdma_device {
 	struct xilinx_vdma_chan *chan[XILINX_VDMA_MAX_CHANS_PER_DEVICE];
 	bool has_sg;
 	u32 flush_on_fsync;
+	u32 quirks;
+};
+
+/**
+ * struct xdma_platform_data - DMA platform structure
+ * @quirks: quirks for platform specific data.
+ */
+struct xdma_platform_data {
+	u32 quirks;
 };
 
 /* Macros */
@@ -1239,6 +1251,16 @@ static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
 	return dma_get_slave_channel(&xdev->chan[chan_id]->common);
 }
 
+static const struct xdma_platform_data xvdma_def = {
+	.quirks = AXIVDMA_SUPPORT,
+};
+
+static const struct of_device_id xilinx_vdma_of_ids[] = {
+	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
+	{}
+};
+MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
+
 /**
  * xilinx_vdma_probe - Driver probe function
  * @pdev: Pointer to the platform_device structure
@@ -1251,6 +1273,7 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 	struct xilinx_vdma_device *xdev;
 	struct device_node *child;
 	struct resource *io;
+	const struct of_device_id *match;
 	u32 num_frames;
 	int i, err;
 
@@ -1259,6 +1282,13 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 	if (!xdev)
 		return -ENOMEM;
 
+	match = of_match_node(xilinx_vdma_of_ids, pdev->dev.of_node);
+	if (match && match->data) {
+		const struct xdma_platform_data *data = match->data;
+
+		xdev->quirks = data->quirks;
+	}
+
 	xdev->dev = &pdev->dev;
 
 	/* Request and map I/O memory */
@@ -1356,12 +1386,6 @@ static int xilinx_vdma_remove(struct platform_device *pdev)
 	return 0;
 }
 
-static const struct of_device_id xilinx_vdma_of_ids[] = {
-	{ .compatible = "xlnx,axi-vdma-1.00.a",},
-	{}
-};
-MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
-
 static struct platform_driver xilinx_vdma_driver = {
 	.driver = {
 		.name = "xilinx-vdma",
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
  2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
  2016-03-15 17:23 ` [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-17  8:05   ` Laurent Pinchart
  2016-03-15 17:23 ` [PATCH 4/7] " Kedareswara rao Appana
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch adds support for the AXI Direct Memory Access (AXI DMA)
core, which is a soft Xilinx IP core that provides high-
bandwidth direct memory access between memory and AXI4-Stream
type target peripherals.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_vdma.c | 385 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 341 insertions(+), 44 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index f682bef..87525a9 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -16,6 +16,12 @@
  * video device (S2MM). Initialization, status, interrupt and management
  * registers are accessed through an AXI4-Lite slave interface.
  *
+ *  The AXI DMA, is a soft IP, which provides high-bandwidth Direct Memory
+ *  Access between memory and AXI4-Stream-type target peripherals. It can be
+ *  configured to have one channel or two channels and if configured as two
+ *  channels, one is to transmit data from memory to a device and another is
+ *  to receive from a device.
+ *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
  * the Free Software Foundation, either version 2 of the License, or
@@ -140,6 +146,20 @@
 #define XILINX_VDMA_LOOP_COUNT		1000000
 
 #define AXIVDMA_SUPPORT		BIT(0)
+#define AXIDMA_SUPPORT		BIT(1)
+
+/* AXI DMA Specific Registers/Offsets */
+#define XILINX_DMA_REG_SRCDSTADDR	0x18
+#define XILINX_DMA_REG_DSTADDR		0x20
+#define XILINX_DMA_REG_BTT		0x28
+
+#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
+#define XILINX_DMA_CR_COALESCE_MAX	GENMASK(23, 16)
+#define XILINX_DMA_CR_COALESCE_SHIFT	16
+#define XILINX_DMA_BD_SOP		BIT(27)
+#define XILINX_DMA_BD_EOP		BIT(26)
+#define XILINX_DMA_COALESCE_MAX		255
+#define XILINX_DMA_NUM_APP_WORDS	5
 
 /**
  * struct xilinx_vdma_desc_hw - Hardware Descriptor
@@ -147,19 +167,23 @@
  * @pad1: Reserved @0x04
  * @buf_addr: Buffer address @0x08
  * @pad2: Reserved @0x0C
- * @vsize: Vertical Size @0x10
+ * @dstaddr_vsize: Vertical Size @0x10
  * @hsize: Horizontal Size @0x14
- * @stride: Number of bytes between the first
+ * @control_stride: Number of bytes between the first
  *	    pixels of each horizontal line @0x18
+ * @status: Status field @0x1C
+ * @app: APP Fields @0x20 - 0x30
  */
 struct xilinx_vdma_desc_hw {
 	u32 next_desc;
 	u32 pad1;
 	u32 buf_addr;
 	u32 pad2;
-	u32 vsize;
+	u32 dstaddr_vsize;
 	u32 hsize;
-	u32 stride;
+	u32 control_stride;
+	u32 status;
+	u32 app[XILINX_DMA_NUM_APP_WORDS];
 } __aligned(64);
 
 /**
@@ -209,6 +233,9 @@ struct xilinx_vdma_tx_descriptor {
  * @config: Device configuration info
  * @flush_on_fsync: Flush on Frame sync
  * @desc_pendingcount: Descriptor pending count
+ * @residue: Residue for AXI DMA
+ * @seg_v: Statically allocated segments base
+ * @start_transfer: Differentiate b/w DMA IP's transfer
  */
 struct xilinx_vdma_chan {
 	struct xilinx_vdma_device *xdev;
@@ -232,6 +259,9 @@ struct xilinx_vdma_chan {
 	struct xilinx_vdma_config config;
 	bool flush_on_fsync;
 	u32 desc_pendingcount;
+	u32 residue;
+	struct xilinx_vdma_tx_segment *seg_v;
+	void   (*start_transfer)(struct xilinx_vdma_chan *chan);
 };
 
 /**
@@ -436,6 +466,7 @@ static void xilinx_vdma_free_chan_resources(struct dma_chan *dchan)
 	dev_dbg(chan->dev, "Free all channel resources.\n");
 
 	xilinx_vdma_free_descriptors(chan);
+	xilinx_vdma_free_tx_segment(chan, chan->seg_v);
 	dma_pool_destroy(chan->desc_pool);
 	chan->desc_pool = NULL;
 }
@@ -515,7 +546,12 @@ static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
 		return -ENOMEM;
 	}
 
+	chan->seg_v = xilinx_vdma_alloc_tx_segment(chan);
 	dma_cookie_init(dchan);
+
+	/* Enable interrupts */
+	vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
+		      XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
 	return 0;
 }
 
@@ -531,7 +567,37 @@ static enum dma_status xilinx_vdma_tx_status(struct dma_chan *dchan,
 					dma_cookie_t cookie,
 					struct dma_tx_state *txstate)
 {
-	return dma_cookie_status(dchan, cookie, txstate);
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_vdma_tx_descriptor *desc;
+	struct xilinx_vdma_tx_segment *segment;
+	struct xilinx_vdma_desc_hw *hw;
+	enum dma_status ret;
+	unsigned long flags;
+	u32 residue = 0;
+
+	ret = dma_cookie_status(dchan, cookie, txstate);
+	if (ret == DMA_COMPLETE || !txstate)
+		return ret;
+
+	if (chan->xdev->quirks & AXIDMA_SUPPORT) {
+		desc = list_last_entry(&chan->active_list,
+				       struct xilinx_vdma_tx_descriptor, node);
+
+		spin_lock_irqsave(&chan->lock, flags);
+		if (chan->has_sg) {
+			list_for_each_entry(segment, &desc->segments, node) {
+				hw = &segment->hw;
+				residue += (hw->control_stride - hw->status) &
+					   XILINX_DMA_MAX_TRANS_LEN;
+			}
+		}
+		spin_unlock_irqrestore(&chan->lock, flags);
+
+		chan->residue = residue;
+		dma_set_residue(txstate, chan->residue);
+	}
+
+	return ret;
 }
 
 /**
@@ -713,8 +779,9 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
 		/* HW expects these parameters to be same for one transaction */
 		vdma_desc_write(chan, XILINX_VDMA_REG_HSIZE, last->hw.hsize);
 		vdma_desc_write(chan, XILINX_VDMA_REG_FRMDLY_STRIDE,
-				last->hw.stride);
-		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE, last->hw.vsize);
+				last->hw.control_stride);
+		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE,
+				last->hw.dstaddr_vsize);
 	}
 
 	list_splice_tail_init(&chan->pending_list, &chan->active_list);
@@ -736,6 +803,105 @@ static void xilinx_vdma_issue_pending(struct dma_chan *dchan)
 }
 
 /**
+ * xilinx_dma_start_transfer - Starts DMA transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_dma_start_transfer(struct xilinx_vdma_chan *chan)
+{
+	struct xilinx_vdma_tx_descriptor *head_desc, *tail_desc;
+	struct xilinx_vdma_tx_segment *tail_segment, *old_head, *new_head;
+	u32 reg;
+
+	/* This function was invoked with lock held */
+	if (chan->err)
+		return;
+
+	if (list_empty(&chan->pending_list))
+		return;
+
+	head_desc = list_first_entry(&chan->pending_list,
+				     struct xilinx_vdma_tx_descriptor, node);
+	tail_desc = list_last_entry(&chan->pending_list,
+				    struct xilinx_vdma_tx_descriptor, node);
+	tail_segment = list_last_entry(&tail_desc->segments,
+				       struct xilinx_vdma_tx_segment, node);
+
+	old_head = list_first_entry(&head_desc->segments,
+				struct xilinx_vdma_tx_segment, node);
+	new_head = chan->seg_v;
+	/* Copy Buffer Descriptor fields. */
+	new_head->hw = old_head->hw;
+
+	/* Swap and save new reserve */
+	list_replace_init(&old_head->node, &new_head->node);
+	chan->seg_v = old_head;
+
+	tail_segment->hw.next_desc = chan->seg_v->phys;
+	head_desc->async_tx.phys = new_head->phys;
+
+	/* If it is SG mode and hardware is busy, cannot submit */
+	if (chan->has_sg && xilinx_vdma_is_running(chan) &&
+	    !xilinx_vdma_is_idle(chan)) {
+		dev_dbg(chan->dev, "DMA controller still busy\n");
+		return;
+	}
+
+	reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
+
+	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
+		reg &= ~XILINX_DMA_CR_COALESCE_MAX;
+		reg |= chan->desc_pendingcount <<
+				  XILINX_DMA_CR_COALESCE_SHIFT;
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, reg);
+	}
+
+	if (chan->has_sg)
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
+				head_desc->async_tx.phys);
+
+	xilinx_vdma_start(chan);
+
+	if (chan->err)
+		return;
+
+	/* Start the transfer */
+	if (chan->has_sg) {
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
+			       tail_segment->phys);
+	} else {
+		struct xilinx_vdma_tx_segment *segment;
+		struct xilinx_vdma_desc_hw *hw;
+
+		segment = list_first_entry(&head_desc->segments,
+					   struct xilinx_vdma_tx_segment, node);
+		hw = &segment->hw;
+
+		vdma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
+
+		/* Start the transfer */
+		vdma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+			       hw->control_stride & XILINX_DMA_MAX_TRANS_LEN);
+	}
+
+	list_splice_tail_init(&chan->pending_list, &chan->active_list);
+	chan->desc_pendingcount = 0;
+}
+
+/**
+ * xilinx_dma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel
+ */
+static void xilinx_dma_issue_pending(struct dma_chan *dchan)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	xilinx_dma_start_transfer(chan);
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
  * xilinx_vdma_complete_descriptor - Mark the active descriptor as complete
  * @chan : xilinx DMA channel
  *
@@ -841,15 +1007,12 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
 		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
 				errors & XILINX_VDMA_DMASR_ERR_RECOVER_MASK);
 
-		if (!chan->flush_on_fsync ||
-		    (errors & ~XILINX_VDMA_DMASR_ERR_RECOVER_MASK)) {
-			dev_err(chan->dev,
-				"Channel %p has errors %x, cdr %x tdr %x\n",
-				chan, errors,
-				vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
-				vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
-			chan->err = true;
-		}
+		dev_err(chan->dev,
+			"Channel %p has errors %x, cdr %x tdr %x\n",
+			chan, errors,
+			vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
+			vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
+		chan->err = true;
 	}
 
 	if (status & XILINX_VDMA_DMASR_DLY_CNT_IRQ) {
@@ -863,7 +1026,7 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
 	if (status & XILINX_VDMA_DMASR_FRM_CNT_IRQ) {
 		spin_lock(&chan->lock);
 		xilinx_vdma_complete_descriptor(chan);
-		xilinx_vdma_start_transfer(chan);
+		chan->start_transfer(chan);
 		spin_unlock(&chan->lock);
 	}
 
@@ -903,7 +1066,8 @@ append:
 	list_add_tail(&desc->node, &chan->pending_list);
 	chan->desc_pendingcount++;
 
-	if (unlikely(chan->desc_pendingcount > chan->num_frms)) {
+	if ((unlikely(chan->desc_pendingcount > chan->num_frms)) &&
+		(chan->xdev->quirks & AXIVDMA_SUPPORT)) {
 		dev_dbg(chan->dev, "desc pendingcount is too high\n");
 		chan->desc_pendingcount = chan->num_frms;
 	}
@@ -989,11 +1153,11 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
 
 	/* Fill in the hardware descriptor */
 	hw = &segment->hw;
-	hw->vsize = xt->numf;
+	hw->dstaddr_vsize = xt->numf;
 	hw->hsize = xt->sgl[0].size;
-	hw->stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
+	hw->control_stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
 			XILINX_VDMA_FRMDLY_STRIDE_STRIDE_SHIFT;
-	hw->stride |= chan->config.frm_dly <<
+	hw->control_stride |= chan->config.frm_dly <<
 			XILINX_VDMA_FRMDLY_STRIDE_FRMDLY_SHIFT;
 
 	if (xt->dir != DMA_MEM_TO_DEV)
@@ -1019,6 +1183,108 @@ error:
 }
 
 /**
+ * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
+ * @dchan: DMA channel
+ * @sgl: scatterlist to transfer to/from
+ * @sg_len: number of entries in @scatterlist
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ * @context: APP words of the descriptor
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
+	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
+	enum dma_transfer_direction direction, unsigned long flags,
+	void *context)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_vdma_tx_descriptor *desc;
+	struct xilinx_vdma_tx_segment *segment = NULL, *prev = NULL;
+	u32 *app_w = (u32 *)context;
+	struct scatterlist *sg;
+	size_t copy, sg_used;
+	int i;
+
+	if (!is_slave_direction(direction))
+		return NULL;
+
+	/* Allocate a transaction descriptor. */
+	desc = xilinx_vdma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
+
+	/* Build transactions using information in the scatter gather list */
+	for_each_sg(sgl, sg, sg_len, i) {
+		sg_used = 0;
+
+		/* Loop until the entire scatterlist entry is used */
+		while (sg_used < sg_dma_len(sg)) {
+			struct xilinx_vdma_desc_hw *hw;
+
+			/* Get a free segment */
+			segment = xilinx_vdma_alloc_tx_segment(chan);
+			if (!segment)
+				goto error;
+
+			/*
+			 * Calculate the maximum number of bytes to transfer,
+			 * making sure it is less than the hw limit
+			 */
+			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
+				     XILINX_DMA_MAX_TRANS_LEN);
+			hw = &segment->hw;
+
+			/* Fill in the descriptor */
+			hw->buf_addr = sg_dma_address(sg) + sg_used;
+
+			hw->control_stride = copy;
+
+			if (chan->direction == DMA_MEM_TO_DEV) {
+				if (app_w)
+					memcpy(hw->app, app_w, sizeof(u32) *
+					       XILINX_DMA_NUM_APP_WORDS);
+			}
+
+			if (prev)
+				prev->hw.next_desc = segment->phys;
+
+			prev = segment;
+			sg_used += copy;
+
+			/*
+			 * Insert the segment into the descriptor segments
+			 * list.
+			 */
+			list_add_tail(&segment->node, &desc->segments);
+		}
+	}
+
+	segment = list_first_entry(&desc->segments,
+				   struct xilinx_vdma_tx_segment, node);
+	desc->async_tx.phys = segment->phys;
+	prev->hw.next_desc = segment->phys;
+
+	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
+	if (chan->direction == DMA_MEM_TO_DEV) {
+		segment->hw.control_stride |= XILINX_DMA_BD_SOP;
+		segment = list_last_entry(&desc->segments,
+					  struct xilinx_vdma_tx_segment,
+					  node);
+		segment->hw.control_stride |= XILINX_DMA_BD_EOP;
+	}
+
+	return &desc->async_tx;
+
+error:
+	xilinx_vdma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
+/**
  * xilinx_vdma_terminate_all - Halt the channel and free descriptors
  * @chan: Driver specific VDMA Channel pointer
  */
@@ -1179,27 +1445,36 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
 		chan->id = 0;
 
 		chan->ctrl_offset = XILINX_VDMA_MM2S_CTRL_OFFSET;
-		chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
+		if (xdev->quirks & AXIVDMA_SUPPORT) {
+			chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
 
-		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
-		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
-			chan->flush_on_fsync = true;
+			if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
+			    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
+				chan->flush_on_fsync = true;
+		}
 	} else if (of_device_is_compatible(node,
 					    "xlnx,axi-vdma-s2mm-channel")) {
 		chan->direction = DMA_DEV_TO_MEM;
 		chan->id = 1;
 
 		chan->ctrl_offset = XILINX_VDMA_S2MM_CTRL_OFFSET;
-		chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
+		if (xdev->quirks & AXIVDMA_SUPPORT) {
+			chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
 
-		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
-		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
-			chan->flush_on_fsync = true;
+			if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
+			    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
+				chan->flush_on_fsync = true;
+		}
 	} else {
 		dev_err(xdev->dev, "Invalid channel compatible node\n");
 		return -EINVAL;
 	}
 
+	if (xdev->quirks & AXIVDMA_SUPPORT)
+		chan->start_transfer = xilinx_vdma_start_transfer;
+	else
+		chan->start_transfer = xilinx_dma_start_transfer;
+
 	/* Request the interrupt */
 	chan->irq = irq_of_parse_and_map(node, 0);
 	err = request_irq(chan->irq, xilinx_vdma_irq_handler, IRQF_SHARED,
@@ -1255,8 +1530,13 @@ static const struct xdma_platform_data xvdma_def = {
 	.quirks = AXIVDMA_SUPPORT,
 };
 
+static const struct xdma_platform_data xdma_def = {
+	.quirks = AXIDMA_SUPPORT,
+};
+
 static const struct of_device_id xilinx_vdma_of_ids[] = {
 	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
+	{ .compatible = "xlnx,axi-dma-1.00.a", .data = &xdma_def},
 	{}
 };
 MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
@@ -1300,16 +1580,22 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 	/* Retrieve the DMA engine properties from the device tree */
 	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
 
-	err = of_property_read_u32(node, "xlnx,num-fstores", &num_frames);
-	if (err < 0) {
-		dev_err(xdev->dev, "missing xlnx,num-fstores property\n");
-		return err;
-	}
+	if ((xdev->quirks & AXIVDMA_SUPPORT)) {
 
-	err = of_property_read_u32(node, "xlnx,flush-fsync",
-					&xdev->flush_on_fsync);
-	if (err < 0)
-		dev_warn(xdev->dev, "missing xlnx,flush-fsync property\n");
+		err = of_property_read_u32(node, "xlnx,num-fstores",
+					   &num_frames);
+		if (err < 0) {
+			dev_err(xdev->dev,
+				"missing xlnx,num-fstores property\n");
+			return err;
+		}
+
+		err = of_property_read_u32(node, "xlnx,flush-fsync",
+						&xdev->flush_on_fsync);
+		if (err < 0)
+			dev_warn(xdev->dev,
+				 "missing xlnx,flush-fsync property\n");
+	}
 
 	/* Initialize the DMA engine */
 	xdev->common.dev = &pdev->dev;
@@ -1322,11 +1608,20 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 				xilinx_vdma_alloc_chan_resources;
 	xdev->common.device_free_chan_resources =
 				xilinx_vdma_free_chan_resources;
-	xdev->common.device_prep_interleaved_dma =
-				xilinx_vdma_dma_prep_interleaved;
 	xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
 	xdev->common.device_tx_status = xilinx_vdma_tx_status;
-	xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
+	if (xdev->quirks & AXIVDMA_SUPPORT) {
+		xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
+		xdev->common.device_prep_interleaved_dma =
+				xilinx_vdma_dma_prep_interleaved;
+	} else {
+		xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
+		xdev->common.device_issue_pending = xilinx_dma_issue_pending;
+		xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
+					  BIT(DMA_MEM_TO_DEV);
+		xdev->common.residue_granularity =
+					  DMA_RESIDUE_GRANULARITY_SEGMENT;
+	}
 
 	platform_set_drvdata(pdev, xdev);
 
@@ -1337,9 +1632,11 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 			goto error;
 	}
 
-	for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
-		if (xdev->chan[i])
-			xdev->chan[i]->num_frms = num_frames;
+	if (xdev->quirks & AXIVDMA_SUPPORT) {
+		for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
+			if (xdev->chan[i])
+				xdev->chan[i]->num_frms = num_frames;
+	}
 
 	/* Register the DMA engine with the core */
 	dma_async_device_register(&xdev->common);
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 4/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
                   ` (2 preceding siblings ...)
  2016-03-15 17:23 ` [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-16  3:17   ` Vinod Koul
  2016-03-15 17:23 ` [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc Kedareswara rao Appana
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch updates the device-tree binding doc for
adding support for AXI DMA.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 .../devicetree/bindings/dma/xilinx/xilinx_vdma.txt | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
index e4c4d47..3d134a5 100644
--- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
+++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
@@ -3,8 +3,13 @@ It can be configured to have one channel or two channels. If configured
 as two channels, one is to transmit to the video device and another is
 to receive from the video device.
 
+Xilinx AXI DMA engine, it does transfers between memory and AXI4 stream
+target devices. It can be configured to have one channel or two channels.
+If configured as two channels, one is to transmit to the device and another
+is to receive from the device.
+
 Required properties:
-- compatible: Should be "xlnx,axi-vdma-1.00.a"
+- compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a"
 - #dma-cells: Should be <1>, see "dmas" property below
 - reg: Should contain VDMA registers location and length.
 - xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
@@ -55,6 +60,21 @@ axi_vdma_0: axivdma@40030000 {
 	} ;
 } ;
 
+axi_dma_0: axidma@40400000 {
+	compatible = "xlnx,axi-dma-1.00.a";
+	#dma_cells = <1>;
+	reg = < 0x40400000 0x10000 >;
+	dma-channel@40400000 {
+		compatible = "xlnx,axi-vdma-mm2s-channel";
+		interrupts = < 0 59 4 >;
+		xlnx,datawidth = <0x40>;
+	} ;
+	dma-channel@40400030 {
+		compatible = "xlnx,axi-vdma-s2mm-channel";
+		interrupts = < 0 58 4 >;
+		xlnx,datawidth = <0x40>;
+	} ;
+} ;
 
 * DMA client
 
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
                   ` (3 preceding siblings ...)
  2016-03-15 17:23 ` [PATCH 4/7] " Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-16  1:34   ` Moritz Fischer
  2016-03-15 17:23 ` [PATCH 6/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine Kedareswara rao Appana
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

AXI DMA support is added to the existing AXI VDMA driver.
The binding doc for AXI DMA should also be updated in the
VDMA device-tree binding doc.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 .../devicetree/bindings/dma/xilinx/xilinx_dma.txt  | 65 ----------------------
 1 file changed, 65 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt

diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
deleted file mode 100644
index 2291c40..0000000
--- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-Xilinx AXI DMA engine, it does transfers between memory and AXI4 stream
-target devices. It can be configured to have one channel or two channels.
-If configured as two channels, one is to transmit to the device and another
-is to receive from the device.
-
-Required properties:
-- compatible: Should be "xlnx,axi-dma-1.00.a"
-- #dma-cells: Should be <1>, see "dmas" property below
-- reg: Should contain DMA registers location and length.
-- dma-channel child node: Should have atleast one channel and can have upto
-	two channels per device. This node specifies the properties of each
-	DMA channel (see child node properties below).
-
-Optional properties:
-- xlnx,include-sg: Tells whether configured for Scatter-mode in
-	the hardware.
-
-Required child node properties:
-- compatible: It should be either "xlnx,axi-dma-mm2s-channel" or
-	"xlnx,axi-dma-s2mm-channel".
-- interrupts: Should contain per channel DMA interrupts.
-- xlnx,datawidth: Should contain the stream data width, take values
-	{32,64...1024}.
-
-Option child node properties:
-- xlnx,include-dre: Tells whether hardware is configured for Data
-	Realignment Engine.
-
-Example:
-++++++++
-
-axi_dma_0: axidma@40400000 {
-	compatible = "xlnx,axi-dma-1.00.a";
-	#dma_cells = <1>;
-	reg = < 0x40400000 0x10000 >;
-	dma-channel@40400000 {
-		compatible = "xlnx,axi-dma-mm2s-channel";
-		interrupts = < 0 59 4 >;
-		xlnx,datawidth = <0x40>;
-	} ;
-	dma-channel@40400030 {
-		compatible = "xlnx,axi-dma-s2mm-channel";
-		interrupts = < 0 58 4 >;
-		xlnx,datawidth = <0x40>;
-	} ;
-} ;
-
-
-* DMA client
-
-Required properties:
-- dmas: a list of <[DMA device phandle] [Channel ID]> pairs,
-	where Channel ID is '0' for write/tx and '1' for read/rx
-	channel.
-- dma-names: a list of DMA channel names, one per "dmas" entry
-
-Example:
-++++++++
-
-dmatest_0: dmatest@0 {
-	compatible ="xlnx,axi-dma-test-1.00.a";
-	dmas = <&axi_dma_0 0
-		&axi_dma_0 1>;
-	dma-names = "dma0", "dma1";
-} ;
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 6/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
                   ` (4 preceding siblings ...)
  2016-03-15 17:23 ` [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-15 17:23 ` [PATCH 7/7] " Kedareswara rao Appana
  2016-03-16  1:29 ` [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Moritz Fischer
  7 siblings, 0 replies; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch adds support for the AXI Central Direct Memory Access (AXI
CDMA) core, which is a soft Xilinx IP core that provides high-bandwidth
Direct Memory Access (DMA) between a memory-mapped source address and a
memory-mapped destination address.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_vdma.c | 173 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 169 insertions(+), 4 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index 87525a9..e6caf79 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -22,6 +22,10 @@
  *  channels, one is to transmit data from memory to a device and another is
  *  to receive from a device.
  *
+ *  The AXI CDMA, is a soft IP, which provides high-bandwidth Direct Memory
+ *  Access (DMA) between a memory-mapped source address and a memory-mapped
+ *  destination address.
+ *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
  * the Free Software Foundation, either version 2 of the License, or
@@ -147,6 +151,7 @@
 
 #define AXIVDMA_SUPPORT		BIT(0)
 #define AXIDMA_SUPPORT		BIT(1)
+#define AXICDMA_SUPPORT		BIT(2)
 
 /* AXI DMA Specific Registers/Offsets */
 #define XILINX_DMA_REG_SRCDSTADDR	0x18
@@ -161,6 +166,9 @@
 #define XILINX_DMA_COALESCE_MAX		255
 #define XILINX_DMA_NUM_APP_WORDS	5
 
+/* AXI CDMA Specific Masks */
+#define XILINX_CDMA_CR_SGMODE          BIT(3)
+
 /**
  * struct xilinx_vdma_desc_hw - Hardware Descriptor
  * @next_desc: Next Descriptor Pointer @0x00
@@ -552,6 +560,12 @@ static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
 	/* Enable interrupts */
 	vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
 		      XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
+
+	if ((chan->xdev->quirks & AXICDMA_SUPPORT) && chan->has_sg) {
+		vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
+			      XILINX_CDMA_CR_SGMODE);
+	}
+
 	return 0;
 }
 
@@ -674,6 +688,81 @@ static void xilinx_vdma_start(struct xilinx_vdma_chan *chan)
 }
 
 /**
+ * xilinx_cdma_start_transfer - Starts cdma transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_cdma_start_transfer(struct xilinx_vdma_chan *chan)
+{
+	struct xilinx_vdma_tx_descriptor *head_desc, *tail_desc;
+	struct xilinx_vdma_tx_segment *tail_segment;
+	u32 ctrl_reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
+
+	if (chan->err)
+		return;
+
+	if (list_empty(&chan->pending_list))
+		return;
+
+	head_desc = list_first_entry(&chan->pending_list,
+				     struct xilinx_vdma_tx_descriptor, node);
+	tail_desc = list_last_entry(&chan->pending_list,
+				    struct xilinx_vdma_tx_descriptor, node);
+	tail_segment = list_last_entry(&tail_desc->segments,
+				       struct xilinx_vdma_tx_segment, node);
+
+	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
+		ctrl_reg &= ~XILINX_DMA_CR_COALESCE_MAX;
+		ctrl_reg |= chan->desc_pendingcount <<
+				XILINX_DMA_CR_COALESCE_SHIFT;
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, ctrl_reg);
+	}
+
+	if (chan->has_sg) {
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
+			   head_desc->async_tx.phys);
+
+		/* Update tail ptr register which will start the transfer */
+		vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
+				tail_segment->phys);
+	} else {
+		/* In simple mode */
+		struct xilinx_vdma_tx_segment *segment;
+		struct xilinx_vdma_desc_hw *hw;
+
+		segment = list_first_entry(&head_desc->segments,
+					   struct xilinx_vdma_tx_segment,
+					   node);
+
+		hw = &segment->hw;
+
+		vdma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
+		vdma_ctrl_write(chan, XILINX_DMA_REG_DSTADDR,
+				hw->dstaddr_vsize);
+
+		/* Start the transfer */
+		vdma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+				hw->control_stride & XILINX_DMA_MAX_TRANS_LEN);
+	}
+
+	list_splice_tail_init(&chan->pending_list, &chan->active_list);
+	chan->desc_pendingcount = 0;
+}
+
+/**
+ * xilinx_cdma_issue_pending - Issue pending transactions
+ * @dchan: DMA channel
+ */
+static void xilinx_cdma_issue_pending(struct dma_chan *dchan)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->lock, flags);
+	xilinx_cdma_start_transfer(chan);
+	spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+/**
  * xilinx_vdma_start_transfer - Starts VDMA transfer
  * @chan: Driver specific channel struct pointer
  */
@@ -1285,6 +1374,69 @@ error:
 }
 
 /**
+ * xilinx_cdma_prep_memcpy - prepare descriptors for a memcpy transaction
+ * @dchan: DMA channel
+ * @dma_dst: destination address
+ * @dma_src: source address
+ * @len: transfer length
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *
+xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
+			dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+	struct xilinx_vdma_desc_hw *hw;
+	struct xilinx_vdma_tx_descriptor *desc;
+	struct xilinx_vdma_tx_segment *segment, *prev;
+
+	if (!len || len > XILINX_DMA_MAX_TRANS_LEN)
+		return NULL;
+
+	desc = xilinx_vdma_alloc_tx_descriptor(chan);
+	if (!desc)
+		return NULL;
+
+	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+	desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
+	async_tx_ack(&desc->async_tx);
+
+	/* Allocate the link descriptor from DMA pool */
+	segment = xilinx_vdma_alloc_tx_segment(chan);
+	if (!segment)
+		goto error;
+
+	hw = &segment->hw;
+	hw->control_stride = len;
+	hw->buf_addr = dma_src;
+	hw->dstaddr_vsize = dma_dst;
+
+	/* Fill the previous next descriptor with current */
+	prev = list_last_entry(&desc->segments,
+			       struct xilinx_vdma_tx_segment, node);
+	prev->hw.next_desc = segment->phys;
+
+	/* Insert the segment into the descriptor segments list. */
+	list_add_tail(&segment->node, &desc->segments);
+
+	prev = segment;
+
+	/* Link the last hardware descriptor with the first. */
+	segment = list_first_entry(&desc->segments,
+				struct xilinx_vdma_tx_segment, node);
+	desc->async_tx.phys = segment->phys;
+	prev->hw.next_desc = segment->phys;
+
+	return &desc->async_tx;
+
+error:
+	xilinx_vdma_free_tx_descriptor(chan, desc);
+	return NULL;
+}
+
+/**
  * xilinx_vdma_terminate_all - Halt the channel and free descriptors
  * @chan: Driver specific VDMA Channel pointer
  */
@@ -1472,8 +1624,10 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
 
 	if (xdev->quirks & AXIVDMA_SUPPORT)
 		chan->start_transfer = xilinx_vdma_start_transfer;
-	else
+	else if (xdev->quirks & AXIDMA_SUPPORT)
 		chan->start_transfer = xilinx_dma_start_transfer;
+	else
+		chan->start_transfer = xilinx_cdma_start_transfer;
 
 	/* Request the interrupt */
 	chan->irq = irq_of_parse_and_map(node, 0);
@@ -1534,9 +1688,14 @@ static const struct xdma_platform_data xdma_def = {
 	.quirks = AXIDMA_SUPPORT,
 };
 
+static const struct xdma_platform_data xcdma_def = {
+	.quirks = AXICDMA_SUPPORT,
+};
+
 static const struct of_device_id xilinx_vdma_of_ids[] = {
 	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
 	{ .compatible = "xlnx,axi-dma-1.00.a", .data = &xdma_def},
+	{ .compatible = "xlnx,axi-cdma-1.00.a", .data = &xcdma_def},
 	{}
 };
 MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
@@ -1601,8 +1760,10 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 	xdev->common.dev = &pdev->dev;
 
 	INIT_LIST_HEAD(&xdev->common.channels);
-	dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
-	dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	if (!(xdev->quirks & AXICDMA_SUPPORT)) {
+		dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
+		dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+	}
 
 	xdev->common.device_alloc_chan_resources =
 				xilinx_vdma_alloc_chan_resources;
@@ -1614,13 +1775,17 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
 		xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
 		xdev->common.device_prep_interleaved_dma =
 				xilinx_vdma_dma_prep_interleaved;
-	} else {
+	} else if (xdev->quirks & AXIDMA_SUPPORT) {
 		xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
 		xdev->common.device_issue_pending = xilinx_dma_issue_pending;
 		xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
 					  BIT(DMA_MEM_TO_DEV);
 		xdev->common.residue_granularity =
 					  DMA_RESIDUE_GRANULARITY_SEGMENT;
+	} else {
+		dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask);
+		xdev->common.device_prep_dma_memcpy = xilinx_cdma_prep_memcpy;
+		xdev->common.device_issue_pending = xilinx_cdma_issue_pending;
 	}
 
 	platform_set_drvdata(pdev, xdev);
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 7/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
                   ` (5 preceding siblings ...)
  2016-03-15 17:23 ` [PATCH 6/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine Kedareswara rao Appana
@ 2016-03-15 17:23 ` Kedareswara rao Appana
  2016-03-16  1:29 ` [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Moritz Fischer
  7 siblings, 0 replies; 24+ messages in thread
From: Kedareswara rao Appana @ 2016-03-15 17:23 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, anirudh
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch updates the device-tree binding doc for
adding support for AXI CDMA.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 .../devicetree/bindings/dma/xilinx/xilinx_vdma.txt        | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
index 3d134a5..f288175 100644
--- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
+++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
@@ -8,8 +8,12 @@ target devices. It can be configured to have one channel or two channels.
 If configured as two channels, one is to transmit to the device and another
 is to receive from the device.
 
+Xilinx AXI CDMA engine, it does transfers between memory-mapped source
+address and a memory-mapped destination address.
+
 Required properties:
 - compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a"
+	      or "xlnx,axi-cdma-1.00.a"
 - #dma-cells: Should be <1>, see "dmas" property below
 - reg: Should contain VDMA registers location and length.
 - xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
@@ -76,6 +80,17 @@ axi_dma_0: axidma@40400000 {
 	} ;
 } ;
 
+axi_cdma_0: axicdma@7e200000 {
+       compatible = "xlnx,axi-cdma-1.00.a";
+       #dma-cells = <1>;
+       reg = < 0x7e200000 0x10000 >;
+       dma-channel@7e200000 {
+               compatible = "xlnx,axi-vdma-mm2s-channel";
+               interrupts = < 0 55 4 >;
+               xlnx,datawidth = <0x40>;
+       } ;
+} ;
+
 * DMA client
 
 Required properties:
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
  2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
@ 2016-03-15 17:50   ` Moritz Fischer
  2016-03-17  7:00   ` Laurent Pinchart
  1 sibling, 0 replies; 24+ messages in thread
From: Moritz Fischer @ 2016-03-15 17:50 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: Dan Williams, Vinod Koul, Michal Simek, Sören Brinkmann,
	Appana Durga Kedareswara Rao, Laurent Pinchart,
	Luis de Bethencourt, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, Linux Kernel Mailing List

Hi,

this patch looks fine to me.

On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
<appana.durga.rao@xilinx.com> wrote:
> This patch fixes the below checkpatch.pl warnings.
>
> WARNING: void function return statements are not generally useful
> +       return;
> +}
>
> WARNING: void function return statements are not generally useful
> +       return;
> +}
>
> WARNING: Missing a blank line after declarations
> +               u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
> +               vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>

> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>

Cheers,

Moritz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
  2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
                   ` (6 preceding siblings ...)
  2016-03-15 17:23 ` [PATCH 7/7] " Kedareswara rao Appana
@ 2016-03-16  1:29 ` Moritz Fischer
  2016-03-16  3:11   ` Vinod Koul
  2016-03-16  6:06   ` Appana Durga Kedareswara Rao
  7 siblings, 2 replies; 24+ messages in thread
From: Moritz Fischer @ 2016-03-16  1:29 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: Dan Williams, Vinod Koul, Michal Simek, Sören Brinkmann,
	Appana Durga Kedareswara Rao, Laurent Pinchart,
	Luis de Bethencourt, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, Linux Kernel Mailing List

Hi,

On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
<appana.durga.rao@xilinx.com> wrote:
> This patch series does some enhancments to the VDMA driver
> which includes
> --> Adding support for AXI DMA IP.
> --> Adding support for AXI CDMA IP.
> --> Fixing checkpatch warnings.
>
> Kedareswara rao Appana (7):
>   dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
>   dmaengine: xilinx_vdma: Add quirks support to differentiate differnet
>     IP cores

This commitmsg has a typo.
>   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
>     Access Engine
>   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
>     Access Engine

These two have the same commit message which is confusing.

>   dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding
>     doc
>   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
>     Memory Access Engine
>   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
>     Memory Access Engine

These two have the same commit message which is confusing.

Cheers,

Moritz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc
  2016-03-15 17:23 ` [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc Kedareswara rao Appana
@ 2016-03-16  1:34   ` Moritz Fischer
  2016-03-16  6:08     ` Appana Durga Kedareswara Rao
  0 siblings, 1 reply; 24+ messages in thread
From: Moritz Fischer @ 2016-03-16  1:34 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: Dan Williams, Vinod Koul, Michal Simek, Sören Brinkmann,
	Appana Durga Kedareswara Rao, Laurent Pinchart,
	Luis de Bethencourt, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, Linux Kernel Mailing List

Hi there,

On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
<appana.durga.rao@xilinx.com> wrote:
> AXI DMA support is added to the existing AXI VDMA driver.
> The binding doc for AXI DMA should also be updated in the
> VDMA device-tree binding doc.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  .../devicetree/bindings/dma/xilinx/xilinx_dma.txt  | 65 ----------------------
>  1 file changed, 65 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
>
> diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> deleted file mode 100644
> index 2291c40..0000000
> --- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> +++ /dev/null
> @@ -1,65 +0,0 @@
> -Xilinx AXI DMA engine, it does transfers between memory and AXI4 stream
> -target devices. It can be configured to have one channel or two channels.
> -If configured as two channels, one is to transmit to the device and another
> -is to receive from the device.
> -
> -Required properties:
> -- compatible: Should be "xlnx,axi-dma-1.00.a"
> -- #dma-cells: Should be <1>, see "dmas" property below
> -- reg: Should contain DMA registers location and length.
> -- dma-channel child node: Should have atleast one channel and can have upto
> -       two channels per device. This node specifies the properties of each
> -       DMA channel (see child node properties below).

at least vs atleast, up to vs upto.
> -
> -Optional properties:
> -- xlnx,include-sg: Tells whether configured for Scatter-mode in
> -       the hardware.

How about: 'If present, hardware supports scatter-gather mode'
> -
> -Required child node properties:
> -- compatible: It should be either "xlnx,axi-dma-mm2s-channel" or
> -       "xlnx,axi-dma-s2mm-channel".
> -- interrupts: Should contain per channel DMA interrupts.
> -- xlnx,datawidth: Should contain the stream data width, take values
> -       {32,64...1024}.
> -
> -Option child node properties:
> -- xlnx,include-dre: Tells whether hardware is configured for Data
> -       Realignment Engine.

How about: 'If present, hardware supports Data Realignment Engine'

> -
> -Example:
> -++++++++
> -
> -axi_dma_0: axidma@40400000 {
> -       compatible = "xlnx,axi-dma-1.00.a";
> -       #dma_cells = <1>;

I think you meant #dma-cells = <1>; here. That caught me while testing ;-)

Cheers,

Moritz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
  2016-03-16  1:29 ` [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Moritz Fischer
@ 2016-03-16  3:11   ` Vinod Koul
  2016-03-16  6:13     ` Appana Durga Kedareswara Rao
  2016-03-16  6:06   ` Appana Durga Kedareswara Rao
  1 sibling, 1 reply; 24+ messages in thread
From: Vinod Koul @ 2016-03-16  3:11 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: Kedareswara rao Appana, Dan Williams, Michal Simek,
	Sören Brinkmann, Appana Durga Kedareswara Rao,
	Laurent Pinchart, Luis de Bethencourt, Anirudha Sarangi,
	dmaengine, linux-arm-kernel, Linux Kernel Mailing List

On Tue, Mar 15, 2016 at 06:29:38PM -0700, Moritz Fischer wrote:
> Hi,
> 
> On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > This patch series does some enhancments to the VDMA driver
> > which includes
> > --> Adding support for AXI DMA IP.
> > --> Adding support for AXI CDMA IP.
> > --> Fixing checkpatch warnings.
> >
> > Kedareswara rao Appana (7):
> >   dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
> >   dmaengine: xilinx_vdma: Add quirks support to differentiate differnet
> >     IP cores
> 
> This commitmsg has a typo.
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> >     Access Engine
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> >     Access Engine
> 
> These two have the same commit message which is confusing.
> 
> >   dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding
> >     doc
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> >     Memory Access Engine
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> >     Memory Access Engine
> 
> These two have the same commit message which is confusing.

Yes this has been a consistent problem with xilinx patches :(

-- 
~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores
  2016-03-15 17:23 ` [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores Kedareswara rao Appana
@ 2016-03-16  3:13   ` Vinod Koul
  2016-03-16  6:12     ` Appana Durga Kedareswara Rao
  2016-03-17  7:19   ` Laurent Pinchart
  1 sibling, 1 reply; 24+ messages in thread
From: Vinod Koul @ 2016-03-16  3:13 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, michal.simek, soren.brinkmann, appanad,
	moritz.fischer, laurent.pinchart, luis, anirudh, dmaengine,
	linux-arm-kernel, linux-kernel

On Tue, Mar 15, 2016 at 10:53:07PM +0530, Kedareswara rao Appana wrote:
> This patch adds quirks support in the driver to differentiate differnet IP cores.

Wouldn't it help to explain why quirks are needed for these cores in
changelog?

Also limit your changelogs properly. Am sure checkpatch would have warned
you!

-- 
~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 4/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-15 17:23 ` [PATCH 4/7] " Kedareswara rao Appana
@ 2016-03-16  3:17   ` Vinod Koul
  2016-03-16  6:19     ` Appana Durga Kedareswara Rao
  0 siblings, 1 reply; 24+ messages in thread
From: Vinod Koul @ 2016-03-16  3:17 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, michal.simek, soren.brinkmann, appanad,
	moritz.fischer, laurent.pinchart, luis, anirudh, dmaengine,
	linux-arm-kernel, linux-kernel

On Tue, Mar 15, 2016 at 10:53:09PM +0530, Kedareswara rao Appana wrote:
> This patch updates the device-tree binding doc for
> adding support for AXI DMA.

Binding patch should precced the driver. and the title doesn't tell me its a
binding patch and might get ignore by folks.

Pls cc device tree ML on these patches. And please read
Documentation/devicetree/bindings/submitting-patches.txt. Its there for a
purpose

> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  .../devicetree/bindings/dma/xilinx/xilinx_vdma.txt | 22 +++++++++++++++++++++-
>  1 file changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> index e4c4d47..3d134a5 100644
> --- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> +++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> @@ -3,8 +3,13 @@ It can be configured to have one channel or two channels. If configured
>  as two channels, one is to transmit to the video device and another is
>  to receive from the video device.
>  
> +Xilinx AXI DMA engine, it does transfers between memory and AXI4 stream
> +target devices. It can be configured to have one channel or two channels.
> +If configured as two channels, one is to transmit to the device and another
> +is to receive from the device.
> +
>  Required properties:
> -- compatible: Should be "xlnx,axi-vdma-1.00.a"
> +- compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a"
>  - #dma-cells: Should be <1>, see "dmas" property below
>  - reg: Should contain VDMA registers location and length.
>  - xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
> @@ -55,6 +60,21 @@ axi_vdma_0: axivdma@40030000 {
>  	} ;
>  } ;
>  
> +axi_dma_0: axidma@40400000 {
> +	compatible = "xlnx,axi-dma-1.00.a";
> +	#dma_cells = <1>;
> +	reg = < 0x40400000 0x10000 >;
> +	dma-channel@40400000 {
> +		compatible = "xlnx,axi-vdma-mm2s-channel";
> +		interrupts = < 0 59 4 >;
> +		xlnx,datawidth = <0x40>;
> +	} ;
> +	dma-channel@40400030 {
> +		compatible = "xlnx,axi-vdma-s2mm-channel";
> +		interrupts = < 0 58 4 >;
> +		xlnx,datawidth = <0x40>;
> +	} ;
> +} ;
>  
>  * DMA client
>  
> -- 
> 2.1.2
> 

-- 
~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
  2016-03-16  1:29 ` [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Moritz Fischer
  2016-03-16  3:11   ` Vinod Koul
@ 2016-03-16  6:06   ` Appana Durga Kedareswara Rao
  1 sibling, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-16  6:06 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: Dan Williams, Vinod Koul, Michal Simek, Soren Brinkmann,
	Laurent Pinchart, Luis de Bethencourt, Anirudha Sarangi,
	dmaengine, linux-arm-kernel, Linux Kernel Mailing List

Hi Moritz, 

> -----Original Message-----
> From: Moritz Fischer [mailto:moritz.fischer@ettus.com]
> Sent: Wednesday, March 16, 2016 7:00 AM
> To: Appana Durga Kedareswara Rao
> Cc: Dan Williams; Vinod Koul; Michal Simek; Soren Brinkmann; Appana Durga
> Kedareswara Rao; Laurent Pinchart; Luis de Bethencourt; Anirudha Sarangi;
> dmaengine@vger.kernel.org; linux-arm-kernel; Linux Kernel Mailing List
> Subject: Re: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
> 
> Hi,
> 
> On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > This patch series does some enhancments to the VDMA driver which
> > includes
> > --> Adding support for AXI DMA IP.
> > --> Adding support for AXI CDMA IP.
> > --> Fixing checkpatch warnings.
> >
> > Kedareswara rao Appana (7):
> >   dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
> >   dmaengine: xilinx_vdma: Add quirks support to differentiate differnet
> >     IP cores
> 
> This commitmsg has a typo.

Ok will fix in the next version.

> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> >     Access Engine
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> >     Access Engine
> 
> These two have the same commit message which is confusing.

Ok will fix in the next version.

> 
> >   dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding
> >     doc
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> >     Memory Access Engine
> >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> >     Memory Access Engine
> 
> These two have the same commit message which is confusing.

Ok will fix in the next version.

Thanks,
Kedar.

> 
> Cheers,
> 
> Moritz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc
  2016-03-16  1:34   ` Moritz Fischer
@ 2016-03-16  6:08     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-16  6:08 UTC (permalink / raw)
  To: Moritz Fischer
  Cc: Dan Williams, Vinod Koul, Michal Simek, Soren Brinkmann,
	Laurent Pinchart, Luis de Bethencourt, Anirudha Sarangi,
	dmaengine, linux-arm-kernel, Linux Kernel Mailing List

Hi Moritz,


> -----Original Message-----
> From: Moritz Fischer [mailto:moritz.fischer@ettus.com]
> Sent: Wednesday, March 16, 2016 7:04 AM
> To: Appana Durga Kedareswara Rao
> Cc: Dan Williams; Vinod Koul; Michal Simek; Soren Brinkmann; Appana Durga
> Kedareswara Rao; Laurent Pinchart; Luis de Bethencourt; Anirudha Sarangi;
> dmaengine@vger.kernel.org; linux-arm-kernel; Linux Kernel Mailing List
> Subject: Re: [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma
> device-tree binding doc
> 
> Hi there,
> 
> On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
> <appana.durga.rao@xilinx.com> wrote:
> > AXI DMA support is added to the existing AXI VDMA driver.
> > The binding doc for AXI DMA should also be updated in the VDMA
> > device-tree binding doc.
> >
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> >  .../devicetree/bindings/dma/xilinx/xilinx_dma.txt  | 65
> > ----------------------
> >  1 file changed, 65 deletions(-)
> >  delete mode 100644
> > Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> >
> > diff --git
> > a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> > b/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> > deleted file mode 100644
> > index 2291c40..0000000
> > --- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
> > +++ /dev/null
> > @@ -1,65 +0,0 @@
> > -Xilinx AXI DMA engine, it does transfers between memory and AXI4
> > stream -target devices. It can be configured to have one channel or two
> channels.
> > -If configured as two channels, one is to transmit to the device and
> > another -is to receive from the device.
> > -
> > -Required properties:
> > -- compatible: Should be "xlnx,axi-dma-1.00.a"
> > -- #dma-cells: Should be <1>, see "dmas" property below
> > -- reg: Should contain DMA registers location and length.
> > -- dma-channel child node: Should have atleast one channel and can have upto
> > -       two channels per device. This node specifies the properties of each
> > -       DMA channel (see child node properties below).
> 
> at least vs atleast, up to vs upto.
> > -
> > -Optional properties:
> > -- xlnx,include-sg: Tells whether configured for Scatter-mode in
> > -       the hardware.
> 
> How about: 'If present, hardware supports scatter-gather mode'

I am deleting this binding doc as AXI DMA IP support is being added to the 
Existing VDMA driver.

Will fix your comments in the vdma device-tree binding doc.

Regards,
Kedar.

> > -
> > -Required child node properties:
> > -- compatible: It should be either "xlnx,axi-dma-mm2s-channel" or
> > -       "xlnx,axi-dma-s2mm-channel".
> > -- interrupts: Should contain per channel DMA interrupts.
> > -- xlnx,datawidth: Should contain the stream data width, take values
> > -       {32,64...1024}.
> > -
> > -Option child node properties:
> > -- xlnx,include-dre: Tells whether hardware is configured for Data
> > -       Realignment Engine.
> 
> How about: 'If present, hardware supports Data Realignment Engine'
> 
> > -
> > -Example:
> > -++++++++
> > -
> > -axi_dma_0: axidma@40400000 {
> > -       compatible = "xlnx,axi-dma-1.00.a";
> > -       #dma_cells = <1>;
> 
> I think you meant #dma-cells = <1>; here. That caught me while testing ;-)
> 
> Cheers,
> 
> Moritz

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores
  2016-03-16  3:13   ` Vinod Koul
@ 2016-03-16  6:12     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-16  6:12 UTC (permalink / raw)
  To: Vinod Koul
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, moritz.fischer,
	laurent.pinchart, luis, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, linux-kernel, Appana Durga Kedareswara Rao

Hi Vinod,

> -----Original Message-----
> From: dmaengine-owner@vger.kernel.org [mailto:dmaengine-
> owner@vger.kernel.org] On Behalf Of Vinod Koul
> Sent: Wednesday, March 16, 2016 8:44 AM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; Michal Simek; Soren Brinkmann; Appana Durga
> Kedareswara Rao; moritz.fischer@ettus.com;
> laurent.pinchart@ideasonboard.com; luis@debethencourt.com; Anirudha
> Sarangi; dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to
> differentiate differnet IP cores
> 
> On Tue, Mar 15, 2016 at 10:53:07PM +0530, Kedareswara rao Appana wrote:
> > This patch adds quirks support in the driver to differentiate differnet IP cores.
> 
> Wouldn't it help to explain why quirks are needed for these cores in changelog?

Will fix in the next version.

> 
> Also limit your changelogs properly. Am sure checkpatch would have warned
> you!

Sure will fix in the next version sorry for the noise.

Thanks,
Kedar.

> 
> --
> ~Vinod
> --
> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body
> of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
  2016-03-16  3:11   ` Vinod Koul
@ 2016-03-16  6:13     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-16  6:13 UTC (permalink / raw)
  To: Vinod Koul, Moritz Fischer
  Cc: Dan Williams, Michal Simek, Soren Brinkmann, Laurent Pinchart,
	Luis de Bethencourt, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, Linux Kernel Mailing List

Hi Vinod,

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@intel.com]
> Sent: Wednesday, March 16, 2016 8:41 AM
> To: Moritz Fischer
> Cc: Appana Durga Kedareswara Rao; Dan Williams; Michal Simek; Soren
> Brinkmann; Appana Durga Kedareswara Rao; Laurent Pinchart; Luis de
> Bethencourt; Anirudha Sarangi; dmaengine@vger.kernel.org; linux-arm-kernel;
> Linux Kernel Mailing List
> Subject: Re: [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments
> 
> On Tue, Mar 15, 2016 at 06:29:38PM -0700, Moritz Fischer wrote:
> > Hi,
> >
> > On Tue, Mar 15, 2016 at 10:23 AM, Kedareswara rao Appana
> > <appana.durga.rao@xilinx.com> wrote:
> > > This patch series does some enhancments to the VDMA driver which
> > > includes
> > > --> Adding support for AXI DMA IP.
> > > --> Adding support for AXI CDMA IP.
> > > --> Fixing checkpatch warnings.
> > >
> > > Kedareswara rao Appana (7):
> > >   dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
> > >   dmaengine: xilinx_vdma: Add quirks support to differentiate differnet
> > >     IP cores
> >
> > This commitmsg has a typo.
> > >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> > >     Access Engine
> > >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory
> > >     Access Engine
> >
> > These two have the same commit message which is confusing.
> >
> > >   dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding
> > >     doc
> > >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> > >     Memory Access Engine
> > >   dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct
> > >     Memory Access Engine
> >
> > These two have the same commit message which is confusing.
> 
> Yes this has been a consistent problem with xilinx patches :(

Will fix in the next version of the patch series.
Sorry for the noise. 

Thanks,
Kedar.

> 
> --
> ~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 4/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-16  3:17   ` Vinod Koul
@ 2016-03-16  6:19     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-16  6:19 UTC (permalink / raw)
  To: Vinod Koul
  Cc: dan.j.williams, Michal Simek, Soren Brinkmann, moritz.fischer,
	laurent.pinchart, luis, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, linux-kernel

Hi Vinod,

> -----Original Message-----
> From: Vinod Koul [mailto:vinod.koul@intel.com]
> Sent: Wednesday, March 16, 2016 8:47 AM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; Michal Simek; Soren Brinkmann; Appana Durga
> Kedareswara Rao; moritz.fischer@ettus.com;
> laurent.pinchart@ideasonboard.com; luis@debethencourt.com; Anirudha
> Sarangi; dmaengine@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 4/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI
> Direct Memory Access Engine
> 
> On Tue, Mar 15, 2016 at 10:53:09PM +0530, Kedareswara rao Appana wrote:
> > This patch updates the device-tree binding doc for adding support for
> > AXI DMA.
> 
> Binding patch should precced the driver. and the title doesn't tell me its a binding
> patch and might get ignore by folks.

Sure will fix in the next version of the patch series.

> 
> Pls cc device tree ML on these patches. And please read
> Documentation/devicetree/bindings/submitting-patches.txt. Its there for a
> purpose

Sure will cc device-tree people in the next version of the patch series for the device-tree binding doc patches.

Thanks,
Kedar.

> 
> >
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> >  .../devicetree/bindings/dma/xilinx/xilinx_vdma.txt | 22
> > +++++++++++++++++++++-
> >  1 file changed, 21 insertions(+), 1 deletion(-)
> >
> > diff --git
> > a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> > b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> > index e4c4d47..3d134a5 100644
> > --- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> > +++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
> > @@ -3,8 +3,13 @@ It can be configured to have one channel or two
> > channels. If configured  as two channels, one is to transmit to the
> > video device and another is  to receive from the video device.
> >
> > +Xilinx AXI DMA engine, it does transfers between memory and AXI4
> > +stream target devices. It can be configured to have one channel or two
> channels.
> > +If configured as two channels, one is to transmit to the device and
> > +another is to receive from the device.
> > +
> >  Required properties:
> > -- compatible: Should be "xlnx,axi-vdma-1.00.a"
> > +- compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a"
> >  - #dma-cells: Should be <1>, see "dmas" property below
> >  - reg: Should contain VDMA registers location and length.
> >  - xlnx,num-fstores: Should be the number of framebuffers as configured in
> h/w.
> > @@ -55,6 +60,21 @@ axi_vdma_0: axivdma@40030000 {
> >  	} ;
> >  } ;
> >
> > +axi_dma_0: axidma@40400000 {
> > +	compatible = "xlnx,axi-dma-1.00.a";
> > +	#dma_cells = <1>;
> > +	reg = < 0x40400000 0x10000 >;
> > +	dma-channel@40400000 {
> > +		compatible = "xlnx,axi-vdma-mm2s-channel";
> > +		interrupts = < 0 59 4 >;
> > +		xlnx,datawidth = <0x40>;
> > +	} ;
> > +	dma-channel@40400030 {
> > +		compatible = "xlnx,axi-vdma-s2mm-channel";
> > +		interrupts = < 0 58 4 >;
> > +		xlnx,datawidth = <0x40>;
> > +	} ;
> > +} ;
> >
> >  * DMA client
> >
> > --
> > 2.1.2
> >
> 
> --
> ~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings
  2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
  2016-03-15 17:50   ` Moritz Fischer
@ 2016-03-17  7:00   ` Laurent Pinchart
  1 sibling, 0 replies; 24+ messages in thread
From: Laurent Pinchart @ 2016-03-17  7:00 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, luis, anirudh, dmaengine,
	linux-arm-kernel, linux-kernel

Hello,

On Tuesday 15 March 2016 22:53:06 Kedareswara rao Appana wrote:
> This patch fixes the below checkpatch.pl warnings.
> 
> WARNING: void function return statements are not generally useful
> +	return;
> +}
> 
> WARNING: void function return statements are not generally useful
> +	return;
> +}
> 
> WARNING: Missing a blank line after declarations
> +		u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
> +		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> ---
>  drivers/dma/xilinx/xilinx_vdma.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_vdma.c
> b/drivers/dma/xilinx/xilinx_vdma.c index 0ee0321..7ab6793 100644
> --- a/drivers/dma/xilinx/xilinx_vdma.c
> +++ b/drivers/dma/xilinx/xilinx_vdma.c
> @@ -569,8 +569,6 @@ static void xilinx_vdma_halt(struct xilinx_vdma_chan
> *chan) chan, vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR));
>  		chan->err = true;
>  	}
> -
> -	return;
>  }
> 
>  /**
> @@ -595,8 +593,6 @@ static void xilinx_vdma_start(struct xilinx_vdma_chan
> *chan)
> 
>  		chan->err = true;
>  	}
> -
> -	return;
>  }
> 
>  /**
> @@ -829,6 +825,7 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void
> *data) * make sure not to write to other error bits to 1.
>  		 */
>  		u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
> +
>  		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
>  				errors & XILINX_VDMA_DMASR_ERR_RECOVER_MASK);

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores
  2016-03-15 17:23 ` [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores Kedareswara rao Appana
  2016-03-16  3:13   ` Vinod Koul
@ 2016-03-17  7:19   ` Laurent Pinchart
  2016-03-18 12:43     ` Appana Durga Kedareswara Rao
  1 sibling, 1 reply; 24+ messages in thread
From: Laurent Pinchart @ 2016-03-17  7:19 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, luis, anirudh, dmaengine,
	linux-arm-kernel, linux-kernel

Hello,

On Tuesday 15 March 2016 22:53:07 Kedareswara rao Appana wrote:
> This patch adds quirks support in the driver to differentiate differnet IP

s/differnet/different/

(and in the subject line too)

With this series applied the driver will not be vdma-specific anymore. The 
xilinx_vdma_ prefix will thus be confusing. I'd bite the bullet and apply a 
global rename to functions that are not shared between the three DMA IP core 
types before patch 2/7.

> cores.
> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_vdma.c | 36 ++++++++++++++++++++++++++++++------
>  1 file changed, 30 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_vdma.c
> b/drivers/dma/xilinx/xilinx_vdma.c index 7ab6793..f682bef 100644
> --- a/drivers/dma/xilinx/xilinx_vdma.c
> +++ b/drivers/dma/xilinx/xilinx_vdma.c
> @@ -139,6 +139,8 @@
>  /* Delay loop counter to prevent hardware failure */
>  #define XILINX_VDMA_LOOP_COUNT		1000000
> 
> +#define AXIVDMA_SUPPORT		BIT(0)
> +
>  /**
>   * struct xilinx_vdma_desc_hw - Hardware Descriptor
>   * @next_desc: Next Descriptor Pointer @0x00
> @@ -240,6 +242,7 @@ struct xilinx_vdma_chan {
>   * @chan: Driver specific VDMA channel
>   * @has_sg: Specifies whether Scatter-Gather is present or not
>   * @flush_on_fsync: Flush on frame sync
> + * @quirks: Needed for different IP cores
>   */
>  struct xilinx_vdma_device {
>  	void __iomem *regs;
> @@ -248,6 +251,15 @@ struct xilinx_vdma_device {
>  	struct xilinx_vdma_chan *chan[XILINX_VDMA_MAX_CHANS_PER_DEVICE];
>  	bool has_sg;
>  	u32 flush_on_fsync;
> +	u32 quirks;
> +};
> +
> +/**
> + * struct xdma_platform_data - DMA platform structure
> + * @quirks: quirks for platform specific data.
> + */
> +struct xdma_platform_data {

This isn't platform data but device information, I'd rename the structure to 
xdma_device_info (and update the description accordingly).

> +	u32 quirks;

I don't think you should call this field quirks as it stores the IP core type. 
Quirks usually refer to non-standard behaviour that requires specific handling 
in the driver, possibly in the form of work-arounds. I would thus call the 
field type and document it as "DMA IP core type". Similarly I'd rename 
AXIVDMA_SUPPORT to XDMA_TYPE_VDMA.

>  };
> 
>  /* Macros */
> @@ -1239,6 +1251,16 @@ static struct dma_chan *of_dma_xilinx_xlate(struct
> of_phandle_args *dma_spec, return
> dma_get_slave_channel(&xdev->chan[chan_id]->common);
>  }
> 
> +static const struct xdma_platform_data xvdma_def = {
> +	.quirks = AXIVDMA_SUPPORT,
> +};
> +
> +static const struct of_device_id xilinx_vdma_of_ids[] = {
> +	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},

Missing space before }, did you run checkpatch ?

As the xdma_platform_data structure contains a single integer, you could store 
it in the .data field directly.

> +	{}
> +};
> +MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
> +
>  /**
>   * xilinx_vdma_probe - Driver probe function
>   * @pdev: Pointer to the platform_device structure
> @@ -1251,6 +1273,7 @@ static int xilinx_vdma_probe(struct platform_device
> *pdev) struct xilinx_vdma_device *xdev;
>  	struct device_node *child;
>  	struct resource *io;
> +	const struct of_device_id *match;
>  	u32 num_frames;
>  	int i, err;
> 
> @@ -1259,6 +1282,13 @@ static int xilinx_vdma_probe(struct platform_device
> *pdev) if (!xdev)
>  		return -ENOMEM;
> 
> +	match = of_match_node(xilinx_vdma_of_ids, pdev->dev.of_node);

You can use of_device_get_match_data() to get the data directly.

> +	if (match && match->data) {

I don't think this condition can ever be false.

> +		const struct xdma_platform_data *data = match->data;
> +
> +		xdev->quirks = data->quirks;
> +	}
> +
>  	xdev->dev = &pdev->dev;
> 
>  	/* Request and map I/O memory */
> @@ -1356,12 +1386,6 @@ static int xilinx_vdma_remove(struct platform_device
> *pdev) return 0;
>  }
> 
> -static const struct of_device_id xilinx_vdma_of_ids[] = {
> -	{ .compatible = "xlnx,axi-vdma-1.00.a",},
> -	{}
> -};
> -MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
> -
>  static struct platform_driver xilinx_vdma_driver = {
>  	.driver = {
>  		.name = "xilinx-vdma",

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-15 17:23 ` [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine Kedareswara rao Appana
@ 2016-03-17  8:05   ` Laurent Pinchart
  2016-03-18 12:43     ` Appana Durga Kedareswara Rao
  0 siblings, 1 reply; 24+ messages in thread
From: Laurent Pinchart @ 2016-03-17  8:05 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, luis, anirudh, dmaengine,
	linux-arm-kernel, linux-kernel

Hello,

Thank you for the patch.

On Tuesday 15 March 2016 22:53:08 Kedareswara rao Appana wrote:
> This patch adds support for the AXI Direct Memory Access (AXI DMA)
> core, which is a soft Xilinx IP core that provides high-
> bandwidth direct memory access between memory and AXI4-Stream
> type target peripherals.
> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_vdma.c | 385 +++++++++++++++++++++++++++++++-----
>  1 file changed, 341 insertions(+), 44 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_vdma.c
> b/drivers/dma/xilinx/xilinx_vdma.c index f682bef..87525a9 100644
> --- a/drivers/dma/xilinx/xilinx_vdma.c
> +++ b/drivers/dma/xilinx/xilinx_vdma.c
> @@ -16,6 +16,12 @@
>   * video device (S2MM). Initialization, status, interrupt and management
>   * registers are accessed through an AXI4-Lite slave interface.
>   *
> + *  The AXI DMA, is a soft IP, which provides high-bandwidth Direct Memory
> + *  Access between memory and AXI4-Stream-type target peripherals. It can
> be
> + *  configured to have one channel or two channels and if configured as two
> + *  channels, one is to transmit data from memory to a device and another
> is
> + *  to receive from a device.

Let's try to be both descriptive and consistent.

"The AXI Direct Memory Access (AXI DMA) core is a soft Xilinx IP core the 
provides high-bandwidth one dimensional direct memory access between memory 
and AXI4-Stream target peripherals. It supports one receive and one transmit 
channel, both of them optional at synthesis time."

> + *
>   * This program is free software: you can redistribute it and/or modify
>   * it under the terms of the GNU General Public License as published by
>   * the Free Software Foundation, either version 2 of the License, or
> @@ -140,6 +146,20 @@
>  #define XILINX_VDMA_LOOP_COUNT		1000000
> 
>  #define AXIVDMA_SUPPORT		BIT(0)
> +#define AXIDMA_SUPPORT		BIT(1)
> +
> +/* AXI DMA Specific Registers/Offsets */
> +#define XILINX_DMA_REG_SRCDSTADDR	0x18
> +#define XILINX_DMA_REG_DSTADDR		0x20
> +#define XILINX_DMA_REG_BTT		0x28
> +
> +#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
> +#define XILINX_DMA_CR_COALESCE_MAX	GENMASK(23, 16)
> +#define XILINX_DMA_CR_COALESCE_SHIFT	16
> +#define XILINX_DMA_BD_SOP		BIT(27)
> +#define XILINX_DMA_BD_EOP		BIT(26)
> +#define XILINX_DMA_COALESCE_MAX		255
> +#define XILINX_DMA_NUM_APP_WORDS	5
> 
>  /**
>   * struct xilinx_vdma_desc_hw - Hardware Descriptor
> @@ -147,19 +167,23 @@
>   * @pad1: Reserved @0x04
>   * @buf_addr: Buffer address @0x08
>   * @pad2: Reserved @0x0C
> - * @vsize: Vertical Size @0x10
> + * @dstaddr_vsize: Vertical Size @0x10
>   * @hsize: Horizontal Size @0x14
> - * @stride: Number of bytes between the first
> + * @control_stride: Number of bytes between the first
>   *	    pixels of each horizontal line @0x18
> + * @status: Status field @0x1C
> + * @app: APP Fields @0x20 - 0x30

As the descriptors are not identical for the DMA and VDMA cores, please define 
two structures instead of abusing this one.

>   */
>  struct xilinx_vdma_desc_hw {
>  	u32 next_desc;
>  	u32 pad1;
>  	u32 buf_addr;
>  	u32 pad2;
> -	u32 vsize;
> +	u32 dstaddr_vsize;
>  	u32 hsize;
> -	u32 stride;
> +	u32 control_stride;
> +	u32 status;
> +	u32 app[XILINX_DMA_NUM_APP_WORDS];
>  } __aligned(64);
> 
>  /**
> @@ -209,6 +233,9 @@ struct xilinx_vdma_tx_descriptor {
>   * @config: Device configuration info
>   * @flush_on_fsync: Flush on Frame sync
>   * @desc_pendingcount: Descriptor pending count
> + * @residue: Residue for AXI DMA
> + * @seg_v: Statically allocated segments base
> + * @start_transfer: Differentiate b/w DMA IP's transfer

Please clearly document which fields are common and which fields are specific 
to one DMA engine type.

>   */
>  struct xilinx_vdma_chan {
>  	struct xilinx_vdma_device *xdev;
> @@ -232,6 +259,9 @@ struct xilinx_vdma_chan {
>  	struct xilinx_vdma_config config;
>  	bool flush_on_fsync;
>  	u32 desc_pendingcount;
> +	u32 residue;
> +	struct xilinx_vdma_tx_segment *seg_v;
> +	void   (*start_transfer)(struct xilinx_vdma_chan *chan);

One space after void is enough.

>  };
> 
>  /**
> @@ -436,6 +466,7 @@ static void xilinx_vdma_free_chan_resources(struct
> dma_chan *dchan) dev_dbg(chan->dev, "Free all channel resources.\n");
> 
>  	xilinx_vdma_free_descriptors(chan);
> +	xilinx_vdma_free_tx_segment(chan, chan->seg_v);
>  	dma_pool_destroy(chan->desc_pool);
>  	chan->desc_pool = NULL;
>  }
> @@ -515,7 +546,12 @@ static int xilinx_vdma_alloc_chan_resources(struct
> dma_chan *dchan) return -ENOMEM;
>  	}
> 
> +	chan->seg_v = xilinx_vdma_alloc_tx_segment(chan);

This is a bit scary. You allocate a segment here and free it in 
xilinx_vdma_free_chan_resources, but seg_v is reassigned in 
xilinx_dma_start_transfer(). A comment explaining how seg_v is used and why 
this is safe would be nice.

>  	dma_cookie_init(dchan);
> +
> +	/* Enable interrupts */
> +	vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
> +		      XILINX_VDMA_DMAXR_ALL_IRQ_MASK);

Why is this needed here ?

>  	return 0;
>  }
> 
> @@ -531,7 +567,37 @@ static enum dma_status xilinx_vdma_tx_status(struct
> dma_chan *dchan, dma_cookie_t cookie,
>  					struct dma_tx_state *txstate)
>  {
> -	return dma_cookie_status(dchan, cookie, txstate);
> +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> +	struct xilinx_vdma_tx_descriptor *desc;
> +	struct xilinx_vdma_tx_segment *segment;
> +	struct xilinx_vdma_desc_hw *hw;
> +	enum dma_status ret;
> +	unsigned long flags;
> +	u32 residue = 0;
> +
> +	ret = dma_cookie_status(dchan, cookie, txstate);
> +	if (ret == DMA_COMPLETE || !txstate)
> +		return ret;
> +
> +	if (chan->xdev->quirks & AXIDMA_SUPPORT) {
> +		desc = list_last_entry(&chan->active_list,
> +				       struct xilinx_vdma_tx_descriptor, node);

Doesn't this need to be protected against race conditions ?

> +
> +		spin_lock_irqsave(&chan->lock, flags);
> +		if (chan->has_sg) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				hw = &segment->hw;
> +				residue += (hw->control_stride - hw->status) &
> +					   XILINX_DMA_MAX_TRANS_LEN;
> +			}
> +		}
> +		spin_unlock_irqrestore(&chan->lock, flags);
> +
> +		chan->residue = residue;
> +		dma_set_residue(txstate, chan->residue);
> +	}

Is this really specific to the DMA engine type, or is it applicable to the 
VDMA and CDMA engines as well ?

> +	return ret;
>  }
> 
>  /**
> @@ -713,8 +779,9 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_vdma_chan *chan) /* HW expects these parameters to be same for one
> transaction */ vdma_desc_write(chan, XILINX_VDMA_REG_HSIZE,
> last->hw.hsize);
>  		vdma_desc_write(chan, XILINX_VDMA_REG_FRMDLY_STRIDE,
> -				last->hw.stride);
> -		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE, last->hw.vsize);
> +				last->hw.control_stride);
> +		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE,
> +				last->hw.dstaddr_vsize);
>  	}
> 
>  	list_splice_tail_init(&chan->pending_list, &chan->active_list);
> @@ -736,6 +803,105 @@ static void xilinx_vdma_issue_pending(struct dma_chan
> *dchan) }
> 
>  /**
> + * xilinx_dma_start_transfer - Starts DMA transfer
> + * @chan: Driver specific channel struct pointer
> + */
> +static void xilinx_dma_start_transfer(struct xilinx_vdma_chan *chan)
> +{
> +	struct xilinx_vdma_tx_descriptor *head_desc, *tail_desc;
> +	struct xilinx_vdma_tx_segment *tail_segment, *old_head, *new_head;
> +	u32 reg;
> +
> +	/* This function was invoked with lock held */

If you want this to be mentioned in a comment please add it to the comment 
block before the function. I'd write it as "This function must be called with 
the channel lock held.".

> +	if (chan->err)
> +		return;
> +
> +	if (list_empty(&chan->pending_list))
> +		return;
> +
> +	head_desc = list_first_entry(&chan->pending_list,
> +				     struct xilinx_vdma_tx_descriptor, node);
> +	tail_desc = list_last_entry(&chan->pending_list,
> +				    struct xilinx_vdma_tx_descriptor, node);
> +	tail_segment = list_last_entry(&tail_desc->segments,
> +				       struct xilinx_vdma_tx_segment, node);
> +
> +	old_head = list_first_entry(&head_desc->segments,
> +				struct xilinx_vdma_tx_segment, node);
> +	new_head = chan->seg_v;
> +	/* Copy Buffer Descriptor fields. */
> +	new_head->hw = old_head->hw;
> +
> +	/* Swap and save new reserve */
> +	list_replace_init(&old_head->node, &new_head->node);
> +	chan->seg_v = old_head;
> +
> +	tail_segment->hw.next_desc = chan->seg_v->phys;
> +	head_desc->async_tx.phys = new_head->phys;
> +
> +	/* If it is SG mode and hardware is busy, cannot submit */
> +	if (chan->has_sg && xilinx_vdma_is_running(chan) &&
> +	    !xilinx_vdma_is_idle(chan)) {
> +		dev_dbg(chan->dev, "DMA controller still busy\n");
> +		return;

Shouldn't this be checked before modifying the descriptors above ?

> +	}
> +
> +	reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
> +
> +	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
> +		reg &= ~XILINX_DMA_CR_COALESCE_MAX;
> +		reg |= chan->desc_pendingcount <<
> +				  XILINX_DMA_CR_COALESCE_SHIFT;
> +		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, reg);
> +	}
> +
> +	if (chan->has_sg)
> +		vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
> +				head_desc->async_tx.phys);
> +
> +	xilinx_vdma_start(chan);
> +
> +	if (chan->err)
> +		return;
> +
> +	/* Start the transfer */
> +	if (chan->has_sg) {
> +		vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
> +			       tail_segment->phys);
> +	} else {
> +		struct xilinx_vdma_tx_segment *segment;
> +		struct xilinx_vdma_desc_hw *hw;
> +
> +		segment = list_first_entry(&head_desc->segments,
> +					   struct xilinx_vdma_tx_segment, node);
> +		hw = &segment->hw;
> +
> +		vdma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
> +
> +		/* Start the transfer */
> +		vdma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> +			       hw->control_stride & XILINX_DMA_MAX_TRANS_LEN);
> +	}
> +
> +	list_splice_tail_init(&chan->pending_list, &chan->active_list);
> +	chan->desc_pendingcount = 0;
> +}
> +
> +/**
> + * xilinx_dma_issue_pending - Issue pending transactions
> + * @dchan: DMA channel
> + */
> +static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> +{
> +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->lock, flags);
> +	xilinx_dma_start_transfer(chan);

If you write it as chan->start_transfer(chan) you won't have to duplicate the 
issue_pending function.

> +	spin_unlock_irqrestore(&chan->lock, flags);
> +}
> +
> +/**
>   * xilinx_vdma_complete_descriptor - Mark the active descriptor as complete
> * @chan : xilinx DMA channel
>   *
> @@ -841,15 +1007,12 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq,
> void *data) vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
>  				errors & XILINX_VDMA_DMASR_ERR_RECOVER_MASK);
> 
> -		if (!chan->flush_on_fsync ||
> -		    (errors & ~XILINX_VDMA_DMASR_ERR_RECOVER_MASK)) {
> -			dev_err(chan->dev,
> -				"Channel %p has errors %x, cdr %x tdr %x\n",
> -				chan, errors,
> -				vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
> -				vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
> -			chan->err = true;
> -		}
> +		dev_err(chan->dev,
> +			"Channel %p has errors %x, cdr %x tdr %x\n",
> +			chan, errors,
> +			vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
> +			vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
> +		chan->err = true;

You're removing the ability to recover when C_FLUSH_ON_FSYNC is set. Why is 
that ?

>  	}
> 
>  	if (status & XILINX_VDMA_DMASR_DLY_CNT_IRQ) {
> @@ -863,7 +1026,7 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq,
> void *data) if (status & XILINX_VDMA_DMASR_FRM_CNT_IRQ) {
>  		spin_lock(&chan->lock);
>  		xilinx_vdma_complete_descriptor(chan);
> -		xilinx_vdma_start_transfer(chan);
> +		chan->start_transfer(chan);
>  		spin_unlock(&chan->lock);
>  	}
> 
> @@ -903,7 +1066,8 @@ append:
>  	list_add_tail(&desc->node, &chan->pending_list);
>  	chan->desc_pendingcount++;
> 
> -	if (unlikely(chan->desc_pendingcount > chan->num_frms)) {
> +	if ((unlikely(chan->desc_pendingcount > chan->num_frms)) &&

Parenthesis around unlikely are not needed.

> +		(chan->xdev->quirks & AXIVDMA_SUPPORT)) {
>  		dev_dbg(chan->dev, "desc pendingcount is too high\n");
>  		chan->desc_pendingcount = chan->num_frms;
>  	}
> @@ -989,11 +1153,11 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan
> *dchan,
> 
>  	/* Fill in the hardware descriptor */
>  	hw = &segment->hw;
> -	hw->vsize = xt->numf;
> +	hw->dstaddr_vsize = xt->numf;
>  	hw->hsize = xt->sgl[0].size;
> -	hw->stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
> +	hw->control_stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
>  			XILINX_VDMA_FRMDLY_STRIDE_STRIDE_SHIFT;
> -	hw->stride |= chan->config.frm_dly <<
> +	hw->control_stride |= chan->config.frm_dly <<
>  			XILINX_VDMA_FRMDLY_STRIDE_FRMDLY_SHIFT;
> 
>  	if (xt->dir != DMA_MEM_TO_DEV)
> @@ -1019,6 +1183,108 @@ error:
>  }
> 
>  /**
> + * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE
> transaction + * @dchan: DMA channel
> + * @sgl: scatterlist to transfer to/from
> + * @sg_len: number of entries in @scatterlist
> + * @direction: DMA direction
> + * @flags: transfer ack flags
> + * @context: APP words of the descriptor
> + *
> + * Return: Async transaction descriptor on success and NULL on failure
> + */
> +static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
> +	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> +	enum dma_transfer_direction direction, unsigned long flags,
> +	void *context)
> +{
> +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> +	struct xilinx_vdma_tx_descriptor *desc;
> +	struct xilinx_vdma_tx_segment *segment = NULL, *prev = NULL;
> +	u32 *app_w = (u32 *)context;
> +	struct scatterlist *sg;
> +	size_t copy, sg_used;

Please don't declare multiple unrelated variables on a single line.

> +	int i;

i will never be negative, you can make it an unsigned int.

> +
> +	if (!is_slave_direction(direction))
> +		return NULL;
> +
> +	/* Allocate a transaction descriptor. */
> +	desc = xilinx_vdma_alloc_tx_descriptor(chan);
> +	if (!desc)
> +		return NULL;
> +
> +	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> +	desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
> +
> +	/* Build transactions using information in the scatter gather list */
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		sg_used = 0;
> +
> +		/* Loop until the entire scatterlist entry is used */
> +		while (sg_used < sg_dma_len(sg)) {
> +			struct xilinx_vdma_desc_hw *hw;
> +
> +			/* Get a free segment */
> +			segment = xilinx_vdma_alloc_tx_segment(chan);
> +			if (!segment)
> +				goto error;
> +
> +			/*
> +			 * Calculate the maximum number of bytes to transfer,
> +			 * making sure it is less than the hw limit
> +			 */
> +			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> +				     XILINX_DMA_MAX_TRANS_LEN);
> +			hw = &segment->hw;
> +
> +			/* Fill in the descriptor */
> +			hw->buf_addr = sg_dma_address(sg) + sg_used;
> +
> +			hw->control_stride = copy;
> +
> +			if (chan->direction == DMA_MEM_TO_DEV) {
> +				if (app_w)
> +					memcpy(hw->app, app_w, sizeof(u32) *
> +					       XILINX_DMA_NUM_APP_WORDS);
> +			}
> +
> +			if (prev)
> +				prev->hw.next_desc = segment->phys;
> +
> +			prev = segment;
> +			sg_used += copy;
> +
> +			/*
> +			 * Insert the segment into the descriptor segments
> +			 * list.
> +			 */
> +			list_add_tail(&segment->node, &desc->segments);
> +		}
> +	}
> +
> +	segment = list_first_entry(&desc->segments,
> +				   struct xilinx_vdma_tx_segment, node);
> +	desc->async_tx.phys = segment->phys;
> +	prev->hw.next_desc = segment->phys;
> +
> +	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
> +	if (chan->direction == DMA_MEM_TO_DEV) {
> +		segment->hw.control_stride |= XILINX_DMA_BD_SOP;
> +		segment = list_last_entry(&desc->segments,
> +					  struct xilinx_vdma_tx_segment,
> +					  node);
> +		segment->hw.control_stride |= XILINX_DMA_BD_EOP;
> +	}
> +
> +	return &desc->async_tx;
> +
> +error:
> +	xilinx_vdma_free_tx_descriptor(chan, desc);
> +	return NULL;
> +}
> +
> +/**
>   * xilinx_vdma_terminate_all - Halt the channel and free descriptors
>   * @chan: Driver specific VDMA Channel pointer
>   */
> @@ -1179,27 +1445,36 @@ static int xilinx_vdma_chan_probe(struct
> xilinx_vdma_device *xdev, chan->id = 0;
> 
>  		chan->ctrl_offset = XILINX_VDMA_MM2S_CTRL_OFFSET;
> -		chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
> +		if (xdev->quirks & AXIVDMA_SUPPORT) {
> +			chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
> 
> -		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> -		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
> -			chan->flush_on_fsync = true;
> +			if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> +			    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
> +				chan->flush_on_fsync = true;
> +		}
>  	} else if (of_device_is_compatible(node,
>  					    "xlnx,axi-vdma-s2mm-channel")) {
>  		chan->direction = DMA_DEV_TO_MEM;
>  		chan->id = 1;
> 
>  		chan->ctrl_offset = XILINX_VDMA_S2MM_CTRL_OFFSET;
> -		chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
> +		if (xdev->quirks & AXIVDMA_SUPPORT) {
> +			chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
> 
> -		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> -		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
> -			chan->flush_on_fsync = true;
> +			if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> +			    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
> +				chan->flush_on_fsync = true;
> +		}
>  	} else {
>  		dev_err(xdev->dev, "Invalid channel compatible node\n");
>  		return -EINVAL;
>  	}
> 
> +	if (xdev->quirks & AXIVDMA_SUPPORT)
> +		chan->start_transfer = xilinx_vdma_start_transfer;
> +	else
> +		chan->start_transfer = xilinx_dma_start_transfer;
> +
>  	/* Request the interrupt */
>  	chan->irq = irq_of_parse_and_map(node, 0);
>  	err = request_irq(chan->irq, xilinx_vdma_irq_handler, IRQF_SHARED,
> @@ -1255,8 +1530,13 @@ static const struct xdma_platform_data xvdma_def = {
>  	.quirks = AXIVDMA_SUPPORT,
>  };
> 
> +static const struct xdma_platform_data xdma_def = {
> +	.quirks = AXIDMA_SUPPORT,
> +};
> +
>  static const struct of_device_id xilinx_vdma_of_ids[] = {
>  	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
> +	{ .compatible = "xlnx,axi-dma-1.00.a", .data = &xdma_def},

Please keep the entries alphabetically sorted.

>  	{}
>  };
>  MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
> @@ -1300,16 +1580,22 @@ static int xilinx_vdma_probe(struct platform_device
> *pdev) /* Retrieve the DMA engine properties from the device tree */
>  	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> 
> -	err = of_property_read_u32(node, "xlnx,num-fstores", &num_frames);
> -	if (err < 0) {
> -		dev_err(xdev->dev, "missing xlnx,num-fstores property\n");
> -		return err;
> -	}
> +	if ((xdev->quirks & AXIVDMA_SUPPORT)) {

Extra parenthesis. Furthermore, as every DMA IP implements one and only one 
type, please replace the _SUPPORT bitmask macros with an enum and test for 
equality.

> 
> -	err = of_property_read_u32(node, "xlnx,flush-fsync",
> -					&xdev->flush_on_fsync);
> -	if (err < 0)
> -		dev_warn(xdev->dev, "missing xlnx,flush-fsync property\n");
> +		err = of_property_read_u32(node, "xlnx,num-fstores",
> +					   &num_frames);
> +		if (err < 0) {
> +			dev_err(xdev->dev,
> +				"missing xlnx,num-fstores property\n");
> +			return err;
> +		}
> +
> +		err = of_property_read_u32(node, "xlnx,flush-fsync",
> +						&xdev->flush_on_fsync);

Too many spaces, please align &xdev under node.

> +		if (err < 0)
> +			dev_warn(xdev->dev,
> +				 "missing xlnx,flush-fsync property\n");
> +	}
> 
>  	/* Initialize the DMA engine */
>  	xdev->common.dev = &pdev->dev;
> @@ -1322,11 +1608,20 @@ static int xilinx_vdma_probe(struct platform_device
> *pdev) xilinx_vdma_alloc_chan_resources;
>  	xdev->common.device_free_chan_resources =
>  				xilinx_vdma_free_chan_resources;
> -	xdev->common.device_prep_interleaved_dma =
> -				xilinx_vdma_dma_prep_interleaved;
>  	xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
>  	xdev->common.device_tx_status = xilinx_vdma_tx_status;
> -	xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
> +	if (xdev->quirks & AXIVDMA_SUPPORT) {
> +		xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
> +		xdev->common.device_prep_interleaved_dma =
> +				xilinx_vdma_dma_prep_interleaved;

Shouldn't the VDMA also set directions and residue granularity ? Please add 
that as an additional patch before this one.

> +	} else {
> +		xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
> +		xdev->common.device_issue_pending = xilinx_dma_issue_pending;

I would use the same initialization order in both branches of the if, it will 
make the code easier to read.

> +		xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
> +					  BIT(DMA_MEM_TO_DEV);

Shouldn't that depend on which channels have been synthesized (and thus 
declared in DT) ? The code to set the directions field can probably be made 
for all three DMA engine types.

> +		xdev->common.residue_granularity =
> +					  DMA_RESIDUE_GRANULARITY_SEGMENT;
> +	}
> 
>  	platform_set_drvdata(pdev, xdev);
> 
> @@ -1337,9 +1632,11 @@ static int xilinx_vdma_probe(struct platform_device
> *pdev) goto error;
>  	}
> 
> -	for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
> -		if (xdev->chan[i])
> -			xdev->chan[i]->num_frms = num_frames;
> +	if (xdev->quirks & AXIVDMA_SUPPORT) {
> +		for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
> +			if (xdev->chan[i])
> +				xdev->chan[i]->num_frms = num_frames;
> +	}
> 
>  	/* Register the DMA engine with the core */
>  	dma_async_device_register(&xdev->common);

-- 
Regards,

Laurent Pinchart
:

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine
  2016-03-17  8:05   ` Laurent Pinchart
@ 2016-03-18 12:43     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-18 12:43 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: dan.j.williams, vinod.koul, Michal Simek, Soren Brinkmann,
	moritz.fischer, luis, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, linux-kernel

Hi Laurent Pinchart,

	Thanks for reviewing the patch.

> -----Original Message-----
> From: Laurent Pinchart [mailto:laurent.pinchart@ideasonboard.com]
> Sent: Thursday, March 17, 2016 1:35 PM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; vinod.koul@intel.com; Michal Simek; Soren
> Brinkmann; Appana Durga Kedareswara Rao; moritz.fischer@ettus.com;
> luis@debethencourt.com; Anirudha Sarangi; dmaengine@vger.kernel.org; linux-
> arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI
> Direct Memory Access Engine
> 
> Hello,
> 
> Thank you for the patch.
> 
> On Tuesday 15 March 2016 22:53:08 Kedareswara rao Appana wrote:
> > This patch adds support for the AXI Direct Memory Access (AXI DMA)
> > core, which is a soft Xilinx IP core that provides high- bandwidth
> > direct memory access between memory and AXI4-Stream type target
> > peripherals.
> >
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> >  drivers/dma/xilinx/xilinx_vdma.c | 385
> > +++++++++++++++++++++++++++++++-----
> >  1 file changed, 341 insertions(+), 44 deletions(-)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_vdma.c
> > b/drivers/dma/xilinx/xilinx_vdma.c index f682bef..87525a9 100644
> > --- a/drivers/dma/xilinx/xilinx_vdma.c
> > +++ b/drivers/dma/xilinx/xilinx_vdma.c
> > @@ -16,6 +16,12 @@
> >   * video device (S2MM). Initialization, status, interrupt and management
> >   * registers are accessed through an AXI4-Lite slave interface.
> >   *
> > + *  The AXI DMA, is a soft IP, which provides high-bandwidth Direct
> > + Memory
> > + *  Access between memory and AXI4-Stream-type target peripherals. It
> > + can
> > be
> > + *  configured to have one channel or two channels and if configured
> > + as two
> > + *  channels, one is to transmit data from memory to a device and
> > + another
> > is
> > + *  to receive from a device.
> 
> Let's try to be both descriptive and consistent.
> 
> "The AXI Direct Memory Access (AXI DMA) core is a soft Xilinx IP core the
> provides high-bandwidth one dimensional direct memory access between
> memory and AXI4-Stream target peripherals. It supports one receive and one
> transmit channel, both of them optional at synthesis time."


Ok Will fix it in v2.

> 
> > + *
> >   * This program is free software: you can redistribute it and/or modify
> >   * it under the terms of the GNU General Public License as published by
> >   * the Free Software Foundation, either version 2 of the License, or
> > @@ -140,6 +146,20 @@
> >  #define XILINX_VDMA_LOOP_COUNT		1000000
> >
> >  #define AXIVDMA_SUPPORT		BIT(0)
> > +#define AXIDMA_SUPPORT		BIT(1)
> > +
> > +/* AXI DMA Specific Registers/Offsets */
> > +#define XILINX_DMA_REG_SRCDSTADDR	0x18
> > +#define XILINX_DMA_REG_DSTADDR		0x20
> > +#define XILINX_DMA_REG_BTT		0x28
> > +
> > +#define XILINX_DMA_MAX_TRANS_LEN	GENMASK(22, 0)
> > +#define XILINX_DMA_CR_COALESCE_MAX	GENMASK(23, 16)
> > +#define XILINX_DMA_CR_COALESCE_SHIFT	16
> > +#define XILINX_DMA_BD_SOP		BIT(27)
> > +#define XILINX_DMA_BD_EOP		BIT(26)
> > +#define XILINX_DMA_COALESCE_MAX		255
> > +#define XILINX_DMA_NUM_APP_WORDS	5
> >
> >  /**
> >   * struct xilinx_vdma_desc_hw - Hardware Descriptor @@ -147,19
> > +167,23 @@
> >   * @pad1: Reserved @0x04
> >   * @buf_addr: Buffer address @0x08
> >   * @pad2: Reserved @0x0C
> > - * @vsize: Vertical Size @0x10
> > + * @dstaddr_vsize: Vertical Size @0x10
> >   * @hsize: Horizontal Size @0x14
> > - * @stride: Number of bytes between the first
> > + * @control_stride: Number of bytes between the first
> >   *	    pixels of each horizontal line @0x18
> > + * @status: Status field @0x1C
> > + * @app: APP Fields @0x20 - 0x30
> 
> As the descriptors are not identical for the DMA and VDMA cores, please define
> two structures instead of abusing this one.

Ok will fix in v2.

> 
> >   */
> >  struct xilinx_vdma_desc_hw {
> >  	u32 next_desc;
> >  	u32 pad1;
> >  	u32 buf_addr;
> >  	u32 pad2;
> > -	u32 vsize;
> > +	u32 dstaddr_vsize;
> >  	u32 hsize;
> > -	u32 stride;
> > +	u32 control_stride;
> > +	u32 status;
> > +	u32 app[XILINX_DMA_NUM_APP_WORDS];
> >  } __aligned(64);
> >
> >  /**
> > @@ -209,6 +233,9 @@ struct xilinx_vdma_tx_descriptor {
> >   * @config: Device configuration info
> >   * @flush_on_fsync: Flush on Frame sync
> >   * @desc_pendingcount: Descriptor pending count
> > + * @residue: Residue for AXI DMA
> > + * @seg_v: Statically allocated segments base
> > + * @start_transfer: Differentiate b/w DMA IP's transfer
> 
> Please clearly document which fields are common and which fields are specific
> to one DMA engine type.

Ok Sure will fix in v2.

> 
> >   */
> >  struct xilinx_vdma_chan {
> >  	struct xilinx_vdma_device *xdev;
> > @@ -232,6 +259,9 @@ struct xilinx_vdma_chan {
> >  	struct xilinx_vdma_config config;
> >  	bool flush_on_fsync;
> >  	u32 desc_pendingcount;
> > +	u32 residue;
> > +	struct xilinx_vdma_tx_segment *seg_v;
> > +	void   (*start_transfer)(struct xilinx_vdma_chan *chan);
> 
> One space after void is enough.

OK will fix in v2.

> 
> >  };
> >
> >  /**
> > @@ -436,6 +466,7 @@ static void xilinx_vdma_free_chan_resources(struct
> > dma_chan *dchan) dev_dbg(chan->dev, "Free all channel resources.\n");
> >
> >  	xilinx_vdma_free_descriptors(chan);
> > +	xilinx_vdma_free_tx_segment(chan, chan->seg_v);
> >  	dma_pool_destroy(chan->desc_pool);
> >  	chan->desc_pool = NULL;
> >  }
> > @@ -515,7 +546,12 @@ static int
> > xilinx_vdma_alloc_chan_resources(struct
> > dma_chan *dchan) return -ENOMEM;
> >  	}
> >
> > +	chan->seg_v = xilinx_vdma_alloc_tx_segment(chan);
> 
> This is a bit scary. You allocate a segment here and free it in
> xilinx_vdma_free_chan_resources, but seg_v is reassigned in
> xilinx_dma_start_transfer(). A comment explaining how seg_v is used and why
> this is safe would be nice.

OK will add comments in the v2.

> 
> >  	dma_cookie_init(dchan);
> > +
> > +	/* Enable interrupts */
> > +	vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
> > +		      XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
> 
> Why is this needed here ?

We are enable interrupts in the reset.
For VDMA Case if we reset one channel it will reset that particular channel only.
But in AXI DMA case if we reset one channel it will reset the other channel as well.
So one option is enable interrupts for the channels at the end of the probe.
Or enable interrupts during channel resource allocation. 

I did the changes in this patch for the 2nd option.


> 
> >  	return 0;
> >  }
> >
> > @@ -531,7 +567,37 @@ static enum dma_status
> > xilinx_vdma_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
> >  					struct dma_tx_state *txstate)
> >  {
> > -	return dma_cookie_status(dchan, cookie, txstate);
> > +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> > +	struct xilinx_vdma_tx_descriptor *desc;
> > +	struct xilinx_vdma_tx_segment *segment;
> > +	struct xilinx_vdma_desc_hw *hw;
> > +	enum dma_status ret;
> > +	unsigned long flags;
> > +	u32 residue = 0;
> > +
> > +	ret = dma_cookie_status(dchan, cookie, txstate);
> > +	if (ret == DMA_COMPLETE || !txstate)
> > +		return ret;
> > +
> > +	if (chan->xdev->quirks & AXIDMA_SUPPORT) {
> > +		desc = list_last_entry(&chan->active_list,
> > +				       struct xilinx_vdma_tx_descriptor, node);
> 
> Doesn't this need to be protected against race conditions ?

Yes will fix in v2.

> 
> > +
> > +		spin_lock_irqsave(&chan->lock, flags);
> > +		if (chan->has_sg) {
> > +			list_for_each_entry(segment, &desc->segments, node)
> {
> > +				hw = &segment->hw;
> > +				residue += (hw->control_stride - hw->status) &
> > +					   XILINX_DMA_MAX_TRANS_LEN;
> > +			}
> > +		}
> > +		spin_unlock_irqrestore(&chan->lock, flags);
> > +
> > +		chan->residue = residue;
> > +		dma_set_residue(txstate, chan->residue);
> > +	}
> 
> Is this really specific to the DMA engine type, or is it applicable to the VDMA and
> CDMA engines as well ?

Residue support is specific to AXI DMA IP only.
CDMA and VDMA IP's won't support residue.

> 
> > +	return ret;
> >  }
> >
> >  /**
> > @@ -713,8 +779,9 @@ static void xilinx_vdma_start_transfer(struct
> > xilinx_vdma_chan *chan) /* HW expects these parameters to be same for
> > one transaction */ vdma_desc_write(chan, XILINX_VDMA_REG_HSIZE,
> > last->hw.hsize);
> >  		vdma_desc_write(chan, XILINX_VDMA_REG_FRMDLY_STRIDE,
> > -				last->hw.stride);
> > -		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE, last-
> >hw.vsize);
> > +				last->hw.control_stride);
> > +		vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE,
> > +				last->hw.dstaddr_vsize);
> >  	}
> >
> >  	list_splice_tail_init(&chan->pending_list, &chan->active_list); @@
> > -736,6 +803,105 @@ static void xilinx_vdma_issue_pending(struct
> > dma_chan
> > *dchan) }
> >
> >  /**
> > + * xilinx_dma_start_transfer - Starts DMA transfer
> > + * @chan: Driver specific channel struct pointer  */ static void
> > +xilinx_dma_start_transfer(struct xilinx_vdma_chan *chan) {
> > +	struct xilinx_vdma_tx_descriptor *head_desc, *tail_desc;
> > +	struct xilinx_vdma_tx_segment *tail_segment, *old_head, *new_head;
> > +	u32 reg;
> > +
> > +	/* This function was invoked with lock held */
> 
> If you want this to be mentioned in a comment please add it to the comment
> block before the function. I'd write it as "This function must be called with the
> channel lock held.".

Sure will fix in v2.

> 
> > +	if (chan->err)
> > +		return;
> > +
> > +	if (list_empty(&chan->pending_list))
> > +		return;
> > +
> > +	head_desc = list_first_entry(&chan->pending_list,
> > +				     struct xilinx_vdma_tx_descriptor, node);
> > +	tail_desc = list_last_entry(&chan->pending_list,
> > +				    struct xilinx_vdma_tx_descriptor, node);
> > +	tail_segment = list_last_entry(&tail_desc->segments,
> > +				       struct xilinx_vdma_tx_segment, node);
> > +
> > +	old_head = list_first_entry(&head_desc->segments,
> > +				struct xilinx_vdma_tx_segment, node);
> > +	new_head = chan->seg_v;
> > +	/* Copy Buffer Descriptor fields. */
> > +	new_head->hw = old_head->hw;
> > +
> > +	/* Swap and save new reserve */
> > +	list_replace_init(&old_head->node, &new_head->node);
> > +	chan->seg_v = old_head;
> > +
> > +	tail_segment->hw.next_desc = chan->seg_v->phys;
> > +	head_desc->async_tx.phys = new_head->phys;
> > +
> > +	/* If it is SG mode and hardware is busy, cannot submit */
> > +	if (chan->has_sg && xilinx_vdma_is_running(chan) &&
> > +	    !xilinx_vdma_is_idle(chan)) {
> > +		dev_dbg(chan->dev, "DMA controller still busy\n");
> > +		return;
> 
> Shouldn't this be checked before modifying the descriptors above ?

Ok will fix in v2.

> 
> > +	}
> > +
> > +	reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
> > +
> > +	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
> > +		reg &= ~XILINX_DMA_CR_COALESCE_MAX;
> > +		reg |= chan->desc_pendingcount <<
> > +				  XILINX_DMA_CR_COALESCE_SHIFT;
> > +		vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, reg);
> > +	}
> > +
> > +	if (chan->has_sg)
> > +		vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
> > +				head_desc->async_tx.phys);
> > +
> > +	xilinx_vdma_start(chan);
> > +
> > +	if (chan->err)
> > +		return;
> > +
> > +	/* Start the transfer */
> > +	if (chan->has_sg) {
> > +		vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
> > +			       tail_segment->phys);
> > +	} else {
> > +		struct xilinx_vdma_tx_segment *segment;
> > +		struct xilinx_vdma_desc_hw *hw;
> > +
> > +		segment = list_first_entry(&head_desc->segments,
> > +					   struct xilinx_vdma_tx_segment,
> node);
> > +		hw = &segment->hw;
> > +
> > +		vdma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw-
> >buf_addr);
> > +
> > +		/* Start the transfer */
> > +		vdma_ctrl_write(chan, XILINX_DMA_REG_BTT,
> > +			       hw->control_stride &
> XILINX_DMA_MAX_TRANS_LEN);
> > +	}
> > +
> > +	list_splice_tail_init(&chan->pending_list, &chan->active_list);
> > +	chan->desc_pendingcount = 0;
> > +}
> > +
> > +/**
> > + * xilinx_dma_issue_pending - Issue pending transactions
> > + * @dchan: DMA channel
> > + */
> > +static void xilinx_dma_issue_pending(struct dma_chan *dchan) {
> > +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&chan->lock, flags);
> > +	xilinx_dma_start_transfer(chan);
> 
> If you write it as chan->start_transfer(chan) you won't have to duplicate the
> issue_pending function.

Sure will fix in v2.

> 
> > +	spin_unlock_irqrestore(&chan->lock, flags); }
> > +
> > +/**
> >   * xilinx_vdma_complete_descriptor - Mark the active descriptor as
> > complete
> > * @chan : xilinx DMA channel
> >   *
> > @@ -841,15 +1007,12 @@ static irqreturn_t xilinx_vdma_irq_handler(int
> > irq, void *data) vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
> >  				errors &
> XILINX_VDMA_DMASR_ERR_RECOVER_MASK);
> >
> > -		if (!chan->flush_on_fsync ||
> > -		    (errors & ~XILINX_VDMA_DMASR_ERR_RECOVER_MASK)) {
> > -			dev_err(chan->dev,
> > -				"Channel %p has errors %x, cdr %x tdr %x\n",
> > -				chan, errors,
> > -				vdma_ctrl_read(chan,
> XILINX_VDMA_REG_CURDESC),
> > -				vdma_ctrl_read(chan,
> XILINX_VDMA_REG_TAILDESC));
> > -			chan->err = true;
> > -		}
> > +		dev_err(chan->dev,
> > +			"Channel %p has errors %x, cdr %x tdr %x\n",
> > +			chan, errors,
> > +			vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
> > +			vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
> > +		chan->err = true;
> 
> You're removing the ability to recover when C_FLUSH_ON_FSYNC is set. Why is
> that ?

In any case (I mean if c_flush_on_sync is not being set that case)
 If we dump the register info it will be useful for the
Users to analyze the errors. That's why removed those checks.

> 
> >  	}
> >
> >  	if (status & XILINX_VDMA_DMASR_DLY_CNT_IRQ) { @@ -863,7 +1026,7
> @@
> > static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data) if
> > (status & XILINX_VDMA_DMASR_FRM_CNT_IRQ) {
> >  		spin_lock(&chan->lock);
> >  		xilinx_vdma_complete_descriptor(chan);
> > -		xilinx_vdma_start_transfer(chan);
> > +		chan->start_transfer(chan);
> >  		spin_unlock(&chan->lock);
> >  	}
> >
> > @@ -903,7 +1066,8 @@ append:
> >  	list_add_tail(&desc->node, &chan->pending_list);
> >  	chan->desc_pendingcount++;
> >
> > -	if (unlikely(chan->desc_pendingcount > chan->num_frms)) {
> > +	if ((unlikely(chan->desc_pendingcount > chan->num_frms)) &&
> 
> Parenthesis around unlikely are not needed.

Ok will fix.

> 
> > +		(chan->xdev->quirks & AXIVDMA_SUPPORT)) {
> >  		dev_dbg(chan->dev, "desc pendingcount is too high\n");
> >  		chan->desc_pendingcount = chan->num_frms;
> >  	}
> > @@ -989,11 +1153,11 @@ xilinx_vdma_dma_prep_interleaved(struct
> > dma_chan *dchan,
> >
> >  	/* Fill in the hardware descriptor */
> >  	hw = &segment->hw;
> > -	hw->vsize = xt->numf;
> > +	hw->dstaddr_vsize = xt->numf;
> >  	hw->hsize = xt->sgl[0].size;
> > -	hw->stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
> > +	hw->control_stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
> >  			XILINX_VDMA_FRMDLY_STRIDE_STRIDE_SHIFT;
> > -	hw->stride |= chan->config.frm_dly <<
> > +	hw->control_stride |= chan->config.frm_dly <<
> >  			XILINX_VDMA_FRMDLY_STRIDE_FRMDLY_SHIFT;
> >
> >  	if (xt->dir != DMA_MEM_TO_DEV)
> > @@ -1019,6 +1183,108 @@ error:
> >  }
> >
> >  /**
> > + * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE
> > transaction + * @dchan: DMA channel
> > + * @sgl: scatterlist to transfer to/from
> > + * @sg_len: number of entries in @scatterlist
> > + * @direction: DMA direction
> > + * @flags: transfer ack flags
> > + * @context: APP words of the descriptor
> > + *
> > + * Return: Async transaction descriptor on success and NULL on
> > +failure  */ static struct dma_async_tx_descriptor
> > +*xilinx_dma_prep_slave_sg(
> > +	struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
> > +	enum dma_transfer_direction direction, unsigned long flags,
> > +	void *context)
> > +{
> > +	struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
> > +	struct xilinx_vdma_tx_descriptor *desc;
> > +	struct xilinx_vdma_tx_segment *segment = NULL, *prev = NULL;
> > +	u32 *app_w = (u32 *)context;
> > +	struct scatterlist *sg;
> > +	size_t copy, sg_used;
> 
> Please don't declare multiple unrelated variables on a single line.


Ok will fix.

> 
> > +	int i;
> 
> i will never be negative, you can make it an unsigned int.

Ok will fix in v2.

> 
> > +
> > +	if (!is_slave_direction(direction))
> > +		return NULL;
> > +
> > +	/* Allocate a transaction descriptor. */
> > +	desc = xilinx_vdma_alloc_tx_descriptor(chan);
> > +	if (!desc)
> > +		return NULL;
> > +
> > +	dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
> > +	desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
> > +
> > +	/* Build transactions using information in the scatter gather list */
> > +	for_each_sg(sgl, sg, sg_len, i) {
> > +		sg_used = 0;
> > +
> > +		/* Loop until the entire scatterlist entry is used */
> > +		while (sg_used < sg_dma_len(sg)) {
> > +			struct xilinx_vdma_desc_hw *hw;
> > +
> > +			/* Get a free segment */
> > +			segment = xilinx_vdma_alloc_tx_segment(chan);
> > +			if (!segment)
> > +				goto error;
> > +
> > +			/*
> > +			 * Calculate the maximum number of bytes to transfer,
> > +			 * making sure it is less than the hw limit
> > +			 */
> > +			copy = min_t(size_t, sg_dma_len(sg) - sg_used,
> > +				     XILINX_DMA_MAX_TRANS_LEN);
> > +			hw = &segment->hw;
> > +
> > +			/* Fill in the descriptor */
> > +			hw->buf_addr = sg_dma_address(sg) + sg_used;
> > +
> > +			hw->control_stride = copy;
> > +
> > +			if (chan->direction == DMA_MEM_TO_DEV) {
> > +				if (app_w)
> > +					memcpy(hw->app, app_w, sizeof(u32)
> *
> > +					       XILINX_DMA_NUM_APP_WORDS);
> > +			}
> > +
> > +			if (prev)
> > +				prev->hw.next_desc = segment->phys;
> > +
> > +			prev = segment;
> > +			sg_used += copy;
> > +
> > +			/*
> > +			 * Insert the segment into the descriptor segments
> > +			 * list.
> > +			 */
> > +			list_add_tail(&segment->node, &desc->segments);
> > +		}
> > +	}
> > +
> > +	segment = list_first_entry(&desc->segments,
> > +				   struct xilinx_vdma_tx_segment, node);
> > +	desc->async_tx.phys = segment->phys;
> > +	prev->hw.next_desc = segment->phys;
> > +
> > +	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
> > +	if (chan->direction == DMA_MEM_TO_DEV) {
> > +		segment->hw.control_stride |= XILINX_DMA_BD_SOP;
> > +		segment = list_last_entry(&desc->segments,
> > +					  struct xilinx_vdma_tx_segment,
> > +					  node);
> > +		segment->hw.control_stride |= XILINX_DMA_BD_EOP;
> > +	}
> > +
> > +	return &desc->async_tx;
> > +
> > +error:
> > +	xilinx_vdma_free_tx_descriptor(chan, desc);
> > +	return NULL;
> > +}
> > +
> > +/**
> >   * xilinx_vdma_terminate_all - Halt the channel and free descriptors
> >   * @chan: Driver specific VDMA Channel pointer
> >   */
> > @@ -1179,27 +1445,36 @@ static int xilinx_vdma_chan_probe(struct
> > xilinx_vdma_device *xdev, chan->id = 0;
> >
> >  		chan->ctrl_offset = XILINX_VDMA_MM2S_CTRL_OFFSET;
> > -		chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
> > +		if (xdev->quirks & AXIVDMA_SUPPORT) {
> > +			chan->desc_offset =
> XILINX_VDMA_MM2S_DESC_OFFSET;
> >
> > -		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> > -		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
> > -			chan->flush_on_fsync = true;
> > +			if (xdev->flush_on_fsync ==
> XILINX_VDMA_FLUSH_BOTH ||
> > +			    xdev->flush_on_fsync ==
> XILINX_VDMA_FLUSH_MM2S)
> > +				chan->flush_on_fsync = true;
> > +		}
> >  	} else if (of_device_is_compatible(node,
> >  					    "xlnx,axi-vdma-s2mm-channel")) {
> >  		chan->direction = DMA_DEV_TO_MEM;
> >  		chan->id = 1;
> >
> >  		chan->ctrl_offset = XILINX_VDMA_S2MM_CTRL_OFFSET;
> > -		chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
> > +		if (xdev->quirks & AXIVDMA_SUPPORT) {
> > +			chan->desc_offset =
> XILINX_VDMA_S2MM_DESC_OFFSET;
> >
> > -		if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
> > -		    xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
> > -			chan->flush_on_fsync = true;
> > +			if (xdev->flush_on_fsync ==
> XILINX_VDMA_FLUSH_BOTH ||
> > +			    xdev->flush_on_fsync ==
> XILINX_VDMA_FLUSH_S2MM)
> > +				chan->flush_on_fsync = true;
> > +		}
> >  	} else {
> >  		dev_err(xdev->dev, "Invalid channel compatible node\n");
> >  		return -EINVAL;
> >  	}
> >
> > +	if (xdev->quirks & AXIVDMA_SUPPORT)
> > +		chan->start_transfer = xilinx_vdma_start_transfer;
> > +	else
> > +		chan->start_transfer = xilinx_dma_start_transfer;
> > +
> >  	/* Request the interrupt */
> >  	chan->irq = irq_of_parse_and_map(node, 0);
> >  	err = request_irq(chan->irq, xilinx_vdma_irq_handler, IRQF_SHARED,
> > @@ -1255,8 +1530,13 @@ static const struct xdma_platform_data
> xvdma_def = {
> >  	.quirks = AXIVDMA_SUPPORT,
> >  };
> >
> > +static const struct xdma_platform_data xdma_def = {
> > +	.quirks = AXIDMA_SUPPORT,
> > +};
> > +
> >  static const struct of_device_id xilinx_vdma_of_ids[] = {
> >  	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
> > +	{ .compatible = "xlnx,axi-dma-1.00.a", .data = &xdma_def},
> 
> Please keep the entries alphabetically sorted.

Ok Sure will fix in v2.

> 
> >  	{}
> >  };
> >  MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids); @@ -1300,16 +1580,22
> @@
> > static int xilinx_vdma_probe(struct platform_device
> > *pdev) /* Retrieve the DMA engine properties from the device tree */
> >  	xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
> >
> > -	err = of_property_read_u32(node, "xlnx,num-fstores", &num_frames);
> > -	if (err < 0) {
> > -		dev_err(xdev->dev, "missing xlnx,num-fstores property\n");
> > -		return err;
> > -	}
> > +	if ((xdev->quirks & AXIVDMA_SUPPORT)) {
> 
> Extra parenthesis. Furthermore, as every DMA IP implements one and only one
> type, please replace the _SUPPORT bitmask macros with an enum and test for
> equality.

Ok Sure will fix in v2.

> 
> >
> > -	err = of_property_read_u32(node, "xlnx,flush-fsync",
> > -					&xdev->flush_on_fsync);
> > -	if (err < 0)
> > -		dev_warn(xdev->dev, "missing xlnx,flush-fsync property\n");
> > +		err = of_property_read_u32(node, "xlnx,num-fstores",
> > +					   &num_frames);
> > +		if (err < 0) {
> > +			dev_err(xdev->dev,
> > +				"missing xlnx,num-fstores property\n");
> > +			return err;
> > +		}
> > +
> > +		err = of_property_read_u32(node, "xlnx,flush-fsync",
> > +						&xdev->flush_on_fsync);
> 
> Too many spaces, please align &xdev under node.

Ok Sure will fix in v2.

> 
> > +		if (err < 0)
> > +			dev_warn(xdev->dev,
> > +				 "missing xlnx,flush-fsync property\n");
> > +	}
> >
> >  	/* Initialize the DMA engine */
> >  	xdev->common.dev = &pdev->dev;
> > @@ -1322,11 +1608,20 @@ static int xilinx_vdma_probe(struct
> > platform_device
> > *pdev) xilinx_vdma_alloc_chan_resources;
> >  	xdev->common.device_free_chan_resources =
> >  				xilinx_vdma_free_chan_resources;
> > -	xdev->common.device_prep_interleaved_dma =
> > -				xilinx_vdma_dma_prep_interleaved;
> >  	xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
> >  	xdev->common.device_tx_status = xilinx_vdma_tx_status;
> > -	xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
> > +	if (xdev->quirks & AXIVDMA_SUPPORT) {
> > +		xdev->common.device_issue_pending =
> xilinx_vdma_issue_pending;
> > +		xdev->common.device_prep_interleaved_dma =
> > +				xilinx_vdma_dma_prep_interleaved;
> 
> Shouldn't the VDMA also set directions and residue granularity ? Please add that
> as an additional patch before this one.

VDMA won't support residue granularity.
Directions it requires will add one more patch before this one.

> 
> > +	} else {
> > +		xdev->common.device_prep_slave_sg =
> xilinx_dma_prep_slave_sg;
> > +		xdev->common.device_issue_pending =
> xilinx_dma_issue_pending;
> 
> I would use the same initialization order in both branches of the if, it will make
> the code easier to read.

Ok will fix in v2.

> 
> > +		xdev->common.directions = BIT(DMA_DEV_TO_MEM) |
> > +					  BIT(DMA_MEM_TO_DEV);
> 
> Shouldn't that depend on which channels have been synthesized (and thus
> declared in DT) ? The code to set the directions field can probably be made for
> all three DMA engine types.

Ok will fix in v2.

Regards,
Kedar.

> 
> > +		xdev->common.residue_granularity =
> > +
> DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +	}
> >
> >  	platform_set_drvdata(pdev, xdev);
> >
> > @@ -1337,9 +1632,11 @@ static int xilinx_vdma_probe(struct
> > platform_device
> > *pdev) goto error;
> >  	}
> >
> > -	for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
> > -		if (xdev->chan[i])
> > -			xdev->chan[i]->num_frms = num_frames;
> > +	if (xdev->quirks & AXIVDMA_SUPPORT) {
> > +		for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
> > +			if (xdev->chan[i])
> > +				xdev->chan[i]->num_frms = num_frames;
> > +	}
> >
> >  	/* Register the DMA engine with the core */
> >  	dma_async_device_register(&xdev->common);
> 
> --
> Regards,
> 
> Laurent Pinchart
> :

^ permalink raw reply	[flat|nested] 24+ messages in thread

* RE: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores
  2016-03-17  7:19   ` Laurent Pinchart
@ 2016-03-18 12:43     ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 24+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-03-18 12:43 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: dan.j.williams, vinod.koul, Michal Simek, Soren Brinkmann,
	moritz.fischer, luis, Anirudha Sarangi, dmaengine,
	linux-arm-kernel, linux-kernel

Hi Laurent Pinchart,

	Thanks for reviewing the patch.

> -----Original Message-----
> From: Laurent Pinchart [mailto:laurent.pinchart@ideasonboard.com]
> Sent: Thursday, March 17, 2016 12:49 PM
> To: Appana Durga Kedareswara Rao
> Cc: dan.j.williams@intel.com; vinod.koul@intel.com; Michal Simek; Soren
> Brinkmann; Appana Durga Kedareswara Rao; moritz.fischer@ettus.com;
> luis@debethencourt.com; Anirudha Sarangi; dmaengine@vger.kernel.org; linux-
> arm-kernel@lists.infradead.org; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to
> differentiate differnet IP cores
> 
> Hello,
> 
> On Tuesday 15 March 2016 22:53:07 Kedareswara rao Appana wrote:
> > This patch adds quirks support in the driver to differentiate
> > differnet IP
> 
> s/differnet/different/

Ok will modify In the V2.

> 
> (and in the subject line too)
> 
> With this series applied the driver will not be vdma-specific anymore. The
> xilinx_vdma_ prefix will thus be confusing. I'd bite the bullet and apply a global
> rename to functions that are not shared between the three DMA IP core types
> before patch 2/7.

Ok will modify the General API's that will be shared across the DMA's to the dma instead of vdma in the v2.

> 
> > cores.
> >
> > Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> > ---
> >  drivers/dma/xilinx/xilinx_vdma.c | 36
> > ++++++++++++++++++++++++++++++------
> >  1 file changed, 30 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_vdma.c
> > b/drivers/dma/xilinx/xilinx_vdma.c index 7ab6793..f682bef 100644
> > --- a/drivers/dma/xilinx/xilinx_vdma.c
> > +++ b/drivers/dma/xilinx/xilinx_vdma.c
> > @@ -139,6 +139,8 @@
> >  /* Delay loop counter to prevent hardware failure */
> >  #define XILINX_VDMA_LOOP_COUNT		1000000
> >
> > +#define AXIVDMA_SUPPORT		BIT(0)
> > +
> >  /**
> >   * struct xilinx_vdma_desc_hw - Hardware Descriptor
> >   * @next_desc: Next Descriptor Pointer @0x00 @@ -240,6 +242,7 @@
> > struct xilinx_vdma_chan {
> >   * @chan: Driver specific VDMA channel
> >   * @has_sg: Specifies whether Scatter-Gather is present or not
> >   * @flush_on_fsync: Flush on frame sync
> > + * @quirks: Needed for different IP cores
> >   */
> >  struct xilinx_vdma_device {
> >  	void __iomem *regs;
> > @@ -248,6 +251,15 @@ struct xilinx_vdma_device {
> >  	struct xilinx_vdma_chan
> *chan[XILINX_VDMA_MAX_CHANS_PER_DEVICE];
> >  	bool has_sg;
> >  	u32 flush_on_fsync;
> > +	u32 quirks;
> > +};
> > +
> > +/**
> > + * struct xdma_platform_data - DMA platform structure
> > + * @quirks: quirks for platform specific data.
> > + */
> > +struct xdma_platform_data {
> 
> This isn't platform data but device information, I'd rename the structure to
> xdma_device_info (and update the description accordingly).

Ok will modify in v2.

> 
> > +	u32 quirks;
> 
> I don't think you should call this field quirks as it stores the IP core type.
> Quirks usually refer to non-standard behaviour that requires specific handling in
> the driver, possibly in the form of work-arounds. I would thus call the field type
> and document it as "DMA IP core type". Similarly I'd rename AXIVDMA_SUPPORT
> to XDMA_TYPE_VDMA.

Sure will modify in the v2.

> 
> >  };
> >
> >  /* Macros */
> > @@ -1239,6 +1251,16 @@ static struct dma_chan
> > *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec, return
> > dma_get_slave_channel(&xdev->chan[chan_id]->common);
> >  }
> >
> > +static const struct xdma_platform_data xvdma_def = {
> > +	.quirks = AXIVDMA_SUPPORT,
> > +};
> > +
> > +static const struct of_device_id xilinx_vdma_of_ids[] = {
> > +	{ .compatible = "xlnx,axi-vdma-1.00.a", .data = &xvdma_def},
> 
> Missing space before }, did you run checkpatch ?

Yes I ran check patch it didn't troughed any warning ok will fix in v2.
 
> 
> As the xdma_platform_data structure contains a single integer, you could store
> it in the .data field directly.

Ok will fix in v2.

> 
> > +	{}
> > +};
> > +MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
> > +
> >  /**
> >   * xilinx_vdma_probe - Driver probe function
> >   * @pdev: Pointer to the platform_device structure @@ -1251,6 +1273,7
> > @@ static int xilinx_vdma_probe(struct platform_device
> > *pdev) struct xilinx_vdma_device *xdev;
> >  	struct device_node *child;
> >  	struct resource *io;
> > +	const struct of_device_id *match;
> >  	u32 num_frames;
> >  	int i, err;
> >
> > @@ -1259,6 +1282,13 @@ static int xilinx_vdma_probe(struct
> > platform_device
> > *pdev) if (!xdev)
> >  		return -ENOMEM;
> >
> > +	match = of_match_node(xilinx_vdma_of_ids, pdev->dev.of_node);
> 
> You can use of_device_get_match_data() to get the data directly.

Ok will fix in v2

> 
> > +	if (match && match->data) {
> 
> I don't think this condition can ever be false.

OK will fix in v2.

Regards,
Kedar.

> 
> > +		const struct xdma_platform_data *data = match->data;
> > +
> > +		xdev->quirks = data->quirks;
> > +	}
> > +
> >  	xdev->dev = &pdev->dev;
> >
> >  	/* Request and map I/O memory */
> > @@ -1356,12 +1386,6 @@ static int xilinx_vdma_remove(struct
> > platform_device
> > *pdev) return 0;
> >  }
> >
> > -static const struct of_device_id xilinx_vdma_of_ids[] = {
> > -	{ .compatible = "xlnx,axi-vdma-1.00.a",},
> > -	{}
> > -};
> > -MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
> > -
> >  static struct platform_driver xilinx_vdma_driver = {
> >  	.driver = {
> >  		.name = "xilinx-vdma",
> 
> --
> Regards,
> 
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2016-03-18 12:43 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-15 17:23 [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Kedareswara rao Appana
2016-03-15 17:23 ` [PATCH 1/7] dmaengine: xilinx_vdma: Fix checkpatch.pl warnings Kedareswara rao Appana
2016-03-15 17:50   ` Moritz Fischer
2016-03-17  7:00   ` Laurent Pinchart
2016-03-15 17:23 ` [PATCH 2/7] dmaengine: xilinx_vdma: Add quirks support to differentiate differnet IP cores Kedareswara rao Appana
2016-03-16  3:13   ` Vinod Koul
2016-03-16  6:12     ` Appana Durga Kedareswara Rao
2016-03-17  7:19   ` Laurent Pinchart
2016-03-18 12:43     ` Appana Durga Kedareswara Rao
2016-03-15 17:23 ` [PATCH 3/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Direct Memory Access Engine Kedareswara rao Appana
2016-03-17  8:05   ` Laurent Pinchart
2016-03-18 12:43     ` Appana Durga Kedareswara Rao
2016-03-15 17:23 ` [PATCH 4/7] " Kedareswara rao Appana
2016-03-16  3:17   ` Vinod Koul
2016-03-16  6:19     ` Appana Durga Kedareswara Rao
2016-03-15 17:23 ` [PATCH 5/7] dmaengine: xilinx_vdma: Remove unnecessary axi dma device-tree binding doc Kedareswara rao Appana
2016-03-16  1:34   ` Moritz Fischer
2016-03-16  6:08     ` Appana Durga Kedareswara Rao
2016-03-15 17:23 ` [PATCH 6/7] dmaengine: xilinx_vdma: Add Support for Xilinx AXI Central Direct Memory Access Engine Kedareswara rao Appana
2016-03-15 17:23 ` [PATCH 7/7] " Kedareswara rao Appana
2016-03-16  1:29 ` [PATCH 0/7] dmaengine: xilinx_vdma: AXI DMA's enhancments Moritz Fischer
2016-03-16  3:11   ` Vinod Koul
2016-03-16  6:13     ` Appana Durga Kedareswara Rao
2016-03-16  6:06   ` Appana Durga Kedareswara Rao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).