All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/3] dmaengine: xilinx_dma: Bug fixes
@ 2016-12-15 15:11 ` Kedareswara rao Appana
  0 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, svemula,
	anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

This patch series fixes below bugs in DMA and VDMA IP's
---> Do not start VDMA until frame buffer is processed by the h/w
---> Fix bug in Multi frame sotres handling in VDMA
---> Fix issues w.r.to multi frame descriptors submit with AXI DMA S2MM(recv) Side.


Kedareswara rao Appana (3):
  dmaengine: xilinx_dma: Check for channel idle state before submitting
    dma descriptor
  dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in
    vdma
  dmaengine: xilinx_dma: Fix race condition in the driver for multiple
    descriptor scenario

 drivers/dma/xilinx/xilinx_dma.c | 185 +++++++++++++++++++++++++---------------
 1 file changed, 118 insertions(+), 67 deletions(-)

-- 
2.1.2

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 0/3] dmaengine: xilinx_dma: Bug fixes
@ 2016-12-15 15:11 ` Kedareswara rao Appana
  0 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

This patch series fixes below bugs in DMA and VDMA IP's
---> Do not start VDMA until frame buffer is processed by the h/w
---> Fix bug in Multi frame sotres handling in VDMA
---> Fix issues w.r.to multi frame descriptors submit with AXI DMA S2MM(recv) Side.


Kedareswara rao Appana (3):
  dmaengine: xilinx_dma: Check for channel idle state before submitting
    dma descriptor
  dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in
    vdma
  dmaengine: xilinx_dma: Fix race condition in the driver for multiple
    descriptor scenario

 drivers/dma/xilinx/xilinx_dma.c | 185 +++++++++++++++++++++++++---------------
 1 file changed, 118 insertions(+), 67 deletions(-)

-- 
2.1.2

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-15 15:11 ` Kedareswara rao Appana
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  -1 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, svemula,
	anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Add channel idle state to ensure that dma descriptor is not
submitted when VDMA engine is in progress.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 8288fe4..736c2a3 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
  * @cyclic: Check for cyclic transfers.
  * @genlock: Support genlock mode
  * @err: Channel has errors
+ * @idle: Check for channel idle
  * @tasklet: Cleanup work after irq
  * @config: Device configuration info
  * @flush_on_fsync: Flush on Frame sync
@@ -351,6 +352,7 @@ struct xilinx_dma_chan {
 	bool cyclic;
 	bool genlock;
 	bool err;
+	bool idle;
 	struct tasklet_struct tasklet;
 	struct xilinx_vdma_config config;
 	bool flush_on_fsync;
@@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
 			chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
 		chan->err = true;
 	}
+	chan->idle = true;
 }
 
 /**
@@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 	if (chan->err)
 		return;
 
+	if (!chan->idle)
+		return;
+
 	if (list_empty(&chan->pending_list))
 		return;
 
@@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
 	}
 
+	chan->idle = false;
 	if (!chan->has_sg) {
 		list_del(&desc->node);
 		list_add_tail(&desc->node, &chan->active_list);
@@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
 	if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
 		spin_lock(&chan->lock);
 		xilinx_dma_complete_descriptor(chan);
+		chan->idle = true;
 		chan->start_transfer(chan);
 		spin_unlock(&chan->lock);
 	}
@@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 	chan->has_sg = xdev->has_sg;
 	chan->desc_pendingcount = 0x0;
 	chan->ext_addr = xdev->ext_addr;
+	chan->idle = true;
 
 	spin_lock_init(&chan->lock);
 	INIT_LIST_HEAD(&chan->pending_list);
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  0 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

Add channel idle state to ensure that dma descriptor is not
submitted when VDMA engine is in progress.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 8288fe4..736c2a3 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
  * @cyclic: Check for cyclic transfers.
  * @genlock: Support genlock mode
  * @err: Channel has errors
+ * @idle: Check for channel idle
  * @tasklet: Cleanup work after irq
  * @config: Device configuration info
  * @flush_on_fsync: Flush on Frame sync
@@ -351,6 +352,7 @@ struct xilinx_dma_chan {
 	bool cyclic;
 	bool genlock;
 	bool err;
+	bool idle;
 	struct tasklet_struct tasklet;
 	struct xilinx_vdma_config config;
 	bool flush_on_fsync;
@@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
 			chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
 		chan->err = true;
 	}
+	chan->idle = true;
 }
 
 /**
@@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 	if (chan->err)
 		return;
 
+	if (!chan->idle)
+		return;
+
 	if (list_empty(&chan->pending_list))
 		return;
 
@@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
 	}
 
+	chan->idle = false;
 	if (!chan->has_sg) {
 		list_del(&desc->node);
 		list_add_tail(&desc->node, &chan->active_list);
@@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
 	if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
 		spin_lock(&chan->lock);
 		xilinx_dma_complete_descriptor(chan);
+		chan->idle = true;
 		chan->start_transfer(chan);
 		spin_unlock(&chan->lock);
 	}
@@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 	chan->has_sg = xdev->has_sg;
 	chan->desc_pendingcount = 0x0;
 	chan->ext_addr = xdev->ext_addr;
+	chan->idle = true;
 
 	spin_lock_init(&chan->lock);
 	INIT_LIST_HEAD(&chan->pending_list);
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-15 15:11 ` Kedareswara rao Appana
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  -1 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, svemula,
	anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

When VDMA is configured for more than one frame in the h/w
for example h/w is configured for n number of frames and user
Submits n number of frames and triggered the DMA using issue_pending API.
In the current driver flow we are submitting one frame at a time
but we should submit all the n number of frames at one time as the h/w
Is configured for n number of frames.

This patch fixes this issue.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 43 +++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 736c2a3..4f3fa94 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 				tail_segment->phys);
 	} else {
 		struct xilinx_vdma_tx_segment *segment, *last = NULL;
-		int i = 0;
+		int i = 0, j = 0;
 
 		if (chan->desc_submitcount < chan->num_frms)
 			i = chan->desc_submitcount;
 
-		list_for_each_entry(segment, &desc->segments, node) {
-			if (chan->ext_addr)
-				vdma_desc_write_64(chan,
-					XILINX_VDMA_REG_START_ADDRESS_64(i++),
-					segment->hw.buf_addr,
-					segment->hw.buf_addr_msb);
-			else
-				vdma_desc_write(chan,
-					XILINX_VDMA_REG_START_ADDRESS(i++),
-					segment->hw.buf_addr);
-
-			last = segment;
+		for (j = 0; j < chan->num_frms; ) {
+			list_for_each_entry(segment, &desc->segments, node) {
+				if (chan->ext_addr)
+					vdma_desc_write_64(chan,
+					  XILINX_VDMA_REG_START_ADDRESS_64(i++),
+					  segment->hw.buf_addr,
+					  segment->hw.buf_addr_msb);
+				else
+					vdma_desc_write(chan,
+					    XILINX_VDMA_REG_START_ADDRESS(i++),
+					    segment->hw.buf_addr);
+
+				last = segment;
+			}
+			list_del(&desc->node);
+			list_add_tail(&desc->node, &chan->active_list);
+			j++;
+			if (list_empty(&chan->pending_list))
+				break;
+			desc = list_first_entry(&chan->pending_list,
+						struct xilinx_dma_tx_descriptor,
+						node);
 		}
 
 		if (!last)
@@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
 				last->hw.stride);
 		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
+
+		chan->desc_submitcount += j;
+		chan->desc_pendingcount -= j;
 	}
 
 	chan->idle = false;
 	if (!chan->has_sg) {
-		list_del(&desc->node);
-		list_add_tail(&desc->node, &chan->active_list);
-		chan->desc_submitcount++;
-		chan->desc_pendingcount--;
 		if (chan->desc_submitcount == chan->num_frms)
 			chan->desc_submitcount = 0;
 	} else {
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  0 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

When VDMA is configured for more than one frame in the h/w
for example h/w is configured for n number of frames and user
Submits n number of frames and triggered the DMA using issue_pending API.
In the current driver flow we are submitting one frame at a time
but we should submit all the n number of frames at one time as the h/w
Is configured for n number of frames.

This patch fixes this issue.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 43 +++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 736c2a3..4f3fa94 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 				tail_segment->phys);
 	} else {
 		struct xilinx_vdma_tx_segment *segment, *last = NULL;
-		int i = 0;
+		int i = 0, j = 0;
 
 		if (chan->desc_submitcount < chan->num_frms)
 			i = chan->desc_submitcount;
 
-		list_for_each_entry(segment, &desc->segments, node) {
-			if (chan->ext_addr)
-				vdma_desc_write_64(chan,
-					XILINX_VDMA_REG_START_ADDRESS_64(i++),
-					segment->hw.buf_addr,
-					segment->hw.buf_addr_msb);
-			else
-				vdma_desc_write(chan,
-					XILINX_VDMA_REG_START_ADDRESS(i++),
-					segment->hw.buf_addr);
-
-			last = segment;
+		for (j = 0; j < chan->num_frms; ) {
+			list_for_each_entry(segment, &desc->segments, node) {
+				if (chan->ext_addr)
+					vdma_desc_write_64(chan,
+					  XILINX_VDMA_REG_START_ADDRESS_64(i++),
+					  segment->hw.buf_addr,
+					  segment->hw.buf_addr_msb);
+				else
+					vdma_desc_write(chan,
+					    XILINX_VDMA_REG_START_ADDRESS(i++),
+					    segment->hw.buf_addr);
+
+				last = segment;
+			}
+			list_del(&desc->node);
+			list_add_tail(&desc->node, &chan->active_list);
+			j++;
+			if (list_empty(&chan->pending_list))
+				break;
+			desc = list_first_entry(&chan->pending_list,
+						struct xilinx_dma_tx_descriptor,
+						node);
 		}
 
 		if (!last)
@@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
 				last->hw.stride);
 		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
+
+		chan->desc_submitcount += j;
+		chan->desc_pendingcount -= j;
 	}
 
 	chan->idle = false;
 	if (!chan->has_sg) {
-		list_del(&desc->node);
-		list_add_tail(&desc->node, &chan->active_list);
-		chan->desc_submitcount++;
-		chan->desc_pendingcount--;
 		if (chan->desc_submitcount == chan->num_frms)
 			chan->desc_submitcount = 0;
 	} else {
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario
  2016-12-15 15:11 ` Kedareswara rao Appana
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  -1 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, laurent.pinchart, luis, svemula,
	anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

When driver is handling AXI DMA SoftIP
When user submits multiple descriptors back to back on the S2MM(recv)
side with the current driver flow the last buffer descriptor next bd
points to a invalid location resulting the invalid data or errors in the
DMA engine.

This patch fixes this issue by creating a BD Chain during
channel allocation itself and use those BD's.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 133 +++++++++++++++++++++++++---------------
 1 file changed, 83 insertions(+), 50 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 4f3fa94..8a4cb56 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -163,6 +163,7 @@
 #define XILINX_DMA_BD_SOP		BIT(27)
 #define XILINX_DMA_BD_EOP		BIT(26)
 #define XILINX_DMA_COALESCE_MAX		255
+#define XILINX_DMA_NUM_DESCS		255
 #define XILINX_DMA_NUM_APP_WORDS	5
 
 /* Multi-Channel DMA Descriptor offsets*/
@@ -310,6 +311,7 @@ struct xilinx_dma_tx_descriptor {
  * @pending_list: Descriptors waiting
  * @active_list: Descriptors ready to submit
  * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
  * @common: DMA common channel
  * @desc_pool: Descriptors pool
  * @dev: The dma device
@@ -330,7 +332,9 @@ struct xilinx_dma_tx_descriptor {
  * @desc_submitcount: Descriptor h/w submitted count
  * @residue: Residue for AXI DMA
  * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
  * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
+ * @cyclic_seg_p: Physical allocated segments base for cyclic dma
  * @start_transfer: Differentiate b/w DMA IP's transfer
  */
 struct xilinx_dma_chan {
@@ -341,6 +345,7 @@ struct xilinx_dma_chan {
 	struct list_head pending_list;
 	struct list_head active_list;
 	struct list_head done_list;
+	struct list_head free_seg_list;
 	struct dma_chan common;
 	struct dma_pool *desc_pool;
 	struct device *dev;
@@ -361,7 +366,9 @@ struct xilinx_dma_chan {
 	u32 desc_submitcount;
 	u32 residue;
 	struct xilinx_axidma_tx_segment *seg_v;
+	dma_addr_t seg_p;
 	struct xilinx_axidma_tx_segment *cyclic_seg_v;
+	dma_addr_t cyclic_seg_p;
 	void (*start_transfer)(struct xilinx_dma_chan *chan);
 	u16 tdest;
 };
@@ -567,17 +574,31 @@ static struct xilinx_axidma_tx_segment *
 xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_axidma_tx_segment *segment;
-	dma_addr_t phys;
-
-	segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
-	if (!segment)
-		return NULL;
+	unsigned long flags;
 
-	segment->phys = phys;
+	spin_lock_irqsave(&chan->lock, flags);
+	if (!list_empty(&chan->free_seg_list)) {
+		segment = list_first_entry(&chan->free_seg_list,
+					   struct xilinx_axidma_tx_segment,
+					   node);
+		list_del(&segment->node);
+	}
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return segment;
 }
 
+static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
+{
+	u32 next_desc = hw->next_desc;
+	u32 next_desc_msb = hw->next_desc_msb;
+
+	memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
+
+	hw->next_desc = next_desc;
+	hw->next_desc_msb = next_desc_msb;
+}
+
 /**
  * xilinx_dma_free_tx_segment - Free transaction segment
  * @chan: Driver specific DMA channel
@@ -586,7 +607,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
 				struct xilinx_axidma_tx_segment *segment)
 {
-	dma_pool_free(chan->desc_pool, segment, segment->phys);
+	xilinx_dma_clean_hw_desc(&segment->hw);
+
+	list_add_tail(&segment->node, &chan->free_seg_list);
 }
 
 /**
@@ -711,16 +734,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
 
 	dev_dbg(chan->dev, "Free all channel resources.\n");
 
 	xilinx_dma_free_descriptors(chan);
+
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
-		xilinx_dma_free_tx_segment(chan, chan->seg_v);
+		spin_lock_irqsave(&chan->lock, flags);
+		INIT_LIST_HEAD(&chan->free_seg_list);
+		spin_unlock_irqrestore(&chan->lock, flags);
+
+		/* Free Memory that is allocated for cyclic DMA Mode */
+		dma_free_coherent(chan->dev, sizeof(*chan->cyclic_seg_v),
+				  chan->cyclic_seg_v, chan->cyclic_seg_p);
+	}
+
+	if (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA) {
+		dma_pool_destroy(chan->desc_pool);
+		chan->desc_pool = NULL;
 	}
-	dma_pool_destroy(chan->desc_pool);
-	chan->desc_pool = NULL;
 }
 
 /**
@@ -803,6 +836,7 @@ static void xilinx_dma_do_tasklet(unsigned long data)
 static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	int i;
 
 	/* Has this channel already been allocated? */
 	if (chan->desc_pool)
@@ -813,11 +847,30 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 	 * for meeting Xilinx VDMA specification requirement.
 	 */
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		chan->desc_pool = dma_pool_create("xilinx_dma_desc_pool",
-				   chan->dev,
-				   sizeof(struct xilinx_axidma_tx_segment),
-				   __alignof__(struct xilinx_axidma_tx_segment),
-				   0);
+		/* Allocate the buffer descriptors. */
+		chan->seg_v = dma_zalloc_coherent(chan->dev,
+						  sizeof(*chan->seg_v) *
+						  XILINX_DMA_NUM_DESCS,
+						  &chan->seg_p, GFP_KERNEL);
+		if (!chan->seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate channel %d descriptors\n",
+				chan->id);
+			return -ENOMEM;
+		}
+
+		for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+			chan->seg_v[i].hw.next_desc =
+			lower_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].hw.next_desc_msb =
+			upper_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].phys = chan->seg_p +
+				sizeof(*chan->seg_v) * i;
+			list_add_tail(&chan->seg_v[i].node,
+				      &chan->free_seg_list);
+		}
 	} else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
 		chan->desc_pool = dma_pool_create("xilinx_cdma_desc_pool",
 				   chan->dev,
@@ -832,7 +885,8 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 				     0);
 	}
 
-	if (!chan->desc_pool) {
+	if (!chan->desc_pool &&
+	    (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA)) {
 		dev_err(chan->dev,
 			"unable to allocate channel %d descriptor pool\n",
 			chan->id);
@@ -841,22 +895,20 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
 		/*
-		 * For AXI DMA case after submitting a pending_list, keep
-		 * an extra segment allocated so that the "next descriptor"
-		 * pointer on the tail descriptor always points to a
-		 * valid descriptor, even when paused after reaching taildesc.
-		 * This way, it is possible to issue additional
-		 * transfers without halting and restarting the channel.
-		 */
-		chan->seg_v = xilinx_axidma_alloc_tx_segment(chan);
-
-		/*
 		 * For cyclic DMA mode we need to program the tail Descriptor
 		 * register with a value which is not a part of the BD chain
 		 * so allocating a desc segment during channel allocation for
 		 * programming tail descriptor.
 		 */
-		chan->cyclic_seg_v = xilinx_axidma_alloc_tx_segment(chan);
+		chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
+					sizeof(*chan->cyclic_seg_v),
+					&chan->cyclic_seg_p, GFP_KERNEL);
+		if (!chan->cyclic_seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate desc segment for cyclic DMA\n");
+			return -ENOMEM;
+		}
+		chan->cyclic_seg_v->phys = chan->cyclic_seg_p;
 	}
 
 	dma_cookie_init(dchan);
@@ -1206,7 +1258,7 @@ static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)
 static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
-	struct xilinx_axidma_tx_segment *tail_segment, *old_head, *new_head;
+	struct xilinx_axidma_tx_segment *tail_segment;
 	u32 reg;
 
 	if (chan->err)
@@ -1229,21 +1281,6 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 	tail_segment = list_last_entry(&tail_desc->segments,
 				       struct xilinx_axidma_tx_segment, node);
 
-	if (chan->has_sg && !chan->xdev->mcdma) {
-		old_head = list_first_entry(&head_desc->segments,
-					struct xilinx_axidma_tx_segment, node);
-		new_head = chan->seg_v;
-		/* Copy Buffer Descriptor fields. */
-		new_head->hw = old_head->hw;
-
-		/* Swap and save new reserve */
-		list_replace_init(&old_head->node, &new_head->node);
-		chan->seg_v = old_head;
-
-		tail_segment->hw.next_desc = chan->seg_v->phys;
-		head_desc->async_tx.phys = new_head->phys;
-	}
-
 	reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
 
 	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
@@ -1738,7 +1775,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
 	struct xilinx_dma_tx_descriptor *desc;
-	struct xilinx_axidma_tx_segment *segment = NULL, *prev = NULL;
+	struct xilinx_axidma_tx_segment *segment = NULL;
 	u32 *app_w = (u32 *)context;
 	struct scatterlist *sg;
 	size_t copy;
@@ -1789,10 +1826,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 					       XILINX_DMA_NUM_APP_WORDS);
 			}
 
-			if (prev)
-				prev->hw.next_desc = segment->phys;
-
-			prev = segment;
 			sg_used += copy;
 
 			/*
@@ -1806,7 +1839,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 	segment = list_first_entry(&desc->segments,
 				   struct xilinx_axidma_tx_segment, node);
 	desc->async_tx.phys = segment->phys;
-	prev->hw.next_desc = segment->phys;
 
 	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
 	if (chan->direction == DMA_MEM_TO_DEV) {
@@ -2350,6 +2382,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 	INIT_LIST_HEAD(&chan->pending_list);
 	INIT_LIST_HEAD(&chan->done_list);
 	INIT_LIST_HEAD(&chan->active_list);
+	INIT_LIST_HEAD(&chan->free_seg_list);
 
 	/* Retrieve the channel properties from the device tree */
 	has_dre = of_property_read_bool(node, "xlnx,include-dre");
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario
@ 2016-12-15 15:11   ` Kedareswara rao Appana
  0 siblings, 0 replies; 32+ messages in thread
From: Kedareswara rao Appana @ 2016-12-15 15:11 UTC (permalink / raw)
  To: linux-arm-kernel

When driver is handling AXI DMA SoftIP
When user submits multiple descriptors back to back on the S2MM(recv)
side with the current driver flow the last buffer descriptor next bd
points to a invalid location resulting the invalid data or errors in the
DMA engine.

This patch fixes this issue by creating a BD Chain during
channel allocation itself and use those BD's.

Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
---
 drivers/dma/xilinx/xilinx_dma.c | 133 +++++++++++++++++++++++++---------------
 1 file changed, 83 insertions(+), 50 deletions(-)

diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 4f3fa94..8a4cb56 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -163,6 +163,7 @@
 #define XILINX_DMA_BD_SOP		BIT(27)
 #define XILINX_DMA_BD_EOP		BIT(26)
 #define XILINX_DMA_COALESCE_MAX		255
+#define XILINX_DMA_NUM_DESCS		255
 #define XILINX_DMA_NUM_APP_WORDS	5
 
 /* Multi-Channel DMA Descriptor offsets*/
@@ -310,6 +311,7 @@ struct xilinx_dma_tx_descriptor {
  * @pending_list: Descriptors waiting
  * @active_list: Descriptors ready to submit
  * @done_list: Complete descriptors
+ * @free_seg_list: Free descriptors
  * @common: DMA common channel
  * @desc_pool: Descriptors pool
  * @dev: The dma device
@@ -330,7 +332,9 @@ struct xilinx_dma_tx_descriptor {
  * @desc_submitcount: Descriptor h/w submitted count
  * @residue: Residue for AXI DMA
  * @seg_v: Statically allocated segments base
+ * @seg_p: Physical allocated segments base
  * @cyclic_seg_v: Statically allocated segment base for cyclic transfers
+ * @cyclic_seg_p: Physical allocated segments base for cyclic dma
  * @start_transfer: Differentiate b/w DMA IP's transfer
  */
 struct xilinx_dma_chan {
@@ -341,6 +345,7 @@ struct xilinx_dma_chan {
 	struct list_head pending_list;
 	struct list_head active_list;
 	struct list_head done_list;
+	struct list_head free_seg_list;
 	struct dma_chan common;
 	struct dma_pool *desc_pool;
 	struct device *dev;
@@ -361,7 +366,9 @@ struct xilinx_dma_chan {
 	u32 desc_submitcount;
 	u32 residue;
 	struct xilinx_axidma_tx_segment *seg_v;
+	dma_addr_t seg_p;
 	struct xilinx_axidma_tx_segment *cyclic_seg_v;
+	dma_addr_t cyclic_seg_p;
 	void (*start_transfer)(struct xilinx_dma_chan *chan);
 	u16 tdest;
 };
@@ -567,17 +574,31 @@ static struct xilinx_axidma_tx_segment *
 xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_axidma_tx_segment *segment;
-	dma_addr_t phys;
-
-	segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
-	if (!segment)
-		return NULL;
+	unsigned long flags;
 
-	segment->phys = phys;
+	spin_lock_irqsave(&chan->lock, flags);
+	if (!list_empty(&chan->free_seg_list)) {
+		segment = list_first_entry(&chan->free_seg_list,
+					   struct xilinx_axidma_tx_segment,
+					   node);
+		list_del(&segment->node);
+	}
+	spin_unlock_irqrestore(&chan->lock, flags);
 
 	return segment;
 }
 
+static void xilinx_dma_clean_hw_desc(struct xilinx_axidma_desc_hw *hw)
+{
+	u32 next_desc = hw->next_desc;
+	u32 next_desc_msb = hw->next_desc_msb;
+
+	memset(hw, 0, sizeof(struct xilinx_axidma_desc_hw));
+
+	hw->next_desc = next_desc;
+	hw->next_desc_msb = next_desc_msb;
+}
+
 /**
  * xilinx_dma_free_tx_segment - Free transaction segment
  * @chan: Driver specific DMA channel
@@ -586,7 +607,9 @@ xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
 				struct xilinx_axidma_tx_segment *segment)
 {
-	dma_pool_free(chan->desc_pool, segment, segment->phys);
+	xilinx_dma_clean_hw_desc(&segment->hw);
+
+	list_add_tail(&segment->node, &chan->free_seg_list);
 }
 
 /**
@@ -711,16 +734,26 @@ static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
 static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	unsigned long flags;
 
 	dev_dbg(chan->dev, "Free all channel resources.\n");
 
 	xilinx_dma_free_descriptors(chan);
+
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		xilinx_dma_free_tx_segment(chan, chan->cyclic_seg_v);
-		xilinx_dma_free_tx_segment(chan, chan->seg_v);
+		spin_lock_irqsave(&chan->lock, flags);
+		INIT_LIST_HEAD(&chan->free_seg_list);
+		spin_unlock_irqrestore(&chan->lock, flags);
+
+		/* Free Memory that is allocated for cyclic DMA Mode */
+		dma_free_coherent(chan->dev, sizeof(*chan->cyclic_seg_v),
+				  chan->cyclic_seg_v, chan->cyclic_seg_p);
+	}
+
+	if (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA) {
+		dma_pool_destroy(chan->desc_pool);
+		chan->desc_pool = NULL;
 	}
-	dma_pool_destroy(chan->desc_pool);
-	chan->desc_pool = NULL;
 }
 
 /**
@@ -803,6 +836,7 @@ static void xilinx_dma_do_tasklet(unsigned long data)
 static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+	int i;
 
 	/* Has this channel already been allocated? */
 	if (chan->desc_pool)
@@ -813,11 +847,30 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 	 * for meeting Xilinx VDMA specification requirement.
 	 */
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		chan->desc_pool = dma_pool_create("xilinx_dma_desc_pool",
-				   chan->dev,
-				   sizeof(struct xilinx_axidma_tx_segment),
-				   __alignof__(struct xilinx_axidma_tx_segment),
-				   0);
+		/* Allocate the buffer descriptors. */
+		chan->seg_v = dma_zalloc_coherent(chan->dev,
+						  sizeof(*chan->seg_v) *
+						  XILINX_DMA_NUM_DESCS,
+						  &chan->seg_p, GFP_KERNEL);
+		if (!chan->seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate channel %d descriptors\n",
+				chan->id);
+			return -ENOMEM;
+		}
+
+		for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
+			chan->seg_v[i].hw.next_desc =
+			lower_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].hw.next_desc_msb =
+			upper_32_bits(chan->seg_p + sizeof(*chan->seg_v) *
+				((i + 1) % XILINX_DMA_NUM_DESCS));
+			chan->seg_v[i].phys = chan->seg_p +
+				sizeof(*chan->seg_v) * i;
+			list_add_tail(&chan->seg_v[i].node,
+				      &chan->free_seg_list);
+		}
 	} else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
 		chan->desc_pool = dma_pool_create("xilinx_cdma_desc_pool",
 				   chan->dev,
@@ -832,7 +885,8 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 				     0);
 	}
 
-	if (!chan->desc_pool) {
+	if (!chan->desc_pool &&
+	    (chan->xdev->dma_config->dmatype != XDMA_TYPE_AXIDMA)) {
 		dev_err(chan->dev,
 			"unable to allocate channel %d descriptor pool\n",
 			chan->id);
@@ -841,22 +895,20 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
 
 	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
 		/*
-		 * For AXI DMA case after submitting a pending_list, keep
-		 * an extra segment allocated so that the "next descriptor"
-		 * pointer on the tail descriptor always points to a
-		 * valid descriptor, even when paused after reaching taildesc.
-		 * This way, it is possible to issue additional
-		 * transfers without halting and restarting the channel.
-		 */
-		chan->seg_v = xilinx_axidma_alloc_tx_segment(chan);
-
-		/*
 		 * For cyclic DMA mode we need to program the tail Descriptor
 		 * register with a value which is not a part of the BD chain
 		 * so allocating a desc segment during channel allocation for
 		 * programming tail descriptor.
 		 */
-		chan->cyclic_seg_v = xilinx_axidma_alloc_tx_segment(chan);
+		chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
+					sizeof(*chan->cyclic_seg_v),
+					&chan->cyclic_seg_p, GFP_KERNEL);
+		if (!chan->cyclic_seg_v) {
+			dev_err(chan->dev,
+				"unable to allocate desc segment for cyclic DMA\n");
+			return -ENOMEM;
+		}
+		chan->cyclic_seg_v->phys = chan->cyclic_seg_p;
 	}
 
 	dma_cookie_init(dchan);
@@ -1206,7 +1258,7 @@ static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)
 static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 {
 	struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
-	struct xilinx_axidma_tx_segment *tail_segment, *old_head, *new_head;
+	struct xilinx_axidma_tx_segment *tail_segment;
 	u32 reg;
 
 	if (chan->err)
@@ -1229,21 +1281,6 @@ static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
 	tail_segment = list_last_entry(&tail_desc->segments,
 				       struct xilinx_axidma_tx_segment, node);
 
-	if (chan->has_sg && !chan->xdev->mcdma) {
-		old_head = list_first_entry(&head_desc->segments,
-					struct xilinx_axidma_tx_segment, node);
-		new_head = chan->seg_v;
-		/* Copy Buffer Descriptor fields. */
-		new_head->hw = old_head->hw;
-
-		/* Swap and save new reserve */
-		list_replace_init(&old_head->node, &new_head->node);
-		chan->seg_v = old_head;
-
-		tail_segment->hw.next_desc = chan->seg_v->phys;
-		head_desc->async_tx.phys = new_head->phys;
-	}
-
 	reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
 
 	if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
@@ -1738,7 +1775,7 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 {
 	struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
 	struct xilinx_dma_tx_descriptor *desc;
-	struct xilinx_axidma_tx_segment *segment = NULL, *prev = NULL;
+	struct xilinx_axidma_tx_segment *segment = NULL;
 	u32 *app_w = (u32 *)context;
 	struct scatterlist *sg;
 	size_t copy;
@@ -1789,10 +1826,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 					       XILINX_DMA_NUM_APP_WORDS);
 			}
 
-			if (prev)
-				prev->hw.next_desc = segment->phys;
-
-			prev = segment;
 			sg_used += copy;
 
 			/*
@@ -1806,7 +1839,6 @@ static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
 	segment = list_first_entry(&desc->segments,
 				   struct xilinx_axidma_tx_segment, node);
 	desc->async_tx.phys = segment->phys;
-	prev->hw.next_desc = segment->phys;
 
 	/* For the last DMA_MEM_TO_DEV transfer, set EOP */
 	if (chan->direction == DMA_MEM_TO_DEV) {
@@ -2350,6 +2382,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 	INIT_LIST_HEAD(&chan->pending_list);
 	INIT_LIST_HEAD(&chan->done_list);
 	INIT_LIST_HEAD(&chan->active_list);
+	INIT_LIST_HEAD(&chan->free_seg_list);
 
 	/* Retrieve the channel properties from the device tree */
 	has_dre = of_property_read_bool(node, "xlnx,include-dre");
-- 
2.1.2

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-15 15:11   ` Kedareswara rao Appana
@ 2016-12-15 15:45     ` Jose Abreu
  -1 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-15 15:45 UTC (permalink / raw)
  To: Kedareswara rao Appana, dan.j.williams, vinod.koul, michal.simek,
	soren.brinkmann, appanad, moritz.fischer, laurent.pinchart, luis,
	svemula, anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Kedar,


On 15-12-2016 15:11, Kedareswara rao Appana wrote:
> Add channel idle state to ensure that dma descriptor is not
> submitted when VDMA engine is in progress.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 8288fe4..736c2a3 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
>   * @cyclic: Check for cyclic transfers.
>   * @genlock: Support genlock mode
>   * @err: Channel has errors
> + * @idle: Check for channel idle
>   * @tasklet: Cleanup work after irq
>   * @config: Device configuration info
>   * @flush_on_fsync: Flush on Frame sync
> @@ -351,6 +352,7 @@ struct xilinx_dma_chan {
>  	bool cyclic;
>  	bool genlock;
>  	bool err;
> +	bool idle;
>  	struct tasklet_struct tasklet;
>  	struct xilinx_vdma_config config;
>  	bool flush_on_fsync;
> @@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
>  			chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
>  		chan->err = true;
>  	}
> +	chan->idle = true;
>  }
>  
>  /**
> @@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  	if (chan->err)
>  		return;
>  
> +	if (!chan->idle)
> +		return;
> +
>  	if (list_empty(&chan->pending_list))
>  		return;
>  
> @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
>  	}
>  
> +	chan->idle = false;
>  	if (!chan->has_sg) {
>  		list_del(&desc->node);
>  		list_add_tail(&desc->node, &chan->active_list);
> @@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
>  	if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
>  		spin_lock(&chan->lock);
>  		xilinx_dma_complete_descriptor(chan);
> +		chan->idle = true;
>  		chan->start_transfer(chan);
>  		spin_unlock(&chan->lock);
>  	}
> @@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
>  	chan->has_sg = xdev->has_sg;
>  	chan->desc_pendingcount = 0x0;
>  	chan->ext_addr = xdev->ext_addr;
> +	chan->idle = true;
>  
>  	spin_lock_init(&chan->lock);
>  	INIT_LIST_HEAD(&chan->pending_list);

I think there is missing a set to true in idle when a channel
reset is performed.
Otherwise: Reviewed-by: Jose Abreu <joabreu@synopsys.com>

Best regards,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-15 15:45     ` Jose Abreu
  0 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-15 15:45 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedar,


On 15-12-2016 15:11, Kedareswara rao Appana wrote:
> Add channel idle state to ensure that dma descriptor is not
> submitted when VDMA engine is in progress.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 8288fe4..736c2a3 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
>   * @cyclic: Check for cyclic transfers.
>   * @genlock: Support genlock mode
>   * @err: Channel has errors
> + * @idle: Check for channel idle
>   * @tasklet: Cleanup work after irq
>   * @config: Device configuration info
>   * @flush_on_fsync: Flush on Frame sync
> @@ -351,6 +352,7 @@ struct xilinx_dma_chan {
>  	bool cyclic;
>  	bool genlock;
>  	bool err;
> +	bool idle;
>  	struct tasklet_struct tasklet;
>  	struct xilinx_vdma_config config;
>  	bool flush_on_fsync;
> @@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
>  			chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
>  		chan->err = true;
>  	}
> +	chan->idle = true;
>  }
>  
>  /**
> @@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  	if (chan->err)
>  		return;
>  
> +	if (!chan->idle)
> +		return;
> +
>  	if (list_empty(&chan->pending_list))
>  		return;
>  
> @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
>  	}
>  
> +	chan->idle = false;
>  	if (!chan->has_sg) {
>  		list_del(&desc->node);
>  		list_add_tail(&desc->node, &chan->active_list);
> @@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
>  	if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
>  		spin_lock(&chan->lock);
>  		xilinx_dma_complete_descriptor(chan);
> +		chan->idle = true;
>  		chan->start_transfer(chan);
>  		spin_unlock(&chan->lock);
>  	}
> @@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
>  	chan->has_sg = xdev->has_sg;
>  	chan->desc_pendingcount = 0x0;
>  	chan->ext_addr = xdev->ext_addr;
> +	chan->idle = true;
>  
>  	spin_lock_init(&chan->lock);
>  	INIT_LIST_HEAD(&chan->pending_list);

I think there is missing a set to true in idle when a channel
reset is performed.
Otherwise: Reviewed-by: Jose Abreu <joabreu@synopsys.com>

Best regards,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-15 15:11   ` Kedareswara rao Appana
@ 2016-12-15 16:10     ` Jose Abreu
  -1 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-15 16:10 UTC (permalink / raw)
  To: Kedareswara rao Appana, dan.j.williams, vinod.koul, michal.simek,
	soren.brinkmann, appanad, moritz.fischer, laurent.pinchart, luis,
	svemula, anirudh, Jose.Abreu
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Kedar,


On 15-12-2016 15:11, Kedareswara rao Appana wrote:
> When VDMA is configured for more than one frame in the h/w
> for example h/w is configured for n number of frames and user
> Submits n number of frames and triggered the DMA using issue_pending API.
> In the current driver flow we are submitting one frame at a time
> but we should submit all the n number of frames at one time as the h/w
> Is configured for n number of frames.
>
> This patch fixes this issue.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 43 +++++++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 736c2a3..4f3fa94 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  				tail_segment->phys);
>  	} else {
>  		struct xilinx_vdma_tx_segment *segment, *last = NULL;
> -		int i = 0;
> +		int i = 0, j = 0;
>  
>  		if (chan->desc_submitcount < chan->num_frms)
>  			i = chan->desc_submitcount;
>  
> -		list_for_each_entry(segment, &desc->segments, node) {
> -			if (chan->ext_addr)
> -				vdma_desc_write_64(chan,
> -					XILINX_VDMA_REG_START_ADDRESS_64(i++),
> -					segment->hw.buf_addr,
> -					segment->hw.buf_addr_msb);
> -			else
> -				vdma_desc_write(chan,
> -					XILINX_VDMA_REG_START_ADDRESS(i++),
> -					segment->hw.buf_addr);
> -
> -			last = segment;
> +		for (j = 0; j < chan->num_frms; ) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				if (chan->ext_addr)
> +					vdma_desc_write_64(chan,
> +					  XILINX_VDMA_REG_START_ADDRESS_64(i++),
> +					  segment->hw.buf_addr,
> +					  segment->hw.buf_addr_msb);
> +				else
> +					vdma_desc_write(chan,
> +					    XILINX_VDMA_REG_START_ADDRESS(i++),
> +					    segment->hw.buf_addr);
> +
> +				last = segment;

Hmm, is it possible to submit more than one segment? If so, then
i and j will get out of sync.

> +			}
> +			list_del(&desc->node);
> +			list_add_tail(&desc->node, &chan->active_list);
> +			j++;

But if i is non zero and pending_list has more than num_frms then
i will not wrap-around as it should and will write to invalid
framebuffer location, right?

> +			if (list_empty(&chan->pending_list))
> +				break;
> +			desc = list_first_entry(&chan->pending_list,
> +						struct xilinx_dma_tx_descriptor,
> +						node);
>  		}
>  
>  		if (!last)
> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>  				last->hw.stride);
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);

Maybe a check that all framebuffers contain valid addresses
should be done before programming vsize so that VDMA does not try
to write to invalid addresses.

> +
> +		chan->desc_submitcount += j;
> +		chan->desc_pendingcount -= j;
>  	}
>  
>  	chan->idle = false;
>  	if (!chan->has_sg) {
> -		list_del(&desc->node);
> -		list_add_tail(&desc->node, &chan->active_list);
> -		chan->desc_submitcount++;
> -		chan->desc_pendingcount--;
>  		if (chan->desc_submitcount == chan->num_frms)
>  			chan->desc_submitcount = 0;

"desc_submitcount >= chan->num_frms would be safer here.

>  	} else {

Best regards,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-15 16:10     ` Jose Abreu
  0 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-15 16:10 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedar,


On 15-12-2016 15:11, Kedareswara rao Appana wrote:
> When VDMA is configured for more than one frame in the h/w
> for example h/w is configured for n number of frames and user
> Submits n number of frames and triggered the DMA using issue_pending API.
> In the current driver flow we are submitting one frame at a time
> but we should submit all the n number of frames at one time as the h/w
> Is configured for n number of frames.
>
> This patch fixes this issue.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 43 +++++++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 736c2a3..4f3fa94 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  				tail_segment->phys);
>  	} else {
>  		struct xilinx_vdma_tx_segment *segment, *last = NULL;
> -		int i = 0;
> +		int i = 0, j = 0;
>  
>  		if (chan->desc_submitcount < chan->num_frms)
>  			i = chan->desc_submitcount;
>  
> -		list_for_each_entry(segment, &desc->segments, node) {
> -			if (chan->ext_addr)
> -				vdma_desc_write_64(chan,
> -					XILINX_VDMA_REG_START_ADDRESS_64(i++),
> -					segment->hw.buf_addr,
> -					segment->hw.buf_addr_msb);
> -			else
> -				vdma_desc_write(chan,
> -					XILINX_VDMA_REG_START_ADDRESS(i++),
> -					segment->hw.buf_addr);
> -
> -			last = segment;
> +		for (j = 0; j < chan->num_frms; ) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				if (chan->ext_addr)
> +					vdma_desc_write_64(chan,
> +					  XILINX_VDMA_REG_START_ADDRESS_64(i++),
> +					  segment->hw.buf_addr,
> +					  segment->hw.buf_addr_msb);
> +				else
> +					vdma_desc_write(chan,
> +					    XILINX_VDMA_REG_START_ADDRESS(i++),
> +					    segment->hw.buf_addr);
> +
> +				last = segment;

Hmm, is it possible to submit more than one segment? If so, then
i and j will get out of sync.

> +			}
> +			list_del(&desc->node);
> +			list_add_tail(&desc->node, &chan->active_list);
> +			j++;

But if i is non zero and pending_list has more than num_frms then
i will not wrap-around as it should and will write to invalid
framebuffer location, right?

> +			if (list_empty(&chan->pending_list))
> +				break;
> +			desc = list_first_entry(&chan->pending_list,
> +						struct xilinx_dma_tx_descriptor,
> +						node);
>  		}
>  
>  		if (!last)
> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>  				last->hw.stride);
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);

Maybe a check that all framebuffers contain valid addresses
should be done before programming vsize so that VDMA does not try
to write to invalid addresses.

> +
> +		chan->desc_submitcount += j;
> +		chan->desc_pendingcount -= j;
>  	}
>  
>  	chan->idle = false;
>  	if (!chan->has_sg) {
> -		list_del(&desc->node);
> -		list_add_tail(&desc->node, &chan->active_list);
> -		chan->desc_submitcount++;
> -		chan->desc_pendingcount--;
>  		if (chan->desc_submitcount == chan->num_frms)
>  			chan->desc_submitcount = 0;

"desc_submitcount >= chan->num_frms would be safer here.

>  	} else {

Best regards,
Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-15 15:45     ` Jose Abreu
@ 2016-12-15 18:49       ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-15 18:49 UTC (permalink / raw)
  To: Jose Abreu, dan.j.williams, vinod.koul, michal.simek,
	Soren Brinkmann, moritz.fischer, laurent.pinchart, luis,
	Anirudha Sarangi
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Jose Miguel Abreu,

	Thanks for the review...


> > +	chan->idle = true;
> >
> >  	spin_lock_init(&chan->lock);
> >  	INIT_LIST_HEAD(&chan->pending_list);
> 
> I think there is missing a set to true in idle when a channel reset is performed.
> Otherwise: Reviewed-by: Jose Abreu <joabreu@synopsys.com>

Sure will fix in v2... 

Regards,
Kedar.

> 
> Best regards,
> Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-15 18:49       ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-15 18:49 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Jose Miguel Abreu,

	Thanks for the review...


> > +	chan->idle = true;
> >
> >  	spin_lock_init(&chan->lock);
> >  	INIT_LIST_HEAD(&chan->pending_list);
> 
> I think there is missing a set to true in idle when a channel reset is performed.
> Otherwise: Reviewed-by: Jose Abreu <joabreu@synopsys.com>

Sure will fix in v2... 

Regards,
Kedar.

> 
> Best regards,
> Jose Miguel Abreu

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-15 16:10     ` Jose Abreu
@ 2016-12-15 19:09       ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-15 19:09 UTC (permalink / raw)
  To: Jose Abreu, dan.j.williams, vinod.koul, michal.simek,
	Soren Brinkmann, moritz.fischer, laurent.pinchart, luis,
	Anirudha Sarangi
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Jose Miguel Abreu,

	Thanks for the review....

> > -			last = segment;
> > +		for (j = 0; j < chan->num_frms; ) {
> > +			list_for_each_entry(segment, &desc->segments, node)
> {
> > +				if (chan->ext_addr)
> > +					vdma_desc_write_64(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > +					  segment->hw.buf_addr,
> > +					  segment->hw.buf_addr_msb);
> > +				else
> > +					vdma_desc_write(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS(i++),
> > +					    segment->hw.buf_addr);
> > +
> > +				last = segment;
> 
> Hmm, is it possible to submit more than one segment? If so, then i and j will get
> out of sync.

If h/w is configured for more than 1 frame buffer and user submits more than one frame buffer
We can submit more than one frame/ segment to hw right??

> 
> > +			}
> > +			list_del(&desc->node);
> > +			list_add_tail(&desc->node, &chan->active_list);
> > +			j++;
> 
> But if i is non zero and pending_list has more than num_frms then i will not
> wrap-around as it should and will write to invalid framebuffer location, right?

Yep will fix in v2...
		
	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
		break;

Above condition is sufficient right???

> 
> > +			if (list_empty(&chan->pending_list))
> > +				break;
> > +			desc = list_first_entry(&chan->pending_list,
> > +						struct
> xilinx_dma_tx_descriptor,
> > +						node);
> >  		}
> >
> >  		if (!last)
> > @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> >  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
> >  				last->hw.stride);
> >  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
> >hw.vsize);
> 
> Maybe a check that all framebuffers contain valid addresses should be done
> before programming vsize so that VDMA does not try to write to invalid
> addresses.

Do we really need to check for valid address??? 
I didn't get you what to do you mean by invalid address could you please explain???
In the driver we are reading form the pending_list which will be updated by pep_interleaved_dma 
Call so we are under assumption that user sends the proper address right???

> 
> > +
> > +		chan->desc_submitcount += j;
> > +		chan->desc_pendingcount -= j;
> >  	}
> >
> >  	chan->idle = false;
> >  	if (!chan->has_sg) {
> > -		list_del(&desc->node);
> > -		list_add_tail(&desc->node, &chan->active_list);
> > -		chan->desc_submitcount++;
> > -		chan->desc_pendingcount--;
> >  		if (chan->desc_submitcount == chan->num_frms)
> >  			chan->desc_submitcount = 0;
> 
> "desc_submitcount >= chan->num_frms would be safer here.

Sure will fix in v2...

Regards,
Kedar.

> 
> >  	} else {
> 
> Best regards,
> Jose Miguel Abreu
> --
> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body
> of a message to majordomo@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-15 19:09       ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-15 19:09 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Jose Miguel Abreu,

	Thanks for the review....

> > -			last = segment;
> > +		for (j = 0; j < chan->num_frms; ) {
> > +			list_for_each_entry(segment, &desc->segments, node)
> {
> > +				if (chan->ext_addr)
> > +					vdma_desc_write_64(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > +					  segment->hw.buf_addr,
> > +					  segment->hw.buf_addr_msb);
> > +				else
> > +					vdma_desc_write(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS(i++),
> > +					    segment->hw.buf_addr);
> > +
> > +				last = segment;
> 
> Hmm, is it possible to submit more than one segment? If so, then i and j will get
> out of sync.

If h/w is configured for more than 1 frame buffer and user submits more than one frame buffer
We can submit more than one frame/ segment to hw right??

> 
> > +			}
> > +			list_del(&desc->node);
> > +			list_add_tail(&desc->node, &chan->active_list);
> > +			j++;
> 
> But if i is non zero and pending_list has more than num_frms then i will not
> wrap-around as it should and will write to invalid framebuffer location, right?

Yep will fix in v2...
		
	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
		break;

Above condition is sufficient right???

> 
> > +			if (list_empty(&chan->pending_list))
> > +				break;
> > +			desc = list_first_entry(&chan->pending_list,
> > +						struct
> xilinx_dma_tx_descriptor,
> > +						node);
> >  		}
> >
> >  		if (!last)
> > @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> >  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
> >  				last->hw.stride);
> >  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
> >hw.vsize);
> 
> Maybe a check that all framebuffers contain valid addresses should be done
> before programming vsize so that VDMA does not try to write to invalid
> addresses.

Do we really need to check for valid address??? 
I didn't get you what to do you mean by invalid address could you please explain???
In the driver we are reading form the pending_list which will be updated by pep_interleaved_dma 
Call so we are under assumption that user sends the proper address right???

> 
> > +
> > +		chan->desc_submitcount += j;
> > +		chan->desc_pendingcount -= j;
> >  	}
> >
> >  	chan->idle = false;
> >  	if (!chan->has_sg) {
> > -		list_del(&desc->node);
> > -		list_add_tail(&desc->node, &chan->active_list);
> > -		chan->desc_submitcount++;
> > -		chan->desc_pendingcount--;
> >  		if (chan->desc_submitcount == chan->num_frms)
> >  			chan->desc_submitcount = 0;
> 
> "desc_submitcount >= chan->num_frms would be safer here.

Sure will fix in v2...

Regards,
Kedar.

> 
> >  	} else {
> 
> Best regards,
> Jose Miguel Abreu
> --
> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body
> of a message to majordomo at vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-15 19:09       ` Appana Durga Kedareswara Rao
@ 2016-12-16 10:11         ` Jose Abreu
  -1 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-16 10:11 UTC (permalink / raw)
  To: Appana Durga Kedareswara Rao, Jose Abreu, dan.j.williams,
	vinod.koul, michal.simek, Soren Brinkmann, moritz.fischer,
	laurent.pinchart, luis, Anirudha Sarangi
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Kedar,


On 15-12-2016 19:09, Appana Durga Kedareswara Rao wrote:
> Hi Jose Miguel Abreu,
>
> 	Thanks for the review....
>
>>> -			last = segment;
>>> +		for (j = 0; j < chan->num_frms; ) {
>>> +			list_for_each_entry(segment, &desc->segments, node)
>> {
>>> +				if (chan->ext_addr)
>>> +					vdma_desc_write_64(chan,
>>> +
>> XILINX_VDMA_REG_START_ADDRESS_64(i++),
>>> +					  segment->hw.buf_addr,
>>> +					  segment->hw.buf_addr_msb);
>>> +				else
>>> +					vdma_desc_write(chan,
>>> +
>> XILINX_VDMA_REG_START_ADDRESS(i++),
>>> +					    segment->hw.buf_addr);
>>> +
>>> +				last = segment;
>> Hmm, is it possible to submit more than one segment? If so, then i and j will get
>> out of sync.
> If h/w is configured for more than 1 frame buffer and user submits more than one frame buffer
> We can submit more than one frame/ segment to hw right??

I'm not sure. When I used VDMA driver I always submitted only one
segment and multiple descriptors. But the problem is, for example:

If you have:
descriptor1 (2 segments)
descriptor2 (2 segments)

And you have 3 frame buffers in the HW.

Then:
1st frame buffer will have: descriptor1 -> segment1
2nd frame buffer will have: descriptor1 -> segment2
3rd frame buffer will have: descriptor2 -> segment1
but, 4th frame buffer will have: descriptor2 -> segment2 <----
INVALID because there is only 3 frame buffers

So, maybe a check inside the loop "list_for_each_entry(segment,
&desc->segments, node)" could be a nice to have.

>
>>> +			}
>>> +			list_del(&desc->node);
>>> +			list_add_tail(&desc->node, &chan->active_list);
>>> +			j++;
>> But if i is non zero and pending_list has more than num_frms then i will not
>> wrap-around as it should and will write to invalid framebuffer location, right?
> Yep will fix in v2...
> 		
> 	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
> 		break;
>
> Above condition is sufficient right???

Looks ok.

>
>>> +			if (list_empty(&chan->pending_list))
>>> +				break;
>>> +			desc = list_first_entry(&chan->pending_list,
>>> +						struct
>> xilinx_dma_tx_descriptor,
>>> +						node);
>>>  		}
>>>
>>>  		if (!last)
>>> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
>> xilinx_dma_chan *chan)
>>>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>>>  				last->hw.stride);
>>>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
>>> hw.vsize);
>> Maybe a check that all framebuffers contain valid addresses should be done
>> before programming vsize so that VDMA does not try to write to invalid
>> addresses.
> Do we really need to check for valid address??? 
> I didn't get you what to do you mean by invalid address could you please explain???
> In the driver we are reading form the pending_list which will be updated by pep_interleaved_dma 
> Call so we are under assumption that user sends the proper address right???

What I mean by valid address is to check that i variable has
already been incremented by num_frms at least once since a VDMA
reset. This way you know that you have programmed all the
addresses of the frame buffers with an address and they are non-zero.

Best regards,
Jose Miguel Abreu

>
>>> +
>>> +		chan->desc_submitcount += j;
>>> +		chan->desc_pendingcount -= j;
>>>  	}
>>>
>>>  	chan->idle = false;
>>>  	if (!chan->has_sg) {
>>> -		list_del(&desc->node);
>>> -		list_add_tail(&desc->node, &chan->active_list);
>>> -		chan->desc_submitcount++;
>>> -		chan->desc_pendingcount--;
>>>  		if (chan->desc_submitcount == chan->num_frms)
>>>  			chan->desc_submitcount = 0;
>> "desc_submitcount >= chan->num_frms would be safer here.
> Sure will fix in v2...
>
> Regards,
> Kedar.
>
>>>  	} else {
>> Best regards,
>> Jose Miguel Abreu
>> --
>> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body
>> of a message to majordomo@vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-16 10:11         ` Jose Abreu
  0 siblings, 0 replies; 32+ messages in thread
From: Jose Abreu @ 2016-12-16 10:11 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedar,


On 15-12-2016 19:09, Appana Durga Kedareswara Rao wrote:
> Hi Jose Miguel Abreu,
>
> 	Thanks for the review....
>
>>> -			last = segment;
>>> +		for (j = 0; j < chan->num_frms; ) {
>>> +			list_for_each_entry(segment, &desc->segments, node)
>> {
>>> +				if (chan->ext_addr)
>>> +					vdma_desc_write_64(chan,
>>> +
>> XILINX_VDMA_REG_START_ADDRESS_64(i++),
>>> +					  segment->hw.buf_addr,
>>> +					  segment->hw.buf_addr_msb);
>>> +				else
>>> +					vdma_desc_write(chan,
>>> +
>> XILINX_VDMA_REG_START_ADDRESS(i++),
>>> +					    segment->hw.buf_addr);
>>> +
>>> +				last = segment;
>> Hmm, is it possible to submit more than one segment? If so, then i and j will get
>> out of sync.
> If h/w is configured for more than 1 frame buffer and user submits more than one frame buffer
> We can submit more than one frame/ segment to hw right??

I'm not sure. When I used VDMA driver I always submitted only one
segment and multiple descriptors. But the problem is, for example:

If you have:
descriptor1 (2 segments)
descriptor2 (2 segments)

And you have 3 frame buffers in the HW.

Then:
1st frame buffer will have: descriptor1 -> segment1
2nd frame buffer will have: descriptor1 -> segment2
3rd frame buffer will have: descriptor2 -> segment1
but, 4th frame buffer will have: descriptor2 -> segment2 <----
INVALID because there is only 3 frame buffers

So, maybe a check inside the loop "list_for_each_entry(segment,
&desc->segments, node)" could be a nice to have.

>
>>> +			}
>>> +			list_del(&desc->node);
>>> +			list_add_tail(&desc->node, &chan->active_list);
>>> +			j++;
>> But if i is non zero and pending_list has more than num_frms then i will not
>> wrap-around as it should and will write to invalid framebuffer location, right?
> Yep will fix in v2...
> 		
> 	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
> 		break;
>
> Above condition is sufficient right???

Looks ok.

>
>>> +			if (list_empty(&chan->pending_list))
>>> +				break;
>>> +			desc = list_first_entry(&chan->pending_list,
>>> +						struct
>> xilinx_dma_tx_descriptor,
>>> +						node);
>>>  		}
>>>
>>>  		if (!last)
>>> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
>> xilinx_dma_chan *chan)
>>>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>>>  				last->hw.stride);
>>>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
>>> hw.vsize);
>> Maybe a check that all framebuffers contain valid addresses should be done
>> before programming vsize so that VDMA does not try to write to invalid
>> addresses.
> Do we really need to check for valid address??? 
> I didn't get you what to do you mean by invalid address could you please explain???
> In the driver we are reading form the pending_list which will be updated by pep_interleaved_dma 
> Call so we are under assumption that user sends the proper address right???

What I mean by valid address is to check that i variable has
already been incremented by num_frms at least once since a VDMA
reset. This way you know that you have programmed all the
addresses of the frame buffers with an address and they are non-zero.

Best regards,
Jose Miguel Abreu

>
>>> +
>>> +		chan->desc_submitcount += j;
>>> +		chan->desc_pendingcount -= j;
>>>  	}
>>>
>>>  	chan->idle = false;
>>>  	if (!chan->has_sg) {
>>> -		list_del(&desc->node);
>>> -		list_add_tail(&desc->node, &chan->active_list);
>>> -		chan->desc_submitcount++;
>>> -		chan->desc_pendingcount--;
>>>  		if (chan->desc_submitcount == chan->num_frms)
>>>  			chan->desc_submitcount = 0;
>> "desc_submitcount >= chan->num_frms would be safer here.
> Sure will fix in v2...
>
> Regards,
> Kedar.
>
>>>  	} else {
>> Best regards,
>> Jose Miguel Abreu
>> --
>> To unsubscribe from this list: send the line "unsubscribe dmaengine" in the body
>> of a message to majordomo at vger.kernel.org More majordomo info at
>> http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-15 15:11   ` Kedareswara rao Appana
@ 2016-12-16 15:35     ` Laurent Pinchart
  -1 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-16 15:35 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, luis, svemula, anirudh, Jose.Abreu,
	dmaengine, linux-arm-kernel, linux-kernel

Hi Kedareswara,

Thank you for the patch.

On Thursday 15 Dec 2016 20:41:20 Kedareswara rao Appana wrote:
> Add channel idle state to ensure that dma descriptor is not
> submitted when VDMA engine is in progress.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c
> b/drivers/dma/xilinx/xilinx_dma.c index 8288fe4..736c2a3 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
>   * @cyclic: Check for cyclic transfers.
>   * @genlock: Support genlock mode
>   * @err: Channel has errors
> + * @idle: Check for channel idle
>   * @tasklet: Cleanup work after irq
>   * @config: Device configuration info
>   * @flush_on_fsync: Flush on Frame sync
> @@ -351,6 +352,7 @@ struct xilinx_dma_chan {
>  	bool cyclic;
>  	bool genlock;
>  	bool err;
> +	bool idle;
>  	struct tasklet_struct tasklet;
>  	struct xilinx_vdma_config config;
>  	bool flush_on_fsync;
> @@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan
> *chan) chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
>  		chan->err = true;
>  	}
> +	chan->idle = true;
>  }
> 
>  /**
> @@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> 	if (chan->err)
>  		return;
> 
> +	if (!chan->idle)
> +		return;

Don't you need to perform the same check for the DMA and CDMA channels ? If 
so, shouldn't this be moved to common code ?

There's another problem (not strictly introduced by this patch) I wanted to 
mention. The append_desc_queue() function, called from your tx_submit handler, 
appends descriptors to the pending_list. The DMA engine API states that a 
transfer submitted by tx_submit will not be executed until .issue_pending() is 
called. However, if a transfer is in progress at tx_submit time, I believe 
that the IRQ handler, at transfer completion, will start the next transfer 
from the pending_list even if .issue_pending() hasn't been called for it.

>  	if (list_empty(&chan->pending_list))
>  		return;

Now that you catch busy channels with a software flag, do you still need the 
xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three different 
checks for the same or very similar conditions are confusing, if you really 
need them you should document clearly how they differ.

> @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> last->hw.vsize);
>  	}
> 
> +	chan->idle = false;

(and this too)

>  	if (!chan->has_sg) {
>  		list_del(&desc->node);
>  		list_add_tail(&desc->node, &chan->active_list);
> @@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq,
> void *data) if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
>  		spin_lock(&chan->lock);
>  		xilinx_dma_complete_descriptor(chan);
> +		chan->idle = true;
>  		chan->start_transfer(chan);
>  		spin_unlock(&chan->lock);
>  	}
> @@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct
> xilinx_dma_device *xdev, chan->has_sg = xdev->has_sg;
>  	chan->desc_pendingcount = 0x0;
>  	chan->ext_addr = xdev->ext_addr;
> +	chan->idle = true;
> 
>  	spin_lock_init(&chan->lock);
>  	INIT_LIST_HEAD(&chan->pending_list);

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-16 15:35     ` Laurent Pinchart
  0 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-16 15:35 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedareswara,

Thank you for the patch.

On Thursday 15 Dec 2016 20:41:20 Kedareswara rao Appana wrote:
> Add channel idle state to ensure that dma descriptor is not
> submitted when VDMA engine is in progress.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c
> b/drivers/dma/xilinx/xilinx_dma.c index 8288fe4..736c2a3 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -321,6 +321,7 @@ struct xilinx_dma_tx_descriptor {
>   * @cyclic: Check for cyclic transfers.
>   * @genlock: Support genlock mode
>   * @err: Channel has errors
> + * @idle: Check for channel idle
>   * @tasklet: Cleanup work after irq
>   * @config: Device configuration info
>   * @flush_on_fsync: Flush on Frame sync
> @@ -351,6 +352,7 @@ struct xilinx_dma_chan {
>  	bool cyclic;
>  	bool genlock;
>  	bool err;
> +	bool idle;
>  	struct tasklet_struct tasklet;
>  	struct xilinx_vdma_config config;
>  	bool flush_on_fsync;
> @@ -966,6 +968,7 @@ static void xilinx_dma_halt(struct xilinx_dma_chan
> *chan) chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
>  		chan->err = true;
>  	}
> +	chan->idle = true;
>  }
> 
>  /**
> @@ -1007,6 +1010,9 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> 	if (chan->err)
>  		return;
> 
> +	if (!chan->idle)
> +		return;

Don't you need to perform the same check for the DMA and CDMA channels ? If 
so, shouldn't this be moved to common code ?

There's another problem (not strictly introduced by this patch) I wanted to 
mention. The append_desc_queue() function, called from your tx_submit handler, 
appends descriptors to the pending_list. The DMA engine API states that a 
transfer submitted by tx_submit will not be executed until .issue_pending() is 
called. However, if a transfer is in progress at tx_submit time, I believe 
that the IRQ handler, at transfer completion, will start the next transfer 
from the pending_list even if .issue_pending() hasn't been called for it.

>  	if (list_empty(&chan->pending_list))
>  		return;

Now that you catch busy channels with a software flag, do you still need the 
xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three different 
checks for the same or very similar conditions are confusing, if you really 
need them you should document clearly how they differ.

> @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> last->hw.vsize);
>  	}
> 
> +	chan->idle = false;

(and this too)

>  	if (!chan->has_sg) {
>  		list_del(&desc->node);
>  		list_add_tail(&desc->node, &chan->active_list);
> @@ -1447,6 +1454,7 @@ static irqreturn_t xilinx_dma_irq_handler(int irq,
> void *data) if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
>  		spin_lock(&chan->lock);
>  		xilinx_dma_complete_descriptor(chan);
> +		chan->idle = true;
>  		chan->start_transfer(chan);
>  		spin_unlock(&chan->lock);
>  	}
> @@ -2327,6 +2335,7 @@ static int xilinx_dma_chan_probe(struct
> xilinx_dma_device *xdev, chan->has_sg = xdev->has_sg;
>  	chan->desc_pendingcount = 0x0;
>  	chan->ext_addr = xdev->ext_addr;
> +	chan->idle = true;
> 
>  	spin_lock_init(&chan->lock);
>  	INIT_LIST_HEAD(&chan->pending_list);

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-15 15:11   ` Kedareswara rao Appana
@ 2016-12-16 15:54     ` Laurent Pinchart
  -1 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-16 15:54 UTC (permalink / raw)
  To: Kedareswara rao Appana
  Cc: dan.j.williams, vinod.koul, michal.simek, soren.brinkmann,
	appanad, moritz.fischer, luis, svemula, anirudh, Jose.Abreu,
	dmaengine, linux-arm-kernel, linux-kernel

Hi Kedareswara,

Thank you for the patch.

On Thursday 15 Dec 2016 20:41:21 Kedareswara rao Appana wrote:
> When VDMA is configured for more than one frame in the h/w
> for example h/w is configured for n number of frames and user
> Submits n number of frames and triggered the DMA using issue_pending API.
> In the current driver flow we are submitting one frame at a time
> but we should submit all the n number of frames at one time as the h/w
> Is configured for n number of frames.
> 
> This patch fixes this issue.
> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 43 ++++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c
> b/drivers/dma/xilinx/xilinx_dma.c index 736c2a3..4f3fa94 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> 		tail_segment->phys);
>  	} else {
>  		struct xilinx_vdma_tx_segment *segment, *last = NULL;
> -		int i = 0;
> +		int i = 0, j = 0;
> 
>  		if (chan->desc_submitcount < chan->num_frms)
>  			i = chan->desc_submitcount;

I don't get this. i seems to index into a segment start address array, but 
gets initialized with a variable documented as "Descriptor h/w submitted 
count". I'm not familiar with the hardware, but it makes no sense to me.

> -		list_for_each_entry(segment, &desc->segments, node) {
> -			if (chan->ext_addr)
> -				vdma_desc_write_64(chan,
> -					XILINX_VDMA_REG_START_ADDRESS_64(i++),
> -					segment->hw.buf_addr,
> -					segment->hw.buf_addr_msb);
> -			else
> -				vdma_desc_write(chan,
> -					XILINX_VDMA_REG_START_ADDRESS(i++),
> -					segment->hw.buf_addr);
> -
> -			last = segment;

Isn't it an issue to write the descriptors only after calling 
xilinx_dma_start() ?

> +		for (j = 0; j < chan->num_frms; ) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				if (chan->ext_addr)
> +					vdma_desc_write_64(chan,
> +					  
XILINX_VDMA_REG_START_ADDRESS_64(i++),
> +					  segment->hw.buf_addr,
> +					  segment->hw.buf_addr_msb);
> +				else
> +					vdma_desc_write(chan,
> +					    
XILINX_VDMA_REG_START_ADDRESS(i++),
> +					    segment->hw.buf_addr);

I assume the size of the start address array to be limited by the hardware, 
but I don't see how this code prevents from overflowing this.

The whole function is very difficult to understand, it probably requires a 
rewrite.

> +				last = segment;
> +			}
> +			list_del(&desc->node);
> +			list_add_tail(&desc->node, &chan->active_list);
> +			j++;
> +			if (list_empty(&chan->pending_list))
> +				break;
> +			desc = list_first_entry(&chan->pending_list,
> +						struct 
xilinx_dma_tx_descriptor,
> +						node);
>  		}
> 
>  		if (!last)
> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>  				last->hw.stride);
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
> +
> +		chan->desc_submitcount += j;
> +		chan->desc_pendingcount -= j;
>  	}
> 
>  	chan->idle = false;
>  	if (!chan->has_sg) {
> -		list_del(&desc->node);
> -		list_add_tail(&desc->node, &chan->active_list);
> -		chan->desc_submitcount++;
> -		chan->desc_pendingcount--;
>  		if (chan->desc_submitcount == chan->num_frms)
>  			chan->desc_submitcount = 0;
>  	} else {

While at it, can you merge this into the previous if (chan->has_sg) { ... } 
else { ... } ? Having them separate is confusing.


-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-16 15:54     ` Laurent Pinchart
  0 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-16 15:54 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedareswara,

Thank you for the patch.

On Thursday 15 Dec 2016 20:41:21 Kedareswara rao Appana wrote:
> When VDMA is configured for more than one frame in the h/w
> for example h/w is configured for n number of frames and user
> Submits n number of frames and triggered the DMA using issue_pending API.
> In the current driver flow we are submitting one frame at a time
> but we should submit all the n number of frames at one time as the h/w
> Is configured for n number of frames.
> 
> This patch fixes this issue.
> 
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 43 ++++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/dma/xilinx/xilinx_dma.c
> b/drivers/dma/xilinx/xilinx_dma.c index 736c2a3..4f3fa94 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan)
> 		tail_segment->phys);
>  	} else {
>  		struct xilinx_vdma_tx_segment *segment, *last = NULL;
> -		int i = 0;
> +		int i = 0, j = 0;
> 
>  		if (chan->desc_submitcount < chan->num_frms)
>  			i = chan->desc_submitcount;

I don't get this. i seems to index into a segment start address array, but 
gets initialized with a variable documented as "Descriptor h/w submitted 
count". I'm not familiar with the hardware, but it makes no sense to me.

> -		list_for_each_entry(segment, &desc->segments, node) {
> -			if (chan->ext_addr)
> -				vdma_desc_write_64(chan,
> -					XILINX_VDMA_REG_START_ADDRESS_64(i++),
> -					segment->hw.buf_addr,
> -					segment->hw.buf_addr_msb);
> -			else
> -				vdma_desc_write(chan,
> -					XILINX_VDMA_REG_START_ADDRESS(i++),
> -					segment->hw.buf_addr);
> -
> -			last = segment;

Isn't it an issue to write the descriptors only after calling 
xilinx_dma_start() ?

> +		for (j = 0; j < chan->num_frms; ) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				if (chan->ext_addr)
> +					vdma_desc_write_64(chan,
> +					  
XILINX_VDMA_REG_START_ADDRESS_64(i++),
> +					  segment->hw.buf_addr,
> +					  segment->hw.buf_addr_msb);
> +				else
> +					vdma_desc_write(chan,
> +					    
XILINX_VDMA_REG_START_ADDRESS(i++),
> +					    segment->hw.buf_addr);

I assume the size of the start address array to be limited by the hardware, 
but I don't see how this code prevents from overflowing this.

The whole function is very difficult to understand, it probably requires a 
rewrite.

> +				last = segment;
> +			}
> +			list_del(&desc->node);
> +			list_add_tail(&desc->node, &chan->active_list);
> +			j++;
> +			if (list_empty(&chan->pending_list))
> +				break;
> +			desc = list_first_entry(&chan->pending_list,
> +						struct 
xilinx_dma_tx_descriptor,
> +						node);
>  		}
> 
>  		if (!last)
> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct
> xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>  				last->hw.stride);
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
> +
> +		chan->desc_submitcount += j;
> +		chan->desc_pendingcount -= j;
>  	}
> 
>  	chan->idle = false;
>  	if (!chan->has_sg) {
> -		list_del(&desc->node);
> -		list_add_tail(&desc->node, &chan->active_list);
> -		chan->desc_submitcount++;
> -		chan->desc_pendingcount--;
>  		if (chan->desc_submitcount == chan->num_frms)
>  			chan->desc_submitcount = 0;
>  	} else {

While at it, can you merge this into the previous if (chan->has_sg) { ... } 
else { ... } ? Having them separate is confusing.


-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-16 15:35     ` Laurent Pinchart
@ 2016-12-19 15:39       ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:39 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: dan.j.williams, vinod.koul, michal.simek, Soren Brinkmann,
	moritz.fischer, luis, Jose.Abreu, dmaengine, linux-arm-kernel,
	linux-kernel

Hi Laurent Pinchart,

	Thanks for the review...

> >
> > +	if (!chan->idle)
> > +		return;
> 
> Don't you need to perform the same check for the DMA and CDMA channels ? If
> so, shouldn't this be moved to common code ?

Will fix it in v2...

> 
> There's another problem (not strictly introduced by this patch) I wanted to
> mention. The append_desc_queue() function, called from your tx_submit
> handler, appends descriptors to the pending_list. The DMA engine API states
> that a transfer submitted by tx_submit will not be executed until .issue_pending()
> is called. However, if a transfer is in progress at tx_submit time, I believe that
> the IRQ handler, at transfer completion, will start the next transfer from the
> pending_list even if .issue_pending() hasn't been called for it.
> 
> >  	if (list_empty(&chan->pending_list))
> >  		return;


If user submits more than h/w limit then that case only driver will process
The descriptors from the pending_list for other cases the pending_list will be 
Empty so driver just returns from there.

> 
> Now that you catch busy channels with a software flag, do you still need the
> xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three different checks
> for the same or very similar conditions are confusing, if you really need them
> you should document clearly how they differ.

Will remove the xilinx_dma_is_running and xilinx_dmais_idle() checks and will use
Chan->idle check across all the IP's. Will fix it v2...

> 
> > @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> > xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> > last->hw.vsize);
> >  	}
> >
> > +	chan->idle = false;
> 
> (and this too)

Will fix in v2...

Regards,
Kedar.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-19 15:39       ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:39 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Laurent Pinchart,

	Thanks for the review...

> >
> > +	if (!chan->idle)
> > +		return;
> 
> Don't you need to perform the same check for the DMA and CDMA channels ? If
> so, shouldn't this be moved to common code ?

Will fix it in v2...

> 
> There's another problem (not strictly introduced by this patch) I wanted to
> mention. The append_desc_queue() function, called from your tx_submit
> handler, appends descriptors to the pending_list. The DMA engine API states
> that a transfer submitted by tx_submit will not be executed until .issue_pending()
> is called. However, if a transfer is in progress at tx_submit time, I believe that
> the IRQ handler, at transfer completion, will start the next transfer from the
> pending_list even if .issue_pending() hasn't been called for it.
> 
> >  	if (list_empty(&chan->pending_list))
> >  		return;


If user submits more than h/w limit then that case only driver will process
The descriptors from the pending_list for other cases the pending_list will be 
Empty so driver just returns from there.

> 
> Now that you catch busy channels with a software flag, do you still need the
> xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three different checks
> for the same or very similar conditions are confusing, if you really need them
> you should document clearly how they differ.

Will remove the xilinx_dma_is_running and xilinx_dmais_idle() checks and will use
Chan->idle check across all the IP's. Will fix it v2...

> 
> > @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> > xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> > last->hw.vsize);
> >  	}
> >
> > +	chan->idle = false;
> 
> (and this too)

Will fix in v2...

Regards,
Kedar.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-16 10:11         ` Jose Abreu
@ 2016-12-19 15:40           ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:40 UTC (permalink / raw)
  To: Jose Abreu, dan.j.williams, vinod.koul, michal.simek,
	Soren Brinkmann, moritz.fischer, laurent.pinchart, luis
  Cc: dmaengine, linux-arm-kernel, linux-kernel

Hi Jose Miguel Abreu,

	Thanks for the review... 

> >
> >>> -			last = segment;
> >>> +		for (j = 0; j < chan->num_frms; ) {
> >>> +			list_for_each_entry(segment, &desc->segments, node)
> >> {
> >>> +				if (chan->ext_addr)
> >>> +					vdma_desc_write_64(chan,
> >>> +
> >> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> >>> +					  segment->hw.buf_addr,
> >>> +					  segment->hw.buf_addr_msb);
> >>> +				else
> >>> +					vdma_desc_write(chan,
> >>> +
> >> XILINX_VDMA_REG_START_ADDRESS(i++),
> >>> +					    segment->hw.buf_addr);
> >>> +
> >>> +				last = segment;
> >> Hmm, is it possible to submit more than one segment? If so, then i
> >> and j will get out of sync.
> > If h/w is configured for more than 1 frame buffer and user submits
> > more than one frame buffer We can submit more than one frame/ segment to
> hw right??
> 
> I'm not sure. When I used VDMA driver I always submitted only one segment and
> multiple descriptors. But the problem is, for example:
> 
> If you have:
> descriptor1 (2 segments)
> descriptor2 (2 segments)
> 
> And you have 3 frame buffers in the HW.
> 
> Then:
> 1st frame buffer will have: descriptor1 -> segment1 2nd frame buffer will have:
> descriptor1 -> segment2 3rd frame buffer will have: descriptor2 -> segment1
> but, 4th frame buffer will have: descriptor2 -> segment2 <---- INVALID because
> there is only 3 frame buffers
> 
> So, maybe a check inside the loop "list_for_each_entry(segment, &desc-
> >segments, node)" could be a nice to have.

With the current driver flow user can submit only 1 segment per descriptor
That's why didn't checked the list_for_each_entry for each descriptors...
Hope it clarifies your query...

> 
> >
> >>> +			}
> >>> +			list_del(&desc->node);
> >>> +			list_add_tail(&desc->node, &chan->active_list);
> >>> +			j++;
> >> But if i is non zero and pending_list has more than num_frms then i
> >> will not wrap-around as it should and will write to invalid framebuffer
> location, right?
> > Yep will fix in v2...
> >
> > 	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
> > 		break;
> >
> > Above condition is sufficient right???
> 
> Looks ok.

Thanks...

> >>> @@ -1114,14 +1124,13 @@ static void
> >>> xilinx_vdma_start_transfer(struct
> >> xilinx_dma_chan *chan)
> >>>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
> >>>  				last->hw.stride);
> >>>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
> hw.vsize);
> >> Maybe a check that all framebuffers contain valid addresses should be
> >> done before programming vsize so that VDMA does not try to write to
> >> invalid addresses.
> > Do we really need to check for valid address???
> > I didn't get you what to do you mean by invalid address could you please
> explain???
> > In the driver we are reading form the pending_list which will be
> > updated by pep_interleaved_dma Call so we are under assumption that user
> sends the proper address right???
> 
> What I mean by valid address is to check that i variable has already been
> incremented by num_frms at least once since a VDMA reset. This way you know
> that you have programmed all the addresses of the frame buffers with an
> address and they are non-zero.

Ok Sure will fix in v2...

Regards,
Kedar.

> 
> Best regards,
> Jose Miguel Abreu
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-19 15:40           ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:40 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Jose Miguel Abreu,

	Thanks for the review... 

> >
> >>> -			last = segment;
> >>> +		for (j = 0; j < chan->num_frms; ) {
> >>> +			list_for_each_entry(segment, &desc->segments, node)
> >> {
> >>> +				if (chan->ext_addr)
> >>> +					vdma_desc_write_64(chan,
> >>> +
> >> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> >>> +					  segment->hw.buf_addr,
> >>> +					  segment->hw.buf_addr_msb);
> >>> +				else
> >>> +					vdma_desc_write(chan,
> >>> +
> >> XILINX_VDMA_REG_START_ADDRESS(i++),
> >>> +					    segment->hw.buf_addr);
> >>> +
> >>> +				last = segment;
> >> Hmm, is it possible to submit more than one segment? If so, then i
> >> and j will get out of sync.
> > If h/w is configured for more than 1 frame buffer and user submits
> > more than one frame buffer We can submit more than one frame/ segment to
> hw right??
> 
> I'm not sure. When I used VDMA driver I always submitted only one segment and
> multiple descriptors. But the problem is, for example:
> 
> If you have:
> descriptor1 (2 segments)
> descriptor2 (2 segments)
> 
> And you have 3 frame buffers in the HW.
> 
> Then:
> 1st frame buffer will have: descriptor1 -> segment1 2nd frame buffer will have:
> descriptor1 -> segment2 3rd frame buffer will have: descriptor2 -> segment1
> but, 4th frame buffer will have: descriptor2 -> segment2 <---- INVALID because
> there is only 3 frame buffers
> 
> So, maybe a check inside the loop "list_for_each_entry(segment, &desc-
> >segments, node)" could be a nice to have.

With the current driver flow user can submit only 1 segment per descriptor
That's why didn't checked the list_for_each_entry for each descriptors...
Hope it clarifies your query...

> 
> >
> >>> +			}
> >>> +			list_del(&desc->node);
> >>> +			list_add_tail(&desc->node, &chan->active_list);
> >>> +			j++;
> >> But if i is non zero and pending_list has more than num_frms then i
> >> will not wrap-around as it should and will write to invalid framebuffer
> location, right?
> > Yep will fix in v2...
> >
> > 	If (if (list_empty(&chan->pending_list)) || (i == chan->num_frms)
> > 		break;
> >
> > Above condition is sufficient right???
> 
> Looks ok.

Thanks...

> >>> @@ -1114,14 +1124,13 @@ static void
> >>> xilinx_vdma_start_transfer(struct
> >> xilinx_dma_chan *chan)
> >>>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
> >>>  				last->hw.stride);
> >>>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last-
> hw.vsize);
> >> Maybe a check that all framebuffers contain valid addresses should be
> >> done before programming vsize so that VDMA does not try to write to
> >> invalid addresses.
> > Do we really need to check for valid address???
> > I didn't get you what to do you mean by invalid address could you please
> explain???
> > In the driver we are reading form the pending_list which will be
> > updated by pep_interleaved_dma Call so we are under assumption that user
> sends the proper address right???
> 
> What I mean by valid address is to check that i variable has already been
> incremented by num_frms at least once since a VDMA reset. This way you know
> that you have programmed all the addresses of the frame buffers with an
> address and they are non-zero.

Ok Sure will fix in v2...

Regards,
Kedar.

> 
> Best regards,
> Jose Miguel Abreu
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
  2016-12-16 15:54     ` Laurent Pinchart
@ 2016-12-19 15:41       ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:41 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: dan.j.williams, vinod.koul, michal.simek, Soren Brinkmann,
	moritz.fischer, luis, Jose.Abreu, dmaengine, linux-arm-kernel,
	linux-kernel

Hi Laurent Pinchart,

	Thanks for the review...

> > +		int i = 0, j = 0;
> >
> >  		if (chan->desc_submitcount < chan->num_frms)
> >  			i = chan->desc_submitcount;
> 
> I don't get this. i seems to index into a segment start address array, but gets
> initialized with a variable documented as "Descriptor h/w submitted count". I'm
> not familiar with the hardware, but it makes no sense to me.

Here i is the h/w buffer address. 
For ex: If the h/w is configured for 3 frame buffers and user submits 4 desc's
Then we need to submit only 3 frame buffers to the h/w and the next desc will be submitted
After there is a room for buffers I mean when the free buffer is available.

> 
> > -		list_for_each_entry(segment, &desc->segments, node) {
> > -			if (chan->ext_addr)
> > -				vdma_desc_write_64(chan,
> > -
> 	XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > -					segment->hw.buf_addr,
> > -					segment->hw.buf_addr_msb);
> > -			else
> > -				vdma_desc_write(chan,
> > -
> 	XILINX_VDMA_REG_START_ADDRESS(i++),
> > -					segment->hw.buf_addr);
> > -
> > -			last = segment;
> 
> Isn't it an issue to write the descriptors only after calling
> xilinx_dma_start() ?

Until writing to the VSIZE h/w won't get started...

> 
> > +		for (j = 0; j < chan->num_frms; ) {
> > +			list_for_each_entry(segment, &desc->segments, node)
> {
> > +				if (chan->ext_addr)
> > +					vdma_desc_write_64(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > +					  segment->hw.buf_addr,
> > +					  segment->hw.buf_addr_msb);
> > +				else
> > +					vdma_desc_write(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS(i++),
> > +					    segment->hw.buf_addr);
> 
> I assume the size of the start address array to be limited by the hardware, but I
> don't see how this code prevents from overflowing this.
> 
> The whole function is very difficult to understand, it probably requires a rewrite.

Will fix it in v2...

> 
> > +				last = segment;
> > +			}
> > +			list_del(&desc->node);
> > +			list_add_tail(&desc->node, &chan->active_list);
> > +			j++;
> > +			if (list_empty(&chan->pending_list))
> > +				break;
> > +			desc = list_first_entry(&chan->pending_list,
> > +						struct
> xilinx_dma_tx_descriptor,
> > +						node);
> >  		}
> >  	if (!chan->has_sg) {
> > -		list_del(&desc->node);
> > -		list_add_tail(&desc->node, &chan->active_list);
> > -		chan->desc_submitcount++;
> > -		chan->desc_pendingcount--;
> >  		if (chan->desc_submitcount == chan->num_frms)
> >  			chan->desc_submitcount = 0;
> >  	} else {
> 
> While at it, can you merge this into the previous if (chan->has_sg) { ... } else { ... }
> ? Having them separate is confusing.

Ok sure will fix in v2...

Regards,
Kedar.

> 
> 
> --
> Regards,
> 
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
@ 2016-12-19 15:41       ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-19 15:41 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Laurent Pinchart,

	Thanks for the review...

> > +		int i = 0, j = 0;
> >
> >  		if (chan->desc_submitcount < chan->num_frms)
> >  			i = chan->desc_submitcount;
> 
> I don't get this. i seems to index into a segment start address array, but gets
> initialized with a variable documented as "Descriptor h/w submitted count". I'm
> not familiar with the hardware, but it makes no sense to me.

Here i is the h/w buffer address. 
For ex: If the h/w is configured for 3 frame buffers and user submits 4 desc's
Then we need to submit only 3 frame buffers to the h/w and the next desc will be submitted
After there is a room for buffers I mean when the free buffer is available.

> 
> > -		list_for_each_entry(segment, &desc->segments, node) {
> > -			if (chan->ext_addr)
> > -				vdma_desc_write_64(chan,
> > -
> 	XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > -					segment->hw.buf_addr,
> > -					segment->hw.buf_addr_msb);
> > -			else
> > -				vdma_desc_write(chan,
> > -
> 	XILINX_VDMA_REG_START_ADDRESS(i++),
> > -					segment->hw.buf_addr);
> > -
> > -			last = segment;
> 
> Isn't it an issue to write the descriptors only after calling
> xilinx_dma_start() ?

Until writing to the VSIZE h/w won't get started...

> 
> > +		for (j = 0; j < chan->num_frms; ) {
> > +			list_for_each_entry(segment, &desc->segments, node)
> {
> > +				if (chan->ext_addr)
> > +					vdma_desc_write_64(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS_64(i++),
> > +					  segment->hw.buf_addr,
> > +					  segment->hw.buf_addr_msb);
> > +				else
> > +					vdma_desc_write(chan,
> > +
> XILINX_VDMA_REG_START_ADDRESS(i++),
> > +					    segment->hw.buf_addr);
> 
> I assume the size of the start address array to be limited by the hardware, but I
> don't see how this code prevents from overflowing this.
> 
> The whole function is very difficult to understand, it probably requires a rewrite.

Will fix it in v2...

> 
> > +				last = segment;
> > +			}
> > +			list_del(&desc->node);
> > +			list_add_tail(&desc->node, &chan->active_list);
> > +			j++;
> > +			if (list_empty(&chan->pending_list))
> > +				break;
> > +			desc = list_first_entry(&chan->pending_list,
> > +						struct
> xilinx_dma_tx_descriptor,
> > +						node);
> >  		}
> >  	if (!chan->has_sg) {
> > -		list_del(&desc->node);
> > -		list_add_tail(&desc->node, &chan->active_list);
> > -		chan->desc_submitcount++;
> > -		chan->desc_pendingcount--;
> >  		if (chan->desc_submitcount == chan->num_frms)
> >  			chan->desc_submitcount = 0;
> >  	} else {
> 
> While at it, can you merge this into the previous if (chan->has_sg) { ... } else { ... }
> ? Having them separate is confusing.

Ok sure will fix in v2...

Regards,
Kedar.

> 
> 
> --
> Regards,
> 
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-19 15:39       ` Appana Durga Kedareswara Rao
@ 2016-12-19 17:18         ` Laurent Pinchart
  -1 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-19 17:18 UTC (permalink / raw)
  To: Appana Durga Kedareswara Rao
  Cc: dan.j.williams, vinod.koul, michal.simek, Soren Brinkmann,
	moritz.fischer, luis, Jose.Abreu, dmaengine, linux-arm-kernel,
	linux-kernel

Hi Kedar,

On Monday 19 Dec 2016 15:39:43 Appana Durga Kedareswara Rao wrote:
> Hi Laurent Pinchart,
> 
> 	Thanks for the review...
> 
> > > +	if (!chan->idle)
> > > +		return;
> > 
> > Don't you need to perform the same check for the DMA and CDMA channels ?
> > If so, shouldn't this be moved to common code ?
> 
> Will fix it in v2...
> 
> > There's another problem (not strictly introduced by this patch) I wanted
> > to mention. The append_desc_queue() function, called from your tx_submit
> > handler, appends descriptors to the pending_list. The DMA engine API
> > states that a transfer submitted by tx_submit will not be executed until
> > .issue_pending() is called. However, if a transfer is in progress at
> > tx_submit time, I believe that the IRQ handler, at transfer completion,
> > will start the next transfer from the pending_list even if
> > .issue_pending() hasn't been called for it.
> > 
> > >  	if (list_empty(&chan->pending_list))
> > >  		return;
> 
> If user submits more than h/w limit then that case only driver will process
> The descriptors from the pending_list for other cases the pending_list will
> be Empty so driver just returns from there.

I understand that, but that's not the problem. Your .tx_submit() handler calls 
append_desc_queue() which adds the tx descriptor to the pending_list. If a 
transfer is in progress at that time, I believe the transfer completion IRQ 
handler will take the next descriptor from the pending_list and process it, 
even though issue_pending() hasn't been called for it.

> > Now that you catch busy channels with a software flag, do you still need
> > the xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three
> > different checks for the same or very similar conditions are confusing,
> > if you really need them you should document clearly how they differ.
> 
> Will remove the xilinx_dma_is_running and xilinx_dmais_idle() checks and
> will use Chan->idle check across all the IP's. Will fix it v2...
> 
> > > @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> > > xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> > > last->hw.vsize);
> > > 
> > >  	}
> > > 
> > > +	chan->idle = false;
> > 
> > (and this too)
> 
> Will fix in v2...

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-19 17:18         ` Laurent Pinchart
  0 siblings, 0 replies; 32+ messages in thread
From: Laurent Pinchart @ 2016-12-19 17:18 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Kedar,

On Monday 19 Dec 2016 15:39:43 Appana Durga Kedareswara Rao wrote:
> Hi Laurent Pinchart,
> 
> 	Thanks for the review...
> 
> > > +	if (!chan->idle)
> > > +		return;
> > 
> > Don't you need to perform the same check for the DMA and CDMA channels ?
> > If so, shouldn't this be moved to common code ?
> 
> Will fix it in v2...
> 
> > There's another problem (not strictly introduced by this patch) I wanted
> > to mention. The append_desc_queue() function, called from your tx_submit
> > handler, appends descriptors to the pending_list. The DMA engine API
> > states that a transfer submitted by tx_submit will not be executed until
> > .issue_pending() is called. However, if a transfer is in progress at
> > tx_submit time, I believe that the IRQ handler, at transfer completion,
> > will start the next transfer from the pending_list even if
> > .issue_pending() hasn't been called for it.
> > 
> > >  	if (list_empty(&chan->pending_list))
> > >  		return;
> 
> If user submits more than h/w limit then that case only driver will process
> The descriptors from the pending_list for other cases the pending_list will
> be Empty so driver just returns from there.

I understand that, but that's not the problem. Your .tx_submit() handler calls 
append_desc_queue() which adds the tx descriptor to the pending_list. If a 
transfer is in progress at that time, I believe the transfer completion IRQ 
handler will take the next descriptor from the pending_list and process it, 
even though issue_pending() hasn't been called for it.

> > Now that you catch busy channels with a software flag, do you still need
> > the xilinx_dma_is_running() and xilinx_dma_is_idle() checks ? Three
> > different checks for the same or very similar conditions are confusing,
> > if you really need them you should document clearly how they differ.
> 
> Will remove the xilinx_dma_is_running and xilinx_dmais_idle() checks and
> will use Chan->idle check across all the IP's. Will fix it v2...
> 
> > > @@ -1110,6 +1116,7 @@ static void xilinx_vdma_start_transfer(struct
> > > xilinx_dma_chan *chan) vdma_desc_write(chan, XILINX_DMA_REG_VSIZE,
> > > last->hw.vsize);
> > > 
> > >  	}
> > > 
> > > +	chan->idle = false;
> > 
> > (and this too)
> 
> Will fix in v2...

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 32+ messages in thread

* RE: [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
  2016-12-19 17:18         ` Laurent Pinchart
@ 2016-12-23  8:49           ` Appana Durga Kedareswara Rao
  -1 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-23  8:49 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: dan.j.williams, vinod.koul, michal.simek, Soren Brinkmann,
	moritz.fischer, luis, Jose.Abreu, dmaengine, linux-arm-kernel,
	linux-kernel

Hi Laurent Pinchart,

	Sorry for the delay in the reply.
	Thanks for the review...
> 
> Hi Kedar,
> 
> On Monday 19 Dec 2016 15:39:43 Appana Durga Kedareswara Rao wrote:
> > Hi Laurent Pinchart,
> >
> > 	Thanks for the review...
> >
> > > > +	if (!chan->idle)
> > > > +		return;
> > >
> > > Don't you need to perform the same check for the DMA and CDMA channels
> ?
> > > If so, shouldn't this be moved to common code ?
> >
> > Will fix it in v2...
> >
> > > There's another problem (not strictly introduced by this patch) I
> > > wanted to mention. The append_desc_queue() function, called from
> > > your tx_submit handler, appends descriptors to the pending_list. The
> > > DMA engine API states that a transfer submitted by tx_submit will
> > > not be executed until
> > > .issue_pending() is called. However, if a transfer is in progress at
> > > tx_submit time, I believe that the IRQ handler, at transfer
> > > completion, will start the next transfer from the pending_list even
> > > if
> > > .issue_pending() hasn't been called for it.
> > >
> > > >  	if (list_empty(&chan->pending_list))
> > > >  		return;
> >
> > If user submits more than h/w limit then that case only driver will
> > process The descriptors from the pending_list for other cases the
> > pending_list will be Empty so driver just returns from there.
> 
> I understand that, but that's not the problem. Your .tx_submit() handler calls
> append_desc_queue() which adds the tx descriptor to the pending_list. If a
> transfer is in progress at that time, I believe the transfer completion IRQ handler
> will take the next descriptor from the pending_list and process it, even though
> issue_pending() hasn't been called for it.
> 

Thanks for the explanation...
Agree will keep this my to-do list...

Regards,
Kedar.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor
@ 2016-12-23  8:49           ` Appana Durga Kedareswara Rao
  0 siblings, 0 replies; 32+ messages in thread
From: Appana Durga Kedareswara Rao @ 2016-12-23  8:49 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Laurent Pinchart,

	Sorry for the delay in the reply.
	Thanks for the review...
> 
> Hi Kedar,
> 
> On Monday 19 Dec 2016 15:39:43 Appana Durga Kedareswara Rao wrote:
> > Hi Laurent Pinchart,
> >
> > 	Thanks for the review...
> >
> > > > +	if (!chan->idle)
> > > > +		return;
> > >
> > > Don't you need to perform the same check for the DMA and CDMA channels
> ?
> > > If so, shouldn't this be moved to common code ?
> >
> > Will fix it in v2...
> >
> > > There's another problem (not strictly introduced by this patch) I
> > > wanted to mention. The append_desc_queue() function, called from
> > > your tx_submit handler, appends descriptors to the pending_list. The
> > > DMA engine API states that a transfer submitted by tx_submit will
> > > not be executed until
> > > .issue_pending() is called. However, if a transfer is in progress at
> > > tx_submit time, I believe that the IRQ handler, at transfer
> > > completion, will start the next transfer from the pending_list even
> > > if
> > > .issue_pending() hasn't been called for it.
> > >
> > > >  	if (list_empty(&chan->pending_list))
> > > >  		return;
> >
> > If user submits more than h/w limit then that case only driver will
> > process The descriptors from the pending_list for other cases the
> > pending_list will be Empty so driver just returns from there.
> 
> I understand that, but that's not the problem. Your .tx_submit() handler calls
> append_desc_queue() which adds the tx descriptor to the pending_list. If a
> transfer is in progress at that time, I believe the transfer completion IRQ handler
> will take the next descriptor from the pending_list and process it, even though
> issue_pending() hasn't been called for it.
> 

Thanks for the explanation...
Agree will keep this my to-do list...

Regards,
Kedar.

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2016-12-23  9:21 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-15 15:11 [PATCH 0/3] dmaengine: xilinx_dma: Bug fixes Kedareswara rao Appana
2016-12-15 15:11 ` Kedareswara rao Appana
2016-12-15 15:11 ` [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor Kedareswara rao Appana
2016-12-15 15:11   ` Kedareswara rao Appana
2016-12-15 15:45   ` Jose Abreu
2016-12-15 15:45     ` Jose Abreu
2016-12-15 18:49     ` Appana Durga Kedareswara Rao
2016-12-15 18:49       ` Appana Durga Kedareswara Rao
2016-12-16 15:35   ` Laurent Pinchart
2016-12-16 15:35     ` Laurent Pinchart
2016-12-19 15:39     ` Appana Durga Kedareswara Rao
2016-12-19 15:39       ` Appana Durga Kedareswara Rao
2016-12-19 17:18       ` Laurent Pinchart
2016-12-19 17:18         ` Laurent Pinchart
2016-12-23  8:49         ` Appana Durga Kedareswara Rao
2016-12-23  8:49           ` Appana Durga Kedareswara Rao
2016-12-15 15:11 ` [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma Kedareswara rao Appana
2016-12-15 15:11   ` Kedareswara rao Appana
2016-12-15 16:10   ` Jose Abreu
2016-12-15 16:10     ` Jose Abreu
2016-12-15 19:09     ` Appana Durga Kedareswara Rao
2016-12-15 19:09       ` Appana Durga Kedareswara Rao
2016-12-16 10:11       ` Jose Abreu
2016-12-16 10:11         ` Jose Abreu
2016-12-19 15:40         ` Appana Durga Kedareswara Rao
2016-12-19 15:40           ` Appana Durga Kedareswara Rao
2016-12-16 15:54   ` Laurent Pinchart
2016-12-16 15:54     ` Laurent Pinchart
2016-12-19 15:41     ` Appana Durga Kedareswara Rao
2016-12-19 15:41       ` Appana Durga Kedareswara Rao
2016-12-15 15:11 ` [PATCH 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario Kedareswara rao Appana
2016-12-15 15:11   ` Kedareswara rao Appana

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.