* [PATCH v2 0/5] dmaengine: mv_xor: Fixes and enhancements
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard

Hi,

This series contains various fixes and enhancements that were
previously submitted as part of the "ARM: mvebu: Add support for RAID6
PQ offloading" series.

Since the RAID6 PQ support turned out to be more controversial than I
expected, we took out the various bug fix and enhancement patches, and
will submit the remaining patches as a separate set.

Changes from v1:
  - Moved the bugfix commit as the first commit and cc'd stable
  - Changed the Armada 38x compatible to armada-380-xor
  - Made some misc cleanups

Lior Amsalem (4):
  dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
  dmaengine: mv_xor: add support for a38x command in descriptor mode
  dmaengine: mv_xor: Enlarge descriptor pool size
  dmaengine: mv_xor: improve descriptors list handling and reduce
    locking

Maxime Ripard (1):
  dmaengine: mv_xor: Rename function for consistent naming

 Documentation/devicetree/bindings/dma/mv-xor.txt |   2 +-
 drivers/dma/mv_xor.c                             | 352 ++++++++++++-----------
 drivers/dma/mv_xor.h                             |  27 +-
 3 files changed, 206 insertions(+), 175 deletions(-)

-- 
2.4.1


* [PATCH v2 1/5] dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard, stable

From: Lior Amsalem <alior@marvell.com>

This patch fixes a bug in the XOR driver where the cleanup function could
be called and free descriptors that had never been processed by the
engine (resulting in data errors).

The cleanup function now frees descriptors based on the ownership bit in
the descriptors.
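
A rough sketch of the resulting cleanup loop (simplified from the diff
below; the locking and the current-descriptor bookkeeping are omitted,
with the per-descriptor success flag standing in for the ownership
information):

/* Only reap descriptors the engine has flagged as completed. */
list_for_each_entry_safe(iter, _iter, &mv_chan->chain, chain_node) {
	struct mv_xor_desc *hw_desc = iter->hw_desc;

	if (!(hw_desc->status & XOR_DESC_SUCCESS))
		break;	/* not processed by the engine yet */

	cookie = mv_xor_run_tx_complete_actions(iter, mv_chan, cookie);
	mv_xor_clean_slot(iter, mv_chan);
}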

Fixes: ff7b04796d98 ("dmaengine: DMA engine driver for Marvell XOR engine")
Signed-off-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Cc: stable@vger.kernel.org

---
 drivers/dma/mv_xor.c | 72 +++++++++++++++++++++++++++++++++-------------------
 drivers/dma/mv_xor.h |  1 +
 2 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 1c56001df676..50f1b422dee3 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -273,7 +273,8 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 	dma_cookie_t cookie = 0;
 	int busy = mv_chan_is_busy(mv_chan);
 	u32 current_desc = mv_chan_get_current_desc(mv_chan);
-	int seen_current = 0;
+	int current_cleaned = 0;
+	struct mv_xor_desc *hw_desc;
 
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
 	dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc);
@@ -285,38 +286,57 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 
 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
 					chain_node) {
-		prefetch(_iter);
-		prefetch(&_iter->async_tx);
 
-		/* do not advance past the current descriptor loaded into the
-		 * hardware channel, subsequent descriptors are either in
-		 * process or have not been submitted
-		 */
-		if (seen_current)
-			break;
+		/* clean finished descriptors */
+		hw_desc = iter->hw_desc;
+		if (hw_desc->status & XOR_DESC_SUCCESS) {
+			cookie = mv_xor_run_tx_complete_actions(iter, mv_chan,
+								cookie);
 
-		/* stop the search if we reach the current descriptor and the
-		 * channel is busy
-		 */
-		if (iter->async_tx.phys == current_desc) {
-			seen_current = 1;
-			if (busy)
+			/* done processing desc, clean slot */
+			mv_xor_clean_slot(iter, mv_chan);
+
+			/* break if we cleaned the current descriptor */
+			if (iter->async_tx.phys == current_desc) {
+				current_cleaned = 1;
+				break;
+			}
+		} else {
+			if (iter->async_tx.phys == current_desc) {
+				current_cleaned = 0;
 				break;
+			}
 		}
-
-		cookie = mv_xor_run_tx_complete_actions(iter, mv_chan, cookie);
-
-		if (mv_xor_clean_slot(iter, mv_chan))
-			break;
 	}
 
 	if ((busy == 0) && !list_empty(&mv_chan->chain)) {
-		struct mv_xor_desc_slot *chain_head;
-		chain_head = list_entry(mv_chan->chain.next,
-					struct mv_xor_desc_slot,
-					chain_node);
-
-		mv_xor_start_new_chain(mv_chan, chain_head);
+		if (current_cleaned) {
+			/*
+			 * current descriptor cleaned and removed, run
+			 * from list head
+			 */
+			iter = list_entry(mv_chan->chain.next,
+					  struct mv_xor_desc_slot,
+					  chain_node);
+			mv_xor_start_new_chain(mv_chan, iter);
+		} else {
+			if (!list_is_last(&iter->chain_node, &mv_chan->chain)) {
+				/*
+				 * descriptors are still waiting after
+				 * current, trigger them
+				 */
+				iter = list_entry(iter->chain_node.next,
+						  struct mv_xor_desc_slot,
+						  chain_node);
+				mv_xor_start_new_chain(mv_chan, iter);
+			} else {
+				/*
+				 * some descriptors are still waiting
+				 * to be cleaned
+				 */
+				tasklet_schedule(&mv_chan->irq_tasklet);
+			}
+		}
 	}
 
 	if (cookie > 0)
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index 91958dba39a2..0e302b3a33ad 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -31,6 +31,7 @@
 #define XOR_OPERATION_MODE_XOR		0
 #define XOR_OPERATION_MODE_MEMCPY	2
 #define XOR_DESCRIPTOR_SWAP		BIT(14)
+#define XOR_DESC_SUCCESS		0x40000000
 
 #define XOR_DESC_DMA_OWNED		BIT(31)
 #define XOR_DESC_EOD_INT_EN		BIT(31)
-- 
2.4.1


* [PATCH v2 2/5] dmaengine: mv_xor: Rename function for consistent naming
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard

The current function names aren't very consistent: functions with the
same prefix might operate on either a channel or a descriptor, which is
confusing.

Rename these functions to have a consistent and clearer naming scheme.
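
As a quick illustration of the convention (prototypes taken from the
diff below), mv_chan_* helpers now take a channel and mv_desc_* helpers
take a descriptor slot:

/* mv_chan_*: operate on a struct mv_xor_chan */
static void mv_chan_clear_eoc_cause(struct mv_xor_chan *chan);
static void mv_chan_set_mode(struct mv_xor_chan *chan,
			     enum dma_transaction_type type);

/* mv_desc_*: operate on a struct mv_xor_desc_slot */
static int mv_desc_clean_slot(struct mv_xor_desc_slot *desc,
			      struct mv_xor_chan *mv_chan);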

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
---
 drivers/dma/mv_xor.c | 87 ++++++++++++++++++++++++++--------------------------
 1 file changed, 44 insertions(+), 43 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 50f1b422dee3..51433c4020fe 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -104,7 +104,7 @@ static u32 mv_chan_get_intr_cause(struct mv_xor_chan *chan)
 	return intr_cause;
 }
 
-static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
+static void mv_chan_clear_eoc_cause(struct mv_xor_chan *chan)
 {
 	u32 val;
 
@@ -114,14 +114,14 @@ static void mv_xor_device_clear_eoc_cause(struct mv_xor_chan *chan)
 	writel_relaxed(val, XOR_INTR_CAUSE(chan));
 }
 
-static void mv_xor_device_clear_err_status(struct mv_xor_chan *chan)
+static void mv_chan_clear_err_status(struct mv_xor_chan *chan)
 {
 	u32 val = 0xFFFF0000 >> (chan->idx * 16);
 	writel_relaxed(val, XOR_INTR_CAUSE(chan));
 }
 
-static void mv_set_mode(struct mv_xor_chan *chan,
-			       enum dma_transaction_type type)
+static void mv_chan_set_mode(struct mv_xor_chan *chan,
+			     enum dma_transaction_type type)
 {
 	u32 op_mode;
 	u32 config = readl_relaxed(XOR_CONFIG(chan));
@@ -172,12 +172,12 @@ static char mv_chan_is_busy(struct mv_xor_chan *chan)
 }
 
 /**
- * mv_xor_free_slots - flags descriptor slots for reuse
+ * mv_chan_free_slots - flags descriptor slots for reuse
  * @slot: Slot to free
  * Caller must hold &mv_chan->lock while calling this function
  */
-static void mv_xor_free_slots(struct mv_xor_chan *mv_chan,
-			      struct mv_xor_desc_slot *slot)
+static void mv_chan_free_slots(struct mv_xor_chan *mv_chan,
+			       struct mv_xor_desc_slot *slot)
 {
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d slot %p\n",
 		__func__, __LINE__, slot);
@@ -187,12 +187,12 @@ static void mv_xor_free_slots(struct mv_xor_chan *mv_chan,
 }
 
 /*
- * mv_xor_start_new_chain - program the engine to operate on new chain headed by
- * sw_desc
+ * mv_chan_start_new_chain - program the engine to operate on new
+ * chain headed by sw_desc
  * Caller must hold &mv_chan->lock while calling this function
  */
-static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan,
-				   struct mv_xor_desc_slot *sw_desc)
+static void mv_chan_start_new_chain(struct mv_xor_chan *mv_chan,
+				    struct mv_xor_desc_slot *sw_desc)
 {
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: sw_desc %p\n",
 		__func__, __LINE__, sw_desc);
@@ -205,8 +205,9 @@ static void mv_xor_start_new_chain(struct mv_xor_chan *mv_chan,
 }
 
 static dma_cookie_t
-mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
-	struct mv_xor_chan *mv_chan, dma_cookie_t cookie)
+mv_desc_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
+				struct mv_xor_chan *mv_chan,
+				dma_cookie_t cookie)
 {
 	BUG_ON(desc->async_tx.cookie < 0);
 
@@ -230,7 +231,7 @@ mv_xor_run_tx_complete_actions(struct mv_xor_desc_slot *desc,
 }
 
 static int
-mv_xor_clean_completed_slots(struct mv_xor_chan *mv_chan)
+mv_chan_clean_completed_slots(struct mv_xor_chan *mv_chan)
 {
 	struct mv_xor_desc_slot *iter, *_iter;
 
@@ -240,15 +241,15 @@ mv_xor_clean_completed_slots(struct mv_xor_chan *mv_chan)
 
 		if (async_tx_test_ack(&iter->async_tx)) {
 			list_del(&iter->completed_node);
-			mv_xor_free_slots(mv_chan, iter);
+			mv_chan_free_slots(mv_chan, iter);
 		}
 	}
 	return 0;
 }
 
 static int
-mv_xor_clean_slot(struct mv_xor_desc_slot *desc,
-	struct mv_xor_chan *mv_chan)
+mv_desc_clean_slot(struct mv_xor_desc_slot *desc,
+		   struct mv_xor_chan *mv_chan)
 {
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: desc %p flags %d\n",
 		__func__, __LINE__, desc, desc->async_tx.flags);
@@ -262,12 +263,12 @@ mv_xor_clean_slot(struct mv_xor_desc_slot *desc,
 		return 0;
 	}
 
-	mv_xor_free_slots(mv_chan, desc);
+	mv_chan_free_slots(mv_chan, desc);
 	return 0;
 }
 
 /* This function must be called with the mv_xor_chan spinlock held */
-static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
+static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
 {
 	struct mv_xor_desc_slot *iter, *_iter;
 	dma_cookie_t cookie = 0;
@@ -278,7 +279,7 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
 	dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc);
-	mv_xor_clean_completed_slots(mv_chan);
+	mv_chan_clean_completed_slots(mv_chan);
 
 	/* free completed slots from the chain starting with
 	 * the oldest descriptor
@@ -290,11 +291,11 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 		/* clean finished descriptors */
 		hw_desc = iter->hw_desc;
 		if (hw_desc->status & XOR_DESC_SUCCESS) {
-			cookie = mv_xor_run_tx_complete_actions(iter, mv_chan,
-								cookie);
+			cookie = mv_desc_run_tx_complete_actions(iter, mv_chan,
+								 cookie);
 
 			/* done processing desc, clean slot */
-			mv_xor_clean_slot(iter, mv_chan);
+			mv_desc_clean_slot(iter, mv_chan);
 
 			/* break if we cleaned the current descriptor */
 			if (iter->async_tx.phys == current_desc) {
@@ -318,7 +319,7 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 			iter = list_entry(mv_chan->chain.next,
 					  struct mv_xor_desc_slot,
 					  chain_node);
-			mv_xor_start_new_chain(mv_chan, iter);
+			mv_chan_start_new_chain(mv_chan, iter);
 		} else {
 			if (!list_is_last(&iter->chain_node, &mv_chan->chain)) {
 				/*
@@ -328,7 +329,7 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
 				iter = list_entry(iter->chain_node.next,
 						  struct mv_xor_desc_slot,
 						  chain_node);
-				mv_xor_start_new_chain(mv_chan, iter);
+				mv_chan_start_new_chain(mv_chan, iter);
 			} else {
 				/*
 				 * some descriptors are still waiting
@@ -348,12 +349,12 @@ static void mv_xor_tasklet(unsigned long data)
 	struct mv_xor_chan *chan = (struct mv_xor_chan *) data;
 
 	spin_lock_bh(&chan->lock);
-	mv_xor_slot_cleanup(chan);
+	mv_chan_slot_cleanup(chan);
 	spin_unlock_bh(&chan->lock);
 }
 
 static struct mv_xor_desc_slot *
-mv_xor_alloc_slot(struct mv_xor_chan *mv_chan)
+mv_chan_alloc_slot(struct mv_xor_chan *mv_chan)
 {
 	struct mv_xor_desc_slot *iter, *_iter;
 	int retry = 0;
@@ -451,7 +452,7 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
 	}
 
 	if (new_hw_chain)
-		mv_xor_start_new_chain(mv_chan, sw_desc);
+		mv_chan_start_new_chain(mv_chan, sw_desc);
 
 	spin_unlock_bh(&mv_chan->lock);
 
@@ -524,7 +525,7 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
 		__func__, src_cnt, len, &dest, flags);
 
 	spin_lock_bh(&mv_chan->lock);
-	sw_desc = mv_xor_alloc_slot(mv_chan);
+	sw_desc = mv_chan_alloc_slot(mv_chan);
 	if (sw_desc) {
 		sw_desc->type = DMA_XOR;
 		sw_desc->async_tx.flags = flags;
@@ -576,7 +577,7 @@ static void mv_xor_free_chan_resources(struct dma_chan *chan)
 
 	spin_lock_bh(&mv_chan->lock);
 
-	mv_xor_slot_cleanup(mv_chan);
+	mv_chan_slot_cleanup(mv_chan);
 
 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
 					chain_node) {
@@ -623,13 +624,13 @@ static enum dma_status mv_xor_status(struct dma_chan *chan,
 		return ret;
 
 	spin_lock_bh(&mv_chan->lock);
-	mv_xor_slot_cleanup(mv_chan);
+	mv_chan_slot_cleanup(mv_chan);
 	spin_unlock_bh(&mv_chan->lock);
 
 	return dma_cookie_status(chan, cookie, txstate);
 }
 
-static void mv_dump_xor_regs(struct mv_xor_chan *chan)
+static void mv_chan_dump_regs(struct mv_xor_chan *chan)
 {
 	u32 val;
 
@@ -652,8 +653,8 @@ static void mv_dump_xor_regs(struct mv_xor_chan *chan)
 	dev_err(mv_chan_to_devp(chan), "error addr   0x%08x\n", val);
 }
 
-static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan,
-					 u32 intr_cause)
+static void mv_chan_err_interrupt_handler(struct mv_xor_chan *chan,
+					  u32 intr_cause)
 {
 	if (intr_cause & XOR_INT_ERR_DECODE) {
 		dev_dbg(mv_chan_to_devp(chan), "ignoring address decode error\n");
@@ -663,7 +664,7 @@ static void mv_xor_err_interrupt_handler(struct mv_xor_chan *chan,
 	dev_err(mv_chan_to_devp(chan), "error on chan %d. intr cause 0x%08x\n",
 		chan->idx, intr_cause);
 
-	mv_dump_xor_regs(chan);
+	mv_chan_dump_regs(chan);
 	WARN_ON(1);
 }
 
@@ -675,11 +676,11 @@ static irqreturn_t mv_xor_interrupt_handler(int irq, void *data)
 	dev_dbg(mv_chan_to_devp(chan), "intr cause %x\n", intr_cause);
 
 	if (intr_cause & XOR_INTR_ERRORS)
-		mv_xor_err_interrupt_handler(chan, intr_cause);
+		mv_chan_err_interrupt_handler(chan, intr_cause);
 
 	tasklet_schedule(&chan->irq_tasklet);
 
-	mv_xor_device_clear_eoc_cause(chan);
+	mv_chan_clear_eoc_cause(chan);
 
 	return IRQ_HANDLED;
 }
@@ -698,7 +699,7 @@ static void mv_xor_issue_pending(struct dma_chan *chan)
  * Perform a transaction to verify the HW works.
  */
 
-static int mv_xor_memcpy_self_test(struct mv_xor_chan *mv_chan)
+static int mv_chan_memcpy_self_test(struct mv_xor_chan *mv_chan)
 {
 	int i, ret;
 	void *src, *dest;
@@ -807,7 +808,7 @@ out:
 
 #define MV_XOR_NUM_SRC_TEST 4 /* must be <= 15 */
 static int
-mv_xor_xor_self_test(struct mv_xor_chan *mv_chan)
+mv_chan_xor_self_test(struct mv_xor_chan *mv_chan)
 {
 	int i, src_idx, ret;
 	struct page *dest;
@@ -1034,7 +1035,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 		     mv_chan);
 
 	/* clear errors before enabling interrupts */
-	mv_xor_device_clear_err_status(mv_chan);
+	mv_chan_clear_err_status(mv_chan);
 
 	ret = request_irq(mv_chan->irq, mv_xor_interrupt_handler,
 			  0, dev_name(&pdev->dev), mv_chan);
@@ -1043,7 +1044,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 
 	mv_chan_unmask_interrupts(mv_chan);
 
-	mv_set_mode(mv_chan, DMA_XOR);
+	mv_chan_set_mode(mv_chan, DMA_XOR);
 
 	spin_lock_init(&mv_chan->lock);
 	INIT_LIST_HEAD(&mv_chan->chain);
@@ -1055,14 +1056,14 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 	list_add_tail(&mv_chan->dmachan.device_node, &dma_dev->channels);
 
 	if (dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask)) {
-		ret = mv_xor_memcpy_self_test(mv_chan);
+		ret = mv_chan_memcpy_self_test(mv_chan);
 		dev_dbg(&pdev->dev, "memcpy self test returned %d\n", ret);
 		if (ret)
 			goto err_free_irq;
 	}
 
 	if (dma_has_cap(DMA_XOR, dma_dev->cap_mask)) {
-		ret = mv_xor_xor_self_test(mv_chan);
+		ret = mv_chan_xor_self_test(mv_chan);
 		dev_dbg(&pdev->dev, "xor self test returned %d\n", ret);
 		if (ret)
 			goto err_free_irq;
-- 
2.4.1


* [PATCH v2 3/5] dmaengine: mv_xor: add support for a38x command in descriptor mode
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard

From: Lior Amsalem <alior@marvell.com>

The Marvell Armada 38x SoC introduces new features to the XOR engine,
most notably the ability to select the engine mode (MEMCPY/XOR/PQ/etc.)
in each descriptor rather than through the controller registers.

This new feature allows mixing different commands (even PQ) on the same
channel/chain without stopping the engine to reconfigure its mode.

Refactor the driver to be able to use that new feature on the Armada 38x,
while keeping the old behaviour on the older SoCs.
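
Condensed from the diff below: in descriptor mode the channel is
configured once for XOR_OPERATION_MODE_IN_DESC, and each descriptor then
carries its own operation in its command word:

/* Channel setup: tell the engine to take the op from each descriptor. */
config = readl_relaxed(XOR_CONFIG(chan));
config &= ~0x7;
config |= XOR_OPERATION_MODE_IN_DESC;
writel_relaxed(config, XOR_CONFIG(chan));

/* Per-descriptor: encode the operation into the command word. */
hw_desc->desc_command |= (desc->type == DMA_MEMCPY) ?
			 XOR_DESC_OPERATION_MEMCPY :
			 XOR_DESC_OPERATION_XOR;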

Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
---
 Documentation/devicetree/bindings/dma/mv-xor.txt |  2 +-
 drivers/dma/mv_xor.c                             | 82 ++++++++++++++++++++----
 drivers/dma/mv_xor.h                             |  7 ++
 3 files changed, 76 insertions(+), 15 deletions(-)

diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt
index 7c6cb7fcecd2..cc29c35266e2 100644
--- a/Documentation/devicetree/bindings/dma/mv-xor.txt
+++ b/Documentation/devicetree/bindings/dma/mv-xor.txt
@@ -1,7 +1,7 @@
 * Marvell XOR engines
 
 Required properties:
-- compatible: Should be "marvell,orion-xor"
+- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
 - reg: Should contain registers location and length (two sets)
     the first set is the low registers, the second set the high
     registers for the XOR engine.
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 51433c4020fe..669d0b5029d1 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -19,6 +19,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
 #include <linux/memory.h>
 #include <linux/clk.h>
@@ -30,6 +31,11 @@
 #include "dmaengine.h"
 #include "mv_xor.h"
 
+enum mv_xor_mode {
+	XOR_MODE_IN_REG,
+	XOR_MODE_IN_DESC,
+};
+
 static void mv_xor_issue_pending(struct dma_chan *chan);
 
 #define to_mv_xor_chan(chan)		\
@@ -56,6 +62,24 @@ static void mv_desc_init(struct mv_xor_desc_slot *desc,
 	hw_desc->byte_count = byte_count;
 }
 
+static void mv_desc_set_mode(struct mv_xor_desc_slot *desc)
+{
+	struct mv_xor_desc *hw_desc = desc->hw_desc;
+
+	switch (desc->type) {
+	case DMA_XOR:
+	case DMA_INTERRUPT:
+		hw_desc->desc_command |= XOR_DESC_OPERATION_XOR;
+		break;
+	case DMA_MEMCPY:
+		hw_desc->desc_command |= XOR_DESC_OPERATION_MEMCPY;
+		break;
+	default:
+		BUG();
+		return;
+	}
+}
+
 static void mv_desc_set_next_desc(struct mv_xor_desc_slot *desc,
 				  u32 next_desc_addr)
 {
@@ -144,6 +168,25 @@ static void mv_chan_set_mode(struct mv_xor_chan *chan,
 	config &= ~0x7;
 	config |= op_mode;
 
+	if (IS_ENABLED(__BIG_ENDIAN))
+		config |= XOR_DESCRIPTOR_SWAP;
+	else
+		config &= ~XOR_DESCRIPTOR_SWAP;
+
+	writel_relaxed(config, XOR_CONFIG(chan));
+	chan->current_type = type;
+}
+
+static void mv_chan_set_mode_to_desc(struct mv_xor_chan *chan)
+{
+	u32 op_mode;
+	u32 config = readl_relaxed(XOR_CONFIG(chan));
+
+	op_mode = XOR_OPERATION_MODE_IN_DESC;
+
+	config &= ~0x7;
+	config |= op_mode;
+
 #if defined(__BIG_ENDIAN)
 	config |= XOR_DESCRIPTOR_SWAP;
 #else
@@ -151,7 +194,6 @@ static void mv_chan_set_mode(struct mv_xor_chan *chan,
 #endif
 
 	writel_relaxed(config, XOR_CONFIG(chan));
-	chan->current_type = type;
 }
 
 static void mv_chan_activate(struct mv_xor_chan *chan)
@@ -530,6 +572,8 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
 		sw_desc->type = DMA_XOR;
 		sw_desc->async_tx.flags = flags;
 		mv_desc_init(sw_desc, dest, len, flags);
+		if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
+			mv_desc_set_mode(sw_desc);
 		while (src_cnt--)
 			mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]);
 	}
@@ -972,7 +1016,7 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
 static struct mv_xor_chan *
 mv_xor_channel_add(struct mv_xor_device *xordev,
 		   struct platform_device *pdev,
-		   int idx, dma_cap_mask_t cap_mask, int irq)
+		   int idx, dma_cap_mask_t cap_mask, int irq, int op_in_desc)
 {
 	int ret = 0;
 	struct mv_xor_chan *mv_chan;
@@ -984,6 +1028,7 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 
 	mv_chan->idx = idx;
 	mv_chan->irq = irq;
+	mv_chan->op_in_desc = op_in_desc;
 
 	dma_dev = &mv_chan->dmadev;
 
@@ -1044,7 +1089,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 
 	mv_chan_unmask_interrupts(mv_chan);
 
-	mv_chan_set_mode(mv_chan, DMA_XOR);
+	if (mv_chan->op_in_desc == XOR_MODE_IN_DESC)
+		mv_chan_set_mode_to_desc(mv_chan);
+	else
+		mv_chan_set_mode(mv_chan, DMA_XOR);
 
 	spin_lock_init(&mv_chan->lock);
 	INIT_LIST_HEAD(&mv_chan->chain);
@@ -1069,7 +1117,8 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 			goto err_free_irq;
 	}
 
-	dev_info(&pdev->dev, "Marvell XOR: ( %s%s%s)\n",
+	dev_info(&pdev->dev, "Marvell XOR (%s): ( %s%s%s)\n",
+		 mv_chan->op_in_desc ? "Descriptor Mode" : "Registers Mode",
 		 dma_has_cap(DMA_XOR, dma_dev->cap_mask) ? "xor " : "",
 		 dma_has_cap(DMA_MEMCPY, dma_dev->cap_mask) ? "cpy " : "",
 		 dma_has_cap(DMA_INTERRUPT, dma_dev->cap_mask) ? "intr " : "");
@@ -1118,6 +1167,13 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
 	writel(0, base + WINDOW_OVERRIDE_CTRL(1));
 }
 
+static const struct of_device_id mv_xor_dt_ids[] = {
+	{ .compatible = "marvell,orion-xor", .data = (void *)XOR_MODE_IN_REG },
+	{ .compatible = "marvell,armada-380-xor", .data = (void *)XOR_MODE_IN_DESC },
+	{},
+};
+MODULE_DEVICE_TABLE(of, mv_xor_dt_ids);
+
 static int mv_xor_probe(struct platform_device *pdev)
 {
 	const struct mbus_dram_target_info *dram;
@@ -1125,6 +1181,7 @@ static int mv_xor_probe(struct platform_device *pdev)
 	struct mv_xor_platform_data *pdata = dev_get_platdata(&pdev->dev);
 	struct resource *res;
 	int i, ret;
+	int op_in_desc;
 
 	dev_notice(&pdev->dev, "Marvell shared XOR driver\n");
 
@@ -1169,11 +1226,15 @@ static int mv_xor_probe(struct platform_device *pdev)
 	if (pdev->dev.of_node) {
 		struct device_node *np;
 		int i = 0;
+		const struct of_device_id *of_id =
+			of_match_device(mv_xor_dt_ids,
+					&pdev->dev);
 
 		for_each_child_of_node(pdev->dev.of_node, np) {
 			struct mv_xor_chan *chan;
 			dma_cap_mask_t cap_mask;
 			int irq;
+			op_in_desc = (int)of_id->data;
 
 			dma_cap_zero(cap_mask);
 			if (of_property_read_bool(np, "dmacap,memcpy"))
@@ -1190,7 +1251,7 @@ static int mv_xor_probe(struct platform_device *pdev)
 			}
 
 			chan = mv_xor_channel_add(xordev, pdev, i,
-						  cap_mask, irq);
+						  cap_mask, irq, op_in_desc);
 			if (IS_ERR(chan)) {
 				ret = PTR_ERR(chan);
 				irq_dispose_mapping(irq);
@@ -1219,7 +1280,8 @@ static int mv_xor_probe(struct platform_device *pdev)
 			}
 
 			chan = mv_xor_channel_add(xordev, pdev, i,
-						  cd->cap_mask, irq);
+						  cd->cap_mask, irq,
+						  XOR_MODE_IN_REG);
 			if (IS_ERR(chan)) {
 				ret = PTR_ERR(chan);
 				goto err_channel_add;
@@ -1265,14 +1327,6 @@ static int mv_xor_remove(struct platform_device *pdev)
 	return 0;
 }
 
-#ifdef CONFIG_OF
-static const struct of_device_id mv_xor_dt_ids[] = {
-       { .compatible = "marvell,orion-xor", },
-       {},
-};
-MODULE_DEVICE_TABLE(of, mv_xor_dt_ids);
-#endif
-
 static struct platform_driver mv_xor_driver = {
 	.probe		= mv_xor_probe,
 	.remove		= mv_xor_remove,
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index 0e302b3a33ad..ac1ce87935de 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -30,9 +30,14 @@
 /* Values for the XOR_CONFIG register */
 #define XOR_OPERATION_MODE_XOR		0
 #define XOR_OPERATION_MODE_MEMCPY	2
+#define XOR_OPERATION_MODE_IN_DESC      7
 #define XOR_DESCRIPTOR_SWAP		BIT(14)
 #define XOR_DESC_SUCCESS		0x40000000
 
+#define XOR_DESC_OPERATION_XOR          (0 << 24)
+#define XOR_DESC_OPERATION_CRC32C       (1 << 24)
+#define XOR_DESC_OPERATION_MEMCPY       (2 << 24)
+
 #define XOR_DESC_DMA_OWNED		BIT(31)
 #define XOR_DESC_EOD_INT_EN		BIT(31)
 
@@ -96,6 +101,7 @@ struct mv_xor_device {
  * @all_slots: complete domain of slots usable by the channel
  * @slots_allocated: records the actual size of the descriptor slot pool
  * @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
+ * @op_in_desc: new mode of driver, each op is written to the descriptor.
  */
 struct mv_xor_chan {
 	int			pending;
@@ -116,6 +122,7 @@ struct mv_xor_chan {
 	struct list_head	all_slots;
 	int			slots_allocated;
 	struct tasklet_struct	irq_tasklet;
+	int                     op_in_desc;
 	char			dummy_src[MV_XOR_MIN_BYTE_COUNT];
 	char			dummy_dst[MV_XOR_MIN_BYTE_COUNT];
 	dma_addr_t		dummy_src_addr, dummy_dst_addr;
-- 
2.4.1


* [PATCH v2 4/5] dmaengine: mv_xor: Enlarge descriptor pool size
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard

From: Lior Amsalem <alior@marvell.com>

Now that we have 2 channels assigned to 2 CPUs and all requests are
chained on the same channels, we need many more descriptors available to
satisfy the async_tx workload.

In our lab, 3072 descriptors was found to be the number that allows the
async_tx stack to work without waiting for free descriptors when new
requests are submitted.
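
For scale: with MV_XOR_SLOT_SIZE at 64 bytes, this grows the pool from
PAGE_SIZE (4 KiB with the usual page size on these SoCs, i.e. 64 slots)
to 64 * 3072 = 192 KiB per channel.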

Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Nadav Haklai <nadavh@marvell.com>
Tested-by: Nadav Haklai <nadavh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
---
 drivers/dma/mv_xor.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index ac1ce87935de..6b10c8c647b9 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -19,7 +19,7 @@
 #include <linux/dmaengine.h>
 #include <linux/interrupt.h>
 
-#define MV_XOR_POOL_SIZE		PAGE_SIZE
+#define MV_XOR_POOL_SIZE		(MV_XOR_SLOT_SIZE * 3072)
 #define MV_XOR_SLOT_SIZE		64
 #define MV_XOR_THRESHOLD		1
 #define MV_XOR_MAX_CHANNELS             2
-- 
2.4.1


* [PATCH v2 5/5] dmaengine: mv_xor: improve descriptors list handling and reduce locking
From: Maxime Ripard @ 2015-05-26 13:07 UTC
  To: Vinod Koul, Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth
  Cc: dmaengine, linux-arm-kernel, linux-kernel, Lior Amsalem,
	Thomas Petazzoni, Maxime Ripard

From: Lior Amsalem <alior@marvell.com>

This patch changes the way free descriptors are marked.

Instead of having a field marking each descriptor as in use, any
descriptor sitting on the free_slots list is free for use.

This simplifies the allocation method and reduces the locking needed.
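
A rough sketch of the resulting scheme, distilled from the diff below:
allocation just pops the head of free_slots, and completed descriptors
migrate between lists instead of flipping a per-slot flag:

/* Allocation: grab the first free slot, if any. */
spin_lock_bh(&mv_chan->lock);
if (!list_empty(&mv_chan->free_slots))
	iter = list_first_entry(&mv_chan->free_slots,
				struct mv_xor_desc_slot, node);
spin_unlock_bh(&mv_chan->lock);

/* Completion: acked descriptors go straight back to the free list,
 * un-acked ones park on completed_slots until the client acks them.
 */
if (async_tx_test_ack(&desc->async_tx))
	list_move_tail(&desc->node, &mv_chan->free_slots);
else
	list_move_tail(&desc->node, &mv_chan->completed_slots);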

Signed-off-by: Lior Amsalem <alior@marvell.com>
Reviewed-by: Ofer Heifetz <oferh@marvell.com>
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
---
 drivers/dma/mv_xor.c | 141 +++++++++++++++++----------------------------------
 drivers/dma/mv_xor.h |  17 +++----
 2 files changed, 53 insertions(+), 105 deletions(-)

diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 669d0b5029d1..fbaf1ead2597 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -88,12 +88,6 @@ static void mv_desc_set_next_desc(struct mv_xor_desc_slot *desc,
 	hw_desc->phy_next_desc = next_desc_addr;
 }
 
-static void mv_desc_clear_next_desc(struct mv_xor_desc_slot *desc)
-{
-	struct mv_xor_desc *hw_desc = desc->hw_desc;
-	hw_desc->phy_next_desc = 0;
-}
-
 static void mv_desc_set_src_addr(struct mv_xor_desc_slot *desc,
 				 int index, dma_addr_t addr)
 {
@@ -213,21 +207,6 @@ static char mv_chan_is_busy(struct mv_xor_chan *chan)
 	return (state == 1) ? 1 : 0;
 }
 
-/**
- * mv_chan_free_slots - flags descriptor slots for reuse
- * @slot: Slot to free
- * Caller must hold &mv_chan->lock while calling this function
- */
-static void mv_chan_free_slots(struct mv_xor_chan *mv_chan,
-			       struct mv_xor_desc_slot *slot)
-{
-	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d slot %p\n",
-		__func__, __LINE__, slot);
-
-	slot->slot_used = 0;
-
-}
-
 /*
  * mv_chan_start_new_chain - program the engine to operate on new
  * chain headed by sw_desc
@@ -279,12 +258,10 @@ mv_chan_clean_completed_slots(struct mv_xor_chan *mv_chan)
 
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
 	list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots,
-				 completed_node) {
+				 node) {
 
-		if (async_tx_test_ack(&iter->async_tx)) {
-			list_del(&iter->completed_node);
-			mv_chan_free_slots(mv_chan, iter);
-		}
+		if (async_tx_test_ack(&iter->async_tx))
+			list_move_tail(&iter->node, &mv_chan->free_slots);
 	}
 	return 0;
 }
@@ -295,17 +272,16 @@ mv_desc_clean_slot(struct mv_xor_desc_slot *desc,
 {
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d: desc %p flags %d\n",
 		__func__, __LINE__, desc, desc->async_tx.flags);
-	list_del(&desc->chain_node);
+
 	/* the client is allowed to attach dependent operations
 	 * until 'ack' is set
 	 */
-	if (!async_tx_test_ack(&desc->async_tx)) {
+	if (!async_tx_test_ack(&desc->async_tx))
 		/* move this slot to the completed_slots */
-		list_add_tail(&desc->completed_node, &mv_chan->completed_slots);
-		return 0;
-	}
+		list_move_tail(&desc->node, &mv_chan->completed_slots);
+	else
+		list_move_tail(&desc->node, &mv_chan->free_slots);
 
-	mv_chan_free_slots(mv_chan, desc);
 	return 0;
 }
 
@@ -328,7 +304,7 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
 	 */
 
 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
-					chain_node) {
+				 node) {
 
 		/* clean finished descriptors */
 		hw_desc = iter->hw_desc;
@@ -360,17 +336,17 @@ static void mv_chan_slot_cleanup(struct mv_xor_chan *mv_chan)
 			 */
 			iter = list_entry(mv_chan->chain.next,
 					  struct mv_xor_desc_slot,
-					  chain_node);
+					  node);
 			mv_chan_start_new_chain(mv_chan, iter);
 		} else {
-			if (!list_is_last(&iter->chain_node, &mv_chan->chain)) {
+			if (!list_is_last(&iter->node, &mv_chan->chain)) {
 				/*
 				 * descriptors are still waiting after
 				 * current, trigger them
 				 */
-				iter = list_entry(iter->chain_node.next,
+				iter = list_entry(iter->node.next,
 						  struct mv_xor_desc_slot,
-						  chain_node);
+						  node);
 				mv_chan_start_new_chain(mv_chan, iter);
 			} else {
 				/*
@@ -398,49 +374,28 @@ static void mv_xor_tasklet(unsigned long data)
 static struct mv_xor_desc_slot *
 mv_chan_alloc_slot(struct mv_xor_chan *mv_chan)
 {
-	struct mv_xor_desc_slot *iter, *_iter;
-	int retry = 0;
+	struct mv_xor_desc_slot *iter;
 
-	/* start search from the last allocated descrtiptor
-	 * if a contiguous allocation can not be found start searching
-	 * from the beginning of the list
-	 */
-retry:
-	if (retry == 0)
-		iter = mv_chan->last_used;
-	else
-		iter = list_entry(&mv_chan->all_slots,
-			struct mv_xor_desc_slot,
-			slot_node);
-
-	list_for_each_entry_safe_continue(
-		iter, _iter, &mv_chan->all_slots, slot_node) {
-
-		prefetch(_iter);
-		prefetch(&_iter->async_tx);
-		if (iter->slot_used) {
-			/* give up after finding the first busy slot
-			 * on the second pass through the list
-			 */
-			if (retry)
-				break;
-			continue;
-		}
+	spin_lock_bh(&mv_chan->lock);
+
+	if (!list_empty(&mv_chan->free_slots)) {
+		iter = list_first_entry(&mv_chan->free_slots,
+					struct mv_xor_desc_slot,
+					node);
+
+		list_move_tail(&iter->node, &mv_chan->allocated_slots);
+
+		spin_unlock_bh(&mv_chan->lock);
 
 		/* pre-ack descriptor */
 		async_tx_ack(&iter->async_tx);
-
-		iter->slot_used = 1;
-		INIT_LIST_HEAD(&iter->chain_node);
 		iter->async_tx.cookie = -EBUSY;
-		mv_chan->last_used = iter;
-		mv_desc_clear_next_desc(iter);
 
 		return iter;
 
 	}
-	if (!retry++)
-		goto retry;
+
+	spin_unlock_bh(&mv_chan->lock);
 
 	/* try to free some slots if the allocation fails */
 	tasklet_schedule(&mv_chan->irq_tasklet);
@@ -466,14 +421,14 @@ mv_xor_tx_submit(struct dma_async_tx_descriptor *tx)
 	cookie = dma_cookie_assign(tx);
 
 	if (list_empty(&mv_chan->chain))
-		list_add_tail(&sw_desc->chain_node, &mv_chan->chain);
+		list_move_tail(&sw_desc->node, &mv_chan->chain);
 	else {
 		new_hw_chain = 0;
 
 		old_chain_tail = list_entry(mv_chan->chain.prev,
 					    struct mv_xor_desc_slot,
-					    chain_node);
-		list_add_tail(&sw_desc->chain_node, &mv_chan->chain);
+					    node);
+		list_move_tail(&sw_desc->node, &mv_chan->chain);
 
 		dev_dbg(mv_chan_to_devp(mv_chan), "Append to last desc %pa\n",
 			&old_chain_tail->async_tx.phys);
@@ -526,26 +481,20 @@ static int mv_xor_alloc_chan_resources(struct dma_chan *chan)
 
 		dma_async_tx_descriptor_init(&slot->async_tx, chan);
 		slot->async_tx.tx_submit = mv_xor_tx_submit;
-		INIT_LIST_HEAD(&slot->chain_node);
-		INIT_LIST_HEAD(&slot->slot_node);
+		INIT_LIST_HEAD(&slot->node);
 		dma_desc = mv_chan->dma_desc_pool;
 		slot->async_tx.phys = dma_desc + idx * MV_XOR_SLOT_SIZE;
 		slot->idx = idx++;
 
 		spin_lock_bh(&mv_chan->lock);
 		mv_chan->slots_allocated = idx;
-		list_add_tail(&slot->slot_node, &mv_chan->all_slots);
+		list_add_tail(&slot->node, &mv_chan->free_slots);
 		spin_unlock_bh(&mv_chan->lock);
 	}
 
-	if (mv_chan->slots_allocated && !mv_chan->last_used)
-		mv_chan->last_used = list_entry(mv_chan->all_slots.next,
-					struct mv_xor_desc_slot,
-					slot_node);
-
 	dev_dbg(mv_chan_to_devp(mv_chan),
-		"allocated %d descriptor slots last_used: %p\n",
-		mv_chan->slots_allocated, mv_chan->last_used);
+		"allocated %d descriptor slots\n",
+		mv_chan->slots_allocated);
 
 	return mv_chan->slots_allocated ? : -ENOMEM;
 }
@@ -566,7 +515,6 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
 		"%s src_cnt: %d len: %u dest %pad flags: %ld\n",
 		__func__, src_cnt, len, &dest, flags);
 
-	spin_lock_bh(&mv_chan->lock);
 	sw_desc = mv_chan_alloc_slot(mv_chan);
 	if (sw_desc) {
 		sw_desc->type = DMA_XOR;
@@ -577,7 +525,7 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
 		while (src_cnt--)
 			mv_desc_set_src_addr(sw_desc, src_cnt, src[src_cnt]);
 	}
-	spin_unlock_bh(&mv_chan->lock);
+
 	dev_dbg(mv_chan_to_devp(mv_chan),
 		"%s sw_desc %p async_tx %p \n",
 		__func__, sw_desc, &sw_desc->async_tx);
@@ -624,22 +572,26 @@ static void mv_xor_free_chan_resources(struct dma_chan *chan)
 	mv_chan_slot_cleanup(mv_chan);
 
 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
-					chain_node) {
+					node) {
 		in_use_descs++;
-		list_del(&iter->chain_node);
+		list_move_tail(&iter->node, &mv_chan->free_slots);
 	}
 	list_for_each_entry_safe(iter, _iter, &mv_chan->completed_slots,
-				 completed_node) {
+				 node) {
+		in_use_descs++;
+		list_move_tail(&iter->node, &mv_chan->free_slots);
+	}
+	list_for_each_entry_safe(iter, _iter, &mv_chan->allocated_slots,
+				 node) {
 		in_use_descs++;
-		list_del(&iter->completed_node);
+		list_move_tail(&iter->node, &mv_chan->free_slots);
 	}
 	list_for_each_entry_safe_reverse(
-		iter, _iter, &mv_chan->all_slots, slot_node) {
-		list_del(&iter->slot_node);
+		iter, _iter, &mv_chan->free_slots, node) {
+		list_del(&iter->node);
 		kfree(iter);
 		mv_chan->slots_allocated--;
 	}
-	mv_chan->last_used = NULL;
 
 	dev_dbg(mv_chan_to_devp(mv_chan), "%s slots_allocated %d\n",
 		__func__, mv_chan->slots_allocated);
@@ -1097,7 +1049,8 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
 	spin_lock_init(&mv_chan->lock);
 	INIT_LIST_HEAD(&mv_chan->chain);
 	INIT_LIST_HEAD(&mv_chan->completed_slots);
-	INIT_LIST_HEAD(&mv_chan->all_slots);
+	INIT_LIST_HEAD(&mv_chan->free_slots);
+	INIT_LIST_HEAD(&mv_chan->allocated_slots);
 	mv_chan->dmachan.device = dma_dev;
 	dma_cookie_init(&mv_chan->dmachan);
 
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index 6b10c8c647b9..b7455b42137b 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -94,11 +94,11 @@ struct mv_xor_device {
  * @mmr_base: memory mapped register base
  * @idx: the index of the xor channel
  * @chain: device chain view of the descriptors
+ * @free_slots: free slots usable by the channel
+ * @allocated_slots: slots allocated by the driver
  * @completed_slots: slots completed by HW but still need to be acked
  * @device: parent device
  * @common: common dmaengine channel object members
- * @last_used: place holder for allocation to continue from where it left off
- * @all_slots: complete domain of slots usable by the channel
  * @slots_allocated: records the actual size of the descriptor slot pool
  * @irq_tasklet: bottom half where mv_xor_slot_cleanup runs
  * @op_in_desc: new mode of driver, each op is writen to descriptor.
@@ -112,14 +112,14 @@ struct mv_xor_chan {
 	int                     irq;
 	enum dma_transaction_type	current_type;
 	struct list_head	chain;
+	struct list_head	free_slots;
+	struct list_head	allocated_slots;
 	struct list_head	completed_slots;
 	dma_addr_t		dma_desc_pool;
 	void			*dma_desc_pool_virt;
 	size_t                  pool_size;
 	struct dma_device	dmadev;
 	struct dma_chan		dmachan;
-	struct mv_xor_desc_slot	*last_used;
-	struct list_head	all_slots;
 	int			slots_allocated;
 	struct tasklet_struct	irq_tasklet;
 	int                     op_in_desc;
@@ -130,9 +130,7 @@ struct mv_xor_chan {
 
 /**
  * struct mv_xor_desc_slot - software descriptor
- * @slot_node: node on the mv_xor_chan.all_slots list
- * @chain_node: node on the mv_xor_chan.chain list
- * @completed_node: node on the mv_xor_chan.completed_slots list
+ * @node: node on the mv_xor_chan lists
  * @hw_desc: virtual address of the hardware descriptor chain
  * @phys: hardware address of the hardware descriptor chain
  * @slot_used: slot in use or not
@@ -141,12 +139,9 @@ struct mv_xor_chan {
  * @async_tx: support for the async_tx api
  */
 struct mv_xor_desc_slot {
-	struct list_head	slot_node;
-	struct list_head	chain_node;
-	struct list_head	completed_node;
+	struct list_head	node;
 	enum dma_transaction_type	type;
 	void			*hw_desc;
-	u16			slot_used;
 	u16			idx;
 	struct dma_async_tx_descriptor	async_tx;
 };
-- 
2.4.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread
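
As a side note on the design above: after this patch a descriptor's
state is encoded purely by which list its single 'node' member is on
(free_slots, allocated_slots, chain or completed_slots), so allocating
and freeing become constant-time list_move_tail() calls under a short
critical section. A simplified kernel-style sketch of the pattern --
names mirror the driver, but this is an illustration, not the driver
code itself:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct slot {
            struct list_head node;  /* on exactly one list at a time */
    };

    /* pop the head of free_slots onto allocated_slots, or return NULL */
    static struct slot *sketch_alloc_slot(spinlock_t *lock,
                                          struct list_head *free_slots,
                                          struct list_head *allocated_slots)
    {
            struct slot *s = NULL;

            spin_lock_bh(lock);
            if (!list_empty(free_slots)) {
                    s = list_first_entry(free_slots, struct slot, node);
                    list_move_tail(&s->node, allocated_slots);
            }
            spin_unlock_bh(lock);
            return s;
    }

    /* completion: move the node again, no in-use flag to clear;
     * the caller holds the channel lock, as in mv_desc_clean_slot()
     */
    static void sketch_free_slot(struct slot *s, struct list_head *free_slots)
    {
            list_move_tail(&s->node, free_slots);
    }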

* Re: [PATCH v2 3/5] dmaengine: mv_xor: add support for a38x command in descriptor mode
  2015-05-26 13:07   ` Maxime Ripard
@ 2015-06-08 10:34     ` Vinod Koul
  -1 siblings, 0 replies; 20+ messages in thread
From: Vinod Koul @ 2015-06-08 10:34 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, dmaengine, linux-arm-kernel, linux-kernel,
	Lior Amsalem, Thomas Petazzoni

On Tue, May 26, 2015 at 03:07:34PM +0200, Maxime Ripard wrote:
> From: Lior Amsalem <alior@marvell.com>
> 
> The Marvell Armada 38x SoC introduces new features to the XOR engine,
> especially the fact that the engine mode (MEMCPY/XOR/PQ/etc) can be part of
> the descriptor and not set through the controller registers.
> 
> This new feature allows mixing of different commands (even PQ) on the same
> channel/chain without the need to stop the engine to reconfigure the engine
> mode.
> 
> Refactor the driver to be able to use that new feature on the Armada 38x,
> while keeping the old behaviour on the older SoCs.
> 
> Signed-off-by: Lior Amsalem <alior@marvell.com>
> Reviewed-by: Ofer Heifetz <oferh@marvell.com>
> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
> ---
>  Documentation/devicetree/bindings/dma/mv-xor.txt |  2 +-
>  drivers/dma/mv_xor.c                             | 82 ++++++++++++++++++++----
>  drivers/dma/mv_xor.h                             |  7 ++
>  3 files changed, 76 insertions(+), 15 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt
> index 7c6cb7fcecd2..cc29c35266e2 100644
> --- a/Documentation/devicetree/bindings/dma/mv-xor.txt
> +++ b/Documentation/devicetree/bindings/dma/mv-xor.txt
> @@ -1,7 +1,7 @@
>  * Marvell XOR engines
>  
>  Required properties:
> -- compatible: Should be "marvell,orion-xor"
> +- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
marvell,armada-380-xor doesn't seem to exist in the binding?

-- 
~Vinod


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 3/5] dmaengine: mv_xor: add support for a38x command in descriptor mode
  2015-06-08 10:34     ` Vinod Koul
@ 2015-06-08 20:53       ` Maxime Ripard
  -1 siblings, 0 replies; 20+ messages in thread
From: Maxime Ripard @ 2015-06-08 20:53 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, dmaengine, linux-arm-kernel, linux-kernel,
	Lior Amsalem, Thomas Petazzoni

Hi Vinod,

On Mon, Jun 08, 2015 at 04:04:47PM +0530, Vinod Koul wrote:
> On Tue, May 26, 2015 at 03:07:34PM +0200, Maxime Ripard wrote:
> > From: Lior Amsalem <alior@marvell.com>
> > 
> > The Marvell Armada 38x SoC introduces new features to the XOR engine,
> > especially the fact that the engine mode (MEMCPY/XOR/PQ/etc) can be part of
> > the descriptor and not set through the controller registers.
> > 
> > This new feature allows mixing of different commands (even PQ) on the same
> > channel/chain without the need to stop the engine to reconfigure the engine
> > mode.
> > 
> > Refactor the driver to be able to use that new feature on the Armada 38x,
> > while keeping the old behaviour on the older SoCs.
> > 
> > Signed-off-by: Lior Amsalem <alior@marvell.com>
> > Reviewed-by: Ofer Heifetz <oferh@marvell.com>
> > Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
> > ---
> >  Documentation/devicetree/bindings/dma/mv-xor.txt |  2 +-
> >  drivers/dma/mv_xor.c                             | 82 ++++++++++++++++++++----
> >  drivers/dma/mv_xor.h                             |  7 ++
> >  3 files changed, 76 insertions(+), 15 deletions(-)
> > 
> > diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt
> > index 7c6cb7fcecd2..cc29c35266e2 100644
> > --- a/Documentation/devicetree/bindings/dma/mv-xor.txt
> > +++ b/Documentation/devicetree/bindings/dma/mv-xor.txt
> > @@ -1,7 +1,7 @@
> >  * Marvell XOR engines
> >  
> >  Required properties:
> > -- compatible: Should be "marvell,orion-xor"
> > +- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
>
> > marvell,armada-380-xor doesn't seem to exist in the binding?

I'm not sure what you mean; this patch precisely adds that compatible
string to the bindings documentation.

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 20+ messages in thread
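
To make the compatible-string question concrete: a driver typically
advertises such strings through an of_device_id table, with per-entry
data selecting the in-descriptor mode on the newer SoC. A hedged
sketch of that pattern follows; the mode constants and table layout
are illustrative assumptions, not necessarily the exact code in
mv_xor.c:

    #include <linux/mod_devicetable.h>

    /* assumed names for the two operating modes */
    enum xor_mode { XOR_MODE_IN_REG, XOR_MODE_IN_DESC };

    static const struct of_device_id mv_xor_dt_ids[] = {
            { .compatible = "marvell,orion-xor",
              .data = (void *)XOR_MODE_IN_REG },
            { .compatible = "marvell,armada-380-xor",
              .data = (void *)XOR_MODE_IN_DESC },
            { },
    };

At probe time the driver would match against this table (e.g. with
of_match_device()) and use the entry's ->data to set op_in_desc.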

* Re: [PATCH v2 3/5] dmaengine: mv_xor: add support for a38x command in descriptor mode
  2015-06-08 20:53       ` Maxime Ripard
@ 2015-06-10 16:49         ` Vinod Koul
  -1 siblings, 0 replies; 20+ messages in thread
From: Vinod Koul @ 2015-06-10 16:49 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, dmaengine, linux-arm-kernel, linux-kernel,
	Lior Amsalem, Thomas Petazzoni

On Mon, Jun 08, 2015 at 10:53:51PM +0200, Maxime Ripard wrote:
> Hi Vinod,
> 
> On Mon, Jun 08, 2015 at 04:04:47PM +0530, Vinod Koul wrote:
> > On Tue, May 26, 2015 at 03:07:34PM +0200, Maxime Ripard wrote:
> > > From: Lior Amsalem <alior@marvell.com>
> > > 
> > > The Marvell Armada 38x SoC introduces new features to the XOR engine,
> > > especially the fact that the engine mode (MEMCPY/XOR/PQ/etc) can be part of
> > > the descriptor and not set through the controller registers.
> > > 
> > > This new feature allows mixing of different commands (even PQ) on the same
> > > channel/chain without the need to stop the engine to reconfigure the engine
> > > mode.
> > > 
> > > Refactor the driver to be able to use that new feature on the Armada 38x,
> > > while keeping the old behaviour on the older SoCs.
> > > 
> > > Signed-off-by: Lior Amsalem <alior@marvell.com>
> > > Reviewed-by: Ofer Heifetz <oferh@marvell.com>
> > > Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
> > > ---
> > >  Documentation/devicetree/bindings/dma/mv-xor.txt |  2 +-
> > >  drivers/dma/mv_xor.c                             | 82 ++++++++++++++++++++----
> > >  drivers/dma/mv_xor.h                             |  7 ++
> > >  3 files changed, 76 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt
> > > index 7c6cb7fcecd2..cc29c35266e2 100644
> > > --- a/Documentation/devicetree/bindings/dma/mv-xor.txt
> > > +++ b/Documentation/devicetree/bindings/dma/mv-xor.txt
> > > @@ -1,7 +1,7 @@
> > >  * Marvell XOR engines
> > >  
> > >  Required properties:
> > > -- compatible: Should be "marvell,orion-xor"
> > > +- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
> >
> > marvell,armada-380-xor doesn't seem to exist in the binding?
> 
> I'm not sure what you mean; this patch precisely adds that compatible
> string to the bindings documentation.
Ah, my bad, I didn't pay the required attention and trusted the checkpatch warning.

-- 
~Vinod


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 0/5] dmaengine: mv_xor: Fixes and enhancements
  2015-05-26 13:07 ` Maxime Ripard
@ 2015-06-10 16:54   ` Vinod Koul
  -1 siblings, 0 replies; 20+ messages in thread
From: Vinod Koul @ 2015-06-10 16:54 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Gregory Clement, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, dmaengine, linux-arm-kernel, linux-kernel,
	Lior Amsalem, Thomas Petazzoni

On Tue, May 26, 2015 at 03:07:31PM +0200, Maxime Ripard wrote:
> Hi,
> 
> This series is about various fixes and enhancements that were
> previously submitted as part of the "ARM: mvebu: Add support for RAID6
> PQ offloading" series.
> 
> The RAID6 PQ stuff turned out to be more controversial than I
> expected, so we took out the various bug-fix and enhancement patches
> and will submit the other patches as a separate set.
Applied, thanks

-- 
~Vinod

> 
> Changes from v1:
>   - Moved the bugfix commit as the first commit and cc'd stable
>   - Changed the Armada 38x compatible to armada-380-xor
>   - Made some misc cleanups
> 
> Lior Amsalem (4):
>   dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup
>   dmaengine: mv_xor: add support for a38x command in descriptor mode
>   dmaengine: mv_xor: Enlarge descriptor pool size
>   dmaengine: mv_xor: improve descriptors list handling and reduce
>     locking
> 
> Maxime Ripard (1):
>   dmaengine: mv_xor: Rename function for consistent naming
> 
>  Documentation/devicetree/bindings/dma/mv-xor.txt |   2 +-
>  drivers/dma/mv_xor.c                             | 352 ++++++++++++-----------
>  drivers/dma/mv_xor.h                             |  27 +-
>  3 files changed, 206 insertions(+), 175 deletions(-)
> 
> -- 
> 2.4.1
> 

-- 

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, newest: 2015-06-10 16:54 UTC

Thread overview: 10 messages
2015-05-26 13:07 [PATCH v2 0/5] dmaengine: mv_xor: Fixes and enhancements Maxime Ripard
2015-05-26 13:07 ` [PATCH v2 1/5] dmaengine: mv_xor: bug fix for racing condition in descriptors cleanup Maxime Ripard
2015-05-26 13:07 ` [PATCH v2 2/5] dmaengine: mv_xor: Rename function for consistent naming Maxime Ripard
2015-05-26 13:07 ` [PATCH v2 3/5] dmaengine: mv_xor: add support for a38x command in descriptor mode Maxime Ripard
2015-06-08 10:34   ` Vinod Koul
2015-06-08 20:53     ` Maxime Ripard
2015-06-10 16:49       ` Vinod Koul
2015-05-26 13:07 ` [PATCH v2 4/5] dmaengine: mv_xor: Enlarge descriptor pool size Maxime Ripard
2015-05-26 13:07 ` [PATCH v2 5/5] dmaengine: mv_xor: improve descriptors list handling and reduce locking Maxime Ripard
2015-06-10 16:54 ` [PATCH v2 0/5] dmaengine: mv_xor: Fixes and enhancements Vinod Koul