* [PATCH v2 0/3] Add slave DMA support for Actions Semi S900 SoC
From: Manivannan Sadhasivam @ 2018-09-29  7:46 UTC
  To: vkoul, dan.j.williams, afaerber, robh+dt, gregkh, jslaby
  Cc: linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi, Manivannan Sadhasivam

This patchset adds slave DMA support for the Actions Semi S900 SoC of
the Owl family. As a consumer, TX DMA support is enabled for the UART
peripheral in the S900. The UART driver still falls back to interrupt
mode if no DMA property is specified in DT.
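
For reference, the consumer hook-up in DT takes the standard dmaengine
form; this is a sketch of what patch 1 adds (the node label is
illustrative, DRQ 26 is the value used in the patch):

	uart5: serial@e012a000 {
		compatible = "actions,s900-uart", "actions,owl-uart";
		dmas = <&dma 26>;
		dma-names = "tx";
	};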

The dts patch depends on the previous DMA patches, which are not yet
merged.

Thanks,
Mani

Changes in v2:

* Modified the comment for bus width as per Vinod's suggestion

Manivannan Sadhasivam (3):
  arm64: dts: actions: s900: Enable Tx DMA for UART5
  dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900
    SoC
  tty: serial: Add Tx DMA support for UART in Actions Semi Owl SoCs

 arch/arm64/boot/dts/actions/s900.dtsi |   2 +
 drivers/dma/owl-dma.c                 | 279 +++++++++++++++++++++++++-
 drivers/tty/serial/owl-uart.c         | 172 +++++++++++++++-
 3 files changed, 445 insertions(+), 8 deletions(-)

-- 
2.17.1


* [PATCH v2 1/3] arm64: dts: actions: s900: Enable Tx DMA for UART5
From: Manivannan Sadhasivam @ 2018-09-29  7:46 UTC
  To: vkoul, dan.j.williams, afaerber, robh+dt, gregkh, jslaby
  Cc: linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi, Manivannan Sadhasivam

Enable Tx DMA for UART5 in the Actions Semi S900 SoC.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 arch/arm64/boot/dts/actions/s900.dtsi | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/boot/dts/actions/s900.dtsi b/arch/arm64/boot/dts/actions/s900.dtsi
index eceba914762c..39af1236f611 100644
--- a/arch/arm64/boot/dts/actions/s900.dtsi
+++ b/arch/arm64/boot/dts/actions/s900.dtsi
@@ -156,6 +156,8 @@
 			compatible = "actions,s900-uart", "actions,owl-uart";
 			reg = <0x0 0xe012a000 0x0 0x2000>;
 			interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
+			dma-names = "tx";
+			dmas = <&dma 26>;
 			status = "disabled";
 		};
 


* [PATCH v2 2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC
From: Manivannan Sadhasivam @ 2018-09-29  7:46 UTC
  To: vkoul, dan.j.williams, afaerber, robh+dt, gregkh, jslaby
  Cc: linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi, Manivannan Sadhasivam

Add Slave and Cyclic mode support for the Actions Semi Owl S900 SoC.
Slave mode supports a bus width of 4 bytes, common to all peripherals,
and a width of 1 byte, specific to UART.

Cyclic mode supports only block mode transfers.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/dma/owl-dma.c | 279 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 272 insertions(+), 7 deletions(-)
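
As a consumer-side illustration of the 1-byte bus width: a UART client
configures its channel roughly as below before preparing descriptors.
A minimal sketch mirroring owl_uart_dma_tx_init() in patch 3; port and
tx_chan are assumed to be the client's uart_port and requested channel:

	struct dma_slave_config cfg;
	int ret;

	memset(&cfg, 0, sizeof(cfg));
	cfg.direction = DMA_MEM_TO_DEV;
	/* Tx data register of the UART, as seen by the DMA controller */
	cfg.dst_addr = port->mapbase + OWL_UART_TXDAT;
	/* 1-byte width makes owl_dma_cfg_lli() select OWL_DMA_MODE_NDDBW_8BIT */
	cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;

	ret = dmaengine_slave_config(tx_chan, &cfg);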

diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
index 7812a6338acd..1d26db4c9229 100644
--- a/drivers/dma/owl-dma.c
+++ b/drivers/dma/owl-dma.c
@@ -21,6 +21,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/of_device.h>
+#include <linux/of_dma.h>
 #include <linux/slab.h>
 #include "virt-dma.h"
 
@@ -165,6 +166,7 @@ struct owl_dma_lli {
 struct owl_dma_txd {
 	struct virt_dma_desc	vd;
 	struct list_head	lli_list;
+	bool			cyclic;
 };
 
 /**
@@ -191,6 +193,8 @@ struct owl_dma_vchan {
 	struct virt_dma_chan	vc;
 	struct owl_dma_pchan	*pchan;
 	struct owl_dma_txd	*txd;
+	struct dma_slave_config cfg;
+	u8			drq;
 };
 
 /**
@@ -336,9 +340,11 @@ static struct owl_dma_lli *owl_dma_alloc_lli(struct owl_dma *od)
 
 static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 					   struct owl_dma_lli *prev,
-					   struct owl_dma_lli *next)
+					   struct owl_dma_lli *next,
+					   bool is_cyclic)
 {
-	list_add_tail(&next->node, &txd->lli_list);
+	if (!is_cyclic)
+		list_add_tail(&next->node, &txd->lli_list);
 
 	if (prev) {
 		prev->hw.next_lli = next->phys;
@@ -351,7 +357,9 @@ static struct owl_dma_lli *owl_dma_add_lli(struct owl_dma_txd *txd,
 static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 				  struct owl_dma_lli *lli,
 				  dma_addr_t src, dma_addr_t dst,
-				  u32 len, enum dma_transfer_direction dir)
+				  u32 len, enum dma_transfer_direction dir,
+				  struct dma_slave_config *sconfig,
+				  bool is_cyclic)
 {
 	struct owl_dma_lli_hw *hw = &lli->hw;
 	u32 mode;
@@ -364,6 +372,32 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 			OWL_DMA_MODE_DT_DCU | OWL_DMA_MODE_SAM_INC |
 			OWL_DMA_MODE_DAM_INC;
 
+		break;
+	case DMA_MEM_TO_DEV:
+		mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DCU | OWL_DMA_MODE_DT_DEV
+			| OWL_DMA_MODE_SAM_INC | OWL_DMA_MODE_DAM_CONST;
+
+		/*
+		 * Hardware only supports 32bit and 8bit buswidth. Since the
+		 * default is 32bit, select 8bit only when requested.
+		 */
+		if (sconfig->dst_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
+		break;
+	case DMA_DEV_TO_MEM:
+		 mode |= OWL_DMA_MODE_TS(vchan->drq)
+			| OWL_DMA_MODE_ST_DEV | OWL_DMA_MODE_DT_DCU
+			| OWL_DMA_MODE_SAM_CONST | OWL_DMA_MODE_DAM_INC;
+
+		/*
+		 * Hardware only supports 32bit and 8bit buswidth. Since the
+		 * default is 32bit, select 8bit only when requested.
+		 */
+		if (sconfig->src_addr_width == DMA_SLAVE_BUSWIDTH_1_BYTE)
+			mode |= OWL_DMA_MODE_NDDBW_8BIT;
+
 		break;
 	default:
 		return -EINVAL;
@@ -381,7 +415,10 @@ static inline int owl_dma_cfg_lli(struct owl_dma_vchan *vchan,
 				 OWL_DMA_LLC_SAV_LOAD_NEXT |
 				 OWL_DMA_LLC_DAV_LOAD_NEXT);
 
-	hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
+	if (is_cyclic)
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_BLOCK);
+	else
+		hw->ctrlb = llc_hw_ctrlb(OWL_DMA_INTCTL_SUPER_BLOCK);
 
 	return 0;
 }
@@ -443,6 +480,16 @@ static void owl_dma_terminate_pchan(struct owl_dma *od,
 	spin_unlock_irqrestore(&od->lock, flags);
 }
 
+static void owl_dma_pause_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 1, OWL_DMAX_PAUSE);
+}
+
+static void owl_dma_resume_pchan(struct owl_dma_pchan *pchan)
+{
+	pchan_writel(pchan, 0, OWL_DMAX_PAUSE);
+}
+
 static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma *od = to_owl_dma(vchan->vc.chan.device);
@@ -464,7 +511,10 @@ static int owl_dma_start_next_txd(struct owl_dma_vchan *vchan)
 	lli = list_first_entry(&txd->lli_list,
 			       struct owl_dma_lli, node);
 
-	int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;
+	if (txd->cyclic)
+		int_ctl = OWL_DMA_INTCTL_BLOCK;
+	else
+		int_ctl = OWL_DMA_INTCTL_SUPER_BLOCK;
 
 	pchan_writel(pchan, OWL_DMAX_MODE, OWL_DMA_MODE_LME);
 	pchan_writel(pchan, OWL_DMAX_LINKLIST_CTL,
@@ -627,6 +677,54 @@ static int owl_dma_terminate_all(struct dma_chan *chan)
 	return 0;
 }
 
+static int owl_dma_config(struct dma_chan *chan,
+			  struct dma_slave_config *config)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+
+	/* Reject definitely invalid configurations */
+	if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES ||
+	    config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES)
+		return -EINVAL;
+
+	memcpy(&vchan->cfg, config, sizeof(struct dma_slave_config));
+
+	return 0;
+}
+
+static int owl_dma_pause(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_pause_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
+static int owl_dma_resume(struct dma_chan *chan)
+{
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	unsigned long flags;
+
+	if (!vchan->pchan && !vchan->txd)
+		return 0;
+
+	dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	owl_dma_resume_pchan(vchan->pchan);
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	return 0;
+}
+
 static u32 owl_dma_getbytes_chan(struct owl_dma_vchan *vchan)
 {
 	struct owl_dma_pchan *pchan;
@@ -754,13 +852,14 @@ static struct dma_async_tx_descriptor
 		bytes = min_t(size_t, (len - offset), OWL_DMA_FRAME_MAX_LENGTH);
 
 		ret = owl_dma_cfg_lli(vchan, lli, src + offset, dst + offset,
-				      bytes, DMA_MEM_TO_MEM);
+				      bytes, DMA_MEM_TO_MEM,
+				      &vchan->cfg, txd->cyclic);
 		if (ret) {
 			dev_warn(chan2dev(chan), "failed to config lli\n");
 			goto err_txd_free;
 		}
 
-		prev = owl_dma_add_lli(txd, prev, lli);
+		prev = owl_dma_add_lli(txd, prev, lli, false);
 	}
 
 	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
@@ -770,6 +869,133 @@ static struct dma_async_tx_descriptor
 	return NULL;
 }
 
+static struct dma_async_tx_descriptor
+		*owl_dma_prep_slave_sg(struct dma_chan *chan,
+				       struct scatterlist *sgl,
+				       unsigned int sg_len,
+				       enum dma_transfer_direction dir,
+				       unsigned long flags, void *context)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL;
+	struct scatterlist *sg;
+	dma_addr_t addr, src = 0, dst = 0;
+	size_t len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		addr = sg_dma_address(sg);
+		len = sg_dma_len(sg);
+
+		if (len > OWL_DMA_FRAME_MAX_LENGTH) {
+			dev_err(od->dma.dev,
+				"frame length exceeds max supported length");
+			goto err_txd_free;
+		}
+
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_err(chan2dev(chan), "failed to allocate lli");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = addr;
+			dst = sconfig->dst_addr;
+		} else {
+			src = sconfig->src_addr;
+			dst = addr;
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, len, dir, sconfig,
+				      txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli");
+			goto err_txd_free;
+		}
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor
+		*owl_prep_dma_cyclic(struct dma_chan *chan,
+				     dma_addr_t buf_addr, size_t buf_len,
+				     size_t period_len,
+				     enum dma_transfer_direction dir,
+				     unsigned long flags)
+{
+	struct owl_dma *od = to_owl_dma(chan->device);
+	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct owl_dma_txd *txd;
+	struct owl_dma_lli *lli, *prev = NULL, *first = NULL;
+	dma_addr_t src = 0, dst = 0;
+	unsigned int periods = buf_len / period_len;
+	int ret, i;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	INIT_LIST_HEAD(&txd->lli_list);
+	txd->cyclic = true;
+
+	for (i = 0; i < periods; i++) {
+		lli = owl_dma_alloc_lli(od);
+		if (!lli) {
+			dev_warn(chan2dev(chan), "failed to allocate lli");
+			goto err_txd_free;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			src = buf_addr + (period_len * i);
+			dst = sconfig->dst_addr;
+		} else if (dir == DMA_DEV_TO_MEM) {
+			src = sconfig->src_addr;
+			dst = buf_addr + (period_len * i);
+		}
+
+		ret = owl_dma_cfg_lli(vchan, lli, src, dst, period_len,
+				      dir, sconfig, txd->cyclic);
+		if (ret) {
+			dev_warn(chan2dev(chan), "failed to config lli");
+			goto err_txd_free;
+		}
+
+		if (!first)
+			first = lli;
+
+		prev = owl_dma_add_lli(txd, prev, lli, false);
+	}
+
+	/* close the cyclic list */
+	owl_dma_add_lli(txd, prev, first, true);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_txd_free:
+	owl_dma_free_txd(od, txd);
+
+	return NULL;
+}
+
 static void owl_dma_free_chan_resources(struct dma_chan *chan)
 {
 	struct owl_dma_vchan *vchan = to_owl_vchan(chan);
@@ -790,6 +1016,27 @@ static inline void owl_dma_free(struct owl_dma *od)
 	}
 }
 
+static struct dma_chan *owl_dma_of_xlate(struct of_phandle_args *dma_spec,
+					 struct of_dma *ofdma)
+{
+	struct owl_dma *od = ofdma->of_dma_data;
+	struct owl_dma_vchan *vchan;
+	struct dma_chan *chan;
+	u8 drq = dma_spec->args[0];
+
+	if (drq > od->nr_vchans)
+		return NULL;
+
+	chan = dma_get_any_slave_channel(&od->dma);
+	if (!chan)
+		return NULL;
+
+	vchan = to_owl_vchan(chan);
+	vchan->drq = drq;
+
+	return chan;
+}
+
 static int owl_dma_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
@@ -833,12 +1080,19 @@ static int owl_dma_probe(struct platform_device *pdev)
 	spin_lock_init(&od->lock);
 
 	dma_cap_set(DMA_MEMCPY, od->dma.cap_mask);
+	dma_cap_set(DMA_SLAVE, od->dma.cap_mask);
+	dma_cap_set(DMA_CYCLIC, od->dma.cap_mask);
 
 	od->dma.dev = &pdev->dev;
 	od->dma.device_free_chan_resources = owl_dma_free_chan_resources;
 	od->dma.device_tx_status = owl_dma_tx_status;
 	od->dma.device_issue_pending = owl_dma_issue_pending;
 	od->dma.device_prep_dma_memcpy = owl_dma_prep_memcpy;
+	od->dma.device_prep_slave_sg = owl_dma_prep_slave_sg;
+	od->dma.device_prep_dma_cyclic = owl_prep_dma_cyclic;
+	od->dma.device_config = owl_dma_config;
+	od->dma.device_pause = owl_dma_pause;
+	od->dma.device_resume = owl_dma_resume;
 	od->dma.device_terminate_all = owl_dma_terminate_all;
 	od->dma.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	od->dma.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
@@ -910,8 +1164,18 @@ static int owl_dma_probe(struct platform_device *pdev)
 		goto err_pool_free;
 	}
 
+	/* Device-tree DMA controller registration */
+	ret = of_dma_controller_register(pdev->dev.of_node,
+					 owl_dma_of_xlate, od);
+	if (ret) {
+		dev_err(&pdev->dev, "of_dma_controller_register failed\n");
+		goto err_dma_unregister;
+	}
+
 	return 0;
 
+err_dma_unregister:
+	dma_async_device_unregister(&od->dma);
 err_pool_free:
 	clk_disable_unprepare(od->clk);
 	dma_pool_destroy(od->lli_pool);
@@ -923,6 +1187,7 @@ static int owl_dma_remove(struct platform_device *pdev)
 {
 	struct owl_dma *od = platform_get_drvdata(pdev);
 
+	of_dma_controller_free(pdev->dev.of_node);
 	dma_async_device_unregister(&od->dma);
 
 	/* Mask all interrupts for this execution environment */
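
For completeness, a cyclic consumer (audio is the typical user) reaches
the new device_prep_dma_cyclic hook through the generic dmaengine
wrapper. A minimal sketch, assuming chan was obtained from this
controller and buf/buf_len describe a DMA-mapped ring split into four
periods; period_done and ctx are hypothetical:

	struct dma_async_tx_descriptor *desc;

	/* one OWL_DMA_INTCTL_BLOCK interrupt fires per completed period */
	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, buf_len / 4,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (desc) {
		desc->callback = period_done;
		desc->callback_param = ctx;
		dmaengine_submit(desc);
		dma_async_issue_pending(chan);
	}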


* [PATCH v2 3/3] tty: serial: Add Tx DMA support for UART in Actions Semi Owl SoCs
From: Manivannan Sadhasivam @ 2018-09-29  7:46 UTC
  To: vkoul, dan.j.williams, afaerber, robh+dt, gregkh, jslaby
  Cc: linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi, Manivannan Sadhasivam

Add Tx DMA support for UART in Actions Semi Owl SoCs. If no DMA
property is specified in DT, the driver falls back to the default
interrupt mode.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/tty/serial/owl-uart.c | 172 +++++++++++++++++++++++++++++++++-
 1 file changed, 171 insertions(+), 1 deletion(-)
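
A note on the Tx path below: each DMA transfer covers only the
contiguous part of the circular xmit buffer, so a wrapped buffer is
drained in two transfers, the second one kicked off from the completion
callback. A worked example with the <linux/circ_buf.h> helper the
driver uses (buffer size is a power of two):

	/* head = 10, tail = 4000, UART_XMIT_SIZE = 4096 */
	tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE);
	/*
	 * tx_size = 96: the bytes from tail up to the end of the buffer.
	 * The 10 bytes that wrapped to the start are sent by the next
	 * transfer, started from owl_uart_dma_tx_callback().
	 */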

diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
index 29a6dc6a8d23..1b3016db7ae2 100644
--- a/drivers/tty/serial/owl-uart.c
+++ b/drivers/tty/serial/owl-uart.c
@@ -11,6 +11,8 @@
 #include <linux/clk.h>
 #include <linux/console.h>
 #include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
@@ -48,6 +50,8 @@
 #define OWL_UART_CTL_RXIE		BIT(18)
 #define OWL_UART_CTL_TXIE		BIT(19)
 #define OWL_UART_CTL_LBEN		BIT(20)
+#define OWL_UART_CTL_DRCR		BIT(21)
+#define OWL_UART_CTL_DTCR		BIT(22)
 
 #define OWL_UART_STAT_RIP		BIT(0)
 #define OWL_UART_STAT_TIP		BIT(1)
@@ -71,12 +75,21 @@ struct owl_uart_info {
 struct owl_uart_port {
 	struct uart_port port;
 	struct clk *clk;
+
+	struct dma_chan *tx_ch;
+	dma_addr_t tx_dma_buf;
+	dma_cookie_t dma_tx_cookie;
+	u32 tx_size;
+	bool tx_dma;
+	bool dma_tx_running;
 };
 
 #define to_owl_uart_port(prt) container_of(prt, struct owl_uart_port, prt)
 
 static struct owl_uart_port *owl_uart_ports[OWL_UART_PORT_NUM];
 
+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port);
+
 static inline void owl_uart_write(struct uart_port *port, u32 val, unsigned int off)
 {
 	writel(val, port->membase + off);
@@ -115,6 +128,83 @@ static unsigned int owl_uart_get_mctrl(struct uart_port *port)
 	return mctrl;
 }
 
+static void owl_uart_dma_tx_callback(void *data)
+{
+	struct owl_uart_port *owl_port = data;
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf	*xmit = &port->state->xmit;
+	unsigned long flags;
+	u32 val;
+
+	dma_sync_single_for_cpu(port->dev, owl_port->tx_dma_buf,
+				UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	owl_port->dma_tx_running = 0;
+
+	xmit->tail += owl_port->tx_size;
+	xmit->tail &= UART_XMIT_SIZE - 1;
+	port->icount.tx += owl_port->tx_size;
+
+	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+		uart_write_wakeup(port);
+
+	/* Disable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXDE;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Clear pending Tx IRQ */
+	val = owl_uart_read(port, OWL_UART_STAT);
+	val |= OWL_UART_STAT_TIP;
+	owl_uart_write(port, val, OWL_UART_STAT);
+
+	if (!uart_circ_empty(xmit) && !uart_tx_stopped(port))
+		owl_uart_dma_start_tx(owl_port);
+
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port)
+{
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf *xmit = &port->state->xmit;
+	struct dma_async_tx_descriptor *desc;
+	u32 val;
+
+	if (uart_tx_stopped(port) || uart_circ_empty(xmit) ||
+	    owl_port->dma_tx_running)
+		return;
+
+	dma_sync_single_for_device(port->dev, owl_port->tx_dma_buf,
+				   UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	owl_port->tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail,
+					    UART_XMIT_SIZE);
+
+	desc = dmaengine_prep_slave_single(owl_port->tx_ch,
+					   owl_port->tx_dma_buf + xmit->tail,
+					   owl_port->tx_size, DMA_MEM_TO_DEV,
+					   DMA_PREP_INTERRUPT);
+	if (!desc)
+		return;
+
+	desc->callback = owl_uart_dma_tx_callback;
+	desc->callback_param = owl_port;
+
+	/* Enable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXIE;
+	val |= OWL_UART_CTL_TXDE | OWL_UART_CTL_DTCR;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Start Tx DMA transfer */
+	owl_port->dma_tx_running = true;
+	owl_port->dma_tx_cookie = dmaengine_submit(desc);
+	dma_async_issue_pending(owl_port->tx_ch);
+}
+
 static unsigned int owl_uart_tx_empty(struct uart_port *port)
 {
 	unsigned long flags;
@@ -159,6 +249,7 @@ static void owl_uart_stop_tx(struct uart_port *port)
 
 static void owl_uart_start_tx(struct uart_port *port)
 {
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	u32 val;
 
 	if (uart_tx_stopped(port)) {
@@ -166,6 +257,11 @@ static void owl_uart_start_tx(struct uart_port *port)
 		return;
 	}
 
+	if (owl_port->tx_dma) {
+		owl_uart_dma_start_tx(owl_port);
+		return;
+	}
+
 	val = owl_uart_read(port, OWL_UART_STAT);
 	val |= OWL_UART_STAT_TIP;
 	owl_uart_write(port, val, OWL_UART_STAT);
@@ -273,13 +369,27 @@ static irqreturn_t owl_uart_irq(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void owl_dma_channel_free(struct owl_uart_port *owl_port)
+{
+	dmaengine_terminate_all(owl_port->tx_ch);
+	dma_release_channel(owl_port->tx_ch);
+	dma_unmap_single(owl_port->port.dev, owl_port->tx_dma_buf,
+			 UART_XMIT_SIZE, DMA_TO_DEVICE);
+	owl_port->dma_tx_running = false;
+	owl_port->tx_ch = NULL;
+}
+
 static void owl_uart_shutdown(struct uart_port *port)
 {
-	u32 val;
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	unsigned long flags;
+	u32 val;
 
 	spin_lock_irqsave(&port->lock, flags);
 
+	if (owl_port->tx_dma)
+		owl_dma_channel_free(owl_port);
+
 	val = owl_uart_read(port, OWL_UART_CTL);
 	val &= ~(OWL_UART_CTL_TXIE | OWL_UART_CTL_RXIE
 		| OWL_UART_CTL_TXDE | OWL_UART_CTL_RXDE | OWL_UART_CTL_EN);
@@ -290,6 +400,62 @@ static void owl_uart_shutdown(struct uart_port *port)
 	free_irq(port->irq, port);
 }
 
+static int owl_uart_dma_tx_init(struct uart_port *port)
+{
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
+	struct device *dev = port->dev;
+	struct dma_slave_config slave_config;
+	int ret;
+
+	owl_port->tx_dma = false;
+
+	/* Request DMA TX channel */
+	owl_port->tx_ch = dma_request_slave_channel(dev, "tx");
+	if (!owl_port->tx_ch) {
+		dev_info(dev, "tx dma alloc failed\n");
+		return -ENODEV;
+	}
+
+	owl_port->tx_dma_buf = dma_map_single(dev,
+					      owl_port->port.state->xmit.buf,
+					      UART_XMIT_SIZE, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, owl_port->tx_dma_buf)) {
+		ret = -ENOMEM;
+		goto alloc_err;
+	}
+
+	/* Configure DMA channel */
+	memset(&slave_config, 0, sizeof(slave_config));
+	slave_config.direction = DMA_MEM_TO_DEV;
+	slave_config.dst_addr = port->mapbase + OWL_UART_TXDAT;
+	slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+
+	ret = dmaengine_slave_config(owl_port->tx_ch, &slave_config);
+	if (ret < 0) {
+		dev_err(dev, "tx dma channel config failed\n");
+		ret = -ENODEV;
+		goto map_err;
+	}
+
+	/* Use DMA buffer size as the FIFO size */
+	port->fifosize = UART_XMIT_SIZE;
+
+	/* Set DMA flag */
+	owl_port->tx_dma = true;
+	owl_port->dma_tx_running = false;
+
+	return 0;
+
+map_err:
+	dma_unmap_single(dev, owl_port->tx_dma_buf, UART_XMIT_SIZE,
+			 DMA_TO_DEVICE);
+alloc_err:
+	dma_release_channel(owl_port->tx_ch);
+	owl_port->tx_ch = NULL;
+
+	return ret;
+}
+
 static int owl_uart_startup(struct uart_port *port)
 {
 	u32 val;
@@ -301,6 +467,10 @@ static int owl_uart_startup(struct uart_port *port)
 	if (ret)
 		return ret;
 
+	ret = owl_uart_dma_tx_init(port);
+	if (!ret)
+		dev_info(port->dev, "using DMA for tx\n");
+
 	spin_lock_irqsave(&port->lock, flags);
 
 	val = owl_uart_read(port, OWL_UART_STAT);

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 3/3] tty: serial: Add Tx DMA support for UART in Actions Semi Owl SoCs
@ 2018-09-29  7:46 ` Manivannan Sadhasivam
  0 siblings, 0 replies; 18+ messages in thread
From: Manivannan Sadhasivam @ 2018-09-29  7:46 UTC (permalink / raw)
  To: vkoul, dan.j.williams, afaerber, robh+dt, gregkh, jslaby
  Cc: linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi, Manivannan Sadhasivam

Add Tx DMA support for Actions Semi Owl SoCs. If there is no DMA
property specified in DT, it will fallback to default interrupt mode.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/tty/serial/owl-uart.c | 172 +++++++++++++++++++++++++++++++++-
 1 file changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
index 29a6dc6a8d23..1b3016db7ae2 100644
--- a/drivers/tty/serial/owl-uart.c
+++ b/drivers/tty/serial/owl-uart.c
@@ -11,6 +11,8 @@
 #include <linux/clk.h>
 #include <linux/console.h>
 #include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
@@ -48,6 +50,8 @@
 #define OWL_UART_CTL_RXIE		BIT(18)
 #define OWL_UART_CTL_TXIE		BIT(19)
 #define OWL_UART_CTL_LBEN		BIT(20)
+#define OWL_UART_CTL_DRCR		BIT(21)
+#define OWL_UART_CTL_DTCR		BIT(22)
 
 #define OWL_UART_STAT_RIP		BIT(0)
 #define OWL_UART_STAT_TIP		BIT(1)
@@ -71,12 +75,21 @@ struct owl_uart_info {
 struct owl_uart_port {
 	struct uart_port port;
 	struct clk *clk;
+
+	struct dma_chan *tx_ch;
+	dma_addr_t tx_dma_buf;
+	dma_cookie_t dma_tx_cookie;
+	u32 tx_size;
+	bool tx_dma;
+	bool dma_tx_running;
 };
 
 #define to_owl_uart_port(prt) container_of(prt, struct owl_uart_port, prt)
 
 static struct owl_uart_port *owl_uart_ports[OWL_UART_PORT_NUM];
 
+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port);
+
 static inline void owl_uart_write(struct uart_port *port, u32 val, unsigned int off)
 {
 	writel(val, port->membase + off);
@@ -115,6 +128,83 @@ static unsigned int owl_uart_get_mctrl(struct uart_port *port)
 	return mctrl;
 }
 
+static void owl_uart_dma_tx_callback(void *data)
+{
+	struct owl_uart_port *owl_port = data;
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf	*xmit = &port->state->xmit;
+	unsigned long flags;
+	u32 val;
+
+	dma_sync_single_for_cpu(port->dev, owl_port->tx_dma_buf,
+				UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	spin_lock_irqsave(&port->lock, flags);
+
+	owl_port->dma_tx_running = 0;
+
+	xmit->tail += owl_port->tx_size;
+	xmit->tail &= UART_XMIT_SIZE - 1;
+	port->icount.tx += owl_port->tx_size;
+
+	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+		uart_write_wakeup(port);
+
+	/* Disable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXDE;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Clear pending Tx IRQ */
+	val = owl_uart_read(port, OWL_UART_STAT);
+	val |= OWL_UART_STAT_TIP;
+	owl_uart_write(port, val, OWL_UART_STAT);
+
+	if (!uart_circ_empty(xmit) && !uart_tx_stopped(port))
+		owl_uart_dma_start_tx(owl_port);
+
+	spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void owl_uart_dma_start_tx(struct owl_uart_port *owl_port)
+{
+	struct uart_port *port = &owl_port->port;
+	struct circ_buf *xmit = &port->state->xmit;
+	struct dma_async_tx_descriptor *desc;
+	u32 val;
+
+	if (uart_tx_stopped(port) || uart_circ_empty(xmit) ||
+	    owl_port->dma_tx_running)
+		return;
+
+	dma_sync_single_for_device(port->dev, owl_port->tx_dma_buf,
+				   UART_XMIT_SIZE, DMA_TO_DEVICE);
+
+	owl_port->tx_size = CIRC_CNT_TO_END(xmit->head, xmit->tail,
+					    UART_XMIT_SIZE);
+
+	desc = dmaengine_prep_slave_single(owl_port->tx_ch,
+					   owl_port->tx_dma_buf + xmit->tail,
+					   owl_port->tx_size, DMA_MEM_TO_DEV,
+					   DMA_PREP_INTERRUPT);
+	if (!desc)
+		return;
+
+	desc->callback = owl_uart_dma_tx_callback;
+	desc->callback_param = owl_port;
+
+	/* Enable Tx DRQ */
+	val = owl_uart_read(port, OWL_UART_CTL);
+	val &= ~OWL_UART_CTL_TXIE;
+	val |= OWL_UART_CTL_TXDE | OWL_UART_CTL_DTCR;
+	owl_uart_write(port, val, OWL_UART_CTL);
+
+	/* Start Tx DMA transfer */
+	owl_port->dma_tx_running = true;
+	owl_port->dma_tx_cookie = dmaengine_submit(desc);
+	dma_async_issue_pending(owl_port->tx_ch);
+}
+
 static unsigned int owl_uart_tx_empty(struct uart_port *port)
 {
 	unsigned long flags;
@@ -159,6 +249,7 @@ static void owl_uart_stop_tx(struct uart_port *port)
 
 static void owl_uart_start_tx(struct uart_port *port)
 {
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	u32 val;
 
 	if (uart_tx_stopped(port)) {
@@ -166,6 +257,11 @@ static void owl_uart_start_tx(struct uart_port *port)
 		return;
 	}
 
+	if (owl_port->tx_dma) {
+		owl_uart_dma_start_tx(owl_port);
+		return;
+	}
+
 	val = owl_uart_read(port, OWL_UART_STAT);
 	val |= OWL_UART_STAT_TIP;
 	owl_uart_write(port, val, OWL_UART_STAT);
@@ -273,13 +369,27 @@ static irqreturn_t owl_uart_irq(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static void owl_dma_channel_free(struct owl_uart_port *owl_port)
+{
+	dmaengine_terminate_all(owl_port->tx_ch);
+	dma_release_channel(owl_port->tx_ch);
+	dma_unmap_single(owl_port->port.dev, owl_port->tx_dma_buf,
+			 UART_XMIT_SIZE, DMA_TO_DEVICE);
+	owl_port->dma_tx_running = false;
+	owl_port->tx_ch = NULL;
+}
+
 static void owl_uart_shutdown(struct uart_port *port)
 {
-	u32 val;
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
 	unsigned long flags;
+	u32 val;
 
 	spin_lock_irqsave(&port->lock, flags);
 
+	if (owl_port->tx_dma)
+		owl_dma_channel_free(owl_port);
+
 	val = owl_uart_read(port, OWL_UART_CTL);
 	val &= ~(OWL_UART_CTL_TXIE | OWL_UART_CTL_RXIE
 		| OWL_UART_CTL_TXDE | OWL_UART_CTL_RXDE | OWL_UART_CTL_EN);
@@ -290,6 +400,62 @@ static void owl_uart_shutdown(struct uart_port *port)
 	free_irq(port->irq, port);
 }
 
+static int owl_uart_dma_tx_init(struct uart_port *port)
+{
+	struct owl_uart_port *owl_port = to_owl_uart_port(port);
+	struct device *dev = port->dev;
+	struct dma_slave_config slave_config;
+	int ret;
+
+	owl_port->tx_dma = false;
+
+	/* Request DMA TX channel */
+	owl_port->tx_ch = dma_request_slave_channel(dev, "tx");
+	if (!owl_port->tx_ch) {
+		dev_info(dev, "tx dma alloc failed\n");
+		return -ENODEV;
+	}
+
+	owl_port->tx_dma_buf = dma_map_single(dev,
+					      owl_port->port.state->xmit.buf,
+					      UART_XMIT_SIZE, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, owl_port->tx_dma_buf)) {
+		ret = -ENOMEM;
+		goto alloc_err;
+	}
+
+	/* Configure DMA channel */
+	memset(&slave_config, 0, sizeof(slave_config));
+	slave_config.direction = DMA_MEM_TO_DEV;
+	slave_config.dst_addr = port->mapbase + OWL_UART_TXDAT;
+	slave_config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+
+	ret = dmaengine_slave_config(owl_port->tx_ch, &slave_config);
+	if (ret < 0) {
+		dev_err(dev, "tx dma channel config failed\n");
+		ret = -ENODEV;
+		goto map_err;
+	}
+
+	/* Use DMA buffer size as the FIFO size */
+	port->fifosize = UART_XMIT_SIZE;
+
+	/* Set DMA flag */
+	owl_port->tx_dma = true;
+	owl_port->dma_tx_running = false;
+
+	return 0;
+
+map_err:
+	dma_unmap_single(dev, owl_port->tx_dma_buf, UART_XMIT_SIZE,
+			 DMA_TO_DEVICE);
+alloc_err:
+	dma_release_channel(owl_port->tx_ch);
+	owl_port->tx_ch = NULL;
+
+	return ret;
+}
+
 static int owl_uart_startup(struct uart_port *port)
 {
 	u32 val;
@@ -301,6 +467,10 @@ static int owl_uart_startup(struct uart_port *port)
 	if (ret)
 		return ret;
 
+	ret = owl_uart_dma_tx_init(port);
+	if (!ret)
+		dev_info(port->dev, "using DMA for tx\n");
+
 	spin_lock_irqsave(&port->lock, flags);
 
 	val = owl_uart_read(port, OWL_UART_STAT);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread
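
The consumer side of this series is pure device tree: if the UART node has
no "dmas"/"dma-names" properties, owl_uart_dma_tx_init() cannot get a
channel and the driver keeps using interrupt mode. A minimal sketch of the
UART5 wiring (the &dma phandle and the request-line number below are
illustrative assumptions, not copied from the actual s900.dtsi change):

	&uart5 {
		dmas = <&dma 26>;	/* request line is an assumption */
		dma-names = "tx";	/* matches dma_request_slave_channel(dev, "tx") */
	};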

* [v2,1/3] arm64: dts: actions: s900: Enable Tx DMA for UART5
  2018-09-29  7:46 ` Manivannan Sadhasivam
  (?)
  (?)
@ 2018-09-30  2:04 ` kbuild test robot
  -1 siblings, 0 replies; 18+ messages in thread
From: kbuild test robot @ 2018-09-30  2:04 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: kbuild-all, vkoul, dan.j.williams, afaerber, robh+dt, gregkh,
	jslaby, linux-serial, dmaengine, liuwei, 96boards, devicetree,
	daniel.thompson, amit.kucheria, linux-arm-kernel, linux-kernel,
	hzhang, bdong, manivannanece23, thomas.liau, jeff.chen, pn,
	edgar.righi

Hi Manivannan,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tty/tty-testing]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Manivannan-Sadhasivam/Add-slave-DMA-support-for-Actions-Semi-S900-SoC/20180929-155016
base:   https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git tty-testing
config: arm64-defconfig (attached as .config)
compiler: aarch64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.2.0 make.cross ARCH=arm64 

All errors (new ones prefixed by >>):

>> ERROR: Input tree has errors, aborting (use -f to force output)
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 18+ messages in thread


* [v2,2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC
  2018-09-29  7:46 ` Manivannan Sadhasivam
  (?)
@ 2018-10-05 15:01 ` Vinod
  -1 siblings, 0 replies; 18+ messages in thread
From: Vinod Koul @ 2018-10-05 15:01 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: dan.j.williams, afaerber, robh+dt, gregkh, jslaby, linux-serial,
	dmaengine, liuwei, 96boards, devicetree, daniel.thompson,
	amit.kucheria, linux-arm-kernel, linux-kernel, hzhang, bdong,
	manivannanece23, thomas.liau, jeff.chen, pn, edgar.righi

On 29-09-18, 13:16, Manivannan Sadhasivam wrote:
> Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC. The slave
> mode supports bus width of 4 bytes common for all peripherals and 1 byte
> specific for UART.
> 
> The cyclic mode supports only block mode transfer.

Applied after adding driver tag, thanks

^ permalink raw reply	[flat|nested] 18+ messages in thread
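
The cyclic half of the dmaengine patch is not exercised by the UART
consumer in this series. A rough consumer-side sketch, assuming only the
generic dmaengine API (the channel, buffer, FIFO address and callback
names are hypothetical):

	/* Hypothetical cyclic consumer, e.g. an audio driver */
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr = fifo_phys,	/* device FIFO address, assumption */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	struct dma_async_tx_descriptor *desc;
	int ret;

	ret = dmaengine_slave_config(chan, &cfg);
	if (ret)
		return ret;

	/* Whole buffer repeats forever, one interrupt per period */
	desc = dmaengine_prep_dma_cyclic(chan, buf_dma, buf_len, period_len,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -EBUSY;

	desc->callback = period_done;	/* hypothetical callback */
	desc->callback_param = priv;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);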


end of thread, other threads:[~2018-10-05 15:01 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-05 15:01 [v2,2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC Vinod Koul
2018-10-05 15:01 ` [PATCH v2 2/3] " Vinod
2018-10-05 15:01 ` Vinod
  -- strict thread matches above, loose matches on Subject: below --
2018-09-30  2:04 [v2,1/3] arm64: dts: actions: s900: Enable Tx DMA for UART5 kbuild test robot
2018-09-30  2:04 ` [PATCH v2 1/3] " kbuild test robot
2018-09-30  2:04 ` kbuild test robot
2018-09-30  2:04 ` kbuild test robot
2018-09-29  7:46 [v2,3/3] tty: serial: Add Tx DMA support for UART in Actions Semi Owl SoCs Manivannan Sadhasivam
2018-09-29  7:46 ` [PATCH v2 3/3] " Manivannan Sadhasivam
2018-09-29  7:46 ` Manivannan Sadhasivam
2018-09-29  7:46 [v2,2/3] dmaengine: Add Slave and Cyclic mode support for Actions Semi Owl S900 SoC Manivannan Sadhasivam
2018-09-29  7:46 ` [PATCH v2 2/3] " Manivannan Sadhasivam
2018-09-29  7:46 ` Manivannan Sadhasivam
2018-09-29  7:46 [v2,1/3] arm64: dts: actions: s900: Enable Tx DMA for UART5 Manivannan Sadhasivam
2018-09-29  7:46 ` [PATCH v2 1/3] " Manivannan Sadhasivam
2018-09-29  7:46 ` Manivannan Sadhasivam
2018-09-29  7:46 [PATCH v2 0/3] Add slave DMA support for Actions Semi S900 SoC Manivannan Sadhasivam
2018-09-29  7:46 ` Manivannan Sadhasivam

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.