* [PATCH v10 0/3] add uart DMA function
@ 2019-01-18  3:10 Long Cheng
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Long Cheng @ 2019-01-18  3:10 UTC (permalink / raw)
  To: Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee,
	Sean Wang, Nicolas Boichat
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

In MediaTek SoCs, the UART can support DMA transfers.
Based on the DMA engine framework, we add DMA code to support the UART, and put the code under drivers/dma/mediatek.

This series contains the devicetree binding document, a Kconfig option to
control whether the function is enabled, device tree changes adding the
interrupt and DMA device nodes, and the UART DMA driver code.
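
For reference, the UART driver consumes these channels through the standard
dmaengine slave API. Below is a rough sketch of a TX submission; 'dev',
'buf_dma', 'len', 'uart_tx_fifo_phys' and 'tx_done' are placeholders for
illustration only, not names from this series:

	struct dma_slave_config cfg = {
		.dst_addr = uart_tx_fifo_phys,	/* placeholder: UART TX FIFO address */
		.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
	};
	struct dma_async_tx_descriptor *desc;
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "tx");	/* matches "dma-names" in the DT */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dmaengine_slave_config(chan, &cfg);

	desc = dmaengine_prep_slave_single(chan, buf_dma, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -EBUSY;

	desc->callback = tx_done;		/* placeholder completion handler */
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);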

Changes compared to v9:
-rename dt-bindings file
-remove direction from device_config
-simplify code
Changes compared to v8:
-revise missing items
Changes compared to v7:
-modify apdma uart tx
Changes compared to v6:
-Correct spelling
Changes compared to v5:
-move 'request irqs' to alloc channel
-remove tasklet.
Changes compared to v4:
-modify Kconfig depends on.
Changes compared to v3:
-fix CONFIG_PM issue that caused a build failure
Changes compared to v2:
-remove unimportant parameters
-instead of cookie, use APIs of virtual channel.
-use of_dma_xlate_by_chan_id.
Changes compared to v1:
-main revised file, 8250_mtk_dma.c
--rename parameters to follow standard naming
--remove atomic operation

Long Cheng (3):
  dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  arm: dts: mt2712: add uart APDMA to device tree
  dt-bindings: dma: uart: rename binding

 .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 -
 .../devicetree/bindings/dma/mtk-uart-apdma.txt     |   33 +
 arch/arm64/boot/dts/mediatek/mt2712e.dtsi          |   51 ++
 drivers/dma/mediatek/Kconfig                       |   11 +
 drivers/dma/mediatek/Makefile                      |    1 +
 drivers/dma/mediatek/mtk-uart-apdma.c              |  669 ++++++++++++++++++++
 6 files changed, 765 insertions(+), 33 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
 create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
 create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c

-- 
1.7.9.5



* [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-01-18  3:10 [PATCH v10 0/3] add uart DMA function Long Cheng
@ 2019-01-18  3:10 ` Long Cheng
  2019-01-28  3:25   ` Long Cheng
                     ` (2 more replies)
  2019-01-18  3:10 ` [PATCH v10 2/3] arm: dts: mt2712: add uart APDMA to device tree Long Cheng
  2019-01-18  3:10 ` [PATCH v10 3/3] dt-bindings: dma: uart: rename binding Long Cheng
  2 siblings, 3 replies; 11+ messages in thread
From: Long Cheng @ 2019-01-18  3:10 UTC (permalink / raw)
  To: Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee,
	Sean Wang, Nicolas Boichat
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

Add 8250 UART DMA support for the MediaTek UART, based on the DMA engine
framework. If the MediaTek UART driver (SERIAL_8250_MT6577) is enabled and
you want to improve its performance, you can enable this function.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/dma/mediatek/Kconfig          |   11 +
 drivers/dma/mediatek/Makefile         |    1 +
 drivers/dma/mediatek/mtk-uart-apdma.c |  669 +++++++++++++++++++++++++++++++++
 3 files changed, 681 insertions(+)
 create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c

diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 680fc05..ac49eb6 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -24,3 +24,14 @@ config MTK_CQDMA
 
 	  This controller provides the channels which is dedicated to
 	  memory-to-memory transfer to offload from CPU.
+
+config MTK_UART_APDMA
+	tristate "MediaTek SoCs APDMA support for UART"
+	depends on OF && SERIAL_8250_MT6577
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the UART DMA engine found on MediaTek MTK SoCs.
+	  When SERIAL_8250_MT6577 is enabled, and if you want to use DMA,
+	  you can enable the config. The DMA engine can only be used
+	  with MediaTek SoCs.
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 41bb381..61a6d29 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1,2 +1,3 @@
+obj-$(CONFIG_MTK_UART_APDMA) += mtk-uart-apdma.o
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
 obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
new file mode 100644
index 0000000..427db69
--- /dev/null
+++ b/drivers/dma/mediatek/mtk-uart-apdma.c
@@ -0,0 +1,669 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MediaTek Uart APDMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <long.cheng@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "../virt-dma.h"
+
+/* The default number of virtual channel */
+#define MTK_UART_APDMA_NR_VCHANS	8
+
+#define VFF_EN_B		BIT(0)
+#define VFF_STOP_B		BIT(0)
+#define VFF_FLUSH_B		BIT(0)
+#define VFF_4G_SUPPORT_B	BIT(0)
+#define VFF_RX_INT_EN0_B	BIT(0)	/* rx valid size >=  vff thre */
+#define VFF_RX_INT_EN1_B	BIT(1)
+#define VFF_TX_INT_EN_B		BIT(0)	/* tx left size >= vff thre */
+#define VFF_WARM_RST_B		BIT(0)
+#define VFF_RX_INT_CLR_B	(BIT(0) | BIT(1))
+#define VFF_TX_INT_CLR_B	0
+#define VFF_STOP_CLR_B		0
+#define VFF_INT_EN_CLR_B	0
+#define VFF_4G_SUPPORT_CLR_B	0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n)		((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n)		((n) * 3 / 4)
+
+#define VFF_RING_SIZE	0xffffU
+/* invert this bit when wrap ring head again */
+#define VFF_RING_WRAP	0x10000U
+
+#define VFF_INT_FLAG		0x00
+#define VFF_INT_EN		0x04
+#define VFF_EN			0x08
+#define VFF_RST			0x0c
+#define VFF_STOP		0x10
+#define VFF_FLUSH		0x14
+#define VFF_ADDR		0x1c
+#define VFF_LEN			0x24
+#define VFF_THRE		0x28
+#define VFF_WPT			0x2c
+#define VFF_RPT			0x30
+/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
+#define VFF_VALID_SIZE		0x3c
+/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
+#define VFF_LEFT_SIZE		0x40
+#define VFF_DEBUG_STATUS	0x50
+#define VFF_4G_SUPPORT		0x54
+
+struct mtk_uart_apdmadev {
+	struct dma_device ddev;
+	struct clk *clk;
+	bool support_33bits;
+	unsigned int dma_requests;
+	unsigned int *dma_irq;
+};
+
+struct mtk_uart_apdma_desc {
+	struct virt_dma_desc vd;
+
+	unsigned int avail_len;
+};
+
+struct mtk_chan {
+	struct virt_dma_chan vc;
+	struct dma_slave_config	cfg;
+	void __iomem *base;
+	struct mtk_uart_apdma_desc *desc;
+
+	enum dma_transfer_direction dir;
+
+	bool requested;
+
+	unsigned int rx_status;
+};
+
+static inline struct mtk_uart_apdmadev *
+to_mtk_uart_apdma_dev(struct dma_device *d)
+{
+	return container_of(d, struct mtk_uart_apdmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_uart_apdma_chan(struct dma_chan *c)
+{
+	return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_uart_apdma_desc *to_mtk_uart_apdma_desc
+	(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct mtk_uart_apdma_desc, vd.tx);
+}
+
+static void mtk_uart_apdma_write(struct mtk_chan *c,
+			       unsigned int reg, unsigned int val)
+{
+	writel(val, c->base + reg);
+}
+
+static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
+{
+	return readl(c->base + reg);
+}
+
+static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
+{
+	struct dma_chan *chan = vd->tx.chan;
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+	kfree(c->desc);
+}
+
+static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
+{
+	unsigned int len, send, left, wpt, d_wpt, tmp;
+	int ret;
+
+	left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
+	if (!left) {
+		mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+		return;
+	}
+
+	/* Wait 1sec for flush, can't sleep */
+	ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
+			tmp != VFF_FLUSH_B, 0, 1000000);
+	if (ret)
+		dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
+			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+	send = min_t(unsigned int, left, c->desc->avail_len);
+	wpt = mtk_uart_apdma_read(c, VFF_WPT);
+	len = mtk_uart_apdma_read(c, VFF_LEN);
+
+	d_wpt = wpt + send;
+	if ((d_wpt & VFF_RING_SIZE) >= len) {
+		d_wpt = d_wpt - len;
+		d_wpt = d_wpt ^ VFF_RING_WRAP;
+	}
+	mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
+
+	c->desc->avail_len -= send;
+
+	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+	if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
+		mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
+{
+	struct mtk_uart_apdma_desc *d = c->desc;
+	unsigned int len, wg, rg;
+	int cnt;
+
+	if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
+		!d || !vchan_next_desc(&c->vc))
+		return;
+
+	len = mtk_uart_apdma_read(c, VFF_LEN);
+	rg = mtk_uart_apdma_read(c, VFF_RPT);
+	wg = mtk_uart_apdma_read(c, VFF_WPT);
+	cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);
+	/*
+	 * The buffer is ring buffer. If wrap bit different,
+	 * represents the start of the next cycle for WPT
+	 */
+	if ((rg ^ wg) & VFF_RING_WRAP)
+		cnt += len;
+
+	c->rx_status = cnt;
+	mtk_uart_apdma_write(c, VFF_RPT, wg);
+
+	list_del(&d->vd.node);
+	vchan_cookie_complete(&d->vd);
+}
+
+static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	struct mtk_uart_apdma_desc *d;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->dir == DMA_DEV_TO_MEM) {
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+		mtk_uart_apdma_start_rx(c);
+	} else if (c->dir == DMA_MEM_TO_DEV) {
+		d = c->desc;
+
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+
+		if (d->avail_len != 0U) {
+			mtk_uart_apdma_start_tx(c);
+		} else {
+			list_del(&d->vd.node);
+			vchan_cookie_complete(&d->vd);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	unsigned int tmp;
+	int ret;
+
+	pm_runtime_get_sync(mtkd->ddev.dev);
+
+	mtk_uart_apdma_write(c, VFF_ADDR, 0);
+	mtk_uart_apdma_write(c, VFF_THRE, 0);
+	mtk_uart_apdma_write(c, VFF_LEN, 0);
+	mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
+
+	ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
+	if (ret) {
+		dev_err(chan->device->dev, "dma reset: fail, timeout\n");
+		return ret;
+	}
+
+	if (!c->requested) {
+		c->requested = true;
+		ret = request_irq(mtkd->dma_irq[chan->chan_id],
+				  mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
+				  KBUILD_MODNAME, chan);
+		if (ret < 0) {
+			dev_err(chan->device->dev, "Can't request dma IRQ\n");
+			return -EINVAL;
+		}
+	}
+
+	if (mtkd->support_33bits)
+		mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+
+	return ret;
+}
+
+static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+	if (c->requested) {
+		c->requested = false;
+		free_irq(mtkd->dma_irq[chan->chan_id], chan);
+	}
+
+	tasklet_kill(&c->vc.task);
+
+	vchan_free_chan_resources(&c->vc);
+
+	pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	if (!txstate)
+		return DMA_ERROR;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (ret == DMA_IN_PROGRESS) {
+		c->rx_status = mtk_uart_apdma_read(c, VFF_RPT) & VFF_RING_SIZE;
+		dma_set_residue(txstate, c->rx_status);
+	} else if (ret == DMA_COMPLETE && c->dir == DMA_DEV_TO_MEM) {
+		dma_set_residue(txstate, c->rx_status);
+	} else {
+		dma_set_residue(txstate, 0);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return ret;
+}
+
+static void mtk_uart_apdma_config_write(struct dma_chan *chan,
+			       struct dma_slave_config *cfg,
+			       enum dma_transfer_direction dir)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	struct mtk_uart_apdmadev *mtkd =
+				to_mtk_uart_apdma_dev(c->vc.chan.device);
+	unsigned int tmp;
+
+	if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
+		return;
+
+	c->dir = dir;
+
+	if (dir == DMA_DEV_TO_MEM) {
+		tmp = cfg->src_addr_width * 1024;
+
+		mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);
+		mtk_uart_apdma_write(c, VFF_LEN, tmp);
+		mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
+		mtk_uart_apdma_write(c, VFF_INT_EN,
+				VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
+		mtk_uart_apdma_write(c, VFF_RPT, 0);
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+	} else if (dir == DMA_MEM_TO_DEV)	{
+		tmp = cfg->dst_addr_width * 1024;
+
+		mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);
+		mtk_uart_apdma_write(c, VFF_LEN, tmp);
+		mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
+		mtk_uart_apdma_write(c, VFF_WPT, 0);
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+	}
+
+	mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
+
+	if (mtkd->support_33bits)
+		mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+	if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
+		dev_err(chan->device->dev, "dir[%d] fail\n", dir);
+}
+
+/*
+ * dmaengine_prep_slave_single will call the function. and sglen is 1.
+ * 8250 uart using one ring buffer, and deal with one sg.
+ */
+static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
+	(struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sglen, enum dma_transfer_direction dir,
+	unsigned long tx_flags, void *context)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	struct mtk_uart_apdma_desc *d;
+
+	if (!is_slave_direction(dir))
+		return NULL;
+
+	mtk_uart_apdma_config_write(chan, &c->cfg, dir);
+
+	/* Now allocate and setup the descriptor */
+	d = kzalloc(sizeof(*d), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	/* sglen is 1 */
+	d->avail_len = sg_dma_len(sgl);
+
+	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (vchan_issue_pending(&c->vc)) {
+		vd = vchan_next_desc(&c->vc);
+		c->desc = to_mtk_uart_apdma_desc(&vd->tx);
+	}
+
+	if (c->dir == DMA_DEV_TO_MEM)
+		mtk_uart_apdma_start_rx(c);
+	else if (c->dir == DMA_MEM_TO_DEV)
+		mtk_uart_apdma_start_tx(c);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
+				   struct dma_slave_config *config)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+
+	memcpy(&c->cfg, config, sizeof(*config));
+
+	return 0;
+}
+
+static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
+	unsigned long flags;
+	unsigned int tmp;
+	int ret;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+
+	mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
+	/* Wait 1sec for flush, can't sleep */
+	ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
+			tmp != VFF_FLUSH_B, 0, 1000000);
+	if (ret)
+		dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
+			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+	/* set stop as 1 -> wait until en is 0 -> set stop as 0 */
+	mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
+	ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
+			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
+
+	mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
+	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+	if (c->dir == DMA_DEV_TO_MEM)
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
+	else if (c->dir == DMA_MEM_TO_DEV)
+		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return 0;
+}
+
+static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
+{
+	while (!list_empty(&mtkd->ddev.channels)) {
+		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+			struct mtk_chan, vc.chan.device_node);
+
+		list_del(&c->vc.chan.device_node);
+		tasklet_kill(&c->vc.task);
+	}
+}
+
+static const struct of_device_id mtk_uart_apdma_match[] = {
+	{ .compatible = "mediatek,mt6577-uart-dma", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
+
+static int mtk_uart_apdma_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct mtk_uart_apdmadev *mtkd;
+	struct resource *res;
+	struct mtk_chan *c;
+	int bit_mask = 32, rc;
+	unsigned int i;
+
+	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+	if (!mtkd)
+		return -ENOMEM;
+
+	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(mtkd->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		rc = PTR_ERR(mtkd->clk);
+		return rc;
+	}
+
+	if (of_property_read_bool(np, "dma-33bits"))
+		mtkd->support_33bits = true;
+
+	if (mtkd->support_33bits)
+		bit_mask = 33;
+
+	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
+	if (rc)
+		return rc;
+
+	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+	mtkd->ddev.device_alloc_chan_resources =
+				mtk_uart_apdma_alloc_chan_resources;
+	mtkd->ddev.device_free_chan_resources =
+				mtk_uart_apdma_free_chan_resources;
+	mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
+	mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
+	mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
+	mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
+	mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
+	mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
+	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	mtkd->ddev.dev = &pdev->dev;
+	INIT_LIST_HEAD(&mtkd->ddev.channels);
+
+	mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
+	if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
+		dev_info(&pdev->dev,
+			 "Using %u as missing dma-requests property\n",
+			 MTK_UART_APDMA_NR_VCHANS);
+	}
+
+	mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
+				 sizeof(*mtkd->dma_irq), GFP_KERNEL);
+	if (!mtkd->dma_irq)
+		return -ENOMEM;
+
+	for (i = 0; i < mtkd->dma_requests; i++) {
+		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+		if (!c) {
+			rc = -ENODEV;
+			goto err_no_dma;
+		}
+
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res) {
+			rc = -ENODEV;
+			goto err_no_dma;
+		}
+
+		c->base = devm_ioremap_resource(&pdev->dev, res);
+		if (IS_ERR(c->base)) {
+			rc = PTR_ERR(c->base);
+			goto err_no_dma;
+		}
+		c->requested = false;
+		c->vc.desc_free = mtk_uart_apdma_desc_free;
+		vchan_init(&c->vc, &mtkd->ddev);
+
+		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+		if ((int)mtkd->dma_irq[i] < 0) {
+			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+			rc = -EINVAL;
+			goto err_no_dma;
+		}
+	}
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+
+	rc = dma_async_device_register(&mtkd->ddev);
+	if (rc)
+		goto rpm_disable;
+
+	platform_set_drvdata(pdev, mtkd);
+
+	/* Device-tree DMA controller registration */
+	rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
+	if (rc)
+		goto dma_remove;
+
+	return rc;
+
+dma_remove:
+	dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+err_no_dma:
+	mtk_uart_apdma_free(mtkd);
+	return rc;
+}
+
+static int mtk_uart_apdma_remove(struct platform_device *pdev)
+{
+	struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
+
+	if (pdev->dev.of_node)
+		of_dma_controller_free(pdev->dev.of_node);
+
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+
+	dma_async_device_unregister(&mtkd->ddev);
+	mtk_uart_apdma_free(mtkd);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_uart_apdma_suspend(struct device *dev)
+{
+	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev))
+		clk_disable_unprepare(mtkd->clk);
+
+	return 0;
+}
+
+static int mtk_uart_apdma_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev)) {
+		ret = clk_prepare_enable(mtkd->clk);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+#endif /* CONFIG_PM_SLEEP */
+
+#ifdef CONFIG_PM
+static int mtk_uart_apdma_runtime_suspend(struct device *dev)
+{
+	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(mtkd->clk);
+
+	return 0;
+}
+
+static int mtk_uart_apdma_runtime_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
+
+	ret = clk_prepare_enable(mtkd->clk);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
+	SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
+			   mtk_uart_apdma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_uart_apdma_driver = {
+	.probe	= mtk_uart_apdma_probe,
+	.remove	= mtk_uart_apdma_remove,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.pm		= &mtk_uart_apdma_pm_ops,
+		.of_match_table = of_match_ptr(mtk_uart_apdma_match),
+	},
+};
+
+module_platform_driver(mtk_uart_apdma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
+MODULE_LICENSE("GPL v2");
+
-- 
1.7.9.5



* [PATCH v10 2/3] arm: dts: mt2712: add uart APDMA to device tree
  2019-01-18  3:10 [PATCH v10 0/3] add uart DMA function Long Cheng
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
@ 2019-01-18  3:10 ` Long Cheng
  2019-01-18  3:10 ` [PATCH v10 3/3] dt-bindings: dma: uart: rename binding Long Cheng
  2 siblings, 0 replies; 11+ messages in thread
From: Long Cheng @ 2019-01-18  3:10 UTC (permalink / raw)
  To: Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee,
	Sean Wang, Nicolas Boichat
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

1. Add the UART APDMA controller device node.
2. Add DMA support for UART 0/1/2/3/4/5.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 arch/arm64/boot/dts/mediatek/mt2712e.dtsi |   51 +++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
index ee627a7..3469a6c 100644
--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
@@ -298,6 +298,9 @@
 		interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 10
+			&apdma 11>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -346,6 +349,39 @@
 			 (GIC_CPU_MASK_RAW(0x13) | IRQ_TYPE_LEVEL_HIGH)>;
 	};
 
+	apdma: dma-controller@11000400 {
+		compatible = "mediatek,mt2712-uart-dma",
+			     "mediatek,mt6577-uart-dma";
+		reg = <0 0x11000400 0 0x80>,
+		      <0 0x11000480 0 0x80>,
+		      <0 0x11000500 0 0x80>,
+		      <0 0x11000580 0 0x80>,
+		      <0 0x11000600 0 0x80>,
+		      <0 0x11000680 0 0x80>,
+		      <0 0x11000700 0 0x80>,
+		      <0 0x11000780 0 0x80>,
+		      <0 0x11000800 0 0x80>,
+		      <0 0x11000880 0 0x80>,
+		      <0 0x11000900 0 0x80>,
+		      <0 0x11000980 0 0x80>;
+		interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
+		dma-requests = <12>;
+		clocks = <&pericfg CLK_PERI_AP_DMA>;
+		clock-names = "apdma";
+		#dma-cells = <1>;
+	};
+
 	auxadc: adc@11001000 {
 		compatible = "mediatek,mt2712-auxadc";
 		reg = <0 0x11001000 0 0x1000>;
@@ -362,6 +398,9 @@
 		interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 0
+			&apdma 1>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -372,6 +411,9 @@
 		interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 2
+			&apdma 3>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -382,6 +424,9 @@
 		interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 4
+			&apdma 5>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -392,6 +437,9 @@
 		interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 6
+			&apdma 7>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -402,6 +450,9 @@
 		interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 8
+			&apdma 9>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
-- 
1.7.9.5



* [PATCH v10 3/3] dt-bindings: dma: uart: rename binding
  2019-01-18  3:10 [PATCH v10 0/3] add uart DMA function Long Cheng
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
  2019-01-18  3:10 ` [PATCH v10 2/3] arm: dts: mt2712: add uart APDMA to device tree Long Cheng
@ 2019-01-18  3:10 ` Long Cheng
  2019-01-21 14:18   ` Rob Herring
  2019-02-14 10:13   ` Matthias Brugger
  2 siblings, 2 replies; 11+ messages in thread
From: Long Cheng @ 2019-01-18  3:10 UTC (permalink / raw)
  To: Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee,
	Sean Wang, Nicolas Boichat
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

Rename the binding document so that the filename matches the driver,
mtk-uart-apdma.c; "mtk-uart-apdma.txt" is a better fit.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 --------------------
 .../devicetree/bindings/dma/mtk-uart-apdma.txt     |   33 ++++++++++++++++++++
 2 files changed, 33 insertions(+), 33 deletions(-)
 delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
 create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt

diff --git a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
deleted file mode 100644
index 3fe0961..0000000
--- a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
+++ /dev/null
@@ -1,33 +0,0 @@
-* Mediatek UART APDMA Controller
-
-Required properties:
-- compatible should contain:
-  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
-  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
-
-- reg: The base address of the APDMA register bank.
-
-- interrupts: A single interrupt specifier.
-
-- clocks : Must contain an entry for each entry in clock-names.
-  See ../clocks/clock-bindings.txt for details.
-- clock-names: The APDMA clock for register accesses
-
-Examples:
-
-	apdma: dma-controller@11000380 {
-		compatible = "mediatek,mt2712-uart-dma";
-		reg = <0 0x11000380 0 0x400>;
-		interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
-			     <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
-		clocks = <&pericfg CLK_PERI_AP_DMA>;
-		clock-names = "apdma";
-		#dma-cells = <1>;
-	};
-
diff --git a/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
new file mode 100644
index 0000000..3fe0961
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
@@ -0,0 +1,33 @@
+* Mediatek UART APDMA Controller
+
+Required properties:
+- compatible should contain:
+  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
+  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
+
+- reg: The base address of the APDMA register bank.
+
+- interrupts: A single interrupt specifier.
+
+- clocks : Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: The APDMA clock for register accesses
+
+Examples:
+
+	apdma: dma-controller@11000380 {
+		compatible = "mediatek,mt2712-uart-dma";
+		reg = <0 0x11000380 0 0x400>;
+		interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
+		clocks = <&pericfg CLK_PERI_AP_DMA>;
+		clock-names = "apdma";
+		#dma-cells = <1>;
+	};
+
-- 
1.7.9.5



* Re: [PATCH v10 3/3] dt-bindings: dma: uart: rename binding
  2019-01-18  3:10 ` [PATCH v10 3/3] dt-bindings: dma: uart: rename binding Long Cheng
@ 2019-01-21 14:18   ` Rob Herring
  2019-02-14 10:13   ` Matthias Brugger
  1 sibling, 0 replies; 11+ messages in thread
From: Rob Herring @ 2019-01-21 14:18 UTC (permalink / raw)
  To: Long Cheng
  Cc: Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee,
	Sean Wang, Nicolas Boichat, Matthias Brugger, Dan Williams,
	Greg Kroah-Hartman, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

On Fri, 18 Jan 2019 11:10:16 +0800, Long Cheng wrote:
> The filename matches mtk-uart-apdma.c.
> So using "mtk-uart-apdma.txt" should be better.
> 
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 --------------------
>  .../devicetree/bindings/dma/mtk-uart-apdma.txt     |   33 ++++++++++++++++++++
>  2 files changed, 33 insertions(+), 33 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
>  create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
> 

Reviewed-by: Rob Herring <robh@kernel.org>


* Re: [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
@ 2019-01-28  3:25   ` Long Cheng
  2019-02-04  7:21   ` Vinod Koul
  2019-02-14 10:08   ` Matthias Brugger
  2 siblings, 0 replies; 11+ messages in thread
From: Long Cheng @ 2019-01-28  3:25 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee, Sean Wang,
	Nicolas Boichat, Matthias Brugger, Dan Williams,
	Greg Kroah-Hartman, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen, Zhenbao Liu

On Fri, 2019-01-18 at 11:10 +0800, Long Cheng wrote:

Hi Vinod Koul,

Just a gentle ping!

thanks.

> In DMA engine framework, add 8250 uart dma to support MediaTek uart.
> If MediaTek uart enabled(SERIAL_8250_MT6577), and want to improve
> the performance, can enable the function.
> 
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  drivers/dma/mediatek/Kconfig          |   11 +
>  drivers/dma/mediatek/Makefile         |    1 +
>  drivers/dma/mediatek/mtk-uart-apdma.c |  669 +++++++++++++++++++++++++++++++++
>  3 files changed, 681 insertions(+)
>  create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c
> 
> diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> index 680fc05..ac49eb6 100644
> --- a/drivers/dma/mediatek/Kconfig
> +++ b/drivers/dma/mediatek/Kconfig
> @@ -24,3 +24,14 @@ config MTK_CQDMA
>  
>  	  This controller provides the channels which is dedicated to
>  	  memory-to-memory transfer to offload from CPU.
> +
> +config MTK_UART_APDMA
> +	tristate "MediaTek SoCs APDMA support for UART"
> +	depends on OF && SERIAL_8250_MT6577
> +	select DMA_ENGINE
> +	select DMA_VIRTUAL_CHANNELS
> +	help
> +	  Support for the UART DMA engine found on MediaTek MTK SoCs.
> +	  When SERIAL_8250_MT6577 is enabled, and if you want to use DMA,
> +	  you can enable the config. The DMA engine can only be used
> +	  with MediaTek SoCs.
> diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> index 41bb381..61a6d29 100644
> --- a/drivers/dma/mediatek/Makefile
> +++ b/drivers/dma/mediatek/Makefile
> @@ -1,2 +1,3 @@
> +obj-$(CONFIG_MTK_UART_APDMA) += mtk-uart-apdma.o
>  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
>  obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
> diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
> new file mode 100644
> index 0000000..427db69
> --- /dev/null
> +++ b/drivers/dma/mediatek/mtk-uart-apdma.c
> @@ -0,0 +1,669 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * MediaTek Uart APDMA driver.
> + *
> + * Copyright (c) 2018 MediaTek Inc.
> + * Author: Long Cheng <long.cheng@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/of_device.h>
> +#include <linux/of_dma.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +
> +#include "../virt-dma.h"
> +
> +/* The default number of virtual channel */
> +#define MTK_UART_APDMA_NR_VCHANS	8
> +
> +#define VFF_EN_B		BIT(0)
> +#define VFF_STOP_B		BIT(0)
> +#define VFF_FLUSH_B		BIT(0)
> +#define VFF_4G_SUPPORT_B	BIT(0)
> +#define VFF_RX_INT_EN0_B	BIT(0)	/* rx valid size >=  vff thre */
> +#define VFF_RX_INT_EN1_B	BIT(1)
> +#define VFF_TX_INT_EN_B		BIT(0)	/* tx left size >= vff thre */
> +#define VFF_WARM_RST_B		BIT(0)
> +#define VFF_RX_INT_CLR_B	(BIT(0) | BIT(1))
> +#define VFF_TX_INT_CLR_B	0
> +#define VFF_STOP_CLR_B		0
> +#define VFF_INT_EN_CLR_B	0
> +#define VFF_4G_SUPPORT_CLR_B	0
> +
> +/* interrupt trigger level for tx */
> +#define VFF_TX_THRE(n)		((n) * 7 / 8)
> +/* interrupt trigger level for rx */
> +#define VFF_RX_THRE(n)		((n) * 3 / 4)
> +
> +#define VFF_RING_SIZE	0xffffU
> +/* invert this bit when wrap ring head again */
> +#define VFF_RING_WRAP	0x10000U
> +
> +#define VFF_INT_FLAG		0x00
> +#define VFF_INT_EN		0x04
> +#define VFF_EN			0x08
> +#define VFF_RST			0x0c
> +#define VFF_STOP		0x10
> +#define VFF_FLUSH		0x14
> +#define VFF_ADDR		0x1c
> +#define VFF_LEN			0x24
> +#define VFF_THRE		0x28
> +#define VFF_WPT			0x2c
> +#define VFF_RPT			0x30
> +/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
> +#define VFF_VALID_SIZE		0x3c
> +/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
> +#define VFF_LEFT_SIZE		0x40
> +#define VFF_DEBUG_STATUS	0x50
> +#define VFF_4G_SUPPORT		0x54
> +
> +struct mtk_uart_apdmadev {
> +	struct dma_device ddev;
> +	struct clk *clk;
> +	bool support_33bits;
> +	unsigned int dma_requests;
> +	unsigned int *dma_irq;
> +};
> +
> +struct mtk_uart_apdma_desc {
> +	struct virt_dma_desc vd;
> +
> +	unsigned int avail_len;
> +};
> +
> +struct mtk_chan {
> +	struct virt_dma_chan vc;
> +	struct dma_slave_config	cfg;
> +	void __iomem *base;
> +	struct mtk_uart_apdma_desc *desc;
> +
> +	enum dma_transfer_direction dir;
> +
> +	bool requested;
> +
> +	unsigned int rx_status;
> +};
> +
> +static inline struct mtk_uart_apdmadev *
> +to_mtk_uart_apdma_dev(struct dma_device *d)
> +{
> +	return container_of(d, struct mtk_uart_apdmadev, ddev);
> +}
> +
> +static inline struct mtk_chan *to_mtk_uart_apdma_chan(struct dma_chan *c)
> +{
> +	return container_of(c, struct mtk_chan, vc.chan);
> +}
> +
> +static inline struct mtk_uart_apdma_desc *to_mtk_uart_apdma_desc
> +	(struct dma_async_tx_descriptor *t)
> +{
> +	return container_of(t, struct mtk_uart_apdma_desc, vd.tx);
> +}
> +
> +static void mtk_uart_apdma_write(struct mtk_chan *c,
> +			       unsigned int reg, unsigned int val)
> +{
> +	writel(val, c->base + reg);
> +}
> +
> +static unsigned int mtk_uart_apdma_read(struct mtk_chan *c, unsigned int reg)
> +{
> +	return readl(c->base + reg);
> +}
> +
> +static void mtk_uart_apdma_desc_free(struct virt_dma_desc *vd)
> +{
> +	struct dma_chan *chan = vd->tx.chan;
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +
> +	kfree(c->desc);
> +}
> +
> +static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
> +{
> +	unsigned int len, send, left, wpt, d_wpt, tmp;
> +	int ret;
> +
> +	left = mtk_uart_apdma_read(c, VFF_LEFT_SIZE);
> +	if (!left) {
> +		mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +		return;
> +	}
> +
> +	/* Wait 1sec for flush, can't sleep */
> +	ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> +			tmp != VFF_FLUSH_B, 0, 1000000);
> +	if (ret)
> +		dev_warn(c->vc.chan.device->dev, "tx: fail, debug=0x%x\n",
> +			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> +	send = min_t(unsigned int, left, c->desc->avail_len);
> +	wpt = mtk_uart_apdma_read(c, VFF_WPT);
> +	len = mtk_uart_apdma_read(c, VFF_LEN);
> +
> +	d_wpt = wpt + send;
> +	if ((d_wpt & VFF_RING_SIZE) >= len) {
> +		d_wpt = d_wpt - len;
> +		d_wpt = d_wpt ^ VFF_RING_WRAP;
> +	}
> +	mtk_uart_apdma_write(c, VFF_WPT, d_wpt);
> +
> +	c->desc->avail_len -= send;
> +
> +	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +	if (mtk_uart_apdma_read(c, VFF_FLUSH) == 0U)
> +		mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +}
> +
> +static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
> +{
> +	struct mtk_uart_apdma_desc *d = c->desc;
> +	unsigned int len, wg, rg;
> +	int cnt;
> +
> +	if ((mtk_uart_apdma_read(c, VFF_VALID_SIZE) == 0U) ||
> +		!d || !vchan_next_desc(&c->vc))
> +		return;
> +
> +	len = mtk_uart_apdma_read(c, VFF_LEN);
> +	rg = mtk_uart_apdma_read(c, VFF_RPT);
> +	wg = mtk_uart_apdma_read(c, VFF_WPT);
> +	cnt = (wg & VFF_RING_SIZE) - (rg & VFF_RING_SIZE);
> +	/*
> +	 * The buffer is ring buffer. If wrap bit different,
> +	 * represents the start of the next cycle for WPT
> +	 */
> +	if ((rg ^ wg) & VFF_RING_WRAP)
> +		cnt += len;
> +
> +	c->rx_status = cnt;
> +	mtk_uart_apdma_write(c, VFF_RPT, wg);
> +
> +	list_del(&d->vd.node);
> +	vchan_cookie_complete(&d->vd);
> +}
> +
> +static irqreturn_t mtk_uart_apdma_irq_handler(int irq, void *dev_id)
> +{
> +	struct dma_chan *chan = (struct dma_chan *)dev_id;
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	struct mtk_uart_apdma_desc *d;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&c->vc.lock, flags);
> +	if (c->dir == DMA_DEV_TO_MEM) {
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> +		mtk_uart_apdma_start_rx(c);
> +	} else if (c->dir == DMA_MEM_TO_DEV) {
> +		d = c->desc;
> +
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> +
> +		if (d->avail_len != 0U) {
> +			mtk_uart_apdma_start_tx(c);
> +		} else {
> +			list_del(&d->vd.node);
> +			vchan_cookie_complete(&d->vd);
> +		}
> +	}
> +	spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	unsigned int tmp;
> +	int ret;
> +
> +	pm_runtime_get_sync(mtkd->ddev.dev);
> +
> +	mtk_uart_apdma_write(c, VFF_ADDR, 0);
> +	mtk_uart_apdma_write(c, VFF_THRE, 0);
> +	mtk_uart_apdma_write(c, VFF_LEN, 0);
> +	mtk_uart_apdma_write(c, VFF_RST, VFF_WARM_RST_B);
> +
> +	ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> +	if (ret) {
> +		dev_err(chan->device->dev, "dma reset: fail, timeout\n");
> +		return ret;
> +	}
> +
> +	if (!c->requested) {
> +		c->requested = true;
> +		ret = request_irq(mtkd->dma_irq[chan->chan_id],
> +				  mtk_uart_apdma_irq_handler, IRQF_TRIGGER_NONE,
> +				  KBUILD_MODNAME, chan);
> +		if (ret < 0) {
> +			dev_err(chan->device->dev, "Can't request dma IRQ\n");
> +			return -EINVAL;
> +		}
> +	}
> +
> +	if (mtkd->support_33bits)
> +		mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> +
> +	return ret;
> +}
> +
> +static void mtk_uart_apdma_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct mtk_uart_apdmadev *mtkd = to_mtk_uart_apdma_dev(chan->device);
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +
> +	if (c->requested) {
> +		c->requested = false;
> +		free_irq(mtkd->dma_irq[chan->chan_id], chan);
> +	}
> +
> +	tasklet_kill(&c->vc.task);
> +
> +	vchan_free_chan_resources(&c->vc);
> +
> +	pm_runtime_put_sync(mtkd->ddev.dev);
> +}
> +
> +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> +					 dma_cookie_t cookie,
> +					 struct dma_tx_state *txstate)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	enum dma_status ret;
> +	unsigned long flags;
> +
> +	if (!txstate)
> +		return DMA_ERROR;
> +
> +	ret = dma_cookie_status(chan, cookie, txstate);
> +	spin_lock_irqsave(&c->vc.lock, flags);
> +	if (ret == DMA_IN_PROGRESS) {
> +		c->rx_status = mtk_uart_apdma_read(c, VFF_RPT) & VFF_RING_SIZE;
> +		dma_set_residue(txstate, c->rx_status);
> +	} else if (ret == DMA_COMPLETE && c->dir == DMA_DEV_TO_MEM) {
> +		dma_set_residue(txstate, c->rx_status);
> +	} else {
> +		dma_set_residue(txstate, 0);
> +	}
> +	spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +	return ret;
> +}
> +
> +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> +			       struct dma_slave_config *cfg,
> +			       enum dma_transfer_direction dir)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	struct mtk_uart_apdmadev *mtkd =
> +				to_mtk_uart_apdma_dev(c->vc.chan.device);
> +	unsigned int tmp;
> +
> +	if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> +		return;
> +
> +	c->dir = dir;
> +
> +	if (dir == DMA_DEV_TO_MEM) {
> +		tmp = cfg->src_addr_width * 1024;
> +
> +		mtk_uart_apdma_write(c, VFF_ADDR, cfg->src_addr);
> +		mtk_uart_apdma_write(c, VFF_LEN, tmp);
> +		mtk_uart_apdma_write(c, VFF_THRE, VFF_RX_THRE(tmp));
> +		mtk_uart_apdma_write(c, VFF_INT_EN,
> +				VFF_RX_INT_EN0_B | VFF_RX_INT_EN1_B);
> +		mtk_uart_apdma_write(c, VFF_RPT, 0);
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> +	} else if (dir == DMA_MEM_TO_DEV)	{
> +		tmp = cfg->dst_addr_width * 1024;
> +
> +		mtk_uart_apdma_write(c, VFF_ADDR, cfg->dst_addr);
> +		mtk_uart_apdma_write(c, VFF_LEN, tmp);
> +		mtk_uart_apdma_write(c, VFF_THRE, VFF_TX_THRE(tmp));
> +		mtk_uart_apdma_write(c, VFF_WPT, 0);
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> +	}
> +
> +	mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
> +
> +	if (mtkd->support_33bits)
> +		mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> +
> +	if (mtk_uart_apdma_read(c, VFF_EN) != VFF_EN_B)
> +		dev_err(chan->device->dev, "dir[%d] fail\n", dir);
> +}
> +
> +/*
> + * dmaengine_prep_slave_single will call the function. and sglen is 1.
> + * 8250 uart using one ring buffer, and deal with one sg.
> + */
> +static struct dma_async_tx_descriptor *mtk_uart_apdma_prep_slave_sg
> +	(struct dma_chan *chan, struct scatterlist *sgl,
> +	unsigned int sglen, enum dma_transfer_direction dir,
> +	unsigned long tx_flags, void *context)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	struct mtk_uart_apdma_desc *d;
> +
> +	if (!is_slave_direction(dir))
> +		return NULL;
> +
> +	mtk_uart_apdma_config_write(chan, &c->cfg, dir);
> +
> +	/* Now allocate and setup the descriptor */
> +	d = kzalloc(sizeof(*d), GFP_ATOMIC);
> +	if (!d)
> +		return NULL;
> +
> +	/* sglen is 1 */
> +	d->avail_len = sg_dma_len(sgl);
> +
> +	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> +}
> +
> +static void mtk_uart_apdma_issue_pending(struct dma_chan *chan)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	struct virt_dma_desc *vd;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&c->vc.lock, flags);
> +	if (vchan_issue_pending(&c->vc)) {
> +		vd = vchan_next_desc(&c->vc);
> +		c->desc = to_mtk_uart_apdma_desc(&vd->tx);
> +	}
> +
> +	if (c->dir == DMA_DEV_TO_MEM)
> +		mtk_uart_apdma_start_rx(c);
> +	else if (c->dir == DMA_MEM_TO_DEV)
> +		mtk_uart_apdma_start_tx(c);
> +
> +	spin_unlock_irqrestore(&c->vc.lock, flags);
> +}
> +
> +static int mtk_uart_apdma_slave_config(struct dma_chan *chan,
> +				   struct dma_slave_config *config)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +
> +	memcpy(&c->cfg, config, sizeof(*config));
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_terminate_all(struct dma_chan *chan)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	unsigned long flags;
> +	unsigned int tmp;
> +	int ret;
> +
> +	spin_lock_irqsave(&c->vc.lock, flags);
> +
> +	mtk_uart_apdma_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +	/* Wait 1sec for flush, can't sleep */
> +	ret = readx_poll_timeout(readl, c->base + VFF_FLUSH, tmp,
> +			tmp != VFF_FLUSH_B, 0, 1000000);
> +	if (ret)
> +		dev_err(c->vc.chan.device->dev, "flush: fail, debug=0x%x\n",
> +			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> +	/* set stop as 1 -> wait until en is 0 -> set stop as 0 */
> +	mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_B);
> +	ret = readx_poll_timeout(readl, c->base + VFF_EN, tmp, !tmp, 10, 100);
> +	if (ret)
> +		dev_err(c->vc.chan.device->dev, "stop: fail, debug=0x%x\n",
> +			mtk_uart_apdma_read(c, VFF_DEBUG_STATUS));
> +
> +	mtk_uart_apdma_write(c, VFF_STOP, VFF_STOP_CLR_B);
> +	mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> +
> +	if (c->dir == DMA_DEV_TO_MEM)
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);
> +	else if (c->dir == DMA_MEM_TO_DEV)
> +		mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);
> +
> +	spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> +{
> +	/* just for check caps pass */
> +	return 0;
> +}
> +
> +static void mtk_uart_apdma_free(struct mtk_uart_apdmadev *mtkd)
> +{
> +	while (!list_empty(&mtkd->ddev.channels)) {
> +		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> +			struct mtk_chan, vc.chan.device_node);
> +
> +		list_del(&c->vc.chan.device_node);
> +		tasklet_kill(&c->vc.task);
> +	}
> +}
> +
> +static const struct of_device_id mtk_uart_apdma_match[] = {
> +	{ .compatible = "mediatek,mt6577-uart-dma", },
> +	{ /* sentinel */ },
> +};
> +MODULE_DEVICE_TABLE(of, mtk_uart_apdma_match);
> +
> +static int mtk_uart_apdma_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct mtk_uart_apdmadev *mtkd;
> +	struct resource *res;
> +	struct mtk_chan *c;
> +	int bit_mask = 32, rc;
> +	unsigned int i;
> +
> +	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> +	if (!mtkd)
> +		return -ENOMEM;
> +
> +	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> +	if (IS_ERR(mtkd->clk)) {
> +		dev_err(&pdev->dev, "No clock specified\n");
> +		rc = PTR_ERR(mtkd->clk);
> +		return rc;
> +	}
> +
> +	if (of_property_read_bool(np, "dma-33bits"))
> +		mtkd->support_33bits = true;
> +
> +	if (mtkd->support_33bits)
> +		bit_mask = 33;
> +
> +	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
> +	if (rc)
> +		return rc;
> +
> +	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> +	mtkd->ddev.device_alloc_chan_resources =
> +				mtk_uart_apdma_alloc_chan_resources;
> +	mtkd->ddev.device_free_chan_resources =
> +				mtk_uart_apdma_free_chan_resources;
> +	mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
> +	mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
> +	mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
> +	mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
> +	mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
> +	mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
> +	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +	mtkd->ddev.dev = &pdev->dev;
> +	INIT_LIST_HEAD(&mtkd->ddev.channels);
> +
> +	mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
> +	if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
> +		dev_info(&pdev->dev,
> +			 "Using %u as missing dma-requests property\n",
> +			 MTK_UART_APDMA_NR_VCHANS);
> +	}
> +
> +	mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
> +				 sizeof(*mtkd->dma_irq), GFP_KERNEL);
> +	if (!mtkd->dma_irq)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < mtkd->dma_requests; i++) {
> +		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> +		if (!c) {
> +			rc = -ENODEV;
> +			goto err_no_dma;
> +		}
> +
> +		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> +		if (!res) {
> +			rc = -ENODEV;
> +			goto err_no_dma;
> +		}
> +
> +		c->base = devm_ioremap_resource(&pdev->dev, res);
> +		if (IS_ERR(c->base)) {
> +			rc = PTR_ERR(c->base);
> +			goto err_no_dma;
> +		}
> +		c->requested = false;
> +		c->vc.desc_free = mtk_uart_apdma_desc_free;
> +		vchan_init(&c->vc, &mtkd->ddev);
> +
> +		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> +		if ((int)mtkd->dma_irq[i] < 0) {
> +			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> +			rc = -EINVAL;
> +			goto err_no_dma;
> +		}
> +	}
> +
> +	pm_runtime_enable(&pdev->dev);
> +	pm_runtime_set_active(&pdev->dev);
> +
> +	rc = dma_async_device_register(&mtkd->ddev);
> +	if (rc)
> +		goto rpm_disable;
> +
> +	platform_set_drvdata(pdev, mtkd);
> +
> +	/* Device-tree DMA controller registration */
> +	rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
> +	if (rc)
> +		goto dma_remove;
> +
> +	return rc;
> +
> +dma_remove:
> +	dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> +	pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> +	mtk_uart_apdma_free(mtkd);
> +	return rc;
> +}
> +
> +static int mtk_uart_apdma_remove(struct platform_device *pdev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
> +
> +	if (pdev->dev.of_node)
> +		of_dma_controller_free(pdev->dev.of_node);
> +
> +	pm_runtime_disable(&pdev->dev);
> +	pm_runtime_put_noidle(&pdev->dev);
> +
> +	dma_async_device_unregister(&mtkd->ddev);
> +	mtk_uart_apdma_free(mtkd);
> +
> +	return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_uart_apdma_suspend(struct device *dev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	if (!pm_runtime_suspended(dev))
> +		clk_disable_unprepare(mtkd->clk);
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_resume(struct device *dev)
> +{
> +	int ret;
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	if (!pm_runtime_suspended(dev)) {
> +		ret = clk_prepare_enable(mtkd->clk);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +#endif /* CONFIG_PM_SLEEP */
> +
> +#ifdef CONFIG_PM
> +static int mtk_uart_apdma_runtime_suspend(struct device *dev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	clk_disable_unprepare(mtkd->clk);
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_runtime_resume(struct device *dev)
> +{
> +	int ret;
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	ret = clk_prepare_enable(mtkd->clk);
> +	if (ret)
> +		return ret;
> +
> +	return 0;
> +}
> +#endif /* CONFIG_PM */
> +
> +static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
> +	SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
> +	SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
> +			   mtk_uart_apdma_runtime_resume, NULL)
> +};
> +
> +static struct platform_driver mtk_uart_apdma_driver = {
> +	.probe	= mtk_uart_apdma_probe,
> +	.remove	= mtk_uart_apdma_remove,
> +	.driver = {
> +		.name		= KBUILD_MODNAME,
> +		.pm		= &mtk_uart_apdma_pm_ops,
> +		.of_match_table = of_match_ptr(mtk_uart_apdma_match),
> +	},
> +};
> +
> +module_platform_driver(mtk_uart_apdma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> +MODULE_LICENSE("GPL v2");
> +




* Re: [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
  2019-01-28  3:25   ` Long Cheng
@ 2019-02-04  7:21   ` Vinod Koul
  2019-02-14  2:44     ` Long Cheng
  2019-02-14 10:08   ` Matthias Brugger
  2 siblings, 1 reply; 11+ messages in thread
From: Vinod Koul @ 2019-02-04  7:21 UTC (permalink / raw)
  To: Long Cheng
  Cc: Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee, Sean Wang,
	Nicolas Boichat, Matthias Brugger, Dan Williams,
	Greg Kroah-Hartman, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen, Zhenbao Liu

On 18-01-19, 11:10, Long Cheng wrote:
> +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> +					 dma_cookie_t cookie,
> +					 struct dma_tx_state *txstate)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	enum dma_status ret;
> +	unsigned long flags;
> +
> +	if (!txstate)
> +		return DMA_ERROR;

Why, it is not a mandatory arg!

> +	ret = dma_cookie_status(chan, cookie, txstate);
> +	spin_lock_irqsave(&c->vc.lock, flags);
> +	if (ret == DMA_IN_PROGRESS) {
> +		c->rx_status = mtk_uart_apdma_read(c, VFF_RPT) & VFF_RING_SIZE;
> +		dma_set_residue(txstate, c->rx_status);
> +	} else if (ret == DMA_COMPLETE && c->dir == DMA_DEV_TO_MEM) {
> +		dma_set_residue(txstate, c->rx_status);

what is the point in setting residue for a completed txn, it is zero!

> +	} else {
> +		dma_set_residue(txstate, 0);

naah, that doesn't sound correct!

> +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> +			       struct dma_slave_config *cfg,
> +			       enum dma_transfer_direction dir)
> +{
> +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> +	struct mtk_uart_apdmadev *mtkd =
> +				to_mtk_uart_apdma_dev(c->vc.chan.device);
> +	unsigned int tmp;
> +
> +	if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> +		return;
> +
> +	c->dir = dir;
> +
> +	if (dir == DMA_DEV_TO_MEM) {
> +		tmp = cfg->src_addr_width * 1024;

why multiply by 1024?

> +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> +{
> +	/* just for check caps pass */
> +	return 0;
> +}

please remove, this is not a mandatory fn
-- 
~Vinod

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-02-04  7:21   ` Vinod Koul
@ 2019-02-14  2:44     ` Long Cheng
  2019-02-19 12:14       ` Vinod Koul
  0 siblings, 1 reply; 11+ messages in thread
From: Long Cheng @ 2019-02-14  2:44 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee, Sean Wang,
	Nicolas Boichat, Matthias Brugger, Dan Williams,
	Greg Kroah-Hartman, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen, Zhenbao Liu, Long Cheng

On Mon, 2019-02-04 at 12:51 +0530, Vinod Koul wrote:

Hi Vinod sir,

> On 18-01-19, 11:10, Long Cheng wrote:
> > +static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
> > +					 dma_cookie_t cookie,
> > +					 struct dma_tx_state *txstate)
> > +{
> > +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +	enum dma_status ret;
> > +	unsigned long flags;
> > +
> > +	if (!txstate)
> > +		return DMA_ERROR;
> 
> Why, it is not a mandatory arg!

In the next version, I will remove it.
> 
> > +	ret = dma_cookie_status(chan, cookie, txstate);
> > +	spin_lock_irqsave(&c->vc.lock, flags);
> > +	if (ret == DMA_IN_PROGRESS) {
> > +		c->rx_status = mtk_uart_apdma_read(c, VFF_RPT) & VFF_RING_SIZE;
> > +		dma_set_residue(txstate, c->rx_status);
> > +	} else if (ret == DMA_COMPLETE && c->dir == DMA_DEV_TO_MEM) {
> > +		dma_set_residue(txstate, c->rx_status);
> 
> What is the point in setting the residue for a completed txn? It is zero!
> 
> > +	} else {
> > +		dma_set_residue(txstate, 0);
> 
> Naah, that doesn't sound correct!
> 
In the next version, I will modify the function.
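
Roughly what I have in mind (just a sketch; dma_cookie_status() already
reports a zero residue once a descriptor is complete, so only the in-flight
DEV_TO_MEM case needs a residue):

	static enum dma_status mtk_uart_apdma_tx_status(struct dma_chan *chan,
						 dma_cookie_t cookie,
						 struct dma_tx_state *txstate)
	{
		struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
		enum dma_status ret;
		unsigned long flags;

		ret = dma_cookie_status(chan, cookie, txstate);
		/* Completed descriptors already carry a zero residue */
		if (ret == DMA_COMPLETE || !txstate)
			return ret;

		spin_lock_irqsave(&c->vc.lock, flags);
		/* Only RX has a meaningful ring read pointer to report */
		if (c->dir == DMA_DEV_TO_MEM)
			dma_set_residue(txstate,
					mtk_uart_apdma_read(c, VFF_RPT) & VFF_RING_SIZE);
		spin_unlock_irqrestore(&c->vc.lock, flags);

		return ret;
	}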

> > +static void mtk_uart_apdma_config_write(struct dma_chan *chan,
> > +			       struct dma_slave_config *cfg,
> > +			       enum dma_transfer_direction dir)
> > +{
> > +	struct mtk_chan *c = to_mtk_uart_apdma_chan(chan);
> > +	struct mtk_uart_apdmadev *mtkd =
> > +				to_mtk_uart_apdma_dev(c->vc.chan.device);
> > +	unsigned int tmp;
> > +
> > +	if (mtk_uart_apdma_read(c, VFF_EN) == VFF_EN_B)
> > +		return;
> > +
> > +	c->dir = dir;
> > +
> > +	if (dir == DMA_DEV_TO_MEM) {
> > +		tmp = cfg->src_addr_width * 1024;
> 
> why multiply by 1024?
> 
In the next version, I will modify this to use 'src_port_window_size' and
'dst_port_window_size'.
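
Roughly like this (sketch only, reusing the fields that already exist in
struct dma_slave_config; the serial driver would then fill them in when it
configures the channel):

	c->dir = dir;

	if (dir == DMA_DEV_TO_MEM)
		/* RX ring size as programmed by the 8250 driver */
		tmp = cfg->src_port_window_size;
	else
		/* TX ring size as programmed by the 8250 driver */
		tmp = cfg->dst_port_window_size;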

> > +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> > +{
> > +	/* just for check caps pass */
> > +	return 0;
> > +}
> 
> please remove, this is not a mandatory fn

I can't remove it. As I explained in an earlier mail, the 8250 uart
framework will check for this function.

thanks.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
  2019-01-28  3:25   ` Long Cheng
  2019-02-04  7:21   ` Vinod Koul
@ 2019-02-14 10:08   ` Matthias Brugger
  2 siblings, 0 replies; 11+ messages in thread
From: Matthias Brugger @ 2019-02-14 10:08 UTC (permalink / raw)
  To: Long Cheng, Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland,
	Ryder Lee, Sean Wang, Nicolas Boichat
  Cc: Dan Williams, Greg Kroah-Hartman, Jiri Slaby, Sean Wang,
	dmaengine, devicetree, linux-arm-kernel, linux-mediatek,
	linux-kernel, linux-serial, srv_heupstream, Yingjoe Chen,
	YT Shen, Zhenbao Liu



On 18/01/2019 04:10, Long Cheng wrote:
> In DMA engine framework, add 8250 uart dma to support MediaTek uart.
> If the MediaTek uart is enabled (SERIAL_8250_MT6577) and you want to improve
> the performance, you can enable this function.
> 
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  drivers/dma/mediatek/Kconfig          |   11 +
>  drivers/dma/mediatek/Makefile         |    1 +
>  drivers/dma/mediatek/mtk-uart-apdma.c |  669 +++++++++++++++++++++++++++++++++
>  3 files changed, 681 insertions(+)
>  create mode 100644 drivers/dma/mediatek/mtk-uart-apdma.c
> 
[...]
> +
> +static int mtk_uart_apdma_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct mtk_uart_apdmadev *mtkd;
> +	struct resource *res;
> +	struct mtk_chan *c;
> +	int bit_mask = 32, rc;
> +	unsigned int i;
> +
> +	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> +	if (!mtkd)
> +		return -ENOMEM;
> +
> +	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> +	if (IS_ERR(mtkd->clk)) {
> +		dev_err(&pdev->dev, "No clock specified\n");
> +		rc = PTR_ERR(mtkd->clk);
> +		return rc;
> +	}
> +
> +	if (of_property_read_bool(np, "dma-33bits"))
> +		mtkd->support_33bits = true;

dma-33bits is not defined in the binding description.

> +
> +	if (mtkd->support_33bits)
> +		bit_mask = 33;
> +
> +	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(bit_mask));
> +	if (rc)
> +		return rc;
> +
> +	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> +	mtkd->ddev.device_alloc_chan_resources =
> +				mtk_uart_apdma_alloc_chan_resources;
> +	mtkd->ddev.device_free_chan_resources =
> +				mtk_uart_apdma_free_chan_resources;
> +	mtkd->ddev.device_tx_status = mtk_uart_apdma_tx_status;
> +	mtkd->ddev.device_issue_pending = mtk_uart_apdma_issue_pending;
> +	mtkd->ddev.device_prep_slave_sg = mtk_uart_apdma_prep_slave_sg;
> +	mtkd->ddev.device_config = mtk_uart_apdma_slave_config;
> +	mtkd->ddev.device_pause = mtk_uart_apdma_device_pause;
> +	mtkd->ddev.device_terminate_all = mtk_uart_apdma_terminate_all;
> +	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +	mtkd->ddev.dev = &pdev->dev;
> +	INIT_LIST_HEAD(&mtkd->ddev.channels);
> +
> +	mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;

Do this only if the dma-requests property is not present.
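
I.e. something like this (sketch only; the quoted lines below are the
current version):

	if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {
		/* Fall back to the default only when the property is absent */
		mtkd->dma_requests = MTK_UART_APDMA_NR_VCHANS;
		dev_info(&pdev->dev,
			 "Using %u as missing dma-requests property\n",
			 MTK_UART_APDMA_NR_VCHANS);
	}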

> +	if (of_property_read_u32(np, "dma-requests", &mtkd->dma_requests)) {

It's quite obvious what this does, but IMHO it should be reflected in the binding.

Regards,
Matthias

> +		dev_info(&pdev->dev,
> +			 "Using %u as missing dma-requests property\n",
> +			 MTK_UART_APDMA_NR_VCHANS);
> +	}
> +
> +	mtkd->dma_irq = devm_kcalloc(&pdev->dev, mtkd->dma_requests,
> +				 sizeof(*mtkd->dma_irq), GFP_KERNEL);
> +	if (!mtkd->dma_irq)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < mtkd->dma_requests; i++) {
> +		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> +		if (!c) {
> +			rc = -ENODEV;
> +			goto err_no_dma;
> +		}
> +
> +		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> +		if (!res) {
> +			rc = -ENODEV;
> +			goto err_no_dma;
> +		}
> +
> +		c->base = devm_ioremap_resource(&pdev->dev, res);
> +		if (IS_ERR(c->base)) {
> +			rc = PTR_ERR(c->base);
> +			goto err_no_dma;
> +		}
> +		c->requested = false;
> +		c->vc.desc_free = mtk_uart_apdma_desc_free;
> +		vchan_init(&c->vc, &mtkd->ddev);
> +
> +		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> +		if ((int)mtkd->dma_irq[i] < 0) {
> +			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> +			rc = -EINVAL;
> +			goto err_no_dma;
> +		}
> +	}
> +
> +	pm_runtime_enable(&pdev->dev);
> +	pm_runtime_set_active(&pdev->dev);
> +
> +	rc = dma_async_device_register(&mtkd->ddev);
> +	if (rc)
> +		goto rpm_disable;
> +
> +	platform_set_drvdata(pdev, mtkd);
> +
> +	/* Device-tree DMA controller registration */
> +	rc = of_dma_controller_register(np, of_dma_xlate_by_chan_id, mtkd);
> +	if (rc)
> +		goto dma_remove;
> +
> +	return rc;
> +
> +dma_remove:
> +	dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> +	pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> +	mtk_uart_apdma_free(mtkd);
> +	return rc;
> +}
> +
> +static int mtk_uart_apdma_remove(struct platform_device *pdev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = platform_get_drvdata(pdev);
> +
> +	if (pdev->dev.of_node)
> +		of_dma_controller_free(pdev->dev.of_node);
> +
> +	pm_runtime_disable(&pdev->dev);
> +	pm_runtime_put_noidle(&pdev->dev);
> +
> +	dma_async_device_unregister(&mtkd->ddev);
> +	mtk_uart_apdma_free(mtkd);
> +
> +	return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_uart_apdma_suspend(struct device *dev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	if (!pm_runtime_suspended(dev))
> +		clk_disable_unprepare(mtkd->clk);
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_resume(struct device *dev)
> +{
> +	int ret;
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	if (!pm_runtime_suspended(dev)) {
> +		ret = clk_prepare_enable(mtkd->clk);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}
> +#endif /* CONFIG_PM_SLEEP */
> +
> +#ifdef CONFIG_PM
> +static int mtk_uart_apdma_runtime_suspend(struct device *dev)
> +{
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	clk_disable_unprepare(mtkd->clk);
> +
> +	return 0;
> +}
> +
> +static int mtk_uart_apdma_runtime_resume(struct device *dev)
> +{
> +	int ret;
> +	struct mtk_uart_apdmadev *mtkd = dev_get_drvdata(dev);
> +
> +	ret = clk_prepare_enable(mtkd->clk);
> +	if (ret)
> +		return ret;
> +
> +	return 0;
> +}
> +#endif /* CONFIG_PM */
> +
> +static const struct dev_pm_ops mtk_uart_apdma_pm_ops = {
> +	SET_SYSTEM_SLEEP_PM_OPS(mtk_uart_apdma_suspend, mtk_uart_apdma_resume)
> +	SET_RUNTIME_PM_OPS(mtk_uart_apdma_runtime_suspend,
> +			   mtk_uart_apdma_runtime_resume, NULL)
> +};
> +
> +static struct platform_driver mtk_uart_apdma_driver = {
> +	.probe	= mtk_uart_apdma_probe,
> +	.remove	= mtk_uart_apdma_remove,
> +	.driver = {
> +		.name		= KBUILD_MODNAME,
> +		.pm		= &mtk_uart_apdma_pm_ops,
> +		.of_match_table = of_match_ptr(mtk_uart_apdma_match),
> +	},
> +};
> +
> +module_platform_driver(mtk_uart_apdma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> +MODULE_LICENSE("GPL v2");
> +
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v10 3/3] dt-bindings: dma: uart: rename binding
  2019-01-18  3:10 ` [PATCH v10 3/3] dt-bindings: dma: uart: rename binding Long Cheng
  2019-01-21 14:18   ` Rob Herring
@ 2019-02-14 10:13   ` Matthias Brugger
  1 sibling, 0 replies; 11+ messages in thread
From: Matthias Brugger @ 2019-02-14 10:13 UTC (permalink / raw)
  To: Long Cheng, Vinod Koul, Randy Dunlap, Rob Herring, Mark Rutland,
	Ryder Lee, Sean Wang, Nicolas Boichat
  Cc: Dan Williams, Greg Kroah-Hartman, Jiri Slaby, Sean Wang,
	dmaengine, devicetree, linux-arm-kernel, linux-mediatek,
	linux-kernel, linux-serial, srv_heupstream, Yingjoe Chen,
	YT Shen, Zhenbao Liu



On 18/01/2019 04:10, Long Cheng wrote:
> The filename matches mtk-uart-apdma.c.
> So using "mtk-uart-apdma.txt" should be better.
> 
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 --------------------
>  .../devicetree/bindings/dma/mtk-uart-apdma.txt     |   33 ++++++++++++++++++++
>  2 files changed, 33 insertions(+), 33 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
>  create mode 100644 Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
> 
> diff --git a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
> deleted file mode 100644
> index 3fe0961..0000000
> --- a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
> +++ /dev/null
> @@ -1,33 +0,0 @@
> -* Mediatek UART APDMA Controller
> -
> -Required properties:
> -- compatible should contain:
> -  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
> -  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
> -
> -- reg: The base address of the APDMA register bank.
> -
> -- interrupts: A single interrupt specifier.

One interrupt per dma-request, or 8 if no dma-requests property is present.

> -
> -- clocks : Must contain an entry for each entry in clock-names.
> -  See ../clocks/clock-bindings.txt for details.
> -- clock-names: The APDMA clock for register accesses
> -
> -Examples:
> -
> -	apdma: dma-controller@11000380 {
> -		compatible = "mediatek,mt2712-uart-dma";
> -		reg = <0 0x11000380 0 0x400>;
> -		interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
> -			     <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
> -		clocks = <&pericfg CLK_PERI_AP_DMA>;
> -		clock-names = "apdma";
> -		#dma-cells = <1>;
> -	};
> -
> diff --git a/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
> new file mode 100644
> index 0000000..3fe0961
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/mtk-uart-apdma.txt
> @@ -0,0 +1,33 @@
> +* Mediatek UART APDMA Controller
> +
> +Required properties:
> +- compatible should contain:
> +  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
> +  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
> +
> +- reg: The base address of the APDMA register bank.
> +

There is one address needed for every dma-request, or 8 if the dma-requests
property is not present. As already mentioned in the driver review, the
dma-requests property is missing.

> +- interrupts: A single interrupt specifier.
> +
> +- clocks : Must contain an entry for each entry in clock-names.
> +  See ../clocks/clock-bindings.txt for details.
> +- clock-names: The APDMA clock for register accesses
> +

The optional property dma-33bits is missing (should it be mtk,dma-33bits?).

> +Examples:
> +
> +	apdma: dma-controller@11000380 {
> +		compatible = "mediatek,mt2712-uart-dma";
> +		reg = <0 0x11000380 0 0x400>;

We are missing the io mem areas here, right?

Regards,
Matthias

> +		interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
> +			     <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
> +		clocks = <&pericfg CLK_PERI_AP_DMA>;
> +		clock-names = "apdma";
> +		#dma-cells = <1>;
> +	};
> +
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support
  2019-02-14  2:44     ` Long Cheng
@ 2019-02-19 12:14       ` Vinod Koul
  0 siblings, 0 replies; 11+ messages in thread
From: Vinod Koul @ 2019-02-19 12:14 UTC (permalink / raw)
  To: Long Cheng
  Cc: Randy Dunlap, Rob Herring, Mark Rutland, Ryder Lee, Sean Wang,
	Nicolas Boichat, Matthias Brugger, Dan Williams,
	Greg Kroah-Hartman, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen, Zhenbao Liu

On 14-02-19, 10:44, Long Cheng wrote:
> On Mon, 2019-02-04 at 12:51 +0530, Vinod Koul wrote:

> > > +static int mtk_uart_apdma_device_pause(struct dma_chan *chan)
> > > +{
> > > +	/* just for check caps pass */
> > > +	return 0;
> > > +}
> > 
> > please remove, this is not a mandatory fn
> 
> Can't remove it. Before the mail, i had explained it. in 8250 uart
> framework, will check the function..

I think you should fix the 8250 uart code to handle a driver not having
pause support. Keeping a dummy pause is just wrong and sets the wrong
expectations for the uart.
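
For example, something along these lines on the 8250 side (untested sketch;
it assumes the channel in question is uart_8250_dma's rxchan, and the
terminate fallback is only illustrative):

	struct dma_slave_caps caps;

	/* Only issue a pause when the controller actually advertises it */
	if (!dma_get_slave_caps(dma->rxchan, &caps) && caps.cmd_pause)
		dmaengine_pause(dma->rxchan);
	else
		dmaengine_terminate_async(dma->rxchan);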

-- 
~Vinod

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2019-02-19 12:14 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-18  3:10 [PATCH v10 0/3] add uart DMA function Long Cheng
2019-01-18  3:10 ` [PATCH v10 1/3] dmaengine: 8250_mtk_dma: add MediaTek uart DMA support Long Cheng
2019-01-28  3:25   ` Long Cheng
2019-02-04  7:21   ` Vinod Koul
2019-02-14  2:44     ` Long Cheng
2019-02-19 12:14       ` Vinod Koul
2019-02-14 10:08   ` Matthias Brugger
2019-01-18  3:10 ` [PATCH v10 2/3] arm: dts: mt2712: add uart APDMA to device tree Long Cheng
2019-01-18  3:10 ` [PATCH v10 3/3] dt-bindings: dma: uart: rename binding Long Cheng
2019-01-21 14:18   ` Rob Herring
2019-02-14 10:13   ` Matthias Brugger

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).