* [PATCH v2 0/4] add uart DMA function
@ 2018-12-05  8:42 ` Long Cheng
From: Long Cheng @ 2018-12-05  8:42 UTC
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

On MediaTek SoCs, the UART supports DMA transfers.
Based on the DMA engine framework, we add DMA support for the UART and put the code under drivers/dma.

This series contains the devicetree binding document, a Kconfig option to enable the function,
device tree changes adding the interrupt and DMA device nodes, and the UART DMA driver code.

Changes compared to v1:
- main revised file: 8250_mtk_dma.c
-- rename parameters to follow standard conventions
-- remove atomic operations

Long Cheng (4):
  dt-bindings: dma: uart: add uart dma bindings
  dmaengine: mtk_uart_dma: add Mediatek uart DMA support
  serial: 8250-mtk: add uart DMA support
  arm: dts: mt2701: add uart APDMA to device tree

 .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 +
 arch/arm64/boot/dts/mediatek/mt2712e.dtsi          |   50 ++
 drivers/dma/mediatek/8250_mtk_dma.c                |  894 ++++++++++++++++++++
 drivers/dma/mediatek/Kconfig                       |   11 +
 drivers/dma/mediatek/Makefile                      |    1 +
 drivers/tty/serial/8250/8250_mtk.c                 |  210 ++++-
 6 files changed, 1198 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
 create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
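
For context, the serial driver side (patch 3/4) consumes these channels through
the standard dmaengine slave API. The sketch below is only a minimal
illustration of that request/configure/prep/issue sequence; the "tx" channel
name, the buffer, and the omitted device address are assumptions, not code
taken from this series.

	/* Hypothetical dmaengine client sketch (not from this series). */
	#include <linux/dmaengine.h>
	#include <linux/scatterlist.h>
	#include <linux/err.h>

	static int uart_dma_tx_example(struct device *dev, dma_addr_t buf,
				       size_t len)
	{
		struct dma_slave_config cfg = {
			.direction = DMA_MEM_TO_DEV,
			.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE,
			/* a real client also sets .dst_addr */
		};
		struct dma_async_tx_descriptor *desc;
		struct dma_chan *chan;
		struct scatterlist sg;

		chan = dma_request_chan(dev, "tx");	/* assumes dma-names = "tx" */
		if (IS_ERR(chan))
			return PTR_ERR(chan);

		dmaengine_slave_config(chan, &cfg);

		sg_init_table(&sg, 1);
		sg_dma_address(&sg) = buf;
		sg_dma_len(&sg) = len;

		desc = dmaengine_prep_slave_sg(chan, &sg, 1, DMA_MEM_TO_DEV,
					       DMA_PREP_INTERRUPT);
		if (!desc) {
			dma_release_channel(chan);
			return -EINVAL;
		}

		dmaengine_submit(desc);
		dma_async_issue_pending(chan);
		return 0;
	}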


* [v2,1/4] dt-bindings: dma: uart: add uart dma bindings
@ 2018-12-05  8:42 ` Long Cheng
From: Long Cheng @ 2018-12-05  8:42 UTC
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

Add the MediaTek UART APDMA devicetree bindings.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
Reviewed-by: Rob Herring <robh@kernel.org>
---
 .../devicetree/bindings/dma/8250_mtk_dma.txt       |   33 ++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/8250_mtk_dma.txt

diff --git a/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
new file mode 100644
index 0000000..3fe0961
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/8250_mtk_dma.txt
@@ -0,0 +1,33 @@
+* Mediatek UART APDMA Controller
+
+Required properties:
+- compatible: Should contain:
+  * "mediatek,mt2712-uart-dma" for MT2712 compatible APDMA
+  * "mediatek,mt6577-uart-dma" for MT6577 and all of the above
+
+- reg: The base address of the APDMA register bank.
+
+- interrupts: An interrupt specifier for each DMA channel.
+
+- clocks: Must contain an entry for each entry in clock-names.
+  See ../clocks/clock-bindings.txt for details.
+- clock-names: The APDMA clock for register accesses.
+
+Examples:
+
+	apdma: dma-controller@11000380 {
+		compatible = "mediatek,mt2712-uart-dma";
+		reg = <0 0x11000380 0 0x400>;
+		interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 64 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 65 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 66 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 67 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 68 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 69 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 70 IRQ_TYPE_LEVEL_LOW>;
+		clocks = <&pericfg CLK_PERI_AP_DMA>;
+		clock-names = "apdma";
+		#dma-cells = <1>;
+	};
+
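
A note for consumers: with #dma-cells = <1>, a client selects a channel with a
single cell. The binding text above does not show a client node, so the UART
example below is only an illustration; the channel indices and the "tx"/"rx"
dma-names are assumptions, not taken from this series.

	uart0: serial@11002000 {
		/* ... other UART properties ... */
		dmas = <&apdma 0>, <&apdma 1>;
		dma-names = "tx", "rx";
	};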

* [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05  8:42 ` Long Cheng
From: Long Cheng @ 2018-12-05  8:42 UTC
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

Add the MediaTek 8250 UART DMA driver, based on the DMA engine framework.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
 drivers/dma/mediatek/Kconfig        |   11 +
 drivers/dma/mediatek/Makefile       |    1 +
 3 files changed, 906 insertions(+)
 create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c

diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
new file mode 100644
index 0000000..3454679
--- /dev/null
+++ b/drivers/dma/mediatek/8250_mtk_dma.c
@@ -0,0 +1,894 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Mediatek 8250 DMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <long.cheng@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/pm_runtime.h>
+#include <linux/iopoll.h>
+
+#include "../virt-dma.h"
+
+#define MTK_APDMA_DEFAULT_REQUESTS	127
+#define MTK_APDMA_CHANNELS		(CONFIG_SERIAL_8250_NR_UARTS * 2)
+
+#define VFF_EN_B		BIT(0)
+#define VFF_STOP_B		BIT(0)
+#define VFF_FLUSH_B		BIT(0)
+#define VFF_4G_SUPPORT_B	BIT(0)
+#define VFF_RX_INT_EN0_B	BIT(0)	/* rx valid size >= vff threshold */
+#define VFF_RX_INT_EN1_B	BIT(1)
+#define VFF_TX_INT_EN_B		BIT(0)	/* tx left size >= vff threshold */
+#define VFF_WARM_RST_B		BIT(0)
+#define VFF_RX_INT_FLAG_CLR_B	(BIT(0) | BIT(1))
+#define VFF_TX_INT_FLAG_CLR_B	0
+#define VFF_STOP_CLR_B		0
+#define VFF_FLUSH_CLR_B		0
+#define VFF_INT_EN_CLR_B	0
+#define VFF_4G_SUPPORT_CLR_B	0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n)		((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n)		((n) * 3 / 4)
+
+#define MTK_DMA_RING_SIZE	0xffffU
+/* invert this bit each time the ring head wraps around */
+#define MTK_DMA_RING_WRAP	0x10000U
+
+#define VFF_INT_FLAG		0x00
+#define VFF_INT_EN		0x04
+#define VFF_EN			0x08
+#define VFF_RST			0x0c
+#define VFF_STOP		0x10
+#define VFF_FLUSH		0x14
+#define VFF_ADDR		0x1c
+#define VFF_LEN			0x24
+#define VFF_THRE		0x28
+#define VFF_WPT			0x2c
+#define VFF_RPT			0x30
+/* TX: the buffer size HW can read. RX: the buffer size SW can read. */
+#define VFF_VALID_SIZE		0x3c
+/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
+#define VFF_LEFT_SIZE		0x40
+#define VFF_DEBUG_STATUS	0x50
+#define VFF_4G_SUPPORT		0x54
+
+struct mtk_dmadev {
+	struct dma_device ddev;
+	void __iomem *mem_base[MTK_APDMA_CHANNELS];
+	spinlock_t lock; /* dma dev lock */
+	struct tasklet_struct task;
+	struct list_head pending;
+	struct clk *clk;
+	unsigned int dma_requests;
+	bool support_33bits;
+	unsigned int dma_irq[MTK_APDMA_CHANNELS];
+	struct mtk_chan *ch[MTK_APDMA_CHANNELS];
+};
+
+struct mtk_chan {
+	struct virt_dma_chan vc;
+	struct list_head node;
+	struct dma_slave_config	cfg;
+	void __iomem *base;
+	struct mtk_dma_desc *desc;
+
+	bool stop;
+	bool requested;
+
+	unsigned int dma_sig;
+	unsigned int dma_ch;
+	unsigned int sgidx;
+	unsigned int remain_size;
+	unsigned int rx_ptr;
+};
+
+struct mtk_dma_sg {
+	dma_addr_t addr;
+	unsigned int en;		/* number of elements (24-bit) */
+	unsigned int fn;		/* number of frames (16-bit) */
+};
+
+struct mtk_dma_desc {
+	struct virt_dma_desc vd;
+	enum dma_transfer_direction dir;
+
+	unsigned int sglen;
+	struct mtk_dma_sg sg[0];
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
+static struct of_dma_filter_info mtk_dma_info = {
+	.filter_fn = mtk_dma_filter_fn,
+};
+
+static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct mtk_dmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
+{
+	return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_dma_desc *to_mtk_dma_desc
+	(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct mtk_dma_desc, vd.tx);
+}
+
+static void mtk_dma_chan_write(struct mtk_chan *c,
+			       unsigned int reg, unsigned int val)
+{
+	writel(val, c->base + reg);
+}
+
+static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
+{
+	return readl(c->base + reg);
+}
+
+static void mtk_dma_desc_free(struct virt_dma_desc *vd)
+{
+	struct dma_chan *chan = vd->tx.chan;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	kfree(c->desc);
+	c->desc = NULL;
+}
+
+static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
+{
+	int ret;
+
+	ret = clk_prepare_enable(mtkd->clk);
+	if (ret) {
+		dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
+{
+	clk_disable_unprepare(mtkd->clk);
+}
+
+static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
+				     struct virt_dma_chan *vc)
+{
+	struct virt_dma_desc *vd;
+
+	if (list_empty(&vc->desc_issued) == 0) {
+		list_for_each_entry(vd, &vc->desc_issued, node) {
+			if (cookie == vd->tx.cookie) {
+				INIT_LIST_HEAD(&vc->desc_issued);
+				break;
+			}
+		}
+	}
+}
+
+static void mtk_dma_tx_flush(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
+		mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_dma_tx_write(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned int txcount = c->remain_size;
+	unsigned int len, send, left, wpt, wrap;
+
+	len = mtk_dma_chan_read(c, VFF_LEN);
+
+	while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
+		if (c->remain_size == 0U)
+			break;
+		send = min(left, c->remain_size);
+		wpt = mtk_dma_chan_read(c, VFF_WPT);
+		wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
+
+		if ((wpt & (len - 1U)) + send < len)
+			mtk_dma_chan_write(c, VFF_WPT, wpt + send);
+		else
+			mtk_dma_chan_write(c, VFF_WPT,
+					   ((wpt + send) & (len - 1U))
+					   | wrap);
+
+		c->remain_size -= send;
+	}
+
+	if (txcount != c->remain_size) {
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+		mtk_dma_tx_flush(chan);
+	}
+}
+
+static void mtk_dma_start_tx(struct mtk_chan *c)
+{
+	if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+	else
+		mtk_dma_tx_write(&c->vc.chan);
+
+	c->stop = false;
+}
+
+static void mtk_dma_get_rx_size(struct mtk_chan *c)
+{
+	unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
+	unsigned int rdptr, wrptr, wrreg, rdreg, count;
+
+	rdreg = mtk_dma_chan_read(c, VFF_RPT);
+	wrreg = mtk_dma_chan_read(c, VFF_WPT);
+	rdptr = rdreg & MTK_DMA_RING_SIZE;
+	wrptr = wrreg & MTK_DMA_RING_SIZE;
+	count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
+			(wrptr + rx_size - rdptr) : (wrptr - rdptr);
+
+	c->remain_size = count;
+	c->rx_ptr = rdptr;
+
+	mtk_dma_chan_write(c, VFF_RPT, wrreg);
+}
+
+static void mtk_dma_start_rx(struct mtk_chan *c)
+{
+	struct dma_chan *chan = &c->vc.chan;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_dma_desc *d = c->desc;
+
+	if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
+		return;
+
+	if (d && d->vd.tx.cookie != 0) {
+		mtk_dma_get_rx_size(c);
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	} else {
+		spin_lock(&mtkd->lock);
+		if (list_empty(&mtkd->pending))
+			list_add_tail(&c->node, &mtkd->pending);
+		spin_unlock(&mtkd->lock);
+		tasklet_schedule(&mtkd->task);
+	}
+}
+
+static void mtk_dma_reset(struct mtk_chan *c)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_ADDR, 0);
+	mtk_dma_chan_write(c, VFF_THRE, 0);
+	mtk_dma_chan_write(c, VFF_LEN, 0);
+	mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
+
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret) {
+		dev_err(c->vc.chan.device->dev,
+				"dma reset: fail, timeout\n");
+		return;
+	}
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_RPT, 0);
+	else if (c->cfg.direction == DMA_MEM_TO_DEV)
+		mtk_dma_chan_write(c, VFF_WPT, 0);
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+}
+
+static void mtk_dma_stop(struct mtk_chan *c)
+{
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
+	/* Wait for flush */
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_FLUSH,
+				 status,
+				 (status & VFF_FLUSH_B) != VFF_FLUSH_B,
+				 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling FLUSH fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	/* set stop to 1 -> wait until en is 0 -> set stop to 0 */
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
+	mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+	else
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	c->stop = true;
+}
+
+/*
+ * This callback schedules all pending channels. We could be more
+ * clever here by postponing allocation of the real DMA channels to
+ * this point, and freeing them when our virtual channel becomes idle.
+ *
+ * We would then need to deal with 'all channels in-use'
+ */
+static void mtk_dma_sched(unsigned long data)
+{
+	struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
+	struct virt_dma_desc *vd;
+	struct mtk_chan *c;
+	dma_cookie_t cookie;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irq(&mtkd->lock);
+	list_splice_tail_init(&mtkd->pending, &head);
+	spin_unlock_irq(&mtkd->lock);
+
+	if (!list_empty(&head)) {
+		c = list_first_entry(&head, struct mtk_chan, node);
+		cookie = c->vc.chan.cookie;
+
+		spin_lock_irqsave(&c->vc.lock, flags);
+		if (c->cfg.direction == DMA_DEV_TO_MEM) {
+			list_del_init(&c->node);
+			mtk_dma_start_rx(c);
+		} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			list_del_init(&c->node);
+			mtk_dma_start_tx(c);
+		}
+		spin_unlock_irqrestore(&c->vc.lock, flags);
+	}
+}
+
+static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	int ret = -EBUSY;
+
+	pm_runtime_get_sync(mtkd->ddev.dev);
+
+	if (!mtkd->ch[c->dma_ch]) {
+		c->base = mtkd->mem_base[c->dma_ch];
+		mtkd->ch[c->dma_ch] = c;
+		ret = 1;
+	}
+	c->requested = false;
+	mtk_dma_reset(c);
+
+	return ret;
+}
+
+static void mtk_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (c->requested) {
+		c->requested = false;
+		free_irq(mtkd->dma_irq[c->dma_ch], chan);
+	}
+
+	tasklet_kill(&mtkd->task);
+	tasklet_kill(&c->vc.task);
+
+	c->base = NULL;
+	mtkd->ch[c->dma_ch] = NULL;
+	vchan_free_chan_resources(&c->vc);
+
+	dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
+	c->dma_sig = 0;
+
+	pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	if (!txstate)
+		return DMA_ERROR;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (ret == DMA_IN_PROGRESS) {
+		c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
+		dma_set_residue(txstate, c->rx_ptr);
+	} else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
+		dma_set_residue(txstate, c->remain_size);
+	} else {
+		dma_set_residue(txstate, 0);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
+	(struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sglen,	enum dma_transfer_direction dir,
+	unsigned long tx_flags, void *context)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct scatterlist *sgent;
+	struct mtk_dma_desc *d;
+	struct mtk_dma_sg *sg;
+	unsigned int size, i, j, en;
+
+	en = 1;
+
+	if ((dir != DMA_DEV_TO_MEM) &&
+		(dir != DMA_MEM_TO_DEV)) {
+		dev_err(chan->device->dev, "bad direction\n");
+		return NULL;
+	}
+
+	/* Now allocate and setup the descriptor. */
+	d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	d->dir = dir;
+
+	j = 0;
+	for_each_sg(sgl, sgent, sglen, i) {
+		d->sg[j].addr = sg_dma_address(sgent);
+		d->sg[j].en = en;
+		d->sg[j].fn = sg_dma_len(sgent) / en;
+		j++;
+	}
+
+	d->sglen = j;
+
+	if (dir == DMA_MEM_TO_DEV) {
+		for (size = i = 0; i < d->sglen; i++) {
+			sg = &d->sg[i];
+			size += sg->en * sg->fn;
+		}
+		c->remain_size = size;
+	}
+
+	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_dma_issue_pending(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct virt_dma_desc *vd;
+	struct mtk_dmadev *mtkd;
+	dma_cookie_t cookie;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->cfg.direction == DMA_DEV_TO_MEM) {
+		cookie = c->vc.chan.cookie;
+		mtkd = to_mtk_dma_dev(chan->device);
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+		}
+	} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+		cookie = c->vc.chan.cookie;
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			mtk_dma_start_tx(c);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+
+	mtk_dma_start_rx(c);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dma_desc *d = c->desc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->remain_size != 0U) {
+		list_add_tail(&c->node, &mtkd->pending);
+		tasklet_schedule(&mtkd->task);
+	} else {
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_dma_slave_config(struct dma_chan *chan,
+				struct dma_slave_config *cfg)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	int ret;
+
+	c->cfg = *cfg;
+
+	if (cfg->direction == DMA_DEV_TO_MEM) {
+		unsigned int rx_len = cfg->src_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
+		mtk_dma_chan_write(c, VFF_LEN, rx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
+		mtk_dma_chan_write(c,
+				   VFF_INT_EN, VFF_RX_INT_EN0_B
+				   | VFF_RX_INT_EN1_B);
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_rx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	} else if (cfg->direction == DMA_MEM_TO_DEV)	{
+		unsigned int tx_len = cfg->dst_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
+		mtk_dma_chan_write(c, VFF_LEN, tx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_tx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+	if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
+		dev_err(chan->device->dev,
+			"config dma dir[%d] fail\n", cfg->direction);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_terminate_all(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	list_del_init(&c->node);
+	mtk_dma_stop(c);
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int mtk_dma_device_pause(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static int mtk_dma_device_resume(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static void mtk_dma_free(struct mtk_dmadev *mtkd)
+{
+	tasklet_kill(&mtkd->task);
+	while (list_empty(&mtkd->ddev.channels) == 0) {
+		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+			struct mtk_chan, vc.chan.device_node);
+
+		list_del(&c->vc.chan.device_node);
+		tasklet_kill(&c->vc.task);
+		devm_kfree(mtkd->ddev.dev, c);
+	}
+}
+
+static const struct of_device_id mtk_uart_dma_match[] = {
+	{ .compatible = "mediatek,mt6577-uart-dma", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
+
+static int mtk_apdma_probe(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd;
+	struct resource *res;
+	struct mtk_chan *c;
+	unsigned int i;
+	int rc;
+
+	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+	if (!mtkd)
+		return -ENOMEM;
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			return -ENODEV;
+		mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
+		if (IS_ERR(mtkd->mem_base[i]))
+			return PTR_ERR(mtkd->mem_base[i]);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+		if ((int)mtkd->dma_irq[i] < 0) {
+			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+			return -EINVAL;
+		}
+	}
+
+	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(mtkd->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(mtkd->clk);
+	}
+
+	if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
+		dev_info(&pdev->dev, "Support dma 33bits\n");
+		mtkd->support_33bits = true;
+	}
+
+	if (mtkd->support_33bits)
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
+	else
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+	mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
+	mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
+	mtkd->ddev.device_tx_status = mtk_dma_tx_status;
+	mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
+	mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
+	mtkd->ddev.device_config = mtk_dma_slave_config;
+	mtkd->ddev.device_pause = mtk_dma_device_pause;
+	mtkd->ddev.device_resume = mtk_dma_device_resume;
+	mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
+	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	mtkd->ddev.dev = &pdev->dev;
+	INIT_LIST_HEAD(&mtkd->ddev.channels);
+	INIT_LIST_HEAD(&mtkd->pending);
+
+	spin_lock_init(&mtkd->lock);
+	tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
+
+	mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
+	if (of_property_read_u32(pdev->dev.of_node,
+				 "dma-requests", &mtkd->dma_requests)) {
+		dev_info(&pdev->dev,
+			 "Missing dma-requests property, using %u.\n",
+			 MTK_APDMA_DEFAULT_REQUESTS);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+		if (!c)
+			goto err_no_dma;
+
+		c->vc.desc_free = mtk_dma_desc_free;
+		vchan_init(&c->vc, &mtkd->ddev);
+		INIT_LIST_HEAD(&c->node);
+	}
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+
+	rc = dma_async_device_register(&mtkd->ddev);
+	if (rc)
+		goto rpm_disable;
+
+	platform_set_drvdata(pdev, mtkd);
+
+	if (pdev->dev.of_node) {
+		mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
+
+		/* Device-tree DMA controller registration */
+		rc = of_dma_controller_register(pdev->dev.of_node,
+						of_dma_simple_xlate,
+						&mtk_dma_info);
+		if (rc)
+			goto dma_remove;
+	}
+
+	return rc;
+
+dma_remove:
+	dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+err_no_dma:
+	mtk_dma_free(mtkd);
+	return rc;
+}
+
+static int mtk_apdma_remove(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
+
+	if (pdev->dev.of_node)
+		of_dma_controller_free(pdev->dev.of_node);
+
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+
+	dma_async_device_unregister(&mtkd->ddev);
+
+	mtk_dma_free(mtkd);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_dma_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev))
+		mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev)) {
+		ret = mtk_dma_clk_enable(mtkd);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_runtime_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_runtime_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	ret = mtk_dma_clk_enable(mtkd);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops mtk_dma_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
+	SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
+			   mtk_dma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_dma_driver = {
+	.probe	= mtk_apdma_probe,
+	.remove	= mtk_apdma_remove,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.pm		= &mtk_dma_pm_ops,
+		.of_match_table = of_match_ptr(mtk_uart_dma_match),
+	},
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
+{
+	if (chan->device->dev->driver == &mtk_dma_driver.driver) {
+		struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+		struct mtk_chan *c = to_mtk_dma_chan(chan);
+		unsigned int req = *(unsigned int *)param;
+
+		if (req <= mtkd->dma_requests) {
+			c->dma_sig = req;
+			c->dma_ch = req;
+			return true;
+		}
+	}
+	return false;
+}
+
+module_platform_driver(mtk_dma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 27bac0b..bef436e 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -1,4 +1,15 @@
 
+config DMA_MTK_UART
+	tristate "MediaTek SoCs APDMA support for UART"
+	depends on OF
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the UART DMA engine found on MediaTek SoCs.
+	  Enable this option if the MediaTek 8250 UART is enabled
+	  and you want it to use DMA. This DMA engine can only be
+	  used with MediaTek SoCs.
+
 config MTK_HSDMA
 	tristate "MediaTek High-Speed DMA controller support"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 6e778f8..2f2efd9 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1 +1,2 @@
+obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
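
A note on the ring arithmetic in mtk_dma_get_rx_size() above: VFF_RPT and
VFF_WPT each hold a 16-bit offset plus a wrap bit (MTK_DMA_RING_WRAP) that is
inverted on every wrap-around, so the occupied byte count can be derived from
the two pointers alone. A standalone sketch of the same computation:

	/* Standalone illustration of the VFF ring occupancy computation. */
	#define RING_SIZE	0xffffU		/* offset mask */
	#define RING_WRAP	0x10000U	/* toggled on each wrap-around */

	static unsigned int vff_rx_count(unsigned int rdreg, unsigned int wrreg,
					 unsigned int len)
	{
		unsigned int rdptr = rdreg & RING_SIZE;
		unsigned int wrptr = wrreg & RING_SIZE;

		/*
		 * Differing wrap bits mean the write pointer has wrapped once
		 * more than the read pointer, so the data spans the buffer end.
		 */
		if ((rdreg ^ wrreg) & RING_WRAP)
			return wrptr + len - rdptr;
		return wrptr - rdptr;
	}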

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05  8:42 ` Long Cheng
  0 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-05  8:42 UTC (permalink / raw)
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

In DMA engine framework, add 8250 mtk dma to support it.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
 drivers/dma/mediatek/Kconfig        |   11 +
 drivers/dma/mediatek/Makefile       |    1 +
 3 files changed, 906 insertions(+)
 create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c

diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
new file mode 100644
index 0000000..3454679
--- /dev/null
+++ b/drivers/dma/mediatek/8250_mtk_dma.c
@@ -0,0 +1,894 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Mediatek 8250 DMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <long.cheng@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/pm_runtime.h>
+#include <linux/iopoll.h>
+
+#include "../virt-dma.h"
+
+#define MTK_APDMA_DEFAULT_REQUESTS	127
+#define MTK_APDMA_CHANNELS		(CONFIG_SERIAL_8250_NR_UARTS * 2)
+
+#define VFF_EN_B		BIT(0)
+#define VFF_STOP_B		BIT(0)
+#define VFF_FLUSH_B		BIT(0)
+#define VFF_4G_SUPPORT_B	BIT(0)
+#define VFF_RX_INT_EN0_B	BIT(0)	/*rx valid size >=  vff thre*/
+#define VFF_RX_INT_EN1_B	BIT(1)
+#define VFF_TX_INT_EN_B		BIT(0)	/*tx left size >= vff thre*/
+#define VFF_WARM_RST_B		BIT(0)
+#define VFF_RX_INT_FLAG_CLR_B	(BIT(0) | BIT(1))
+#define VFF_TX_INT_FLAG_CLR_B	0
+#define VFF_STOP_CLR_B		0
+#define VFF_FLUSH_CLR_B		0
+#define VFF_INT_EN_CLR_B	0
+#define VFF_4G_SUPPORT_CLR_B	0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n)		((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n)		((n) * 3 / 4)
+
+#define MTK_DMA_RING_SIZE	0xffffU
+/* invert this bit when wrap ring head again*/
+#define MTK_DMA_RING_WRAP	0x10000U
+
+#define VFF_INT_FLAG		0x00
+#define VFF_INT_EN		0x04
+#define VFF_EN			0x08
+#define VFF_RST			0x0c
+#define VFF_STOP		0x10
+#define VFF_FLUSH		0x14
+#define VFF_ADDR		0x1c
+#define VFF_LEN			0x24
+#define VFF_THRE		0x28
+#define VFF_WPT			0x2c
+#define VFF_RPT			0x30
+/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
+#define VFF_VALID_SIZE		0x3c
+/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
+#define VFF_LEFT_SIZE		0x40
+#define VFF_DEBUG_STATUS	0x50
+#define VFF_4G_SUPPORT		0x54
+
+struct mtk_dmadev {
+	struct dma_device ddev;
+	void __iomem *mem_base[MTK_APDMA_CHANNELS];
+	spinlock_t lock; /* dma dev lock */
+	struct tasklet_struct task;
+	struct list_head pending;
+	struct clk *clk;
+	unsigned int dma_requests;
+	bool support_33bits;
+	unsigned int dma_irq[MTK_APDMA_CHANNELS];
+	struct mtk_chan *ch[MTK_APDMA_CHANNELS];
+};
+
+struct mtk_chan {
+	struct virt_dma_chan vc;
+	struct list_head node;
+	struct dma_slave_config	cfg;
+	void __iomem *base;
+	struct mtk_dma_desc *desc;
+
+	bool stop;
+	bool requested;
+
+	unsigned int dma_sig;
+	unsigned int dma_ch;
+	unsigned int sgidx;
+	unsigned int remain_size;
+	unsigned int rx_ptr;
+};
+
+struct mtk_dma_sg {
+	dma_addr_t addr;
+	unsigned int en;		/* number of elements (24-bit) */
+	unsigned int fn;		/* number of frames (16-bit) */
+};
+
+struct mtk_dma_desc {
+	struct virt_dma_desc vd;
+	enum dma_transfer_direction dir;
+
+	unsigned int sglen;
+	struct mtk_dma_sg sg[0];
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
+static struct of_dma_filter_info mtk_dma_info = {
+	.filter_fn = mtk_dma_filter_fn,
+};
+
+static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct mtk_dmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
+{
+	return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_dma_desc *to_mtk_dma_desc
+	(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct mtk_dma_desc, vd.tx);
+}
+
+static void mtk_dma_chan_write(struct mtk_chan *c,
+			       unsigned int reg, unsigned int val)
+{
+	writel(val, c->base + reg);
+}
+
+static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
+{
+	return readl(c->base + reg);
+}
+
+static void mtk_dma_desc_free(struct virt_dma_desc *vd)
+{
+	struct dma_chan *chan = vd->tx.chan;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	kfree(c->desc);
+	c->desc = NULL;
+}
+
+static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
+{
+	int ret;
+
+	ret = clk_prepare_enable(mtkd->clk);
+	if (ret) {
+		dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
+{
+	clk_disable_unprepare(mtkd->clk);
+}
+
+static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
+				     struct virt_dma_chan *vc)
+{
+	struct virt_dma_desc *vd;
+
+	if (list_empty(&vc->desc_issued) == 0) {
+		list_for_each_entry(vd, &vc->desc_issued, node) {
+			if (cookie == vd->tx.cookie) {
+				INIT_LIST_HEAD(&vc->desc_issued);
+				break;
+			}
+		}
+	}
+}
+
+static void mtk_dma_tx_flush(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
+		mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_dma_tx_write(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned int txcount = c->remain_size;
+	unsigned int len, send, left, wpt, wrap;
+
+	len = mtk_dma_chan_read(c, VFF_LEN);
+
+	while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
+		if (c->remain_size == 0U)
+			break;
+		send = min(left, c->remain_size);
+		wpt = mtk_dma_chan_read(c, VFF_WPT);
+		wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
+
+		if ((wpt & (len - 1U)) + send < len)
+			mtk_dma_chan_write(c, VFF_WPT, wpt + send);
+		else
+			mtk_dma_chan_write(c, VFF_WPT,
+					   ((wpt + send) & (len - 1U))
+					   | wrap);
+
+		c->remain_size -= send;
+	}
+
+	if (txcount != c->remain_size) {
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+		mtk_dma_tx_flush(chan);
+	}
+}
+
+static void mtk_dma_start_tx(struct mtk_chan *c)
+{
+	if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+	else
+		mtk_dma_tx_write(&c->vc.chan);
+
+	c->stop = false;
+}
+
+static void mtk_dma_get_rx_size(struct mtk_chan *c)
+{
+	unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
+	unsigned int rdptr, wrptr, wrreg, rdreg, count;
+
+	rdreg = mtk_dma_chan_read(c, VFF_RPT);
+	wrreg = mtk_dma_chan_read(c, VFF_WPT);
+	rdptr = rdreg & MTK_DMA_RING_SIZE;
+	wrptr = wrreg & MTK_DMA_RING_SIZE;
+	count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
+			(wrptr + rx_size - rdptr) : (wrptr - rdptr);
+
+	c->remain_size = count;
+	c->rx_ptr = rdptr;
+
+	mtk_dma_chan_write(c, VFF_RPT, wrreg);
+}
+
+static void mtk_dma_start_rx(struct mtk_chan *c)
+{
+	struct dma_chan *chan = &c->vc.chan;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_dma_desc *d = c->desc;
+
+	if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
+		return;
+
+	if (d && d->vd.tx.cookie != 0) {
+		mtk_dma_get_rx_size(c);
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	} else {
+		spin_lock(&mtkd->lock);
+		if (list_empty(&mtkd->pending))
+			list_add_tail(&c->node, &mtkd->pending);
+		spin_unlock(&mtkd->lock);
+		tasklet_schedule(&mtkd->task);
+	}
+}
+
+static void mtk_dma_reset(struct mtk_chan *c)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_ADDR, 0);
+	mtk_dma_chan_write(c, VFF_THRE, 0);
+	mtk_dma_chan_write(c, VFF_LEN, 0);
+	mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
+
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret) {
+		dev_err(c->vc.chan.device->dev,
+				"dma reset: fail, timeout\n");
+		return;
+	}
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_RPT, 0);
+	else if (c->cfg.direction == DMA_MEM_TO_DEV)
+		mtk_dma_chan_write(c, VFF_WPT, 0);
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+}
+
+static void mtk_dma_stop(struct mtk_chan *c)
+{
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
+	/* Wait for flush */
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_FLUSH,
+				 status,
+				 (status & VFF_FLUSH_B) != VFF_FLUSH_B,
+				 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling FLUSH fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	/*set stop as 1 -> wait until en is 0 -> set stop as 0*/
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
+	mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+	else
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	c->stop = true;
+}
+
+/*
+ * This callback schedules all pending channels. We could be more
+ * clever here by postponing allocation of the real DMA channels to
+ * this point, and freeing them when our virtual channel becomes idle.
+ *
+ * We would then need to deal with 'all channels in-use'
+ */
+static void mtk_dma_sched(unsigned long data)
+{
+	struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
+	struct virt_dma_desc *vd;
+	struct mtk_chan *c;
+	dma_cookie_t cookie;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irq(&mtkd->lock);
+	list_splice_tail_init(&mtkd->pending, &head);
+	spin_unlock_irq(&mtkd->lock);
+
+	if (!list_empty(&head)) {
+		c = list_first_entry(&head, struct mtk_chan, node);
+		cookie = c->vc.chan.cookie;
+
+		spin_lock_irqsave(&c->vc.lock, flags);
+		if (c->cfg.direction == DMA_DEV_TO_MEM) {
+			list_del_init(&c->node);
+			mtk_dma_start_rx(c);
+		} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			list_del_init(&c->node);
+			mtk_dma_start_tx(c);
+		}
+		spin_unlock_irqrestore(&c->vc.lock, flags);
+	}
+}
+
+static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	int ret = -EBUSY;
+
+	pm_runtime_get_sync(mtkd->ddev.dev);
+
+	if (!mtkd->ch[c->dma_ch]) {
+		c->base = mtkd->mem_base[c->dma_ch];
+		mtkd->ch[c->dma_ch] = c;
+		ret = 1;
+	}
+	c->requested = false;
+	mtk_dma_reset(c);
+
+	return ret;
+}
+
+static void mtk_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (c->requested) {
+		c->requested = false;
+		free_irq(mtkd->dma_irq[c->dma_ch], chan);
+	}
+
+	tasklet_kill(&mtkd->task);
+	tasklet_kill(&c->vc.task);
+
+	c->base = NULL;
+	mtkd->ch[c->dma_ch] = NULL;
+	vchan_free_chan_resources(&c->vc);
+
+	dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
+	c->dma_sig = 0;
+
+	pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	if (!txstate)
+		return DMA_ERROR;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (ret == DMA_IN_PROGRESS) {
+		c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
+		dma_set_residue(txstate, c->rx_ptr);
+	} else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
+		dma_set_residue(txstate, c->remain_size);
+	} else {
+		dma_set_residue(txstate, 0);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
+	(struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sglen,	enum dma_transfer_direction dir,
+	unsigned long tx_flags, void *context)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct scatterlist *sgent;
+	struct mtk_dma_desc *d;
+	struct mtk_dma_sg *sg;
+	unsigned int size, i, j, en;
+
+	en = 1;
+
+	if ((dir != DMA_DEV_TO_MEM) &&
+		(dir != DMA_MEM_TO_DEV)) {
+		dev_err(chan->device->dev, "bad direction\n");
+		return NULL;
+	}
+
+	/* Now allocate and setup the descriptor. */
+	d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	d->dir = dir;
+
+	j = 0;
+	for_each_sg(sgl, sgent, sglen, i) {
+		d->sg[j].addr = sg_dma_address(sgent);
+		d->sg[j].en = en;
+		d->sg[j].fn = sg_dma_len(sgent) / en;
+		j++;
+	}
+
+	d->sglen = j;
+
+	if (dir == DMA_MEM_TO_DEV) {
+		for (size = i = 0; i < d->sglen; i++) {
+			sg = &d->sg[i];
+			size += sg->en * sg->fn;
+		}
+		c->remain_size = size;
+	}
+
+	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_dma_issue_pending(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct virt_dma_desc *vd;
+	struct mtk_dmadev *mtkd;
+	dma_cookie_t cookie;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->cfg.direction == DMA_DEV_TO_MEM) {
+		cookie = c->vc.chan.cookie;
+		mtkd = to_mtk_dma_dev(chan->device);
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+		}
+	} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+		cookie = c->vc.chan.cookie;
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			mtk_dma_start_tx(c);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+
+	mtk_dma_start_rx(c);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dma_desc *d = c->desc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->remain_size != 0U) {
+		list_add_tail(&c->node, &mtkd->pending);
+		tasklet_schedule(&mtkd->task);
+	} else {
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_dma_slave_config(struct dma_chan *chan,
+				struct dma_slave_config *cfg)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	int ret;
+
+	c->cfg = *cfg;
+
+	if (cfg->direction == DMA_DEV_TO_MEM) {
+		unsigned int rx_len = cfg->src_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
+		mtk_dma_chan_write(c, VFF_LEN, rx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
+		mtk_dma_chan_write(c,
+				   VFF_INT_EN, VFF_RX_INT_EN0_B
+				   | VFF_RX_INT_EN1_B);
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_rx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	} else if (cfg->direction == DMA_MEM_TO_DEV)	{
+		unsigned int tx_len = cfg->dst_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
+		mtk_dma_chan_write(c, VFF_LEN, tx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_tx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+	if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
+		dev_err(chan->device->dev,
+			"failed to configure DMA for direction %d\n",
+			cfg->direction);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_terminate_all(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	list_del_init(&c->node);
+	mtk_dma_stop(c);
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int mtk_dma_device_pause(struct dma_chan *chan)
+{
+	/* Stub so dma_get_slave_caps() passes; pause is not supported */
+	return -EINVAL;
+}
+
+static int mtk_dma_device_resume(struct dma_chan *chan)
+{
+	/* Stub so dma_get_slave_caps() passes; resume is not supported */
+	return -EINVAL;
+}
+
+static void mtk_dma_free(struct mtk_dmadev *mtkd)
+{
+	tasklet_kill(&mtkd->task);
+	while (list_empty(&mtkd->ddev.channels) == 0) {
+		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+			struct mtk_chan, vc.chan.device_node);
+
+		list_del(&c->vc.chan.device_node);
+		tasklet_kill(&c->vc.task);
+		devm_kfree(mtkd->ddev.dev, c);
+	}
+}
+
+static const struct of_device_id mtk_uart_dma_match[] = {
+	{ .compatible = "mediatek,mt6577-uart-dma", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
+
+static int mtk_apdma_probe(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd;
+	struct resource *res;
+	struct mtk_chan *c;
+	unsigned int i;
+	int rc;
+
+	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+	if (!mtkd)
+		return -ENOMEM;
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			return -ENODEV;
+		mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
+		if (IS_ERR(mtkd->mem_base[i]))
+			return PTR_ERR(mtkd->mem_base[i]);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+		if ((int)mtkd->dma_irq[i] < 0) {
+			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+			return -EINVAL;
+		}
+	}
+
+	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(mtkd->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(mtkd->clk);
+	}
+
+	if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
+		dev_info(&pdev->dev, "Support dma 33bits\n");
+		mtkd->support_33bits = true;
+	}
+
+	if (mtkd->support_33bits)
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
+	else
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+	mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
+	mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
+	mtkd->ddev.device_tx_status = mtk_dma_tx_status;
+	mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
+	mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
+	mtkd->ddev.device_config = mtk_dma_slave_config;
+	mtkd->ddev.device_pause = mtk_dma_device_pause;
+	mtkd->ddev.device_resume = mtk_dma_device_resume;
+	mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
+	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	mtkd->ddev.dev = &pdev->dev;
+	INIT_LIST_HEAD(&mtkd->ddev.channels);
+	INIT_LIST_HEAD(&mtkd->pending);
+
+	spin_lock_init(&mtkd->lock);
+	tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
+
+	mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
+	if (of_property_read_u32(pdev->dev.of_node,
+				 "dma-requests", &mtkd->dma_requests)) {
+		dev_info(&pdev->dev,
+			 "Missing dma-requests property, using %u.\n",
+			 MTK_APDMA_DEFAULT_REQUESTS);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+		if (!c)
+			goto err_no_dma;
+
+		c->vc.desc_free = mtk_dma_desc_free;
+		vchan_init(&c->vc, &mtkd->ddev);
+		INIT_LIST_HEAD(&c->node);
+	}
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+
+	rc = dma_async_device_register(&mtkd->ddev);
+	if (rc)
+		goto rpm_disable;
+
+	platform_set_drvdata(pdev, mtkd);
+
+	if (pdev->dev.of_node) {
+		mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
+
+		/* Device-tree DMA controller registration */
+		rc = of_dma_controller_register(pdev->dev.of_node,
+						of_dma_simple_xlate,
+						&mtk_dma_info);
+		if (rc)
+			goto dma_remove;
+	}
+
+	return rc;
+
+dma_remove:
+	dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+err_no_dma:
+	mtk_dma_free(mtkd);
+	return rc;
+}
+
+static int mtk_apdma_remove(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
+
+	if (pdev->dev.of_node)
+		of_dma_controller_free(pdev->dev.of_node);
+
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+
+	dma_async_device_unregister(&mtkd->ddev);
+
+	mtk_dma_free(mtkd);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_dma_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev))
+		mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev)) {
+		ret = mtk_dma_clk_enable(mtkd);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_runtime_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_runtime_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	ret = mtk_dma_clk_enable(mtkd);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops mtk_dma_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
+	SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
+			   mtk_dma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_dma_driver = {
+	.probe	= mtk_apdma_probe,
+	.remove	= mtk_apdma_remove,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.pm		= &mtk_dma_pm_ops,
+		.of_match_table = of_match_ptr(mtk_uart_dma_match),
+	},
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
+{
+	if (chan->device->dev->driver == &mtk_dma_driver.driver) {
+		struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+		struct mtk_chan *c = to_mtk_dma_chan(chan);
+		unsigned int req = *(unsigned int *)param;
+
+		if (req <= mtkd->dma_requests) {
+			c->dma_sig = req;
+			c->dma_ch = req;
+			return true;
+		}
+	}
+	return false;
+}
+
+module_platform_driver(mtk_dma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 27bac0b..bef436e 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -1,4 +1,15 @@
 
+config DMA_MTK_UART
+	tristate "MediaTek SoCs APDMA support for UART"
+	depends on OF
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the UART DMA engine found on MediaTek SoCs.
+	  Enable this option if the MediaTek 8250 UART driver is in
+	  use and you want the UART to transfer data through DMA.
+	  This engine is only used on MediaTek SoCs.
+
 config MTK_HSDMA
 	tristate "MediaTek High-Speed DMA controller support"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 6e778f8..2f2efd9 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1 +1,2 @@
+obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 34+ messages in thread
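For readers tracing the dmaengine callbacks above, a minimal client-side sketch of how a consumer would drive one of these RX channels. This is not part of the patch: "dev", "buf", "len" and the channel name "rx" are hypothetical placeholders, and only the dmaengine, scatterlist and dma-mapping calls are real kernel API (linux/dmaengine.h, linux/scatterlist.h, linux/dma-mapping.h). Error unwinding is trimmed for brevity.

/*
 * Sketch only: request a channel from this APDMA engine, configure
 * it, and start a DEV_TO_MEM transfer. "dev", "buf" and "len" are
 * assumed to be supplied by the caller.
 */
static int example_start_rx(struct device *dev, void *buf, size_t len)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_slave_config cfg = { 0 };
	struct dma_chan *chan;
	struct scatterlist sg;

	chan = dma_request_slave_channel(dev, "rx"); /* DT "dma-names" entry */
	if (!chan)
		return -ENODEV;

	cfg.direction = DMA_DEV_TO_MEM;
	cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;	/* VFF_LEN = 1 * 1024 */
	dmaengine_slave_config(chan, &cfg);	/* lands in mtk_dma_slave_config() */

	sg_init_one(&sg, buf, len);
	if (dma_map_sg(dev, &sg, 1, DMA_FROM_DEVICE) != 1)
		return -ENOMEM;

	desc = dmaengine_prep_slave_sg(chan, &sg, 1, DMA_DEV_TO_MEM,
				       DMA_PREP_INTERRUPT);
	if (!desc)
		return -EIO;

	dmaengine_submit(desc);		/* cookie via mtk_dma_prep_slave_sg() */
	dma_async_issue_pending(chan);	/* ends up in mtk_dma_issue_pending() */
	return 0;
}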

* [PATCH v2 2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05  8:42 ` Long Cheng
  0 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-05  8:42 UTC (permalink / raw)
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

In DMA engine framework, add 8250 mtk dma to support it.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
 drivers/dma/mediatek/Kconfig        |   11 +
 drivers/dma/mediatek/Makefile       |    1 +
 3 files changed, 906 insertions(+)
 create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c

diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
new file mode 100644
index 0000000..3454679
--- /dev/null
+++ b/drivers/dma/mediatek/8250_mtk_dma.c
@@ -0,0 +1,894 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Mediatek 8250 DMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <long.cheng@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/pm_runtime.h>
+#include <linux/iopoll.h>
+
+#include "../virt-dma.h"
+
+#define MTK_APDMA_DEFAULT_REQUESTS	127
+#define MTK_APDMA_CHANNELS		(CONFIG_SERIAL_8250_NR_UARTS * 2)
+
+#define VFF_EN_B		BIT(0)
+#define VFF_STOP_B		BIT(0)
+#define VFF_FLUSH_B		BIT(0)
+#define VFF_4G_SUPPORT_B	BIT(0)
+#define VFF_RX_INT_EN0_B	BIT(0)	/*rx valid size >=  vff thre*/
+#define VFF_RX_INT_EN1_B	BIT(1)
+#define VFF_TX_INT_EN_B		BIT(0)	/*tx left size >= vff thre*/
+#define VFF_WARM_RST_B		BIT(0)
+#define VFF_RX_INT_FLAG_CLR_B	(BIT(0) | BIT(1))
+#define VFF_TX_INT_FLAG_CLR_B	0
+#define VFF_STOP_CLR_B		0
+#define VFF_FLUSH_CLR_B		0
+#define VFF_INT_EN_CLR_B	0
+#define VFF_4G_SUPPORT_CLR_B	0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n)		((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n)		((n) * 3 / 4)
+
+#define MTK_DMA_RING_SIZE	0xffffU
+/* invert this bit when wrap ring head again*/
+#define MTK_DMA_RING_WRAP	0x10000U
+
+#define VFF_INT_FLAG		0x00
+#define VFF_INT_EN		0x04
+#define VFF_EN			0x08
+#define VFF_RST			0x0c
+#define VFF_STOP		0x10
+#define VFF_FLUSH		0x14
+#define VFF_ADDR		0x1c
+#define VFF_LEN			0x24
+#define VFF_THRE		0x28
+#define VFF_WPT			0x2c
+#define VFF_RPT			0x30
+/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
+#define VFF_VALID_SIZE		0x3c
+/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
+#define VFF_LEFT_SIZE		0x40
+#define VFF_DEBUG_STATUS	0x50
+#define VFF_4G_SUPPORT		0x54
+
+struct mtk_dmadev {
+	struct dma_device ddev;
+	void __iomem *mem_base[MTK_APDMA_CHANNELS];
+	spinlock_t lock; /* dma dev lock */
+	struct tasklet_struct task;
+	struct list_head pending;
+	struct clk *clk;
+	unsigned int dma_requests;
+	bool support_33bits;
+	unsigned int dma_irq[MTK_APDMA_CHANNELS];
+	struct mtk_chan *ch[MTK_APDMA_CHANNELS];
+};
+
+struct mtk_chan {
+	struct virt_dma_chan vc;
+	struct list_head node;
+	struct dma_slave_config	cfg;
+	void __iomem *base;
+	struct mtk_dma_desc *desc;
+
+	bool stop;
+	bool requested;
+
+	unsigned int dma_sig;
+	unsigned int dma_ch;
+	unsigned int sgidx;
+	unsigned int remain_size;
+	unsigned int rx_ptr;
+};
+
+struct mtk_dma_sg {
+	dma_addr_t addr;
+	unsigned int en;		/* number of elements (24-bit) */
+	unsigned int fn;		/* number of frames (16-bit) */
+};
+
+struct mtk_dma_desc {
+	struct virt_dma_desc vd;
+	enum dma_transfer_direction dir;
+
+	unsigned int sglen;
+	struct mtk_dma_sg sg[0];
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
+static struct of_dma_filter_info mtk_dma_info = {
+	.filter_fn = mtk_dma_filter_fn,
+};
+
+static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct mtk_dmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
+{
+	return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_dma_desc *to_mtk_dma_desc
+	(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct mtk_dma_desc, vd.tx);
+}
+
+static void mtk_dma_chan_write(struct mtk_chan *c,
+			       unsigned int reg, unsigned int val)
+{
+	writel(val, c->base + reg);
+}
+
+static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
+{
+	return readl(c->base + reg);
+}
+
+static void mtk_dma_desc_free(struct virt_dma_desc *vd)
+{
+	struct dma_chan *chan = vd->tx.chan;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	kfree(c->desc);
+	c->desc = NULL;
+}
+
+static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
+{
+	int ret;
+
+	ret = clk_prepare_enable(mtkd->clk);
+	if (ret) {
+		dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
+{
+	clk_disable_unprepare(mtkd->clk);
+}
+
+static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
+				     struct virt_dma_chan *vc)
+{
+	struct virt_dma_desc *vd;
+
+	if (list_empty(&vc->desc_issued) == 0) {
+		list_for_each_entry(vd, &vc->desc_issued, node) {
+			if (cookie == vd->tx.cookie) {
+				INIT_LIST_HEAD(&vc->desc_issued);
+				break;
+			}
+		}
+	}
+}
+
+static void mtk_dma_tx_flush(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
+		mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_dma_tx_write(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned int txcount = c->remain_size;
+	unsigned int len, send, left, wpt, wrap;
+
+	len = mtk_dma_chan_read(c, VFF_LEN);
+
+	while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
+		if (c->remain_size == 0U)
+			break;
+		send = min(left, c->remain_size);
+		wpt = mtk_dma_chan_read(c, VFF_WPT);
+		wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
+
+		if ((wpt & (len - 1U)) + send < len)
+			mtk_dma_chan_write(c, VFF_WPT, wpt + send);
+		else
+			mtk_dma_chan_write(c, VFF_WPT,
+					   ((wpt + send) & (len - 1U))
+					   | wrap);
+
+		c->remain_size -= send;
+	}
+
+	if (txcount != c->remain_size) {
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+		mtk_dma_tx_flush(chan);
+	}
+}
+
+static void mtk_dma_start_tx(struct mtk_chan *c)
+{
+	if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+	else
+		mtk_dma_tx_write(&c->vc.chan);
+
+	c->stop = false;
+}
+
+static void mtk_dma_get_rx_size(struct mtk_chan *c)
+{
+	unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
+	unsigned int rdptr, wrptr, wrreg, rdreg, count;
+
+	rdreg = mtk_dma_chan_read(c, VFF_RPT);
+	wrreg = mtk_dma_chan_read(c, VFF_WPT);
+	rdptr = rdreg & MTK_DMA_RING_SIZE;
+	wrptr = wrreg & MTK_DMA_RING_SIZE;
+	count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
+			(wrptr + rx_size - rdptr) : (wrptr - rdptr);
+
+	c->remain_size = count;
+	c->rx_ptr = rdptr;
+
+	mtk_dma_chan_write(c, VFF_RPT, wrreg);
+}
+
+static void mtk_dma_start_rx(struct mtk_chan *c)
+{
+	struct dma_chan *chan = &c->vc.chan;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_dma_desc *d = c->desc;
+
+	if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
+		return;
+
+	if (d && d->vd.tx.cookie != 0) {
+		mtk_dma_get_rx_size(c);
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	} else {
+		spin_lock(&mtkd->lock);
+		if (list_empty(&mtkd->pending))
+			list_add_tail(&c->node, &mtkd->pending);
+		spin_unlock(&mtkd->lock);
+		tasklet_schedule(&mtkd->task);
+	}
+}
+
+static void mtk_dma_reset(struct mtk_chan *c)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_ADDR, 0);
+	mtk_dma_chan_write(c, VFF_THRE, 0);
+	mtk_dma_chan_write(c, VFF_LEN, 0);
+	mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
+
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret) {
+		dev_err(c->vc.chan.device->dev,
+				"dma reset: fail, timeout\n");
+		return;
+	}
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_RPT, 0);
+	else if (c->cfg.direction == DMA_MEM_TO_DEV)
+		mtk_dma_chan_write(c, VFF_WPT, 0);
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+}
+
+static void mtk_dma_stop(struct mtk_chan *c)
+{
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
+	/* Wait for flush */
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_FLUSH,
+				 status,
+				 (status & VFF_FLUSH_B) != VFF_FLUSH_B,
+				 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling FLUSH fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	/*set stop as 1 -> wait until en is 0 -> set stop as 0*/
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
+	mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+	else
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	c->stop = true;
+}
+
+/*
+ * This callback schedules all pending channels. We could be more
+ * clever here by postponing allocation of the real DMA channels to
+ * this point, and freeing them when our virtual channel becomes idle.
+ *
+ * We would then need to deal with 'all channels in-use'
+ */
+static void mtk_dma_sched(unsigned long data)
+{
+	struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
+	struct virt_dma_desc *vd;
+	struct mtk_chan *c;
+	dma_cookie_t cookie;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irq(&mtkd->lock);
+	list_splice_tail_init(&mtkd->pending, &head);
+	spin_unlock_irq(&mtkd->lock);
+
+	if (!list_empty(&head)) {
+		c = list_first_entry(&head, struct mtk_chan, node);
+		cookie = c->vc.chan.cookie;
+
+		spin_lock_irqsave(&c->vc.lock, flags);
+		if (c->cfg.direction == DMA_DEV_TO_MEM) {
+			list_del_init(&c->node);
+			mtk_dma_start_rx(c);
+		} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			list_del_init(&c->node);
+			mtk_dma_start_tx(c);
+		}
+		spin_unlock_irqrestore(&c->vc.lock, flags);
+	}
+}
+
+static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	int ret = -EBUSY;
+
+	pm_runtime_get_sync(mtkd->ddev.dev);
+
+	if (!mtkd->ch[c->dma_ch]) {
+		c->base = mtkd->mem_base[c->dma_ch];
+		mtkd->ch[c->dma_ch] = c;
+		ret = 1;
+	}
+	c->requested = false;
+	mtk_dma_reset(c);
+
+	return ret;
+}
+
+static void mtk_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (c->requested) {
+		c->requested = false;
+		free_irq(mtkd->dma_irq[c->dma_ch], chan);
+	}
+
+	tasklet_kill(&mtkd->task);
+	tasklet_kill(&c->vc.task);
+
+	c->base = NULL;
+	mtkd->ch[c->dma_ch] = NULL;
+	vchan_free_chan_resources(&c->vc);
+
+	dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
+	c->dma_sig = 0;
+
+	pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	if (!txstate)
+		return DMA_ERROR;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (ret == DMA_IN_PROGRESS) {
+		c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
+		dma_set_residue(txstate, c->rx_ptr);
+	} else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
+		dma_set_residue(txstate, c->remain_size);
+	} else {
+		dma_set_residue(txstate, 0);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
+	(struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sglen,	enum dma_transfer_direction dir,
+	unsigned long tx_flags, void *context)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct scatterlist *sgent;
+	struct mtk_dma_desc *d;
+	struct mtk_dma_sg *sg;
+	unsigned int size, i, j, en;
+
+	en = 1;
+
+	if ((dir != DMA_DEV_TO_MEM) &&
+		(dir != DMA_MEM_TO_DEV)) {
+		dev_err(chan->device->dev, "bad direction\n");
+		return NULL;
+	}
+
+	/* Now allocate and setup the descriptor. */
+	d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	d->dir = dir;
+
+	j = 0;
+	for_each_sg(sgl, sgent, sglen, i) {
+		d->sg[j].addr = sg_dma_address(sgent);
+		d->sg[j].en = en;
+		d->sg[j].fn = sg_dma_len(sgent) / en;
+		j++;
+	}
+
+	d->sglen = j;
+
+	if (dir == DMA_MEM_TO_DEV) {
+		for (size = i = 0; i < d->sglen; i++) {
+			sg = &d->sg[i];
+			size += sg->en * sg->fn;
+		}
+		c->remain_size = size;
+	}
+
+	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_dma_issue_pending(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct virt_dma_desc *vd;
+	struct mtk_dmadev *mtkd;
+	dma_cookie_t cookie;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->cfg.direction == DMA_DEV_TO_MEM) {
+		cookie = c->vc.chan.cookie;
+		mtkd = to_mtk_dma_dev(chan->device);
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+		}
+	} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+		cookie = c->vc.chan.cookie;
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			mtk_dma_start_tx(c);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+
+	mtk_dma_start_rx(c);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dma_desc *d = c->desc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->remain_size != 0U) {
+		list_add_tail(&c->node, &mtkd->pending);
+		tasklet_schedule(&mtkd->task);
+	} else {
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_dma_slave_config(struct dma_chan *chan,
+				struct dma_slave_config *cfg)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	int ret;
+
+	c->cfg = *cfg;
+
+	if (cfg->direction == DMA_DEV_TO_MEM) {
+		unsigned int rx_len = cfg->src_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
+		mtk_dma_chan_write(c, VFF_LEN, rx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
+		mtk_dma_chan_write(c,
+				   VFF_INT_EN, VFF_RX_INT_EN0_B
+				   | VFF_RX_INT_EN1_B);
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_rx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	} else if (cfg->direction == DMA_MEM_TO_DEV)	{
+		unsigned int tx_len = cfg->dst_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
+		mtk_dma_chan_write(c, VFF_LEN, tx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_tx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+	if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
+		dev_err(chan->device->dev,
+			"config dma dir[%d] fail\n", cfg->direction);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_terminate_all(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	list_del_init(&c->node);
+	mtk_dma_stop(c);
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int mtk_dma_device_pause(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static int mtk_dma_device_resume(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static void mtk_dma_free(struct mtk_dmadev *mtkd)
+{
+	tasklet_kill(&mtkd->task);
+	while (list_empty(&mtkd->ddev.channels) == 0) {
+		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+			struct mtk_chan, vc.chan.device_node);
+
+		list_del(&c->vc.chan.device_node);
+		tasklet_kill(&c->vc.task);
+		devm_kfree(mtkd->ddev.dev, c);
+	}
+}
+
+static const struct of_device_id mtk_uart_dma_match[] = {
+	{ .compatible = "mediatek,mt6577-uart-dma", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
+
+static int mtk_apdma_probe(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd;
+	struct resource *res;
+	struct mtk_chan *c;
+	unsigned int i;
+	int rc;
+
+	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+	if (!mtkd)
+		return -ENOMEM;
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			return -ENODEV;
+		mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
+		if (IS_ERR(mtkd->mem_base[i]))
+			return PTR_ERR(mtkd->mem_base[i]);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+		if ((int)mtkd->dma_irq[i] < 0) {
+			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+			return -EINVAL;
+		}
+	}
+
+	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(mtkd->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(mtkd->clk);
+	}
+
+	if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
+		dev_info(&pdev->dev, "Support dma 33bits\n");
+		mtkd->support_33bits = true;
+	}
+
+	if (mtkd->support_33bits)
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
+	else
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+	mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
+	mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
+	mtkd->ddev.device_tx_status = mtk_dma_tx_status;
+	mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
+	mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
+	mtkd->ddev.device_config = mtk_dma_slave_config;
+	mtkd->ddev.device_pause = mtk_dma_device_pause;
+	mtkd->ddev.device_resume = mtk_dma_device_resume;
+	mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
+	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	mtkd->ddev.dev = &pdev->dev;
+	INIT_LIST_HEAD(&mtkd->ddev.channels);
+	INIT_LIST_HEAD(&mtkd->pending);
+
+	spin_lock_init(&mtkd->lock);
+	tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
+
+	mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
+	if (of_property_read_u32(pdev->dev.of_node,
+				 "dma-requests", &mtkd->dma_requests)) {
+		dev_info(&pdev->dev,
+			 "Missing dma-requests property, using %u.\n",
+			 MTK_APDMA_DEFAULT_REQUESTS);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+		if (!c)
+			goto err_no_dma;
+
+		c->vc.desc_free = mtk_dma_desc_free;
+		vchan_init(&c->vc, &mtkd->ddev);
+		INIT_LIST_HEAD(&c->node);
+	}
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+
+	rc = dma_async_device_register(&mtkd->ddev);
+	if (rc)
+		goto rpm_disable;
+
+	platform_set_drvdata(pdev, mtkd);
+
+	if (pdev->dev.of_node) {
+		mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
+
+		/* Device-tree DMA controller registration */
+		rc = of_dma_controller_register(pdev->dev.of_node,
+						of_dma_simple_xlate,
+						&mtk_dma_info);
+		if (rc)
+			goto dma_remove;
+	}
+
+	return rc;
+
+dma_remove:
+	dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+err_no_dma:
+	mtk_dma_free(mtkd);
+	return rc;
+}
+
+static int mtk_apdma_remove(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
+
+	if (pdev->dev.of_node)
+		of_dma_controller_free(pdev->dev.of_node);
+
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+
+	dma_async_device_unregister(&mtkd->ddev);
+
+	mtk_dma_free(mtkd);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_dma_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev))
+		mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev)) {
+		ret = mtk_dma_clk_enable(mtkd);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_runtime_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_runtime_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	ret = mtk_dma_clk_enable(mtkd);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops mtk_dma_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
+	SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
+			   mtk_dma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_dma_driver = {
+	.probe	= mtk_apdma_probe,
+	.remove	= mtk_apdma_remove,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.pm		= &mtk_dma_pm_ops,
+		.of_match_table = of_match_ptr(mtk_uart_dma_match),
+	},
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
+{
+	if (chan->device->dev->driver == &mtk_dma_driver.driver) {
+		struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+		struct mtk_chan *c = to_mtk_dma_chan(chan);
+		unsigned int req = *(unsigned int *)param;
+
+		if (req <= mtkd->dma_requests) {
+			c->dma_sig = req;
+			c->dma_ch = req;
+			return true;
+		}
+	}
+	return false;
+}
+
+module_platform_driver(mtk_dma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 27bac0b..bef436e 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -1,4 +1,15 @@
 
+config DMA_MTK_UART
+	tristate "MediaTek SoCs APDMA support for UART"
+	depends on OF
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the UART DMA engine found on MediaTek MTK SoCs.
+	  when 8250 mtk uart is enabled, and if you want to using DMA,
+	  you can enable the config. the DMA engine just only be used
+	  with MediaTek Socs.
+
 config MTK_HSDMA
 	tristate "MediaTek High-Speed DMA controller support"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 6e778f8..2f2efd9 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1 +1,2 @@
+obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v2 2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05  8:42 ` Long Cheng
  0 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-05  8:42 UTC (permalink / raw)
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: devicetree, YT Shen, srv_heupstream, Greg Kroah-Hartman,
	Sean Wang, linux-kernel, dmaengine, Long Cheng, linux-mediatek,
	linux-serial, Jiri Slaby, Matthias Brugger, Yingjoe Chen,
	Dan Williams, linux-arm-kernel

In DMA engine framework, add 8250 mtk dma to support it.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
 drivers/dma/mediatek/Kconfig        |   11 +
 drivers/dma/mediatek/Makefile       |    1 +
 3 files changed, 906 insertions(+)
 create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c

diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
new file mode 100644
index 0000000..3454679
--- /dev/null
+++ b/drivers/dma/mediatek/8250_mtk_dma.c
@@ -0,0 +1,894 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Mediatek 8250 DMA driver.
+ *
+ * Copyright (c) 2018 MediaTek Inc.
+ * Author: Long Cheng <long.cheng@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/pm_runtime.h>
+#include <linux/iopoll.h>
+
+#include "../virt-dma.h"
+
+#define MTK_APDMA_DEFAULT_REQUESTS	127
+#define MTK_APDMA_CHANNELS		(CONFIG_SERIAL_8250_NR_UARTS * 2)
+
+#define VFF_EN_B		BIT(0)
+#define VFF_STOP_B		BIT(0)
+#define VFF_FLUSH_B		BIT(0)
+#define VFF_4G_SUPPORT_B	BIT(0)
+#define VFF_RX_INT_EN0_B	BIT(0)	/*rx valid size >=  vff thre*/
+#define VFF_RX_INT_EN1_B	BIT(1)
+#define VFF_TX_INT_EN_B		BIT(0)	/*tx left size >= vff thre*/
+#define VFF_WARM_RST_B		BIT(0)
+#define VFF_RX_INT_FLAG_CLR_B	(BIT(0) | BIT(1))
+#define VFF_TX_INT_FLAG_CLR_B	0
+#define VFF_STOP_CLR_B		0
+#define VFF_FLUSH_CLR_B		0
+#define VFF_INT_EN_CLR_B	0
+#define VFF_4G_SUPPORT_CLR_B	0
+
+/* interrupt trigger level for tx */
+#define VFF_TX_THRE(n)		((n) * 7 / 8)
+/* interrupt trigger level for rx */
+#define VFF_RX_THRE(n)		((n) * 3 / 4)
+
+#define MTK_DMA_RING_SIZE	0xffffU
+/* invert this bit when wrap ring head again*/
+#define MTK_DMA_RING_WRAP	0x10000U
+
+#define VFF_INT_FLAG		0x00
+#define VFF_INT_EN		0x04
+#define VFF_EN			0x08
+#define VFF_RST			0x0c
+#define VFF_STOP		0x10
+#define VFF_FLUSH		0x14
+#define VFF_ADDR		0x1c
+#define VFF_LEN			0x24
+#define VFF_THRE		0x28
+#define VFF_WPT			0x2c
+#define VFF_RPT			0x30
+/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
+#define VFF_VALID_SIZE		0x3c
+/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
+#define VFF_LEFT_SIZE		0x40
+#define VFF_DEBUG_STATUS	0x50
+#define VFF_4G_SUPPORT		0x54
+
+struct mtk_dmadev {
+	struct dma_device ddev;
+	void __iomem *mem_base[MTK_APDMA_CHANNELS];
+	spinlock_t lock; /* dma dev lock */
+	struct tasklet_struct task;
+	struct list_head pending;
+	struct clk *clk;
+	unsigned int dma_requests;
+	bool support_33bits;
+	unsigned int dma_irq[MTK_APDMA_CHANNELS];
+	struct mtk_chan *ch[MTK_APDMA_CHANNELS];
+};
+
+struct mtk_chan {
+	struct virt_dma_chan vc;
+	struct list_head node;
+	struct dma_slave_config	cfg;
+	void __iomem *base;
+	struct mtk_dma_desc *desc;
+
+	bool stop;
+	bool requested;
+
+	unsigned int dma_sig;
+	unsigned int dma_ch;
+	unsigned int sgidx;
+	unsigned int remain_size;
+	unsigned int rx_ptr;
+};
+
+struct mtk_dma_sg {
+	dma_addr_t addr;
+	unsigned int en;		/* number of elements (24-bit) */
+	unsigned int fn;		/* number of frames (16-bit) */
+};
+
+struct mtk_dma_desc {
+	struct virt_dma_desc vd;
+	enum dma_transfer_direction dir;
+
+	unsigned int sglen;
+	struct mtk_dma_sg sg[0];
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
+static struct of_dma_filter_info mtk_dma_info = {
+	.filter_fn = mtk_dma_filter_fn,
+};
+
+static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct mtk_dmadev, ddev);
+}
+
+static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
+{
+	return container_of(c, struct mtk_chan, vc.chan);
+}
+
+static inline struct mtk_dma_desc *to_mtk_dma_desc
+	(struct dma_async_tx_descriptor *t)
+{
+	return container_of(t, struct mtk_dma_desc, vd.tx);
+}
+
+static void mtk_dma_chan_write(struct mtk_chan *c,
+			       unsigned int reg, unsigned int val)
+{
+	writel(val, c->base + reg);
+}
+
+static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
+{
+	return readl(c->base + reg);
+}
+
+static void mtk_dma_desc_free(struct virt_dma_desc *vd)
+{
+	struct dma_chan *chan = vd->tx.chan;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	kfree(c->desc);
+	c->desc = NULL;
+}
+
+static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
+{
+	int ret;
+
+	ret = clk_prepare_enable(mtkd->clk);
+	if (ret) {
+		dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
+{
+	clk_disable_unprepare(mtkd->clk);
+}
+
+static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
+				     struct virt_dma_chan *vc)
+{
+	struct virt_dma_desc *vd;
+
+	if (list_empty(&vc->desc_issued) == 0) {
+		list_for_each_entry(vd, &vc->desc_issued, node) {
+			if (cookie == vd->tx.cookie) {
+				INIT_LIST_HEAD(&vc->desc_issued);
+				break;
+			}
+		}
+	}
+}
+
+static void mtk_dma_tx_flush(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
+		mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
+}
+
+static void mtk_dma_tx_write(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned int txcount = c->remain_size;
+	unsigned int len, send, left, wpt, wrap;
+
+	len = mtk_dma_chan_read(c, VFF_LEN);
+
+	while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
+		if (c->remain_size == 0U)
+			break;
+		send = min(left, c->remain_size);
+		wpt = mtk_dma_chan_read(c, VFF_WPT);
+		wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
+
+		if ((wpt & (len - 1U)) + send < len)
+			mtk_dma_chan_write(c, VFF_WPT, wpt + send);
+		else
+			mtk_dma_chan_write(c, VFF_WPT,
+					   ((wpt + send) & (len - 1U))
+					   | wrap);
+
+		c->remain_size -= send;
+	}
+
+	if (txcount != c->remain_size) {
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+		mtk_dma_tx_flush(chan);
+	}
+}
+
+static void mtk_dma_start_tx(struct mtk_chan *c)
+{
+	if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
+		mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
+	else
+		mtk_dma_tx_write(&c->vc.chan);
+
+	c->stop = false;
+}
+
+static void mtk_dma_get_rx_size(struct mtk_chan *c)
+{
+	unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
+	unsigned int rdptr, wrptr, wrreg, rdreg, count;
+
+	rdreg = mtk_dma_chan_read(c, VFF_RPT);
+	wrreg = mtk_dma_chan_read(c, VFF_WPT);
+	rdptr = rdreg & MTK_DMA_RING_SIZE;
+	wrptr = wrreg & MTK_DMA_RING_SIZE;
+	count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
+			(wrptr + rx_size - rdptr) : (wrptr - rdptr);
+
+	c->remain_size = count;
+	c->rx_ptr = rdptr;
+
+	mtk_dma_chan_write(c, VFF_RPT, wrreg);
+}
+
+static void mtk_dma_start_rx(struct mtk_chan *c)
+{
+	struct dma_chan *chan = &c->vc.chan;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_dma_desc *d = c->desc;
+
+	if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
+		return;
+
+	if (d && d->vd.tx.cookie != 0) {
+		mtk_dma_get_rx_size(c);
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	} else {
+		spin_lock(&mtkd->lock);
+		if (list_empty(&mtkd->pending))
+			list_add_tail(&c->node, &mtkd->pending);
+		spin_unlock(&mtkd->lock);
+		tasklet_schedule(&mtkd->task);
+	}
+}
+
+static void mtk_dma_reset(struct mtk_chan *c)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_ADDR, 0);
+	mtk_dma_chan_write(c, VFF_THRE, 0);
+	mtk_dma_chan_write(c, VFF_LEN, 0);
+	mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
+
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret) {
+		dev_err(c->vc.chan.device->dev,
+				"dma reset: fail, timeout\n");
+		return;
+	}
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_RPT, 0);
+	else if (c->cfg.direction == DMA_MEM_TO_DEV)
+		mtk_dma_chan_write(c, VFF_WPT, 0);
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+}
+
+static void mtk_dma_stop(struct mtk_chan *c)
+{
+	u32 status;
+	int ret;
+
+	mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
+	/* Wait for flush */
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_FLUSH,
+				 status,
+				 (status & VFF_FLUSH_B) != VFF_FLUSH_B,
+				 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling FLUSH fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	/*set stop as 1 -> wait until en is 0 -> set stop as 0*/
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
+	ret = readx_poll_timeout(readl,
+				 c->base + VFF_EN,
+				 status, status == 0, 10, 100);
+	if (ret)
+		dev_err(c->vc.chan.device->dev,
+			"dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
+			mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
+
+	mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
+	mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
+
+	if (c->cfg.direction == DMA_DEV_TO_MEM)
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+	else
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	c->stop = true;
+}
+
+/*
+ * This callback schedules all pending channels. We could be more
+ * clever here by postponing allocation of the real DMA channels to
+ * this point, and freeing them when our virtual channel becomes idle.
+ *
+ * We would then need to deal with 'all channels in-use'
+ */
+static void mtk_dma_sched(unsigned long data)
+{
+	struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
+	struct virt_dma_desc *vd;
+	struct mtk_chan *c;
+	dma_cookie_t cookie;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irq(&mtkd->lock);
+	list_splice_tail_init(&mtkd->pending, &head);
+	spin_unlock_irq(&mtkd->lock);
+
+	if (!list_empty(&head)) {
+		c = list_first_entry(&head, struct mtk_chan, node);
+		cookie = c->vc.chan.cookie;
+
+		spin_lock_irqsave(&c->vc.lock, flags);
+		if (c->cfg.direction == DMA_DEV_TO_MEM) {
+			list_del_init(&c->node);
+			mtk_dma_start_rx(c);
+		} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			list_del_init(&c->node);
+			mtk_dma_start_tx(c);
+		}
+		spin_unlock_irqrestore(&c->vc.lock, flags);
+	}
+}
+
+static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	int ret = -EBUSY;
+
+	pm_runtime_get_sync(mtkd->ddev.dev);
+
+	if (!mtkd->ch[c->dma_ch]) {
+		c->base = mtkd->mem_base[c->dma_ch];
+		mtkd->ch[c->dma_ch] = c;
+		ret = 1;
+	}
+	c->requested = false;
+	mtk_dma_reset(c);
+
+	return ret;
+}
+
+static void mtk_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+
+	if (c->requested) {
+		c->requested = false;
+		free_irq(mtkd->dma_irq[c->dma_ch], chan);
+	}
+
+	tasklet_kill(&mtkd->task);
+	tasklet_kill(&c->vc.task);
+
+	c->base = NULL;
+	mtkd->ch[c->dma_ch] = NULL;
+	vchan_free_chan_resources(&c->vc);
+
+	dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
+	c->dma_sig = 0;
+
+	pm_runtime_put_sync(mtkd->ddev.dev);
+}
+
+static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
+					 dma_cookie_t cookie,
+					 struct dma_tx_state *txstate)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	enum dma_status ret;
+	unsigned long flags;
+
+	if (!txstate)
+		return DMA_ERROR;
+
+	ret = dma_cookie_status(chan, cookie, txstate);
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (ret == DMA_IN_PROGRESS) {
+		c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
+		dma_set_residue(txstate, c->rx_ptr);
+	} else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
+		dma_set_residue(txstate, c->remain_size);
+	} else {
+		dma_set_residue(txstate, 0);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
+	(struct dma_chan *chan, struct scatterlist *sgl,
+	unsigned int sglen,	enum dma_transfer_direction dir,
+	unsigned long tx_flags, void *context)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct scatterlist *sgent;
+	struct mtk_dma_desc *d;
+	struct mtk_dma_sg *sg;
+	unsigned int size, i, j, en;
+
+	en = 1;
+
+	if ((dir != DMA_DEV_TO_MEM) &&
+		(dir != DMA_MEM_TO_DEV)) {
+		dev_err(chan->device->dev, "bad direction\n");
+		return NULL;
+	}
+
+	/* Now allocate and setup the descriptor. */
+	d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	d->dir = dir;
+
+	j = 0;
+	for_each_sg(sgl, sgent, sglen, i) {
+		d->sg[j].addr = sg_dma_address(sgent);
+		d->sg[j].en = en;
+		d->sg[j].fn = sg_dma_len(sgent) / en;
+		j++;
+	}
+
+	d->sglen = j;
+
+	if (dir == DMA_MEM_TO_DEV) {
+		for (size = i = 0; i < d->sglen; i++) {
+			sg = &d->sg[i];
+			size += sg->en * sg->fn;
+		}
+		c->remain_size = size;
+	}
+
+	return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
+}
+
+static void mtk_dma_issue_pending(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct virt_dma_desc *vd;
+	struct mtk_dmadev *mtkd;
+	dma_cookie_t cookie;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->cfg.direction == DMA_DEV_TO_MEM) {
+		cookie = c->vc.chan.cookie;
+		mtkd = to_mtk_dma_dev(chan->device);
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+		}
+	} else if (c->cfg.direction == DMA_MEM_TO_DEV) {
+		cookie = c->vc.chan.cookie;
+		if (vchan_issue_pending(&c->vc) && !c->desc) {
+			vd = vchan_find_desc(&c->vc, cookie);
+			c->desc = to_mtk_dma_desc(&vd->tx);
+			mtk_dma_start_tx(c);
+		}
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+}
+
+static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+
+	mtk_dma_start_rx(c);
+
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
+{
+	struct dma_chan *chan = (struct dma_chan *)dev_id;
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dma_desc *d = c->desc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	if (c->remain_size != 0U) {
+		list_add_tail(&c->node, &mtkd->pending);
+		tasklet_schedule(&mtkd->task);
+	} else {
+		mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
+		vchan_cookie_complete(&d->vd);
+	}
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_dma_slave_config(struct dma_chan *chan,
+				struct dma_slave_config *cfg)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
+	int ret;
+
+	c->cfg = *cfg;
+
+	if (cfg->direction == DMA_DEV_TO_MEM) {
+		unsigned int rx_len = cfg->src_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
+		mtk_dma_chan_write(c, VFF_LEN, rx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
+		mtk_dma_chan_write(c,
+				   VFF_INT_EN, VFF_RX_INT_EN0_B
+				   | VFF_RX_INT_EN1_B);
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_rx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	} else if (cfg->direction == DMA_MEM_TO_DEV)	{
+		unsigned int tx_len = cfg->dst_addr_width * 1024;
+
+		mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
+		mtk_dma_chan_write(c, VFF_LEN, tx_len);
+		mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
+		mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
+		mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
+
+		if (!c->requested) {
+			c->requested = true;
+			ret = request_irq(mtkd->dma_irq[c->dma_ch],
+					  mtk_dma_tx_interrupt,
+					  IRQF_TRIGGER_NONE,
+					  KBUILD_MODNAME, chan);
+			if (ret < 0) {
+				dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
+				return -EINVAL;
+			}
+		}
+	}
+
+	if (mtkd->support_33bits)
+		mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
+
+	if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
+		dev_err(chan->device->dev,
+			"config dma dir[%d] fail\n", cfg->direction);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_terminate_all(struct dma_chan *chan)
+{
+	struct mtk_chan *c = to_mtk_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&c->vc.lock, flags);
+	list_del_init(&c->node);
+	mtk_dma_stop(c);
+	spin_unlock_irqrestore(&c->vc.lock, flags);
+
+	return 0;
+}
+
+static int mtk_dma_device_pause(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static int mtk_dma_device_resume(struct dma_chan *chan)
+{
+	/* just for check caps pass */
+	return -EINVAL;
+}
+
+static void mtk_dma_free(struct mtk_dmadev *mtkd)
+{
+	tasklet_kill(&mtkd->task);
+	while (list_empty(&mtkd->ddev.channels) == 0) {
+		struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
+			struct mtk_chan, vc.chan.device_node);
+
+		list_del(&c->vc.chan.device_node);
+		tasklet_kill(&c->vc.task);
+		devm_kfree(mtkd->ddev.dev, c);
+	}
+}
+
+static const struct of_device_id mtk_uart_dma_match[] = {
+	{ .compatible = "mediatek,mt6577-uart-dma", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
+
+static int mtk_apdma_probe(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd;
+	struct resource *res;
+	struct mtk_chan *c;
+	unsigned int i;
+	int rc;
+
+	mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
+	if (!mtkd)
+		return -ENOMEM;
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+		if (!res)
+			return -ENODEV;
+		mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
+		if (IS_ERR(mtkd->mem_base[i]))
+			return PTR_ERR(mtkd->mem_base[i]);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		mtkd->dma_irq[i] = platform_get_irq(pdev, i);
+		if ((int)mtkd->dma_irq[i] < 0) {
+			dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
+			return -EINVAL;
+		}
+	}
+
+	mtkd->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(mtkd->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(mtkd->clk);
+	}
+
+	if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
+		dev_info(&pdev->dev, "Support dma 33bits\n");
+		mtkd->support_33bits = true;
+	}
+
+	if (mtkd->support_33bits)
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
+	else
+		rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (rc)
+		return rc;
+
+	dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
+	mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
+	mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
+	mtkd->ddev.device_tx_status = mtk_dma_tx_status;
+	mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
+	mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
+	mtkd->ddev.device_config = mtk_dma_slave_config;
+	mtkd->ddev.device_pause = mtk_dma_device_pause;
+	mtkd->ddev.device_resume = mtk_dma_device_resume;
+	mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
+	mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
+	mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+	mtkd->ddev.dev = &pdev->dev;
+	INIT_LIST_HEAD(&mtkd->ddev.channels);
+	INIT_LIST_HEAD(&mtkd->pending);
+
+	spin_lock_init(&mtkd->lock);
+	tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
+
+	mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
+	if (of_property_read_u32(pdev->dev.of_node,
+				 "dma-requests", &mtkd->dma_requests)) {
+		dev_info(&pdev->dev,
+			 "Missing dma-requests property, using %u.\n",
+			 MTK_APDMA_DEFAULT_REQUESTS);
+	}
+
+	for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
+		c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
+		if (!c)
+			goto err_no_dma;
+
+		c->vc.desc_free = mtk_dma_desc_free;
+		vchan_init(&c->vc, &mtkd->ddev);
+		INIT_LIST_HEAD(&c->node);
+	}
+
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_set_active(&pdev->dev);
+
+	rc = dma_async_device_register(&mtkd->ddev);
+	if (rc)
+		goto rpm_disable;
+
+	platform_set_drvdata(pdev, mtkd);
+
+	if (pdev->dev.of_node) {
+		mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
+
+		/* Device-tree DMA controller registration */
+		rc = of_dma_controller_register(pdev->dev.of_node,
+						of_dma_simple_xlate,
+						&mtk_dma_info);
+		if (rc)
+			goto dma_remove;
+	}
+
+	return rc;
+
+dma_remove:
+	dma_async_device_unregister(&mtkd->ddev);
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+err_no_dma:
+	mtk_dma_free(mtkd);
+	return rc;
+}
+
+static int mtk_apdma_remove(struct platform_device *pdev)
+{
+	struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
+
+	if (pdev->dev.of_node)
+		of_dma_controller_free(pdev->dev.of_node);
+
+	pm_runtime_disable(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
+
+	dma_async_device_unregister(&mtkd->ddev);
+
+	mtk_dma_free(mtkd);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int mtk_dma_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev))
+		mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	if (!pm_runtime_suspended(dev)) {
+		ret = mtk_dma_clk_enable(mtkd);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int mtk_dma_runtime_suspend(struct device *dev)
+{
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	mtk_dma_clk_disable(mtkd);
+
+	return 0;
+}
+
+static int mtk_dma_runtime_resume(struct device *dev)
+{
+	int ret;
+	struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
+
+	ret = mtk_dma_clk_enable(mtkd);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops mtk_dma_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
+	SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
+			   mtk_dma_runtime_resume, NULL)
+};
+
+static struct platform_driver mtk_dma_driver = {
+	.probe	= mtk_apdma_probe,
+	.remove	= mtk_apdma_remove,
+	.driver = {
+		.name		= KBUILD_MODNAME,
+		.pm		= &mtk_dma_pm_ops,
+		.of_match_table = of_match_ptr(mtk_uart_dma_match),
+	},
+};
+
+static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
+{
+	if (chan->device->dev->driver == &mtk_dma_driver.driver) {
+		struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
+		struct mtk_chan *c = to_mtk_dma_chan(chan);
+		unsigned int req = *(unsigned int *)param;
+
+		if (req <= mtkd->dma_requests) {
+			c->dma_sig = req;
+			c->dma_ch = req;
+			return true;
+		}
+	}
+	return false;
+}
+
+module_platform_driver(mtk_dma_driver);
+
+MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
+MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
+MODULE_LICENSE("GPL v2");
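
For context on how a client ends up on one of these channels: once
dma_async_device_register() and of_dma_controller_register() have
succeeded, a request can arrive either through the device tree
(of_dma_simple_xlate() resolves the single DMA cell against
mtk_dma_info) or through the filter callback above. A rough sketch of
the two request paths follows; it is not part of the patch,
mtk_dma_filter_fn() is static in 8250_mtk_dma.c so the filter path is
shown only to illustrate the callback contract, and the request line 5
is a made-up example.

#include <linux/dmaengine.h>

/*
 * Rough sketch, not driver code: two ways a slave channel could be
 * requested once this controller is registered.
 */
static struct dma_chan *get_apdma_chan_sketch(struct device *dev)
{
	dma_cap_mask_t mask;
	unsigned int req = 5;	/* hypothetical request line */
	struct dma_chan *chan;

	/* DT path: "dmas"/"dma-names" resolved via of_dma_simple_xlate() */
	chan = dma_request_slave_channel(dev, "rx");
	if (chan)
		return chan;

	/* filter path: match any DMA_SLAVE channel whose filter accepts req */
	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	return dma_request_channel(mask, mtk_dma_filter_fn, &req);
}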
+
diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
index 27bac0b..bef436e 100644
--- a/drivers/dma/mediatek/Kconfig
+++ b/drivers/dma/mediatek/Kconfig
@@ -1,4 +1,15 @@
 
+config DMA_MTK_UART
+	tristate "MediaTek SoCs APDMA support for UART"
+	depends on OF
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the UART DMA engine found on MediaTek SoCs.
+	  When the 8250 MTK UART driver is enabled and you want to use
+	  DMA, enable this option. This DMA engine can only be used
+	  with MediaTek SoCs.
+
 config MTK_HSDMA
 	tristate "MediaTek High-Speed DMA controller support"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
index 6e778f8..2f2efd9 100644
--- a/drivers/dma/mediatek/Makefile
+++ b/drivers/dma/mediatek/Makefile
@@ -1 +1,2 @@
+obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
-- 
1.7.9.5



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [v2,3/4] serial: 8250-mtk: add uart DMA support
  2018-12-05  8:42 ` Long Cheng
  (?)
  (?)
@ 2018-12-05  8:42 ` Long Cheng
  -1 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-05  8:42 UTC (permalink / raw)
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

Modify the uart registers to support the DMA function.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 drivers/tty/serial/8250/8250_mtk.c |  210 +++++++++++++++++++++++++++++++++++-
 1 file changed, 209 insertions(+), 1 deletion(-)

diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c
index dd5e1ce..1da73e8 100644
--- a/drivers/tty/serial/8250/8250_mtk.c
+++ b/drivers/tty/serial/8250/8250_mtk.c
@@ -14,6 +14,10 @@
 #include <linux/pm_runtime.h>
 #include <linux/serial_8250.h>
 #include <linux/serial_reg.h>
+#include <linux/console.h>
+#include <linux/dma-mapping.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
 
 #include "8250.h"
 
@@ -22,12 +26,172 @@
 #define UART_MTK_SAMPLE_POINT	0x0b	/* Sample point register */
 #define MTK_UART_RATE_FIX	0x0d	/* UART Rate Fix Register */
 
+#define MTK_UART_DMA_EN		0x13	/* DMA Enable register */
+#define MTK_UART_DMA_EN_TX	0x2
+#define MTK_UART_DMA_EN_RX	0x5
+
+#define MTK_UART_TX_SIZE	UART_XMIT_SIZE
+#define MTK_UART_RX_SIZE	0x8000
+#define MTK_UART_TX_TRIGGER	1
+#define MTK_UART_RX_TRIGGER	MTK_UART_RX_SIZE
+
+#ifdef CONFIG_SERIAL_8250_DMA
+enum dma_rx_status {
+	DMA_RX_START = 0,
+	DMA_RX_RUNNING = 1,
+	DMA_RX_SHUTDOWN = 2,
+};
+#endif
+
 struct mtk8250_data {
 	int			line;
+	unsigned int		rx_pos;
 	struct clk		*uart_clk;
 	struct clk		*bus_clk;
+	struct uart_8250_dma	*dma;
+#ifdef CONFIG_SERIAL_8250_DMA
+	enum dma_rx_status	rx_status;
+#endif
 };
 
+#ifdef CONFIG_SERIAL_8250_DMA
+static void mtk8250_rx_dma(struct uart_8250_port *up);
+
+static void mtk8250_dma_rx_complete(void *param)
+{
+	struct uart_8250_port *up = param;
+	struct uart_8250_dma *dma = up->dma;
+	struct mtk8250_data *data = up->port.private_data;
+	struct tty_port *tty_port = &up->port.state->port;
+	struct dma_tx_state state;
+	unsigned char *ptr;
+	int copied;
+
+	dma_sync_single_for_cpu(dma->rxchan->device->dev, dma->rx_addr,
+				dma->rx_size, DMA_FROM_DEVICE);
+
+	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+
+	if (data->rx_status == DMA_RX_SHUTDOWN)
+		return;
+
+	if ((data->rx_pos + state.residue) <= dma->rx_size) {
+		ptr = (unsigned char *)(data->rx_pos + dma->rx_buf);
+		copied = tty_insert_flip_string(tty_port, ptr, state.residue);
+	} else {
+		ptr = (unsigned char *)(data->rx_pos + dma->rx_buf);
+		copied = tty_insert_flip_string(tty_port, ptr,
+						dma->rx_size - data->rx_pos);
+		ptr = (unsigned char *)(dma->rx_buf);
+		copied += tty_insert_flip_string(tty_port, ptr,
+				data->rx_pos + state.residue - dma->rx_size);
+	}
+	up->port.icount.rx += copied;
+
+	tty_flip_buffer_push(tty_port);
+
+	mtk8250_rx_dma(up);
+}
+
+static void mtk8250_rx_dma(struct uart_8250_port *up)
+{
+	struct uart_8250_dma *dma = up->dma;
+	struct mtk8250_data *data = up->port.private_data;
+	struct dma_async_tx_descriptor	*desc;
+	struct dma_tx_state	 state;
+
+	desc = dmaengine_prep_slave_single(dma->rxchan, dma->rx_addr,
+					   dma->rx_size, DMA_DEV_TO_MEM,
+					   DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	if (!desc) {
+		pr_err("failed to prepare rx slave single\n");
+		return;
+	}
+
+	desc->callback = mtk8250_dma_rx_complete;
+	desc->callback_param = up;
+
+	dma->rx_cookie = dmaengine_submit(desc);
+
+	dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+	data->rx_pos = state.residue;
+
+	dma_sync_single_for_device(dma->rxchan->device->dev, dma->rx_addr,
+				   dma->rx_size, DMA_FROM_DEVICE);
+
+	dma_async_issue_pending(dma->rxchan);
+}
+
+static void mtk8250_dma_enable(struct uart_8250_port *up)
+{
+	struct uart_8250_dma *dma = up->dma;
+	struct mtk8250_data *data = up->port.private_data;
+	int lcr = serial_in(up, UART_LCR);
+
+	if (data->rx_status != DMA_RX_START)
+		return;
+
+	dma->rxconf.direction		= DMA_DEV_TO_MEM;
+	dma->rxconf.src_addr_width	= dma->rx_size / 1024;
+	dma->rxconf.src_addr		= dma->rx_addr;
+
+	dma->txconf.direction		= DMA_MEM_TO_DEV;
+	dma->txconf.dst_addr_width	= MTK_UART_TX_SIZE / 1024;
+	dma->txconf.dst_addr		= dma->tx_addr;
+
+	serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO | UART_FCR_CLEAR_RCVR |
+		UART_FCR_CLEAR_XMIT);
+	serial_out(up, MTK_UART_DMA_EN,
+		   MTK_UART_DMA_EN_RX | MTK_UART_DMA_EN_TX);
+
+	serial_out(up, UART_LCR, UART_LCR_CONF_MODE_B);
+	serial_out(up, UART_EFR, UART_EFR_ECB);
+	serial_out(up, UART_LCR, lcr);
+
+	if (dmaengine_slave_config(dma->rxchan, &dma->rxconf) != 0)
+		pr_err("failed to configure rx dma channel\n");
+	if (dmaengine_slave_config(dma->txchan, &dma->txconf) != 0)
+		pr_err("failed to configure tx dma channel\n");
+
+	data->rx_status = DMA_RX_RUNNING;
+	data->rx_pos = 0;
+	mtk8250_rx_dma(up);
+}
+#endif
+
+static int mtk8250_startup(struct uart_port *port)
+{
+#ifdef CONFIG_SERIAL_8250_DMA
+	struct uart_8250_port *up = up_to_u8250p(port);
+	struct mtk8250_data *data = port->private_data;
+
+	/* disable DMA for console */
+	if (uart_console(port))
+		up->dma = NULL;
+
+	if (up->dma) {
+		data->rx_status = DMA_RX_START;
+		uart_circ_clear(&port->state->xmit);
+	}
+#endif
+	memset(&port->icount, 0, sizeof(port->icount));
+
+	return serial8250_do_startup(port);
+}
+
+static void mtk8250_shutdown(struct uart_port *port)
+{
+#ifdef CONFIG_SERIAL_8250_DMA
+	struct uart_8250_port *up = up_to_u8250p(port);
+	struct mtk8250_data *data = port->private_data;
+
+	if (up->dma)
+		data->rx_status = DMA_RX_SHUTDOWN;
+#endif
+
+	return serial8250_do_shutdown(port);
+}
+
 static void
 mtk8250_set_termios(struct uart_port *port, struct ktermios *termios,
 			struct ktermios *old)
@@ -36,6 +200,17 @@ struct mtk8250_data {
 	unsigned long flags;
 	unsigned int baud, quot;
 
+#ifdef CONFIG_SERIAL_8250_DMA
+	if (up->dma) {
+		if (uart_console(port)) {
+			devm_kfree(up->port.dev, up->dma);
+			up->dma = NULL;
+		} else {
+			mtk8250_dma_enable(up);
+		}
+	}
+#endif
+
 	serial8250_do_set_termios(port, termios, old);
 
 	/*
@@ -143,9 +318,20 @@ static int __maybe_unused mtk8250_runtime_resume(struct device *dev)
 		pm_runtime_put_sync_suspend(port->dev);
 }
 
+#ifdef CONFIG_SERIAL_8250_DMA
+static bool mtk8250_dma_filter(struct dma_chan *chan, void *param)
+{
+	return false;
+}
+#endif
+
 static int mtk8250_probe_of(struct platform_device *pdev, struct uart_port *p,
 			   struct mtk8250_data *data)
 {
+#ifdef CONFIG_SERIAL_8250_DMA
+	int dmacnt;
+#endif
+
 	data->uart_clk = devm_clk_get(&pdev->dev, "baud");
 	if (IS_ERR(data->uart_clk)) {
 		/*
@@ -162,7 +348,23 @@ static int mtk8250_probe_of(struct platform_device *pdev, struct uart_port *p,
 	}
 
 	data->bus_clk = devm_clk_get(&pdev->dev, "bus");
-	return PTR_ERR_OR_ZERO(data->bus_clk);
+	if (IS_ERR(data->bus_clk))
+		return PTR_ERR(data->bus_clk);
+
+	data->dma = NULL;
+#ifdef CONFIG_SERIAL_8250_DMA
+	dmacnt = of_property_count_strings(pdev->dev.of_node, "dma-names");
+	if (dmacnt == 2) {
+		data->dma = devm_kzalloc(&pdev->dev, sizeof(*data->dma),
+					 GFP_KERNEL);
+		if (!data->dma)
+			return -ENOMEM;
+
+		data->dma->fn = mtk8250_dma_filter;
+		data->dma->rx_size = MTK_UART_RX_SIZE;
+		data->dma->rxconf.src_maxburst = MTK_UART_RX_TRIGGER;
+		data->dma->txconf.dst_maxburst = MTK_UART_TX_TRIGGER;
+	}
+#endif
+
+	return 0;
 }
 
 static int mtk8250_probe(struct platform_device *pdev)
@@ -204,8 +406,14 @@ static int mtk8250_probe(struct platform_device *pdev)
 	uart.port.iotype = UPIO_MEM32;
 	uart.port.regshift = 2;
 	uart.port.private_data = data;
+	uart.port.shutdown = mtk8250_shutdown;
+	uart.port.startup = mtk8250_startup;
 	uart.port.set_termios = mtk8250_set_termios;
 	uart.port.uartclk = clk_get_rate(data->uart_clk);
+#ifdef CONFIG_SERIAL_8250_DMA
+	if (data->dma)
+		uart.dma = data->dma;
+#endif
 
 	/* Disable Rate Fix function */
 	writel(0x0, uart.port.membase +

^ permalink raw reply related	[flat|nested] 34+ messages in thread
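
One step in the patch above that rewards a second read is the RX
completion copy: data->rx_pos is where the previous drain stopped,
state.residue is how many new bytes the engine reports, and the copy is
either contiguous or split into a tail piece plus a wrapped head piece.
The arithmetic in isolation, as a standalone sketch (hypothetical
helper, not driver code):

/*
 * Standalone sketch of the wrap handling in mtk8250_dma_rx_complete().
 * buf_size, rx_pos and residue mirror dma->rx_size, data->rx_pos and
 * state.residue; push() is a stand-in for tty_insert_flip_string().
 */
static void drain_ring_sketch(const unsigned char *buf,
			      unsigned int buf_size,
			      unsigned int rx_pos, unsigned int residue,
			      void (*push)(const unsigned char *p,
					   unsigned int n))
{
	if (rx_pos + residue <= buf_size) {
		/* new bytes are contiguous */
		push(buf + rx_pos, residue);
	} else {
		/* copy up to the end of the buffer ... */
		push(buf + rx_pos, buf_size - rx_pos);
		/* ... then the part that wrapped back to the start */
		push(buf, rx_pos + residue - buf_size);
	}
}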

* [v2,4/4] arm: dts: mt2701: add uart APDMA to device tree
  2018-12-05  8:42 ` Long Cheng
  (?)
  (?)
@ 2018-12-05  8:43 ` Long Cheng
  -1 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-05  8:43 UTC (permalink / raw)
  To: Vinod Koul, Rob Herring, Mark Rutland
  Cc: Matthias Brugger, Dan Williams, Greg Kroah-Hartman, Jiri Slaby,
	Sean Wang, Long Cheng, dmaengine, devicetree, linux-arm-kernel,
	linux-mediatek, linux-kernel, linux-serial, srv_heupstream,
	Yingjoe Chen, YT Shen

1. Add the uart APDMA controller device node.
2. Add the DMA function for uart 0/1/2/3/4/5.

Signed-off-by: Long Cheng <long.cheng@mediatek.com>
---
 arch/arm64/boot/dts/mediatek/mt2712e.dtsi |   50 +++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
index 976d92a..a59728b 100644
--- a/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt2712e.dtsi
@@ -300,6 +300,9 @@
 		interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 10
+			&apdma 11>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -378,6 +381,38 @@
 		status = "disabled";
 	};
 
+	apdma: dma-controller@11000400 {
+		compatible = "mediatek,mt2712-uart-dma",
+			     "mediatek,mt6577-uart-dma";
+		reg = <0 0x11000400 0 0x80>,
+		      <0 0x11000480 0 0x80>,
+		      <0 0x11000500 0 0x80>,
+		      <0 0x11000580 0 0x80>,
+		      <0 0x11000600 0 0x80>,
+		      <0 0x11000680 0 0x80>,
+		      <0 0x11000700 0 0x80>,
+		      <0 0x11000780 0 0x80>,
+		      <0 0x11000800 0 0x80>,
+		      <0 0x11000880 0 0x80>,
+		      <0 0x11000900 0 0x80>,
+		      <0 0x11000980 0 0x80>;
+		interrupts = <GIC_SPI 103 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 104 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 105 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 106 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 107 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 108 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 109 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 110 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 111 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 112 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 113 IRQ_TYPE_LEVEL_LOW>,
+			     <GIC_SPI 114 IRQ_TYPE_LEVEL_LOW>;
+		clocks = <&pericfg CLK_PERI_AP_DMA>;
+		clock-names = "apdma";
+		#dma-cells = <1>;
+	};
+
 	uart0: serial@11002000 {
 		compatible = "mediatek,mt2712-uart",
 			     "mediatek,mt6577-uart";
@@ -385,6 +420,9 @@
 		interrupts = <GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 0
+			&apdma 1>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -395,6 +433,9 @@
 		interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 2
+			&apdma 3>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -405,6 +446,9 @@
 		interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 4
+			&apdma 5>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -415,6 +459,9 @@
 		interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 6
+			&apdma 7>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 
@@ -629,6 +676,9 @@
 		interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_LOW>;
 		clocks = <&baud_clk>, <&sys_clk>;
 		clock-names = "baud", "bus";
+		dmas = <&apdma 8
+			&apdma 9>;
+		dma-names = "tx", "rx";
 		status = "disabled";
 	};
 

^ permalink raw reply related	[flat|nested] 34+ messages in thread
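
With the dmas/dma-names pairs above in place, the serial side does not
need a filter match at all: the channels can be resolved by name from
the device tree. A minimal sketch of that lookup (illustrative only;
hypothetical_get_uart_dma() is not a real function, and dev would be
the UART's struct device):

#include <linux/dmaengine.h>
#include <linux/errno.h>

/*
 * Illustrative sketch: resolve the channels named in dma-names.
 * Both names must match the dts entries above ("tx", "rx").
 */
static int hypothetical_get_uart_dma(struct device *dev,
				     struct dma_chan **tx,
				     struct dma_chan **rx)
{
	*tx = dma_request_slave_channel(dev, "tx");
	*rx = dma_request_slave_channel(dev, "rx");

	return (*tx && *rx) ? 0 : -ENODEV;
}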

* Re: [PATCH v2 0/4] add uart DMA function
  2018-12-05  8:42 ` Long Cheng
@ 2018-12-05 10:03   ` Greg Kroah-Hartman
  -1 siblings, 0 replies; 34+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-05 10:03 UTC (permalink / raw)
  To: Long Cheng
  Cc: Vinod Koul, Rob Herring, Mark Rutland, Matthias Brugger,
	Dan Williams, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen

On Wed, Dec 05, 2018 at 04:42:56PM +0800, Long Cheng wrote:
> In Mediatek SOCs, the uart can support DMA function.
> Base on DMA engine formwork, we add the DMA code to support uart. And put the code under drivers/dma.
> 
> This series contains document bindings, Kconfig to control the function enable or not,
> device tree including interrupt and dma device node, the code of UART DM

I've queued up patches 1 and 3 from this series in my tty tree, thanks.

greg k-h

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 0/4] add uart DMA function
  2018-12-05 10:03   ` Greg Kroah-Hartman
@ 2018-12-05 16:31     ` Vinod Koul
  -1 siblings, 0 replies; 34+ messages in thread
From: Vinod Koul @ 2018-12-05 16:31 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Long Cheng, Rob Herring, Mark Rutland, Matthias Brugger,
	Dan Williams, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen

Hi Greg,

On 05-12-18, 11:03, Greg Kroah-Hartman wrote:
> On Wed, Dec 05, 2018 at 04:42:56PM +0800, Long Cheng wrote:
> > In Mediatek SOCs, the uart can support DMA function.
> > Base on DMA engine formwork, we add the DMA code to support uart. And put the code under drivers/dma.
> > 
> > This series contains document bindings, Kconfig to control the function enable or not,
> > device tree including interrupt and dma device node, the code of UART DM
> 
> I've queued up patches 1 and 3 from this series in my tty tree, thanks.

Do you mind not taking patch 2? I would like that to go through the
dmaengine tree.

Thanks
-- 
~Vinod

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 0/4] add uart DMA function
  2018-12-05 16:31     ` Vinod Koul
@ 2018-12-05 18:50       ` Greg Kroah-Hartman
  -1 siblings, 0 replies; 34+ messages in thread
From: Greg Kroah-Hartman @ 2018-12-05 18:50 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Long Cheng, Rob Herring, Mark Rutland, Matthias Brugger,
	Dan Williams, Jiri Slaby, Sean Wang, dmaengine, devicetree,
	linux-arm-kernel, linux-mediatek, linux-kernel, linux-serial,
	srv_heupstream, Yingjoe Chen, YT Shen

On Wed, Dec 05, 2018 at 10:01:33PM +0530, Vinod Koul wrote:
> Hi Greg,
> 
> On 05-12-18, 11:03, Greg Kroah-Hartman wrote:
> > On Wed, Dec 05, 2018 at 04:42:56PM +0800, Long Cheng wrote:
> > > In Mediatek SOCs, the uart can support DMA function.
> > > Base on DMA engine formwork, we add the DMA code to support uart. And put the code under drivers/dma.
> > > 
> > > This series contains document bindings, Kconfig to control the function enable or not,
> > > device tree including interrupt and dma device node, the code of UART DM
> > 
> > I've queued up patches 1 and 3 from this series in my tty tree, thanks.
> 
> Do you mind not taking patch 2, I would like that to go thru dmaengine
> tree

Like I said, I only took patches 1 and 3...

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
  2018-12-05  8:42 ` Long Cheng
  (?)
@ 2018-12-05 21:07 ` Sean Wang
  -1 siblings, 0 replies; 34+ messages in thread
From: Sean Wang @ 2018-12-05 21:07 UTC (permalink / raw)
  To: long.cheng
  Cc: vkoul, robh+dt, mark.rutland, devicetree, srv_heupstream, gregkh,
	Sean Wang (王志亘),
	linux-kernel, dmaengine, linux-mediatek, linux-serial, jslaby,
	Matthias Brugger, yingjoe.chen, dan.j.williams, linux-arm-kernel

On Wed, Dec 5, 2018 at 1:31 AM Long Cheng <long.cheng@mediatek.com> wrote:
>
> In DMA engine framework, add 8250 mtk dma to support it.
>
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
>  drivers/dma/mediatek/Kconfig        |   11 +
>  drivers/dma/mediatek/Makefile       |    1 +
>  3 files changed, 906 insertions(+)
>  create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
>
> diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
> new file mode 100644
> index 0000000..3454679
> --- /dev/null
> +++ b/drivers/dma/mediatek/8250_mtk_dma.c
> @@ -0,0 +1,894 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Mediatek 8250 DMA driver.
> + *
> + * Copyright (c) 2018 MediaTek Inc.
> + * Author: Long Cheng <long.cheng@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/of_dma.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/iopoll.h>
> +
> +#include "../virt-dma.h"
> +
> +#define MTK_APDMA_DEFAULT_REQUESTS     127
> +#define MTK_APDMA_CHANNELS             (CONFIG_SERIAL_8250_NR_UARTS * 2)
> +
> +#define VFF_EN_B               BIT(0)
> +#define VFF_STOP_B             BIT(0)
> +#define VFF_FLUSH_B            BIT(0)
> +#define VFF_4G_SUPPORT_B       BIT(0)
> +#define VFF_RX_INT_EN0_B       BIT(0)  /*rx valid size >=  vff thre*/
> +#define VFF_RX_INT_EN1_B       BIT(1)
> +#define VFF_TX_INT_EN_B                BIT(0)  /*tx left size >= vff thre*/
> +#define VFF_WARM_RST_B         BIT(0)
> +#define VFF_RX_INT_FLAG_CLR_B  (BIT(0) | BIT(1))
> +#define VFF_TX_INT_FLAG_CLR_B  0
> +#define VFF_STOP_CLR_B         0
> +#define VFF_FLUSH_CLR_B                0
> +#define VFF_INT_EN_CLR_B       0
> +#define VFF_4G_SUPPORT_CLR_B   0
> +
> +/* interrupt trigger level for tx */
> +#define VFF_TX_THRE(n)         ((n) * 7 / 8)
> +/* interrupt trigger level for rx */
> +#define VFF_RX_THRE(n)         ((n) * 3 / 4)
> +
> +#define MTK_DMA_RING_SIZE      0xffffU
> +/* invert this bit when wrap ring head again*/
> +#define MTK_DMA_RING_WRAP      0x10000U
> +
> +#define VFF_INT_FLAG           0x00
> +#define VFF_INT_EN             0x04
> +#define VFF_EN                 0x08
> +#define VFF_RST                        0x0c
> +#define VFF_STOP               0x10
> +#define VFF_FLUSH              0x14
> +#define VFF_ADDR               0x1c
> +#define VFF_LEN                        0x24
> +#define VFF_THRE               0x28
> +#define VFF_WPT                        0x2c
> +#define VFF_RPT                        0x30
> +/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
> +#define VFF_VALID_SIZE         0x3c
> +/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
> +#define VFF_LEFT_SIZE          0x40
> +#define VFF_DEBUG_STATUS       0x50
> +#define VFF_4G_SUPPORT         0x54
> +
> +struct mtk_dmadev {
> +       struct dma_device ddev;
> +       void __iomem *mem_base[MTK_APDMA_CHANNELS];
> +       spinlock_t lock; /* dma dev lock */
> +       struct tasklet_struct task;
> +       struct list_head pending;
> +       struct clk *clk;
> +       unsigned int dma_requests;
> +       bool support_33bits;
> +       unsigned int dma_irq[MTK_APDMA_CHANNELS];
> +       struct mtk_chan *ch[MTK_APDMA_CHANNELS];
> +};
> +
> +struct mtk_chan {
> +       struct virt_dma_chan vc;
> +       struct list_head node;
> +       struct dma_slave_config cfg;
> +       void __iomem *base;
> +       struct mtk_dma_desc *desc;
> +
> +       bool stop;
> +       bool requested;
> +
> +       unsigned int dma_sig;

This member can be removed; no real user refers to it.

> +       unsigned int dma_ch;

A chan_id is already included in struct dma_chan; we can reuse it.

> +       unsigned int sgidx;

This member can also be removed; no real user refers to it.

> +       unsigned int remain_size;

The member remain_size seems to be unnecessary data for maintaining a
channel. The virtual channel struct virt_dma_chan already provides a
way to manage all the descriptors you are operating on; you should
reuse the related virt_dma_chan functions first.

Or, if remain_size means the remaining size of the current descriptor,
then putting it into struct mtk_dma_desc would be better.
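
For example (an untested sketch, just to illustrate the idea):

	struct mtk_dma_desc {
		struct virt_dma_desc vd;
		enum dma_transfer_direction dir;

		unsigned int remain_size;	/* bytes left in this desc */
		unsigned int sglen;
		struct mtk_dma_sg sg[0];
	};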

> +       unsigned int rx_ptr;
> +};
> +
> +struct mtk_dma_sg {
> +       dma_addr_t addr;
> +       unsigned int en;                /* number of elements (24-bit) */
> +       unsigned int fn;                /* number of frames (16-bit) */
> +};
> +
> +struct mtk_dma_desc {
> +       struct virt_dma_desc vd;
> +       enum dma_transfer_direction dir;
> +
> +       unsigned int sglen;
> +       struct mtk_dma_sg sg[0];
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
> +static struct of_dma_filter_info mtk_dma_info = {
> +       .filter_fn = mtk_dma_filter_fn,
> +};
> +
> +static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
> +{
> +       return container_of(d, struct mtk_dmadev, ddev);
> +}
> +
> +static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
> +{
> +       return container_of(c, struct mtk_chan, vc.chan);
> +}
> +
> +static inline struct mtk_dma_desc *to_mtk_dma_desc
> +       (struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct mtk_dma_desc, vd.tx);
> +}
> +
> +static void mtk_dma_chan_write(struct mtk_chan *c,
> +                              unsigned int reg, unsigned int val)
> +{
> +       writel(val, c->base + reg);
> +}
> +
> +static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
> +{
> +       return readl(c->base + reg);
> +}
> +
> +static void mtk_dma_desc_free(struct virt_dma_desc *vd)
> +{
> +       struct dma_chan *chan = vd->tx.chan;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       kfree(c->desc);
> +       c->desc = NULL;
> +}
> +
> +static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
> +{
> +       int ret;
> +
> +       ret = clk_prepare_enable(mtkd->clk);
> +       if (ret) {
> +               dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
> +               return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
> +{
> +       clk_disable_unprepare(mtkd->clk);
> +}
> +
> +static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
> +                                    struct virt_dma_chan *vc)
> +{
> +       struct virt_dma_desc *vd;
> +
> +       if (list_empty(&vc->desc_issued) == 0) {
> +               list_for_each_entry(vd, &vc->desc_issued, node) {
> +                       if (cookie == vd->tx.cookie) {
> +                               INIT_LIST_HEAD(&vc->desc_issued);

Generally, we don't force-initialize the desc_issued list. Instead,
when each descriptor completes, we move it from the desc_issued list
to the desc_completed list.
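
The usual pattern is something like this (sketch; assumes the
descriptor vd is still on desc_issued and vc->lock is held):

	list_del(&vd->node);		/* drop it from desc_issued */
	vchan_cookie_complete(vd);	/* moves it to desc_completed */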

> +                               break;
> +                       }
> +               }
> +       }
> +}
> +
> +static void mtk_dma_tx_flush(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
> +               mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +}
> +
> +static void mtk_dma_tx_write(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned int txcount = c->remain_size;
> +       unsigned int len, send, left, wpt, wrap;
> +
> +       len = mtk_dma_chan_read(c, VFF_LEN);
> +
> +       while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
> +               if (c->remain_size == 0U)
> +                       break;
> +               send = min(left, c->remain_size);

min_t() would be better here.
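
i.e. something like:

	send = min_t(unsigned int, left, c->remain_size);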

> +               wpt = mtk_dma_chan_read(c, VFF_WPT);
> +               wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
> +
> +               if ((wpt & (len - 1U)) + send < len)
> +                       mtk_dma_chan_write(c, VFF_WPT, wpt + send);
> +               else
> +                       mtk_dma_chan_write(c, VFF_WPT,
> +                                          ((wpt + send) & (len - 1U))
> +                                          | wrap);
> +
> +               c->remain_size -= send;

I'm curious why you don't need to set up the hardware from the
descriptor information.

> +       }
> +
> +       if (txcount != c->remain_size) {
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +               mtk_dma_tx_flush(chan);
> +       }
> +}
> +
> +static void mtk_dma_start_tx(struct mtk_chan *c)
> +{
> +       if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +       else
> +               mtk_dma_tx_write(&c->vc.chan);
> +
> +       c->stop = false;
> +}
> +
> +static void mtk_dma_get_rx_size(struct mtk_chan *c)
> +{
> +       unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
> +       unsigned int rdptr, wrptr, wrreg, rdreg, count;
> +
> +       rdreg = mtk_dma_chan_read(c, VFF_RPT);
> +       wrreg = mtk_dma_chan_read(c, VFF_WPT);
> +       rdptr = rdreg & MTK_DMA_RING_SIZE;
> +       wrptr = wrreg & MTK_DMA_RING_SIZE;
> +       count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
> +                       (wrptr + rx_size - rdptr) : (wrptr - rdptr);
> +
> +       c->remain_size = count;
> +       c->rx_ptr = rdptr;
> +
> +       mtk_dma_chan_write(c, VFF_RPT, wrreg);
> +}
> +
> +static void mtk_dma_start_rx(struct mtk_chan *c)
> +{
> +       struct dma_chan *chan = &c->vc.chan;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_dma_desc *d = c->desc;
> +
> +       if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
> +               return;
> +
> +       if (d && d->vd.tx.cookie != 0) {

The driver benefits from the virtual channel support, so you can
manage the life cycle of each descriptor with the virtual channel
helpers, without handling details of the cookie yourself.
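
For instance (untested sketch; vc->lock is already held here):

	struct virt_dma_desc *vd = vchan_next_desc(&c->vc);

	if (vd) {
		mtk_dma_get_rx_size(c);
		list_del(&vd->node);
		vchan_cookie_complete(vd);
	}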

> +               mtk_dma_get_rx_size(c);
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       } else {
> +               spin_lock(&mtkd->lock);
> +               if (list_empty(&mtkd->pending))
> +                       list_add_tail(&c->node, &mtkd->pending);
> +               spin_unlock(&mtkd->lock);
> +               tasklet_schedule(&mtkd->task);
> +       }
> +}
> +
> +static void mtk_dma_reset(struct mtk_chan *c)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_ADDR, 0);
> +       mtk_dma_chan_write(c, VFF_THRE, 0);
> +       mtk_dma_chan_write(c, VFF_LEN, 0);
> +       mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
> +
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret) {
> +               dev_err(c->vc.chan.device->dev,
> +                               "dma reset: fail, timeout\n");
> +               return;
> +       }
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_RPT, 0);
> +       else if (c->cfg.direction == DMA_MEM_TO_DEV)
> +               mtk_dma_chan_write(c, VFF_WPT, 0);
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> +}
> +
> +static void mtk_dma_stop(struct mtk_chan *c)
> +{
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
> +       /* Wait for flush */
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_FLUSH,
> +                                status,
> +                                (status & VFF_FLUSH_B) != VFF_FLUSH_B,
> +                                10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling FLUSH fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       /*set stop as 1 -> wait until en is 0 -> set stop as 0*/
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
> +       mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +       else
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       c->stop = true;
> +}
> +
> +/*
> + * This callback schedules all pending channels. We could be more
> + * clever here by postponing allocation of the real DMA channels to
> + * this point, and freeing them when our virtual channel becomes idle.
> + *
> + * We would then need to deal with 'all channels in-use'
> + */
> +static void mtk_dma_sched(unsigned long data)
> +{
> +       struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
> +       struct virt_dma_desc *vd;
> +       struct mtk_chan *c;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +       LIST_HEAD(head);
> +
> +       spin_lock_irq(&mtkd->lock);
> +       list_splice_tail_init(&mtkd->pending, &head);
> +       spin_unlock_irq(&mtkd->lock);
> +
> +       if (!list_empty(&head)) {
> +               c = list_first_entry(&head, struct mtk_chan, node);
> +               cookie = c->vc.chan.cookie;
> +
> +               spin_lock_irqsave(&c->vc.lock, flags);
> +               if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_rx(c);
> +               } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_tx(c);
> +               }
> +               spin_unlock_irqrestore(&c->vc.lock, flags);
> +       }
> +}
> +
> +static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       int ret = -EBUSY;
> +
> +       pm_runtime_get_sync(mtkd->ddev.dev);
> +
> +       if (!mtkd->ch[c->dma_ch]) {
> +               c->base = mtkd->mem_base[c->dma_ch];
> +               mtkd->ch[c->dma_ch] = c;
> +               ret = 1;
> +       }
> +       c->requested = false;
> +       mtk_dma_reset(c);
> +
> +       return ret;
> +}
> +
> +static void mtk_dma_free_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (c->requested) {
> +               c->requested = false;
> +               free_irq(mtkd->dma_irq[c->dma_ch], chan);
> +       }
> +
> +       tasklet_kill(&mtkd->task);
> +       tasklet_kill(&c->vc.task);
> +
> +       c->base = NULL;
> +       mtkd->ch[c->dma_ch] = NULL;
> +       vchan_free_chan_resources(&c->vc);
> +
> +       dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
> +       c->dma_sig = 0;
> +
> +       pm_runtime_put_sync(mtkd->ddev.dev);
> +}
> +
> +static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
> +                                        dma_cookie_t cookie,
> +                                        struct dma_tx_state *txstate)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       enum dma_status ret;
> +       unsigned long flags;
> +
> +       if (!txstate)
> +               return DMA_ERROR;
> +
> +       ret = dma_cookie_status(chan, cookie, txstate);
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (ret == DMA_IN_PROGRESS) {
> +               c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
> +               dma_set_residue(txstate, c->rx_ptr);
> +       } else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
> +               dma_set_residue(txstate, c->remain_size);
> +       } else {
> +               dma_set_residue(txstate, 0);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return ret;
> +}
> +
> +static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
> +       (struct dma_chan *chan, struct scatterlist *sgl,
> +       unsigned int sglen,     enum dma_transfer_direction dir,
> +       unsigned long tx_flags, void *context)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct scatterlist *sgent;
> +       struct mtk_dma_desc *d;
> +       struct mtk_dma_sg *sg;
> +       unsigned int size, i, j, en;
> +
> +       en = 1;
> +
> +       if ((dir != DMA_DEV_TO_MEM) &&
> +               (dir != DMA_MEM_TO_DEV)) {
> +               dev_err(chan->device->dev, "bad direction\n");
> +               return NULL;
> +       }
> +
> +       /* Now allocate and setup the descriptor. */
> +       d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
> +       if (!d)
> +               return NULL;
> +
> +       d->dir = dir;
> +
> +       j = 0;
> +       for_each_sg(sgl, sgent, sglen, i) {
> +               d->sg[j].addr = sg_dma_address(sgent);
> +               d->sg[j].en = en;
> +               d->sg[j].fn = sg_dma_len(sgent) / en;
> +               j++;
> +       }
> +
> +       d->sglen = j;
> +
> +       if (dir == DMA_MEM_TO_DEV) {
> +               for (size = i = 0; i < d->sglen; i++) {
> +                       sg = &d->sg[i];
> +                       size += sg->en * sg->fn;
> +               }
> +               c->remain_size = size;
> +       }
> +
> +       return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> +}
> +
> +static void mtk_dma_issue_pending(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct virt_dma_desc *vd;
> +       struct mtk_dmadev *mtkd;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +               cookie = c->vc.chan.cookie;
> +               mtkd = to_mtk_dma_dev(chan->device);
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

vchan_issue_pending means putting all the descriptors on the
desc_issued list. I would expect somewhere to iterate that list with
vchan_next_desc and program each descriptor into the hardware, but I
don't see any similar code.
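
Something along these lines (sketch, reusing the existing helpers):

	spin_lock_irqsave(&c->vc.lock, flags);
	if (vchan_issue_pending(&c->vc) && !c->desc) {
		struct virt_dma_desc *vd = vchan_next_desc(&c->vc);

		c->desc = to_mtk_dma_desc(&vd->tx);
		if (c->cfg.direction == DMA_MEM_TO_DEV)
			mtk_dma_start_tx(c);
	}
	spin_unlock_irqrestore(&c->vc.lock, flags);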

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +               }
> +       } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +               cookie = c->vc.chan.cookie;
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

Ditto as above.

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       mtk_dma_start_tx(c);
> +               }
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +}
> +
> +static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +
> +       mtk_dma_start_rx(c);
> +
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dma_desc *d = c->desc;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->remain_size != 0U) {
> +               list_add_tail(&c->node, &mtkd->pending);
> +               tasklet_schedule(&mtkd->task);
> +       } else {
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static int mtk_dma_slave_config(struct dma_chan *chan,
> +                               struct dma_slave_config *cfg)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       int ret;
> +
> +       c->cfg = *cfg;
> +
> +       if (cfg->direction == DMA_DEV_TO_MEM) {
> +               unsigned int rx_len = cfg->src_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, rx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
> +               mtk_dma_chan_write(c,
> +                                  VFF_INT_EN, VFF_RX_INT_EN0_B
> +                                  | VFF_RX_INT_EN1_B);
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_rx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       } else if (cfg->direction == DMA_MEM_TO_DEV)    {
> +               unsigned int tx_len = cfg->dst_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, tx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_tx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       }
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> +
> +       if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
> +               dev_err(chan->device->dev,
> +                       "config dma dir[%d] fail\n", cfg->direction);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_terminate_all(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       list_del_init(&c->node);
> +       mtk_dma_stop(c);
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_device_pause(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

Is it possible not to register the function with the core if the
hardware doesn't support it?
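
dma_get_slave_caps() derives cmd_pause from device_pause being
non-NULL, so it should be enough to register the hooks only when they
are really implemented, e.g. (hw_can_pause is a hypothetical flag):

	if (hw_can_pause) {	/* hypothetical: hardware supports pause */
		mtkd->ddev.device_pause = mtk_dma_device_pause;
		mtkd->ddev.device_resume = mtk_dma_device_resume;
	}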

> +
> +static int mtk_dma_device_resume(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

Ditto as above.

> +
> +static void mtk_dma_free(struct mtk_dmadev *mtkd)
> +{
> +       tasklet_kill(&mtkd->task);
> +       while (list_empty(&mtkd->ddev.channels) == 0) {
> +               struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> +                       struct mtk_chan, vc.chan.device_node);
> +
> +               list_del(&c->vc.chan.device_node);
> +               tasklet_kill(&c->vc.task);
> +               devm_kfree(mtkd->ddev.dev, c);
> +       }
> +}
> +
> +static const struct of_device_id mtk_uart_dma_match[] = {
> +       { .compatible = "mediatek,mt6577-uart-dma", },
> +       { /* sentinel */ },
> +};
> +MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
> +
> +static int mtk_apdma_probe(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd;
> +       struct resource *res;
> +       struct mtk_chan *c;
> +       unsigned int i;
> +       int rc;
> +
> +       mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> +       if (!mtkd)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> +               if (!res)
> +                       return -ENODEV;
> +               mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
> +               if (IS_ERR(mtkd->mem_base[i]))
> +                       return PTR_ERR(mtkd->mem_base[i]);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> +               if ((int)mtkd->dma_irq[i] < 0) {
> +                       dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> +       if (IS_ERR(mtkd->clk)) {
> +               dev_err(&pdev->dev, "No clock specified\n");
> +               return PTR_ERR(mtkd->clk);
> +       }
> +
> +       if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
> +               dev_info(&pdev->dev, "Support dma 33bits\n");
> +               mtkd->support_33bits = true;
> +       }
> +
> +       if (mtkd->support_33bits)
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
> +       else
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +       if (rc)
> +               return rc;
> +
> +       dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> +       mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
> +       mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
> +       mtkd->ddev.device_tx_status = mtk_dma_tx_status;
> +       mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
> +       mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
> +       mtkd->ddev.device_config = mtk_dma_slave_config;
> +       mtkd->ddev.device_pause = mtk_dma_device_pause;
> +       mtkd->ddev.device_resume = mtk_dma_device_resume;
> +       mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
> +       mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +       mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +       mtkd->ddev.dev = &pdev->dev;
> +       INIT_LIST_HEAD(&mtkd->ddev.channels);
> +       INIT_LIST_HEAD(&mtkd->pending);
> +
> +       spin_lock_init(&mtkd->lock);
> +       tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
> +
> +       mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
> +       if (of_property_read_u32(pdev->dev.of_node,
> +                                "dma-requests", &mtkd->dma_requests)) {
> +               dev_info(&pdev->dev,
> +                        "Missing dma-requests property, using %u.\n",
> +                        MTK_APDMA_DEFAULT_REQUESTS);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> +               if (!c)
> +                       goto err_no_dma;
> +
> +               c->vc.desc_free = mtk_dma_desc_free;
> +               vchan_init(&c->vc, &mtkd->ddev);
> +               INIT_LIST_HEAD(&c->node);
> +       }
> +
> +       pm_runtime_enable(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +
> +       rc = dma_async_device_register(&mtkd->ddev);
> +       if (rc)
> +               goto rpm_disable;
> +
> +       platform_set_drvdata(pdev, mtkd);
> +
> +       if (pdev->dev.of_node) {
> +               mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
> +
> +               /* Device-tree DMA controller registration */
> +               rc = of_dma_controller_register(pdev->dev.of_node,
> +                                               of_dma_simple_xlate,
> +                                               &mtk_dma_info);

Reusing of_dma_xlate_by_chan_id seems like it would also work for you.
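
i.e. something like (untested; of_dma_xlate_by_chan_id looks the
channel up by chan_id and takes the struct dma_device as its data):

	rc = of_dma_controller_register(pdev->dev.of_node,
					of_dma_xlate_by_chan_id,
					&mtkd->ddev);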

> +               if (rc)
> +                       goto dma_remove;
> +       }
> +
> +       return rc;
> +
> +dma_remove:
> +       dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> +       pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> +       mtk_dma_free(mtkd);
> +       return rc;
> +}
> +
> +static int mtk_apdma_remove(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
> +
> +       if (pdev->dev.of_node)
> +               of_dma_controller_free(pdev->dev.of_node);
> +
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_noidle(&pdev->dev);
> +
> +       dma_async_device_unregister(&mtkd->ddev);
> +
> +       mtk_dma_free(mtkd);
> +
> +       return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_dma_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev))
> +               mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev)) {
> +               ret = mtk_dma_clk_enable(mtkd);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       ret = mtk_dma_clk_enable(mtkd);
> +       if (ret)
> +               return ret;
> +
> +       return 0;
> +}
> +
> +#endif /* CONFIG_PM_SLEEP */
> +
> +static const struct dev_pm_ops mtk_dma_pm_ops = {
> +       SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
> +       SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
> +                          mtk_dma_runtime_resume, NULL)
> +};
> +
> +static struct platform_driver mtk_dma_driver = {
> +       .probe  = mtk_apdma_probe,
> +       .remove = mtk_apdma_remove,
> +       .driver = {
> +               .name           = KBUILD_MODNAME,
> +               .pm             = &mtk_dma_pm_ops,
> +               .of_match_table = of_match_ptr(mtk_uart_dma_match),
> +       },
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
> +{
> +       if (chan->device->dev->driver == &mtk_dma_driver.driver) {
> +               struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +               struct mtk_chan *c = to_mtk_dma_chan(chan);
> +               unsigned int req = *(unsigned int *)param;
> +
> +               if (req <= mtkd->dma_requests) {
> +                       c->dma_sig = req;
> +                       c->dma_ch = req;
> +                       return true;
> +               }
> +       }
> +       return false;
> +}
> +
> +module_platform_driver(mtk_dma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> +MODULE_LICENSE("GPL v2");
> +
> diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> index 27bac0b..bef436e 100644
> --- a/drivers/dma/mediatek/Kconfig
> +++ b/drivers/dma/mediatek/Kconfig
> @@ -1,4 +1,15 @@
>
> +config DMA_MTK_UART
> +       tristate "MediaTek SoCs APDMA support for UART"
> +       depends on OF
> +       select DMA_ENGINE
> +       select DMA_VIRTUAL_CHANNELS
> +       help
> +         Support for the UART DMA engine found on MediaTek MTK SoCs.
> +         when 8250 mtk uart is enabled, and if you want to using DMA,
> +         you can enable the config. the DMA engine just only be used
> +         with MediaTek Socs.
> +
>  config MTK_HSDMA
>         tristate "MediaTek High-Speed DMA controller support"
>         depends on ARCH_MEDIATEK || COMPILE_TEST
> diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> index 6e778f8..2f2efd9 100644
> --- a/drivers/dma/mediatek/Makefile
> +++ b/drivers/dma/mediatek/Makefile
> @@ -1 +1,2 @@
> +obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
>  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> --
> 1.7.9.5

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05 21:07 ` Sean Wang
  0 siblings, 0 replies; 34+ messages in thread
From: Sean Wang @ 2018-12-05 21:07 UTC (permalink / raw)
  To: long.cheng
  Cc: vkoul, robh+dt, mark.rutland, devicetree, srv_heupstream, gregkh,
	Sean Wang (王志亘),
	linux-kernel, dmaengine, linux-mediatek, linux-serial, jslaby,
	Matthias Brugger, yingjoe.chen, dan.j.williams, linux-arm-kernel

.
On Wed, Dec 5, 2018 at 1:31 AM Long Cheng <long.cheng@mediatek.com> wrote:
>
> In DMA engine framework, add 8250 mtk dma to support it.
>
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
>  drivers/dma/mediatek/Kconfig        |   11 +
>  drivers/dma/mediatek/Makefile       |    1 +
>  3 files changed, 906 insertions(+)
>  create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
>
> diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
> new file mode 100644
> index 0000000..3454679
> --- /dev/null
> +++ b/drivers/dma/mediatek/8250_mtk_dma.c
> @@ -0,0 +1,894 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Mediatek 8250 DMA driver.
> + *
> + * Copyright (c) 2018 MediaTek Inc.
> + * Author: Long Cheng <long.cheng@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/of_dma.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/iopoll.h>
> +
> +#include "../virt-dma.h"
> +
> +#define MTK_APDMA_DEFAULT_REQUESTS     127
> +#define MTK_APDMA_CHANNELS             (CONFIG_SERIAL_8250_NR_UARTS * 2)
> +
> +#define VFF_EN_B               BIT(0)
> +#define VFF_STOP_B             BIT(0)
> +#define VFF_FLUSH_B            BIT(0)
> +#define VFF_4G_SUPPORT_B       BIT(0)
> +#define VFF_RX_INT_EN0_B       BIT(0)  /*rx valid size >=  vff thre*/
> +#define VFF_RX_INT_EN1_B       BIT(1)
> +#define VFF_TX_INT_EN_B                BIT(0)  /*tx left size >= vff thre*/
> +#define VFF_WARM_RST_B         BIT(0)
> +#define VFF_RX_INT_FLAG_CLR_B  (BIT(0) | BIT(1))
> +#define VFF_TX_INT_FLAG_CLR_B  0
> +#define VFF_STOP_CLR_B         0
> +#define VFF_FLUSH_CLR_B                0
> +#define VFF_INT_EN_CLR_B       0
> +#define VFF_4G_SUPPORT_CLR_B   0
> +
> +/* interrupt trigger level for tx */
> +#define VFF_TX_THRE(n)         ((n) * 7 / 8)
> +/* interrupt trigger level for rx */
> +#define VFF_RX_THRE(n)         ((n) * 3 / 4)
> +
> +#define MTK_DMA_RING_SIZE      0xffffU
> +/* invert this bit when wrap ring head again*/
> +#define MTK_DMA_RING_WRAP      0x10000U
> +
> +#define VFF_INT_FLAG           0x00
> +#define VFF_INT_EN             0x04
> +#define VFF_EN                 0x08
> +#define VFF_RST                        0x0c
> +#define VFF_STOP               0x10
> +#define VFF_FLUSH              0x14
> +#define VFF_ADDR               0x1c
> +#define VFF_LEN                        0x24
> +#define VFF_THRE               0x28
> +#define VFF_WPT                        0x2c
> +#define VFF_RPT                        0x30
> +/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
> +#define VFF_VALID_SIZE         0x3c
> +/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
> +#define VFF_LEFT_SIZE          0x40
> +#define VFF_DEBUG_STATUS       0x50
> +#define VFF_4G_SUPPORT         0x54
> +
> +struct mtk_dmadev {
> +       struct dma_device ddev;
> +       void __iomem *mem_base[MTK_APDMA_CHANNELS];
> +       spinlock_t lock; /* dma dev lock */
> +       struct tasklet_struct task;
> +       struct list_head pending;
> +       struct clk *clk;
> +       unsigned int dma_requests;
> +       bool support_33bits;
> +       unsigned int dma_irq[MTK_APDMA_CHANNELS];
> +       struct mtk_chan *ch[MTK_APDMA_CHANNELS];
> +};
> +
> +struct mtk_chan {
> +       struct virt_dma_chan vc;
> +       struct list_head node;
> +       struct dma_slave_config cfg;
> +       void __iomem *base;
> +       struct mtk_dma_desc *desc;
> +
> +       bool stop;
> +       bool requested;
> +
> +       unsigned int dma_sig;

the member can be removed as no real user would refer to it

> +       unsigned int dma_ch;

a chan_id is already included in struct dma_chan, we can reuse it

> +       unsigned int sgidx;

the member also can be removed as no real user would refer to it

> +       unsigned int remain_size;

The member remain_size seems unnecessary data to maintain a channel.
The virtual channel struct virt_dma_chan already provide the way to
manage all descriptors you're operating on, you should reuse related
functions to virt_dma_chan first.

Or if you mean remain_size is about the remaining size of current
descriptor, and then putting into struct mtk_dma_desc would be better.

> +       unsigned int rx_ptr;
> +};
> +
> +struct mtk_dma_sg {
> +       dma_addr_t addr;
> +       unsigned int en;                /* number of elements (24-bit) */
> +       unsigned int fn;                /* number of frames (16-bit) */
> +};
> +
> +struct mtk_dma_desc {
> +       struct virt_dma_desc vd;
> +       enum dma_transfer_direction dir;
> +
> +       unsigned int sglen;
> +       struct mtk_dma_sg sg[0];
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
> +static struct of_dma_filter_info mtk_dma_info = {
> +       .filter_fn = mtk_dma_filter_fn,
> +};
> +
> +static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
> +{
> +       return container_of(d, struct mtk_dmadev, ddev);
> +}
> +
> +static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
> +{
> +       return container_of(c, struct mtk_chan, vc.chan);
> +}
> +
> +static inline struct mtk_dma_desc *to_mtk_dma_desc
> +       (struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct mtk_dma_desc, vd.tx);
> +}
> +
> +static void mtk_dma_chan_write(struct mtk_chan *c,
> +                              unsigned int reg, unsigned int val)
> +{
> +       writel(val, c->base + reg);
> +}
> +
> +static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
> +{
> +       return readl(c->base + reg);
> +}
> +
> +static void mtk_dma_desc_free(struct virt_dma_desc *vd)
> +{
> +       struct dma_chan *chan = vd->tx.chan;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       kfree(c->desc);
> +       c->desc = NULL;
> +}
> +
> +static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
> +{
> +       int ret;
> +
> +       ret = clk_prepare_enable(mtkd->clk);
> +       if (ret) {
> +               dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
> +               return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
> +{
> +       clk_disable_unprepare(mtkd->clk);
> +}
> +
> +static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
> +                                    struct virt_dma_chan *vc)
> +{
> +       struct virt_dma_desc *vd;
> +
> +       if (list_empty(&vc->desc_issued) == 0) {
> +               list_for_each_entry(vd, &vc->desc_issued, node) {
> +                       if (cookie == vd->tx.cookie) {
> +                               INIT_LIST_HEAD(&vc->desc_issued);

generally, we don't force initialze the list desc_issued. Instead,
when each descriptor is completed, we need to move each descriptor
from the list desc_issued to the list desc_completed.

> +                               break;
> +                       }
> +               }
> +       }
> +}
> +
> +static void mtk_dma_tx_flush(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
> +               mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +}
> +
> +static void mtk_dma_tx_write(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned int txcount = c->remain_size;
> +       unsigned int len, send, left, wpt, wrap;
> +
> +       len = mtk_dma_chan_read(c, VFF_LEN);
> +
> +       while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
> +               if (c->remain_size == 0U)
> +                       break;
> +               send = min(left, c->remain_size);

min_t

> +               wpt = mtk_dma_chan_read(c, VFF_WPT);
> +               wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
> +
> +               if ((wpt & (len - 1U)) + send < len)
> +                       mtk_dma_chan_write(c, VFF_WPT, wpt + send);
> +               else
> +                       mtk_dma_chan_write(c, VFF_WPT,
> +                                          ((wpt + send) & (len - 1U))
> +                                          | wrap);
> +
> +               c->remain_size -= send;

I'm curious why you don't need to set up the hardware from the
descriptor information

> +       }
> +
> +       if (txcount != c->remain_size) {
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +               mtk_dma_tx_flush(chan);
> +       }
> +}
> +
> +static void mtk_dma_start_tx(struct mtk_chan *c)
> +{
> +       if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +       else
> +               mtk_dma_tx_write(&c->vc.chan);
> +
> +       c->stop = false;
> +}
> +
> +static void mtk_dma_get_rx_size(struct mtk_chan *c)
> +{
> +       unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
> +       unsigned int rdptr, wrptr, wrreg, rdreg, count;
> +
> +       rdreg = mtk_dma_chan_read(c, VFF_RPT);
> +       wrreg = mtk_dma_chan_read(c, VFF_WPT);
> +       rdptr = rdreg & MTK_DMA_RING_SIZE;
> +       wrptr = wrreg & MTK_DMA_RING_SIZE;
> +       count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
> +                       (wrptr + rx_size - rdptr) : (wrptr - rdptr);
> +
> +       c->remain_size = count;
> +       c->rx_ptr = rdptr;
> +
> +       mtk_dma_chan_write(c, VFF_RPT, wrreg);
> +}
> +
> +static void mtk_dma_start_rx(struct mtk_chan *c)
> +{
> +       struct dma_chan *chan = &c->vc.chan;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_dma_desc *d = c->desc;
> +
> +       if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
> +               return;
> +
> +       if (d && d->vd.tx.cookie != 0) {

the driver benefits from virtual_channel so you can manage the life
cycle of each descriptor by the virtual channel related functions
without handling details about the cookie.

> +               mtk_dma_get_rx_size(c);
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       } else {
> +               spin_lock(&mtkd->lock);
> +               if (list_empty(&mtkd->pending))
> +                       list_add_tail(&c->node, &mtkd->pending);
> +               spin_unlock(&mtkd->lock);
> +               tasklet_schedule(&mtkd->task);
> +       }
> +}
> +
> +static void mtk_dma_reset(struct mtk_chan *c)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_ADDR, 0);
> +       mtk_dma_chan_write(c, VFF_THRE, 0);
> +       mtk_dma_chan_write(c, VFF_LEN, 0);
> +       mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
> +
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret) {
> +               dev_err(c->vc.chan.device->dev,
> +                               "dma reset: fail, timeout\n");
> +               return;
> +       }
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_RPT, 0);
> +       else if (c->cfg.direction == DMA_MEM_TO_DEV)
> +               mtk_dma_chan_write(c, VFF_WPT, 0);
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> +}
> +
> +static void mtk_dma_stop(struct mtk_chan *c)
> +{
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
> +       /* Wait for flush */
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_FLUSH,
> +                                status,
> +                                (status & VFF_FLUSH_B) != VFF_FLUSH_B,
> +                                10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling FLUSH fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       /*set stop as 1 -> wait until en is 0 -> set stop as 0*/
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
> +       mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +       else
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       c->stop = true;
> +}
> +
> +/*
> + * This callback schedules all pending channels. We could be more
> + * clever here by postponing allocation of the real DMA channels to
> + * this point, and freeing them when our virtual channel becomes idle.
> + *
> + * We would then need to deal with 'all channels in-use'
> + */
> +static void mtk_dma_sched(unsigned long data)
> +{
> +       struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
> +       struct virt_dma_desc *vd;
> +       struct mtk_chan *c;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +       LIST_HEAD(head);
> +
> +       spin_lock_irq(&mtkd->lock);
> +       list_splice_tail_init(&mtkd->pending, &head);
> +       spin_unlock_irq(&mtkd->lock);
> +
> +       if (!list_empty(&head)) {
> +               c = list_first_entry(&head, struct mtk_chan, node);
> +               cookie = c->vc.chan.cookie;
> +
> +               spin_lock_irqsave(&c->vc.lock, flags);
> +               if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_rx(c);
> +               } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_tx(c);
> +               }
> +               spin_unlock_irqrestore(&c->vc.lock, flags);
> +       }
> +}
> +
> +static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       int ret = -EBUSY;
> +
> +       pm_runtime_get_sync(mtkd->ddev.dev);
> +
> +       if (!mtkd->ch[c->dma_ch]) {
> +               c->base = mtkd->mem_base[c->dma_ch];
> +               mtkd->ch[c->dma_ch] = c;
> +               ret = 1;
> +       }
> +       c->requested = false;
> +       mtk_dma_reset(c);
> +
> +       return ret;
> +}
> +
> +static void mtk_dma_free_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (c->requested) {
> +               c->requested = false;
> +               free_irq(mtkd->dma_irq[c->dma_ch], chan);
> +       }
> +
> +       tasklet_kill(&mtkd->task);
> +       tasklet_kill(&c->vc.task);
> +
> +       c->base = NULL;
> +       mtkd->ch[c->dma_ch] = NULL;
> +       vchan_free_chan_resources(&c->vc);
> +
> +       dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
> +       c->dma_sig = 0;
> +
> +       pm_runtime_put_sync(mtkd->ddev.dev);
> +}
> +
> +static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
> +                                        dma_cookie_t cookie,
> +                                        struct dma_tx_state *txstate)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       enum dma_status ret;
> +       unsigned long flags;
> +
> +       if (!txstate)
> +               return DMA_ERROR;
> +
> +       ret = dma_cookie_status(chan, cookie, txstate);
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (ret == DMA_IN_PROGRESS) {
> +               c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
> +               dma_set_residue(txstate, c->rx_ptr);
> +       } else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
> +               dma_set_residue(txstate, c->remain_size);
> +       } else {
> +               dma_set_residue(txstate, 0);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return ret;
> +}
> +
> +static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
> +       (struct dma_chan *chan, struct scatterlist *sgl,
> +       unsigned int sglen,     enum dma_transfer_direction dir,
> +       unsigned long tx_flags, void *context)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct scatterlist *sgent;
> +       struct mtk_dma_desc *d;
> +       struct mtk_dma_sg *sg;
> +       unsigned int size, i, j, en;
> +
> +       en = 1;
> +
> +       if ((dir != DMA_DEV_TO_MEM) &&
> +               (dir != DMA_MEM_TO_DEV)) {
> +               dev_err(chan->device->dev, "bad direction\n");
> +               return NULL;
> +       }
> +
> +       /* Now allocate and setup the descriptor. */
> +       d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
> +       if (!d)
> +               return NULL;
> +
> +       d->dir = dir;
> +
> +       j = 0;
> +       for_each_sg(sgl, sgent, sglen, i) {
> +               d->sg[j].addr = sg_dma_address(sgent);
> +               d->sg[j].en = en;
> +               d->sg[j].fn = sg_dma_len(sgent) / en;
> +               j++;
> +       }
> +
> +       d->sglen = j;
> +
> +       if (dir == DMA_MEM_TO_DEV) {
> +               for (size = i = 0; i < d->sglen; i++) {
> +                       sg = &d->sg[i];
> +                       size += sg->en * sg->fn;
> +               }
> +               c->remain_size = size;
> +       }
> +
> +       return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> +}
> +
> +static void mtk_dma_issue_pending(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct virt_dma_desc *vd;
> +       struct mtk_dmadev *mtkd;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +               cookie = c->vc.chan.cookie;
> +               mtkd = to_mtk_dma_dev(chan->device);
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

vchan_issue_pending means putting all the descriptors to list
desc_issued, I'm supposed somewhere would iterate the list by
vchan_next_desc and program each desc. into hardware. but I don't see
the similar code.

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +               }
> +       } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +               cookie = c->vc.chan.cookie;
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

ditto as the above

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       mtk_dma_start_tx(c);
> +               }
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +}
> +
> +static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +
> +       mtk_dma_start_rx(c);
> +
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dma_desc *d = c->desc;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->remain_size != 0U) {
> +               list_add_tail(&c->node, &mtkd->pending);
> +               tasklet_schedule(&mtkd->task);
> +       } else {
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static int mtk_dma_slave_config(struct dma_chan *chan,
> +                               struct dma_slave_config *cfg)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       int ret;
> +
> +       c->cfg = *cfg;
> +
> +       if (cfg->direction == DMA_DEV_TO_MEM) {
> +               unsigned int rx_len = cfg->src_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, rx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
> +               mtk_dma_chan_write(c,
> +                                  VFF_INT_EN, VFF_RX_INT_EN0_B
> +                                  | VFF_RX_INT_EN1_B);
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_rx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       } else if (cfg->direction == DMA_MEM_TO_DEV)    {
> +               unsigned int tx_len = cfg->dst_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, tx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_tx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       }
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> +
> +       if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
> +               dev_err(chan->device->dev,
> +                       "config dma dir[%d] fail\n", cfg->direction);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_terminate_all(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       list_del_init(&c->node);
> +       mtk_dma_stop(c);
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_device_pause(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

Is it possible not to register the function with the core if the
hardware doesn't support it?
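
If I read dma_get_slave_caps() right, the core derives the pause
capability from whether the device_pause hook is present at all, so a
sketch of the fix is simply not to assign the stubs in probe:

        /* APDMA cannot pause; leaving the hooks NULL lets the core
         * report pause as unsupported instead of faking it
         */
        mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
        /* no mtkd->ddev.device_pause / .device_resume assignment */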

> +
> +static int mtk_dma_device_resume(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

ditto as the above

> +
> +static void mtk_dma_free(struct mtk_dmadev *mtkd)
> +{
> +       tasklet_kill(&mtkd->task);
> +       while (list_empty(&mtkd->ddev.channels) == 0) {
> +               struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> +                       struct mtk_chan, vc.chan.device_node);
> +
> +               list_del(&c->vc.chan.device_node);
> +               tasklet_kill(&c->vc.task);
> +               devm_kfree(mtkd->ddev.dev, c);
> +       }
> +}
> +
> +static const struct of_device_id mtk_uart_dma_match[] = {
> +       { .compatible = "mediatek,mt6577-uart-dma", },
> +       { /* sentinel */ },
> +};
> +MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
> +
> +static int mtk_apdma_probe(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd;
> +       struct resource *res;
> +       struct mtk_chan *c;
> +       unsigned int i;
> +       int rc;
> +
> +       mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> +       if (!mtkd)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> +               if (!res)
> +                       return -ENODEV;
> +               mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
> +               if (IS_ERR(mtkd->mem_base[i]))
> +                       return PTR_ERR(mtkd->mem_base[i]);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> +               if ((int)mtkd->dma_irq[i] < 0) {
> +                       dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> +       if (IS_ERR(mtkd->clk)) {
> +               dev_err(&pdev->dev, "No clock specified\n");
> +               return PTR_ERR(mtkd->clk);
> +       }
> +
> +       if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
> +               dev_info(&pdev->dev, "Support dma 33bits\n");
> +               mtkd->support_33bits = true;
> +       }
> +
> +       if (mtkd->support_33bits)
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
> +       else
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +       if (rc)
> +               return rc;
> +
> +       dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> +       mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
> +       mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
> +       mtkd->ddev.device_tx_status = mtk_dma_tx_status;
> +       mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
> +       mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
> +       mtkd->ddev.device_config = mtk_dma_slave_config;
> +       mtkd->ddev.device_pause = mtk_dma_device_pause;
> +       mtkd->ddev.device_resume = mtk_dma_device_resume;
> +       mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
> +       mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +       mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +       mtkd->ddev.dev = &pdev->dev;
> +       INIT_LIST_HEAD(&mtkd->ddev.channels);
> +       INIT_LIST_HEAD(&mtkd->pending);
> +
> +       spin_lock_init(&mtkd->lock);
> +       tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
> +
> +       mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
> +       if (of_property_read_u32(pdev->dev.of_node,
> +                                "dma-requests", &mtkd->dma_requests)) {
> +               dev_info(&pdev->dev,
> +                        "Missing dma-requests property, using %u.\n",
> +                        MTK_APDMA_DEFAULT_REQUESTS);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> +               if (!c)
> +                       goto err_no_dma;
> +
> +               c->vc.desc_free = mtk_dma_desc_free;
> +               vchan_init(&c->vc, &mtkd->ddev);
> +               INIT_LIST_HEAD(&c->node);
> +       }
> +
> +       pm_runtime_enable(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +
> +       rc = dma_async_device_register(&mtkd->ddev);
> +       if (rc)
> +               goto rpm_disable;
> +
> +       platform_set_drvdata(pdev, mtkd);
> +
> +       if (pdev->dev.of_node) {
> +               mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
> +
> +               /* Device-tree DMA controller registration */
> +               rc = of_dma_controller_register(pdev->dev.of_node,
> +                                               of_dma_simple_xlate,
> +                                               &mtk_dma_info);

Reusing of_dma_xlate_by_chan_id seems like it would also work for
you.
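
A sketch of that registration, assuming the DT cell directly encodes
the channel index (the core then resolves it against ddev.channels,
and the private filter function could be dropped):

        rc = of_dma_controller_register(pdev->dev.of_node,
                                        of_dma_xlate_by_chan_id,
                                        &mtkd->ddev);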

> +               if (rc)
> +                       goto dma_remove;
> +       }
> +
> +       return rc;
> +
> +dma_remove:
> +       dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> +       pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> +       mtk_dma_free(mtkd);
> +       return rc;
> +}
> +
> +static int mtk_apdma_remove(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
> +
> +       if (pdev->dev.of_node)
> +               of_dma_controller_free(pdev->dev.of_node);
> +
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_noidle(&pdev->dev);
> +
> +       dma_async_device_unregister(&mtkd->ddev);
> +
> +       mtk_dma_free(mtkd);
> +
> +       return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_dma_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev))
> +               mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev)) {
> +               ret = mtk_dma_clk_enable(mtkd);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       ret = mtk_dma_clk_enable(mtkd);
> +       if (ret)
> +               return ret;
> +
> +       return 0;
> +}
> +
> +#endif /* CONFIG_PM_SLEEP */
> +
> +static const struct dev_pm_ops mtk_dma_pm_ops = {
> +       SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
> +       SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
> +                          mtk_dma_runtime_resume, NULL)
> +};
> +
> +static struct platform_driver mtk_dma_driver = {
> +       .probe  = mtk_apdma_probe,
> +       .remove = mtk_apdma_remove,
> +       .driver = {
> +               .name           = KBUILD_MODNAME,
> +               .pm             = &mtk_dma_pm_ops,
> +               .of_match_table = of_match_ptr(mtk_uart_dma_match),
> +       },
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
> +{
> +       if (chan->device->dev->driver == &mtk_dma_driver.driver) {
> +               struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +               struct mtk_chan *c = to_mtk_dma_chan(chan);
> +               unsigned int req = *(unsigned int *)param;
> +
> +               if (req <= mtkd->dma_requests) {
> +                       c->dma_sig = req;
> +                       c->dma_ch = req;
> +                       return true;
> +               }
> +       }
> +       return false;
> +}
> +
> +module_platform_driver(mtk_dma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> +MODULE_LICENSE("GPL v2");
> +
> diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> index 27bac0b..bef436e 100644
> --- a/drivers/dma/mediatek/Kconfig
> +++ b/drivers/dma/mediatek/Kconfig
> @@ -1,4 +1,15 @@
>
> +config DMA_MTK_UART
> +       tristate "MediaTek SoCs APDMA support for UART"
> +       depends on OF
> +       select DMA_ENGINE
> +       select DMA_VIRTUAL_CHANNELS
> +       help
> +         Support for the UART DMA engine found on MediaTek SoCs.
> +         When the 8250 MTK UART driver is enabled and you want to
> +         use DMA, enable this option. This DMA engine can only be
> +         used with MediaTek SoCs.
> +
>  config MTK_HSDMA
>         tristate "MediaTek High-Speed DMA controller support"
>         depends on ARCH_MEDIATEK || COMPILE_TEST
> diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> index 6e778f8..2f2efd9 100644
> --- a/drivers/dma/mediatek/Makefile
> +++ b/drivers/dma/mediatek/Makefile
> @@ -1 +1,2 @@
> +obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
>  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> --
> 1.7.9.5
>
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
@ 2018-12-05 21:07 ` Sean Wang
  0 siblings, 0 replies; 34+ messages in thread
From: Sean Wang @ 2018-12-05 21:07 UTC (permalink / raw)
  To: long.cheng
  Cc: mark.rutland, devicetree, srv_heupstream, gregkh,
	Sean Wang (王志亘),
	linux-kernel, Matthias Brugger, vkoul, robh+dt, linux-mediatek,
	linux-serial, jslaby, dmaengine, yingjoe.chen, dan.j.williams,
	linux-arm-kernel

On Wed, Dec 5, 2018 at 1:31 AM Long Cheng <long.cheng@mediatek.com> wrote:
>
> In DMA engine framework, add 8250 mtk dma to support it.
>
> Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> ---
>  drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
>  drivers/dma/mediatek/Kconfig        |   11 +
>  drivers/dma/mediatek/Makefile       |    1 +
>  3 files changed, 906 insertions(+)
>  create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
>
> diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
> new file mode 100644
> index 0000000..3454679
> --- /dev/null
> +++ b/drivers/dma/mediatek/8250_mtk_dma.c
> @@ -0,0 +1,894 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Mediatek 8250 DMA driver.
> + *
> + * Copyright (c) 2018 MediaTek Inc.
> + * Author: Long Cheng <long.cheng@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/init.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/module.h>
> +#include <linux/of_dma.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/iopoll.h>
> +
> +#include "../virt-dma.h"
> +
> +#define MTK_APDMA_DEFAULT_REQUESTS     127
> +#define MTK_APDMA_CHANNELS             (CONFIG_SERIAL_8250_NR_UARTS * 2)
> +
> +#define VFF_EN_B               BIT(0)
> +#define VFF_STOP_B             BIT(0)
> +#define VFF_FLUSH_B            BIT(0)
> +#define VFF_4G_SUPPORT_B       BIT(0)
> +#define VFF_RX_INT_EN0_B       BIT(0)  /*rx valid size >=  vff thre*/
> +#define VFF_RX_INT_EN1_B       BIT(1)
> +#define VFF_TX_INT_EN_B                BIT(0)  /*tx left size >= vff thre*/
> +#define VFF_WARM_RST_B         BIT(0)
> +#define VFF_RX_INT_FLAG_CLR_B  (BIT(0) | BIT(1))
> +#define VFF_TX_INT_FLAG_CLR_B  0
> +#define VFF_STOP_CLR_B         0
> +#define VFF_FLUSH_CLR_B                0
> +#define VFF_INT_EN_CLR_B       0
> +#define VFF_4G_SUPPORT_CLR_B   0
> +
> +/* interrupt trigger level for tx */
> +#define VFF_TX_THRE(n)         ((n) * 7 / 8)
> +/* interrupt trigger level for rx */
> +#define VFF_RX_THRE(n)         ((n) * 3 / 4)
> +
> +#define MTK_DMA_RING_SIZE      0xffffU
> +/* invert this bit when wrap ring head again*/
> +#define MTK_DMA_RING_WRAP      0x10000U
> +
> +#define VFF_INT_FLAG           0x00
> +#define VFF_INT_EN             0x04
> +#define VFF_EN                 0x08
> +#define VFF_RST                        0x0c
> +#define VFF_STOP               0x10
> +#define VFF_FLUSH              0x14
> +#define VFF_ADDR               0x1c
> +#define VFF_LEN                        0x24
> +#define VFF_THRE               0x28
> +#define VFF_WPT                        0x2c
> +#define VFF_RPT                        0x30
> +/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
> +#define VFF_VALID_SIZE         0x3c
> +/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
> +#define VFF_LEFT_SIZE          0x40
> +#define VFF_DEBUG_STATUS       0x50
> +#define VFF_4G_SUPPORT         0x54
> +
> +struct mtk_dmadev {
> +       struct dma_device ddev;
> +       void __iomem *mem_base[MTK_APDMA_CHANNELS];
> +       spinlock_t lock; /* dma dev lock */
> +       struct tasklet_struct task;
> +       struct list_head pending;
> +       struct clk *clk;
> +       unsigned int dma_requests;
> +       bool support_33bits;
> +       unsigned int dma_irq[MTK_APDMA_CHANNELS];
> +       struct mtk_chan *ch[MTK_APDMA_CHANNELS];
> +};
> +
> +struct mtk_chan {
> +       struct virt_dma_chan vc;
> +       struct list_head node;
> +       struct dma_slave_config cfg;
> +       void __iomem *base;
> +       struct mtk_dma_desc *desc;
> +
> +       bool stop;
> +       bool requested;
> +
> +       unsigned int dma_sig;

The member can be removed, as nothing actually refers to it.

> +       unsigned int dma_ch;

A chan_id is already included in struct dma_chan; we can reuse it.

> +       unsigned int sgidx;

The member can also be removed, as nothing actually refers to it.

> +       unsigned int remain_size;

The remain_size member seems like unnecessary state for maintaining
a channel. struct virt_dma_chan already provides a way to manage all
the descriptors you're operating on; you should reuse the related
virt_dma_chan functions first.

Or, if remain_size means the remaining size of the current
descriptor, then putting it into struct mtk_dma_desc would be better.
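
A sketch of that second suggestion, with the per-transfer byte count
moved into the descriptor:

        struct mtk_dma_desc {
                struct virt_dma_desc vd;
                enum dma_transfer_direction dir;

                unsigned int remain_size;  /* bytes left in this transfer */
                unsigned int sglen;
                struct mtk_dma_sg sg[0];
        };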

> +       unsigned int rx_ptr;
> +};
> +
> +struct mtk_dma_sg {
> +       dma_addr_t addr;
> +       unsigned int en;                /* number of elements (24-bit) */
> +       unsigned int fn;                /* number of frames (16-bit) */
> +};
> +
> +struct mtk_dma_desc {
> +       struct virt_dma_desc vd;
> +       enum dma_transfer_direction dir;
> +
> +       unsigned int sglen;
> +       struct mtk_dma_sg sg[0];
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
> +static struct of_dma_filter_info mtk_dma_info = {
> +       .filter_fn = mtk_dma_filter_fn,
> +};
> +
> +static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
> +{
> +       return container_of(d, struct mtk_dmadev, ddev);
> +}
> +
> +static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
> +{
> +       return container_of(c, struct mtk_chan, vc.chan);
> +}
> +
> +static inline struct mtk_dma_desc *to_mtk_dma_desc
> +       (struct dma_async_tx_descriptor *t)
> +{
> +       return container_of(t, struct mtk_dma_desc, vd.tx);
> +}
> +
> +static void mtk_dma_chan_write(struct mtk_chan *c,
> +                              unsigned int reg, unsigned int val)
> +{
> +       writel(val, c->base + reg);
> +}
> +
> +static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
> +{
> +       return readl(c->base + reg);
> +}
> +
> +static void mtk_dma_desc_free(struct virt_dma_desc *vd)
> +{
> +       struct dma_chan *chan = vd->tx.chan;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       kfree(c->desc);
> +       c->desc = NULL;
> +}
> +
> +static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
> +{
> +       int ret;
> +
> +       ret = clk_prepare_enable(mtkd->clk);
> +       if (ret) {
> +               dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
> +               return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
> +{
> +       clk_disable_unprepare(mtkd->clk);
> +}
> +
> +static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
> +                                    struct virt_dma_chan *vc)
> +{
> +       struct virt_dma_desc *vd;
> +
> +       if (list_empty(&vc->desc_issued) == 0) {
> +               list_for_each_entry(vd, &vc->desc_issued, node) {
> +                       if (cookie == vd->tx.cookie) {
> +                               INIT_LIST_HEAD(&vc->desc_issued);

Generally, we don't force-initialize the desc_issued list. Instead,
as each descriptor completes, we move that descriptor from the
desc_issued list to the desc_completed list.
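
A minimal sketch of that completion step, run with c->vc.lock held;
vchan_cookie_complete() then places the descriptor on desc_completed
and schedules the callback tasklet:

        /* unlink only the finished descriptor, not the whole list */
        list_del(&d->vd.node);
        vchan_cookie_complete(&d->vd);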

> +                               break;
> +                       }
> +               }
> +       }
> +}
> +
> +static void mtk_dma_tx_flush(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
> +               mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
> +}
> +
> +static void mtk_dma_tx_write(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned int txcount = c->remain_size;
> +       unsigned int len, send, left, wpt, wrap;
> +
> +       len = mtk_dma_chan_read(c, VFF_LEN);
> +
> +       while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
> +               if (c->remain_size == 0U)
> +                       break;
> +               send = min(left, c->remain_size);

min_t
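
That is, making the comparison types explicit:

        send = min_t(unsigned int, left, c->remain_size);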

> +               wpt = mtk_dma_chan_read(c, VFF_WPT);
> +               wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
> +
> +               if ((wpt & (len - 1U)) + send < len)
> +                       mtk_dma_chan_write(c, VFF_WPT, wpt + send);
> +               else
> +                       mtk_dma_chan_write(c, VFF_WPT,
> +                                          ((wpt + send) & (len - 1U))
> +                                          | wrap);
> +
> +               c->remain_size -= send;

I'm curious why you don't need to set up the hardware from the
descriptor information.

> +       }
> +
> +       if (txcount != c->remain_size) {
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +               mtk_dma_tx_flush(chan);
> +       }
> +}
> +
> +static void mtk_dma_start_tx(struct mtk_chan *c)
> +{
> +       if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
> +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> +       else
> +               mtk_dma_tx_write(&c->vc.chan);
> +
> +       c->stop = false;
> +}
> +
> +static void mtk_dma_get_rx_size(struct mtk_chan *c)
> +{
> +       unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
> +       unsigned int rdptr, wrptr, wrreg, rdreg, count;
> +
> +       rdreg = mtk_dma_chan_read(c, VFF_RPT);
> +       wrreg = mtk_dma_chan_read(c, VFF_WPT);
> +       rdptr = rdreg & MTK_DMA_RING_SIZE;
> +       wrptr = wrreg & MTK_DMA_RING_SIZE;
> +       count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
> +                       (wrptr + rx_size - rdptr) : (wrptr - rdptr);
> +
> +       c->remain_size = count;
> +       c->rx_ptr = rdptr;
> +
> +       mtk_dma_chan_write(c, VFF_RPT, wrreg);
> +}
> +
> +static void mtk_dma_start_rx(struct mtk_chan *c)
> +{
> +       struct dma_chan *chan = &c->vc.chan;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_dma_desc *d = c->desc;
> +
> +       if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
> +               return;
> +
> +       if (d && d->vd.tx.cookie != 0) {

The driver already builds on virt-dma, so you can manage the life
cycle of each descriptor through the virtual-channel helpers without
handling cookie details yourself.
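
As a sketch, the rx path could pick up the in-flight descriptor from
the issued list instead of testing the cookie (again with vc.lock
held):

        struct virt_dma_desc *vd = vchan_next_desc(&c->vc);

        if (vd) {
                mtk_dma_get_rx_size(c);
                list_del(&vd->node);
                vchan_cookie_complete(vd);
        }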

> +               mtk_dma_get_rx_size(c);
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       } else {
> +               spin_lock(&mtkd->lock);
> +               if (list_empty(&mtkd->pending))
> +                       list_add_tail(&c->node, &mtkd->pending);
> +               spin_unlock(&mtkd->lock);
> +               tasklet_schedule(&mtkd->task);
> +       }
> +}
> +
> +static void mtk_dma_reset(struct mtk_chan *c)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_ADDR, 0);
> +       mtk_dma_chan_write(c, VFF_THRE, 0);
> +       mtk_dma_chan_write(c, VFF_LEN, 0);
> +       mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
> +
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret) {
> +               dev_err(c->vc.chan.device->dev,
> +                               "dma reset: fail, timeout\n");
> +               return;
> +       }
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_RPT, 0);
> +       else if (c->cfg.direction == DMA_MEM_TO_DEV)
> +               mtk_dma_chan_write(c, VFF_WPT, 0);
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> +}
> +
> +static void mtk_dma_stop(struct mtk_chan *c)
> +{
> +       u32 status;
> +       int ret;
> +
> +       mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
> +       /* Wait for flush */
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_FLUSH,
> +                                status,
> +                                (status & VFF_FLUSH_B) != VFF_FLUSH_B,
> +                                10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling FLUSH fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       /*set stop as 1 -> wait until en is 0 -> set stop as 0*/
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
> +       ret = readx_poll_timeout(readl,
> +                                c->base + VFF_EN,
> +                                status, status == 0, 10, 100);
> +       if (ret)
> +               dev_err(c->vc.chan.device->dev,
> +                       "dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
> +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> +
> +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
> +       mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> +
> +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +       else
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       c->stop = true;
> +}
> +
> +/*
> + * This callback schedules all pending channels. We could be more
> + * clever here by postponing allocation of the real DMA channels to
> + * this point, and freeing them when our virtual channel becomes idle.
> + *
> + * We would then need to deal with 'all channels in-use'
> + */
> +static void mtk_dma_sched(unsigned long data)
> +{
> +       struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
> +       struct virt_dma_desc *vd;
> +       struct mtk_chan *c;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +       LIST_HEAD(head);
> +
> +       spin_lock_irq(&mtkd->lock);
> +       list_splice_tail_init(&mtkd->pending, &head);
> +       spin_unlock_irq(&mtkd->lock);
> +
> +       if (!list_empty(&head)) {
> +               c = list_first_entry(&head, struct mtk_chan, node);
> +               cookie = c->vc.chan.cookie;
> +
> +               spin_lock_irqsave(&c->vc.lock, flags);
> +               if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_rx(c);
> +               } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       list_del_init(&c->node);
> +                       mtk_dma_start_tx(c);
> +               }
> +               spin_unlock_irqrestore(&c->vc.lock, flags);
> +       }
> +}
> +
> +static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       int ret = -EBUSY;
> +
> +       pm_runtime_get_sync(mtkd->ddev.dev);
> +
> +       if (!mtkd->ch[c->dma_ch]) {
> +               c->base = mtkd->mem_base[c->dma_ch];
> +               mtkd->ch[c->dma_ch] = c;
> +               ret = 1;
> +       }
> +       c->requested = false;
> +       mtk_dma_reset(c);
> +
> +       return ret;
> +}
> +
> +static void mtk_dma_free_chan_resources(struct dma_chan *chan)
> +{
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +
> +       if (c->requested) {
> +               c->requested = false;
> +               free_irq(mtkd->dma_irq[c->dma_ch], chan);
> +       }
> +
> +       tasklet_kill(&mtkd->task);
> +       tasklet_kill(&c->vc.task);
> +
> +       c->base = NULL;
> +       mtkd->ch[c->dma_ch] = NULL;
> +       vchan_free_chan_resources(&c->vc);
> +
> +       dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
> +       c->dma_sig = 0;
> +
> +       pm_runtime_put_sync(mtkd->ddev.dev);
> +}
> +
> +static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
> +                                        dma_cookie_t cookie,
> +                                        struct dma_tx_state *txstate)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       enum dma_status ret;
> +       unsigned long flags;
> +
> +       if (!txstate)
> +               return DMA_ERROR;
> +
> +       ret = dma_cookie_status(chan, cookie, txstate);
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (ret == DMA_IN_PROGRESS) {
> +               c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
> +               dma_set_residue(txstate, c->rx_ptr);
> +       } else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
> +               dma_set_residue(txstate, c->remain_size);
> +       } else {
> +               dma_set_residue(txstate, 0);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return ret;
> +}
> +
> +static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
> +       (struct dma_chan *chan, struct scatterlist *sgl,
> +       unsigned int sglen,     enum dma_transfer_direction dir,
> +       unsigned long tx_flags, void *context)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct scatterlist *sgent;
> +       struct mtk_dma_desc *d;
> +       struct mtk_dma_sg *sg;
> +       unsigned int size, i, j, en;
> +
> +       en = 1;
> +
> +       if ((dir != DMA_DEV_TO_MEM) &&
> +               (dir != DMA_MEM_TO_DEV)) {
> +               dev_err(chan->device->dev, "bad direction\n");
> +               return NULL;
> +       }
> +
> +       /* Now allocate and setup the descriptor. */
> +       d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
> +       if (!d)
> +               return NULL;
> +
> +       d->dir = dir;
> +
> +       j = 0;
> +       for_each_sg(sgl, sgent, sglen, i) {
> +               d->sg[j].addr = sg_dma_address(sgent);
> +               d->sg[j].en = en;
> +               d->sg[j].fn = sg_dma_len(sgent) / en;
> +               j++;
> +       }
> +
> +       d->sglen = j;
> +
> +       if (dir == DMA_MEM_TO_DEV) {
> +               for (size = i = 0; i < d->sglen; i++) {
> +                       sg = &d->sg[i];
> +                       size += sg->en * sg->fn;
> +               }
> +               c->remain_size = size;
> +       }
> +
> +       return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> +}
> +
> +static void mtk_dma_issue_pending(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct virt_dma_desc *vd;
> +       struct mtk_dmadev *mtkd;
> +       dma_cookie_t cookie;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->cfg.direction == DMA_DEV_TO_MEM) {
> +               cookie = c->vc.chan.cookie;
> +               mtkd = to_mtk_dma_dev(chan->device);
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

vchan_issue_pending moves all pending descriptors onto the
desc_issued list. I would expect some code to then iterate that list
with vchan_next_desc and program each descriptor into the hardware,
but I don't see anything like that here.

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +               }
> +       } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> +               cookie = c->vc.chan.cookie;
> +               if (vchan_issue_pending(&c->vc) && !c->desc) {

ditto as the above

> +                       vd = vchan_find_desc(&c->vc, cookie);
> +                       c->desc = to_mtk_dma_desc(&vd->tx);
> +                       mtk_dma_start_tx(c);
> +               }
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +}
> +
> +static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +
> +       mtk_dma_start_rx(c);
> +
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
> +{
> +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dma_desc *d = c->desc;
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       if (c->remain_size != 0U) {
> +               list_add_tail(&c->node, &mtkd->pending);
> +               tasklet_schedule(&mtkd->task);
> +       } else {
> +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> +               vchan_cookie_complete(&d->vd);
> +       }
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +
> +       return IRQ_HANDLED;
> +}
> +
> +static int mtk_dma_slave_config(struct dma_chan *chan,
> +                               struct dma_slave_config *cfg)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> +       int ret;
> +
> +       c->cfg = *cfg;
> +
> +       if (cfg->direction == DMA_DEV_TO_MEM) {
> +               unsigned int rx_len = cfg->src_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, rx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
> +               mtk_dma_chan_write(c,
> +                                  VFF_INT_EN, VFF_RX_INT_EN0_B
> +                                  | VFF_RX_INT_EN1_B);
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_rx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       } else if (cfg->direction == DMA_MEM_TO_DEV)    {
> +               unsigned int tx_len = cfg->dst_addr_width * 1024;
> +
> +               mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
> +               mtk_dma_chan_write(c, VFF_LEN, tx_len);
> +               mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
> +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> +
> +               if (!c->requested) {
> +                       c->requested = true;
> +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> +                                         mtk_dma_tx_interrupt,
> +                                         IRQF_TRIGGER_NONE,
> +                                         KBUILD_MODNAME, chan);
> +                       if (ret < 0) {
> +                               dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
> +                               return -EINVAL;
> +                       }
> +               }
> +       }
> +
> +       if (mtkd->support_33bits)
> +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> +
> +       if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
> +               dev_err(chan->device->dev,
> +                       "config dma dir[%d] fail\n", cfg->direction);
> +               return -EINVAL;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_terminate_all(struct dma_chan *chan)
> +{
> +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> +       unsigned long flags;
> +
> +       spin_lock_irqsave(&c->vc.lock, flags);
> +       list_del_init(&c->node);
> +       mtk_dma_stop(c);
> +       spin_unlock_irqrestore(&c->vc.lock, flags);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_device_pause(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

Is it possible not to register the function with the core if the
hardware doesn't support it?

> +
> +static int mtk_dma_device_resume(struct dma_chan *chan)
> +{
> +       /* just for check caps pass */
> +       return -EINVAL;
> +}

ditto as the above

> +
> +static void mtk_dma_free(struct mtk_dmadev *mtkd)
> +{
> +       tasklet_kill(&mtkd->task);
> +       while (list_empty(&mtkd->ddev.channels) == 0) {
> +               struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> +                       struct mtk_chan, vc.chan.device_node);
> +
> +               list_del(&c->vc.chan.device_node);
> +               tasklet_kill(&c->vc.task);
> +               devm_kfree(mtkd->ddev.dev, c);
> +       }
> +}
> +
> +static const struct of_device_id mtk_uart_dma_match[] = {
> +       { .compatible = "mediatek,mt6577-uart-dma", },
> +       { /* sentinel */ },
> +};
> +MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
> +
> +static int mtk_apdma_probe(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd;
> +       struct resource *res;
> +       struct mtk_chan *c;
> +       unsigned int i;
> +       int rc;
> +
> +       mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> +       if (!mtkd)
> +               return -ENOMEM;
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> +               if (!res)
> +                       return -ENODEV;
> +               mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
> +               if (IS_ERR(mtkd->mem_base[i]))
> +                       return PTR_ERR(mtkd->mem_base[i]);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> +               if ((int)mtkd->dma_irq[i] < 0) {
> +                       dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> +                       return -EINVAL;
> +               }
> +       }
> +
> +       mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> +       if (IS_ERR(mtkd->clk)) {
> +               dev_err(&pdev->dev, "No clock specified\n");
> +               return PTR_ERR(mtkd->clk);
> +       }
> +
> +       if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
> +               dev_info(&pdev->dev, "Support dma 33bits\n");
> +               mtkd->support_33bits = true;
> +       }
> +
> +       if (mtkd->support_33bits)
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
> +       else
> +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> +       if (rc)
> +               return rc;
> +
> +       dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> +       mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
> +       mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
> +       mtkd->ddev.device_tx_status = mtk_dma_tx_status;
> +       mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
> +       mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
> +       mtkd->ddev.device_config = mtk_dma_slave_config;
> +       mtkd->ddev.device_pause = mtk_dma_device_pause;
> +       mtkd->ddev.device_resume = mtk_dma_device_resume;
> +       mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
> +       mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> +       mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +       mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> +       mtkd->ddev.dev = &pdev->dev;
> +       INIT_LIST_HEAD(&mtkd->ddev.channels);
> +       INIT_LIST_HEAD(&mtkd->pending);
> +
> +       spin_lock_init(&mtkd->lock);
> +       tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
> +
> +       mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
> +       if (of_property_read_u32(pdev->dev.of_node,
> +                                "dma-requests", &mtkd->dma_requests)) {
> +               dev_info(&pdev->dev,
> +                        "Missing dma-requests property, using %u.\n",
> +                        MTK_APDMA_DEFAULT_REQUESTS);
> +       }
> +
> +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> +               c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> +               if (!c)
> +                       goto err_no_dma;
> +
> +               c->vc.desc_free = mtk_dma_desc_free;
> +               vchan_init(&c->vc, &mtkd->ddev);
> +               INIT_LIST_HEAD(&c->node);
> +       }
> +
> +       pm_runtime_enable(&pdev->dev);
> +       pm_runtime_set_active(&pdev->dev);
> +
> +       rc = dma_async_device_register(&mtkd->ddev);
> +       if (rc)
> +               goto rpm_disable;
> +
> +       platform_set_drvdata(pdev, mtkd);
> +
> +       if (pdev->dev.of_node) {
> +               mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
> +
> +               /* Device-tree DMA controller registration */
> +               rc = of_dma_controller_register(pdev->dev.of_node,
> +                                               of_dma_simple_xlate,
> +                                               &mtk_dma_info);

Reusing of_dma_xlate_by_chan_id seems like it would also work for
you.

> +               if (rc)
> +                       goto dma_remove;
> +       }
> +
> +       return rc;
> +
> +dma_remove:
> +       dma_async_device_unregister(&mtkd->ddev);
> +rpm_disable:
> +       pm_runtime_disable(&pdev->dev);
> +err_no_dma:
> +       mtk_dma_free(mtkd);
> +       return rc;
> +}
> +
> +static int mtk_apdma_remove(struct platform_device *pdev)
> +{
> +       struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
> +
> +       if (pdev->dev.of_node)
> +               of_dma_controller_free(pdev->dev.of_node);
> +
> +       pm_runtime_disable(&pdev->dev);
> +       pm_runtime_put_noidle(&pdev->dev);
> +
> +       dma_async_device_unregister(&mtkd->ddev);
> +
> +       mtk_dma_free(mtkd);
> +
> +       return 0;
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int mtk_dma_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev))
> +               mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       if (!pm_runtime_suspended(dev)) {
> +               ret = mtk_dma_clk_enable(mtkd);
> +               if (ret)
> +                       return ret;
> +       }
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_suspend(struct device *dev)
> +{
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       mtk_dma_clk_disable(mtkd);
> +
> +       return 0;
> +}
> +
> +static int mtk_dma_runtime_resume(struct device *dev)
> +{
> +       int ret;
> +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> +
> +       ret = mtk_dma_clk_enable(mtkd);
> +       if (ret)
> +               return ret;
> +
> +       return 0;
> +}
> +
> +#endif /* CONFIG_PM_SLEEP */
> +
> +static const struct dev_pm_ops mtk_dma_pm_ops = {
> +       SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
> +       SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
> +                          mtk_dma_runtime_resume, NULL)
> +};
> +
> +static struct platform_driver mtk_dma_driver = {
> +       .probe  = mtk_apdma_probe,
> +       .remove = mtk_apdma_remove,
> +       .driver = {
> +               .name           = KBUILD_MODNAME,
> +               .pm             = &mtk_dma_pm_ops,
> +               .of_match_table = of_match_ptr(mtk_uart_dma_match),
> +       },
> +};
> +
> +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
> +{
> +       if (chan->device->dev->driver == &mtk_dma_driver.driver) {
> +               struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> +               struct mtk_chan *c = to_mtk_dma_chan(chan);
> +               unsigned int req = *(unsigned int *)param;
> +
> +               if (req <= mtkd->dma_requests) {
> +                       c->dma_sig = req;
> +                       c->dma_ch = req;
> +                       return true;
> +               }
> +       }
> +       return false;
> +}
> +
> +module_platform_driver(mtk_dma_driver);
> +
> +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> +MODULE_LICENSE("GPL v2");
> +
> diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> index 27bac0b..bef436e 100644
> --- a/drivers/dma/mediatek/Kconfig
> +++ b/drivers/dma/mediatek/Kconfig
> @@ -1,4 +1,15 @@
>
> +config DMA_MTK_UART
> +       tristate "MediaTek SoCs APDMA support for UART"
> +       depends on OF
> +       select DMA_ENGINE
> +       select DMA_VIRTUAL_CHANNELS
> +       help
> +         Support for the UART DMA engine found on MediaTek SoCs.
> +         When the 8250 MTK UART driver is enabled and you want to
> +         use DMA, enable this option. This DMA engine can only be
> +         used with MediaTek SoCs.
> +
>  config MTK_HSDMA
>         tristate "MediaTek High-Speed DMA controller support"
>         depends on ARCH_MEDIATEK || COMPILE_TEST
> diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> index 6e778f8..2f2efd9 100644
> --- a/drivers/dma/mediatek/Makefile
> +++ b/drivers/dma/mediatek/Makefile
> @@ -1 +1,2 @@
> +obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
>  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> --
> 1.7.9.5
>
>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
  2018-12-05 21:07 ` Sean Wang
  (?)
@ 2018-12-06  9:55 ` Long Cheng
  -1 siblings, 0 replies; 34+ messages in thread
From: Long Cheng @ 2018-12-06  9:55 UTC (permalink / raw)
  To: Sean Wang
  Cc: vkoul, robh+dt, mark.rutland, devicetree, srv_heupstream, gregkh,
	Sean Wang (王志亘),
	linux-kernel, dmaengine, linux-mediatek, linux-serial, jslaby,
	Matthias Brugger, yingjoe.chen, dan.j.williams, linux-arm-kernel,
	yt.shen

On Wed, 2018-12-05 at 13:07 -0800, Sean Wang wrote:
> On Wed, Dec 5, 2018 at 1:31 AM Long Cheng <long.cheng@mediatek.com> wrote:
> >
> > In DMA engine framework, add 8250 mtk dma to support it.
> >
> > Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> > ---
> >  drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
> >  drivers/dma/mediatek/Kconfig        |   11 +
> >  drivers/dma/mediatek/Makefile       |    1 +
> >  3 files changed, 906 insertions(+)
> >  create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
> >
> > diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
> > new file mode 100644
> > index 0000000..3454679
> > --- /dev/null
> > +++ b/drivers/dma/mediatek/8250_mtk_dma.c
> > @@ -0,0 +1,894 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Mediatek 8250 DMA driver.
> > + *
> > + * Copyright (c) 2018 MediaTek Inc.
> > + * Author: Long Cheng <long.cheng@mediatek.com>
> > + */
> > +
> > +#include <linux/clk.h>
> > +#include <linux/dmaengine.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/err.h>
> > +#include <linux/init.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/list.h>
> > +#include <linux/module.h>
> > +#include <linux/of_dma.h>
> > +#include <linux/of_device.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/iopoll.h>
> > +
> > +#include "../virt-dma.h"
> > +
> > +#define MTK_APDMA_DEFAULT_REQUESTS     127
> > +#define MTK_APDMA_CHANNELS             (CONFIG_SERIAL_8250_NR_UARTS * 2)
> > +
> > +#define VFF_EN_B               BIT(0)
> > +#define VFF_STOP_B             BIT(0)
> > +#define VFF_FLUSH_B            BIT(0)
> > +#define VFF_4G_SUPPORT_B       BIT(0)
> > +#define VFF_RX_INT_EN0_B       BIT(0)  /*rx valid size >=  vff thre*/
> > +#define VFF_RX_INT_EN1_B       BIT(1)
> > +#define VFF_TX_INT_EN_B                BIT(0)  /*tx left size >= vff thre*/
> > +#define VFF_WARM_RST_B         BIT(0)
> > +#define VFF_RX_INT_FLAG_CLR_B  (BIT(0) | BIT(1))
> > +#define VFF_TX_INT_FLAG_CLR_B  0
> > +#define VFF_STOP_CLR_B         0
> > +#define VFF_FLUSH_CLR_B                0
> > +#define VFF_INT_EN_CLR_B       0
> > +#define VFF_4G_SUPPORT_CLR_B   0
> > +
> > +/* interrupt trigger level for tx */
> > +#define VFF_TX_THRE(n)         ((n) * 7 / 8)
> > +/* interrupt trigger level for rx */
> > +#define VFF_RX_THRE(n)         ((n) * 3 / 4)
> > +
> > +#define MTK_DMA_RING_SIZE      0xffffU
> > +/* invert this bit when wrap ring head again*/
> > +#define MTK_DMA_RING_WRAP      0x10000U
> > +
> > +#define VFF_INT_FLAG           0x00
> > +#define VFF_INT_EN             0x04
> > +#define VFF_EN                 0x08
> > +#define VFF_RST                        0x0c
> > +#define VFF_STOP               0x10
> > +#define VFF_FLUSH              0x14
> > +#define VFF_ADDR               0x1c
> > +#define VFF_LEN                        0x24
> > +#define VFF_THRE               0x28
> > +#define VFF_WPT                        0x2c
> > +#define VFF_RPT                        0x30
> > +/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
> > +#define VFF_VALID_SIZE         0x3c
> > +/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
> > +#define VFF_LEFT_SIZE          0x40
> > +#define VFF_DEBUG_STATUS       0x50
> > +#define VFF_4G_SUPPORT         0x54
> > +
> > +struct mtk_dmadev {
> > +       struct dma_device ddev;
> > +       void __iomem *mem_base[MTK_APDMA_CHANNELS];
> > +       spinlock_t lock; /* dma dev lock */
> > +       struct tasklet_struct task;
> > +       struct list_head pending;
> > +       struct clk *clk;
> > +       unsigned int dma_requests;
> > +       bool support_33bits;
> > +       unsigned int dma_irq[MTK_APDMA_CHANNELS];
> > +       struct mtk_chan *ch[MTK_APDMA_CHANNELS];
> > +};
> > +
> > +struct mtk_chan {
> > +       struct virt_dma_chan vc;
> > +       struct list_head node;
> > +       struct dma_slave_config cfg;
> > +       void __iomem *base;
> > +       struct mtk_dma_desc *desc;
> > +
> > +       bool stop;
> > +       bool requested;
> > +
> > +       unsigned int dma_sig;
> 
> The member can be removed, as nothing actually refers to it.
> 

OK, I will remove it in the next patch.

> > +       unsigned int dma_ch;
> 
> A chan_id is already included in struct dma_chan; we can reuse it.
> 

chan_id starts from 0, but in this driver the DMA channel info is
kept in a list. If port 1 is used, the filter_fn function sets dma_ch
to 2 and 3 (tx, rx), so chan_id can't be used.
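
To illustrate the mapping (a sketch with a hypothetical helper name):

        /* port n owns two APDMA channels: 2n for tx, 2n + 1 for rx,
         * so port 1 maps to channels 2 and 3 as noted above
         */
        static inline unsigned int mtk_uart_port_to_dma_ch(unsigned int port,
                                                           bool is_rx)
        {
                return port * 2 + (is_rx ? 1 : 0);
        }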

> > +       unsigned int sgidx;
> 
> The member can also be removed, as nothing actually refers to it.
> 

OK, I will remove it in the next patch.

> > +       unsigned int remain_size;
> 
> The remain_size member seems like unnecessary state for maintaining
> a channel. struct virt_dma_chan already provides a way to manage all
> the descriptors you're operating on; you should reuse the related
> virt_dma_chan functions first.
> 
> Or, if remain_size means the remaining size of the current
> descriptor, then putting it into struct mtk_dma_desc would be better.
> 

I will move it to mtk_dma_desc.

> > +       unsigned int rx_ptr;
> > +};
> > +
> > +struct mtk_dma_sg {
> > +       dma_addr_t addr;
> > +       unsigned int en;                /* number of elements (24-bit) */
> > +       unsigned int fn;                /* number of frames (16-bit) */
> > +};
> > +
> > +struct mtk_dma_desc {
> > +       struct virt_dma_desc vd;
> > +       enum dma_transfer_direction dir;
> > +
> > +       unsigned int sglen;
> > +       struct mtk_dma_sg sg[0];
> > +};
> > +
> > +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param);
> > +static struct of_dma_filter_info mtk_dma_info = {
> > +       .filter_fn = mtk_dma_filter_fn,
> > +};
> > +
> > +static inline struct mtk_dmadev *to_mtk_dma_dev(struct dma_device *d)
> > +{
> > +       return container_of(d, struct mtk_dmadev, ddev);
> > +}
> > +
> > +static inline struct mtk_chan *to_mtk_dma_chan(struct dma_chan *c)
> > +{
> > +       return container_of(c, struct mtk_chan, vc.chan);
> > +}
> > +
> > +static inline struct mtk_dma_desc *to_mtk_dma_desc
> > +       (struct dma_async_tx_descriptor *t)
> > +{
> > +       return container_of(t, struct mtk_dma_desc, vd.tx);
> > +}
> > +
> > +static void mtk_dma_chan_write(struct mtk_chan *c,
> > +                              unsigned int reg, unsigned int val)
> > +{
> > +       writel(val, c->base + reg);
> > +}
> > +
> > +static unsigned int mtk_dma_chan_read(struct mtk_chan *c, unsigned int reg)
> > +{
> > +       return readl(c->base + reg);
> > +}
> > +
> > +static void mtk_dma_desc_free(struct virt_dma_desc *vd)
> > +{
> > +       struct dma_chan *chan = vd->tx.chan;
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +
> > +       kfree(c->desc);
> > +       c->desc = NULL;
> > +}
> > +
> > +static int mtk_dma_clk_enable(struct mtk_dmadev *mtkd)
> > +{
> > +       int ret;
> > +
> > +       ret = clk_prepare_enable(mtkd->clk);
> > +       if (ret) {
> > +               dev_err(mtkd->ddev.dev, "Couldn't enable the clock\n");
> > +               return ret;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static void mtk_dma_clk_disable(struct mtk_dmadev *mtkd)
> > +{
> > +       clk_disable_unprepare(mtkd->clk);
> > +}
> > +
> > +static void mtk_dma_remove_virt_list(dma_cookie_t cookie,
> > +                                    struct virt_dma_chan *vc)
> > +{
> > +       struct virt_dma_desc *vd;
> > +
> > +       if (list_empty(&vc->desc_issued) == 0) {
> > +               list_for_each_entry(vd, &vc->desc_issued, node) {
> > +                       if (cookie == vd->tx.cookie) {
> > +                               INIT_LIST_HEAD(&vc->desc_issued);
> 
> Generally, we don't force-initialize the desc_issued list. Instead,
> as each descriptor completes, we move that descriptor from the
> desc_issued list to the desc_completed list.
> 

OK, I will modify it in the next patch.
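
For reference, the usual virt-dma flow (a sketch using the generic
virt-dma helpers, not this driver's code) deletes the descriptor from
desc_issued when it is programmed and hands it back on completion:

	/* When programming a transfer (vc->lock held): */
	struct virt_dma_desc *vd = vchan_next_desc(&c->vc);

	if (vd) {
		list_del(&vd->node);	/* take it off desc_issued */
		/* ... program the hardware from to_mtk_dma_desc(&vd->tx) ... */
	}

	/*
	 * In the completion path (vc->lock held): this queues the
	 * descriptor on desc_completed and schedules the vchan
	 * tasklet, so no INIT_LIST_HEAD() on desc_issued is needed.
	 */
	vchan_cookie_complete(vd);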

> > +                               break;
> > +                       }
> > +               }
> > +       }
> > +}
> > +
> > +static void mtk_dma_tx_flush(struct dma_chan *chan)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +
> > +       if (mtk_dma_chan_read(c, VFF_FLUSH) == 0U)
> > +               mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_B);
> > +}
> > +
> > +static void mtk_dma_tx_write(struct dma_chan *chan)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       unsigned int txcount = c->remain_size;
> > +       unsigned int len, send, left, wpt, wrap;
> > +
> > +       len = mtk_dma_chan_read(c, VFF_LEN);
> > +
> > +       while ((left = mtk_dma_chan_read(c, VFF_LEFT_SIZE)) > 0U) {
> > +               if (c->remain_size == 0U)
> > +                       break;
> > +               send = min(left, c->remain_size);
> 
> min_t
> 

OK, I will modify it in the next patch.
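
i.e., presumably the suggested form, making the compared type explicit:

	send = min_t(unsigned int, left, c->remain_size);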

> > +               wpt = mtk_dma_chan_read(c, VFF_WPT);
> > +               wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
> > +
> > +               if ((wpt & (len - 1U)) + send < len)
> > +                       mtk_dma_chan_write(c, VFF_WPT, wpt + send);
> > +               else
> > +                       mtk_dma_chan_write(c, VFF_WPT,
> > +                                          ((wpt + send) & (len - 1U))
> > +                                          | wrap);
> > +
> > +               c->remain_size -= send;
> 
> I'm curious why you don't need to set up the hardware from the
> descriptor information
> 

mtk_dma_chan_write does update the hardware here. remain_size is the
length of the TX data.
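
To make the wrap handling above concrete, a worked example with invented
numbers (len and the pointers are illustrative only):

	len  = 0x1000, wpt = 0x0ff0, send = 0x20
	(wpt & (len - 1)) + send = 0x0ff0 + 0x20 = 0x1010 >= len
	new WPT = ((wpt + send) & (len - 1)) | wrap
	        = 0x010 | 0x10000 = 0x10010

i.e. the 16-bit offset wraps back to the start of the ring and bit 16
(MTK_DMA_RING_WRAP) is toggled, so a full ring and an empty ring (equal
offsets) can still be told apart.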

> > +       }
> > +
> > +       if (txcount != c->remain_size) {
> > +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> > +               mtk_dma_tx_flush(chan);
> > +       }
> > +}
> > +
> > +static void mtk_dma_start_tx(struct mtk_chan *c)
> > +{
> > +       if (mtk_dma_chan_read(c, VFF_LEFT_SIZE) == 0U)
> > +               mtk_dma_chan_write(c, VFF_INT_EN, VFF_TX_INT_EN_B);
> > +       else
> > +               mtk_dma_tx_write(&c->vc.chan);
> > +
> > +       c->stop = false;
> > +}
> > +
> > +static void mtk_dma_get_rx_size(struct mtk_chan *c)
> > +{
> > +       unsigned int rx_size = mtk_dma_chan_read(c, VFF_LEN);
> > +       unsigned int rdptr, wrptr, wrreg, rdreg, count;
> > +
> > +       rdreg = mtk_dma_chan_read(c, VFF_RPT);
> > +       wrreg = mtk_dma_chan_read(c, VFF_WPT);
> > +       rdptr = rdreg & MTK_DMA_RING_SIZE;
> > +       wrptr = wrreg & MTK_DMA_RING_SIZE;
> > +       count = ((rdreg ^ wrreg) & MTK_DMA_RING_WRAP) ?
> > +                       (wrptr + rx_size - rdptr) : (wrptr - rdptr);
> > +
> > +       c->remain_size = count;
> > +       c->rx_ptr = rdptr;
> > +
> > +       mtk_dma_chan_write(c, VFF_RPT, wrreg);
> > +}
> > +
> > +static void mtk_dma_start_rx(struct mtk_chan *c)
> > +{
> > +       struct dma_chan *chan = &c->vc.chan;
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +       struct mtk_dma_desc *d = c->desc;
> > +
> > +       if (mtk_dma_chan_read(c, VFF_VALID_SIZE) == 0U)
> > +               return;
> > +
> > +       if (d && d->vd.tx.cookie != 0) {
> 
> The driver benefits from the virtual-channel support, so you can
> manage the life cycle of each descriptor with the virtual-channel
> helpers without handling the cookie details yourself.
> 

OK, I will modify it in the next patch.

> > +               mtk_dma_get_rx_size(c);
> > +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> > +               vchan_cookie_complete(&d->vd);
> > +       } else {
> > +               spin_lock(&mtkd->lock);
> > +               if (list_empty(&mtkd->pending))
> > +                       list_add_tail(&c->node, &mtkd->pending);
> > +               spin_unlock(&mtkd->lock);
> > +               tasklet_schedule(&mtkd->task);
> > +       }
> > +}
> > +
> > +static void mtk_dma_reset(struct mtk_chan *c)
> > +{
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> > +       u32 status;
> > +       int ret;
> > +
> > +       mtk_dma_chan_write(c, VFF_ADDR, 0);
> > +       mtk_dma_chan_write(c, VFF_THRE, 0);
> > +       mtk_dma_chan_write(c, VFF_LEN, 0);
> > +       mtk_dma_chan_write(c, VFF_RST, VFF_WARM_RST_B);
> > +
> > +       ret = readx_poll_timeout(readl,
> > +                                c->base + VFF_EN,
> > +                                status, status == 0, 10, 100);
> > +       if (ret) {
> > +               dev_err(c->vc.chan.device->dev,
> > +                               "dma reset: fail, timeout\n");
> > +               return;
> > +       }
> > +
> > +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> > +               mtk_dma_chan_write(c, VFF_RPT, 0);
> > +       else if (c->cfg.direction == DMA_MEM_TO_DEV)
> > +               mtk_dma_chan_write(c, VFF_WPT, 0);
> > +
> > +       if (mtkd->support_33bits)
> > +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
> > +}
> > +
> > +static void mtk_dma_stop(struct mtk_chan *c)
> > +{
> > +       u32 status;
> > +       int ret;
> > +
> > +       mtk_dma_chan_write(c, VFF_FLUSH, VFF_FLUSH_CLR_B);
> > +       /* Wait for flush */
> > +       ret = readx_poll_timeout(readl,
> > +                                c->base + VFF_FLUSH,
> > +                                status,
> > +                                (status & VFF_FLUSH_B) != VFF_FLUSH_B,
> > +                                10, 100);
> > +       if (ret)
> > +               dev_err(c->vc.chan.device->dev,
> > +                       "dma stop: polling FLUSH fail, DEBUG=0x%x\n",
> > +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> > +
> > +       /*set stop as 1 -> wait until en is 0 -> set stop as 0*/
> > +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_B);
> > +       ret = readx_poll_timeout(readl,
> > +                                c->base + VFF_EN,
> > +                                status, status == 0, 10, 100);
> > +       if (ret)
> > +               dev_err(c->vc.chan.device->dev,
> > +                       "dma stop: polling VFF_EN fail, DEBUG=0x%x\n",
> > +                       mtk_dma_chan_read(c, VFF_DEBUG_STATUS));
> > +
> > +       mtk_dma_chan_write(c, VFF_STOP, VFF_STOP_CLR_B);
> > +       mtk_dma_chan_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B);
> > +
> > +       if (c->cfg.direction == DMA_DEV_TO_MEM)
> > +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> > +       else
> > +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> > +
> > +       c->stop = true;
> > +}
> > +
> > +/*
> > + * This callback schedules all pending channels. We could be more
> > + * clever here by postponing allocation of the real DMA channels to
> > + * this point, and freeing them when our virtual channel becomes idle.
> > + *
> > + * We would then need to deal with 'all channels in-use'
> > + */
> > +static void mtk_dma_sched(unsigned long data)
> > +{
> > +       struct mtk_dmadev *mtkd = (struct mtk_dmadev *)data;
> > +       struct virt_dma_desc *vd;
> > +       struct mtk_chan *c;
> > +       dma_cookie_t cookie;
> > +       unsigned long flags;
> > +       LIST_HEAD(head);
> > +
> > +       spin_lock_irq(&mtkd->lock);
> > +       list_splice_tail_init(&mtkd->pending, &head);
> > +       spin_unlock_irq(&mtkd->lock);
> > +
> > +       if (!list_empty(&head)) {
> > +               c = list_first_entry(&head, struct mtk_chan, node);
> > +               cookie = c->vc.chan.cookie;
> > +
> > +               spin_lock_irqsave(&c->vc.lock, flags);
> > +               if (c->cfg.direction == DMA_DEV_TO_MEM) {
> > +                       list_del_init(&c->node);
> > +                       mtk_dma_start_rx(c);
> > +               } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> > +                       vd = vchan_find_desc(&c->vc, cookie);
> > +                       c->desc = to_mtk_dma_desc(&vd->tx);
> > +                       list_del_init(&c->node);
> > +                       mtk_dma_start_tx(c);
> > +               }
> > +               spin_unlock_irqrestore(&c->vc.lock, flags);
> > +       }
> > +}
> > +
> > +static int mtk_dma_alloc_chan_resources(struct dma_chan *chan)
> > +{
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       int ret = -EBUSY;
> > +
> > +       pm_runtime_get_sync(mtkd->ddev.dev);
> > +
> > +       if (!mtkd->ch[c->dma_ch]) {
> > +               c->base = mtkd->mem_base[c->dma_ch];
> > +               mtkd->ch[c->dma_ch] = c;
> > +               ret = 1;
> > +       }
> > +       c->requested = false;
> > +       mtk_dma_reset(c);
> > +
> > +       return ret;
> > +}
> > +
> > +static void mtk_dma_free_chan_resources(struct dma_chan *chan)
> > +{
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +
> > +       if (c->requested) {
> > +               c->requested = false;
> > +               free_irq(mtkd->dma_irq[c->dma_ch], chan);
> > +       }
> > +
> > +       tasklet_kill(&mtkd->task);
> > +       tasklet_kill(&c->vc.task);
> > +
> > +       c->base = NULL;
> > +       mtkd->ch[c->dma_ch] = NULL;
> > +       vchan_free_chan_resources(&c->vc);
> > +
> > +       dev_dbg(mtkd->ddev.dev, "freeing channel for %u\n", c->dma_sig);
> > +       c->dma_sig = 0;
> > +
> > +       pm_runtime_put_sync(mtkd->ddev.dev);
> > +}
> > +
> > +static enum dma_status mtk_dma_tx_status(struct dma_chan *chan,
> > +                                        dma_cookie_t cookie,
> > +                                        struct dma_tx_state *txstate)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       enum dma_status ret;
> > +       unsigned long flags;
> > +
> > +       if (!txstate)
> > +               return DMA_ERROR;
> > +
> > +       ret = dma_cookie_status(chan, cookie, txstate);
> > +       spin_lock_irqsave(&c->vc.lock, flags);
> > +       if (ret == DMA_IN_PROGRESS) {
> > +               c->rx_ptr = mtk_dma_chan_read(c, VFF_RPT) & MTK_DMA_RING_SIZE;
> > +               dma_set_residue(txstate, c->rx_ptr);
> > +       } else if (ret == DMA_COMPLETE && c->cfg.direction == DMA_DEV_TO_MEM) {
> > +               dma_set_residue(txstate, c->remain_size);
> > +       } else {
> > +               dma_set_residue(txstate, 0);
> > +       }
> > +       spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > +       return ret;
> > +}
> > +
> > +static struct dma_async_tx_descriptor *mtk_dma_prep_slave_sg
> > +       (struct dma_chan *chan, struct scatterlist *sgl,
> > +       unsigned int sglen,     enum dma_transfer_direction dir,
> > +       unsigned long tx_flags, void *context)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       struct scatterlist *sgent;
> > +       struct mtk_dma_desc *d;
> > +       struct mtk_dma_sg *sg;
> > +       unsigned int size, i, j, en;
> > +
> > +       en = 1;
> > +
> > +       if ((dir != DMA_DEV_TO_MEM) &&
> > +               (dir != DMA_MEM_TO_DEV)) {
> > +               dev_err(chan->device->dev, "bad direction\n");
> > +               return NULL;
> > +       }
> > +
> > +       /* Now allocate and setup the descriptor. */
> > +       d = kzalloc(sizeof(*d) + sglen * sizeof(d->sg[0]), GFP_ATOMIC);
> > +       if (!d)
> > +               return NULL;
> > +
> > +       d->dir = dir;
> > +
> > +       j = 0;
> > +       for_each_sg(sgl, sgent, sglen, i) {
> > +               d->sg[j].addr = sg_dma_address(sgent);
> > +               d->sg[j].en = en;
> > +               d->sg[j].fn = sg_dma_len(sgent) / en;
> > +               j++;
> > +       }
> > +
> > +       d->sglen = j;
> > +
> > +       if (dir == DMA_MEM_TO_DEV) {
> > +               for (size = i = 0; i < d->sglen; i++) {
> > +                       sg = &d->sg[i];
> > +                       size += sg->en * sg->fn;
> > +               }
> > +               c->remain_size = size;
> > +       }
> > +
> > +       return vchan_tx_prep(&c->vc, &d->vd, tx_flags);
> > +}
> > +
> > +static void mtk_dma_issue_pending(struct dma_chan *chan)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       struct virt_dma_desc *vd;
> > +       struct mtk_dmadev *mtkd;
> > +       dma_cookie_t cookie;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&c->vc.lock, flags);
> > +       if (c->cfg.direction == DMA_DEV_TO_MEM) {
> > +               cookie = c->vc.chan.cookie;
> > +               mtkd = to_mtk_dma_dev(chan->device);
> > +               if (vchan_issue_pending(&c->vc) && !c->desc) {
> 
> vchan_issue_pending means putting all the descriptors onto the list
> desc_issued. I'd expect somewhere to iterate that list with
> vchan_next_desc and program each descriptor into the hardware, but I
> don't see any similar code.

OK, I will modify it in the next patch.
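
For reference, the canonical shape with the virt-dma helpers would be
roughly the following (a sketch; mtk_dma_start_transfer is a
hypothetical helper that programs the hardware for either direction):

	static void mtk_dma_issue_pending(struct dma_chan *chan)
	{
		struct mtk_chan *c = to_mtk_dma_chan(chan);
		struct virt_dma_desc *vd;
		unsigned long flags;

		spin_lock_irqsave(&c->vc.lock, flags);
		if (vchan_issue_pending(&c->vc) && !c->desc) {
			/* head of desc_issued; no cookie lookup needed */
			vd = vchan_next_desc(&c->vc);
			if (vd) {
				list_del(&vd->node);
				c->desc = to_mtk_dma_desc(&vd->tx);
				mtk_dma_start_transfer(c); /* hypothetical */
			}
		}
		spin_unlock_irqrestore(&c->vc.lock, flags);
	}

vchan_next_desc already returns the oldest issued descriptor, so the
cookie-based lookup via vchan_find_desc becomes unnecessary.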

> 
> > +                       vd = vchan_find_desc(&c->vc, cookie);
> > +                       c->desc = to_mtk_dma_desc(&vd->tx);
> > +               }
> > +       } else if (c->cfg.direction == DMA_MEM_TO_DEV) {
> > +               cookie = c->vc.chan.cookie;
> > +               if (vchan_issue_pending(&c->vc) && !c->desc) {
> 
> ditto as the above
> 

OK, I will modify it in the next patch.

> > +                       vd = vchan_find_desc(&c->vc, cookie);
> > +                       c->desc = to_mtk_dma_desc(&vd->tx);
> > +                       mtk_dma_start_tx(c);
> > +               }
> > +       }
> > +       spin_unlock_irqrestore(&c->vc.lock, flags);
> > +}
> > +
> > +static irqreturn_t mtk_dma_rx_interrupt(int irq, void *dev_id)
> > +{
> > +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&c->vc.lock, flags);
> > +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> > +
> > +       mtk_dma_start_rx(c);
> > +
> > +       spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > +       return IRQ_HANDLED;
> > +}
> > +
> > +static irqreturn_t mtk_dma_tx_interrupt(int irq, void *dev_id)
> > +{
> > +       struct dma_chan *chan = (struct dma_chan *)dev_id;
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       struct mtk_dma_desc *d = c->desc;
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&c->vc.lock, flags);
> > +       if (c->remain_size != 0U) {
> > +               list_add_tail(&c->node, &mtkd->pending);
> > +               tasklet_schedule(&mtkd->task);
> > +       } else {
> > +               mtk_dma_remove_virt_list(d->vd.tx.cookie, &c->vc);
> > +               vchan_cookie_complete(&d->vd);
> > +       }
> > +       spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > +       mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> > +
> > +       return IRQ_HANDLED;
> > +}
> > +
> > +static int mtk_dma_slave_config(struct dma_chan *chan,
> > +                               struct dma_slave_config *cfg)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       struct mtk_dmadev *mtkd = to_mtk_dma_dev(c->vc.chan.device);
> > +       int ret;
> > +
> > +       c->cfg = *cfg;
> > +
> > +       if (cfg->direction == DMA_DEV_TO_MEM) {
> > +               unsigned int rx_len = cfg->src_addr_width * 1024;
> > +
> > +               mtk_dma_chan_write(c, VFF_ADDR, cfg->src_addr);
> > +               mtk_dma_chan_write(c, VFF_LEN, rx_len);
> > +               mtk_dma_chan_write(c, VFF_THRE, VFF_RX_THRE(rx_len));
> > +               mtk_dma_chan_write(c,
> > +                                  VFF_INT_EN, VFF_RX_INT_EN0_B
> > +                                  | VFF_RX_INT_EN1_B);
> > +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_RX_INT_FLAG_CLR_B);
> > +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> > +
> > +               if (!c->requested) {
> > +                       c->requested = true;
> > +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> > +                                         mtk_dma_rx_interrupt,
> > +                                         IRQF_TRIGGER_NONE,
> > +                                         KBUILD_MODNAME, chan);
> > +                       if (ret < 0) {
> > +                               dev_err(chan->device->dev, "Can't request rx dma IRQ\n");
> > +                               return -EINVAL;
> > +                       }
> > +               }
> > +       } else if (cfg->direction == DMA_MEM_TO_DEV)    {
> > +               unsigned int tx_len = cfg->dst_addr_width * 1024;
> > +
> > +               mtk_dma_chan_write(c, VFF_ADDR, cfg->dst_addr);
> > +               mtk_dma_chan_write(c, VFF_LEN, tx_len);
> > +               mtk_dma_chan_write(c, VFF_THRE, VFF_TX_THRE(tx_len));
> > +               mtk_dma_chan_write(c, VFF_INT_FLAG, VFF_TX_INT_FLAG_CLR_B);
> > +               mtk_dma_chan_write(c, VFF_EN, VFF_EN_B);
> > +
> > +               if (!c->requested) {
> > +                       c->requested = true;
> > +                       ret = request_irq(mtkd->dma_irq[c->dma_ch],
> > +                                         mtk_dma_tx_interrupt,
> > +                                         IRQF_TRIGGER_NONE,
> > +                                         KBUILD_MODNAME, chan);
> > +                       if (ret < 0) {
> > +                               dev_err(chan->device->dev, "Can't request tx dma IRQ\n");
> > +                               return -EINVAL;
> > +                       }
> > +               }
> > +       }
> > +
> > +       if (mtkd->support_33bits)
> > +               mtk_dma_chan_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_B);
> > +
> > +       if (mtk_dma_chan_read(c, VFF_EN) != VFF_EN_B) {
> > +               dev_err(chan->device->dev,
> > +                       "config dma dir[%d] fail\n", cfg->direction);
> > +               return -EINVAL;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_terminate_all(struct dma_chan *chan)
> > +{
> > +       struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +       unsigned long flags;
> > +
> > +       spin_lock_irqsave(&c->vc.lock, flags);
> > +       list_del_init(&c->node);
> > +       mtk_dma_stop(c);
> > +       spin_unlock_irqrestore(&c->vc.lock, flags);
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_device_pause(struct dma_chan *chan)
> > +{
> > +       /* just for check caps pass */
> > +       return -EINVAL;
> > +}
> 
> Is it possible not to register the function with the core if the
> hardware doesn't support it?
> 

The hardware can't support pause. The callback is assigned to cmd_pause,
and the 8250 serial code checks cmd_pause; if the function is missing,
it can't get a DMA channel. So the function exists only for that check.
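
For context, the dmaengine core derives the capability bits from the
presence of the callbacks, and the 8250 DMA code refuses channels
without them — roughly like this (paraphrased from dma_get_slave_caps()
and 8250_dma.c of this era, not verbatim):

	/* dmaengine core, dma_get_slave_caps(): */
	caps->cmd_pause = !!device->device_pause;
	caps->cmd_terminate = !!device->device_terminate_all;

	/* 8250 serial, when requesting the DMA channel: */
	if (!caps.cmd_pause || !caps.cmd_terminate ||
	    caps.residue_granularity == DMA_RESIDUE_GRANULARITY_DESCRIPTOR)
		return -EINVAL;	/* channel is rejected */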

> > +
> > +static int mtk_dma_device_resume(struct dma_chan *chan)
> > +{
> > +       /* just for check caps pass */
> > +       return -EINVAL;
> > +}
> 
> ditto as the above
> 
> > +
> > +static void mtk_dma_free(struct mtk_dmadev *mtkd)
> > +{
> > +       tasklet_kill(&mtkd->task);
> > +       while (list_empty(&mtkd->ddev.channels) == 0) {
> > +               struct mtk_chan *c = list_first_entry(&mtkd->ddev.channels,
> > +                       struct mtk_chan, vc.chan.device_node);
> > +
> > +               list_del(&c->vc.chan.device_node);
> > +               tasklet_kill(&c->vc.task);
> > +               devm_kfree(mtkd->ddev.dev, c);
> > +       }
> > +}
> > +
> > +static const struct of_device_id mtk_uart_dma_match[] = {
> > +       { .compatible = "mediatek,mt6577-uart-dma", },
> > +       { /* sentinel */ },
> > +};
> > +MODULE_DEVICE_TABLE(of, mtk_uart_dma_match);
> > +
> > +static int mtk_apdma_probe(struct platform_device *pdev)
> > +{
> > +       struct mtk_dmadev *mtkd;
> > +       struct resource *res;
> > +       struct mtk_chan *c;
> > +       unsigned int i;
> > +       int rc;
> > +
> > +       mtkd = devm_kzalloc(&pdev->dev, sizeof(*mtkd), GFP_KERNEL);
> > +       if (!mtkd)
> > +               return -ENOMEM;
> > +
> > +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> > +               res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> > +               if (!res)
> > +                       return -ENODEV;
> > +               mtkd->mem_base[i] = devm_ioremap_resource(&pdev->dev, res);
> > +               if (IS_ERR(mtkd->mem_base[i]))
> > +                       return PTR_ERR(mtkd->mem_base[i]);
> > +       }
> > +
> > +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> > +               mtkd->dma_irq[i] = platform_get_irq(pdev, i);
> > +               if ((int)mtkd->dma_irq[i] < 0) {
> > +                       dev_err(&pdev->dev, "failed to get IRQ[%d]\n", i);
> > +                       return -EINVAL;
> > +               }
> > +       }
> > +
> > +       mtkd->clk = devm_clk_get(&pdev->dev, NULL);
> > +       if (IS_ERR(mtkd->clk)) {
> > +               dev_err(&pdev->dev, "No clock specified\n");
> > +               return PTR_ERR(mtkd->clk);
> > +       }
> > +
> > +       if (of_property_read_bool(pdev->dev.of_node, "dma-33bits")) {
> > +               dev_info(&pdev->dev, "Support dma 33bits\n");
> > +               mtkd->support_33bits = true;
> > +       }
> > +
> > +       if (mtkd->support_33bits)
> > +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(33));
> > +       else
> > +               rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
> > +       if (rc)
> > +               return rc;
> > +
> > +       dma_cap_set(DMA_SLAVE, mtkd->ddev.cap_mask);
> > +       mtkd->ddev.device_alloc_chan_resources = mtk_dma_alloc_chan_resources;
> > +       mtkd->ddev.device_free_chan_resources = mtk_dma_free_chan_resources;
> > +       mtkd->ddev.device_tx_status = mtk_dma_tx_status;
> > +       mtkd->ddev.device_issue_pending = mtk_dma_issue_pending;
> > +       mtkd->ddev.device_prep_slave_sg = mtk_dma_prep_slave_sg;
> > +       mtkd->ddev.device_config = mtk_dma_slave_config;
> > +       mtkd->ddev.device_pause = mtk_dma_device_pause;
> > +       mtkd->ddev.device_resume = mtk_dma_device_resume;
> > +       mtkd->ddev.device_terminate_all = mtk_dma_terminate_all;
> > +       mtkd->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > +       mtkd->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE);
> > +       mtkd->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> > +       mtkd->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
> > +       mtkd->ddev.dev = &pdev->dev;
> > +       INIT_LIST_HEAD(&mtkd->ddev.channels);
> > +       INIT_LIST_HEAD(&mtkd->pending);
> > +
> > +       spin_lock_init(&mtkd->lock);
> > +       tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
> > +
> > +       mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
> > +       if (of_property_read_u32(pdev->dev.of_node,
> > +                                "dma-requests", &mtkd->dma_requests)) {
> > +               dev_info(&pdev->dev,
> > +                        "Missing dma-requests property, using %u.\n",
> > +                        MTK_APDMA_DEFAULT_REQUESTS);
> > +       }
> > +
> > +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> > +               c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> > +               if (!c)
> > +                       goto err_no_dma;
> > +
> > +               c->vc.desc_free = mtk_dma_desc_free;
> > +               vchan_init(&c->vc, &mtkd->ddev);
> > +               INIT_LIST_HEAD(&c->node);
> > +       }
> > +
> > +       pm_runtime_enable(&pdev->dev);
> > +       pm_runtime_set_active(&pdev->dev);
> > +
> > +       rc = dma_async_device_register(&mtkd->ddev);
> > +       if (rc)
> > +               goto rpm_disable;
> > +
> > +       platform_set_drvdata(pdev, mtkd);
> > +
> > +       if (pdev->dev.of_node) {
> > +               mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
> > +
> > +               /* Device-tree DMA controller registration */
> > +               rc = of_dma_controller_register(pdev->dev.of_node,
> > +                                               of_dma_simple_xlate,
> > +                                               &mtk_dma_info);
> 
> Reusing of_dma_xlate_by_chan_id seems like it would also work for you.

In the filter_fn function, we need to get the DMA channel number.
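
That is, with of_dma_simple_xlate the single DT cell is handed to the
filter as the request number, which is how the channel index reaches
mtk_dma_filter_fn — roughly (the consumer node below is illustrative,
with the cell values from the port-1 example earlier):

	uart1: serial@11003000 {
		/* ... other properties ... */
		dmas = <&apdma 2>, <&apdma 3>;	/* tx, rx */
		dma-names = "tx", "rx";
	};

of_dma_simple_xlate passes &dma_spec->args[0] (here 2 or 3) as the
filter parameter, and the filter stores it in c->dma_ch, whereas
of_dma_xlate_by_chan_id would match on chan_id, which is assigned in
registration order.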
> 
> > +               if (rc)
> > +                       goto dma_remove;
> > +       }
> > +
> > +       return rc;
> > +
> > +dma_remove:
> > +       dma_async_device_unregister(&mtkd->ddev);
> > +rpm_disable:
> > +       pm_runtime_disable(&pdev->dev);
> > +err_no_dma:
> > +       mtk_dma_free(mtkd);
> > +       return rc;
> > +}
> > +
> > +static int mtk_apdma_remove(struct platform_device *pdev)
> > +{
> > +       struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
> > +
> > +       if (pdev->dev.of_node)
> > +               of_dma_controller_free(pdev->dev.of_node);
> > +
> > +       pm_runtime_disable(&pdev->dev);
> > +       pm_runtime_put_noidle(&pdev->dev);
> > +
> > +       dma_async_device_unregister(&mtkd->ddev);
> > +
> > +       mtk_dma_free(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +#ifdef CONFIG_PM_SLEEP
> > +static int mtk_dma_suspend(struct device *dev)
> > +{
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       if (!pm_runtime_suspended(dev))
> > +               mtk_dma_clk_disable(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_resume(struct device *dev)
> > +{
> > +       int ret;
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       if (!pm_runtime_suspended(dev)) {
> > +               ret = mtk_dma_clk_enable(mtkd);
> > +               if (ret)
> > +                       return ret;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_runtime_suspend(struct device *dev)
> > +{
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       mtk_dma_clk_disable(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_runtime_resume(struct device *dev)
> > +{
> > +       int ret;
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       ret = mtk_dma_clk_enable(mtkd);
> > +       if (ret)
> > +               return ret;
> > +
> > +       return 0;
> > +}
> > +
> > +#endif /* CONFIG_PM_SLEEP */
> > +
> > +static const struct dev_pm_ops mtk_dma_pm_ops = {
> > +       SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
> > +       SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
> > +                          mtk_dma_runtime_resume, NULL)
> > +};
> > +
> > +static struct platform_driver mtk_dma_driver = {
> > +       .probe  = mtk_apdma_probe,
> > +       .remove = mtk_apdma_remove,
> > +       .driver = {
> > +               .name           = KBUILD_MODNAME,
> > +               .pm             = &mtk_dma_pm_ops,
> > +               .of_match_table = of_match_ptr(mtk_uart_dma_match),
> > +       },
> > +};
> > +
> > +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
> > +{
> > +       if (chan->device->dev->driver == &mtk_dma_driver.driver) {
> > +               struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +               struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +               unsigned int req = *(unsigned int *)param;
> > +
> > +               if (req <= mtkd->dma_requests) {
> > +                       c->dma_sig = req;
> > +                       c->dma_ch = req;
> > +                       return true;
> > +               }
> > +       }
> > +       return false;
> > +}
> > +
> > +module_platform_driver(mtk_dma_driver);
> > +
> > +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> > +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> > +MODULE_LICENSE("GPL v2");
> > +
> > diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> > index 27bac0b..bef436e 100644
> > --- a/drivers/dma/mediatek/Kconfig
> > +++ b/drivers/dma/mediatek/Kconfig
> > @@ -1,4 +1,15 @@
> >
> > +config DMA_MTK_UART
> > +       tristate "MediaTek SoCs APDMA support for UART"
> > +       depends on OF
> > +       select DMA_ENGINE
> > +       select DMA_VIRTUAL_CHANNELS
> > +       help
> > +         Support for the UART DMA engine found on MediaTek MTK SoCs.
> > +         when 8250 mtk uart is enabled, and if you want to using DMA,
> > +         you can enable the config. the DMA engine just only be used
> > +         with MediaTek Socs.
> > +
> >  config MTK_HSDMA
> >         tristate "MediaTek High-Speed DMA controller support"
> >         depends on ARCH_MEDIATEK || COMPILE_TEST
> > diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> > index 6e778f8..2f2efd9 100644
> > --- a/drivers/dma/mediatek/Makefile
> > +++ b/drivers/dma/mediatek/Makefile
> > @@ -1 +1,2 @@
> > +obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
> >  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> > --
> > 1.7.9.5
> >
> >
> > _______________________________________________
> > Linux-mediatek mailing list
> > Linux-mediatek@lists.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-mediatek

^ permalink raw reply	[flat|nested] 34+ messages in thread

> > +
> > +       spin_lock_init(&mtkd->lock);
> > +       tasklet_init(&mtkd->task, mtk_dma_sched, (unsigned long)mtkd);
> > +
> > +       mtkd->dma_requests = MTK_APDMA_DEFAULT_REQUESTS;
> > +       if (of_property_read_u32(pdev->dev.of_node,
> > +                                "dma-requests", &mtkd->dma_requests)) {
> > +               dev_info(&pdev->dev,
> > +                        "Missing dma-requests property, using %u.\n",
> > +                        MTK_APDMA_DEFAULT_REQUESTS);
> > +       }
> > +
> > +       for (i = 0; i < MTK_APDMA_CHANNELS; i++) {
> > +               c = devm_kzalloc(mtkd->ddev.dev, sizeof(*c), GFP_KERNEL);
> > +               if (!c)
> > +                       goto err_no_dma;
> > +
> > +               c->vc.desc_free = mtk_dma_desc_free;
> > +               vchan_init(&c->vc, &mtkd->ddev);
> > +               INIT_LIST_HEAD(&c->node);
> > +       }
> > +
> > +       pm_runtime_enable(&pdev->dev);
> > +       pm_runtime_set_active(&pdev->dev);
> > +
> > +       rc = dma_async_device_register(&mtkd->ddev);
> > +       if (rc)
> > +               goto rpm_disable;
> > +
> > +       platform_set_drvdata(pdev, mtkd);
> > +
> > +       if (pdev->dev.of_node) {
> > +               mtk_dma_info.dma_cap = mtkd->ddev.cap_mask;
> > +
> > +               /* Device-tree DMA controller registration */
> > +               rc = of_dma_controller_register(pdev->dev.of_node,
> > +                                               of_dma_simple_xlate,
> > +                                               &mtk_dma_info);
> 
> reusing of_dma_xlate_by_chan_id seems like it would also work for you.

In the filter_fn function, we need to get the DMA request number.
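
For context, a condensed sketch of how of_dma_simple_xlate() carries
that number into the filter (details per the dmaengine core, so treat
this as a sketch):

	/* the single cell from the client's dmas property becomes
	 * args[0], which the core passes to mtk_dma_filter_fn() as
	 * 'param' -- the request/channel number this driver needs
	 */
	if (dma_spec->args_count != 1)
		return NULL;
	return dma_request_channel(info->dma_cap, info->filter_fn,
				   &dma_spec->args[0]);
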
> 
> > +               if (rc)
> > +                       goto dma_remove;
> > +       }
> > +
> > +       return rc;
> > +
> > +dma_remove:
> > +       dma_async_device_unregister(&mtkd->ddev);
> > +rpm_disable:
> > +       pm_runtime_disable(&pdev->dev);
> > +err_no_dma:
> > +       mtk_dma_free(mtkd);
> > +       return rc;
> > +}
> > +
> > +static int mtk_apdma_remove(struct platform_device *pdev)
> > +{
> > +       struct mtk_dmadev *mtkd = platform_get_drvdata(pdev);
> > +
> > +       if (pdev->dev.of_node)
> > +               of_dma_controller_free(pdev->dev.of_node);
> > +
> > +       pm_runtime_disable(&pdev->dev);
> > +       pm_runtime_put_noidle(&pdev->dev);
> > +
> > +       dma_async_device_unregister(&mtkd->ddev);
> > +
> > +       mtk_dma_free(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +#ifdef CONFIG_PM_SLEEP
> > +static int mtk_dma_suspend(struct device *dev)
> > +{
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       if (!pm_runtime_suspended(dev))
> > +               mtk_dma_clk_disable(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_resume(struct device *dev)
> > +{
> > +       int ret;
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       if (!pm_runtime_suspended(dev)) {
> > +               ret = mtk_dma_clk_enable(mtkd);
> > +               if (ret)
> > +                       return ret;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_runtime_suspend(struct device *dev)
> > +{
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       mtk_dma_clk_disable(mtkd);
> > +
> > +       return 0;
> > +}
> > +
> > +static int mtk_dma_runtime_resume(struct device *dev)
> > +{
> > +       int ret;
> > +       struct mtk_dmadev *mtkd = dev_get_drvdata(dev);
> > +
> > +       ret = mtk_dma_clk_enable(mtkd);
> > +       if (ret)
> > +               return ret;
> > +
> > +       return 0;
> > +}
> > +
> > +#endif /* CONFIG_PM_SLEEP */
> > +
> > +static const struct dev_pm_ops mtk_dma_pm_ops = {
> > +       SET_SYSTEM_SLEEP_PM_OPS(mtk_dma_suspend, mtk_dma_resume)
> > +       SET_RUNTIME_PM_OPS(mtk_dma_runtime_suspend,
> > +                          mtk_dma_runtime_resume, NULL)
> > +};
> > +
> > +static struct platform_driver mtk_dma_driver = {
> > +       .probe  = mtk_apdma_probe,
> > +       .remove = mtk_apdma_remove,
> > +       .driver = {
> > +               .name           = KBUILD_MODNAME,
> > +               .pm             = &mtk_dma_pm_ops,
> > +               .of_match_table = of_match_ptr(mtk_uart_dma_match),
> > +       },
> > +};
> > +
> > +static bool mtk_dma_filter_fn(struct dma_chan *chan, void *param)
> > +{
> > +       if (chan->device->dev->driver == &mtk_dma_driver.driver) {
> > +               struct mtk_dmadev *mtkd = to_mtk_dma_dev(chan->device);
> > +               struct mtk_chan *c = to_mtk_dma_chan(chan);
> > +               unsigned int req = *(unsigned int *)param;
> > +
> > +               if (req <= mtkd->dma_requests) {
> > +                       c->dma_sig = req;
> > +                       c->dma_ch = req;
> > +                       return true;
> > +               }
> > +       }
> > +       return false;
> > +}
> > +
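A hypothetical consumer-side request showing how 'param' reaches the
filter (the channel numbers here are illustrative only):

	/* with dmas = <&apdma 2>, <&apdma 3> and dma-names "tx", "rx",
	 * the cell value arrives in mtk_dma_filter_fn() as *param and
	 * is stored into dma_ch/dma_sig
	 */
	struct dma_chan *rx = dma_request_slave_channel(dev, "rx");
	if (!rx)
		return -ENODEV;
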
> > +module_platform_driver(mtk_dma_driver);
> > +
> > +MODULE_DESCRIPTION("MediaTek UART APDMA Controller Driver");
> > +MODULE_AUTHOR("Long Cheng <long.cheng@mediatek.com>");
> > +MODULE_LICENSE("GPL v2");
> > +
> > diff --git a/drivers/dma/mediatek/Kconfig b/drivers/dma/mediatek/Kconfig
> > index 27bac0b..bef436e 100644
> > --- a/drivers/dma/mediatek/Kconfig
> > +++ b/drivers/dma/mediatek/Kconfig
> > @@ -1,4 +1,15 @@
> >
> > +config DMA_MTK_UART
> > +       tristate "MediaTek SoCs APDMA support for UART"
> > +       depends on OF
> > +       select DMA_ENGINE
> > +       select DMA_VIRTUAL_CHANNELS
> > +       help
> > +         Support for the UART DMA engine found on MediaTek SoCs.
> > +         When the 8250 MediaTek UART driver is enabled and you want
> > +         to use DMA, enable this config. This DMA engine is only
> > +         used on MediaTek SoCs.
> > +
> >  config MTK_HSDMA
> >         tristate "MediaTek High-Speed DMA controller support"
> >         depends on ARCH_MEDIATEK || COMPILE_TEST
> > diff --git a/drivers/dma/mediatek/Makefile b/drivers/dma/mediatek/Makefile
> > index 6e778f8..2f2efd9 100644
> > --- a/drivers/dma/mediatek/Makefile
> > +++ b/drivers/dma/mediatek/Makefile
> > @@ -1 +1,2 @@
> > +obj-$(CONFIG_DMA_MTK_UART) += 8250_mtk_dma.o
> >  obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
> > --
> > 1.7.9.5
> >
> >



^ permalink raw reply	[flat|nested] 34+ messages in thread

* [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support
  2018-12-06  9:55 ` Long Cheng
@ 2018-12-06 21:22 ` Sean Wang
  -1 siblings, 0 replies; 34+ messages in thread
From: Sean Wang @ 2018-12-06 21:22 UTC (permalink / raw)
  To: long.cheng
  Cc: vkoul, robh+dt, mark.rutland, devicetree, srv_heupstream, gregkh,
	linux-kernel, dmaengine, linux-mediatek, linux-serial, jslaby,
	Matthias Brugger, yingjoe.chen, dan.j.williams, linux-arm-kernel,
	yt.shen

On Thu, Dec 6, 2018 at 1:55 AM Long Cheng <long.cheng@mediatek.com> wrote:
>
> On Wed, 2018-12-05 at 13:07 -0800, Sean Wang wrote:
> > .
> > On Wed, Dec 5, 2018 at 1:31 AM Long Cheng <long.cheng@mediatek.com> wrote:
> > >
> > > In DMA engine framework, add 8250 mtk dma to support it.
> > >
> > > Signed-off-by: Long Cheng <long.cheng@mediatek.com>
> > > ---
> > >  drivers/dma/mediatek/8250_mtk_dma.c |  894 +++++++++++++++++++++++++++++++++++
> > >  drivers/dma/mediatek/Kconfig        |   11 +
> > >  drivers/dma/mediatek/Makefile       |    1 +
> > >  3 files changed, 906 insertions(+)
> > >  create mode 100644 drivers/dma/mediatek/8250_mtk_dma.c
> > >
> > > diff --git a/drivers/dma/mediatek/8250_mtk_dma.c b/drivers/dma/mediatek/8250_mtk_dma.c
> > > new file mode 100644
> > > index 0000000..3454679
> > > --- /dev/null
> > > +++ b/drivers/dma/mediatek/8250_mtk_dma.c
> > > @@ -0,0 +1,894 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Mediatek 8250 DMA driver.
> > > + *
> > > + * Copyright (c) 2018 MediaTek Inc.
> > > + * Author: Long Cheng <long.cheng@mediatek.com>
> > > + */
> > > +
> > > +#include <linux/clk.h>
> > > +#include <linux/dmaengine.h>
> > > +#include <linux/dma-mapping.h>
> > > +#include <linux/err.h>
> > > +#include <linux/init.h>
> > > +#include <linux/interrupt.h>
> > > +#include <linux/list.h>
> > > +#include <linux/module.h>
> > > +#include <linux/of_dma.h>
> > > +#include <linux/of_device.h>
> > > +#include <linux/platform_device.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/spinlock.h>
> > > +#include <linux/pm_runtime.h>
> > > +#include <linux/iopoll.h>
> > > +
> > > +#include "../virt-dma.h"
> > > +
> > > +#define MTK_APDMA_DEFAULT_REQUESTS     127
> > > +#define MTK_APDMA_CHANNELS             (CONFIG_SERIAL_8250_NR_UARTS * 2)
> > > +
> > > +#define VFF_EN_B               BIT(0)
> > > +#define VFF_STOP_B             BIT(0)
> > > +#define VFF_FLUSH_B            BIT(0)
> > > +#define VFF_4G_SUPPORT_B       BIT(0)
> > > +#define VFF_RX_INT_EN0_B       BIT(0)  /*rx valid size >=  vff thre*/
> > > +#define VFF_RX_INT_EN1_B       BIT(1)
> > > +#define VFF_TX_INT_EN_B                BIT(0)  /*tx left size >= vff thre*/
> > > +#define VFF_WARM_RST_B         BIT(0)
> > > +#define VFF_RX_INT_FLAG_CLR_B  (BIT(0) | BIT(1))
> > > +#define VFF_TX_INT_FLAG_CLR_B  0
> > > +#define VFF_STOP_CLR_B         0
> > > +#define VFF_FLUSH_CLR_B                0
> > > +#define VFF_INT_EN_CLR_B       0
> > > +#define VFF_4G_SUPPORT_CLR_B   0
> > > +
> > > +/* interrupt trigger level for tx */
> > > +#define VFF_TX_THRE(n)         ((n) * 7 / 8)
> > > +/* interrupt trigger level for rx */
> > > +#define VFF_RX_THRE(n)         ((n) * 3 / 4)
> > > +
> > > +#define MTK_DMA_RING_SIZE      0xffffU
> > > +/* invert this bit when wrap ring head again*/
> > > +#define MTK_DMA_RING_WRAP      0x10000U
> > > +
> > > +#define VFF_INT_FLAG           0x00
> > > +#define VFF_INT_EN             0x04
> > > +#define VFF_EN                 0x08
> > > +#define VFF_RST                        0x0c
> > > +#define VFF_STOP               0x10
> > > +#define VFF_FLUSH              0x14
> > > +#define VFF_ADDR               0x1c
> > > +#define VFF_LEN                        0x24
> > > +#define VFF_THRE               0x28
> > > +#define VFF_WPT                        0x2c
> > > +#define VFF_RPT                        0x30
> > > +/*TX: the buffer size HW can read. RX: the buffer size SW can read.*/
> > > +#define VFF_VALID_SIZE         0x3c
> > > +/*TX: the buffer size SW can write. RX: the buffer size HW can write.*/
> > > +#define VFF_LEFT_SIZE          0x40
> > > +#define VFF_DEBUG_STATUS       0x50
> > > +#define VFF_4G_SUPPORT         0x54
> > > +
> > > +struct mtk_dmadev {
> > > +       struct dma_device ddev;
> > > +       void __iomem *mem_base[MTK_APDMA_CHANNELS];
> > > +       spinlock_t lock; /* dma dev lock */
> > > +       struct tasklet_struct task;
> > > +       struct list_head pending;
> > > +       struct clk *clk;
> > > +       unsigned int dma_requests;
> > > +       bool support_33bits;
> > > +       unsigned int dma_irq[MTK_APDMA_CHANNELS];
> > > +       struct mtk_chan *ch[MTK_APDMA_CHANNELS];
> > > +};
> > > +
> > > +struct mtk_chan {
> > > +       struct virt_dma_chan vc;
> > > +       struct list_head node;
> > > +       struct dma_slave_config cfg;
> > > +       void __iomem *base;
> > > +       struct mtk_dma_desc *desc;
> > > +
> > > +       bool stop;
> > > +       bool requested;
> > > +
> > > +       unsigned int dma_sig;
> >
> > the member can be removed as no real user would refer to it
> >
>
> Ok, I will remove it in the next patch
>
> > > +       unsigned int dma_ch;
> >
> > a chan_id is already included in struct dma_chan, we can reuse it
> >
>
> chan_id starts from 0, but in this driver the dma info is stored in a
> list. If port1 is used, filter_fn will set dma_ch to 2, 3 (tx, rx), so
> chan_id can't be used.
>

If you use of_dma_xlate_by_chan_id in of_dma_controller_register, the
client device can still get the right channel via the appropriate
chan_id expressed in the dmas property in the dts.
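
Registration with that helper would look like this (a sketch; the core
then matches the dmas cell against chan->chan_id directly and no filter
function is needed):

	rc = of_dma_controller_register(pdev->dev.of_node,
					of_dma_xlate_by_chan_id,
					&mtkd->ddev);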

> > > +       unsigned int sgidx;
> >

[ ... ]

> Ok, I will modify it
> > > +               wpt = mtk_dma_chan_read(c, VFF_WPT);
> > > +               wrap = wpt & MTK_DMA_RING_WRAP ? 0U : MTK_DMA_RING_WRAP;
> > > +
> > > +               if ((wpt & (len - 1U)) + send < len)
> > > +                       mtk_dma_chan_write(c, VFF_WPT, wpt + send);
> > > +               else
> > > +                       mtk_dma_chan_write(c, VFF_WPT,
> > > +                                          ((wpt + send) & (len - 1U))
> > > +                                          | wrap);
> > > +
> > > +               c->remain_size -= send;
> >
> > I'm curious why you don't need to set up the hardware from the
> > descriptor information
> >
>
> mtk_dma_chan_write updates the hardware. remain_size is the length of
> the TX data.

I don't think it is enough to be a full dmaengine if the driver only
supports a single continuous data move for the device_prep_slave_sg
callback.

We could try to turn each sg item's information, such as the source or
destination address and the data length, into part of the descriptor.
Then, when a descriptor is issued, program VFF_ADDR/LEN and start the
data move through the other VFF_* registers, rather than assuming the
data region is always fixed to what the user assigned at
mtk_dma_slave_config.
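
One possible shape of that suggestion, using only names already defined
in this patch (a hypothetical helper, not part of the series):

	/* program the ring from the descriptor being issued instead
	 * of the snapshot taken at mtk_dma_slave_config()
	 */
	static void mtk_dma_program_sg(struct mtk_chan *c,
				       struct mtk_dma_sg *sg)
	{
		mtk_dma_chan_write(c, VFF_ADDR, sg->addr);
		mtk_dma_chan_write(c, VFF_LEN, sg->en * sg->fn);
	}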

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2018-12-06 21:22 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-05 21:07 [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek uart DMA support Sean Wang
2018-12-05 21:07 ` [PATCH v2 2/4] " Sean Wang
2018-12-05 21:07 ` Sean Wang
  -- strict thread matches above, loose matches on Subject: below --
2018-12-06 21:22 [v2,2/4] " Sean Wang
2018-12-06 21:22 ` [PATCH v2 2/4] " Sean Wang
2018-12-06 21:22 ` Sean Wang
2018-12-06  9:55 [v2,2/4] " Long Cheng
2018-12-06  9:55 ` [PATCH v2 2/4] " Long Cheng
2018-12-06  9:55 ` Long Cheng
2018-12-05  8:43 [v2,4/4] arm: dts: mt2701: add uart APDMA to device tree Long Cheng
2018-12-05  8:43 ` [PATCH v2 4/4] " Long Cheng
2018-12-05  8:43 ` Long Cheng
2018-12-05  8:43 ` Long Cheng
2018-12-05  8:42 [v2,3/4] serial: 8250-mtk: add uart DMA support Long Cheng
2018-12-05  8:42 ` [PATCH v2 3/4] " Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 [v2,2/4] dmaengine: mtk_uart_dma: add Mediatek " Long Cheng
2018-12-05  8:42 ` [PATCH v2 2/4] " Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 [v2,1/4] dt-bindings: dma: uart: add uart dma bindings Long Cheng
2018-12-05  8:42 ` [PATCH v2 1/4] " Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 [PATCH v2 0/4] add uart DMA function Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05  8:42 ` Long Cheng
2018-12-05 10:03 ` Greg Kroah-Hartman
2018-12-05 10:03   ` Greg Kroah-Hartman
2018-12-05 16:31   ` Vinod Koul
2018-12-05 16:31     ` Vinod Koul
2018-12-05 18:50     ` Greg Kroah-Hartman
2018-12-05 18:50       ` Greg Kroah-Hartman
