All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/2] Add support for AXI DMA controller on Milbeaut series
@ 2019-03-25  4:15 Kazuhiro Kasai
  2019-03-25  4:15   ` [PATCH 1/2] " Kazuhiro Kasai
  2019-03-25  4:15   ` [PATCH 2/2] " Kazuhiro Kasai
  0 siblings, 2 replies; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-03-25  4:15 UTC (permalink / raw)
  To: vkoul, robh+dt, mark.rutland
  Cc: dmaengine, devicetree, orito.takao, sugaya.taichi,
	kanematsu.shinji, jaswinder.singh, masami.hiramatsu,
	linux-kernel, Kazuhiro Kasai

The following series adds AXI DMA controller support for the Milbeaut series.
This controller is only capable of memory to memory transfer.
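
For reference, a client is expected to drive the controller through the
generic dmaengine memcpy API, along these lines (illustrative sketch only;
error handling omitted, and dst/src/len are assumed to be DMA addresses and
a length prepared by the caller):

	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *txd;
	dma_cookie_t cookie;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	txd = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	cookie = dmaengine_submit(txd);
	dma_async_issue_pending(chan);
	/* completion can be polled with dma_sync_wait(chan, cookie) */
	dma_release_channel(chan);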

Kazuhiro Kasai (2):
  dt-bindings: dmaengine: Add Milbeaut AXI DMA controller bindings
  dmaengine: milbeaut: Add Milbeaut AXI DMA controller

 .../devicetree/bindings/dma/xdmac-milbeaut.txt     |  24 ++
 drivers/dma/Kconfig                                |   8 +
 drivers/dma/Makefile                               |   1 +
 drivers/dma/xdmac-milbeaut.c                       | 353 +++++++++++++++++++++
 4 files changed, 386 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt
 create mode 100644 drivers/dma/xdmac-milbeaut.c

--
1.9.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/2] dt-bindings: dmaengine: Add Milbeaut AXI DMA controller bindings
  2019-03-25  4:15 [PATCH 0/2] Add support for AXI DMA controller on Milbeaut series Kazuhiro Kasai
@ 2019-03-25  4:15   ` Kazuhiro Kasai
  2019-03-25  4:15   ` [PATCH 2/2] " Kazuhiro Kasai
  1 sibling, 0 replies; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-03-25  4:15 UTC (permalink / raw)
  To: vkoul, robh+dt, mark.rutland
  Cc: dmaengine, devicetree, orito.takao, sugaya.taichi,
	kanematsu.shinji, jaswinder.singh, masami.hiramatsu,
	linux-kernel, Kazuhiro Kasai

Add Milbeaut AXI DMA controller bindings. This DMA controller is
only capable of memory to memory transfer.

Signed-off-by: Kazuhiro Kasai <kasai.kazuhiro@socionext.com>
---
 .../devicetree/bindings/dma/xdmac-milbeaut.txt     | 24 ++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt

diff --git a/Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt b/Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt
new file mode 100644
index 0000000..3a77fb1
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt
@@ -0,0 +1,24 @@
+* Milbeaut AXI DMA Controller
+
+The Milbeaut AXI DMA controller has only memory to memory transfer capability.
+
+* DMA controller
+
+Required properties:
+- compatible: 		Should be "socionext,milbeaut-m10v-xdmac"
+- reg:			Should contain DMA registers location and length.
+- interrupts: 		Should contain all of the per-channel DMA interrupts.
+- #dma-cells: 		Should be 1.
+- dma-channels: 	Number of DMA channels supported by the controller.
+
+Example:
+	xdmac0: dma-controller@1c250000 {
+		compatible = "socionext,milbeaut-m10v-xdmac";
+		reg = <0x1c250000 0x1000>;
+		interrupts = <0 17 0x4>,
+			     <0 18 0x4>,
+			     <0 19 0x4>,
+			     <0 20 0x4>;
+		#dma-cells = <1>;
+		dma-channels = <4>;
+	};

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
  2019-03-25  4:15 [PATCH 0/2] Add support for AXI DMA controller on Milbeaut series Kazuhiro Kasai
@ 2019-03-25  4:15   ` Kazuhiro Kasai
  2019-03-25  4:15   ` [PATCH 2/2] " Kazuhiro Kasai
  1 sibling, 0 replies; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-03-25  4:15 UTC (permalink / raw)
  To: vkoul, robh+dt, mark.rutland
  Cc: dmaengine, devicetree, orito.takao, sugaya.taichi,
	kanematsu.shinji, jaswinder.singh, masami.hiramatsu,
	linux-kernel, Kazuhiro Kasai

Add Milbeaut AXI DMA controller. This DMA controller is
only capable of memory to memory transfer.

Signed-off-by: Kazuhiro Kasai <kasai.kazuhiro@socionext.com>
---
 drivers/dma/Kconfig          |   8 +
 drivers/dma/Makefile         |   1 +
 drivers/dma/xdmac-milbeaut.c | 353 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 362 insertions(+)
 create mode 100644 drivers/dma/xdmac-milbeaut.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 0b1dfb5..733fe5f 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -612,6 +612,14 @@ config UNIPHIER_MDMAC
 	  UniPhier platform.  This DMA controller is used as the external
 	  DMA engine of the SD/eMMC controllers of the LD4, Pro4, sLD8 SoCs.
 
+config XDMAC_MILBEAUT
+       tristate "Milbeaut AXI DMA support"
+       depends on ARCH_MILBEAUT || COMPILE_TEST
+       select DMA_ENGINE
+       help
+         Support for the Milbeaut AXI DMA controller driver. This DMA
+         controller has only memory to memory transfer capability.
+
 config XGENE_DMA
 	tristate "APM X-Gene DMA support"
 	depends on ARCH_XGENE || COMPILE_TEST
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 6126e1c..4aab810 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -72,6 +72,7 @@ obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
 obj-$(CONFIG_TEGRA210_ADMA) += tegra210-adma.o
 obj-$(CONFIG_TIMB_DMA) += timb_dma.o
 obj-$(CONFIG_UNIPHIER_MDMAC) += uniphier-mdmac.o
+obj-$(CONFIG_XDMAC_MILBEAUT) += xdmac-milbeaut.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
diff --git a/drivers/dma/xdmac-milbeaut.c b/drivers/dma/xdmac-milbeaut.c
new file mode 100644
index 0000000..7035c61
--- /dev/null
+++ b/drivers/dma/xdmac-milbeaut.c
@@ -0,0 +1,353 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Socionext Inc.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+
+#include "dmaengine.h"
+
+/* global register */
+#define M10V_XDACS 0x00
+
+/* channel local register */
+#define M10V_XDTBC 0x10
+#define M10V_XDSSA 0x14
+#define M10V_XDDSA 0x18
+#define M10V_XDSAC 0x1C
+#define M10V_XDDAC 0x20
+#define M10V_XDDCC 0x24
+#define M10V_XDDES 0x28
+#define M10V_XDDPC 0x2C
+#define M10V_XDDSD 0x30
+
+#define M10V_XDACS_XE BIT(28)
+
+#define M10V_XDSAC_SBS	GENMASK(17, 16)
+#define M10V_XDSAC_SBL	GENMASK(11, 8)
+
+#define M10V_XDDAC_DBS	GENMASK(17, 16)
+#define M10V_XDDAC_DBL	GENMASK(11, 8)
+
+#define M10V_XDDES_CE	BIT(28)
+#define M10V_XDDES_SE	BIT(24)
+#define M10V_XDDES_SA	BIT(15)
+#define M10V_XDDES_TF	GENMASK(23, 20)
+#define M10V_XDDES_EI	BIT(1)
+#define M10V_XDDES_TI	BIT(0)
+
+#define M10V_XDDSD_IS_MASK	GENMASK(3, 0)
+#define M10V_XDDSD_IS_NORMAL	0x8
+
+#define M10V_XDMAC_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
+				 BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
+
+#define M10V_XDMAC_CHAN_BASE(base, i)	((base) + (i) * 0x30)
+
+#define to_m10v_dma_chan(c)	container_of((c), struct m10v_dma_chan, chan)
+
+struct m10v_dma_desc {
+	struct dma_async_tx_descriptor txd;
+	size_t len;
+	dma_addr_t src;
+	dma_addr_t dst;
+};
+
+struct m10v_dma_chan {
+	struct dma_chan chan;
+	struct m10v_dma_device *mdmac;
+	void __iomem *regs;
+	int irq;
+	struct m10v_dma_desc mdesc;
+	spinlock_t lock;
+};
+
+struct m10v_dma_device {
+	struct dma_device dmac;
+	void __iomem *regs;
+	unsigned int channels;
+	struct m10v_dma_chan mchan[0];
+};
+
+static void m10v_xdmac_enable_dma(struct m10v_dma_device *mdmac)
+{
+	unsigned int val;
+
+	val = readl(mdmac->regs + M10V_XDACS);
+	val &= ~M10V_XDACS_XE;
+	val |= FIELD_PREP(M10V_XDACS_XE, 1);
+	writel(val, mdmac->regs + M10V_XDACS);
+}
+
+static void m10v_xdmac_disable_dma(struct m10v_dma_device *mdmac)
+{
+	unsigned int val;
+
+	val = readl(mdmac->regs + M10V_XDACS);
+	val &= ~M10V_XDACS_XE;
+	val |= FIELD_PREP(M10V_XDACS_XE, 0);
+	writel(val, mdmac->regs + M10V_XDACS);
+}
+
+static void m10v_xdmac_config_chan(struct m10v_dma_chan *mchan)
+{
+	u32 val;
+
+	val = mchan->mdesc.len - 1;
+	writel(val, mchan->regs + M10V_XDTBC);
+
+	val = mchan->mdesc.src;
+	writel(val, mchan->regs + M10V_XDSSA);
+
+	val = mchan->mdesc.dst;
+	writel(val, mchan->regs + M10V_XDDSA);
+
+	val = readl(mchan->regs + M10V_XDSAC);
+	val &= ~(M10V_XDSAC_SBS | M10V_XDSAC_SBL);
+	val |= FIELD_PREP(M10V_XDSAC_SBS, 0x3) |
+	       FIELD_PREP(M10V_XDSAC_SBL, 0xf);
+	writel(val, mchan->regs + M10V_XDSAC);
+
+	val = readl(mchan->regs + M10V_XDDAC);
+	val &= ~(M10V_XDDAC_DBS | M10V_XDDAC_DBL);
+	val |= FIELD_PREP(M10V_XDDAC_DBS, 0x3) |
+	       FIELD_PREP(M10V_XDDAC_DBL, 0xf);
+	writel(val, mchan->regs + M10V_XDDAC);
+}
+
+static void m10v_xdmac_enable_chan(struct m10v_dma_chan *mchan)
+{
+	u32 val;
+
+	val = readl(mchan->regs + M10V_XDDES);
+	val &= ~(M10V_XDDES_CE |
+	         M10V_XDDES_SE |
+	         M10V_XDDES_TF |
+	         M10V_XDDES_EI |
+	         M10V_XDDES_TI);
+	val |= FIELD_PREP(M10V_XDDES_CE, 1) |
+	       FIELD_PREP(M10V_XDDES_SE, 1) |
+	       FIELD_PREP(M10V_XDDES_TF, 1) |
+	       FIELD_PREP(M10V_XDDES_EI, 1) |
+	       FIELD_PREP(M10V_XDDES_TI, 1);
+	writel(val, mchan->regs + M10V_XDDES);
+}
+
+static void m10v_xdmac_disable_chan(struct m10v_dma_chan *mchan)
+{
+	u32 val;
+
+	val = readl(mchan->regs + M10V_XDDES);
+	val &= ~M10V_XDDES_CE;
+	val |= FIELD_PREP(M10V_XDDES_CE, 0);
+	writel(val, mchan->regs + M10V_XDDES);
+}
+
+static irqreturn_t m10v_xdmac_irq(int irq, void *data)
+{
+	struct m10v_dma_chan *mchan = data;
+	unsigned long flags;
+	u32 val;
+
+	val = readl(mchan->regs + M10V_XDDSD);
+	val = FIELD_GET(M10V_XDDSD_IS_MASK, val);
+
+	if (val != M10V_XDDSD_IS_NORMAL)
+		dev_err(mchan->chan.device->dev, "XDMAC error with status: %x\n", val);
+
+	val = FIELD_PREP(M10V_XDDSD_IS_MASK, 0x0);
+	writel(val, mchan->regs + M10V_XDDSD);
+
+	spin_lock_irqsave(&mchan->lock, flags);
+	dma_cookie_complete(&mchan->mdesc.txd);
+	spin_unlock_irqrestore(&mchan->lock, flags);
+
+	if (mchan->mdesc.txd.flags & DMA_PREP_INTERRUPT)
+		dmaengine_desc_get_callback_invoke(&mchan->mdesc.txd, NULL);
+
+	return IRQ_HANDLED;
+}
+
+static void m10v_xdmac_issue_pending(struct dma_chan *chan)
+{
+	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
+
+	m10v_xdmac_config_chan(mchan);
+
+	m10v_xdmac_enable_chan(mchan);
+}
+
+static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
+{
+	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
+	dma_cookie_t cookie;
+	unsigned long flags;
+
+	spin_lock_irqsave(&mchan->lock, flags);
+	cookie = dma_cookie_assign(txd);
+	spin_unlock_irqrestore(&mchan->lock, flags);
+
+	return cookie;
+}
+
+static struct dma_async_tx_descriptor *
+m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
+			   dma_addr_t src, size_t len, unsigned long flags)
+{
+	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
+
+	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
+	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
+	mchan->mdesc.txd.callback = NULL;
+	mchan->mdesc.txd.flags = flags;
+	mchan->mdesc.txd.cookie = -EBUSY;
+
+	mchan->mdesc.len = len;
+	mchan->mdesc.src = src;
+	mchan->mdesc.dst = dst;
+
+	return &mchan->mdesc.txd;
+}
+
+static int m10v_xdmac_device_terminate_all(struct dma_chan *chan)
+{
+	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
+
+	m10v_xdmac_disable_chan(mchan);
+
+	return 0;
+}
+
+static int m10v_xdmac_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&mchan->lock, flags);
+	dma_cookie_init(chan);
+	spin_unlock_irqrestore(&mchan->lock, flags);
+
+	return 1;
+}
+
+static int m10v_xdmac_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct m10v_dma_chan *mchan;
+	struct m10v_dma_device *mdmac;
+	struct resource *res;
+	unsigned int channels;
+	int ret, i;
+
+	ret = device_property_read_u32(&pdev->dev, "dma-channels", &channels);
+	if (ret) {
+		dev_err(&pdev->dev, "get dma-channels failed\n");
+		return ret;
+	}
+
+	mdmac = devm_kzalloc(&pdev->dev,
+			     struct_size(mdmac, mchan, channels),
+			     GFP_KERNEL);
+	if (!mdmac)
+		return -ENOMEM;
+
+	mdmac->channels = channels;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	mdmac->regs = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(mdmac->regs))
+		return PTR_ERR(mdmac->regs);
+
+	INIT_LIST_HEAD(&mdmac->dmac.channels);
+	for (i = 0; i < mdmac->channels; i++) {
+		mchan = &mdmac->mchan[i];
+		mchan->irq = platform_get_irq(pdev, i);
+		ret = devm_request_irq(&pdev->dev, mchan->irq, m10v_xdmac_irq,
+				       IRQF_SHARED, dev_name(&pdev->dev), mchan);
+		if (ret) {
+			dev_err(&pdev->dev, "failed to request IRQ\n");
+			return ret;
+		}
+		mchan->mdmac = mdmac;
+		mchan->chan.device = &mdmac->dmac;
+		list_add_tail(&mchan->chan.device_node,
+				&mdmac->dmac.channels);
+
+		mchan->regs = M10V_XDMAC_CHAN_BASE(mdmac->regs, i);
+		spin_lock_init(&mchan->lock);
+	}
+
+	dma_cap_set(DMA_MEMCPY, mdmac->dmac.cap_mask);
+
+	mdmac->dmac.device_alloc_chan_resources = m10v_xdmac_alloc_chan_resources;
+	mdmac->dmac.device_prep_dma_memcpy = m10v_xdmac_prep_dma_memcpy;
+	mdmac->dmac.device_issue_pending = m10v_xdmac_issue_pending;
+	mdmac->dmac.device_tx_status = dma_cookie_status;
+	mdmac->dmac.device_terminate_all = m10v_xdmac_device_terminate_all;
+	mdmac->dmac.src_addr_widths = M10V_XDMAC_BUSWIDTHS;
+	mdmac->dmac.dst_addr_widths = M10V_XDMAC_BUSWIDTHS;
+	mdmac->dmac.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+	mdmac->dmac.dev = &pdev->dev;
+
+	platform_set_drvdata(pdev, mdmac);
+
+	m10v_xdmac_enable_dma(mdmac);
+
+	ret = dmaenginem_async_device_register(&mdmac->dmac);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to register dmaengine device\n");
+		return ret;
+	}
+
+	ret = of_dma_controller_register(np, of_dma_simple_xlate, mdmac);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to register OF DMA controller\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int m10v_xdmac_remove(struct platform_device *pdev)
+{
+	struct m10v_dma_chan *mchan;
+	struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);
+	int i;
+
+	m10v_xdmac_disable_dma(mdmac);
+
+	for (i = 0; i < mdmac->channels; i++) {
+		mchan = &mdmac->mchan[i];
+		devm_free_irq(&pdev->dev, mchan->irq, mchan);
+	}
+
+	of_dma_controller_free(pdev->dev.of_node);
+
+	return 0;
+}
+
+static const struct of_device_id m10v_xdmac_dt_ids[] = {
+	{.compatible = "socionext,milbeaut-m10v-xdmac",},
+	{},
+};
+MODULE_DEVICE_TABLE(of, m10v_xdmac_dt_ids);
+
+static struct platform_driver m10v_xdmac_driver = {
+	.driver = {
+		.name = "m10v-xdmac",
+		.of_match_table = of_match_ptr(m10v_xdmac_dt_ids),
+	},
+	.probe = m10v_xdmac_probe,
+	.remove = m10v_xdmac_remove,
+};
+module_platform_driver(m10v_xdmac_driver);
+
+MODULE_AUTHOR("Kazuhiro Kasai <kasai.kazuhiro@socionext.com>");
+MODULE_DESCRIPTION("Socionext Milbeaut XDMAC driver");
+MODULE_LICENSE("GPL v2");

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/2] dt-bindings: dmaengine: Add Milbeaut AXI DMA controller bindings
  2019-03-25  4:15   ` [PATCH 1/2] " Kazuhiro Kasai
@ 2019-03-31  6:41     ` Rob Herring
  -1 siblings, 0 replies; 14+ messages in thread
From: Rob Herring @ 2019-03-31  6:41 UTC (permalink / raw)
  To: Kazuhiro Kasai
  Cc: vkoul, robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel, Kazuhiro Kasai

On Mon, 25 Mar 2019 13:15:13 +0900, Kazuhiro Kasai wrote:
> Add Milbeaut AXI DMA controller bindings. This DMA controller is
> only capable of memory to memory transfer.
> 
> Signed-off-by: Kazuhiro Kasai <kasai.kazuhiro@socionext.com>
> ---
>  .../devicetree/bindings/dma/xdmac-milbeaut.txt     | 24 ++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/xdmac-milbeaut.txt
> 

Reviewed-by: Rob Herring <robh@kernel.org>


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
@ 2019-04-16  2:06 ` Kazuhiro Kasai
  0 siblings, 0 replies; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-04-16  2:06 UTC (permalink / raw)
  To: vkoul, robh+dt, mark.rutland
  Cc: dmaengine, devicetree, Orito,
	Takao/織戸 誉生,
	Sugaya, Taichi/菅谷 太一,
	Kanematsu, Shinji/兼松 伸次,
	jaswinder.singh, masami.hiramatsu, linux-kernel

Hello,

Does anyone have any comments on this?

On Mon, Mar 25, 2019 at  4:15 +0000, Kazuhiro Kasai wrote:
> Add Milbeaut AXI DMA controller. This DMA controller is
> only capable of memory to memory transfer.
>
> Signed-off-by: Kazuhiro Kasai <kasai.kazuhiro@socionext.com>
> ---
>  drivers/dma/Kconfig          |   8 +
>  drivers/dma/Makefile         |   1 +
>  drivers/dma/xdmac-milbeaut.c | 353 +++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 362 insertions(+)
>  create mode 100644 drivers/dma/xdmac-milbeaut.c
>
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index 0b1dfb5..733fe5f 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -612,6 +612,14 @@ config UNIPHIER_MDMAC
>  	  UniPhier platform.  This DMA controller is used as the external
>  	  DMA engine of the SD/eMMC controllers of the LD4, Pro4, sLD8 SoCs.
>
> +config XDMAC_MILBEAUT
> +       tristate "Milbeaut AXI DMA support"
> +       depends on ARCH_MILBEAUT || COMPILE_TEST
> +       select DMA_ENGINE
> +       help
> +         Support for the Milbeaut AXI DMA controller driver. This DMA
> +         controller has only memory to memory transfer capability.
> +
>  config XGENE_DMA
>  	tristate "APM X-Gene DMA support"
>  	depends on ARCH_XGENE || COMPILE_TEST
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index 6126e1c..4aab810 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -72,6 +72,7 @@ obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
>  obj-$(CONFIG_TEGRA210_ADMA) += tegra210-adma.o
>  obj-$(CONFIG_TIMB_DMA) += timb_dma.o
>  obj-$(CONFIG_UNIPHIER_MDMAC) += uniphier-mdmac.o
> +obj-$(CONFIG_XDMAC_MILBEAUT) += xdmac-milbeaut.o
>  obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
>  obj-$(CONFIG_ZX_DMA) += zx_dma.o
>  obj-$(CONFIG_ST_FDMA) += st_fdma.o
> diff --git a/drivers/dma/xdmac-milbeaut.c b/drivers/dma/xdmac-milbeaut.c
> new file mode 100644
> index 0000000..7035c61
> --- /dev/null
> +++ b/drivers/dma/xdmac-milbeaut.c
> @@ -0,0 +1,353 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2019 Socionext Inc.
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/interrupt.h>
> +#include <linux/module.h>
> +#include <linux/of_dma.h>
> +#include <linux/platform_device.h>
> +
> +#include "dmaengine.h"
> +
> +/* global register */
> +#define M10V_XDACS 0x00
> +
> +/* channel local register */
> +#define M10V_XDTBC 0x10
> +#define M10V_XDSSA 0x14
> +#define M10V_XDDSA 0x18
> +#define M10V_XDSAC 0x1C
> +#define M10V_XDDAC 0x20
> +#define M10V_XDDCC 0x24
> +#define M10V_XDDES 0x28
> +#define M10V_XDDPC 0x2C
> +#define M10V_XDDSD 0x30
> +
> +#define M10V_XDACS_XE BIT(28)
> +
> +#define M10V_XDSAC_SBS	GENMASK(17, 16)
> +#define M10V_XDSAC_SBL	GENMASK(11, 8)
> +
> +#define M10V_XDDAC_DBS	GENMASK(17, 16)
> +#define M10V_XDDAC_DBL	GENMASK(11, 8)
> +
> +#define M10V_XDDES_CE	BIT(28)
> +#define M10V_XDDES_SE	BIT(24)
> +#define M10V_XDDES_SA	BIT(15)
> +#define M10V_XDDES_TF	GENMASK(23, 20)
> +#define M10V_XDDES_EI	BIT(1)
> +#define M10V_XDDES_TI	BIT(0)
> +
> +#define M10V_XDDSD_IS_MASK	GENMASK(3, 0)
> +#define M10V_XDDSD_IS_NORMAL	0x8
> +
> +#define M10V_XDMAC_BUSWIDTHS	(BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
> +				 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | \
> +				 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES) | \
> +				 BIT(DMA_SLAVE_BUSWIDTH_8_BYTES))
> +
> +#define M10V_XDMAC_CHAN_BASE(base, i)	((base) + (i) * 0x30)
> +
> +#define to_m10v_dma_chan(c)	container_of((c), struct m10v_dma_chan, chan)
> +
> +struct m10v_dma_desc {
> +	struct dma_async_tx_descriptor txd;
> +	size_t len;
> +	dma_addr_t src;
> +	dma_addr_t dst;
> +};
> +
> +struct m10v_dma_chan {
> +	struct dma_chan chan;
> +	struct m10v_dma_device *mdmac;
> +	void __iomem *regs;
> +	int irq;
> +	struct m10v_dma_desc mdesc;
> +	spinlock_t lock;
> +};
> +
> +struct m10v_dma_device {
> +	struct dma_device dmac;
> +	void __iomem *regs;
> +	unsigned int channels;
> +	struct m10v_dma_chan mchan[0];
> +};
> +
> +static void m10v_xdmac_enable_dma(struct m10v_dma_device *mdmac)
> +{
> +	unsigned int val;
> +
> +	val = readl(mdmac->regs + M10V_XDACS);
> +	val &= ~M10V_XDACS_XE;
> +	val |= FIELD_PREP(M10V_XDACS_XE, 1);
> +	writel(val, mdmac->regs + M10V_XDACS);
> +}
> +
> +static void m10v_xdmac_disable_dma(struct m10v_dma_device *mdmac)
> +{
> +	unsigned int val;
> +
> +	val = readl(mdmac->regs + M10V_XDACS);
> +	val &= ~M10V_XDACS_XE;
> +	val |= FIELD_PREP(M10V_XDACS_XE, 0);
> +	writel(val, mdmac->regs + M10V_XDACS);
> +}
> +
> +static void m10v_xdmac_config_chan(struct m10v_dma_chan *mchan)
> +{
> +	u32 val;
> +
> +	val = mchan->mdesc.len - 1;
> +	writel(val, mchan->regs + M10V_XDTBC);
> +
> +	val = mchan->mdesc.src;
> +	writel(val, mchan->regs + M10V_XDSSA);
> +
> +	val = mchan->mdesc.dst;
> +	writel(val, mchan->regs + M10V_XDDSA);
> +
> +	val = readl(mchan->regs + M10V_XDSAC);
> +	val &= ~(M10V_XDSAC_SBS | M10V_XDSAC_SBL);
> +	val |= FIELD_PREP(M10V_XDSAC_SBS, 0x3) |
> +	       FIELD_PREP(M10V_XDSAC_SBL, 0xf);
> +	writel(val, mchan->regs + M10V_XDSAC);
> +
> +	val = readl(mchan->regs + M10V_XDDAC);
> +	val &= ~(M10V_XDDAC_DBS | M10V_XDDAC_DBL);
> +	val |= FIELD_PREP(M10V_XDDAC_DBS, 0x3) |
> +	       FIELD_PREP(M10V_XDDAC_DBL, 0xf);
> +	writel(val, mchan->regs + M10V_XDDAC);
> +}
> +
> +static void m10v_xdmac_enable_chan(struct m10v_dma_chan *mchan)
> +{
> +	u32 val;
> +
> +	val = readl(mchan->regs + M10V_XDDES);
> +	val &= ~(M10V_XDDES_CE |
> +	         M10V_XDDES_SE |
> +	         M10V_XDDES_TF |
> +	         M10V_XDDES_EI |
> +	         M10V_XDDES_TI);
> +	val |= FIELD_PREP(M10V_XDDES_CE, 1) |
> +	       FIELD_PREP(M10V_XDDES_SE, 1) |
> +	       FIELD_PREP(M10V_XDDES_TF, 1) |
> +	       FIELD_PREP(M10V_XDDES_EI, 1) |
> +	       FIELD_PREP(M10V_XDDES_TI, 1);
> +	writel(val, mchan->regs + M10V_XDDES);
> +}
> +
> +static void m10v_xdmac_disable_chan(struct m10v_dma_chan *mchan)
> +{
> +	u32 val;
> +
> +	val = readl(mchan->regs + M10V_XDDES);
> +	val &= ~M10V_XDDES_CE;
> +	val |= FIELD_PREP(M10V_XDDES_CE, 0);
> +	writel(val, mchan->regs + M10V_XDDES);
> +}
> +
> +static irqreturn_t m10v_xdmac_irq(int irq, void *data)
> +{
> +	struct m10v_dma_chan *mchan = data;
> +	unsigned long flags;
> +	u32 val;
> +
> +	val = readl(mchan->regs + M10V_XDDSD);
> +	val = FIELD_GET(M10V_XDDSD_IS_MASK, val);
> +
> +	if (val != M10V_XDDSD_IS_NORMAL)
> +		dev_err(mchan->chan.device->dev, "XDMAC error with status: %x\n", val);
> +
> +	val = FIELD_PREP(M10V_XDDSD_IS_MASK, 0x0);
> +	writel(val, mchan->regs + M10V_XDDSD);
> +
> +	spin_lock_irqsave(&mchan->lock, flags);
> +	dma_cookie_complete(&mchan->mdesc.txd);
> +	spin_unlock_irqrestore(&mchan->lock, flags);
> +
> +	if (mchan->mdesc.txd.flags & DMA_PREP_INTERRUPT)
> +		dmaengine_desc_get_callback_invoke(&mchan->mdesc.txd, NULL);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static void m10v_xdmac_issue_pending(struct dma_chan *chan)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	m10v_xdmac_config_chan(mchan);
> +
> +	m10v_xdmac_enable_chan(mchan);
> +}
> +
> +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> +	dma_cookie_t cookie;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&mchan->lock, flags);
> +	cookie = dma_cookie_assign(txd);
> +	spin_unlock_irqrestore(&mchan->lock, flags);
> +
> +	return cookie;
> +}
> +
> +static struct dma_async_tx_descriptor *
> +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> +			   dma_addr_t src, size_t len, unsigned long flags)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> +	mchan->mdesc.txd.callback = NULL;
> +	mchan->mdesc.txd.flags = flags;
> +	mchan->mdesc.txd.cookie = -EBUSY;
> +
> +	mchan->mdesc.len = len;
> +	mchan->mdesc.src = src;
> +	mchan->mdesc.dst = dst;
> +
> +	return &mchan->mdesc.txd;
> +}
> +
> +static int m10v_xdmac_device_terminate_all(struct dma_chan *chan)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	m10v_xdmac_disable_chan(mchan);
> +
> +	return 0;
> +}
> +
> +static int m10v_xdmac_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&mchan->lock, flags);
> +	dma_cookie_init(chan);
> +	spin_unlock_irqrestore(&mchan->lock, flags);
> +
> +	return 1;
> +}
> +
> +static int m10v_xdmac_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct m10v_dma_chan *mchan;
> +	struct m10v_dma_device *mdmac;
> +	struct resource *res;
> +	unsigned int channels;
> +	int ret, i;
> +
> +	ret = device_property_read_u32(&pdev->dev, "dma-channels", &channels);
> +	if (ret) {
> +		dev_err(&pdev->dev, "get dma-channels failed\n");
> +		return ret;
> +	}
> +
> +	mdmac = devm_kzalloc(&pdev->dev,
> +			     struct_size(mdmac, mchan, channels),
> +			     GFP_KERNEL);
> +	if (!mdmac)
> +		return -ENOMEM;
> +
> +	mdmac->channels = channels;
> +
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	mdmac->regs = devm_ioremap_resource(&pdev->dev, res);
> +	if (IS_ERR(mdmac->regs))
> +		return PTR_ERR(mdmac->regs);
> +
> +	INIT_LIST_HEAD(&mdmac->dmac.channels);
> +	for (i = 0; i < mdmac->channels; i++) {
> +		mchan = &mdmac->mchan[i];
> +		mchan->irq = platform_get_irq(pdev, i);
> +		ret = devm_request_irq(&pdev->dev, mchan->irq, m10v_xdmac_irq,
> +				       IRQF_SHARED, dev_name(&pdev->dev), mchan);
> +		if (ret) {
> +			dev_err(&pdev->dev, "failed to request IRQ\n");
> +			return ret;
> +		}
> +		mchan->mdmac = mdmac;
> +		mchan->chan.device = &mdmac->dmac;
> +		list_add_tail(&mchan->chan.device_node,
> +				&mdmac->dmac.channels);
> +
> +		mchan->regs = M10V_XDMAC_CHAN_BASE(mdmac->regs, i);
> +		spin_lock_init(&mchan->lock);
> +	}
> +
> +	dma_cap_set(DMA_MEMCPY, mdmac->dmac.cap_mask);
> +
> +	mdmac->dmac.device_alloc_chan_resources = m10v_xdmac_alloc_chan_resources;
> +	mdmac->dmac.device_prep_dma_memcpy = m10v_xdmac_prep_dma_memcpy;
> +	mdmac->dmac.device_issue_pending = m10v_xdmac_issue_pending;
> +	mdmac->dmac.device_tx_status = dma_cookie_status;
> +	mdmac->dmac.device_terminate_all = m10v_xdmac_device_terminate_all;
> +	mdmac->dmac.src_addr_widths = M10V_XDMAC_BUSWIDTHS;
> +	mdmac->dmac.dst_addr_widths = M10V_XDMAC_BUSWIDTHS;
> +	mdmac->dmac.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
> +	mdmac->dmac.dev = &pdev->dev;
> +
> +	platform_set_drvdata(pdev, mdmac);
> +
> +	m10v_xdmac_enable_dma(mdmac);
> +
> +	ret = dmaenginem_async_device_register(&mdmac->dmac);
> +	if (ret) {
> +		dev_err(&pdev->dev, "failed to register dmaengine device\n");
> +		return ret;
> +	}
> +
> +	ret = of_dma_controller_register(np, of_dma_simple_xlate, mdmac);
> +	if (ret) {
> +		dev_err(&pdev->dev, "failed to register OF DMA controller\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int m10v_xdmac_remove(struct platform_device *pdev)
> +{
> +	struct m10v_dma_chan *mchan;
> +	struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);
> +	int i;
> +
> +	m10v_xdmac_disable_dma(mdmac);
> +
> +	for (i = 0; i < mdmac->channels; i++) {
> +		mchan = &mdmac->mchan[i];
> +		devm_free_irq(&pdev->dev, mchan->irq, mchan);
> +	}
> +
> +	of_dma_controller_free(pdev->dev.of_node);
> +
> +	return 0;
> +}
> +
> +static const struct of_device_id m10v_xdmac_dt_ids[] = {
> +	{.compatible = "socionext,milbeaut-m10v-xdmac",},
> +	{},
> +};
> +MODULE_DEVICE_TABLE(of, m10v_xdmac_dt_ids);
> +
> +static struct platform_driver m10v_xdmac_driver = {
> +	.driver = {
> +		.name = "m10v-xdmac",
> +		.of_match_table = of_match_ptr(m10v_xdmac_dt_ids),
> +	},
> +	.probe = m10v_xdmac_probe,
> +	.remove = m10v_xdmac_remove,
> +};
> +module_platform_driver(m10v_xdmac_driver);
> +
> +MODULE_AUTHOR("Kazuhiro Kasai <kasai.kazuhiro@socionext.com>");
> +MODULE_DESCRIPTION("Socionext Milbeaut XDMAC driver");
> +MODULE_LICENSE("GPL v2");
> --
> 1.9.1
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
@ 2019-04-26 11:46     ` Vinod Koul
  0 siblings, 0 replies; 14+ messages in thread
From: Vinod Koul @ 2019-04-26 11:46 UTC (permalink / raw)
  To: Kazuhiro Kasai
  Cc: robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel

On 25-03-19, 13:15, Kazuhiro Kasai wrote:
> Add Milbeaut AXI DMA controller. This DMA controller is
> only capable of memory to memory transfer.

Have you tested this with dmatest?
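
For example, roughly (channel name illustrative; see
Documentation/driver-api/dmaengine/dmatest.rst):

	modprobe dmatest channel=dma0chan0 iterations=100 run=1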

> +struct m10v_dma_chan {
> +	struct dma_chan chan;
> +	struct m10v_dma_device *mdmac;
> +	void __iomem *regs;
> +	int irq;
> +	struct m10v_dma_desc mdesc;

So there is a *single* descriptor? Not a list??

> +static void m10v_xdmac_disable_dma(struct m10v_dma_device *mdmac)
> +{
> +	unsigned int val;
> +
> +	val = readl(mdmac->regs + M10V_XDACS);
> +	val &= ~M10V_XDACS_XE;
> +	val |= FIELD_PREP(M10V_XDACS_XE, 0);
> +	writel(val, mdmac->regs + M10V_XDACS);

Why not create a modifyl() macro and use it here?
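
Something along these lines (untested sketch, written here as a static
helper; the name is illustrative):

	static void m10v_modifyl(void __iomem *reg, u32 mask, u32 set)
	{
		u32 val;

		val = readl(reg);
		val &= ~mask;
		val |= set;
		writel(val, reg);
	}

so that, e.g., the disable path becomes a single call:

	m10v_modifyl(mdmac->regs + M10V_XDACS, M10V_XDACS_XE, 0);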

> +static void m10v_xdmac_issue_pending(struct dma_chan *chan)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	m10v_xdmac_config_chan(mchan);
> +
> +	m10v_xdmac_enable_chan(mchan);

You don't check whether anything is already running?

> +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> +	dma_cookie_t cookie;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&mchan->lock, flags);
> +	cookie = dma_cookie_assign(txd);
> +	spin_unlock_irqrestore(&mchan->lock, flags);
> +
> +	return cookie;

sounds like vchan_tx_submit() i think you can use virt-dma layer and then
get rid of artificial limit in driver and be able to queue up the txn on
dmaengine.

> +static struct dma_async_tx_descriptor *
> +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> +			   dma_addr_t src, size_t len, unsigned long flags)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> +	mchan->mdesc.txd.callback = NULL;
> +	mchan->mdesc.txd.flags = flags;
> +	mchan->mdesc.txd.cookie = -EBUSY;
> +
> +	mchan->mdesc.len = len;
> +	mchan->mdesc.src = src;
> +	mchan->mdesc.dst = dst;
> +
> +	return &mchan->mdesc.txd;

So you support single descriptor and dont check if this has been already
configured. So I guess this has been tested by doing txn one at a time
and not submitted bunch of txn and wait for them to complete. Please fix
that to really enable dmaengine capabilities.

> +static int m10v_xdmac_remove(struct platform_device *pdev)
> +{
> +	struct m10v_dma_chan *mchan;
> +	struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);
> +	int i;
> +
> +	m10v_xdmac_disable_dma(mdmac);
> +
> +	for (i = 0; i < mdmac->channels; i++) {
> +		mchan = &mdmac->mchan[i];
> +		devm_free_irq(&pdev->dev, mchan->irq, mchan);
> +	}

No call to dma_async_device_unregister()?

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
@ 2019-04-26 11:46     ` Vinod Koul
  0 siblings, 0 replies; 14+ messages in thread
From: Vinod Koul @ 2019-04-26 11:46 UTC (permalink / raw)
  To: Kazuhiro Kasai
  Cc: robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel

On 25-03-19, 13:15, Kazuhiro Kasai wrote:
> Add the Milbeaut AXI DMA controller. This DMA controller is only
> capable of memory-to-memory transfer.

Have you tested this with dmatest?

> +struct m10v_dma_chan {
> +	struct dma_chan chan;
> +	struct m10v_dma_device *mdmac;
> +	void __iomem *regs;
> +	int irq;
> +	struct m10v_dma_desc mdesc;

So there is a *single* descriptor? Not a list??

> +static void m10v_xdmac_disable_dma(struct m10v_dma_device *mdmac)
> +{
> +	unsigned int val;
> +
> +	val = readl(mdmac->regs + M10V_XDACS);
> +	val &= ~M10V_XDACS_XE;
> +	val |= FIELD_PREP(M10V_XDACS_XE, 0);
> +	writel(val, mdmac->regs + M10V_XDACS);

Why not create a modifyl() macro and use it here?
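
A minimal sketch of such a helper, assuming the modifyl() name suggested
above (it is not an existing kernel API). Note that FIELD_PREP(M10V_XDACS_XE, 0)
always evaluates to 0, so clearing the field alone suffices:

	static void modifyl(void __iomem *reg, u32 clear, u32 set)
	{
		u32 val;

		val = readl(reg);
		val &= ~clear;
		val |= set;
		writel(val, reg);
	}

	/* m10v_xdmac_disable_dma() then reduces to: */
	modifyl(mdmac->regs + M10V_XDACS, M10V_XDACS_XE, 0);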

> +static void m10v_xdmac_issue_pending(struct dma_chan *chan)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	m10v_xdmac_config_chan(mchan);
> +
> +	m10v_xdmac_enable_chan(mchan);

You don't check whether anything is already running?

> +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> +	dma_cookie_t cookie;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&mchan->lock, flags);
> +	cookie = dma_cookie_assign(txd);
> +	spin_unlock_irqrestore(&mchan->lock, flags);
> +
> +	return cookie;

This sounds like vchan_tx_submit(). I think you can use the virt-dma
layer, get rid of the artificial limit in the driver, and be able to
queue up transactions on the dmaengine.

> +static struct dma_async_tx_descriptor *
> +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> +			   dma_addr_t src, size_t len, unsigned long flags)
> +{
> +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> +
> +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> +	mchan->mdesc.txd.callback = NULL;
> +	mchan->mdesc.txd.flags = flags;
> +	mchan->mdesc.txd.cookie = -EBUSY;
> +
> +	mchan->mdesc.len = len;
> +	mchan->mdesc.src = src;
> +	mchan->mdesc.dst = dst;
> +
> +	return &mchan->mdesc.txd;

So you support a single descriptor and don't check whether it has
already been configured. So I guess this has been tested by doing one
transaction at a time, not by submitting a bunch of transactions and
waiting for them to complete. Please fix that to really enable the
dmaengine capabilities.
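
Under the same assumptions as the sketch above (with a struct
m10v_dma_desc that embeds a struct virt_dma_desc), prep_dma_memcpy()
would then allocate a fresh descriptor per call instead of reusing the
single embedded one:

	struct m10v_dma_desc {
		struct virt_dma_desc vd;	/* embeds the tx descriptor */
		dma_addr_t src;
		dma_addr_t dst;
		size_t len;
	};

	static struct dma_async_tx_descriptor *
	m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
				   dma_addr_t src, size_t len, unsigned long flags)
	{
		struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
		struct m10v_dma_desc *md;

		/* GFP_NOWAIT: prep callbacks may be called in atomic context */
		md = kzalloc(sizeof(*md), GFP_NOWAIT);
		if (!md)
			return NULL;

		md->src = src;
		md->dst = dst;
		md->len = len;

		/* Initialises the tx descriptor and installs vchan_tx_submit() */
		return vchan_tx_prep(&mchan->vc, &md->vd, flags);
	}

The matching kfree() of the descriptor goes in the channel's
vc.desc_free callback.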

> +static int m10v_xdmac_remove(struct platform_device *pdev)
> +{
> +	struct m10v_dma_chan *mchan;
> +	struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);
> +	int i;
> +
> +	m10v_xdmac_disable_dma(mdmac);
> +
> +	for (i = 0; i < mdmac->channels; i++) {
> +		mchan = &mdmac->mchan[i];
> +		devm_free_irq(&pdev->dev, mchan->irq, mchan);
> +	}

No call to dma_async_device_unregister()?
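
For reference (a sketch, not the author's code): with the unmanaged
dma_async_device_register() in probe, the remove path would pair with
it like this. Note the probe above actually used the managed
dmaenginem_async_device_register(), whose devres action unregisters
the device automatically on driver detach:

	static int m10v_xdmac_remove(struct platform_device *pdev)
	{
		struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);

		of_dma_controller_free(pdev->dev.of_node);
		m10v_xdmac_disable_dma(mdmac);
		dma_async_device_unregister(&mdmac->dmac);

		return 0;
	}
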
-- 
~Vinod

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
  2019-04-26 11:46     ` [PATCH 2/2] " Vinod Koul
  (?)
@ 2019-05-07  5:39     ` Kazuhiro Kasai
  2019-05-07 17:10       ` Vinod Koul
  -1 siblings, 1 reply; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-05-07  5:39 UTC (permalink / raw)
  To: Vinod Koul
  Cc: robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel, kasai.kazuhiro


Thank you very much for reviewing my patch.
Sorry for my late reply; Japan was on its spring vacation.

On Fri, Apr 26, 2019 at 17:16 +0530, Vinod Koul wrote:
> On 25-03-19, 13:15, Kazuhiro Kasai wrote:
> > Add the Milbeaut AXI DMA controller. This DMA controller is only
> > capable of memory-to-memory transfer.
>
> Have you tested this with dmatest?
Yes, I have tested this with dmatest.

I used dmatest with the following parameters (written from
/sys/module/dmatest/parameters/):
# echo 10 > iterations
# echo "" > channel
# echo 1 > run

And I got the following report from dmatest:
[11675.231268] dmatest: dma0chan0-copy0: summary 10 tests, 0 failures 6910.84 iops 67035 KB/s (0)
[ 5646.689234] dmatest: dma0chan1-copy0: summary 10 tests, 0 failures 7949.12 iops 59618 KB/s (0)
[12487.712996] dmatest: dma0chan2-copy0: summary 10 tests, 0 failures 1493.87 iops 15088 KB/s (0)
[12487.733932] dmatest: dma1chan0-copy0: summary 10 tests, 0 failures 490.98 iops 3142 KB/s (0)
[11675.282428] dmatest: dma1chan2-copy0: summary 10 tests, 0 failures 7112.37 iops 56187 KB/s (0)
[ 5646.754230] dmatest: dma1chan3-copy0: summary 10 tests, 0 failures 6609.38 iops 61467 KB/s (0)
[ 5043.009255] dmatest: dma0chan3-copy0: summary 10 tests, 0 failures 498.08 iops 4183 KB/s (0)
[ 5043.018385] dmatest: dma1chan1-copy0: summary 10 tests, 0 failures 350.62 iops 3155 KB/s (0)

>
> > +struct m10v_dma_chan {
> > +	struct dma_chan chan;
> > +	struct m10v_dma_device *mdmac;
> > +	void __iomem *regs;
> > +	int irq;
> > +	struct m10v_dma_desc mdesc;
>
> So there is a *single* descriptor? Not a list??

Yes, single descriptor.

>
> > +static void m10v_xdmac_disable_dma(struct m10v_dma_device *mdmac)
> > +{
> > +	unsigned int val;
> > +
> > +	val = readl(mdmac->regs + M10V_XDACS);
> > +	val &= ~M10V_XDACS_XE;
> > +	val |= FIELD_PREP(M10V_XDACS_XE, 0);
> > +	writel(val, mdmac->regs + M10V_XDACS);
>
> Why not create a modifyl() macro and use it here?

Thank you for the advice; I will create a modifyl() macro and use it in
the next version.

>
> > +static void m10v_xdmac_issue_pending(struct dma_chan *chan)
> > +{
> > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> > +
> > +	m10v_xdmac_config_chan(mchan);
> > +
> > +	m10v_xdmac_enable_chan(mchan);
>
> You don't check whether anything is already running?

Yes, I think I don't need to check whether the DMA is running,
because there is only a single descriptor.

>
> > +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> > +{
> > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> > +	dma_cookie_t cookie;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&mchan->lock, flags);
> > +	cookie = dma_cookie_assign(txd);
> > +	spin_unlock_irqrestore(&mchan->lock, flags);
> > +
> > +	return cookie;
>
> This sounds like vchan_tx_submit(). I think you can use the virt-dma
> layer, get rid of the artificial limit in the driver, and be able to
> queue up transactions on the dmaengine.

OK, I will try to use the virt-dma layer in the next version.

>
> > +static struct dma_async_tx_descriptor *
> > +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> > +			   dma_addr_t src, size_t len, unsigned long flags)
> > +{
> > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> > +
> > +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> > +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> > +	mchan->mdesc.txd.callback = NULL;
> > +	mchan->mdesc.txd.flags = flags;
> > +	mchan->mdesc.txd.cookie = -EBUSY;
> > +
> > +	mchan->mdesc.len = len;
> > +	mchan->mdesc.src = src;
> > +	mchan->mdesc.dst = dst;
> > +
> > +	return &mchan->mdesc.txd;
>
> So you support a single descriptor and don't check whether it has
> already been configured. So I guess this has been tested by doing one
> transaction at a time, not by submitting a bunch of transactions and
> waiting for them to complete. Please fix that to really enable the
> dmaengine capabilities.

Thank you for the advice. I want to fix it, and I have two questions.

1. Does the virt-dma layer help to fix this?
2. Can dmatest test those dmaengine capabilities?

>
> > +static int m10v_xdmac_remove(struct platform_device *pdev)
> > +{
> > +	struct m10v_dma_chan *mchan;
> > +	struct m10v_dma_device *mdmac = platform_get_drvdata(pdev);
> > +	int i;
> > +
> > +	m10v_xdmac_disable_dma(mdmac);
> > +
> > +	for (i = 0; i < mdmac->channels; i++) {
> > +		mchan = &mdmac->mchan[i];
> > +		devm_free_irq(&pdev->dev, mchan->irq, mchan);
> > +	}
>
> No call to dma_async_device_unregister()?

Thank you, I will call dma_async_device_unregister() in the next version.

Thanks,
Kasai


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
  2019-05-07  5:39     ` Kazuhiro Kasai
@ 2019-05-07 17:10       ` Vinod Koul
  2019-05-07 23:37         ` Kazuhiro Kasai
  0 siblings, 1 reply; 14+ messages in thread
From: Vinod Koul @ 2019-05-07 17:10 UTC (permalink / raw)
  To: Kazuhiro Kasai
  Cc: robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel

On 07-05-19, 14:39, Kazuhiro Kasai wrote:
> On Fri, Apr 26, 2019 at 17:16 +0530, Vinod Koul wrote:
> > On 25-03-19, 13:15, Kazuhiro Kasai wrote:

> > > +struct m10v_dma_chan {
> > > +	struct dma_chan chan;
> > > +	struct m10v_dma_device *mdmac;
> > > +	void __iomem *regs;
> > > +	int irq;
> > > +	struct m10v_dma_desc mdesc;
> >
> > So there is a *single* descriptor? Not a list??
> 
> Yes, single descriptor.

And why is that? You can create a list, keep getting descriptors,
issue them to the hardware, and get better perf!
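
With a descriptor list, the completion interrupt can retire the current
descriptor and immediately start the next issued one, so the hardware
never sits idle between queued transactions. A rough virt-dma-flavoured
sketch (the interrupt acknowledgement, mchan->busy and
m10v_xdmac_start_next() are hypothetical, as in the sketches above):

	static irqreturn_t m10v_xdmac_irq(int irq, void *data)
	{
		struct m10v_dma_chan *mchan = data;
		struct virt_dma_desc *vd;
		unsigned long flags;

		/* ...acknowledge the channel interrupt in hardware... */

		spin_lock_irqsave(&mchan->vc.lock, flags);
		vd = vchan_next_desc(&mchan->vc);
		if (vd) {
			list_del(&vd->node);
			/* completes the cookie and schedules the callback */
			vchan_cookie_complete(vd);
		}
		/* start the next issued descriptor, if any */
		if (vchan_next_desc(&mchan->vc))
			m10v_xdmac_start_next(mchan);
		else
			mchan->busy = false;
		spin_unlock_irqrestore(&mchan->vc.lock, flags);

		return IRQ_HANDLED;
	}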

> > > +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> > > +{
> > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> > > +	dma_cookie_t cookie;
> > > +	unsigned long flags;
> > > +
> > > +	spin_lock_irqsave(&mchan->lock, flags);
> > > +	cookie = dma_cookie_assign(txd);
> > > +	spin_unlock_irqrestore(&mchan->lock, flags);
> > > +
> > > +	return cookie;
> >
> > This sounds like vchan_tx_submit(). I think you can use the virt-dma
> > layer, get rid of the artificial limit in the driver, and be able to
> > queue up transactions on the dmaengine.
> 
> OK, I will try to use the virt-dma layer in the next version.

And you will get lists to manage descriptors for free! So you can use
that to support multiple transactions as well!

> > > +static struct dma_async_tx_descriptor *
> > > +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> > > +			   dma_addr_t src, size_t len, unsigned long flags)
> > > +{
> > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> > > +
> > > +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> > > +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> > > +	mchan->mdesc.txd.callback = NULL;
> > > +	mchan->mdesc.txd.flags = flags;
> > > +	mchan->mdesc.txd.cookie = -EBUSY;
> > > +
> > > +	mchan->mdesc.len = len;
> > > +	mchan->mdesc.src = src;
> > > +	mchan->mdesc.dst = dst;
> > > +
> > > +	return &mchan->mdesc.txd;
> >
> > So you support a single descriptor and don't check whether it has
> > already been configured. So I guess this has been tested by doing one
> > transaction at a time, not by submitting a bunch of transactions and
> > waiting for them to complete. Please fix that to really enable the
> > dmaengine capabilities.
> 
> Thank you for the advice. I want to fix it, and I have two questions.
> 
> 1. Does the virt-dma layer help to fix this?

Yes

> 2. Can dmatest test those dmaengine capabilities?

Yes, for memcpy operations; see Documentation/driver-api/dmaengine/dmatest.rst.
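
A hedged example of stressing multiple in-flight transactions with
dmatest (parameter names are per dmatest.rst; the values here are
arbitrary):

	# modprobe dmatest
	# echo 4   > /sys/module/dmatest/parameters/threads_per_chan
	# echo 100 > /sys/module/dmatest/parameters/iterations
	# echo ""  > /sys/module/dmatest/parameters/channel
	# echo 1   > /sys/module/dmatest/parameters/run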

-- 
~Vinod

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller
  2019-05-07 17:10       ` Vinod Koul
@ 2019-05-07 23:37         ` Kazuhiro Kasai
  0 siblings, 0 replies; 14+ messages in thread
From: Kazuhiro Kasai @ 2019-05-07 23:37 UTC (permalink / raw)
  To: Vinod Koul
  Cc: robh+dt, mark.rutland, dmaengine, devicetree, orito.takao,
	sugaya.taichi, kanematsu.shinji, jaswinder.singh,
	masami.hiramatsu, linux-kernel


Thank you very much for the quick response!
I appreciate your comments.

It may take a long time, but I will try to write a v2 patch using virt-dma.

On Tue, May 07, 2019 at 22:40 +0530, Vinod Koul wrote:
> On 07-05-19, 14:39, Kazuhiro Kasai wrote:
> > On Fri, Apr 26, 2019 at 17:16 +0530, Vinod Koul wrote:
> > > On 25-03-19, 13:15, Kazuhiro Kasai wrote:
>
> > > > +struct m10v_dma_chan {
> > > > +	struct dma_chan chan;
> > > > +	struct m10v_dma_device *mdmac;
> > > > +	void __iomem *regs;
> > > > +	int irq;
> > > > +	struct m10v_dma_desc mdesc;
> > >
> > > So there is a *single* descriptor? Not a list??
> >
> > Yes, single descriptor.
>
> And why is that? You can create a list, keep getting descriptors,
> issue them to the hardware, and get better perf!

I understand, thank you.

>
> > > > +static dma_cookie_t m10v_xdmac_tx_submit(struct dma_async_tx_descriptor *txd)
> > > > +{
> > > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(txd->chan);
> > > > +	dma_cookie_t cookie;
> > > > +	unsigned long flags;
> > > > +
> > > > +	spin_lock_irqsave(&mchan->lock, flags);
> > > > +	cookie = dma_cookie_assign(txd);
> > > > +	spin_unlock_irqrestore(&mchan->lock, flags);
> > > > +
> > > > +	return cookie;
> > >
> > > This sounds like vchan_tx_submit(). I think you can use the virt-dma
> > > layer, get rid of the artificial limit in the driver, and be able to
> > > queue up transactions on the dmaengine.
> >
> > OK, I will try to use the virt-dma layer in the next version.
>
> And you will get lists to manage descriptors for free! So you can use
> that to support multiple transactions as well!

That sounds great! I will start studying the virt-dma layer for the next version.

>
> > > > +static struct dma_async_tx_descriptor *
> > > > +m10v_xdmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dst,
> > > > +			   dma_addr_t src, size_t len, unsigned long flags)
> > > > +{
> > > > +	struct m10v_dma_chan *mchan = to_m10v_dma_chan(chan);
> > > > +
> > > > +	dma_async_tx_descriptor_init(&mchan->mdesc.txd, chan);
> > > > +	mchan->mdesc.txd.tx_submit = m10v_xdmac_tx_submit;
> > > > +	mchan->mdesc.txd.callback = NULL;
> > > > +	mchan->mdesc.txd.flags = flags;
> > > > +	mchan->mdesc.txd.cookie = -EBUSY;
> > > > +
> > > > +	mchan->mdesc.len = len;
> > > > +	mchan->mdesc.src = src;
> > > > +	mchan->mdesc.dst = dst;
> > > > +
> > > > +	return &mchan->mdesc.txd;
> > >
> > > So you support a single descriptor and don't check whether it has
> > > already been configured. So I guess this has been tested by doing one
> > > transaction at a time, not by submitting a bunch of transactions and
> > > waiting for them to complete. Please fix that to really enable the
> > > dmaengine capabilities.
> >
> > Thank you for the advice. I want to fix it, and I have two questions.
> >
> > 1. Does the virt-dma layer help to fix this?
>
> Yes

That sounds like very good news to me. Thank you.

>
> > 2. Can dmatest test those dmaengine capabilities?
>
> Yes, for memcpy operations; see Documentation/driver-api/dmaengine/dmatest.rst.
>

OK, I will read the document.

Thanks,
Kasai

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2019-05-07 23:37 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-25  4:15 [PATCH 0/2] Add support for AXI DMA controller on Milbeaut series Kazuhiro Kasai
2019-03-25  4:15 ` [1/2] dt-bindings: dmaengine: Add Milbeaut AXI DMA controller bindings Kazuhiro Kasai
2019-03-25  4:15   ` [PATCH 1/2] " Kazuhiro Kasai
2019-03-31  6:41   ` Rob Herring
2019-03-31  6:41     ` Rob Herring
2019-03-25  4:15 ` [2/2] dmaengine: milbeaut: Add Milbeaut AXI DMA controller Kazuhiro Kasai
2019-03-25  4:15   ` [PATCH 2/2] " Kazuhiro Kasai
2019-04-26 11:46   ` [2/2] " Vinod Koul
2019-04-26 11:46     ` [PATCH 2/2] " Vinod Koul
2019-05-07  5:39     ` Kazuhiro Kasai
2019-05-07 17:10       ` Vinod Koul
2019-05-07 23:37         ` Kazuhiro Kasai
2019-04-16  2:06 [2/2] " Kazuhiro Kasai
2019-04-16  2:06 ` [PATCH 2/2] " Kazuhiro Kasai
