All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 15:34 ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: dmaengine
  Cc: linux-kernel, devicetree, linux-snps-arc, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Andy Shevchenko,
	Alexey Brodkin, Eugeniy Paltsev

This patch series add support for the DW AXI DMAC controller.

DW AXI DMAC is a part of upcoming development board from Synopsys.

In this driver implementation only DMA_MEMCPY and DMA_SG transfers
are supported.

Changes for v0:
 * Switch to virt-dma API (according to previous RFC)
 * Small fixies according to previous RFC
 * Add DT bindings

Eugeniy Paltsev (2):
  dt-bindings: Document the Synopsys DW AXI DMA bindings
  dmaengine: Add DW AXI DMAC driver

 .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   |   33 +
 drivers/dma/Kconfig                                |    8 +
 drivers/dma/Makefile                               |    1 +
 drivers/dma/axi_dma_platform.c                     | 1060 ++++++++++++++++++++
 drivers/dma/axi_dma_platform.h                     |  124 +++
 drivers/dma/axi_dma_platform_reg.h                 |  189 ++++
 6 files changed, 1415 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
 create mode 100644 drivers/dma/axi_dma_platform.c
 create mode 100644 drivers/dma/axi_dma_platform.h
 create mode 100644 drivers/dma/axi_dma_platform_reg.h

-- 
2.5.5

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 0/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 15:34 ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: linux-snps-arc

This patch series add support for the DW AXI DMAC controller.

DW AXI DMAC is a part of upcoming development board from Synopsys.

In this driver implementation only DMA_MEMCPY and DMA_SG transfers
are supported.

Changes for v0:
 * Switch to virt-dma API (according to previous RFC)
 * Small fixies according to previous RFC
 * Add DT bindings

Eugeniy Paltsev (2):
  dt-bindings: Document the Synopsys DW AXI DMA bindings
  dmaengine: Add DW AXI DMAC driver

 .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   |   33 +
 drivers/dma/Kconfig                                |    8 +
 drivers/dma/Makefile                               |    1 +
 drivers/dma/axi_dma_platform.c                     | 1060 ++++++++++++++++++++
 drivers/dma/axi_dma_platform.h                     |  124 +++
 drivers/dma/axi_dma_platform_reg.h                 |  189 ++++
 6 files changed, 1415 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
 create mode 100644 drivers/dma/axi_dma_platform.c
 create mode 100644 drivers/dma/axi_dma_platform.h
 create mode 100644 drivers/dma/axi_dma_platform_reg.h

-- 
2.5.5

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: dmaengine
  Cc: linux-kernel, devicetree, linux-snps-arc, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Andy Shevchenko,
	Alexey Brodkin, Eugeniy Paltsev

This patch adds documentation of device tree bindings for the Synopsys
DesignWare AXI DMA controller.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
---
 .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt

diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
new file mode 100644
index 0000000..21318a7
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
@@ -0,0 +1,33 @@
+* Synopsys DesignWare AXI DMA Controller
+
+Required properties:
+- compatible: "snps,axi-dma"
+- reg: Address range of the DMAC registers. This should include
+  all of the per-channel registers.
+- interrupt: Should contain the DMAC interrupt number.
+- interrupt-parent: Should be the phandle for the interrupt controller
+  that services interrupts for this device.
+- dma-channels: Number of channels supported by hardware.
+- dma-masters: Number of AXI masters supported by the hardware.
+- data-width: Maximum AXI data width supported by hardware.
+  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)
+- priority: Priority of channel. Array property. Priority value must be
+  programmed within [0:dma-channels-1] range. (0 - minimum priority)
+- block-size: Maximum block size supported by the controller channel. Array
+  property.
+
+Example:
+
+dmac: dmac@80000 {
+	compatible = "snps,axi-dma";
+	reg = <0x80000 0x400>;
+	clocks = <&core_clk>;
+	interrupt-parent = <&intc>;
+	interrupts = <27>;
+
+	dma-channels = <4>;
+	dma-masters = <2>;
+	data-width = <3>;
+	block-size = <4096 4096 4096 4096>;
+	priority = <0 1 2 3>;
+};
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: dmaengine-u79uwXL29TY76Z2rM5mHXA
  Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	devicetree-u79uwXL29TY76Z2rM5mHXA,
	linux-snps-arc-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Andy Shevchenko,
	Alexey Brodkin, Eugeniy Paltsev

This patch adds documentation of device tree bindings for the Synopsys
DesignWare AXI DMA controller.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev-HKixBCOQz3hWk0Htik3J/w@public.gmane.org>
---
 .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt

diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
new file mode 100644
index 0000000..21318a7
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
@@ -0,0 +1,33 @@
+* Synopsys DesignWare AXI DMA Controller
+
+Required properties:
+- compatible: "snps,axi-dma"
+- reg: Address range of the DMAC registers. This should include
+  all of the per-channel registers.
+- interrupt: Should contain the DMAC interrupt number.
+- interrupt-parent: Should be the phandle for the interrupt controller
+  that services interrupts for this device.
+- dma-channels: Number of channels supported by hardware.
+- dma-masters: Number of AXI masters supported by the hardware.
+- data-width: Maximum AXI data width supported by hardware.
+  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)
+- priority: Priority of channel. Array property. Priority value must be
+  programmed within [0:dma-channels-1] range. (0 - minimum priority)
+- block-size: Maximum block size supported by the controller channel. Array
+  property.
+
+Example:
+
+dmac: dmac@80000 {
+	compatible = "snps,axi-dma";
+	reg = <0x80000 0x400>;
+	clocks = <&core_clk>;
+	interrupt-parent = <&intc>;
+	interrupts = <27>;
+
+	dma-channels = <4>;
+	dma-masters = <2>;
+	data-width = <3>;
+	block-size = <4096 4096 4096 4096>;
+	priority = <0 1 2 3>;
+};
-- 
2.5.5

--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: linux-snps-arc

This patch adds documentation of device tree bindings for the Synopsys
DesignWare AXI DMA controller.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev at synopsys.com>
---
 .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt

diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
new file mode 100644
index 0000000..21318a7
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
@@ -0,0 +1,33 @@
+* Synopsys DesignWare AXI DMA Controller
+
+Required properties:
+- compatible: "snps,axi-dma"
+- reg: Address range of the DMAC registers. This should include
+  all of the per-channel registers.
+- interrupt: Should contain the DMAC interrupt number.
+- interrupt-parent: Should be the phandle for the interrupt controller
+  that services interrupts for this device.
+- dma-channels: Number of channels supported by hardware.
+- dma-masters: Number of AXI masters supported by the hardware.
+- data-width: Maximum AXI data width supported by hardware.
+  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)
+- priority: Priority of channel. Array property. Priority value must be
+  programmed within [0:dma-channels-1] range. (0 - minimum priority)
+- block-size: Maximum block size supported by the controller channel. Array
+  property.
+
+Example:
+
+dmac: dmac at 80000 {
+	compatible = "snps,axi-dma";
+	reg = <0x80000 0x400>;
+	clocks = <&core_clk>;
+	interrupt-parent = <&intc>;
+	interrupts = <27>;
+
+	dma-channels = <4>;
+	dma-masters = <2>;
+	data-width = <3>;
+	block-size = <4096 4096 4096 4096>;
+	priority = <0 1 2 3>;
+};
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 15:34 ` Eugeniy Paltsev
  (?)
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  -1 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: dmaengine
  Cc: linux-kernel, devicetree, linux-snps-arc, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Andy Shevchenko,
	Alexey Brodkin, Eugeniy Paltsev

This patch adds support for the DW AXI DMAC controller.

DW AXI DMAC is a part of upcoming development board from Synopsys.

In this driver implementation only DMA_MEMCPY and DMA_SG transfers
are supported.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
---
 drivers/dma/Kconfig                |    8 +
 drivers/dma/Makefile               |    1 +
 drivers/dma/axi_dma_platform.c     | 1060 ++++++++++++++++++++++++++++++++++++
 drivers/dma/axi_dma_platform.h     |  124 +++++
 drivers/dma/axi_dma_platform_reg.h |  189 +++++++
 5 files changed, 1382 insertions(+)
 create mode 100644 drivers/dma/axi_dma_platform.c
 create mode 100644 drivers/dma/axi_dma_platform.h
 create mode 100644 drivers/dma/axi_dma_platform_reg.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 263495d..6d511b9 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -578,6 +578,14 @@ config ZX_DMA
 	help
 	  Support the DMA engine for ZTE ZX296702 platform devices.
 
+config AXI_DW_DMAC
+	tristate "Synopsys DesignWare AXI DMA support"
+	depends on OF && !64BIT
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for Synopsys DesignWare AXI DMA controller.
+
 
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index a4fa336..9fb1dfe 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
 obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
+obj-$(CONFIG_AXI_DW_DMAC) += axi_dma_platform.o
 obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
 obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
 obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
diff --git a/drivers/dma/axi_dma_platform.c b/drivers/dma/axi_dma_platform.c
new file mode 100644
index 0000000..31b9fdc
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.c
@@ -0,0 +1,1060 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+#include "axi_dma_platform.h"
+#include "axi_dma_platform_reg.h"
+#include "dmaengine.h"
+#include "virt-dma.h"
+
+#define DRV_NAME	"axi_dw_dmac"
+
+/*
+ * The set of bus widths supported by the DMA controller. DW AXI DMAC supports
+ * master data bus width up to 512 bits (for both AXI master interfaces), but
+ * it depends on IP block configurarion.
+ */
+#define AXI_DMA_BUSWIDTHS		  \
+	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \
+	DMA_SLAVE_BUSWIDTH_1_BYTE	| \
+	DMA_SLAVE_BUSWIDTH_2_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_4_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_8_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_16_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_32_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_64_BYTES)
+/* TODO: check: do we need to use BIT() macro here? */
+
+static inline void
+axi_dma_iowrite32(struct axi_dma_chip *chip, u32 reg, u32 val)
+{
+	iowrite32(val, chip->regs + reg);
+}
+
+static inline u32 axi_dma_ioread32(struct axi_dma_chip *chip, u32 reg)
+{
+	return ioread32(chip->regs + reg);
+}
+
+static inline void
+axi_chan_iowrite32(struct axi_dma_chan *chan, u32 reg, u32 val)
+{
+	iowrite32(val, chan->chan_regs + reg);
+}
+
+static inline u32 axi_chan_ioread32(struct axi_dma_chan *chan, u32 reg)
+{
+	return ioread32(chan->chan_regs + reg);
+}
+
+static inline void axi_dma_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_chan_irq_disable(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	u32 val;
+
+	if (likely(irq_mask == DWAXIDMAC_IRQ_ALL)) {
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, DWAXIDMAC_IRQ_NONE);
+	} else {
+		val = axi_chan_ioread32(chan, CH_INTSTATUS_ENA);
+		val &= ~irq_mask;
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, val);
+	}
+}
+
+static inline void axi_chan_irq_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_sig_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSIGNAL_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_clear(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTCLEAR, irq_mask);
+}
+
+static inline u32 axi_chan_irq_read(struct axi_dma_chan *chan)
+{
+	return axi_chan_ioread32(chan, CH_INTSTATUS);
+}
+
+static inline void axi_chan_disable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline void axi_chan_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline bool axi_chan_is_hw_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+
+	return !!(val & (BIT(chan->id) << DMAC_CHAN_EN_SHIFT));
+}
+
+static inline bool axi_dma_is_sw_enable(struct axi_dma_chip *chip)
+{
+	struct dw_axi_dma *dw = chip->dw;
+	u32 i;
+
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		if (dw->chan[i].in_use)
+			return true;
+	}
+
+	return false;
+}
+
+static void axi_dma_hw_init(struct axi_dma_chip *chip)
+{
+	u32 i;
+
+	axi_dma_disable(chip);
+
+	for (i = 0; i < chip->dw->hdata->nr_channels; i++) {
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+		axi_chan_disable(&chip->dw->chan[i]);
+	}
+
+	axi_dma_irq_enable(chip);
+}
+
+static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan, dma_addr_t src,
+				   dma_addr_t dst, size_t len)
+{
+	u32 width;
+	size_t sdl = (src | dst | len);
+	u32 max_width = chan->chip->dw->hdata->m_data_width;
+
+	width = sdl ? __ffs(sdl) : DWAXIDMAC_TRANS_WIDTH_MAX;
+
+	return min_t(size_t, width, max_width);
+}
+
+static inline const char *axi_chan_name(struct axi_dma_chan *chan)
+{
+	return dma_chan_name(&chan->vc.chan);
+}
+
+static struct axi_dma_desc *axi_desc_get(struct axi_dma_chan *chan)
+{
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *desc;
+	dma_addr_t phys;
+
+	desc = dma_pool_zalloc(dw->desc_pool, GFP_ATOMIC, &phys);
+	if (unlikely(!desc)) {
+		dev_err(chan2dev(chan), "%s: not enough descriptors available\n",
+			axi_chan_name(chan));
+		return NULL;
+	}
+
+	chan->descs_allocated++;
+	INIT_LIST_HEAD(&desc->xfer_list);
+	desc->vd.tx.phys = phys;
+	desc->chan = chan;
+
+	return desc;
+}
+
+static void axi_desc_put(struct axi_dma_desc *desc)
+{
+	struct axi_dma_chan *chan = desc->chan;
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *child, *_next;
+	unsigned int descs_put = 0;
+
+	if (unlikely(!desc))
+		return;
+
+	list_for_each_entry_safe(child, _next, &desc->xfer_list, xfer_list) {
+		list_del(&child->xfer_list);
+		dma_pool_free(dw->desc_pool, child, child->vd.tx.phys);
+		descs_put++;
+	}
+
+	dma_pool_free(dw->desc_pool, desc, desc->vd.tx.phys);
+	descs_put++;
+
+	chan->descs_allocated -= descs_put;
+
+	dev_dbg(chan2dev(chan), "%s: %d descs put, %d still allocated\n",
+		axi_chan_name(chan), descs_put, chan->descs_allocated);
+}
+
+static void vchan_desc_put(struct virt_dma_desc *vdesc)
+{
+	axi_desc_put(vd_to_axi_desc(vdesc));
+}
+
+static enum dma_status
+dma_chan_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
+		  struct dma_tx_state *txstate)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	enum dma_status ret;
+
+	/* TODO: implement DMA_ERROR status managment */
+	ret = dma_cookie_status(dchan, cookie, txstate);
+
+	if (chan->is_paused && ret == DMA_IN_PROGRESS)
+		return DMA_PAUSED;
+
+	return ret;
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_llp(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.llp_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_chan_llp(struct axi_dma_chan *chan, dma_addr_t adr)
+{
+	axi_chan_iowrite32(chan, CH_LLP, adr);
+}
+
+/* Called in chan locked context */
+static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
+				      struct axi_dma_desc *first)
+{
+	u32 reg, irq_mask;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+	u32 priority = chan->chip->dw->hdata->priority[chan->id];
+
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+		/* The tasklet will hopefully advance the queue... */
+		return;
+	}
+
+	axi_dma_enable(chan->chip);
+
+	reg = (DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_DST_MULTBLK_TYPE_POS |
+	       DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_SRC_MULTBLK_TYPE_POS);
+	axi_chan_iowrite32(chan, CH_CFG_L, reg);
+
+	reg = (DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC << CH_CFG_H_TT_FC_POS |
+	       priority << CH_CFG_H_PRIORITY_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_DST_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_SRC_POS);
+	axi_chan_iowrite32(chan, CH_CFG_H, reg);
+
+	write_chan_llp(chan, first->vd.tx.phys | lms);
+
+	irq_mask = DWAXIDMAC_IRQ_DMA_TRF | DWAXIDMAC_IRQ_ALL_ERR;
+	axi_chan_irq_sig_set(chan, irq_mask);
+
+	/* generate 'suspend' status but don't generate interrupt */
+	irq_mask |= DWAXIDMAC_IRQ_SUSPENDED;
+	axi_chan_irq_set(chan, irq_mask);
+
+	axi_chan_enable(chan);
+}
+
+static void axi_chan_start_first_queued(struct axi_dma_chan *chan)
+{
+	struct axi_dma_desc *desc;
+	struct virt_dma_desc *vd;
+
+	vd = vchan_next_desc(&chan->vc);
+
+	if (!vd)
+		return;
+
+	desc = vd_to_axi_desc(vd);
+	dev_dbg(chan2dev(chan), "%s: started %u\n", axi_chan_name(chan),
+		vd->tx.cookie);
+	axi_chan_block_xfer_start(chan, desc);
+}
+
+static void dma_chan_issue_pending(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__, axi_chan_name(chan));
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (vchan_issue_pending(&chan->vc))
+		axi_chan_start_first_queued(chan);
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan)) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+		return -EBUSY;
+	}
+
+	dev_dbg(dchan2dev(dchan), "%s: allocating\n", axi_chan_name(chan));
+
+	dma_cookie_init(dchan);
+
+	chan->in_use = true;
+
+	return 0;
+}
+
+static void dma_chan_free_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan))
+		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+	axi_chan_disable(chan);
+	axi_chan_irq_disable(chan, DWAXIDMAC_IRQ_ALL);
+
+	vchan_free_chan_resources(&chan->vc);
+
+	dev_dbg(dchan2dev(dchan), "%s: %s: descriptor still allocated: %u\n",
+		__func__, axi_chan_name(chan), chan->descs_allocated);
+
+	chan->in_use = false;
+
+	/* Disable controller in case it was a last user */
+	if (!axi_dma_is_sw_enable(chan->chip))
+		axi_dma_disable(chan->chip);
+}
+
+/*
+ * If DW_axi_dmac sees CHx_CTL.ShadowReg_Or_LLI_Last bit of the fetched LLI
+ * as 1, it understands that the current block is the final block in the
+ * transfer and completes the DMA transfer operation at the end of current
+ * block transfer.
+ */
+static void set_desc_last(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	val = le32_to_cpu(desc->lli.ctl_hi);
+	val |= CH_CTL_H_LLI_LAST;
+	desc->lli.ctl_hi = cpu_to_le32(val);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_sar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.sar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_dar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.dar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_src_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI0 for source master */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	val &= ~CH_CTL_L_SRC_MAST;
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_dest_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI1 for source master if available */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	if (desc->chan->chip->dw->hdata->nr_masters > 1)
+		val |= CH_CTL_L_DST_MAST;
+	else
+		val &= ~CH_CTL_L_DST_MAST;
+
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_sg(struct dma_chan *dchan,
+		     struct scatterlist *dst_sg, unsigned int dst_nents,
+		     struct scatterlist *src_sg, unsigned int src_nents,
+		     unsigned long flags)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	struct axi_dma_desc *first = NULL, *desc = NULL, *prev = NULL;
+	size_t dst_len = 0, src_len = 0, xfer_len = 0, total_len = 0;
+	dma_addr_t dst_adr = 0, src_adr = 0;
+	u32 src_width, dst_width;
+	size_t block_ts, max_block_ts;
+	u32 reg;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+
+	dev_dbg(chan2dev(chan), "%s: %s: sn: %d dn: %d flags: 0x%lx",
+		__func__, axi_chan_name(chan), src_nents, dst_nents, flags);
+
+	if (unlikely(dst_nents == 0 || src_nents == 0))
+		return NULL;
+
+	if (unlikely(dst_sg == NULL || src_sg == NULL))
+		return NULL;
+
+	max_block_ts = chan->chip->dw->hdata->block_size[chan->id];
+
+	/*
+	 * Loop until there is either no more source or no more destination
+	 * scatterlist entry.
+	 */
+	while (true) {
+		/* Process dest sg list */
+		if (dst_len == 0) {
+			/* No more destination scatterlist entries */
+			if (!dst_sg || !dst_nents)
+				break;
+
+			dst_adr = sg_dma_address(dst_sg);
+			dst_len = sg_dma_len(dst_sg);
+
+			dst_sg = sg_next(dst_sg);
+			dst_nents--;
+		}
+
+		/* Process src sg list */
+		if (src_len == 0) {
+			/* No more source scatterlist entries */
+			if (!src_sg || !src_nents)
+				break;
+
+			src_adr = sg_dma_address(src_sg);
+			src_len = sg_dma_len(src_sg);
+
+			src_sg = sg_next(src_sg);
+			src_nents--;
+		}
+
+		/* Min of src and dest length will be this xfer length */
+		xfer_len = min_t(size_t, src_len, dst_len);
+		if (xfer_len == 0)
+			continue;
+
+		/* Take care for the alignment */
+		src_width = axi_chan_get_xfer_width(chan, src_adr,
+						    dst_adr, xfer_len);
+		/*
+		 * Actually src_width and dst_width can be different, but make
+		 * them same to be simpler.
+		 * TODO: REVISIT: Can we optimize it?
+		 */
+		dst_width = src_width;
+
+		/*
+		 * block_ts indicates the total number of data of width
+		 * src_width to be transferred in a DMA block transfer.
+		 * BLOCK_TS register should be set to block_ts -1
+		 */
+		block_ts = xfer_len >> src_width;
+		if (block_ts > max_block_ts) {
+			block_ts = max_block_ts;
+			xfer_len = max_block_ts << src_width;
+		}
+
+		desc = axi_desc_get(chan);
+		if (unlikely(!desc))
+			goto err_desc_get;
+
+		write_desc_sar(desc, src_adr);
+		write_desc_dar(desc, dst_adr);
+		desc->lli.block_ts_lo = cpu_to_le32(block_ts - 1);
+		desc->lli.ctl_hi = cpu_to_le32(CH_CTL_H_LLI_VALID);
+
+		reg = (DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_DST_MSIZE_POS |
+		       DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_SRC_MSIZE_POS |
+		       dst_width << CH_CTL_L_DST_WIDTH_POS |
+		       src_width << CH_CTL_L_SRC_WIDTH_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_DST_INC_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_SRC_INC_POS);
+		desc->lli.ctl_lo = cpu_to_le32(reg);
+
+		set_desc_src_master(desc);
+		set_desc_dest_master(desc);
+
+		/* Manage transfer list (xfer_list) */
+		if (!first) {
+			first = desc;
+		} else {
+			list_add_tail(&desc->xfer_list, &first->xfer_list);
+			write_desc_llp(prev, desc->vd.tx.phys | lms);
+		}
+		prev = desc;
+
+		/* update the lengths and addresses for the next loop cycle */
+		dst_len -= xfer_len;
+		src_len -= xfer_len;
+		dst_adr += xfer_len;
+		src_adr += xfer_len;
+
+		total_len += xfer_len;
+	}
+
+	/* Total len of src/dest sg == 0, so no descriptor were allocated */
+	if (unlikely(!first))
+		return NULL;
+
+	/* First descriptor of the chain embedds additional information */
+	first->total_len = total_len;
+
+	/* Set end-of-link to the last link descriptor of list */
+	set_desc_last(desc);
+
+	return vchan_tx_prep(&chan->vc, &first->vd, flags);
+
+err_desc_get:
+	axi_desc_put(first);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
+			 dma_addr_t src, size_t len, unsigned long flags)
+{
+	unsigned int nents = 1;
+	struct scatterlist dst_sg;
+	struct scatterlist src_sg;
+
+	sg_init_table(&dst_sg, nents);
+	sg_init_table(&src_sg, nents);
+
+	sg_dma_address(&dst_sg) = dest;
+	sg_dma_address(&src_sg) = src;
+
+	sg_dma_len(&dst_sg) = len;
+	sg_dma_len(&src_sg) = len;
+
+	/* Implement memcpy transfer as sg transfer with single list */
+	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
+				    &src_sg, nents, flags);
+}
+
+static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
+{
+	struct axi_dma_chip *chip = dev_id;
+
+	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
+	axi_dma_irq_disable(chip);
+
+	tasklet_schedule(&chip->dw->tasklet);
+
+	return IRQ_HANDLED;
+}
+
+static void axi_chan_dump_lli(struct axi_dma_chan *chan,
+			      struct axi_dma_desc *desc)
+{
+	dev_err(dchan2dev(&chan->vc.chan),
+		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL: 0x%x:%08x",
+		le32_to_cpu(desc->lli.sar_lo),
+		le32_to_cpu(desc->lli.dar_lo),
+		le32_to_cpu(desc->lli.llp_lo),
+		le32_to_cpu(desc->lli.block_ts_lo),
+		le32_to_cpu(desc->lli.ctl_hi),
+		le32_to_cpu(desc->lli.ctl_lo));
+}
+
+static void axi_chan_list_dump_lli(struct axi_dma_chan *chan,
+				   struct axi_dma_desc *desc_head)
+{
+	struct axi_dma_desc *desc;
+
+	axi_chan_dump_lli(chan, desc_head);
+	list_for_each_entry(desc, &desc_head->xfer_list, xfer_list)
+		axi_chan_dump_lli(chan, desc);
+}
+
+
+static void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	/* The bad descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list */
+	list_del(&vd->node);
+
+	/* WARN about bad descriptor */
+	dev_err(chan2dev(chan),
+		"Bad descriptor submitted for %s, cookie: %d, irq: 0x%08x\n",
+		axi_chan_name(chan), vd->tx.cookie, status);
+	axi_chan_list_dump_lli(chan, vd_to_axi_desc(vd));
+
+	/* Pretend the bad descriptor completed successfully */
+	vchan_cookie_complete(vd);
+
+	/* Try to restart the controller */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
+			axi_chan_name(chan));
+		axi_chan_disable(chan);
+	}
+
+	/* The completed descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list before completing */
+	list_del(&vd->node);
+	vchan_cookie_complete(vd);
+
+	/* Submit queued descriptors after processing the completed ones */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_dma_tasklet(unsigned long data)
+{
+	struct axi_dma_chip *chip = (struct axi_dma_chip *)data;
+	struct axi_dma_chan *chan;
+	struct dw_axi_dma *dw = chip->dw;
+
+	u32 status, i;
+
+	/* Poll, clear and process every chanel interrupt status */
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		chan = &dw->chan[i];
+		status = axi_chan_irq_read(chan);
+		axi_chan_irq_clear(chan, status);
+
+		dev_dbg(chip->dev, "%s %u IRQ status: 0x%08x\n",
+			axi_chan_name(chan), i, status);
+
+		if (unlikely(!chan->in_use))
+			continue;
+
+		if (status & DWAXIDMAC_IRQ_ALL_ERR)
+			axi_chan_handle_err(chan, status);
+		else if (status & DWAXIDMAC_IRQ_DMA_TRF)
+			axi_chan_block_xfer_complete(chan);
+	}
+
+	/* Re-enable interrupts */
+	axi_dma_irq_enable(chip);
+}
+
+static int dma_chan_terminate_all(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	vchan_get_all_descriptors(&chan->vc, &head);
+
+	/*
+	 * As vchan_dma_desc_free_list can access to desc_allocated list
+	 * we need to call it in vc.lock context.
+	 */
+	vchan_dma_desc_free_list(&chan->vc, &head);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	dev_dbg(dchan2dev(dchan), "terminated: %s\n", axi_chan_name(chan));
+
+	return 0;
+}
+
+static int dma_chan_pause(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	unsigned int timeout = 20; /* timeout iterations */
+	int ret = -EAGAIN;
+	u32 val;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	while (timeout--) {
+		if (axi_chan_irq_read(chan) & DWAXIDMAC_IRQ_SUSPENDED) {
+			axi_chan_irq_clear(chan, DWAXIDMAC_IRQ_SUSPENDED);
+			ret = 0;
+			break;
+		}
+		udelay(2);
+	}
+
+	chan->is_paused = true;
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return ret;
+}
+
+/* Called in chan locked context */
+static inline void axi_chan_resume(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	chan->is_paused = false;
+}
+
+static int dma_chan_resume(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	if (chan->is_paused)
+		axi_chan_resume(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return 0;
+}
+
+static int parse_device_properties(struct axi_dma_chip *chip)
+{
+	struct device *dev = chip->dev;
+	u32 tmp, carr[DMAC_MAX_CHANNELS];
+	int ret;
+
+	ret = device_property_read_u32(dev, "dma-channels", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_CHANNELS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_channels = tmp;
+
+	ret = device_property_read_u32(dev, "dma-masters", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_MASTERS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_masters = tmp;
+
+	ret = device_property_read_u32(dev, "data-width", &tmp);
+	if (ret)
+		return ret;
+	if (tmp > DWAXIDMAC_TRANS_WIDTH_MAX)
+		return -EINVAL;
+
+	chip->dw->hdata->m_data_width = tmp;
+
+	ret = device_property_read_u32_array(dev, "block-size", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] == 0 || carr[tmp] > DMAC_MAX_BLK_SIZE)
+			return -EINVAL;
+		else
+			chip->dw->hdata->block_size[tmp] = carr[tmp];
+
+	ret = device_property_read_u32_array(dev, "priority", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	/* priority value must be programmed within [0:nr_channels-1] range */
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] >= chip->dw->hdata->nr_channels)
+			return -EINVAL;
+		else
+			chip->dw->hdata->priority[tmp] = carr[tmp];
+
+	return 0;
+}
+
+static int dw_probe(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip;
+	struct resource *mem;
+	struct dw_axi_dma *dw;
+	struct dw_axi_dma_hcfg *hdata;
+	u32 i;
+	int ret;
+
+	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
+	if (!chip)
+		return -ENOMEM;
+
+	dw = devm_kzalloc(&pdev->dev, sizeof(*dw), GFP_KERNEL);
+	if (!dw)
+		return -ENOMEM;
+
+	hdata = devm_kzalloc(&pdev->dev, sizeof(*hdata), GFP_KERNEL);
+	if (!hdata)
+		return -ENOMEM;
+
+	chip->dw = dw;
+	chip->dev = &pdev->dev;
+	chip->dw->hdata = hdata;
+
+	chip->irq = platform_get_irq(pdev, 0);
+	if (chip->irq < 0)
+		return chip->irq;
+
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	chip->regs = devm_ioremap_resource(chip->dev, mem);
+	if (IS_ERR(chip->regs))
+		return PTR_ERR(chip->regs);
+
+	chip->clk = devm_clk_get(chip->dev, NULL);
+	if (IS_ERR(chip->clk))
+		return PTR_ERR(chip->clk);
+
+	ret = parse_device_properties(chip);
+	if (ret)
+		return ret;
+
+	dw->chan = devm_kcalloc(chip->dev, hdata->nr_channels,
+				sizeof(*dw->chan), GFP_KERNEL);
+	if (!dw->chan)
+		return -ENOMEM;
+
+	ret = devm_request_irq(chip->dev, chip->irq, dw_axi_dma_intretupt,
+			       IRQF_SHARED, DRV_NAME, chip);
+	if (ret)
+		return ret;
+
+	/* Lli address must be aligned to a 64-byte boundary */
+	dw->desc_pool = dmam_pool_create(DRV_NAME, chip->dev,
+					 sizeof(struct axi_dma_desc), 64, 0);
+	if (!dw->desc_pool) {
+		dev_err(chip->dev, "No memory for descriptors dma pool\n");
+		return -ENOMEM;
+	}
+
+	tasklet_init(&dw->tasklet, axi_dma_tasklet, (unsigned long)chip);
+
+	INIT_LIST_HEAD(&dw->dma.channels);
+	for (i = 0; i < hdata->nr_channels; i++) {
+		struct axi_dma_chan *chan = &dw->chan[i];
+
+		chan->chip = chip;
+		chan->id = (u8)i;
+		chan->chan_regs = chip->regs + COMMON_REG_LEN + i * CHAN_REG_LEN;
+
+		chan->vc.desc_free = vchan_desc_put;
+		vchan_init(&chan->vc, &dw->dma);
+	}
+
+	axi_dma_hw_init(chip);
+
+	/* Set capabilities */
+	dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
+	dma_cap_set(DMA_SG, dw->dma.cap_mask);
+
+	/* DMA capabilities */
+	dw->dma.chancnt = hdata->nr_channels;
+	dw->dma.src_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.dst_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.directions = BIT(DMA_MEM_TO_MEM);
+	dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
+	dw->dma.dev = chip->dev;
+	dw->dma.device_tx_status = dma_chan_tx_status;
+	dw->dma.device_issue_pending = dma_chan_issue_pending;
+	dw->dma.device_terminate_all = dma_chan_terminate_all;
+	dw->dma.device_pause = dma_chan_pause;
+	dw->dma.device_resume = dma_chan_resume;
+
+	dw->dma.device_alloc_chan_resources = dma_chan_alloc_chan_resources;
+	dw->dma.device_free_chan_resources = dma_chan_free_chan_resources;
+
+	dw->dma.device_prep_dma_memcpy = dma_chan_prep_dma_memcpy;
+	dw->dma.device_prep_dma_sg = dma_chan_prep_dma_sg;
+
+	ret = clk_prepare_enable(chip->clk);
+	if (ret < 0)
+		return ret;
+
+	ret = dma_async_device_register(&dw->dma);
+	if (ret)
+		goto err_clk_disable;
+
+	platform_set_drvdata(pdev, chip);
+
+	dev_info(chip->dev, "DesignWare AXI DMA Controller, %d channels\n",
+		 dw->hdata->nr_channels);
+
+	return 0;
+
+err_clk_disable:
+	clk_disable_unprepare(chip->clk);
+
+	return ret;
+}
+
+static int dw_remove(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
+	struct dw_axi_dma *dw = chip->dw;
+	struct axi_dma_chan *chan, *_chan;
+	u32 i;
+
+	axi_dma_irq_disable(chip);
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		axi_chan_disable(&chip->dw->chan[i]);
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+	}
+	axi_dma_disable(chip);
+
+	tasklet_kill(&dw->tasklet);
+
+	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
+			vc.chan.device_node) {
+		list_del(&chan->vc.chan.device_node);
+		tasklet_kill(&chan->vc.task);
+	}
+
+	dma_async_device_unregister(&dw->dma);
+
+	clk_disable_unprepare(chip->clk);
+
+	return 0;
+}
+
+static const struct of_device_id dw_dma_of_id_table[] = {
+	{ .compatible = "snps,axi-dma" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, dw_dma_of_id_table);
+
+static struct platform_driver dw_driver = {
+	.probe		= dw_probe,
+	.remove		= dw_remove,
+	.driver = {
+		.name	= DRV_NAME,
+		.of_match_table = of_match_ptr(dw_dma_of_id_table),
+	},
+};
+module_platform_driver(dw_driver);
+
+static int __init dw_init(void)
+{
+	return platform_driver_register(&dw_driver);
+}
+subsys_initcall(dw_init);
+
+static void __exit dw_exit(void)
+{
+	platform_driver_unregister(&dw_driver);
+}
+module_exit(dw_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Synopsys DesignWare AXI DMA Controller platform driver");
+MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>");
diff --git a/drivers/dma/axi_dma_platform.h b/drivers/dma/axi_dma_platform.h
new file mode 100644
index 0000000..02cd744
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.h
@@ -0,0 +1,124 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_H
+#define _AXI_DMA_PLATFORM_H
+
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+
+#include "virt-dma.h"
+
+#define DMAC_MAX_CHANNELS	8
+#define DMAC_MAX_MASTERS	2
+#define DMAC_MAX_BLK_SIZE	0x200000
+
+struct dw_axi_dma_hcfg {
+	u32	nr_channels;
+	u32	nr_masters;
+	u32	m_data_width;
+	u32	block_size[DMAC_MAX_CHANNELS];
+	u32	priority[DMAC_MAX_CHANNELS];
+};
+
+struct axi_dma_chan {
+	struct axi_dma_chip		*chip;
+	void __iomem			*chan_regs;
+	u8				id;
+	bool				in_use;
+	unsigned int			descs_allocated;
+
+	struct virt_dma_chan		vc;
+
+	/* these other elements are all protected by vc.lock */
+	bool				is_paused;
+};
+
+struct dw_axi_dma {
+	struct dma_device	dma;
+	struct dw_axi_dma_hcfg	*hdata;
+	struct dma_pool		*desc_pool;
+	struct tasklet_struct	tasklet;
+
+	/* channels */
+	struct axi_dma_chan	*chan;
+};
+
+struct axi_dma_chip {
+	struct device		*dev;
+	int			irq;
+	void __iomem		*regs;
+	struct clk		*clk;
+	struct dw_axi_dma	*dw;
+};
+
+/* LLI == Linked List Item */
+struct axi_dma_lli {
+	__le32		sar_lo;
+	__le32		sar_hi;
+	__le32		dar_lo;
+	__le32		dar_hi;
+	__le32		block_ts_lo;
+	__le32		block_ts_hi;
+	__le32		llp_lo;
+	__le32		llp_hi;
+	__le32		ctl_lo;
+	__le32		ctl_hi;
+	__le32		sstat;
+	__le32		dstat;
+	__le32		status_lo;
+	__le32		ststus_hi;
+	__le32		reserved_lo;
+	__le32		reserved_hi;
+};
+
+struct axi_dma_desc {
+	struct axi_dma_lli		lli;
+
+	struct virt_dma_desc		vd;
+	struct axi_dma_chan		*chan;
+	struct list_head		xfer_list;
+	size_t				total_len;
+};
+
+static inline struct device *dchan2dev(struct dma_chan *dchan)
+{
+	return &dchan->dev->device;
+}
+
+static inline struct device *chan2dev(struct axi_dma_chan *chan)
+{
+	return &chan->vc.chan.dev->device;
+}
+
+static inline struct axi_dma_desc *vd_to_axi_desc(struct virt_dma_desc *vd)
+{
+	return container_of(vd, struct axi_dma_desc, vd);
+}
+
+static inline struct axi_dma_chan *vc_to_axi_dma_chan(struct virt_dma_chan *vc)
+{
+	return container_of(vc, struct axi_dma_chan, vc);
+}
+
+static inline struct axi_dma_chan *dchan_to_axi_dma_chan(struct dma_chan *dchan)
+{
+	return vc_to_axi_dma_chan(to_virt_chan(dchan));
+}
+
+#endif /* _AXI_DMA_PLATFORM_H */
diff --git a/drivers/dma/axi_dma_platform_reg.h b/drivers/dma/axi_dma_platform_reg.h
new file mode 100644
index 0000000..4d62b50
--- /dev/null
+++ b/drivers/dma/axi_dma_platform_reg.h
@@ -0,0 +1,189 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_REG_H
+#define _AXI_DMA_PLATFORM_REG_H
+
+#include <linux/bitops.h>
+
+#define COMMON_REG_LEN		0x100
+#define CHAN_REG_LEN		0x100
+
+/* Common registers offset */
+#define DMAC_ID			0x000 // R DMAC ID
+#define DMAC_COMPVER		0x008 // R DMAC Component Version
+#define DMAC_CFG		0x010 // R/W DMAC Configuration
+#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
+#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
+#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
+#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
+#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
+#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
+#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
+#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
+#define DMAC_RESET		0x058 // R DMAC Reset Register1
+
+/* DMA channel registers offset */
+#define CH_SAR			0x000 // R/W Chan Source Address
+#define CH_DAR			0x008 // R/W Chan Destination Address
+#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
+#define CH_CTL			0x018 // R/W Chan Control
+#define CH_CTL_L		0x018 // R/W Chan Control 00-31
+#define CH_CTL_H		0x01C // R/W Chan Control 32-63
+#define CH_CFG			0x020 // R/W Chan Configuration
+#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
+#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
+#define CH_LLP			0x028 // R/W Chan Linked List Pointer
+#define CH_STATUS		0x030 // R Chan Status
+#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
+#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
+#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
+#define CH_AXI_ID		0x050 // R/W Chan AXI ID
+#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
+#define CH_SSTAT		0x060 // R Chan Source Status
+#define CH_DSTAT		0x068 // R Chan Destination Status
+#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
+#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
+#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
+#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
+#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
+#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear
+
+
+/* DMAC_CFG */
+#define DMAC_EN_MASK		0x00000001U
+#define DMAC_EN_POS		0
+
+#define INT_EN_MASK		0x00000002U
+#define INT_EN_POS		1
+
+#define DMAC_CHAN_EN_SHIFT	0
+#define DMAC_CHAN_EN_WE_SHIFT	8
+
+#define DMAC_CHAN_SUSP_SHIFT	16
+#define DMAC_CHAN_SUSP_WE_SHIFT	24
+
+/* CH_CTL_H */
+#define CH_CTL_H_LLI_LAST	BIT(30)
+#define CH_CTL_H_LLI_VALID	BIT(31)
+
+/* CH_CTL_L */
+#define CH_CTL_L_LAST_WRITE_EN	BIT(30)
+
+#define CH_CTL_L_DST_MSIZE_POS	18
+#define CH_CTL_L_SRC_MSIZE_POS	14
+enum {
+	DWAXIDMAC_BURST_TRANS_LEN_1	= 0x0,
+	DWAXIDMAC_BURST_TRANS_LEN_4,
+	DWAXIDMAC_BURST_TRANS_LEN_8,
+	DWAXIDMAC_BURST_TRANS_LEN_16,
+	DWAXIDMAC_BURST_TRANS_LEN_32,
+	DWAXIDMAC_BURST_TRANS_LEN_64,
+	DWAXIDMAC_BURST_TRANS_LEN_128,
+	DWAXIDMAC_BURST_TRANS_LEN_256,
+	DWAXIDMAC_BURST_TRANS_LEN_512,
+	DWAXIDMAC_BURST_TRANS_LEN_1024
+};
+
+#define CH_CTL_L_DST_WIDTH_POS	11
+#define CH_CTL_L_SRC_WIDTH_POS	8
+
+#define CH_CTL_L_DST_INC_POS	6
+#define CH_CTL_L_SRC_INC_POS	4
+enum {
+	DWAXIDMAC_CH_CTL_L_INC	= 0x0,
+	DWAXIDMAC_CH_CTL_L_NOINC
+};
+
+#define CH_CTL_L_DST_MAST_POS	2
+#define CH_CTL_L_DST_MAST	BIT(CH_CTL_L_DST_MAST_POS)
+#define CH_CTL_L_SRC_MAST_POS	0
+#define CH_CTL_L_SRC_MAST	BIT(CH_CTL_L_SRC_MAST_POS)
+
+/* CH_CFG_H */
+#define CH_CFG_H_PRIORITY_POS	17
+#define CH_CFG_H_HS_SEL_DST_POS	4
+#define CH_CFG_H_HS_SEL_SRC_POS	3
+enum {
+	DWAXIDMAC_HS_SEL_HW	= 0x0,
+	DWAXIDMAC_HS_SEL_SW
+};
+
+#define CH_CFG_H_TT_FC_POS	0
+enum {
+	DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC	= 0x0,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_SRC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_SRC,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DST,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DST
+};
+
+/* CH_CFG_L */
+#define CH_CFG_L_DST_MULTBLK_TYPE_POS	2
+#define CH_CFG_L_SRC_MULTBLK_TYPE_POS	0
+enum {
+	DWAXIDMAC_MBLK_TYPE_CONTIGUOUS	= 0x0,
+	DWAXIDMAC_MBLK_TYPE_RELOAD,
+	DWAXIDMAC_MBLK_TYPE_SHADOW_REG,
+	DWAXIDMAC_MBLK_TYPE_LL
+};
+
+enum {
+	DWAXIDMAC_IRQ_NONE		= 0x0,
+	DWAXIDMAC_IRQ_BLOCK_TRF		= BIT(0),  // block transfer complete
+	DWAXIDMAC_IRQ_DMA_TRF		= BIT(1),  // dma transfer complete
+	DWAXIDMAC_IRQ_SRC_TRAN		= BIT(3),  // source transaction complete
+	DWAXIDMAC_IRQ_DST_TRAN		= BIT(4),  // destination transaction complete
+	DWAXIDMAC_IRQ_SRC_DEC_ERR	= BIT(5),  // source decode error
+	DWAXIDMAC_IRQ_DST_DEC_ERR	= BIT(6),  // destination decode error
+	DWAXIDMAC_IRQ_SRC_SLV_ERR	= BIT(7),  // source slave error
+	DWAXIDMAC_IRQ_DST_SLV_ERR	= BIT(8),  // destination slave error
+	DWAXIDMAC_IRQ_LLI_RD_DEC_ERR	= BIT(9),  // LLI read decode error
+	DWAXIDMAC_IRQ_LLI_WR_DEC_ERR	= BIT(10), // LLI write decode error
+	DWAXIDMAC_IRQ_LLI_RD_SLV_ERR	= BIT(11), // LLI read slave error
+	DWAXIDMAC_IRQ_LLI_WR_SLV_ERR	= BIT(12), // LLI write slave error
+	DWAXIDMAC_IRQ_INVALID_ERR	= BIT(13), // LLI invalide error or Shadow register error
+	DWAXIDMAC_IRQ_MULTIBLKTYPE_ERR	= BIT(14), // Slave Interface Multiblock type error
+	DWAXIDMAC_IRQ_DEC_ERR		= BIT(16), // Slave Interface decode error
+	DWAXIDMAC_IRQ_WR2RO_ERR		= BIT(17), // Slave Interface write to read only error
+	DWAXIDMAC_IRQ_RD2RWO_ERR	= BIT(18), // Slave Interface read to write only error
+	DWAXIDMAC_IRQ_WRONCHEN_ERR	= BIT(19), // Slave Interface write to channel error
+	DWAXIDMAC_IRQ_SHADOWREG_ERR	= BIT(20), // Slave Interface shadow reg error
+	DWAXIDMAC_IRQ_WRONHOLD_ERR	= BIT(21), // Slave Interface hold error
+	DWAXIDMAC_IRQ_LOCK_CLEARED	= BIT(27), // Lock Cleared Status
+	DWAXIDMAC_IRQ_SRC_SUSPENDED	= BIT(28), // Source Suspended Status
+	DWAXIDMAC_IRQ_SUSPENDED		= BIT(29), // Channel Suspended Status
+	DWAXIDMAC_IRQ_DISABLED		= BIT(30), // Channel Disabled Status
+	DWAXIDMAC_IRQ_ABORTED		= BIT(31), // Channel Aborted Status
+	DWAXIDMAC_IRQ_ALL_ERR		= 0x003F7FE0,
+	DWAXIDMAC_IRQ_ALL		= 0xFFFFFFFF
+};
+
+enum {
+	DWAXIDMAC_TRANS_WIDTH_8		= 0x0,
+	DWAXIDMAC_TRANS_WIDTH_16,
+	DWAXIDMAC_TRANS_WIDTH_32,
+	DWAXIDMAC_TRANS_WIDTH_64,
+	DWAXIDMAC_TRANS_WIDTH_128,
+	DWAXIDMAC_TRANS_WIDTH_256,
+	DWAXIDMAC_TRANS_WIDTH_512,
+	DWAXIDMAC_TRANS_WIDTH_MAX	= DWAXIDMAC_TRANS_WIDTH_512
+};
+
+#endif /* _AXI_DMA_PLATFORM_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: dmaengine
  Cc: Mark Rutland, devicetree, Andy Shevchenko, Vinod Koul,
	Alexey Brodkin, linux-kernel, Rob Herring, Dan Williams,
	linux-snps-arc, Eugeniy Paltsev

This patch adds support for the DW AXI DMAC controller.

DW AXI DMAC is a part of upcoming development board from Synopsys.

In this driver implementation only DMA_MEMCPY and DMA_SG transfers
are supported.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
---
 drivers/dma/Kconfig                |    8 +
 drivers/dma/Makefile               |    1 +
 drivers/dma/axi_dma_platform.c     | 1060 ++++++++++++++++++++++++++++++++++++
 drivers/dma/axi_dma_platform.h     |  124 +++++
 drivers/dma/axi_dma_platform_reg.h |  189 +++++++
 5 files changed, 1382 insertions(+)
 create mode 100644 drivers/dma/axi_dma_platform.c
 create mode 100644 drivers/dma/axi_dma_platform.h
 create mode 100644 drivers/dma/axi_dma_platform_reg.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 263495d..6d511b9 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -578,6 +578,14 @@ config ZX_DMA
 	help
 	  Support the DMA engine for ZTE ZX296702 platform devices.
 
+config AXI_DW_DMAC
+	tristate "Synopsys DesignWare AXI DMA support"
+	depends on OF && !64BIT
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for Synopsys DesignWare AXI DMA controller.
+
 
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index a4fa336..9fb1dfe 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
 obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
+obj-$(CONFIG_AXI_DW_DMAC) += axi_dma_platform.o
 obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
 obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
 obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
diff --git a/drivers/dma/axi_dma_platform.c b/drivers/dma/axi_dma_platform.c
new file mode 100644
index 0000000..31b9fdc
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.c
@@ -0,0 +1,1060 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+#include "axi_dma_platform.h"
+#include "axi_dma_platform_reg.h"
+#include "dmaengine.h"
+#include "virt-dma.h"
+
+#define DRV_NAME	"axi_dw_dmac"
+
+/*
+ * The set of bus widths supported by the DMA controller. DW AXI DMAC supports
+ * master data bus width up to 512 bits (for both AXI master interfaces), but
+ * it depends on IP block configurarion.
+ */
+#define AXI_DMA_BUSWIDTHS		  \
+	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \
+	DMA_SLAVE_BUSWIDTH_1_BYTE	| \
+	DMA_SLAVE_BUSWIDTH_2_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_4_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_8_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_16_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_32_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_64_BYTES)
+/* TODO: check: do we need to use BIT() macro here? */
+
+static inline void
+axi_dma_iowrite32(struct axi_dma_chip *chip, u32 reg, u32 val)
+{
+	iowrite32(val, chip->regs + reg);
+}
+
+static inline u32 axi_dma_ioread32(struct axi_dma_chip *chip, u32 reg)
+{
+	return ioread32(chip->regs + reg);
+}
+
+static inline void
+axi_chan_iowrite32(struct axi_dma_chan *chan, u32 reg, u32 val)
+{
+	iowrite32(val, chan->chan_regs + reg);
+}
+
+static inline u32 axi_chan_ioread32(struct axi_dma_chan *chan, u32 reg)
+{
+	return ioread32(chan->chan_regs + reg);
+}
+
+static inline void axi_dma_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_chan_irq_disable(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	u32 val;
+
+	if (likely(irq_mask == DWAXIDMAC_IRQ_ALL)) {
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, DWAXIDMAC_IRQ_NONE);
+	} else {
+		val = axi_chan_ioread32(chan, CH_INTSTATUS_ENA);
+		val &= ~irq_mask;
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, val);
+	}
+}
+
+static inline void axi_chan_irq_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_sig_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSIGNAL_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_clear(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTCLEAR, irq_mask);
+}
+
+static inline u32 axi_chan_irq_read(struct axi_dma_chan *chan)
+{
+	return axi_chan_ioread32(chan, CH_INTSTATUS);
+}
+
+static inline void axi_chan_disable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline void axi_chan_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline bool axi_chan_is_hw_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+
+	return !!(val & (BIT(chan->id) << DMAC_CHAN_EN_SHIFT));
+}
+
+static inline bool axi_dma_is_sw_enable(struct axi_dma_chip *chip)
+{
+	struct dw_axi_dma *dw = chip->dw;
+	u32 i;
+
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		if (dw->chan[i].in_use)
+			return true;
+	}
+
+	return false;
+}
+
+static void axi_dma_hw_init(struct axi_dma_chip *chip)
+{
+	u32 i;
+
+	axi_dma_disable(chip);
+
+	for (i = 0; i < chip->dw->hdata->nr_channels; i++) {
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+		axi_chan_disable(&chip->dw->chan[i]);
+	}
+
+	axi_dma_irq_enable(chip);
+}
+
+static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan, dma_addr_t src,
+				   dma_addr_t dst, size_t len)
+{
+	u32 width;
+	size_t sdl = (src | dst | len);
+	u32 max_width = chan->chip->dw->hdata->m_data_width;
+
+	width = sdl ? __ffs(sdl) : DWAXIDMAC_TRANS_WIDTH_MAX;
+
+	return min_t(size_t, width, max_width);
+}
+
+static inline const char *axi_chan_name(struct axi_dma_chan *chan)
+{
+	return dma_chan_name(&chan->vc.chan);
+}
+
+static struct axi_dma_desc *axi_desc_get(struct axi_dma_chan *chan)
+{
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *desc;
+	dma_addr_t phys;
+
+	desc = dma_pool_zalloc(dw->desc_pool, GFP_ATOMIC, &phys);
+	if (unlikely(!desc)) {
+		dev_err(chan2dev(chan), "%s: not enough descriptors available\n",
+			axi_chan_name(chan));
+		return NULL;
+	}
+
+	chan->descs_allocated++;
+	INIT_LIST_HEAD(&desc->xfer_list);
+	desc->vd.tx.phys = phys;
+	desc->chan = chan;
+
+	return desc;
+}
+
+static void axi_desc_put(struct axi_dma_desc *desc)
+{
+	struct axi_dma_chan *chan = desc->chan;
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *child, *_next;
+	unsigned int descs_put = 0;
+
+	if (unlikely(!desc))
+		return;
+
+	list_for_each_entry_safe(child, _next, &desc->xfer_list, xfer_list) {
+		list_del(&child->xfer_list);
+		dma_pool_free(dw->desc_pool, child, child->vd.tx.phys);
+		descs_put++;
+	}
+
+	dma_pool_free(dw->desc_pool, desc, desc->vd.tx.phys);
+	descs_put++;
+
+	chan->descs_allocated -= descs_put;
+
+	dev_dbg(chan2dev(chan), "%s: %d descs put, %d still allocated\n",
+		axi_chan_name(chan), descs_put, chan->descs_allocated);
+}
+
+static void vchan_desc_put(struct virt_dma_desc *vdesc)
+{
+	axi_desc_put(vd_to_axi_desc(vdesc));
+}
+
+static enum dma_status
+dma_chan_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
+		  struct dma_tx_state *txstate)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	enum dma_status ret;
+
+	/* TODO: implement DMA_ERROR status managment */
+	ret = dma_cookie_status(dchan, cookie, txstate);
+
+	if (chan->is_paused && ret == DMA_IN_PROGRESS)
+		return DMA_PAUSED;
+
+	return ret;
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_llp(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.llp_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_chan_llp(struct axi_dma_chan *chan, dma_addr_t adr)
+{
+	axi_chan_iowrite32(chan, CH_LLP, adr);
+}
+
+/* Called in chan locked context */
+static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
+				      struct axi_dma_desc *first)
+{
+	u32 reg, irq_mask;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+	u32 priority = chan->chip->dw->hdata->priority[chan->id];
+
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+		/* The tasklet will hopefully advance the queue... */
+		return;
+	}
+
+	axi_dma_enable(chan->chip);
+
+	reg = (DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_DST_MULTBLK_TYPE_POS |
+	       DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_SRC_MULTBLK_TYPE_POS);
+	axi_chan_iowrite32(chan, CH_CFG_L, reg);
+
+	reg = (DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC << CH_CFG_H_TT_FC_POS |
+	       priority << CH_CFG_H_PRIORITY_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_DST_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_SRC_POS);
+	axi_chan_iowrite32(chan, CH_CFG_H, reg);
+
+	write_chan_llp(chan, first->vd.tx.phys | lms);
+
+	irq_mask = DWAXIDMAC_IRQ_DMA_TRF | DWAXIDMAC_IRQ_ALL_ERR;
+	axi_chan_irq_sig_set(chan, irq_mask);
+
+	/* generate 'suspend' status but don't generate interrupt */
+	irq_mask |= DWAXIDMAC_IRQ_SUSPENDED;
+	axi_chan_irq_set(chan, irq_mask);
+
+	axi_chan_enable(chan);
+}
+
+static void axi_chan_start_first_queued(struct axi_dma_chan *chan)
+{
+	struct axi_dma_desc *desc;
+	struct virt_dma_desc *vd;
+
+	vd = vchan_next_desc(&chan->vc);
+
+	if (!vd)
+		return;
+
+	desc = vd_to_axi_desc(vd);
+	dev_dbg(chan2dev(chan), "%s: started %u\n", axi_chan_name(chan),
+		vd->tx.cookie);
+	axi_chan_block_xfer_start(chan, desc);
+}
+
+static void dma_chan_issue_pending(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__, axi_chan_name(chan));
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (vchan_issue_pending(&chan->vc))
+		axi_chan_start_first_queued(chan);
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan)) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+		return -EBUSY;
+	}
+
+	dev_dbg(dchan2dev(dchan), "%s: allocating\n", axi_chan_name(chan));
+
+	dma_cookie_init(dchan);
+
+	chan->in_use = true;
+
+	return 0;
+}
+
+static void dma_chan_free_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan))
+		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+	axi_chan_disable(chan);
+	axi_chan_irq_disable(chan, DWAXIDMAC_IRQ_ALL);
+
+	vchan_free_chan_resources(&chan->vc);
+
+	dev_dbg(dchan2dev(dchan), "%s: %s: descriptor still allocated: %u\n",
+		__func__, axi_chan_name(chan), chan->descs_allocated);
+
+	chan->in_use = false;
+
+	/* Disable controller in case it was a last user */
+	if (!axi_dma_is_sw_enable(chan->chip))
+		axi_dma_disable(chan->chip);
+}
+
+/*
+ * If DW_axi_dmac sees CHx_CTL.ShadowReg_Or_LLI_Last bit of the fetched LLI
+ * as 1, it understands that the current block is the final block in the
+ * transfer and completes the DMA transfer operation at the end of current
+ * block transfer.
+ */
+static void set_desc_last(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	val = le32_to_cpu(desc->lli.ctl_hi);
+	val |= CH_CTL_H_LLI_LAST;
+	desc->lli.ctl_hi = cpu_to_le32(val);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_sar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.sar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_dar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.dar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_src_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI0 for source master */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	val &= ~CH_CTL_L_SRC_MAST;
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_dest_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI1 for source master if available */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	if (desc->chan->chip->dw->hdata->nr_masters > 1)
+		val |= CH_CTL_L_DST_MAST;
+	else
+		val &= ~CH_CTL_L_DST_MAST;
+
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_sg(struct dma_chan *dchan,
+		     struct scatterlist *dst_sg, unsigned int dst_nents,
+		     struct scatterlist *src_sg, unsigned int src_nents,
+		     unsigned long flags)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	struct axi_dma_desc *first = NULL, *desc = NULL, *prev = NULL;
+	size_t dst_len = 0, src_len = 0, xfer_len = 0, total_len = 0;
+	dma_addr_t dst_adr = 0, src_adr = 0;
+	u32 src_width, dst_width;
+	size_t block_ts, max_block_ts;
+	u32 reg;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+
+	dev_dbg(chan2dev(chan), "%s: %s: sn: %d dn: %d flags: 0x%lx",
+		__func__, axi_chan_name(chan), src_nents, dst_nents, flags);
+
+	if (unlikely(dst_nents == 0 || src_nents == 0))
+		return NULL;
+
+	if (unlikely(dst_sg == NULL || src_sg == NULL))
+		return NULL;
+
+	max_block_ts = chan->chip->dw->hdata->block_size[chan->id];
+
+	/*
+	 * Loop until there is either no more source or no more destination
+	 * scatterlist entry.
+	 */
+	while (true) {
+		/* Process dest sg list */
+		if (dst_len == 0) {
+			/* No more destination scatterlist entries */
+			if (!dst_sg || !dst_nents)
+				break;
+
+			dst_adr = sg_dma_address(dst_sg);
+			dst_len = sg_dma_len(dst_sg);
+
+			dst_sg = sg_next(dst_sg);
+			dst_nents--;
+		}
+
+		/* Process src sg list */
+		if (src_len == 0) {
+			/* No more source scatterlist entries */
+			if (!src_sg || !src_nents)
+				break;
+
+			src_adr = sg_dma_address(src_sg);
+			src_len = sg_dma_len(src_sg);
+
+			src_sg = sg_next(src_sg);
+			src_nents--;
+		}
+
+		/* Min of src and dest length will be this xfer length */
+		xfer_len = min_t(size_t, src_len, dst_len);
+		if (xfer_len == 0)
+			continue;
+
+		/* Take care for the alignment */
+		src_width = axi_chan_get_xfer_width(chan, src_adr,
+						    dst_adr, xfer_len);
+		/*
+		 * Actually src_width and dst_width can be different, but make
+		 * them same to be simpler.
+		 * TODO: REVISIT: Can we optimize it?
+		 */
+		dst_width = src_width;
+
+		/*
+		 * block_ts indicates the total number of data of width
+		 * src_width to be transferred in a DMA block transfer.
+		 * BLOCK_TS register should be set to block_ts -1
+		 */
+		block_ts = xfer_len >> src_width;
+		if (block_ts > max_block_ts) {
+			block_ts = max_block_ts;
+			xfer_len = max_block_ts << src_width;
+		}
+
+		desc = axi_desc_get(chan);
+		if (unlikely(!desc))
+			goto err_desc_get;
+
+		write_desc_sar(desc, src_adr);
+		write_desc_dar(desc, dst_adr);
+		desc->lli.block_ts_lo = cpu_to_le32(block_ts - 1);
+		desc->lli.ctl_hi = cpu_to_le32(CH_CTL_H_LLI_VALID);
+
+		reg = (DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_DST_MSIZE_POS |
+		       DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_SRC_MSIZE_POS |
+		       dst_width << CH_CTL_L_DST_WIDTH_POS |
+		       src_width << CH_CTL_L_SRC_WIDTH_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_DST_INC_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_SRC_INC_POS);
+		desc->lli.ctl_lo = cpu_to_le32(reg);
+
+		set_desc_src_master(desc);
+		set_desc_dest_master(desc);
+
+		/* Manage transfer list (xfer_list) */
+		if (!first) {
+			first = desc;
+		} else {
+			list_add_tail(&desc->xfer_list, &first->xfer_list);
+			write_desc_llp(prev, desc->vd.tx.phys | lms);
+		}
+		prev = desc;
+
+		/* update the lengths and addresses for the next loop cycle */
+		dst_len -= xfer_len;
+		src_len -= xfer_len;
+		dst_adr += xfer_len;
+		src_adr += xfer_len;
+
+		total_len += xfer_len;
+	}
+
+	/* Total len of src/dest sg == 0, so no descriptor were allocated */
+	if (unlikely(!first))
+		return NULL;
+
+	/* First descriptor of the chain embedds additional information */
+	first->total_len = total_len;
+
+	/* Set end-of-link to the last link descriptor of list */
+	set_desc_last(desc);
+
+	return vchan_tx_prep(&chan->vc, &first->vd, flags);
+
+err_desc_get:
+	axi_desc_put(first);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
+			 dma_addr_t src, size_t len, unsigned long flags)
+{
+	unsigned int nents = 1;
+	struct scatterlist dst_sg;
+	struct scatterlist src_sg;
+
+	sg_init_table(&dst_sg, nents);
+	sg_init_table(&src_sg, nents);
+
+	sg_dma_address(&dst_sg) = dest;
+	sg_dma_address(&src_sg) = src;
+
+	sg_dma_len(&dst_sg) = len;
+	sg_dma_len(&src_sg) = len;
+
+	/* Implement memcpy transfer as sg transfer with single list */
+	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
+				    &src_sg, nents, flags);
+}
+
+static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
+{
+	struct axi_dma_chip *chip = dev_id;
+
+	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
+	axi_dma_irq_disable(chip);
+
+	tasklet_schedule(&chip->dw->tasklet);
+
+	return IRQ_HANDLED;
+}
+
+static void axi_chan_dump_lli(struct axi_dma_chan *chan,
+			      struct axi_dma_desc *desc)
+{
+	dev_err(dchan2dev(&chan->vc.chan),
+		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL: 0x%x:%08x",
+		le32_to_cpu(desc->lli.sar_lo),
+		le32_to_cpu(desc->lli.dar_lo),
+		le32_to_cpu(desc->lli.llp_lo),
+		le32_to_cpu(desc->lli.block_ts_lo),
+		le32_to_cpu(desc->lli.ctl_hi),
+		le32_to_cpu(desc->lli.ctl_lo));
+}
+
+static void axi_chan_list_dump_lli(struct axi_dma_chan *chan,
+				   struct axi_dma_desc *desc_head)
+{
+	struct axi_dma_desc *desc;
+
+	axi_chan_dump_lli(chan, desc_head);
+	list_for_each_entry(desc, &desc_head->xfer_list, xfer_list)
+		axi_chan_dump_lli(chan, desc);
+}
+
+
+static void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	/* The bad descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list */
+	list_del(&vd->node);
+
+	/* WARN about bad descriptor */
+	dev_err(chan2dev(chan),
+		"Bad descriptor submitted for %s, cookie: %d, irq: 0x%08x\n",
+		axi_chan_name(chan), vd->tx.cookie, status);
+	axi_chan_list_dump_lli(chan, vd_to_axi_desc(vd));
+
+	/* Pretend the bad descriptor completed successfully */
+	vchan_cookie_complete(vd);
+
+	/* Try to restart the controller */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
+			axi_chan_name(chan));
+		axi_chan_disable(chan);
+	}
+
+	/* The completed descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list before completing */
+	list_del(&vd->node);
+	vchan_cookie_complete(vd);
+
+	/* Submit queued descriptors after processing the completed ones */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_dma_tasklet(unsigned long data)
+{
+	struct axi_dma_chip *chip = (struct axi_dma_chip *)data;
+	struct axi_dma_chan *chan;
+	struct dw_axi_dma *dw = chip->dw;
+
+	u32 status, i;
+
+	/* Poll, clear and process every chanel interrupt status */
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		chan = &dw->chan[i];
+		status = axi_chan_irq_read(chan);
+		axi_chan_irq_clear(chan, status);
+
+		dev_dbg(chip->dev, "%s %u IRQ status: 0x%08x\n",
+			axi_chan_name(chan), i, status);
+
+		if (unlikely(!chan->in_use))
+			continue;
+
+		if (status & DWAXIDMAC_IRQ_ALL_ERR)
+			axi_chan_handle_err(chan, status);
+		else if (status & DWAXIDMAC_IRQ_DMA_TRF)
+			axi_chan_block_xfer_complete(chan);
+	}
+
+	/* Re-enable interrupts */
+	axi_dma_irq_enable(chip);
+}
+
+static int dma_chan_terminate_all(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	vchan_get_all_descriptors(&chan->vc, &head);
+
+	/*
+	 * As vchan_dma_desc_free_list can access to desc_allocated list
+	 * we need to call it in vc.lock context.
+	 */
+	vchan_dma_desc_free_list(&chan->vc, &head);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	dev_dbg(dchan2dev(dchan), "terminated: %s\n", axi_chan_name(chan));
+
+	return 0;
+}
+
+static int dma_chan_pause(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	unsigned int timeout = 20; /* timeout iterations */
+	int ret = -EAGAIN;
+	u32 val;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	while (timeout--) {
+		if (axi_chan_irq_read(chan) & DWAXIDMAC_IRQ_SUSPENDED) {
+			axi_chan_irq_clear(chan, DWAXIDMAC_IRQ_SUSPENDED);
+			ret = 0;
+			break;
+		}
+		udelay(2);
+	}
+
+	chan->is_paused = true;
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return ret;
+}
+
+/* Called in chan locked context */
+static inline void axi_chan_resume(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	chan->is_paused = false;
+}
+
+static int dma_chan_resume(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	if (chan->is_paused)
+		axi_chan_resume(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return 0;
+}
+
+static int parse_device_properties(struct axi_dma_chip *chip)
+{
+	struct device *dev = chip->dev;
+	u32 tmp, carr[DMAC_MAX_CHANNELS];
+	int ret;
+
+	ret = device_property_read_u32(dev, "dma-channels", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_CHANNELS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_channels = tmp;
+
+	ret = device_property_read_u32(dev, "dma-masters", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_MASTERS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_masters = tmp;
+
+	ret = device_property_read_u32(dev, "data-width", &tmp);
+	if (ret)
+		return ret;
+	if (tmp > DWAXIDMAC_TRANS_WIDTH_MAX)
+		return -EINVAL;
+
+	chip->dw->hdata->m_data_width = tmp;
+
+	ret = device_property_read_u32_array(dev, "block-size", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] == 0 || carr[tmp] > DMAC_MAX_BLK_SIZE)
+			return -EINVAL;
+		else
+			chip->dw->hdata->block_size[tmp] = carr[tmp];
+
+	ret = device_property_read_u32_array(dev, "priority", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	/* priority value must be programmed within [0:nr_channels-1] range */
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] >= chip->dw->hdata->nr_channels)
+			return -EINVAL;
+		else
+			chip->dw->hdata->priority[tmp] = carr[tmp];
+
+	return 0;
+}
+
+static int dw_probe(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip;
+	struct resource *mem;
+	struct dw_axi_dma *dw;
+	struct dw_axi_dma_hcfg *hdata;
+	u32 i;
+	int ret;
+
+	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
+	if (!chip)
+		return -ENOMEM;
+
+	dw = devm_kzalloc(&pdev->dev, sizeof(*dw), GFP_KERNEL);
+	if (!dw)
+		return -ENOMEM;
+
+	hdata = devm_kzalloc(&pdev->dev, sizeof(*hdata), GFP_KERNEL);
+	if (!hdata)
+		return -ENOMEM;
+
+	chip->dw = dw;
+	chip->dev = &pdev->dev;
+	chip->dw->hdata = hdata;
+
+	chip->irq = platform_get_irq(pdev, 0);
+	if (chip->irq < 0)
+		return chip->irq;
+
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	chip->regs = devm_ioremap_resource(chip->dev, mem);
+	if (IS_ERR(chip->regs))
+		return PTR_ERR(chip->regs);
+
+	chip->clk = devm_clk_get(chip->dev, NULL);
+	if (IS_ERR(chip->clk))
+		return PTR_ERR(chip->clk);
+
+	ret = parse_device_properties(chip);
+	if (ret)
+		return ret;
+
+	dw->chan = devm_kcalloc(chip->dev, hdata->nr_channels,
+				sizeof(*dw->chan), GFP_KERNEL);
+	if (!dw->chan)
+		return -ENOMEM;
+
+	ret = devm_request_irq(chip->dev, chip->irq, dw_axi_dma_intretupt,
+			       IRQF_SHARED, DRV_NAME, chip);
+	if (ret)
+		return ret;
+
+	/* Lli address must be aligned to a 64-byte boundary */
+	dw->desc_pool = dmam_pool_create(DRV_NAME, chip->dev,
+					 sizeof(struct axi_dma_desc), 64, 0);
+	if (!dw->desc_pool) {
+		dev_err(chip->dev, "No memory for descriptors dma pool\n");
+		return -ENOMEM;
+	}
+
+	tasklet_init(&dw->tasklet, axi_dma_tasklet, (unsigned long)chip);
+
+	INIT_LIST_HEAD(&dw->dma.channels);
+	for (i = 0; i < hdata->nr_channels; i++) {
+		struct axi_dma_chan *chan = &dw->chan[i];
+
+		chan->chip = chip;
+		chan->id = (u8)i;
+		chan->chan_regs = chip->regs + COMMON_REG_LEN + i * CHAN_REG_LEN;
+
+		chan->vc.desc_free = vchan_desc_put;
+		vchan_init(&chan->vc, &dw->dma);
+	}
+
+	axi_dma_hw_init(chip);
+
+	/* Set capabilities */
+	dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
+	dma_cap_set(DMA_SG, dw->dma.cap_mask);
+
+	/* DMA capabilities */
+	dw->dma.chancnt = hdata->nr_channels;
+	dw->dma.src_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.dst_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.directions = BIT(DMA_MEM_TO_MEM);
+	dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
+	dw->dma.dev = chip->dev;
+	dw->dma.device_tx_status = dma_chan_tx_status;
+	dw->dma.device_issue_pending = dma_chan_issue_pending;
+	dw->dma.device_terminate_all = dma_chan_terminate_all;
+	dw->dma.device_pause = dma_chan_pause;
+	dw->dma.device_resume = dma_chan_resume;
+
+	dw->dma.device_alloc_chan_resources = dma_chan_alloc_chan_resources;
+	dw->dma.device_free_chan_resources = dma_chan_free_chan_resources;
+
+	dw->dma.device_prep_dma_memcpy = dma_chan_prep_dma_memcpy;
+	dw->dma.device_prep_dma_sg = dma_chan_prep_dma_sg;
+
+	ret = clk_prepare_enable(chip->clk);
+	if (ret < 0)
+		return ret;
+
+	ret = dma_async_device_register(&dw->dma);
+	if (ret)
+		goto err_clk_disable;
+
+	platform_set_drvdata(pdev, chip);
+
+	dev_info(chip->dev, "DesignWare AXI DMA Controller, %d channels\n",
+		 dw->hdata->nr_channels);
+
+	return 0;
+
+err_clk_disable:
+	clk_disable_unprepare(chip->clk);
+
+	return ret;
+}
+
+static int dw_remove(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
+	struct dw_axi_dma *dw = chip->dw;
+	struct axi_dma_chan *chan, *_chan;
+	u32 i;
+
+	axi_dma_irq_disable(chip);
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		axi_chan_disable(&chip->dw->chan[i]);
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+	}
+	axi_dma_disable(chip);
+
+	tasklet_kill(&dw->tasklet);
+
+	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
+			vc.chan.device_node) {
+		list_del(&chan->vc.chan.device_node);
+		tasklet_kill(&chan->vc.task);
+	}
+
+	dma_async_device_unregister(&dw->dma);
+
+	clk_disable_unprepare(chip->clk);
+
+	return 0;
+}
+
+static const struct of_device_id dw_dma_of_id_table[] = {
+	{ .compatible = "snps,axi-dma" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, dw_dma_of_id_table);
+
+static struct platform_driver dw_driver = {
+	.probe		= dw_probe,
+	.remove		= dw_remove,
+	.driver = {
+		.name	= DRV_NAME,
+		.of_match_table = of_match_ptr(dw_dma_of_id_table),
+	},
+};
+module_platform_driver(dw_driver);
+
+static int __init dw_init(void)
+{
+	return platform_driver_register(&dw_driver);
+}
+subsys_initcall(dw_init);
+
+static void __exit dw_exit(void)
+{
+	platform_driver_unregister(&dw_driver);
+}
+module_exit(dw_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Synopsys DesignWare AXI DMA Controller platform driver");
+MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>");
diff --git a/drivers/dma/axi_dma_platform.h b/drivers/dma/axi_dma_platform.h
new file mode 100644
index 0000000..02cd744
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.h
@@ -0,0 +1,124 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_H
+#define _AXI_DMA_PLATFORM_H
+
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+
+#include "virt-dma.h"
+
+#define DMAC_MAX_CHANNELS	8
+#define DMAC_MAX_MASTERS	2
+#define DMAC_MAX_BLK_SIZE	0x200000
+
+struct dw_axi_dma_hcfg {
+	u32	nr_channels;
+	u32	nr_masters;
+	u32	m_data_width;
+	u32	block_size[DMAC_MAX_CHANNELS];
+	u32	priority[DMAC_MAX_CHANNELS];
+};
+
+struct axi_dma_chan {
+	struct axi_dma_chip		*chip;
+	void __iomem			*chan_regs;
+	u8				id;
+	bool				in_use;
+	unsigned int			descs_allocated;
+
+	struct virt_dma_chan		vc;
+
+	/* these other elements are all protected by vc.lock */
+	bool				is_paused;
+};
+
+struct dw_axi_dma {
+	struct dma_device	dma;
+	struct dw_axi_dma_hcfg	*hdata;
+	struct dma_pool		*desc_pool;
+	struct tasklet_struct	tasklet;
+
+	/* channels */
+	struct axi_dma_chan	*chan;
+};
+
+struct axi_dma_chip {
+	struct device		*dev;
+	int			irq;
+	void __iomem		*regs;
+	struct clk		*clk;
+	struct dw_axi_dma	*dw;
+};
+
+/* LLI == Linked List Item */
+struct axi_dma_lli {
+	__le32		sar_lo;
+	__le32		sar_hi;
+	__le32		dar_lo;
+	__le32		dar_hi;
+	__le32		block_ts_lo;
+	__le32		block_ts_hi;
+	__le32		llp_lo;
+	__le32		llp_hi;
+	__le32		ctl_lo;
+	__le32		ctl_hi;
+	__le32		sstat;
+	__le32		dstat;
+	__le32		status_lo;
+	__le32		ststus_hi;
+	__le32		reserved_lo;
+	__le32		reserved_hi;
+};
+
+struct axi_dma_desc {
+	struct axi_dma_lli		lli;
+
+	struct virt_dma_desc		vd;
+	struct axi_dma_chan		*chan;
+	struct list_head		xfer_list;
+	size_t				total_len;
+};
+
+static inline struct device *dchan2dev(struct dma_chan *dchan)
+{
+	return &dchan->dev->device;
+}
+
+static inline struct device *chan2dev(struct axi_dma_chan *chan)
+{
+	return &chan->vc.chan.dev->device;
+}
+
+static inline struct axi_dma_desc *vd_to_axi_desc(struct virt_dma_desc *vd)
+{
+	return container_of(vd, struct axi_dma_desc, vd);
+}
+
+static inline struct axi_dma_chan *vc_to_axi_dma_chan(struct virt_dma_chan *vc)
+{
+	return container_of(vc, struct axi_dma_chan, vc);
+}
+
+static inline struct axi_dma_chan *dchan_to_axi_dma_chan(struct dma_chan *dchan)
+{
+	return vc_to_axi_dma_chan(to_virt_chan(dchan));
+}
+
+#endif /* _AXI_DMA_PLATFORM_H */
diff --git a/drivers/dma/axi_dma_platform_reg.h b/drivers/dma/axi_dma_platform_reg.h
new file mode 100644
index 0000000..4d62b50
--- /dev/null
+++ b/drivers/dma/axi_dma_platform_reg.h
@@ -0,0 +1,189 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_REG_H
+#define _AXI_DMA_PLATFORM_REG_H
+
+#include <linux/bitops.h>
+
+#define COMMON_REG_LEN		0x100
+#define CHAN_REG_LEN		0x100
+
+/* Common registers offset */
+#define DMAC_ID			0x000 // R DMAC ID
+#define DMAC_COMPVER		0x008 // R DMAC Component Version
+#define DMAC_CFG		0x010 // R/W DMAC Configuration
+#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
+#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
+#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
+#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
+#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
+#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
+#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
+#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
+#define DMAC_RESET		0x058 // R DMAC Reset Register1
+
+/* DMA channel registers offset */
+#define CH_SAR			0x000 // R/W Chan Source Address
+#define CH_DAR			0x008 // R/W Chan Destination Address
+#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
+#define CH_CTL			0x018 // R/W Chan Control
+#define CH_CTL_L		0x018 // R/W Chan Control 00-31
+#define CH_CTL_H		0x01C // R/W Chan Control 32-63
+#define CH_CFG			0x020 // R/W Chan Configuration
+#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
+#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
+#define CH_LLP			0x028 // R/W Chan Linked List Pointer
+#define CH_STATUS		0x030 // R Chan Status
+#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
+#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
+#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
+#define CH_AXI_ID		0x050 // R/W Chan AXI ID
+#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
+#define CH_SSTAT		0x060 // R Chan Source Status
+#define CH_DSTAT		0x068 // R Chan Destination Status
+#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
+#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
+#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
+#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
+#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
+#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear
+
+
+/* DMAC_CFG */
+#define DMAC_EN_MASK		0x00000001U
+#define DMAC_EN_POS		0
+
+#define INT_EN_MASK		0x00000002U
+#define INT_EN_POS		1
+
+#define DMAC_CHAN_EN_SHIFT	0
+#define DMAC_CHAN_EN_WE_SHIFT	8
+
+#define DMAC_CHAN_SUSP_SHIFT	16
+#define DMAC_CHAN_SUSP_WE_SHIFT	24
+
+/* CH_CTL_H */
+#define CH_CTL_H_LLI_LAST	BIT(30)
+#define CH_CTL_H_LLI_VALID	BIT(31)
+
+/* CH_CTL_L */
+#define CH_CTL_L_LAST_WRITE_EN	BIT(30)
+
+#define CH_CTL_L_DST_MSIZE_POS	18
+#define CH_CTL_L_SRC_MSIZE_POS	14
+enum {
+	DWAXIDMAC_BURST_TRANS_LEN_1	= 0x0,
+	DWAXIDMAC_BURST_TRANS_LEN_4,
+	DWAXIDMAC_BURST_TRANS_LEN_8,
+	DWAXIDMAC_BURST_TRANS_LEN_16,
+	DWAXIDMAC_BURST_TRANS_LEN_32,
+	DWAXIDMAC_BURST_TRANS_LEN_64,
+	DWAXIDMAC_BURST_TRANS_LEN_128,
+	DWAXIDMAC_BURST_TRANS_LEN_256,
+	DWAXIDMAC_BURST_TRANS_LEN_512,
+	DWAXIDMAC_BURST_TRANS_LEN_1024
+};
+
+#define CH_CTL_L_DST_WIDTH_POS	11
+#define CH_CTL_L_SRC_WIDTH_POS	8
+
+#define CH_CTL_L_DST_INC_POS	6
+#define CH_CTL_L_SRC_INC_POS	4
+enum {
+	DWAXIDMAC_CH_CTL_L_INC	= 0x0,
+	DWAXIDMAC_CH_CTL_L_NOINC
+};
+
+#define CH_CTL_L_DST_MAST_POS	2
+#define CH_CTL_L_DST_MAST	BIT(CH_CTL_L_DST_MAST_POS)
+#define CH_CTL_L_SRC_MAST_POS	0
+#define CH_CTL_L_SRC_MAST	BIT(CH_CTL_L_SRC_MAST_POS)
+
+/* CH_CFG_H */
+#define CH_CFG_H_PRIORITY_POS	17
+#define CH_CFG_H_HS_SEL_DST_POS	4
+#define CH_CFG_H_HS_SEL_SRC_POS	3
+enum {
+	DWAXIDMAC_HS_SEL_HW	= 0x0,
+	DWAXIDMAC_HS_SEL_SW
+};
+
+#define CH_CFG_H_TT_FC_POS	0
+enum {
+	DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC	= 0x0,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_SRC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_SRC,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DST,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DST
+};
+
+/* CH_CFG_L */
+#define CH_CFG_L_DST_MULTBLK_TYPE_POS	2
+#define CH_CFG_L_SRC_MULTBLK_TYPE_POS	0
+enum {
+	DWAXIDMAC_MBLK_TYPE_CONTIGUOUS	= 0x0,
+	DWAXIDMAC_MBLK_TYPE_RELOAD,
+	DWAXIDMAC_MBLK_TYPE_SHADOW_REG,
+	DWAXIDMAC_MBLK_TYPE_LL
+};
+
+enum {
+	DWAXIDMAC_IRQ_NONE		= 0x0,
+	DWAXIDMAC_IRQ_BLOCK_TRF		= BIT(0),  // block transfer complete
+	DWAXIDMAC_IRQ_DMA_TRF		= BIT(1),  // dma transfer complete
+	DWAXIDMAC_IRQ_SRC_TRAN		= BIT(3),  // source transaction complete
+	DWAXIDMAC_IRQ_DST_TRAN		= BIT(4),  // destination transaction complete
+	DWAXIDMAC_IRQ_SRC_DEC_ERR	= BIT(5),  // source decode error
+	DWAXIDMAC_IRQ_DST_DEC_ERR	= BIT(6),  // destination decode error
+	DWAXIDMAC_IRQ_SRC_SLV_ERR	= BIT(7),  // source slave error
+	DWAXIDMAC_IRQ_DST_SLV_ERR	= BIT(8),  // destination slave error
+	DWAXIDMAC_IRQ_LLI_RD_DEC_ERR	= BIT(9),  // LLI read decode error
+	DWAXIDMAC_IRQ_LLI_WR_DEC_ERR	= BIT(10), // LLI write decode error
+	DWAXIDMAC_IRQ_LLI_RD_SLV_ERR	= BIT(11), // LLI read slave error
+	DWAXIDMAC_IRQ_LLI_WR_SLV_ERR	= BIT(12), // LLI write slave error
+	DWAXIDMAC_IRQ_INVALID_ERR	= BIT(13), // LLI invalide error or Shadow register error
+	DWAXIDMAC_IRQ_MULTIBLKTYPE_ERR	= BIT(14), // Slave Interface Multiblock type error
+	DWAXIDMAC_IRQ_DEC_ERR		= BIT(16), // Slave Interface decode error
+	DWAXIDMAC_IRQ_WR2RO_ERR		= BIT(17), // Slave Interface write to read only error
+	DWAXIDMAC_IRQ_RD2RWO_ERR	= BIT(18), // Slave Interface read to write only error
+	DWAXIDMAC_IRQ_WRONCHEN_ERR	= BIT(19), // Slave Interface write to channel error
+	DWAXIDMAC_IRQ_SHADOWREG_ERR	= BIT(20), // Slave Interface shadow reg error
+	DWAXIDMAC_IRQ_WRONHOLD_ERR	= BIT(21), // Slave Interface hold error
+	DWAXIDMAC_IRQ_LOCK_CLEARED	= BIT(27), // Lock Cleared Status
+	DWAXIDMAC_IRQ_SRC_SUSPENDED	= BIT(28), // Source Suspended Status
+	DWAXIDMAC_IRQ_SUSPENDED		= BIT(29), // Channel Suspended Status
+	DWAXIDMAC_IRQ_DISABLED		= BIT(30), // Channel Disabled Status
+	DWAXIDMAC_IRQ_ABORTED		= BIT(31), // Channel Aborted Status
+	DWAXIDMAC_IRQ_ALL_ERR		= 0x003F7FE0,
+	DWAXIDMAC_IRQ_ALL		= 0xFFFFFFFF
+};
+
+enum {
+	DWAXIDMAC_TRANS_WIDTH_8		= 0x0,
+	DWAXIDMAC_TRANS_WIDTH_16,
+	DWAXIDMAC_TRANS_WIDTH_32,
+	DWAXIDMAC_TRANS_WIDTH_64,
+	DWAXIDMAC_TRANS_WIDTH_128,
+	DWAXIDMAC_TRANS_WIDTH_256,
+	DWAXIDMAC_TRANS_WIDTH_512,
+	DWAXIDMAC_TRANS_WIDTH_MAX	= DWAXIDMAC_TRANS_WIDTH_512
+};
+
+#endif /* _AXI_DMA_PLATFORM_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 15:34   ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-01-25 15:34 UTC (permalink / raw)
  To: linux-snps-arc

This patch adds support for the DW AXI DMAC controller.

DW AXI DMAC is a part of upcoming development board from Synopsys.

In this driver implementation only DMA_MEMCPY and DMA_SG transfers
are supported.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev at synopsys.com>
---
 drivers/dma/Kconfig                |    8 +
 drivers/dma/Makefile               |    1 +
 drivers/dma/axi_dma_platform.c     | 1060 ++++++++++++++++++++++++++++++++++++
 drivers/dma/axi_dma_platform.h     |  124 +++++
 drivers/dma/axi_dma_platform_reg.h |  189 +++++++
 5 files changed, 1382 insertions(+)
 create mode 100644 drivers/dma/axi_dma_platform.c
 create mode 100644 drivers/dma/axi_dma_platform.h
 create mode 100644 drivers/dma/axi_dma_platform_reg.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 263495d..6d511b9 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -578,6 +578,14 @@ config ZX_DMA
 	help
 	  Support the DMA engine for ZTE ZX296702 platform devices.
 
+config AXI_DW_DMAC
+	tristate "Synopsys DesignWare AXI DMA support"
+	depends on OF && !64BIT
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for Synopsys DesignWare AXI DMA controller.
+
 
 # driver files
 source "drivers/dma/bestcomm/Kconfig"
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index a4fa336..9fb1dfe 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_AMCC_PPC440SPE_ADMA) += ppc4xx/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
 obj-$(CONFIG_AXI_DMAC) += dma-axi-dmac.o
+obj-$(CONFIG_AXI_DW_DMAC) += axi_dma_platform.o
 obj-$(CONFIG_COH901318) += coh901318.o coh901318_lli.o
 obj-$(CONFIG_DMA_BCM2835) += bcm2835-dma.o
 obj-$(CONFIG_DMA_JZ4740) += dma-jz4740.o
diff --git a/drivers/dma/axi_dma_platform.c b/drivers/dma/axi_dma_platform.c
new file mode 100644
index 0000000..31b9fdc
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.c
@@ -0,0 +1,1060 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+
+#include "axi_dma_platform.h"
+#include "axi_dma_platform_reg.h"
+#include "dmaengine.h"
+#include "virt-dma.h"
+
+#define DRV_NAME	"axi_dw_dmac"
+
+/*
+ * The set of bus widths supported by the DMA controller. DW AXI DMAC supports
+ * master data bus width up to 512 bits (for both AXI master interfaces), but
+ * it depends on IP block configurarion.
+ */
+#define AXI_DMA_BUSWIDTHS		  \
+	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \
+	DMA_SLAVE_BUSWIDTH_1_BYTE	| \
+	DMA_SLAVE_BUSWIDTH_2_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_4_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_8_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_16_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_32_BYTES	| \
+	DMA_SLAVE_BUSWIDTH_64_BYTES)
+/* TODO: check: do we need to use BIT() macro here? */
+
+static inline void
+axi_dma_iowrite32(struct axi_dma_chip *chip, u32 reg, u32 val)
+{
+	iowrite32(val, chip->regs + reg);
+}
+
+static inline u32 axi_dma_ioread32(struct axi_dma_chip *chip, u32 reg)
+{
+	return ioread32(chip->regs + reg);
+}
+
+static inline void
+axi_chan_iowrite32(struct axi_dma_chan *chan, u32 reg, u32 val)
+{
+	iowrite32(val, chan->chan_regs + reg);
+}
+
+static inline u32 axi_chan_ioread32(struct axi_dma_chan *chan, u32 reg)
+{
+	return ioread32(chan->chan_regs + reg);
+}
+
+static inline void axi_dma_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= DMAC_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_disable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val &= ~INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_dma_irq_enable(struct axi_dma_chip *chip)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chip, DMAC_CFG);
+	val |= INT_EN_MASK;
+	axi_dma_iowrite32(chip, DMAC_CFG, val);
+}
+
+static inline void axi_chan_irq_disable(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	u32 val;
+
+	if (likely(irq_mask == DWAXIDMAC_IRQ_ALL)) {
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, DWAXIDMAC_IRQ_NONE);
+	} else {
+		val = axi_chan_ioread32(chan, CH_INTSTATUS_ENA);
+		val &= ~irq_mask;
+		axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, val);
+	}
+}
+
+static inline void axi_chan_irq_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSTATUS_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_sig_set(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTSIGNAL_ENA, irq_mask);
+}
+
+static inline void axi_chan_irq_clear(struct axi_dma_chan *chan, u32 irq_mask)
+{
+	axi_chan_iowrite32(chan, CH_INTCLEAR, irq_mask);
+}
+
+static inline u32 axi_chan_irq_read(struct axi_dma_chan *chan)
+{
+	return axi_chan_ioread32(chan, CH_INTSTATUS);
+}
+
+static inline void axi_chan_disable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline void axi_chan_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+}
+
+static inline bool axi_chan_is_hw_enable(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+
+	return !!(val & (BIT(chan->id) << DMAC_CHAN_EN_SHIFT));
+}
+
+static inline bool axi_dma_is_sw_enable(struct axi_dma_chip *chip)
+{
+	struct dw_axi_dma *dw = chip->dw;
+	u32 i;
+
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		if (dw->chan[i].in_use)
+			return true;
+	}
+
+	return false;
+}
+
+static void axi_dma_hw_init(struct axi_dma_chip *chip)
+{
+	u32 i;
+
+	axi_dma_disable(chip);
+
+	for (i = 0; i < chip->dw->hdata->nr_channels; i++) {
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+		axi_chan_disable(&chip->dw->chan[i]);
+	}
+
+	axi_dma_irq_enable(chip);
+}
+
+static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan, dma_addr_t src,
+				   dma_addr_t dst, size_t len)
+{
+	u32 width;
+	size_t sdl = (src | dst | len);
+	u32 max_width = chan->chip->dw->hdata->m_data_width;
+
+	width = sdl ? __ffs(sdl) : DWAXIDMAC_TRANS_WIDTH_MAX;
+
+	return min_t(size_t, width, max_width);
+}
+
+static inline const char *axi_chan_name(struct axi_dma_chan *chan)
+{
+	return dma_chan_name(&chan->vc.chan);
+}
+
+static struct axi_dma_desc *axi_desc_get(struct axi_dma_chan *chan)
+{
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *desc;
+	dma_addr_t phys;
+
+	desc = dma_pool_zalloc(dw->desc_pool, GFP_ATOMIC, &phys);
+	if (unlikely(!desc)) {
+		dev_err(chan2dev(chan), "%s: not enough descriptors available\n",
+			axi_chan_name(chan));
+		return NULL;
+	}
+
+	chan->descs_allocated++;
+	INIT_LIST_HEAD(&desc->xfer_list);
+	desc->vd.tx.phys = phys;
+	desc->chan = chan;
+
+	return desc;
+}
+
+static void axi_desc_put(struct axi_dma_desc *desc)
+{
+	struct axi_dma_chan *chan = desc->chan;
+	struct dw_axi_dma *dw = chan->chip->dw;
+	struct axi_dma_desc *child, *_next;
+	unsigned int descs_put = 0;
+
+	if (unlikely(!desc))
+		return;
+
+	list_for_each_entry_safe(child, _next, &desc->xfer_list, xfer_list) {
+		list_del(&child->xfer_list);
+		dma_pool_free(dw->desc_pool, child, child->vd.tx.phys);
+		descs_put++;
+	}
+
+	dma_pool_free(dw->desc_pool, desc, desc->vd.tx.phys);
+	descs_put++;
+
+	chan->descs_allocated -= descs_put;
+
+	dev_dbg(chan2dev(chan), "%s: %d descs put, %d still allocated\n",
+		axi_chan_name(chan), descs_put, chan->descs_allocated);
+}
+
+static void vchan_desc_put(struct virt_dma_desc *vdesc)
+{
+	axi_desc_put(vd_to_axi_desc(vdesc));
+}
+
+static enum dma_status
+dma_chan_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
+		  struct dma_tx_state *txstate)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	enum dma_status ret;
+
+	/* TODO: implement DMA_ERROR status managment */
+	ret = dma_cookie_status(dchan, cookie, txstate);
+
+	if (chan->is_paused && ret == DMA_IN_PROGRESS)
+		return DMA_PAUSED;
+
+	return ret;
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_llp(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.llp_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_chan_llp(struct axi_dma_chan *chan, dma_addr_t adr)
+{
+	axi_chan_iowrite32(chan, CH_LLP, adr);
+}
+
+/* Called in chan locked context */
+static void axi_chan_block_xfer_start(struct axi_dma_chan *chan,
+				      struct axi_dma_desc *first)
+{
+	u32 reg, irq_mask;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+	u32 priority = chan->chip->dw->hdata->priority[chan->id];
+
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+		/* The tasklet will hopefully advance the queue... */
+		return;
+	}
+
+	axi_dma_enable(chan->chip);
+
+	reg = (DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_DST_MULTBLK_TYPE_POS |
+	       DWAXIDMAC_MBLK_TYPE_LL << CH_CFG_L_SRC_MULTBLK_TYPE_POS);
+	axi_chan_iowrite32(chan, CH_CFG_L, reg);
+
+	reg = (DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC << CH_CFG_H_TT_FC_POS |
+	       priority << CH_CFG_H_PRIORITY_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_DST_POS |
+	       DWAXIDMAC_HS_SEL_HW << CH_CFG_H_HS_SEL_SRC_POS);
+	axi_chan_iowrite32(chan, CH_CFG_H, reg);
+
+	write_chan_llp(chan, first->vd.tx.phys | lms);
+
+	irq_mask = DWAXIDMAC_IRQ_DMA_TRF | DWAXIDMAC_IRQ_ALL_ERR;
+	axi_chan_irq_sig_set(chan, irq_mask);
+
+	/* generate 'suspend' status but don't generate interrupt */
+	irq_mask |= DWAXIDMAC_IRQ_SUSPENDED;
+	axi_chan_irq_set(chan, irq_mask);
+
+	axi_chan_enable(chan);
+}
+
+static void axi_chan_start_first_queued(struct axi_dma_chan *chan)
+{
+	struct axi_dma_desc *desc;
+	struct virt_dma_desc *vd;
+
+	vd = vchan_next_desc(&chan->vc);
+
+	if (!vd)
+		return;
+
+	desc = vd_to_axi_desc(vd);
+	dev_dbg(chan2dev(chan), "%s: started %u\n", axi_chan_name(chan),
+		vd->tx.cookie);
+	axi_chan_block_xfer_start(chan, desc);
+}
+
+static void dma_chan_issue_pending(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__, axi_chan_name(chan));
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (vchan_issue_pending(&chan->vc))
+		axi_chan_start_first_queued(chan);
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan)) {
+		dev_err(chan2dev(chan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+		return -EBUSY;
+	}
+
+	dev_dbg(dchan2dev(dchan), "%s: allocating\n", axi_chan_name(chan));
+
+	dma_cookie_init(dchan);
+
+	chan->in_use = true;
+
+	return 0;
+}
+
+static void dma_chan_free_chan_resources(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+
+	/* ASSERT: channel is idle */
+	if (axi_chan_is_hw_enable(chan))
+		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
+			axi_chan_name(chan));
+
+	axi_chan_disable(chan);
+	axi_chan_irq_disable(chan, DWAXIDMAC_IRQ_ALL);
+
+	vchan_free_chan_resources(&chan->vc);
+
+	dev_dbg(dchan2dev(dchan), "%s: %s: descriptor still allocated: %u\n",
+		__func__, axi_chan_name(chan), chan->descs_allocated);
+
+	chan->in_use = false;
+
+	/* Disable controller in case it was a last user */
+	if (!axi_dma_is_sw_enable(chan->chip))
+		axi_dma_disable(chan->chip);
+}
+
+/*
+ * If DW_axi_dmac sees CHx_CTL.ShadowReg_Or_LLI_Last bit of the fetched LLI
+ * as 1, it understands that the current block is the final block in the
+ * transfer and completes the DMA transfer operation at the end of current
+ * block transfer.
+ */
+static void set_desc_last(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	val = le32_to_cpu(desc->lli.ctl_hi);
+	val |= CH_CTL_H_LLI_LAST;
+	desc->lli.ctl_hi = cpu_to_le32(val);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_sar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.sar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: add 64 bit adress support */
+static void write_desc_dar(struct axi_dma_desc *desc, dma_addr_t adr)
+{
+	desc->lli.dar_lo = cpu_to_le32(adr);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_src_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI0 for source master */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	val &= ~CH_CTL_L_SRC_MAST;
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+/* TODO: REVISIT: how we should choose AXI master for mem-to-mem transfer? */
+static void set_desc_dest_master(struct axi_dma_desc *desc)
+{
+	u32 val;
+
+	/* Select AXI1 for source master if available */
+	val = le32_to_cpu(desc->lli.ctl_lo);
+	if (desc->chan->chip->dw->hdata->nr_masters > 1)
+		val |= CH_CTL_L_DST_MAST;
+	else
+		val &= ~CH_CTL_L_DST_MAST;
+
+	desc->lli.ctl_lo = cpu_to_le32(val);
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_sg(struct dma_chan *dchan,
+		     struct scatterlist *dst_sg, unsigned int dst_nents,
+		     struct scatterlist *src_sg, unsigned int src_nents,
+		     unsigned long flags)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	struct axi_dma_desc *first = NULL, *desc = NULL, *prev = NULL;
+	size_t dst_len = 0, src_len = 0, xfer_len = 0, total_len = 0;
+	dma_addr_t dst_adr = 0, src_adr = 0;
+	u32 src_width, dst_width;
+	size_t block_ts, max_block_ts;
+	u32 reg;
+	u8 lms = 0; /* TODO: REVISIT: hardcode LLI master to AXI0 (should we?)*/
+
+	dev_dbg(chan2dev(chan), "%s: %s: sn: %d dn: %d flags: 0x%lx",
+		__func__, axi_chan_name(chan), src_nents, dst_nents, flags);
+
+	if (unlikely(dst_nents == 0 || src_nents == 0))
+		return NULL;
+
+	if (unlikely(dst_sg == NULL || src_sg == NULL))
+		return NULL;
+
+	max_block_ts = chan->chip->dw->hdata->block_size[chan->id];
+
+	/*
+	 * Loop until there is either no more source or no more destination
+	 * scatterlist entry.
+	 */
+	while (true) {
+		/* Process dest sg list */
+		if (dst_len == 0) {
+			/* No more destination scatterlist entries */
+			if (!dst_sg || !dst_nents)
+				break;
+
+			dst_adr = sg_dma_address(dst_sg);
+			dst_len = sg_dma_len(dst_sg);
+
+			dst_sg = sg_next(dst_sg);
+			dst_nents--;
+		}
+
+		/* Process src sg list */
+		if (src_len == 0) {
+			/* No more source scatterlist entries */
+			if (!src_sg || !src_nents)
+				break;
+
+			src_adr = sg_dma_address(src_sg);
+			src_len = sg_dma_len(src_sg);
+
+			src_sg = sg_next(src_sg);
+			src_nents--;
+		}
+
+		/* Min of src and dest length will be this xfer length */
+		xfer_len = min_t(size_t, src_len, dst_len);
+		if (xfer_len == 0)
+			continue;
+
+		/* Take care for the alignment */
+		src_width = axi_chan_get_xfer_width(chan, src_adr,
+						    dst_adr, xfer_len);
+		/*
+		 * Actually src_width and dst_width can be different, but make
+		 * them same to be simpler.
+		 * TODO: REVISIT: Can we optimize it?
+		 */
+		dst_width = src_width;
+
+		/*
+		 * block_ts indicates the total number of data of width
+		 * src_width to be transferred in a DMA block transfer.
+		 * BLOCK_TS register should be set to block_ts -1
+		 */
+		block_ts = xfer_len >> src_width;
+		if (block_ts > max_block_ts) {
+			block_ts = max_block_ts;
+			xfer_len = max_block_ts << src_width;
+		}
+
+		desc = axi_desc_get(chan);
+		if (unlikely(!desc))
+			goto err_desc_get;
+
+		write_desc_sar(desc, src_adr);
+		write_desc_dar(desc, dst_adr);
+		desc->lli.block_ts_lo = cpu_to_le32(block_ts - 1);
+		desc->lli.ctl_hi = cpu_to_le32(CH_CTL_H_LLI_VALID);
+
+		reg = (DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_DST_MSIZE_POS |
+		       DWAXIDMAC_BURST_TRANS_LEN_4 << CH_CTL_L_SRC_MSIZE_POS |
+		       dst_width << CH_CTL_L_DST_WIDTH_POS |
+		       src_width << CH_CTL_L_SRC_WIDTH_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_DST_INC_POS |
+		       DWAXIDMAC_CH_CTL_L_INC << CH_CTL_L_SRC_INC_POS);
+		desc->lli.ctl_lo = cpu_to_le32(reg);
+
+		set_desc_src_master(desc);
+		set_desc_dest_master(desc);
+
+		/* Manage transfer list (xfer_list) */
+		if (!first) {
+			first = desc;
+		} else {
+			list_add_tail(&desc->xfer_list, &first->xfer_list);
+			write_desc_llp(prev, desc->vd.tx.phys | lms);
+		}
+		prev = desc;
+
+		/* update the lengths and addresses for the next loop cycle */
+		dst_len -= xfer_len;
+		src_len -= xfer_len;
+		dst_adr += xfer_len;
+		src_adr += xfer_len;
+
+		total_len += xfer_len;
+	}
+
+	/* Total len of src/dest sg == 0, so no descriptor were allocated */
+	if (unlikely(!first))
+		return NULL;
+
+	/* First descriptor of the chain embedds additional information */
+	first->total_len = total_len;
+
+	/* Set end-of-link to the last link descriptor of list */
+	set_desc_last(desc);
+
+	return vchan_tx_prep(&chan->vc, &first->vd, flags);
+
+err_desc_get:
+	axi_desc_put(first);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *
+dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
+			 dma_addr_t src, size_t len, unsigned long flags)
+{
+	unsigned int nents = 1;
+	struct scatterlist dst_sg;
+	struct scatterlist src_sg;
+
+	sg_init_table(&dst_sg, nents);
+	sg_init_table(&src_sg, nents);
+
+	sg_dma_address(&dst_sg) = dest;
+	sg_dma_address(&src_sg) = src;
+
+	sg_dma_len(&dst_sg) = len;
+	sg_dma_len(&src_sg) = len;
+
+	/* Implement memcpy transfer as sg transfer with single list */
+	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
+				    &src_sg, nents, flags);
+}
+
+static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
+{
+	struct axi_dma_chip *chip = dev_id;
+
+	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
+	axi_dma_irq_disable(chip);
+
+	tasklet_schedule(&chip->dw->tasklet);
+
+	return IRQ_HANDLED;
+}
+
+static void axi_chan_dump_lli(struct axi_dma_chan *chan,
+			      struct axi_dma_desc *desc)
+{
+	dev_err(dchan2dev(&chan->vc.chan),
+		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL: 0x%x:%08x",
+		le32_to_cpu(desc->lli.sar_lo),
+		le32_to_cpu(desc->lli.dar_lo),
+		le32_to_cpu(desc->lli.llp_lo),
+		le32_to_cpu(desc->lli.block_ts_lo),
+		le32_to_cpu(desc->lli.ctl_hi),
+		le32_to_cpu(desc->lli.ctl_lo));
+}
+
+static void axi_chan_list_dump_lli(struct axi_dma_chan *chan,
+				   struct axi_dma_desc *desc_head)
+{
+	struct axi_dma_desc *desc;
+
+	axi_chan_dump_lli(chan, desc_head);
+	list_for_each_entry(desc, &desc_head->xfer_list, xfer_list)
+		axi_chan_dump_lli(chan, desc);
+}
+
+
+static void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	/* The bad descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list */
+	list_del(&vd->node);
+
+	/* WARN about bad descriptor */
+	dev_err(chan2dev(chan),
+		"Bad descriptor submitted for %s, cookie: %d, irq: 0x%08x\n",
+		axi_chan_name(chan), vd->tx.cookie, status);
+	axi_chan_list_dump_lli(chan, vd_to_axi_desc(vd));
+
+	/* Pretend the bad descriptor completed successfully */
+	vchan_cookie_complete(vd);
+
+	/* Try to restart the controller */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
+{
+	struct virt_dma_desc *vd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+	if (unlikely(axi_chan_is_hw_enable(chan))) {
+		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
+			axi_chan_name(chan));
+		axi_chan_disable(chan);
+	}
+
+	/* The completed descriptor currently is in the head of vc list */
+	vd = vchan_next_desc(&chan->vc);
+	/* Remove the completed descriptor from issued list before completing */
+	list_del(&vd->node);
+	vchan_cookie_complete(vd);
+
+	/* Submit queued descriptors after processing the completed ones */
+	axi_chan_start_first_queued(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+}
+
+static void axi_dma_tasklet(unsigned long data)
+{
+	struct axi_dma_chip *chip = (struct axi_dma_chip *)data;
+	struct axi_dma_chan *chan;
+	struct dw_axi_dma *dw = chip->dw;
+
+	u32 status, i;
+
+	/* Poll, clear and process every chanel interrupt status */
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		chan = &dw->chan[i];
+		status = axi_chan_irq_read(chan);
+		axi_chan_irq_clear(chan, status);
+
+		dev_dbg(chip->dev, "%s %u IRQ status: 0x%08x\n",
+			axi_chan_name(chan), i, status);
+
+		if (unlikely(!chan->in_use))
+			continue;
+
+		if (status & DWAXIDMAC_IRQ_ALL_ERR)
+			axi_chan_handle_err(chan, status);
+		else if (status & DWAXIDMAC_IRQ_DMA_TRF)
+			axi_chan_block_xfer_complete(chan);
+	}
+
+	/* Re-enable interrupts */
+	axi_dma_irq_enable(chip);
+}
+
+static int dma_chan_terminate_all(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	axi_chan_disable(chan);
+
+	vchan_get_all_descriptors(&chan->vc, &head);
+
+	/*
+	 * As vchan_dma_desc_free_list can access to desc_allocated list
+	 * we need to call it in vc.lock context.
+	 */
+	vchan_dma_desc_free_list(&chan->vc, &head);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	dev_dbg(dchan2dev(dchan), "terminated: %s\n", axi_chan_name(chan));
+
+	return 0;
+}
+
+static int dma_chan_pause(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+	unsigned int timeout = 20; /* timeout iterations */
+	int ret = -EAGAIN;
+	u32 val;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
+		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	while (timeout--) {
+		if (axi_chan_irq_read(chan) & DWAXIDMAC_IRQ_SUSPENDED) {
+			axi_chan_irq_clear(chan, DWAXIDMAC_IRQ_SUSPENDED);
+			ret = 0;
+			break;
+		}
+		udelay(2);
+	}
+
+	chan->is_paused = true;
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return ret;
+}
+
+/* Called in chan locked context */
+static inline void axi_chan_resume(struct axi_dma_chan *chan)
+{
+	u32 val;
+
+	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
+	val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT);
+	val |=  (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
+	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
+
+	chan->is_paused = false;
+}
+
+static int dma_chan_resume(struct dma_chan *dchan)
+{
+	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&chan->vc.lock, flags);
+
+	if (chan->is_paused)
+		axi_chan_resume(chan);
+
+	spin_unlock_irqrestore(&chan->vc.lock, flags);
+
+	return 0;
+}
+
+static int parse_device_properties(struct axi_dma_chip *chip)
+{
+	struct device *dev = chip->dev;
+	u32 tmp, carr[DMAC_MAX_CHANNELS];
+	int ret;
+
+	ret = device_property_read_u32(dev, "dma-channels", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_CHANNELS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_channels = tmp;
+
+	ret = device_property_read_u32(dev, "dma-masters", &tmp);
+	if (ret)
+		return ret;
+	if (tmp == 0 || tmp > DMAC_MAX_MASTERS)
+		return -EINVAL;
+
+	chip->dw->hdata->nr_masters = tmp;
+
+	ret = device_property_read_u32(dev, "data-width", &tmp);
+	if (ret)
+		return ret;
+	if (tmp > DWAXIDMAC_TRANS_WIDTH_MAX)
+		return -EINVAL;
+
+	chip->dw->hdata->m_data_width = tmp;
+
+	ret = device_property_read_u32_array(dev, "block-size", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] == 0 || carr[tmp] > DMAC_MAX_BLK_SIZE)
+			return -EINVAL;
+		else
+			chip->dw->hdata->block_size[tmp] = carr[tmp];
+
+	ret = device_property_read_u32_array(dev, "priority", carr,
+					     chip->dw->hdata->nr_channels);
+	if (ret)
+		return ret;
+	/* priority value must be programmed within [0:nr_channels-1] range */
+	for (tmp = 0; tmp < chip->dw->hdata->nr_channels; tmp++)
+		if (carr[tmp] >= chip->dw->hdata->nr_channels)
+			return -EINVAL;
+		else
+			chip->dw->hdata->priority[tmp] = carr[tmp];
+
+	return 0;
+}
+
+static int dw_probe(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip;
+	struct resource *mem;
+	struct dw_axi_dma *dw;
+	struct dw_axi_dma_hcfg *hdata;
+	u32 i;
+	int ret;
+
+	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
+	if (!chip)
+		return -ENOMEM;
+
+	dw = devm_kzalloc(&pdev->dev, sizeof(*dw), GFP_KERNEL);
+	if (!dw)
+		return -ENOMEM;
+
+	hdata = devm_kzalloc(&pdev->dev, sizeof(*hdata), GFP_KERNEL);
+	if (!hdata)
+		return -ENOMEM;
+
+	chip->dw = dw;
+	chip->dev = &pdev->dev;
+	chip->dw->hdata = hdata;
+
+	chip->irq = platform_get_irq(pdev, 0);
+	if (chip->irq < 0)
+		return chip->irq;
+
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	chip->regs = devm_ioremap_resource(chip->dev, mem);
+	if (IS_ERR(chip->regs))
+		return PTR_ERR(chip->regs);
+
+	chip->clk = devm_clk_get(chip->dev, NULL);
+	if (IS_ERR(chip->clk))
+		return PTR_ERR(chip->clk);
+
+	ret = parse_device_properties(chip);
+	if (ret)
+		return ret;
+
+	dw->chan = devm_kcalloc(chip->dev, hdata->nr_channels,
+				sizeof(*dw->chan), GFP_KERNEL);
+	if (!dw->chan)
+		return -ENOMEM;
+
+	ret = devm_request_irq(chip->dev, chip->irq, dw_axi_dma_intretupt,
+			       IRQF_SHARED, DRV_NAME, chip);
+	if (ret)
+		return ret;
+
+	/* Lli address must be aligned to a 64-byte boundary */
+	dw->desc_pool = dmam_pool_create(DRV_NAME, chip->dev,
+					 sizeof(struct axi_dma_desc), 64, 0);
+	if (!dw->desc_pool) {
+		dev_err(chip->dev, "No memory for descriptors dma pool\n");
+		return -ENOMEM;
+	}
+
+	tasklet_init(&dw->tasklet, axi_dma_tasklet, (unsigned long)chip);
+
+	INIT_LIST_HEAD(&dw->dma.channels);
+	for (i = 0; i < hdata->nr_channels; i++) {
+		struct axi_dma_chan *chan = &dw->chan[i];
+
+		chan->chip = chip;
+		chan->id = (u8)i;
+		chan->chan_regs = chip->regs + COMMON_REG_LEN + i * CHAN_REG_LEN;
+
+		chan->vc.desc_free = vchan_desc_put;
+		vchan_init(&chan->vc, &dw->dma);
+	}
+
+	axi_dma_hw_init(chip);
+
+	/* Set capabilities */
+	dma_cap_set(DMA_MEMCPY, dw->dma.cap_mask);
+	dma_cap_set(DMA_SG, dw->dma.cap_mask);
+
+	/* DMA capabilities */
+	dw->dma.chancnt = hdata->nr_channels;
+	dw->dma.src_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.dst_addr_widths = AXI_DMA_BUSWIDTHS;
+	dw->dma.directions = BIT(DMA_MEM_TO_MEM);
+	dw->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+
+	dw->dma.dev = chip->dev;
+	dw->dma.device_tx_status = dma_chan_tx_status;
+	dw->dma.device_issue_pending = dma_chan_issue_pending;
+	dw->dma.device_terminate_all = dma_chan_terminate_all;
+	dw->dma.device_pause = dma_chan_pause;
+	dw->dma.device_resume = dma_chan_resume;
+
+	dw->dma.device_alloc_chan_resources = dma_chan_alloc_chan_resources;
+	dw->dma.device_free_chan_resources = dma_chan_free_chan_resources;
+
+	dw->dma.device_prep_dma_memcpy = dma_chan_prep_dma_memcpy;
+	dw->dma.device_prep_dma_sg = dma_chan_prep_dma_sg;
+
+	ret = clk_prepare_enable(chip->clk);
+	if (ret < 0)
+		return ret;
+
+	ret = dma_async_device_register(&dw->dma);
+	if (ret)
+		goto err_clk_disable;
+
+	platform_set_drvdata(pdev, chip);
+
+	dev_info(chip->dev, "DesignWare AXI DMA Controller, %d channels\n",
+		 dw->hdata->nr_channels);
+
+	return 0;
+
+err_clk_disable:
+	clk_disable_unprepare(chip->clk);
+
+	return ret;
+}
+
+static int dw_remove(struct platform_device *pdev)
+{
+	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
+	struct dw_axi_dma *dw = chip->dw;
+	struct axi_dma_chan *chan, *_chan;
+	u32 i;
+
+	axi_dma_irq_disable(chip);
+	for (i = 0; i < dw->hdata->nr_channels; i++) {
+		axi_chan_disable(&chip->dw->chan[i]);
+		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
+	}
+	axi_dma_disable(chip);
+
+	tasklet_kill(&dw->tasklet);
+
+	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
+			vc.chan.device_node) {
+		list_del(&chan->vc.chan.device_node);
+		tasklet_kill(&chan->vc.task);
+	}
+
+	dma_async_device_unregister(&dw->dma);
+
+	clk_disable_unprepare(chip->clk);
+
+	return 0;
+}
+
+static const struct of_device_id dw_dma_of_id_table[] = {
+	{ .compatible = "snps,axi-dma" },
+	{}
+};
+MODULE_DEVICE_TABLE(of, dw_dma_of_id_table);
+
+static struct platform_driver dw_driver = {
+	.probe		= dw_probe,
+	.remove		= dw_remove,
+	.driver = {
+		.name	= DRV_NAME,
+		.of_match_table = of_match_ptr(dw_dma_of_id_table),
+	},
+};
+module_platform_driver(dw_driver);
+
+static int __init dw_init(void)
+{
+	return platform_driver_register(&dw_driver);
+}
+subsys_initcall(dw_init);
+
+static void __exit dw_exit(void)
+{
+	platform_driver_unregister(&dw_driver);
+}
+module_exit(dw_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Synopsys DesignWare AXI DMA Controller platform driver");
+MODULE_AUTHOR("Eugeniy Paltsev <Eugeniy.Paltsev at synopsys.com>");
diff --git a/drivers/dma/axi_dma_platform.h b/drivers/dma/axi_dma_platform.h
new file mode 100644
index 0000000..02cd744
--- /dev/null
+++ b/drivers/dma/axi_dma_platform.h
@@ -0,0 +1,124 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_H
+#define _AXI_DMA_PLATFORM_H
+
+#include <linux/bitops.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+
+#include "virt-dma.h"
+
+#define DMAC_MAX_CHANNELS	8
+#define DMAC_MAX_MASTERS	2
+#define DMAC_MAX_BLK_SIZE	0x200000
+
+struct dw_axi_dma_hcfg {
+	u32	nr_channels;
+	u32	nr_masters;
+	u32	m_data_width;
+	u32	block_size[DMAC_MAX_CHANNELS];
+	u32	priority[DMAC_MAX_CHANNELS];
+};
+
+struct axi_dma_chan {
+	struct axi_dma_chip		*chip;
+	void __iomem			*chan_regs;
+	u8				id;
+	bool				in_use;
+	unsigned int			descs_allocated;
+
+	struct virt_dma_chan		vc;
+
+	/* these other elements are all protected by vc.lock */
+	bool				is_paused;
+};
+
+struct dw_axi_dma {
+	struct dma_device	dma;
+	struct dw_axi_dma_hcfg	*hdata;
+	struct dma_pool		*desc_pool;
+	struct tasklet_struct	tasklet;
+
+	/* channels */
+	struct axi_dma_chan	*chan;
+};
+
+struct axi_dma_chip {
+	struct device		*dev;
+	int			irq;
+	void __iomem		*regs;
+	struct clk		*clk;
+	struct dw_axi_dma	*dw;
+};
+
+/* LLI == Linked List Item */
+struct axi_dma_lli {
+	__le32		sar_lo;
+	__le32		sar_hi;
+	__le32		dar_lo;
+	__le32		dar_hi;
+	__le32		block_ts_lo;
+	__le32		block_ts_hi;
+	__le32		llp_lo;
+	__le32		llp_hi;
+	__le32		ctl_lo;
+	__le32		ctl_hi;
+	__le32		sstat;
+	__le32		dstat;
+	__le32		status_lo;
+	__le32		ststus_hi;
+	__le32		reserved_lo;
+	__le32		reserved_hi;
+};
+
+struct axi_dma_desc {
+	struct axi_dma_lli		lli;
+
+	struct virt_dma_desc		vd;
+	struct axi_dma_chan		*chan;
+	struct list_head		xfer_list;
+	size_t				total_len;
+};
+
+static inline struct device *dchan2dev(struct dma_chan *dchan)
+{
+	return &dchan->dev->device;
+}
+
+static inline struct device *chan2dev(struct axi_dma_chan *chan)
+{
+	return &chan->vc.chan.dev->device;
+}
+
+static inline struct axi_dma_desc *vd_to_axi_desc(struct virt_dma_desc *vd)
+{
+	return container_of(vd, struct axi_dma_desc, vd);
+}
+
+static inline struct axi_dma_chan *vc_to_axi_dma_chan(struct virt_dma_chan *vc)
+{
+	return container_of(vc, struct axi_dma_chan, vc);
+}
+
+static inline struct axi_dma_chan *dchan_to_axi_dma_chan(struct dma_chan *dchan)
+{
+	return vc_to_axi_dma_chan(to_virt_chan(dchan));
+}
+
+#endif /* _AXI_DMA_PLATFORM_H */
diff --git a/drivers/dma/axi_dma_platform_reg.h b/drivers/dma/axi_dma_platform_reg.h
new file mode 100644
index 0000000..4d62b50
--- /dev/null
+++ b/drivers/dma/axi_dma_platform_reg.h
@@ -0,0 +1,189 @@
+/*
+ * Synopsys DesignWare AXI DMA Controller driver.
+ *
+ * Copyright (C) 2017 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _AXI_DMA_PLATFORM_REG_H
+#define _AXI_DMA_PLATFORM_REG_H
+
+#include <linux/bitops.h>
+
+#define COMMON_REG_LEN		0x100
+#define CHAN_REG_LEN		0x100
+
+/* Common registers offset */
+#define DMAC_ID			0x000 // R DMAC ID
+#define DMAC_COMPVER		0x008 // R DMAC Component Version
+#define DMAC_CFG		0x010 // R/W DMAC Configuration
+#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
+#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
+#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
+#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
+#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
+#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
+#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
+#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
+#define DMAC_RESET		0x058 // R DMAC Reset Register1
+
+/* DMA channel registers offset */
+#define CH_SAR			0x000 // R/W Chan Source Address
+#define CH_DAR			0x008 // R/W Chan Destination Address
+#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
+#define CH_CTL			0x018 // R/W Chan Control
+#define CH_CTL_L		0x018 // R/W Chan Control 00-31
+#define CH_CTL_H		0x01C // R/W Chan Control 32-63
+#define CH_CFG			0x020 // R/W Chan Configuration
+#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
+#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
+#define CH_LLP			0x028 // R/W Chan Linked List Pointer
+#define CH_STATUS		0x030 // R Chan Status
+#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
+#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
+#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
+#define CH_AXI_ID		0x050 // R/W Chan AXI ID
+#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
+#define CH_SSTAT		0x060 // R Chan Source Status
+#define CH_DSTAT		0x068 // R Chan Destination Status
+#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
+#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
+#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
+#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
+#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
+#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear
+
+
+/* DMAC_CFG */
+#define DMAC_EN_MASK		0x00000001U
+#define DMAC_EN_POS		0
+
+#define INT_EN_MASK		0x00000002U
+#define INT_EN_POS		1
+
+#define DMAC_CHAN_EN_SHIFT	0
+#define DMAC_CHAN_EN_WE_SHIFT	8
+
+#define DMAC_CHAN_SUSP_SHIFT	16
+#define DMAC_CHAN_SUSP_WE_SHIFT	24
+
+/* CH_CTL_H */
+#define CH_CTL_H_LLI_LAST	BIT(30)
+#define CH_CTL_H_LLI_VALID	BIT(31)
+
+/* CH_CTL_L */
+#define CH_CTL_L_LAST_WRITE_EN	BIT(30)
+
+#define CH_CTL_L_DST_MSIZE_POS	18
+#define CH_CTL_L_SRC_MSIZE_POS	14
+enum {
+	DWAXIDMAC_BURST_TRANS_LEN_1	= 0x0,
+	DWAXIDMAC_BURST_TRANS_LEN_4,
+	DWAXIDMAC_BURST_TRANS_LEN_8,
+	DWAXIDMAC_BURST_TRANS_LEN_16,
+	DWAXIDMAC_BURST_TRANS_LEN_32,
+	DWAXIDMAC_BURST_TRANS_LEN_64,
+	DWAXIDMAC_BURST_TRANS_LEN_128,
+	DWAXIDMAC_BURST_TRANS_LEN_256,
+	DWAXIDMAC_BURST_TRANS_LEN_512,
+	DWAXIDMAC_BURST_TRANS_LEN_1024
+};
+
+#define CH_CTL_L_DST_WIDTH_POS	11
+#define CH_CTL_L_SRC_WIDTH_POS	8
+
+#define CH_CTL_L_DST_INC_POS	6
+#define CH_CTL_L_SRC_INC_POS	4
+enum {
+	DWAXIDMAC_CH_CTL_L_INC	= 0x0,
+	DWAXIDMAC_CH_CTL_L_NOINC
+};
+
+#define CH_CTL_L_DST_MAST_POS	2
+#define CH_CTL_L_DST_MAST	BIT(CH_CTL_L_DST_MAST_POS)
+#define CH_CTL_L_SRC_MAST_POS	0
+#define CH_CTL_L_SRC_MAST	BIT(CH_CTL_L_SRC_MAST_POS)
+
+/* CH_CFG_H */
+#define CH_CFG_H_PRIORITY_POS	17
+#define CH_CFG_H_HS_SEL_DST_POS	4
+#define CH_CFG_H_HS_SEL_SRC_POS	3
+enum {
+	DWAXIDMAC_HS_SEL_HW	= 0x0,
+	DWAXIDMAC_HS_SEL_SW
+};
+
+#define CH_CFG_H_TT_FC_POS	0
+enum {
+	DWAXIDMAC_TT_FC_MEM_TO_MEM_DMAC	= 0x0,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DMAC,
+	DWAXIDMAC_TT_FC_PER_TO_MEM_SRC,
+	DWAXIDMAC_TT_FC_PER_TO_PER_SRC,
+	DWAXIDMAC_TT_FC_MEM_TO_PER_DST,
+	DWAXIDMAC_TT_FC_PER_TO_PER_DST
+};
+
+/* CH_CFG_L */
+#define CH_CFG_L_DST_MULTBLK_TYPE_POS	2
+#define CH_CFG_L_SRC_MULTBLK_TYPE_POS	0
+enum {
+	DWAXIDMAC_MBLK_TYPE_CONTIGUOUS	= 0x0,
+	DWAXIDMAC_MBLK_TYPE_RELOAD,
+	DWAXIDMAC_MBLK_TYPE_SHADOW_REG,
+	DWAXIDMAC_MBLK_TYPE_LL
+};
+
+enum {
+	DWAXIDMAC_IRQ_NONE		= 0x0,
+	DWAXIDMAC_IRQ_BLOCK_TRF		= BIT(0),  // block transfer complete
+	DWAXIDMAC_IRQ_DMA_TRF		= BIT(1),  // dma transfer complete
+	DWAXIDMAC_IRQ_SRC_TRAN		= BIT(3),  // source transaction complete
+	DWAXIDMAC_IRQ_DST_TRAN		= BIT(4),  // destination transaction complete
+	DWAXIDMAC_IRQ_SRC_DEC_ERR	= BIT(5),  // source decode error
+	DWAXIDMAC_IRQ_DST_DEC_ERR	= BIT(6),  // destination decode error
+	DWAXIDMAC_IRQ_SRC_SLV_ERR	= BIT(7),  // source slave error
+	DWAXIDMAC_IRQ_DST_SLV_ERR	= BIT(8),  // destination slave error
+	DWAXIDMAC_IRQ_LLI_RD_DEC_ERR	= BIT(9),  // LLI read decode error
+	DWAXIDMAC_IRQ_LLI_WR_DEC_ERR	= BIT(10), // LLI write decode error
+	DWAXIDMAC_IRQ_LLI_RD_SLV_ERR	= BIT(11), // LLI read slave error
+	DWAXIDMAC_IRQ_LLI_WR_SLV_ERR	= BIT(12), // LLI write slave error
+	DWAXIDMAC_IRQ_INVALID_ERR	= BIT(13), // LLI invalide error or Shadow register error
+	DWAXIDMAC_IRQ_MULTIBLKTYPE_ERR	= BIT(14), // Slave Interface Multiblock type error
+	DWAXIDMAC_IRQ_DEC_ERR		= BIT(16), // Slave Interface decode error
+	DWAXIDMAC_IRQ_WR2RO_ERR		= BIT(17), // Slave Interface write to read only error
+	DWAXIDMAC_IRQ_RD2RWO_ERR	= BIT(18), // Slave Interface read to write only error
+	DWAXIDMAC_IRQ_WRONCHEN_ERR	= BIT(19), // Slave Interface write to channel error
+	DWAXIDMAC_IRQ_SHADOWREG_ERR	= BIT(20), // Slave Interface shadow reg error
+	DWAXIDMAC_IRQ_WRONHOLD_ERR	= BIT(21), // Slave Interface hold error
+	DWAXIDMAC_IRQ_LOCK_CLEARED	= BIT(27), // Lock Cleared Status
+	DWAXIDMAC_IRQ_SRC_SUSPENDED	= BIT(28), // Source Suspended Status
+	DWAXIDMAC_IRQ_SUSPENDED		= BIT(29), // Channel Suspended Status
+	DWAXIDMAC_IRQ_DISABLED		= BIT(30), // Channel Disabled Status
+	DWAXIDMAC_IRQ_ABORTED		= BIT(31), // Channel Aborted Status
+	DWAXIDMAC_IRQ_ALL_ERR		= 0x003F7FE0,
+	DWAXIDMAC_IRQ_ALL		= 0xFFFFFFFF
+};
+
+enum {
+	DWAXIDMAC_TRANS_WIDTH_8		= 0x0,
+	DWAXIDMAC_TRANS_WIDTH_16,
+	DWAXIDMAC_TRANS_WIDTH_32,
+	DWAXIDMAC_TRANS_WIDTH_64,
+	DWAXIDMAC_TRANS_WIDTH_128,
+	DWAXIDMAC_TRANS_WIDTH_256,
+	DWAXIDMAC_TRANS_WIDTH_512,
+	DWAXIDMAC_TRANS_WIDTH_MAX	= DWAXIDMAC_TRANS_WIDTH_512
+};
+
+#endif /* _AXI_DMA_PLATFORM_H */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 0/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 15:34 ` Eugeniy Paltsev
@ 2017-01-25 16:41   ` Andy Shevchenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-01-25 16:41 UTC (permalink / raw)
  To: Eugeniy Paltsev, dmaengine
  Cc: linux-kernel, devicetree, linux-snps-arc, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Alexey Brodkin

On Wed, 2017-01-25 at 18:34 +0300, Eugeniy Paltsev wrote:
> This patch series add support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.
> 
> Changes for v0:
>  * Switch to virt-dma API (according to previous RFC)
>  * Small fixies according to previous RFC

Yeah, seems you didn't address some of the comments and didn't comment
why...

>  * Add DT bindings
> 
> Eugeniy Paltsev (2):
>   dt-bindings: Document the Synopsys DW AXI DMA bindings
>   dmaengine: Add DW AXI DMAC driver
> 
>  .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   |   33 +
>  drivers/dma/Kconfig                                |    8 +
>  drivers/dma/Makefile                               |    1 +

>  drivers/dma/axi_dma_platform.c                     | 1060
> ++++++++++++++++++++

This surprises me. I would expect more then 100+ LOC reduction when
switched to virt-dma API. Can you double check that you are using it
effectively?

>  drivers/dma/axi_dma_platform.h                     |  124 +++
>  drivers/dma/axi_dma_platform_reg.h                 |  189 ++++
>  6 files changed, 1415 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-
> dmac.txt
>  create mode 100644 drivers/dma/axi_dma_platform.c
>  create mode 100644 drivers/dma/axi_dma_platform.h
>  create mode 100644 drivers/dma/axi_dma_platform_reg.h
> 

-- 
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 0/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 16:41   ` Andy Shevchenko
  0 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-01-25 16:41 UTC (permalink / raw)
  To: linux-snps-arc

On Wed, 2017-01-25@18:34 +0300, Eugeniy Paltsev wrote:
> This patch series add support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.
> 
> Changes for v0:
> ?* Switch to virt-dma API (according to previous RFC)
> ?* Small fixies according to previous RFC

Yeah, seems you didn't address some of the comments and didn't comment
why...

> ?* Add DT bindings
> 
> Eugeniy Paltsev (2):
> ? dt-bindings: Document the Synopsys DW AXI DMA bindings
> ? dmaengine: Add DW AXI DMAC driver
> 
> ?.../devicetree/bindings/dma/snps,axi-dw-dmac.txt???|???33 +
> ?drivers/dma/Kconfig????????????????????????????????|????8 +
> ?drivers/dma/Makefile???????????????????????????????|????1 +

> ?drivers/dma/axi_dma_platform.c?????????????????????| 1060
> ++++++++++++++++++++

This surprises me. I would expect more then 100+ LOC reduction when
switched to virt-dma API. Can you double check that you are using it
effectively?

> ?drivers/dma/axi_dma_platform.h?????????????????????|??124 +++
> ?drivers/dma/axi_dma_platform_reg.h?????????????????|??189 ++++
> ?6 files changed, 1415 insertions(+)
> ?create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-
> dmac.txt
> ?create mode 100644 drivers/dma/axi_dma_platform.c
> ?create mode 100644 drivers/dma/axi_dma_platform.h
> ?create mode 100644 drivers/dma/axi_dma_platform_reg.h
> 

-- 
Andy Shevchenko <andriy.shevchenko at linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 15:34   ` Eugeniy Paltsev
  (?)
@ 2017-01-25 16:49     ` kbuild test robot
  -1 siblings, 0 replies; 29+ messages in thread
From: kbuild test robot @ 2017-01-25 16:49 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: kbuild-all, dmaengine, linux-kernel, devicetree, linux-snps-arc,
	Dan Williams, Vinod Koul, Mark Rutland, Rob Herring,
	Andy Shevchenko, Alexey Brodkin, Eugeniy Paltsev

[-- Attachment #1: Type: text/plain, Size: 7629 bytes --]

Hi Eugeniy,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.10-rc5 next-20170125]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Eugeniy-Paltsev/dmaengine-Add-DW-AXI-DMAC-driver/20170126-000653
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from drivers/dma/axi_dma_platform.c:26:0:
>> include/linux/module.h:130:27: error: redefinition of '__inittest'
     static inline initcall_t __inittest(void)  \
                              ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:130:27: note: previous definition of '__inittest' was here
     static inline initcall_t __inittest(void)  \
                              ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:132:6: error: redefinition of 'init_module'
     int init_module(void) __attribute__((alias(#initfn)));
         ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:132:6: note: previous definition of 'init_module' was here
     int init_module(void) __attribute__((alias(#initfn)));
         ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:136:27: error: redefinition of '__exittest'
     static inline exitcall_t __exittest(void)  \
                              ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:136:27: note: previous definition of '__exittest' was here
     static inline exitcall_t __exittest(void)  \
                              ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:138:7: error: redefinition of 'cleanup_module'
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:138:7: note: previous definition of 'cleanup_module' was here
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~

vim +/__inittest +130 include/linux/module.h

0fd972a7 Paul Gortmaker 2015-05-01  109  #define early_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  110  #define core_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  111  #define core_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  112  #define postcore_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  113  #define postcore_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  114  #define arch_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01 @115  #define subsys_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  116  #define subsys_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  117  #define fs_initcall(fn)			module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  118  #define fs_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  119  #define rootfs_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  120  #define device_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  121  #define device_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  122  #define late_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  123  #define late_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  124  
0fd972a7 Paul Gortmaker 2015-05-01  125  #define console_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  126  #define security_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  127  
0fd972a7 Paul Gortmaker 2015-05-01  128  /* Each module must use one module_init(). */
0fd972a7 Paul Gortmaker 2015-05-01  129  #define module_init(initfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @130  	static inline initcall_t __inittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  131  	{ return initfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @132  	int init_module(void) __attribute__((alias(#initfn)));
0fd972a7 Paul Gortmaker 2015-05-01  133  
0fd972a7 Paul Gortmaker 2015-05-01  134  /* This is only required if you want to be unloadable. */
0fd972a7 Paul Gortmaker 2015-05-01  135  #define module_exit(exitfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @136  	static inline exitcall_t __exittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  137  	{ return exitfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @138  	void cleanup_module(void) __attribute__((alias(#exitfn)));
0fd972a7 Paul Gortmaker 2015-05-01  139  
0fd972a7 Paul Gortmaker 2015-05-01  140  #endif
0fd972a7 Paul Gortmaker 2015-05-01  141  

:::::: The code at line 130 was first introduced by commit
:::::: 0fd972a7d91d6e15393c449492a04d94c0b89351 module: relocate module_init from init.h to module.h

:::::: TO: Paul Gortmaker <paul.gortmaker@windriver.com>
:::::: CC: Paul Gortmaker <paul.gortmaker@windriver.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 57936 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 16:49     ` kbuild test robot
  0 siblings, 0 replies; 29+ messages in thread
From: kbuild test robot @ 2017-01-25 16:49 UTC (permalink / raw)
  Cc: Mark Rutland, devicetree, Andy Shevchenko, Vinod Koul,
	Alexey Brodkin, linux-kernel, Rob Herring, kbuild-all, dmaengine,
	Dan Williams, linux-snps-arc, Eugeniy Paltsev

[-- Attachment #1: Type: text/plain, Size: 7629 bytes --]

Hi Eugeniy,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.10-rc5 next-20170125]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Eugeniy-Paltsev/dmaengine-Add-DW-AXI-DMAC-driver/20170126-000653
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from drivers/dma/axi_dma_platform.c:26:0:
>> include/linux/module.h:130:27: error: redefinition of '__inittest'
     static inline initcall_t __inittest(void)  \
                              ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:130:27: note: previous definition of '__inittest' was here
     static inline initcall_t __inittest(void)  \
                              ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:132:6: error: redefinition of 'init_module'
     int init_module(void) __attribute__((alias(#initfn)));
         ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:132:6: note: previous definition of 'init_module' was here
     int init_module(void) __attribute__((alias(#initfn)));
         ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:136:27: error: redefinition of '__exittest'
     static inline exitcall_t __exittest(void)  \
                              ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:136:27: note: previous definition of '__exittest' was here
     static inline exitcall_t __exittest(void)  \
                              ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:138:7: error: redefinition of 'cleanup_module'
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:138:7: note: previous definition of 'cleanup_module' was here
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~

vim +/__inittest +130 include/linux/module.h

0fd972a7 Paul Gortmaker 2015-05-01  109  #define early_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  110  #define core_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  111  #define core_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  112  #define postcore_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  113  #define postcore_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  114  #define arch_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01 @115  #define subsys_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  116  #define subsys_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  117  #define fs_initcall(fn)			module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  118  #define fs_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  119  #define rootfs_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  120  #define device_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  121  #define device_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  122  #define late_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  123  #define late_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  124  
0fd972a7 Paul Gortmaker 2015-05-01  125  #define console_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  126  #define security_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  127  
0fd972a7 Paul Gortmaker 2015-05-01  128  /* Each module must use one module_init(). */
0fd972a7 Paul Gortmaker 2015-05-01  129  #define module_init(initfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @130  	static inline initcall_t __inittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  131  	{ return initfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @132  	int init_module(void) __attribute__((alias(#initfn)));
0fd972a7 Paul Gortmaker 2015-05-01  133  
0fd972a7 Paul Gortmaker 2015-05-01  134  /* This is only required if you want to be unloadable. */
0fd972a7 Paul Gortmaker 2015-05-01  135  #define module_exit(exitfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @136  	static inline exitcall_t __exittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  137  	{ return exitfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @138  	void cleanup_module(void) __attribute__((alias(#exitfn)));
0fd972a7 Paul Gortmaker 2015-05-01  139  
0fd972a7 Paul Gortmaker 2015-05-01  140  #endif
0fd972a7 Paul Gortmaker 2015-05-01  141  

:::::: The code at line 130 was first introduced by commit
:::::: 0fd972a7d91d6e15393c449492a04d94c0b89351 module: relocate module_init from init.h to module.h

:::::: TO: Paul Gortmaker <paul.gortmaker@windriver.com>
:::::: CC: Paul Gortmaker <paul.gortmaker@windriver.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 57936 bytes --]

[-- Attachment #3: Type: text/plain, Size: 169 bytes --]

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 16:49     ` kbuild test robot
  0 siblings, 0 replies; 29+ messages in thread
From: kbuild test robot @ 2017-01-25 16:49 UTC (permalink / raw)
  To: linux-snps-arc

Hi Eugeniy,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.10-rc5 next-20170125]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Eugeniy-Paltsev/dmaengine-Add-DW-AXI-DMAC-driver/20170126-000653
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from drivers/dma/axi_dma_platform.c:26:0:
>> include/linux/module.h:130:27: error: redefinition of '__inittest'
     static inline initcall_t __inittest(void)  \
                              ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:130:27: note: previous definition of '__inittest' was here
     static inline initcall_t __inittest(void)  \
                              ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:132:6: error: redefinition of 'init_module'
     int init_module(void) __attribute__((alias(#initfn)));
         ^
>> include/linux/module.h:115:30: note: in expansion of macro 'module_init'
    #define subsys_initcall(fn)  module_init(fn)
                                 ^~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1050:1: note: in expansion of macro 'subsys_initcall'
    subsys_initcall(dw_init);
    ^~~~~~~~~~~~~~~
   include/linux/module.h:132:6: note: previous definition of 'init_module' was here
     int init_module(void) __attribute__((alias(#initfn)));
         ^
   include/linux/device.h:1463:1: note: in expansion of macro 'module_init'
    module_init(__driver##_init); \
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:136:27: error: redefinition of '__exittest'
     static inline exitcall_t __exittest(void)  \
                              ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:136:27: note: previous definition of '__exittest' was here
     static inline exitcall_t __exittest(void)  \
                              ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~
>> include/linux/module.h:138:7: error: redefinition of 'cleanup_module'
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
>> drivers/dma/axi_dma_platform.c:1056:1: note: in expansion of macro 'module_exit'
    module_exit(dw_exit);
    ^~~~~~~~~~~
   include/linux/module.h:138:7: note: previous definition of 'cleanup_module' was here
     void cleanup_module(void) __attribute__((alias(#exitfn)));
          ^
   include/linux/device.h:1468:1: note: in expansion of macro 'module_exit'
    module_exit(__driver##_exit);
    ^~~~~~~~~~~
   include/linux/platform_device.h:228:2: note: in expansion of macro 'module_driver'
     module_driver(__platform_driver, platform_driver_register, \
     ^~~~~~~~~~~~~
>> drivers/dma/axi_dma_platform.c:1044:1: note: in expansion of macro 'module_platform_driver'
    module_platform_driver(dw_driver);
    ^~~~~~~~~~~~~~~~~~~~~~

vim +/__inittest +130 include/linux/module.h

0fd972a7 Paul Gortmaker 2015-05-01  109  #define early_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  110  #define core_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  111  #define core_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  112  #define postcore_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  113  #define postcore_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  114  #define arch_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01 @115  #define subsys_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  116  #define subsys_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  117  #define fs_initcall(fn)			module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  118  #define fs_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  119  #define rootfs_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  120  #define device_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  121  #define device_initcall_sync(fn)	module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  122  #define late_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  123  #define late_initcall_sync(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  124  
0fd972a7 Paul Gortmaker 2015-05-01  125  #define console_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  126  #define security_initcall(fn)		module_init(fn)
0fd972a7 Paul Gortmaker 2015-05-01  127  
0fd972a7 Paul Gortmaker 2015-05-01  128  /* Each module must use one module_init(). */
0fd972a7 Paul Gortmaker 2015-05-01  129  #define module_init(initfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @130  	static inline initcall_t __inittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  131  	{ return initfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @132  	int init_module(void) __attribute__((alias(#initfn)));
0fd972a7 Paul Gortmaker 2015-05-01  133  
0fd972a7 Paul Gortmaker 2015-05-01  134  /* This is only required if you want to be unloadable. */
0fd972a7 Paul Gortmaker 2015-05-01  135  #define module_exit(exitfn)					\
0fd972a7 Paul Gortmaker 2015-05-01 @136  	static inline exitcall_t __exittest(void)		\
0fd972a7 Paul Gortmaker 2015-05-01  137  	{ return exitfn; }					\
0fd972a7 Paul Gortmaker 2015-05-01 @138  	void cleanup_module(void) __attribute__((alias(#exitfn)));
0fd972a7 Paul Gortmaker 2015-05-01  139  
0fd972a7 Paul Gortmaker 2015-05-01  140  #endif
0fd972a7 Paul Gortmaker 2015-05-01  141  

:::::: The code at line 130 was first introduced by commit
:::::: 0fd972a7d91d6e15393c449492a04d94c0b89351 module: relocate module_init from init.h to module.h

:::::: TO: Paul Gortmaker <paul.gortmaker at windriver.com>
:::::: CC: Paul Gortmaker <paul.gortmaker at windriver.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
-------------- next part --------------
A non-text attachment was scrubbed...
Name: .config.gz
Type: application/gzip
Size: 57936 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-snps-arc/attachments/20170126/f48b6066/attachment-0001.gz>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 15:34   ` Eugeniy Paltsev
  (?)
@ 2017-01-25 17:25     ` Andy Shevchenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-01-25 17:25 UTC (permalink / raw)
  To: Eugeniy Paltsev, dmaengine
  Cc: linux-kernel, devicetree, linux-snps-arc, Dan Williams,
	Vinod Koul, Mark Rutland, Rob Herring, Alexey Brodkin

On Wed, 2017-01-25 at 18:34 +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.
> 

Few more comments on top of not addressed/answered yet.

> +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> +	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}
> +
> +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);

> +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> 

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}


> +static void axi_desc_put(struct axi_dma_desc *desc)
> +{
> +	struct axi_dma_chan *chan = desc->chan;
> +	struct dw_axi_dma *dw = chan->chip->dw;
> +	struct axi_dma_desc *child, *_next;
> +	unsigned int descs_put = 0;
> +

> +	if (unlikely(!desc))
> +		return;

Would it be the case?

> +static void dma_chan_issue_pending(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +

> +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> axi_chan_name(chan));

Messages like this kinda redundant.
Either you use function tracer and see them anyway, or you are using
Dynamic Debug, which may include function name.

Basically you wrote an equivalent to something like

dev_dbg(dev, "%s\n", channame);

> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (vchan_issue_pending(&chan->vc))
> +		axi_chan_start_first_queued(chan);
> +	spin_unlock_irqrestore(&chan->vc.lock, flags);

...and taking into account the function itself one might expect
something useful printed in _start_first_queued().

For some cases there is also dev_vdbg().

> +}
> +
> +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan)) {
> +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> +			axi_chan_name(chan));
> +		return -EBUSY;
> +	}
> +

> +	dev_dbg(dchan2dev(dchan), "%s: allocating\n",
> axi_chan_name(chan));

Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is useful
and what is not?

Give a chance to function tracer as well.

> +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> 

> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan))
> +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> +			axi_chan_name(chan));

Yeah, as I said, dw_dmac is not a good example. And this one can be
managed by runtime PM I suppose.

> +
> +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> transfer? */

Read datasheet for a SoC/platform? For dw_dmac is chosen with accordance
to hardware topology.

> +	while (true) {

Usually it makes readability harder and rises a red flag to the code.

> 		/* Manage transfer list (xfer_list) */
> +		if (!first) {
> +			first = desc;
> +		} else {
> +			list_add_tail(&desc->xfer_list, &first-
> >xfer_list);
> +			write_desc_llp(prev, desc->vd.tx.phys | lms);
> +		}
> +		prev = desc;
> +
> +		/* update the lengths and addresses for the next loop
> cycle */
> +		dst_len -= xfer_len;
> +		src_len -= xfer_len;
> +		dst_adr += xfer_len;
> +		src_adr += xfer_len;
> +
> +		total_len += xfer_len;

I would suggest to leave this on caller. At some point, if no one else
do this faster than me, I would like to introduce something like struct
dma_parms per DMA channel to allow caller prepare SG list suitable for
the DMA device.

> +
> +static struct dma_async_tx_descriptor *
> +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> +			 dma_addr_t src, size_t len, unsigned long
> flags)
> +{

Do you indeed have a use case for this except debugging and testing?

> +	unsigned int nents = 1;
> +	struct scatterlist dst_sg;
> +	struct scatterlist src_sg;
> +
> +	sg_init_table(&dst_sg, nents);
> +	sg_init_table(&src_sg, nents);
> +
> +	sg_dma_address(&dst_sg) = dest;
> +	sg_dma_address(&src_sg) = src;
> +
> +	sg_dma_len(&dst_sg) = len;
> +	sg_dma_len(&src_sg) = len;
> +
> +	/* Implement memcpy transfer as sg transfer with single list
> */

> +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> +				    &src_sg, nents, flags);

One line?

> +}

> +
> +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> +			      struct axi_dma_desc *desc)
> +{
> +	dev_err(dchan2dev(&chan->vc.chan),
> +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL:
> 0x%x:%08x",
> +		le32_to_cpu(desc->lli.sar_lo),
> +		le32_to_cpu(desc->lli.dar_lo),
> +		le32_to_cpu(desc->lli.llp_lo),
> +		le32_to_cpu(desc->lli.block_ts_lo),
> +		le32_to_cpu(desc->lli.ctl_hi),
> +		le32_to_cpu(desc->lli.ctl_lo));

I hope at some point ARC (and other architectures which will use this
IP) can implement MMIO tracer.

> +}

> +		axi_chan_dump_lli(chan, desc);
> +}
> +
> +

Extra line.

> +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32
> status)
> +{

> +
> +static int dma_chan_pause(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +	unsigned int timeout = 20; /* timeout iterations */
> +	int ret = -EAGAIN;
> +	u32 val;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +
> +	while (timeout--) {

> +		if (axi_chan_irq_read(chan) &
> DWAXIDMAC_IRQ_SUSPENDED) {

> +			axi_chan_irq_clear(chan,
> DWAXIDMAC_IRQ_SUSPENDED);

You may move this invariant out from the loop. Makes code cleaner.

> +			ret = 0;
> +			break;
> +		}

> +		udelay(2);


> +	}
> +
> +	chan->is_paused = true;

This is indeed property of channel.
That said, you may do a trick and use descriptor status for it.
You channel and driver, I'm sure, can't serve in interleaved mode with
descriptors. So, that makes channel(paused) == active
descriptor(paused).

The trick allows to make code cleaner.

> +	ret = device_property_read_u32_array(dev, "priority", carr,

I'm not sure you will have a use case for that. Have you?

> +	ret = devm_request_irq(chip->dev, chip->irq,
> dw_axi_dma_intretupt,
> +			       IRQF_SHARED, DRV_NAME, chip);
> +	if (ret)
> +		return ret;

You can't do this unless you are using threaded IRQ handler without any
tasklets involved.

There was a nice mail by Thomas who explained what the problem you have
there.

It's a bug in your code.

> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

Hmm... Why it can't be just a platform driver? You describe the
dependency in Device Tree, if you have something happened, you may
utilize -EPROBE_DEFER.

-- 
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 17:25     ` Andy Shevchenko
  0 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-01-25 17:25 UTC (permalink / raw)
  To: Eugeniy Paltsev, dmaengine
  Cc: Mark Rutland, devicetree, Vinod Koul, Alexey Brodkin,
	linux-kernel, Rob Herring, Dan Williams, linux-snps-arc

On Wed, 2017-01-25 at 18:34 +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.
> 

Few more comments on top of not addressed/answered yet.

> +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> +	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}
> +
> +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);

> +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> 

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}


> +static void axi_desc_put(struct axi_dma_desc *desc)
> +{
> +	struct axi_dma_chan *chan = desc->chan;
> +	struct dw_axi_dma *dw = chan->chip->dw;
> +	struct axi_dma_desc *child, *_next;
> +	unsigned int descs_put = 0;
> +

> +	if (unlikely(!desc))
> +		return;

Would it be the case?

> +static void dma_chan_issue_pending(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +

> +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> axi_chan_name(chan));

Messages like this kinda redundant.
Either you use function tracer and see them anyway, or you are using
Dynamic Debug, which may include function name.

Basically you wrote an equivalent to something like

dev_dbg(dev, "%s\n", channame);

> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (vchan_issue_pending(&chan->vc))
> +		axi_chan_start_first_queued(chan);
> +	spin_unlock_irqrestore(&chan->vc.lock, flags);

...and taking into account the function itself one might expect
something useful printed in _start_first_queued().

For some cases there is also dev_vdbg().

> +}
> +
> +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan)) {
> +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> +			axi_chan_name(chan));
> +		return -EBUSY;
> +	}
> +

> +	dev_dbg(dchan2dev(dchan), "%s: allocating\n",
> axi_chan_name(chan));

Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is useful
and what is not?

Give a chance to function tracer as well.

> +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> 

> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan))
> +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> +			axi_chan_name(chan));

Yeah, as I said, dw_dmac is not a good example. And this one can be
managed by runtime PM I suppose.

> +
> +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> transfer? */

Read datasheet for a SoC/platform? For dw_dmac is chosen with accordance
to hardware topology.

> +	while (true) {

Usually it makes readability harder and rises a red flag to the code.

> 		/* Manage transfer list (xfer_list) */
> +		if (!first) {
> +			first = desc;
> +		} else {
> +			list_add_tail(&desc->xfer_list, &first-
> >xfer_list);
> +			write_desc_llp(prev, desc->vd.tx.phys | lms);
> +		}
> +		prev = desc;
> +
> +		/* update the lengths and addresses for the next loop
> cycle */
> +		dst_len -= xfer_len;
> +		src_len -= xfer_len;
> +		dst_adr += xfer_len;
> +		src_adr += xfer_len;
> +
> +		total_len += xfer_len;

I would suggest to leave this on caller. At some point, if no one else
do this faster than me, I would like to introduce something like struct
dma_parms per DMA channel to allow caller prepare SG list suitable for
the DMA device.

> +
> +static struct dma_async_tx_descriptor *
> +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> +			 dma_addr_t src, size_t len, unsigned long
> flags)
> +{

Do you indeed have a use case for this except debugging and testing?

> +	unsigned int nents = 1;
> +	struct scatterlist dst_sg;
> +	struct scatterlist src_sg;
> +
> +	sg_init_table(&dst_sg, nents);
> +	sg_init_table(&src_sg, nents);
> +
> +	sg_dma_address(&dst_sg) = dest;
> +	sg_dma_address(&src_sg) = src;
> +
> +	sg_dma_len(&dst_sg) = len;
> +	sg_dma_len(&src_sg) = len;
> +
> +	/* Implement memcpy transfer as sg transfer with single list
> */

> +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> +				    &src_sg, nents, flags);

One line?

> +}

> +
> +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> +			      struct axi_dma_desc *desc)
> +{
> +	dev_err(dchan2dev(&chan->vc.chan),
> +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL:
> 0x%x:%08x",
> +		le32_to_cpu(desc->lli.sar_lo),
> +		le32_to_cpu(desc->lli.dar_lo),
> +		le32_to_cpu(desc->lli.llp_lo),
> +		le32_to_cpu(desc->lli.block_ts_lo),
> +		le32_to_cpu(desc->lli.ctl_hi),
> +		le32_to_cpu(desc->lli.ctl_lo));

I hope at some point ARC (and other architectures which will use this
IP) can implement MMIO tracer.

> +}

> +		axi_chan_dump_lli(chan, desc);
> +}
> +
> +

Extra line.

> +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32
> status)
> +{

> +
> +static int dma_chan_pause(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +	unsigned int timeout = 20; /* timeout iterations */
> +	int ret = -EAGAIN;
> +	u32 val;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +
> +	while (timeout--) {

> +		if (axi_chan_irq_read(chan) &
> DWAXIDMAC_IRQ_SUSPENDED) {

> +			axi_chan_irq_clear(chan,
> DWAXIDMAC_IRQ_SUSPENDED);

You may move this invariant out from the loop. Makes code cleaner.

> +			ret = 0;
> +			break;
> +		}

> +		udelay(2);


> +	}
> +
> +	chan->is_paused = true;

This is indeed property of channel.
That said, you may do a trick and use descriptor status for it.
You channel and driver, I'm sure, can't serve in interleaved mode with
descriptors. So, that makes channel(paused) == active
descriptor(paused).

The trick allows to make code cleaner.

> +	ret = device_property_read_u32_array(dev, "priority", carr,

I'm not sure you will have a use case for that. Have you?

> +	ret = devm_request_irq(chip->dev, chip->irq,
> dw_axi_dma_intretupt,
> +			       IRQF_SHARED, DRV_NAME, chip);
> +	if (ret)
> +		return ret;

You can't do this unless you are using threaded IRQ handler without any
tasklets involved.

There was a nice mail by Thomas who explained what the problem you have
there.

It's a bug in your code.

> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

Hmm... Why it can't be just a platform driver? You describe the
dependency in Device Tree, if you have something happened, you may
utilize -EPROBE_DEFER.

-- 
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-01-25 17:25     ` Andy Shevchenko
  0 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-01-25 17:25 UTC (permalink / raw)
  To: linux-snps-arc

On Wed, 2017-01-25@18:34 +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> are supported.
> 

Few more comments on top of not addressed/answered yet.

> +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> +	val |=??(BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}
> +
> +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> +{
> +	u32 val;
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);

> +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> 

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +}


> +static void axi_desc_put(struct axi_dma_desc *desc)
> +{
> +	struct axi_dma_chan *chan = desc->chan;
> +	struct dw_axi_dma *dw = chan->chip->dw;
> +	struct axi_dma_desc *child, *_next;
> +	unsigned int descs_put = 0;
> +

> +	if (unlikely(!desc))
> +		return;

Would it be the case?

> +static void dma_chan_issue_pending(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +

> +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> axi_chan_name(chan));

Messages like this kinda redundant.
Either you use function tracer and see them anyway, or you are using
Dynamic Debug, which may include function name.

Basically you wrote an equivalent to something like

dev_dbg(dev, "%s\n", channame);

> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (vchan_issue_pending(&chan->vc))
> +		axi_chan_start_first_queued(chan);
> +	spin_unlock_irqrestore(&chan->vc.lock, flags);

...and taking into account the function itself one might expect
something useful printed in _start_first_queued().

For some cases there is also dev_vdbg().

> +}
> +
> +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan)) {
> +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> +			axi_chan_name(chan));
> +		return -EBUSY;
> +	}
> +

> +	dev_dbg(dchan2dev(dchan), "%s: allocating\n",
> axi_chan_name(chan));

Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is useful
and what is not?

Give a chance to function tracer as well.

> +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +
> 

> +	/* ASSERT: channel is idle */
> +	if (axi_chan_is_hw_enable(chan))
> +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> +			axi_chan_name(chan));

Yeah, as I said, dw_dmac is not a good example. And this one can be
managed by runtime PM I suppose.

> +
> +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> transfer? */

Read datasheet for a SoC/platform? For dw_dmac is chosen with accordance
to hardware topology.

> +	while (true) {

Usually it makes readability harder and rises a red flag to the code.

> 		/* Manage transfer list (xfer_list) */
> +		if (!first) {
> +			first = desc;
> +		} else {
> +			list_add_tail(&desc->xfer_list, &first-
> >xfer_list);
> +			write_desc_llp(prev, desc->vd.tx.phys | lms);
> +		}
> +		prev = desc;
> +
> +		/* update the lengths and addresses for the next loop
> cycle */
> +		dst_len -= xfer_len;
> +		src_len -= xfer_len;
> +		dst_adr += xfer_len;
> +		src_adr += xfer_len;
> +
> +		total_len += xfer_len;

I would suggest to leave this on caller. At some point, if no one else
do this faster than me, I would like to introduce something like struct
dma_parms per DMA channel to allow caller prepare SG list suitable for
the DMA device.

> +
> +static struct dma_async_tx_descriptor *
> +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> +			?dma_addr_t src, size_t len, unsigned long
> flags)
> +{

Do you indeed have a use case for this except debugging and testing?

> +	unsigned int nents = 1;
> +	struct scatterlist dst_sg;
> +	struct scatterlist src_sg;
> +
> +	sg_init_table(&dst_sg, nents);
> +	sg_init_table(&src_sg, nents);
> +
> +	sg_dma_address(&dst_sg) = dest;
> +	sg_dma_address(&src_sg) = src;
> +
> +	sg_dma_len(&dst_sg) = len;
> +	sg_dma_len(&src_sg) = len;
> +
> +	/* Implement memcpy transfer as sg transfer with single list
> */

> +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> +				????&src_sg, nents, flags);

One line?

> +}

> +
> +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> +			??????struct axi_dma_desc *desc)
> +{
> +	dev_err(dchan2dev(&chan->vc.chan),
> +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL:
> 0x%x:%08x",
> +		le32_to_cpu(desc->lli.sar_lo),
> +		le32_to_cpu(desc->lli.dar_lo),
> +		le32_to_cpu(desc->lli.llp_lo),
> +		le32_to_cpu(desc->lli.block_ts_lo),
> +		le32_to_cpu(desc->lli.ctl_hi),
> +		le32_to_cpu(desc->lli.ctl_lo));

I hope at some point ARC (and other architectures which will use this
IP) can implement MMIO tracer.

> +}

> +		axi_chan_dump_lli(chan, desc);
> +}
> +
> +

Extra line.

> +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32
> status)
> +{

> +
> +static int dma_chan_pause(struct dma_chan *dchan)
> +{
> +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> +	unsigned long flags;
> +	unsigned int timeout = 20; /* timeout iterations */
> +	int ret = -EAGAIN;
> +	u32 val;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +
> +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);

Redundant parens.

> +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> +
> +	while (timeout--) {

> +		if (axi_chan_irq_read(chan) &
> DWAXIDMAC_IRQ_SUSPENDED) {

> +			axi_chan_irq_clear(chan,
> DWAXIDMAC_IRQ_SUSPENDED);

You may move this invariant out from the loop. Makes code cleaner.

> +			ret = 0;
> +			break;
> +		}

> +		udelay(2);


> +	}
> +
> +	chan->is_paused = true;

This is indeed property of channel.
That said, you may do a trick and use descriptor status for it.
You channel and driver, I'm sure, can't serve in interleaved mode with
descriptors. So, that makes channel(paused) == active
descriptor(paused).

The trick allows to make code cleaner.

> +	ret = device_property_read_u32_array(dev, "priority", carr,

I'm not sure you will have a use case for that. Have you?

> +	ret = devm_request_irq(chip->dev, chip->irq,
> dw_axi_dma_intretupt,
> +			???????IRQF_SHARED, DRV_NAME, chip);
> +	if (ret)
> +		return ret;

You can't do this unless you are using threaded IRQ handler without any
tasklets involved.

There was a nice mail by Thomas who explained what the problem you have
there.

It's a bug in your code.

> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

Hmm... Why it can't be just a platform driver? You describe the
dependency in Device Tree, if you have something happened, you may
utilize -EPROBE_DEFER.

-- 
Andy Shevchenko <andriy.shevchenko at linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-30 20:08     ` Rob Herring
  0 siblings, 0 replies; 29+ messages in thread
From: Rob Herring @ 2017-01-30 20:08 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: dmaengine, linux-kernel, devicetree, linux-snps-arc,
	Dan Williams, Vinod Koul, Mark Rutland, Andy Shevchenko,
	Alexey Brodkin

On Wed, Jan 25, 2017 at 06:34:16PM +0300, Eugeniy Paltsev wrote:
> This patch adds documentation of device tree bindings for the Synopsys
> DesignWare AXI DMA controller.
> 
> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
> ---
>  .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> 
> diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> new file mode 100644
> index 0000000..21318a7
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> @@ -0,0 +1,33 @@
> +* Synopsys DesignWare AXI DMA Controller
> +
> +Required properties:
> +- compatible: "snps,axi-dma"

Too generic. This needs an IP version at least.

> +- reg: Address range of the DMAC registers. This should include
> +  all of the per-channel registers.
> +- interrupt: Should contain the DMAC interrupt number.
> +- interrupt-parent: Should be the phandle for the interrupt controller
> +  that services interrupts for this device.
> +- dma-channels: Number of channels supported by hardware.
> +- dma-masters: Number of AXI masters supported by the hardware.
> +- data-width: Maximum AXI data width supported by hardware.
> +  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)

> +- priority: Priority of channel. Array property. Priority value must be
> +  programmed within [0:dma-channels-1] range. (0 - minimum priority)
> +- block-size: Maximum block size supported by the controller channel. Array
> +  property.

Array size equal to the number of dma-channels?

Other than dma-channels, all these either need vendor prefix.

> +
> +Example:
> +
> +dmac: dmac@80000 {

dma-controller@...

> +	compatible = "snps,axi-dma";
> +	reg = <0x80000 0x400>;
> +	clocks = <&core_clk>;
> +	interrupt-parent = <&intc>;
> +	interrupts = <27>;
> +
> +	dma-channels = <4>;
> +	dma-masters = <2>;
> +	data-width = <3>;
> +	block-size = <4096 4096 4096 4096>;
> +	priority = <0 1 2 3>;
> +};
> -- 
> 2.5.5
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-30 20:08     ` Rob Herring
  0 siblings, 0 replies; 29+ messages in thread
From: Rob Herring @ 2017-01-30 20:08 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: dmaengine-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	devicetree-u79uwXL29TY76Z2rM5mHXA,
	linux-snps-arc-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Dan Williams,
	Vinod Koul, Mark Rutland, Andy Shevchenko, Alexey Brodkin

On Wed, Jan 25, 2017 at 06:34:16PM +0300, Eugeniy Paltsev wrote:
> This patch adds documentation of device tree bindings for the Synopsys
> DesignWare AXI DMA controller.
> 
> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev-HKixBCOQz3hWk0Htik3J/w@public.gmane.org>
> ---
>  .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> 
> diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> new file mode 100644
> index 0000000..21318a7
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> @@ -0,0 +1,33 @@
> +* Synopsys DesignWare AXI DMA Controller
> +
> +Required properties:
> +- compatible: "snps,axi-dma"

Too generic. This needs an IP version at least.

> +- reg: Address range of the DMAC registers. This should include
> +  all of the per-channel registers.
> +- interrupt: Should contain the DMAC interrupt number.
> +- interrupt-parent: Should be the phandle for the interrupt controller
> +  that services interrupts for this device.
> +- dma-channels: Number of channels supported by hardware.
> +- dma-masters: Number of AXI masters supported by the hardware.
> +- data-width: Maximum AXI data width supported by hardware.
> +  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)

> +- priority: Priority of channel. Array property. Priority value must be
> +  programmed within [0:dma-channels-1] range. (0 - minimum priority)
> +- block-size: Maximum block size supported by the controller channel. Array
> +  property.

Array size equal to the number of dma-channels?

Other than dma-channels, all these either need vendor prefix.

> +
> +Example:
> +
> +dmac: dmac@80000 {

dma-controller@...

> +	compatible = "snps,axi-dma";
> +	reg = <0x80000 0x400>;
> +	clocks = <&core_clk>;
> +	interrupt-parent = <&intc>;
> +	interrupts = <27>;
> +
> +	dma-channels = <4>;
> +	dma-masters = <2>;
> +	data-width = <3>;
> +	block-size = <4096 4096 4096 4096>;
> +	priority = <0 1 2 3>;
> +};
> -- 
> 2.5.5
> 
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings
@ 2017-01-30 20:08     ` Rob Herring
  0 siblings, 0 replies; 29+ messages in thread
From: Rob Herring @ 2017-01-30 20:08 UTC (permalink / raw)
  To: linux-snps-arc

On Wed, Jan 25, 2017@06:34:16PM +0300, Eugeniy Paltsev wrote:
> This patch adds documentation of device tree bindings for the Synopsys
> DesignWare AXI DMA controller.
> 
> Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev at synopsys.com>
> ---
>  .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   | 33 ++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> 
> diff --git a/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> new file mode 100644
> index 0000000..21318a7
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/dma/snps,axi-dw-dmac.txt
> @@ -0,0 +1,33 @@
> +* Synopsys DesignWare AXI DMA Controller
> +
> +Required properties:
> +- compatible: "snps,axi-dma"

Too generic. This needs an IP version at least.

> +- reg: Address range of the DMAC registers. This should include
> +  all of the per-channel registers.
> +- interrupt: Should contain the DMAC interrupt number.
> +- interrupt-parent: Should be the phandle for the interrupt controller
> +  that services interrupts for this device.
> +- dma-channels: Number of channels supported by hardware.
> +- dma-masters: Number of AXI masters supported by the hardware.
> +- data-width: Maximum AXI data width supported by hardware.
> +  (0 - 8bits, 1 - 16bits, 2 - 32bits, ..., 6 - 512bits)

> +- priority: Priority of channel. Array property. Priority value must be
> +  programmed within [0:dma-channels-1] range. (0 - minimum priority)
> +- block-size: Maximum block size supported by the controller channel. Array
> +  property.

Array size equal to the number of dma-channels?

Other than dma-channels, all these either need vendor prefix.

> +
> +Example:
> +
> +dmac: dmac at 80000 {

dma-controller at ...

> +	compatible = "snps,axi-dma";
> +	reg = <0x80000 0x400>;
> +	clocks = <&core_clk>;
> +	interrupt-parent = <&intc>;
> +	interrupts = <27>;
> +
> +	dma-channels = <4>;
> +	dma-masters = <2>;
> +	data-width = <3>;
> +	block-size = <4096 4096 4096 4096>;
> +	priority = <0 1 2 3>;
> +};
> -- 
> 2.5.5
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 17:25     ` Andy Shevchenko
@ 2017-02-09 13:58       ` Eugeniy Paltsev
  -1 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-02-09 13:58 UTC (permalink / raw)
  To: andriy.shevchenko
  Cc: dmaengine, dan.j.williams, linux-kernel, Alexey.Brodkin,
	vinod.koul, linux-snps-arc

Thanks for response.
My comments are given inline below.

On Wed, 2017-01-25 at 19:25 +0200, Andy Shevchenko wrote:
> On Wed, 2017-01-25 at 18:34 +0300, Eugeniy Paltsev wrote:
> > 
> > This patch adds support for the DW AXI DMAC controller.
> > 
> > DW AXI DMAC is a part of upcoming development board from Synopsys.
> > 
> > In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> > are supported.
> > 
> Few more comments on top of not addressed/answered yet.
> 
> > 
> > +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> > +{
> > +	u32 val;
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> > +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> > +	val |=  (BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +}
> > +
> > +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> > +{
> > +	u32 val;
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> > 
> > +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> > +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> > 
> Redundant parens.
Will be fixed.

> > 
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +}
> 
> > 
> > +static void axi_desc_put(struct axi_dma_desc *desc)
> > +{
> > +	struct axi_dma_chan *chan = desc->chan;
> > +	struct dw_axi_dma *dw = chan->chip->dw;
> > +	struct axi_dma_desc *child, *_next;
> > +	unsigned int descs_put = 0;
> > +
> > 
> > +	if (unlikely(!desc))
> > +		return;
> Would it be the case?
Yes, I checked the code - this NULL check is unnecessary, so I'll
remove it.

Also about your previous question about likely/unlikely:
I checked disassembly - gcc generates different code when I use likely
and unlikely. So, I guess they are useful. 

> > 
> > +static void dma_chan_issue_pending(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +	unsigned long flags;
> > +
> > 
> > +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> > axi_chan_name(chan));
> Messages like this kinda redundant.
> Either you use function tracer and see them anyway, or you are using
> Dynamic Debug, which may include function name.
> 
> Basically you wrote an equivalent to something like
> 
> dev_dbg(dev, "%s\n", channame);
Agreed. I'll remove dev_dbg from here because I also have it in 
axi_chan_start_first_queued.

> > 
> > +
> > +	spin_lock_irqsave(&chan->vc.lock, flags);
> > +	if (vchan_issue_pending(&chan->vc))
> > +		axi_chan_start_first_queued(chan);
> > +	spin_unlock_irqrestore(&chan->vc.lock, flags);
> ...and taking into account the function itself one might expect
> something useful printed in _start_first_queued().
> 
> For some cases there is also dev_vdbg().
> 
> > 
> > +}
> > +
> > +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +
> > +	/* ASSERT: channel is idle */
> > +	if (axi_chan_is_hw_enable(chan)) {
> > +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> > +			axi_chan_name(chan));
> > +		return -EBUSY;
> > +	}
> > +
> > 
> > +	dev_dbg(dchan2dev(dchan), "%s: allocating\n",
> > axi_chan_name(chan));
> Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is
> useful
> and what is not?
> 
> Give a chance to function tracer as well.
Yep, this dev_dbg is redundant, so I'll remove it.

> > 
> > +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +
> > 
> > 
> > +	/* ASSERT: channel is idle */
> > +	if (axi_chan_is_hw_enable(chan))
> > +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> > +			axi_chan_name(chan));
> Yeah, as I said, dw_dmac is not a good example. And this one can be
> managed by runtime PM I suppose.
See comment below.

> >> > +static inline bool axi_dma_is_sw_enable(struct axi_dma_chip
*chip)
> >> > +{
> >> > +   struct dw_axi_dma *dw = chip->dw;
> >> > +   u32 i;
> >> > +
> >> > +   for (i = 0; i < dw->hdata->nr_channels; i++) {
> >> > +           if (dw->chan[i].in_use)
> >> Hmm... I know why we have such flag in dw_dmac, but I doubt it's
> >> needed
> >> in this driver. Just double check the need of it.
> > I use this flag to check state of channel (used now/unused) to
disable
> > dmac if all channels are unused for now.
> 
> Okay, but wouldn't be easier to use runtime PM for that? You will not
> need any special counter and runtime PM will take case of
> enabling/disabling the device.

Now in_use variable has several purposes - it is also used to mask
interrupts from channels, which are not used - that is the reason I
prefer leave it untouched.
Also all existing SoCs with this DMAC don't support power management -
so there is no really profit from implementing PM.

> > 
> > +
> > +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> > transfer? */
> Read datasheet for a SoC/platform? For dw_dmac is chosen with
> accordance
> to hardware topology.
Yep, for existing SoCs there is no difference - both masters have
access to memory. But it isn't necessarily true for future SoCs.
But possibly I shouldn't think about it now.

> > 
> > +	while (true) {
> Usually it makes readability harder and rises a red flag to the code.
I can write something like next code. Not sure is it looks better.
----------->8------------------
while ((dst_len || dst_sg && dst_nents) &&
       (src_len || src_sg && src_nents)) {
...
----------->8------------------

> > 
> > 		/* Manage transfer list (xfer_list) */
> > +		if (!first) {
> > +			first = desc;
> > +		} else {
> > +			list_add_tail(&desc->xfer_list, &first-
> > > 
> > > xfer_list);
> > +			write_desc_llp(prev, desc->vd.tx.phys |
> > lms);
> > +		}
> > +		prev = desc;
> > +
> > +		/* update the lengths and addresses for the next
> > loop
> > cycle */
> > +		dst_len -= xfer_len;
> > +		src_len -= xfer_len;
> > +		dst_adr += xfer_len;
> > +		src_adr += xfer_len;
> > +
> > +		total_len += xfer_len;
> I would suggest to leave this on caller.
I don't think it is a good idea. Caller doesn't know value of internal
parameters like max data width and max block size, but we know these
values and can organize LLI the most optimal way.
And if the caller has already prepared suitable SG list there will be
really little overhead, so I prefer leave this code unchanged.

> At some point, if no one
> else
> do this faster than me, I would like to introduce something like
> struct
> dma_parms per DMA channel to allow caller prepare SG list suitable
> for the DMA device.
In my opinion it is driver responsibility, not the caller.

> > 
> > +
> > +static struct dma_async_tx_descriptor *
> > +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> > +			 dma_addr_t src, size_t len, unsigned long
> > flags)
> > +{
> Do you indeed have a use case for this except debugging and testing?
At least for IO coherency engine testing.

> > 
> > +	unsigned int nents = 1;
> > +	struct scatterlist dst_sg;
> > +	struct scatterlist src_sg;
> > +
> > +	sg_init_table(&dst_sg, nents);
> > +	sg_init_table(&src_sg, nents);
> > +
> > +	sg_dma_address(&dst_sg) = dest;
> > +	sg_dma_address(&src_sg) = src;
> > +
> > +	sg_dma_len(&dst_sg) = len;
> > +	sg_dma_len(&src_sg) = len;
> > +
> > +	/* Implement memcpy transfer as sg transfer with single
> > list
> > */
> > 
> > +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> > +				    &src_sg, nents, flags);
> One line?
It will be more than 80 character.

> > 
> > +}
> > 
> > +
> > +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> > +			      struct axi_dma_desc *desc)
> > +{
> > +	dev_err(dchan2dev(&chan->vc.chan),
> > +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL:
> > 0x%x:%08x",
> > +		le32_to_cpu(desc->lli.sar_lo),
> > +		le32_to_cpu(desc->lli.dar_lo),
> > +		le32_to_cpu(desc->lli.llp_lo),
> > +		le32_to_cpu(desc->lli.block_ts_lo),
> > +		le32_to_cpu(desc->lli.ctl_hi),
> > +		le32_to_cpu(desc->lli.ctl_lo));
> I hope at some point ARC (and other architectures which will use this
> IP) can implement MMIO tracer.
But we had to use such code until it happens.

> > 
> > +}
> > 
> > +		axi_chan_dump_lli(chan, desc);
> > +}
> > +
> > +
> Extra line.
I guess we don't need any extra line here:
----------->8------------------
axi_chan_dump_lli(chan, desc_head);
list_for_each_entry(desc, &desc_head->xfer_list, xfer_list)
	axi_chan_dump_lli(chan, desc);
----------->8------------------
	
> > 
> > +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32
> > status)
> > +{
> > 
> > +
> > +static int dma_chan_pause(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +	unsigned long flags;
> > +	unsigned int timeout = 20; /* timeout iterations */
> > +	int ret = -EAGAIN;
> > +	u32 val;
> > +
> > +	spin_lock_irqsave(&chan->vc.lock, flags);
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> > +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> > +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
> Redundant parens.
Will be fixed.

> > 
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +
> > +	while (timeout--) {
> > 
> > +		if (axi_chan_irq_read(chan) &
> > DWAXIDMAC_IRQ_SUSPENDED) {
> > 
> > +			axi_chan_irq_clear(chan,
> > DWAXIDMAC_IRQ_SUSPENDED);
> You may move this invariant out from the loop. Makes code cleaner.
Agreed.

> > 
> > +			ret = 0;
> > +			break;
> > +		}
> > 
> > +		udelay(2);
> 
> > 
> > +	}
> > +
> > +	chan->is_paused = true;
> This is indeed property of channel.
> That said, you may do a trick and use descriptor status for it.
> You channel and driver, I'm sure, can't serve in interleaved mode
> with
> descriptors. So, that makes channel(paused) == active
> descriptor(paused).
> 
> The trick allows to make code cleaner.

I don't understand how we can use use descriptor status for it.
I determine descriptor status basing on the is_paused value.
Look at the code:
----------->8------------------
ret = dma_cookie_status(dchan, cookie, txstate);
if (chan->is_paused && ret == DMA_IN_PROGRESS)
	return DMA_PAUSED;
----------->8------------------

> > 
> > +	ret = device_property_read_u32_array(dev, "priority",
> > carr,
> I'm not sure you will have a use case for that. Have you?
I guess we don't have use case for priority managment for mem-to-mem
transfers.
But we need priority management for slave dma (I'll implement slave dma
api soon)

> > 
> > +	ret = devm_request_irq(chip->dev, chip->irq,
> > dw_axi_dma_intretupt,
> > +			       IRQF_SHARED, DRV_NAME, chip);
> > +	if (ret)
> > +		return ret;
> You can't do this unless you are using threaded IRQ handler without
> any
> tasklets involved.
> 
> There was a nice mail by Thomas who explained what the problem you
> have
> there.
> 
> It's a bug in your code.
Yep, thanks a lot.
Looks like I have problem with driver remove function.

I guess I can solve this problem by freeing irq manually before tasklet
kill:
----------->8------------------
static int dw_remove(...)
{
	devm_free_irq(chip->dev, chip->irq, chip);
	tasklet_kill(&dw->tasklet);
	...
}
----------->8------------------

> > 
> > +static int __init dw_init(void)
> > +{
> > +	return platform_driver_register(&dw_driver);
> > +}
> > +subsys_initcall(dw_init);
> Hmm... Why it can't be just a platform driver? You describe the
> dependency in Device Tree, if you have something happened, you may
> utilize -EPROBE_DEFER.
> 
Will be fixed.


> >  * Add DT bindings
> > 
> > Eugeniy Paltsev (2):
> >   dt-bindings: Document the Synopsys DW AXI DMA bindings
> >   dmaengine: Add DW AXI DMAC driver
> > 
> >  .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   |   33 +
> >  drivers/dma/Kconfig                                |    8 +
> >  drivers/dma/Makefile                               |    1 +
> > 
> >  drivers/dma/axi_dma_platform.c                     | 1060
> > ++++++++++++++++++++
> This surprises me. I would expect more then 100+ LOC reduction when
> switched to virt-dma API. Can you double check that you are using it
> effectively?
There is a simple explanation: I switched to virt-dma API (and it cause
LOC reduction) and I implemented some TODOs (and it cause LOC
increase).

Also about previous comment, which I missed before:
> > +static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan,
> >                    dma_addr_t src, dma_addr_t dst, size_t len)
> > +{
> > +	u32 width;
> > +	dma_addr_t sdl = (src | dst | len);
> > +	u32 max_width = chan->chip->dw->hdata->m_data_width;
> 
> Use reverse tree.
> 
> dma_addr_t use above is wrong. (size_t might be bigger in some cases)

What do you mean by the phrase "Use reverse tree" ?

-- 
 Eugeniy Paltsev

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-09 13:58       ` Eugeniy Paltsev
  0 siblings, 0 replies; 29+ messages in thread
From: Eugeniy Paltsev @ 2017-02-09 13:58 UTC (permalink / raw)
  To: linux-snps-arc

Thanks for response.
My comments are given inline below.

On Wed, 2017-01-25@19:25 +0200, Andy Shevchenko wrote:
> On Wed, 2017-01-25@18:34 +0300, Eugeniy Paltsev wrote:
> >?
> > This patch adds support for the DW AXI DMAC controller.
> >?
> > DW AXI DMAC is a part of upcoming development board from Synopsys.
> >?
> > In this driver implementation only DMA_MEMCPY and DMA_SG transfers
> > are supported.
> >?
> Few more comments on top of not addressed/answered yet.
>?
> >?
> > +static inline void axi_chan_disable(struct axi_dma_chan *chan)
> > +{
> > +	u32 val;
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> > +	val &= ~(BIT(chan->id) << DMAC_CHAN_EN_SHIFT);
> > +	val |=??(BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +}
> > +
> > +static inline void axi_chan_enable(struct axi_dma_chan *chan)
> > +{
> > +	u32 val;
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> >?
> > +	val |= (BIT(chan->id) << DMAC_CHAN_EN_SHIFT |
> > +		BIT(chan->id) << DMAC_CHAN_EN_WE_SHIFT);
> >?
> Redundant parens.
Will be fixed.

> >?
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +}
>?
> >?
> > +static void axi_desc_put(struct axi_dma_desc *desc)
> > +{
> > +	struct axi_dma_chan *chan = desc->chan;
> > +	struct dw_axi_dma *dw = chan->chip->dw;
> > +	struct axi_dma_desc *child, *_next;
> > +	unsigned int descs_put = 0;
> > +
> >?
> > +	if (unlikely(!desc))
> > +		return;
> Would it be the case?
Yes, I checked the code - this NULL check is unnecessary, so I'll
remove it.

Also about your previous question about likely/unlikely:
I checked disassembly - gcc generates different code when I use likely
and unlikely. So, I guess they are useful.?

> >?
> > +static void dma_chan_issue_pending(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +	unsigned long flags;
> > +
> >?
> > +	dev_dbg(dchan2dev(dchan), "%s: %s\n", __func__,
> > axi_chan_name(chan));
> Messages like this kinda redundant.
> Either you use function tracer and see them anyway, or you are using
> Dynamic Debug, which may include function name.
>?
> Basically you wrote an equivalent to something like
>?
> dev_dbg(dev, "%s\n", channame);
Agreed. I'll remove dev_dbg from here because I also have it in?
axi_chan_start_first_queued.

> >?
> > +
> > +	spin_lock_irqsave(&chan->vc.lock, flags);
> > +	if (vchan_issue_pending(&chan->vc))
> > +		axi_chan_start_first_queued(chan);
> > +	spin_unlock_irqrestore(&chan->vc.lock, flags);
> ...and taking into account the function itself one might expect
> something useful printed in _start_first_queued().
>?
> For some cases there is also dev_vdbg().
>?
> >?
> > +}
> > +
> > +static int dma_chan_alloc_chan_resources(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +
> > +	/* ASSERT: channel is idle */
> > +	if (axi_chan_is_hw_enable(chan)) {
> > +		dev_err(chan2dev(chan), "%s is non-idle!\n",
> > +			axi_chan_name(chan));
> > +		return -EBUSY;
> > +	}
> > +
> >?
> > +	dev_dbg(dchan2dev(dchan), "%s: allocating\n",
> > axi_chan_name(chan));
> Can you try to enable DMADEVICES_DEBUG and VDEBUG and see what is
> useful
> and what is not?
>?
> Give a chance to function tracer as well.
Yep, this dev_dbg is redundant, so I'll remove it.

> >?
> > +static void dma_chan_free_chan_resources(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +
> >?
> >?
> > +	/* ASSERT: channel is idle */
> > +	if (axi_chan_is_hw_enable(chan))
> > +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> > +			axi_chan_name(chan));
> Yeah, as I said, dw_dmac is not a good example. And this one can be
> managed by runtime PM I suppose.
See comment below.

> >> > +static inline bool axi_dma_is_sw_enable(struct axi_dma_chip
*chip)
> >> > +{
> >> > +???struct dw_axi_dma *dw = chip->dw;
> >> > +???u32 i;
> >> > +
> >> > +???for (i = 0; i < dw->hdata->nr_channels; i++) {
> >> > +???????????if (dw->chan[i].in_use)
> >> Hmm... I know why we have such flag in dw_dmac, but I doubt it's
> >> needed
> >> in this driver. Just double check the need of it.
> > I use this flag to check state of channel (used now/unused) to
disable
> > dmac if all channels are unused for now.
>?
> Okay, but wouldn't be easier to use runtime PM for that? You will not
> need any special counter and runtime PM will take case of
> enabling/disabling the device.

Now in_use variable has several purposes - it is also used to mask
interrupts from channels, which are not used - that is the reason I
prefer leave it untouched.
Also all existing SoCs with this DMAC don't support power management -
so there is no really profit from implementing PM.

> >?
> > +
> > +/* TODO: REVISIT: how we should choose AXI master for mem-to-mem
> > transfer? */
> Read datasheet for a SoC/platform? For dw_dmac is chosen with
> accordance
> to hardware topology.
Yep, for existing SoCs there is no difference - both masters have
access to memory. But it isn't necessarily true for future SoCs.
But possibly I shouldn't think about it now.

> >?
> > +	while (true) {
> Usually it makes readability harder and rises a red flag to the code.
I can write something like next code. Not sure is it looks better.
----------->8------------------
while ((dst_len || dst_sg && dst_nents) &&
? ? ? ?(src_len || src_sg && src_nents)) {
...
----------->8------------------

> >?
> >?		/* Manage transfer list (xfer_list) */
> > +		if (!first) {
> > +			first = desc;
> > +		} else {
> > +			list_add_tail(&desc->xfer_list, &first-
> > >?
> > > xfer_list);
> > +			write_desc_llp(prev, desc->vd.tx.phys |
> > lms);
> > +		}
> > +		prev = desc;
> > +
> > +		/* update the lengths and addresses for the next
> > loop
> > cycle */
> > +		dst_len -= xfer_len;
> > +		src_len -= xfer_len;
> > +		dst_adr += xfer_len;
> > +		src_adr += xfer_len;
> > +
> > +		total_len += xfer_len;
> I would suggest to leave this on caller.
I don't think it is a good idea. Caller doesn't know value of internal
parameters like max data width and max block size, but we know these
values and can organize LLI the most optimal way.
And if the caller has already prepared suitable SG list there will be
really little overhead, so I prefer leave this code unchanged.

> At some point, if no one
> else
> do this faster than me, I would like to introduce something like
> struct
> dma_parms per DMA channel to allow caller prepare SG list suitable
> for the DMA device.
In my opinion it is driver responsibility, not the caller.

> >?
> > +
> > +static struct dma_async_tx_descriptor *
> > +dma_chan_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest,
> > +			?dma_addr_t src, size_t len, unsigned long
> > flags)
> > +{
> Do you indeed have a use case for this except debugging and testing?
At least for IO coherency engine testing.

> >?
> > +	unsigned int nents = 1;
> > +	struct scatterlist dst_sg;
> > +	struct scatterlist src_sg;
> > +
> > +	sg_init_table(&dst_sg, nents);
> > +	sg_init_table(&src_sg, nents);
> > +
> > +	sg_dma_address(&dst_sg) = dest;
> > +	sg_dma_address(&src_sg) = src;
> > +
> > +	sg_dma_len(&dst_sg) = len;
> > +	sg_dma_len(&src_sg) = len;
> > +
> > +	/* Implement memcpy transfer as sg transfer with single
> > list
> > */
> >?
> > +	return dma_chan_prep_dma_sg(dchan, &dst_sg, nents,
> > +				????&src_sg, nents, flags);
> One line?
It will be more than 80 character.

> >?
> > +}
> >?
> > +
> > +static void axi_chan_dump_lli(struct axi_dma_chan *chan,
> > +			??????struct axi_dma_desc *desc)
> > +{
> > +	dev_err(dchan2dev(&chan->vc.chan),
> > +		"SAR: 0x%x DAR: 0x%x LLP: 0x%x BTS 0x%x CTL:
> > 0x%x:%08x",
> > +		le32_to_cpu(desc->lli.sar_lo),
> > +		le32_to_cpu(desc->lli.dar_lo),
> > +		le32_to_cpu(desc->lli.llp_lo),
> > +		le32_to_cpu(desc->lli.block_ts_lo),
> > +		le32_to_cpu(desc->lli.ctl_hi),
> > +		le32_to_cpu(desc->lli.ctl_lo));
> I hope at some point ARC (and other architectures which will use this
> IP) can implement MMIO tracer.
But we had to use such code until it happens.

> >?
> > +}
> >?
> > +		axi_chan_dump_lli(chan, desc);
> > +}
> > +
> > +
> Extra line.
I guess we don't need any extra line here:
----------->8------------------
axi_chan_dump_lli(chan, desc_head);
list_for_each_entry(desc, &desc_head->xfer_list, xfer_list)
	axi_chan_dump_lli(chan, desc);
----------->8------------------
	
> >?
> > +static void axi_chan_handle_err(struct axi_dma_chan *chan, u32
> > status)
> > +{
> >?
> > +
> > +static int dma_chan_pause(struct dma_chan *dchan)
> > +{
> > +	struct axi_dma_chan *chan = dchan_to_axi_dma_chan(dchan);
> > +	unsigned long flags;
> > +	unsigned int timeout = 20; /* timeout iterations */
> > +	int ret = -EAGAIN;
> > +	u32 val;
> > +
> > +	spin_lock_irqsave(&chan->vc.lock, flags);
> > +
> > +	val = axi_dma_ioread32(chan->chip, DMAC_CHEN);
> > +	val |= (BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT |
> > +		BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT);
> Redundant parens.
Will be fixed.

> >?
> > +	axi_dma_iowrite32(chan->chip, DMAC_CHEN, val);
> > +
> > +	while (timeout--) {
> >?
> > +		if (axi_chan_irq_read(chan) &
> > DWAXIDMAC_IRQ_SUSPENDED) {
> >?
> > +			axi_chan_irq_clear(chan,
> > DWAXIDMAC_IRQ_SUSPENDED);
> You may move this invariant out from the loop. Makes code cleaner.
Agreed.

> >?
> > +			ret = 0;
> > +			break;
> > +		}
> >?
> > +		udelay(2);
>?
> >?
> > +	}
> > +
> > +	chan->is_paused = true;
> This is indeed property of channel.
> That said, you may do a trick and use descriptor status for it.
> You channel and driver, I'm sure, can't serve in interleaved mode
> with
> descriptors. So, that makes channel(paused) == active
> descriptor(paused).
>?
> The trick allows to make code cleaner.

I don't understand how we can use use descriptor status for it.
I determine descriptor status basing on the is_paused value.
Look at the code:
----------->8------------------
ret = dma_cookie_status(dchan, cookie, txstate);
if (chan->is_paused && ret == DMA_IN_PROGRESS)
	return DMA_PAUSED;
----------->8------------------

> >?
> > +	ret = device_property_read_u32_array(dev, "priority",
> > carr,
> I'm not sure you will have a use case for that. Have you?
I guess we don't have use case for priority managment for mem-to-mem
transfers.
But we need priority management for slave dma (I'll implement slave dma
api soon)

> >?
> > +	ret = devm_request_irq(chip->dev, chip->irq,
> > dw_axi_dma_intretupt,
> > +			???????IRQF_SHARED, DRV_NAME, chip);
> > +	if (ret)
> > +		return ret;
> You can't do this unless you are using threaded IRQ handler without
> any
> tasklets involved.
>?
> There was a nice mail by Thomas who explained what the problem you
> have
> there.
>?
> It's a bug in your code.
Yep, thanks a lot.
Looks like I have problem with driver remove function.

I guess I can solve this problem by freeing irq manually before tasklet
kill:
----------->8------------------
static int dw_remove(...)
{
	devm_free_irq(chip->dev, chip->irq, chip);
	tasklet_kill(&dw->tasklet);
	...
}
----------->8------------------

> >?
> > +static int __init dw_init(void)
> > +{
> > +	return platform_driver_register(&dw_driver);
> > +}
> > +subsys_initcall(dw_init);
> Hmm... Why it can't be just a platform driver? You describe the
> dependency in Device Tree, if you have something happened, you may
> utilize -EPROBE_DEFER.
>?
Will be fixed.


> >??* Add DT bindings
> >?
> > Eugeniy Paltsev (2):
> >???dt-bindings: Document the Synopsys DW AXI DMA bindings
> >???dmaengine: Add DW AXI DMAC driver
> >?
> >??.../devicetree/bindings/dma/snps,axi-dw-dmac.txt???|???33 +
> >??drivers/dma/Kconfig????????????????????????????????|????8 +
> >??drivers/dma/Makefile???????????????????????????????|????1 +
> >?
> >??drivers/dma/axi_dma_platform.c?????????????????????| 1060
> > ++++++++++++++++++++
> This surprises me. I would expect more then 100+ LOC reduction when
> switched to virt-dma API. Can you double check that you are using it
> effectively?
There is a simple explanation: I switched to virt-dma API (and it cause
LOC reduction) and I implemented some TODOs (and it cause LOC
increase).

Also about previous comment, which I missed before:
> > +static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan,
> >????????????????????dma_addr_t src, dma_addr_t dst, size_t len)
> > +{
> > +	u32 width;
> > +	dma_addr_t sdl = (src | dst | len);
> > +	u32 max_width = chan->chip->dw->hdata->m_data_width;
>?
> Use reverse tree.
>?
> dma_addr_t use above is wrong. (size_t might be bigger in some cases)

What do you mean by the phrase "Use reverse tree" ?

--?
?Eugeniy Paltsev

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-02-09 13:58       ` Eugeniy Paltsev
@ 2017-02-09 20:52         ` Andy Shevchenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-02-09 20:52 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: dmaengine, dan.j.williams, linux-kernel, Alexey.Brodkin,
	vinod.koul, linux-snps-arc

On Thu, 2017-02-09 at 13:58 +0000, Eugeniy Paltsev wrote:
  
> > > +static void axi_desc_put(struct axi_dma_desc *desc)
> > > +{
> > > +	struct axi_dma_chan *chan = desc->chan;
> > > +	struct dw_axi_dma *dw = chan->chip->dw;
> > > +	struct axi_dma_desc *child, *_next;
> > > +	unsigned int descs_put = 0;
> > > +
> > >  
> > > +	if (unlikely(!desc))
> > > +		return;
> > 
> > Would it be the case?
> 
> Yes, I checked the code - this NULL check is unnecessary, so I'll
> remove it.
> 
> Also about your previous question about likely/unlikely:
> I checked disassembly - gcc generates different code when I use likely
> and unlikely. So, I guess they are useful. 

They are, but in rare cases.

I assume you have read the following article and other material on LWN.

https://lwn.net/Articles/420019/
 
> > > +	/* ASSERT: channel is idle */
> > > +	if (axi_chan_is_hw_enable(chan))
> > > +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> > > +			axi_chan_name(chan));
> > 
> > Yeah, as I said, dw_dmac is not a good example. And this one can be
> > managed by runtime PM I suppose.
> 
> See comment below.
> 
> > > > > +static inline bool axi_dma_is_sw_enable(struct axi_dma_chip
> 
> *chip)
> > > > > +{
> > > > > +   struct dw_axi_dma *dw = chip->dw;
> > > > > +   u32 i;
> > > > > +
> > > > > +   for (i = 0; i < dw->hdata->nr_channels; i++) {
> > > > > +           if (dw->chan[i].in_use)
> > > > 
> > > > Hmm... I know why we have such flag in dw_dmac, but I doubt it's
> > > > needed
> > > > in this driver. Just double check the need of it.
> > > 
> > > I use this flag to check state of channel (used now/unused) to
> 
> disable
> > > dmac if all channels are unused for now.
> > 
> >  
> > Okay, but wouldn't be easier to use runtime PM for that? You will
> > not
> > need any special counter and runtime PM will take case of
> > enabling/disabling the device.
> 
> Now in_use variable has several purposes - it is also used to mask
> interrupts from channels, which are not used - that is the reason I
> prefer leave it untouched.

And this is indeed a good argument to move it to runtime PM callbacks!

> Also all existing SoCs with this DMAC don't support power management -
> so there is no really profit from implementing PM.

Disabling IRQ, managing clocks or alike are quite suitable tasks for
runtime PM even if there is no actual *power* management done.

If unsure, ask other developers for their opinions: from Qualcomm,
nVidia, TI, etc.

Besides Vinod I would suggest Tony Lindgren, for example.

> > > +	while (true) {
> > 
> > Usually it makes readability harder and rises a red flag to the
> > code.
> 
> I can write something like next code. Not sure is it looks better.
> ----------->8------------------
> while ((dst_len || dst_sg && dst_nents) &&
>        (src_len || src_sg && src_nents)) {

Just give a thought about it once more. It might be not easy, but the
result should be quite better than "while (true)" approach.

> > >  		/* Manage transfer list (xfer_list) */
> > > +		if (!first) {
> > > +			first = desc;
> > > +		} else {
> > > +			list_add_tail(&desc->xfer_list, &first-
> > > >  
> > > > xfer_list);
> > > 
> > > +			write_desc_llp(prev, desc->vd.tx.phys |
> > > lms);
> > > +		}
> > > +		prev = desc;
> > > +
> > > +		/* update the lengths and addresses for the next
> > > loop
> > > cycle */
> > > +		dst_len -= xfer_len;
> > > +		src_len -= xfer_len;
> > > +		dst_adr += xfer_len;
> > > +		src_adr += xfer_len;
> > > +
> > > +		total_len += xfer_len;
> > 
> > I would suggest to leave this on caller.
> 
> I don't think it is a good idea. Caller doesn't know value of internal
> parameters like max data width and max block size, but we know these
> values and can organize LLI the most optimal way.
> And if the caller has already prepared suitable SG list there will be
> really little overhead, so I prefer leave this code unchanged.

The problem with DMA is a performance in case you need to supply new
chain in fastest possible way.

So, first rationale is to deduplicate the allocation and preparation of
SG list (especially in cases when you have not much nodes in it).

Besides above, caller is the best one who knows *how* to split data in
the *best* way taking into account *all possible limitations*.

That is a second point.

If you still disagree, I would leave it to other maintainers and
experienced engineers to share their opinions.

> > At some point, if no one
> > else
> > do this faster than me, I would like to introduce something like
> > struct
> > dma_parms per DMA channel to allow caller prepare SG list suitable
> > for the DMA device.
> 
> In my opinion it is driver responsibility, not the caller.

See above.

> +	}
> > > +
> > > +	chan->is_paused = true;
> > 
> > This is indeed property of channel.
> > That said, you may do a trick and use descriptor status for it.
> > You channel and driver, I'm sure, can't serve in interleaved mode
> > with
> > descriptors. So, that makes channel(paused) == active
> > descriptor(paused).
> >  
> > The trick allows to make code cleaner.
> 
> I don't understand how we can use use descriptor status for it.
> I determine descriptor status basing on the is_paused value.
> Look at the code:
> ----------->8------------------
> ret = dma_cookie_status(dchan, cookie, txstate);
> if (chan->is_paused && ret == DMA_IN_PROGRESS)
> 	return DMA_PAUSED;
> ----------->8------------------

You may look to the drivers which have 'enum dma_status' field in their
data structures.

Some of them using it per channel, some per descriptor.

So, either way would be better than current approach in this driver.


> Yep, thanks a lot.
> Looks like I have problem with driver remove function.
> 
> I guess I can solve this problem by freeing irq manually before
> tasklet
> kill:
> ----------->8------------------
> static int dw_remove(...)
> {
> 	devm_free_irq(chip->dev, chip->irq, chip);
> 	tasklet_kill(&dw->tasklet);
> 	...
> }
> ----------->8------------------

Yes, it will fix it.

> > >   * Add DT bindings
> > >  
> > > Eugeniy Paltsev (2):
> > >    dt-bindings: Document the Synopsys DW AXI DMA bindings
> > >    dmaengine: Add DW AXI DMAC driver
> > >  
> > >   .../devicetree/bindings/dma/snps,axi-dw-dmac.txt   |   33 +
> > >   drivers/dma/Kconfig                                |    8 +
> > >   drivers/dma/Makefile                               |    1 +
> > >  
> > >   drivers/dma/axi_dma_platform.c                     | 1060
> > > ++++++++++++++++++++
> > 
> > This surprises me. I would expect more then 100+ LOC reduction when
> > switched to virt-dma API. Can you double check that you are using it
> > effectively?
> 
> There is a simple explanation: I switched to virt-dma API (and it
> cause
> LOC reduction) and I implemented some TODOs (and it cause LOC
> increase).

Ah, that explains, indeed.

> Also about previous comment, which I missed before:
> > > +static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan,
> > >                     dma_addr_t src, dma_addr_t dst, size_t len)
> > > +{
> > > +	u32 width;
> > > +	dma_addr_t sdl = (src | dst | len);
> > > +	u32 max_width = chan->chip->dw->hdata->m_data_width;
> > 
> >  
> > Use reverse tree.
> >  
> > dma_addr_t use above is wrong. (size_t might be bigger in some
> > cases)
> 
> What do you mean by the phrase "Use reverse tree" ?

Just convenient pattern: longest first.

On top of that logical groups of definitions:
a) assignments based on parameters;
b) other variables;
c) return variable last;
d) flags for spin lock -> depends.

Thus:

u32 max_width = chan->chip->dw->hdata->m_data_width;
dma_addr_t sdl = (src | dst | len);
u32 width;

But pay attention to your sdl, which is always nonzero.

-- 
Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-09 20:52         ` Andy Shevchenko
  0 siblings, 0 replies; 29+ messages in thread
From: Andy Shevchenko @ 2017-02-09 20:52 UTC (permalink / raw)
  To: linux-snps-arc

On Thu, 2017-02-09@13:58 +0000, Eugeniy Paltsev wrote:
??
> > > +static void axi_desc_put(struct axi_dma_desc *desc)
> > > +{
> > > +	struct axi_dma_chan *chan = desc->chan;
> > > +	struct dw_axi_dma *dw = chan->chip->dw;
> > > +	struct axi_dma_desc *child, *_next;
> > > +	unsigned int descs_put = 0;
> > > +
> > > ?
> > > +	if (unlikely(!desc))
> > > +		return;
> > 
> > Would it be the case?
> 
> Yes, I checked the code - this NULL check is unnecessary, so I'll
> remove it.
> 
> Also about your previous question about likely/unlikely:
> I checked disassembly - gcc generates different code when I use likely
> and unlikely. So, I guess they are useful.?

They are, but in rare cases.

I assume you have read the following article and other material on LWN.

https://lwn.net/Articles/420019/
?
> > > +	/* ASSERT: channel is idle */
> > > +	if (axi_chan_is_hw_enable(chan))
> > > +		dev_err(dchan2dev(dchan), "%s is non-idle!\n",
> > > +			axi_chan_name(chan));
> > 
> > Yeah, as I said, dw_dmac is not a good example. And this one can be
> > managed by runtime PM I suppose.
> 
> See comment below.
> 
> > > > > +static inline bool axi_dma_is_sw_enable(struct axi_dma_chip
> 
> *chip)
> > > > > +{
> > > > > +???struct dw_axi_dma *dw = chip->dw;
> > > > > +???u32 i;
> > > > > +
> > > > > +???for (i = 0; i < dw->hdata->nr_channels; i++) {
> > > > > +???????????if (dw->chan[i].in_use)
> > > > 
> > > > Hmm... I know why we have such flag in dw_dmac, but I doubt it's
> > > > needed
> > > > in this driver. Just double check the need of it.
> > > 
> > > I use this flag to check state of channel (used now/unused) to
> 
> disable
> > > dmac if all channels are unused for now.
> > 
> > ?
> > Okay, but wouldn't be easier to use runtime PM for that? You will
> > not
> > need any special counter and runtime PM will take case of
> > enabling/disabling the device.
> 
> Now in_use variable has several purposes - it is also used to mask
> interrupts from channels, which are not used - that is the reason I
> prefer leave it untouched.

And this is indeed a good argument to move it to runtime PM callbacks!

> Also all existing SoCs with this DMAC don't support power management -
> so there is no really profit from implementing PM.

Disabling IRQ, managing clocks or alike are quite suitable tasks for
runtime PM even if there is no actual *power* management done.

If unsure, ask other developers for their opinions: from Qualcomm,
nVidia, TI, etc.

Besides Vinod I would suggest Tony Lindgren, for example.

> > > +	while (true) {
> > 
> > Usually it makes readability harder and rises a red flag to the
> > code.
> 
> I can write something like next code. Not sure is it looks better.
> ----------->8------------------
> while ((dst_len || dst_sg && dst_nents) &&
> ? ? ? ?(src_len || src_sg && src_nents)) {

Just give a thought about it once more. It might be not easy, but the
result should be quite better than "while (true)" approach.

> > > ?		/* Manage transfer list (xfer_list) */
> > > +		if (!first) {
> > > +			first = desc;
> > > +		} else {
> > > +			list_add_tail(&desc->xfer_list, &first-
> > > > ?
> > > > xfer_list);
> > > 
> > > +			write_desc_llp(prev, desc->vd.tx.phys |
> > > lms);
> > > +		}
> > > +		prev = desc;
> > > +
> > > +		/* update the lengths and addresses for the next
> > > loop
> > > cycle */
> > > +		dst_len -= xfer_len;
> > > +		src_len -= xfer_len;
> > > +		dst_adr += xfer_len;
> > > +		src_adr += xfer_len;
> > > +
> > > +		total_len += xfer_len;
> > 
> > I would suggest to leave this on caller.
> 
> I don't think it is a good idea. Caller doesn't know value of internal
> parameters like max data width and max block size, but we know these
> values and can organize LLI the most optimal way.
> And if the caller has already prepared suitable SG list there will be
> really little overhead, so I prefer leave this code unchanged.

The problem with DMA is a performance in case you need to supply new
chain in fastest possible way.

So, first rationale is to deduplicate the allocation and preparation of
SG list (especially in cases when you have not much nodes in it).

Besides above, caller is the best one who knows *how* to split data in
the *best* way taking into account *all possible limitations*.

That is a second point.

If you still disagree, I would leave it to other maintainers and
experienced engineers to share their opinions.

> > At some point, if no one
> > else
> > do this faster than me, I would like to introduce something like
> > struct
> > dma_parms per DMA channel to allow caller prepare SG list suitable
> > for the DMA device.
> 
> In my opinion it is driver responsibility, not the caller.

See above.

> +	}
> > > +
> > > +	chan->is_paused = true;
> > 
> > This is indeed property of channel.
> > That said, you may do a trick and use descriptor status for it.
> > You channel and driver, I'm sure, can't serve in interleaved mode
> > with
> > descriptors. So, that makes channel(paused) == active
> > descriptor(paused).
> > ?
> > The trick allows to make code cleaner.
> 
> I don't understand how we can use use descriptor status for it.
> I determine descriptor status basing on the is_paused value.
> Look at the code:
> ----------->8------------------
> ret = dma_cookie_status(dchan, cookie, txstate);
> if (chan->is_paused && ret == DMA_IN_PROGRESS)
> 	return DMA_PAUSED;
> ----------->8------------------

You may look to the drivers which have 'enum dma_status' field in their
data structures.

Some of them using it per channel, some per descriptor.

So, either way would be better than current approach in this driver.


> Yep, thanks a lot.
> Looks like I have problem with driver remove function.
> 
> I guess I can solve this problem by freeing irq manually before
> tasklet
> kill:
> ----------->8------------------
> static int dw_remove(...)
> {
> 	devm_free_irq(chip->dev, chip->irq, chip);
> 	tasklet_kill(&dw->tasklet);
> 	...
> }
> ----------->8------------------

Yes, it will fix it.

> > > ??* Add DT bindings
> > > ?
> > > Eugeniy Paltsev (2):
> > > ???dt-bindings: Document the Synopsys DW AXI DMA bindings
> > > ???dmaengine: Add DW AXI DMAC driver
> > > ?
> > > ??.../devicetree/bindings/dma/snps,axi-dw-dmac.txt???|???33 +
> > > ??drivers/dma/Kconfig????????????????????????????????|????8 +
> > > ??drivers/dma/Makefile???????????????????????????????|????1 +
> > > ?
> > > ??drivers/dma/axi_dma_platform.c?????????????????????| 1060
> > > ++++++++++++++++++++
> > 
> > This surprises me. I would expect more then 100+ LOC reduction when
> > switched to virt-dma API. Can you double check that you are using it
> > effectively?
> 
> There is a simple explanation: I switched to virt-dma API (and it
> cause
> LOC reduction) and I implemented some TODOs (and it cause LOC
> increase).

Ah, that explains, indeed.

> Also about previous comment, which I missed before:
> > > +static u32 axi_chan_get_xfer_width(struct axi_dma_chan *chan,
> > > ????????????????????dma_addr_t src, dma_addr_t dst, size_t len)
> > > +{
> > > +	u32 width;
> > > +	dma_addr_t sdl = (src | dst | len);
> > > +	u32 max_width = chan->chip->dw->hdata->m_data_width;
> > 
> > ?
> > Use reverse tree.
> > ?
> > dma_addr_t use above is wrong. (size_t might be bigger in some
> > cases)
> 
> What do you mean by the phrase "Use reverse tree" ?

Just convenient pattern: longest first.

On top of that logical groups of definitions:
a) assignments based on parameters;
b) other variables;
c) return variable last;
d) flags for spin lock -> depends.

Thus:

u32 max_width = chan->chip->dw->hdata->m_data_width;
dma_addr_t sdl = (src | dst | len);
u32 width;

But pay attention to your sdl, which is always nonzero.

-- 
Andy Shevchenko <andriy.shevchenko at linux.intel.com>
Intel Finland Oy

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-01-25 15:34   ` Eugeniy Paltsev
  (?)
@ 2017-02-10  6:06     ` Vinod Koul
  -1 siblings, 0 replies; 29+ messages in thread
From: Vinod Koul @ 2017-02-10  6:06 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: dmaengine, linux-kernel, devicetree, linux-snps-arc,
	Dan Williams, Mark Rutland, Rob Herring, Andy Shevchenko,
	Alexey Brodkin

On Wed, Jan 25, 2017 at 06:34:17PM +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.

How different is AXI from DW Synopsys?

Is the spec publicly available?

> +config AXI_DW_DMAC
> +	tristate "Synopsys DesignWare AXI DMA support"
> +	depends on OF && !64BIT

why not 64 bit, can you also add compile test

> +#define AXI_DMA_BUSWIDTHS		  \
> +	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \

DMA_SLAVE_BUSWIDTH_UNDEFINED??

> +static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
> +{
> +	struct axi_dma_chip *chip = dev_id;
> +
> +	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
> +	axi_dma_irq_disable(chip);
> +
> +	tasklet_schedule(&chip->dw->tasklet);

This is very inefficient, we would want to submit next txn here and not in
tasklet

> +static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
> +{
> +	struct virt_dma_desc *vd;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (unlikely(axi_chan_is_hw_enable(chan))) {
> +		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
> +			axi_chan_name(chan));
> +		axi_chan_disable(chan);
> +	}
> +
> +	/* The completed descriptor currently is in the head of vc list */
> +	vd = vchan_next_desc(&chan->vc);
> +	/* Remove the completed descriptor from issued list before completing */
> +	list_del(&vd->node);
> +	vchan_cookie_complete(vd);

this should be called from irq handler


> +static int dw_remove(struct platform_device *pdev)
> +{
> +	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
> +	struct dw_axi_dma *dw = chip->dw;
> +	struct axi_dma_chan *chan, *_chan;
> +	u32 i;
> +
> +	axi_dma_irq_disable(chip);
> +	for (i = 0; i < dw->hdata->nr_channels; i++) {
> +		axi_chan_disable(&chip->dw->chan[i]);
> +		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
> +	}
> +	axi_dma_disable(chip);
> +
> +	tasklet_kill(&dw->tasklet);
> +
> +	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
> +			vc.chan.device_node) {
> +		list_del(&chan->vc.chan.device_node);
> +		tasklet_kill(&chan->vc.task);
> +	}

very nice :)

But please freeup irq as well

> +module_platform_driver(dw_driver);
> +
> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

why subsys_initcall??

> +/* Common registers offset */
> +#define DMAC_ID			0x000 // R DMAC ID
> +#define DMAC_COMPVER		0x008 // R DMAC Component Version
> +#define DMAC_CFG		0x010 // R/W DMAC Configuration
> +#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
> +#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
> +#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
> +#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
> +#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
> +#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
> +#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
> +#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
> +#define DMAC_RESET		0x058 // R DMAC Reset Register1

DMAC is a generic term, AX_DMAC and no C99 style comment, checkpatch would
have cribbed

Use BITS and GENMASK here

> +
> +/* DMA channel registers offset */
> +#define CH_SAR			0x000 // R/W Chan Source Address
> +#define CH_DAR			0x008 // R/W Chan Destination Address
> +#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
> +#define CH_CTL			0x018 // R/W Chan Control
> +#define CH_CTL_L		0x018 // R/W Chan Control 00-31
> +#define CH_CTL_H		0x01C // R/W Chan Control 32-63
> +#define CH_CFG			0x020 // R/W Chan Configuration
> +#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
> +#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
> +#define CH_LLP			0x028 // R/W Chan Linked List Pointer
> +#define CH_STATUS		0x030 // R Chan Status
> +#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
> +#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
> +#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
> +#define CH_AXI_ID		0x050 // R/W Chan AXI ID
> +#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
> +#define CH_SSTAT		0x060 // R Chan Source Status
> +#define CH_DSTAT		0x068 // R Chan Destination Status
> +#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
> +#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
> +#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
> +#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
> +#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
> +#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear

Same here

-- 
~Vinod

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-10  6:06     ` Vinod Koul
  0 siblings, 0 replies; 29+ messages in thread
From: Vinod Koul @ 2017-02-10  6:06 UTC (permalink / raw)
  To: Eugeniy Paltsev
  Cc: Mark Rutland, devicetree, Andy Shevchenko, Alexey Brodkin,
	linux-kernel, Rob Herring, dmaengine, Dan Williams,
	linux-snps-arc

On Wed, Jan 25, 2017 at 06:34:17PM +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.

How different is AXI from DW Synopsys?

Is the spec publicly available?

> +config AXI_DW_DMAC
> +	tristate "Synopsys DesignWare AXI DMA support"
> +	depends on OF && !64BIT

why not 64 bit, can you also add compile test

> +#define AXI_DMA_BUSWIDTHS		  \
> +	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \

DMA_SLAVE_BUSWIDTH_UNDEFINED??

> +static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
> +{
> +	struct axi_dma_chip *chip = dev_id;
> +
> +	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
> +	axi_dma_irq_disable(chip);
> +
> +	tasklet_schedule(&chip->dw->tasklet);

This is very inefficient, we would want to submit next txn here and not in
tasklet

> +static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
> +{
> +	struct virt_dma_desc *vd;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (unlikely(axi_chan_is_hw_enable(chan))) {
> +		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
> +			axi_chan_name(chan));
> +		axi_chan_disable(chan);
> +	}
> +
> +	/* The completed descriptor currently is in the head of vc list */
> +	vd = vchan_next_desc(&chan->vc);
> +	/* Remove the completed descriptor from issued list before completing */
> +	list_del(&vd->node);
> +	vchan_cookie_complete(vd);

this should be called from irq handler


> +static int dw_remove(struct platform_device *pdev)
> +{
> +	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
> +	struct dw_axi_dma *dw = chip->dw;
> +	struct axi_dma_chan *chan, *_chan;
> +	u32 i;
> +
> +	axi_dma_irq_disable(chip);
> +	for (i = 0; i < dw->hdata->nr_channels; i++) {
> +		axi_chan_disable(&chip->dw->chan[i]);
> +		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
> +	}
> +	axi_dma_disable(chip);
> +
> +	tasklet_kill(&dw->tasklet);
> +
> +	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
> +			vc.chan.device_node) {
> +		list_del(&chan->vc.chan.device_node);
> +		tasklet_kill(&chan->vc.task);
> +	}

very nice :)

But please freeup irq as well

> +module_platform_driver(dw_driver);
> +
> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

why subsys_initcall??

> +/* Common registers offset */
> +#define DMAC_ID			0x000 // R DMAC ID
> +#define DMAC_COMPVER		0x008 // R DMAC Component Version
> +#define DMAC_CFG		0x010 // R/W DMAC Configuration
> +#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
> +#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
> +#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
> +#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
> +#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
> +#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
> +#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
> +#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
> +#define DMAC_RESET		0x058 // R DMAC Reset Register1

DMAC is a generic term, AX_DMAC and no C99 style comment, checkpatch would
have cribbed

Use BITS and GENMASK here

> +
> +/* DMA channel registers offset */
> +#define CH_SAR			0x000 // R/W Chan Source Address
> +#define CH_DAR			0x008 // R/W Chan Destination Address
> +#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
> +#define CH_CTL			0x018 // R/W Chan Control
> +#define CH_CTL_L		0x018 // R/W Chan Control 00-31
> +#define CH_CTL_H		0x01C // R/W Chan Control 32-63
> +#define CH_CFG			0x020 // R/W Chan Configuration
> +#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
> +#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
> +#define CH_LLP			0x028 // R/W Chan Linked List Pointer
> +#define CH_STATUS		0x030 // R Chan Status
> +#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
> +#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
> +#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
> +#define CH_AXI_ID		0x050 // R/W Chan AXI ID
> +#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
> +#define CH_SSTAT		0x060 // R Chan Source Status
> +#define CH_DSTAT		0x068 // R Chan Destination Status
> +#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
> +#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
> +#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
> +#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
> +#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
> +#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear

Same here

-- 
~Vinod

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-10  6:06     ` Vinod Koul
  0 siblings, 0 replies; 29+ messages in thread
From: Vinod Koul @ 2017-02-10  6:06 UTC (permalink / raw)
  To: linux-snps-arc

On Wed, Jan 25, 2017@06:34:17PM +0300, Eugeniy Paltsev wrote:
> This patch adds support for the DW AXI DMAC controller.
> 
> DW AXI DMAC is a part of upcoming development board from Synopsys.

How different is AXI from DW Synopsys?

Is the spec publicly available?

> +config AXI_DW_DMAC
> +	tristate "Synopsys DesignWare AXI DMA support"
> +	depends on OF && !64BIT

why not 64 bit, can you also add compile test

> +#define AXI_DMA_BUSWIDTHS		  \
> +	(DMA_SLAVE_BUSWIDTH_UNDEFINED	| \

DMA_SLAVE_BUSWIDTH_UNDEFINED??

> +static irqreturn_t dw_axi_dma_intretupt(int irq, void *dev_id)
> +{
> +	struct axi_dma_chip *chip = dev_id;
> +
> +	/* Disable DMAC inerrupts. We'll enable them in the tasklet */
> +	axi_dma_irq_disable(chip);
> +
> +	tasklet_schedule(&chip->dw->tasklet);

This is very inefficient, we would want to submit next txn here and not in
tasklet

> +static void axi_chan_block_xfer_complete(struct axi_dma_chan *chan)
> +{
> +	struct virt_dma_desc *vd;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&chan->vc.lock, flags);
> +	if (unlikely(axi_chan_is_hw_enable(chan))) {
> +		dev_err(chan2dev(chan), "BUG: %s catched DWAXIDMAC_IRQ_DMA_TRF, but channel not idle!\n",
> +			axi_chan_name(chan));
> +		axi_chan_disable(chan);
> +	}
> +
> +	/* The completed descriptor currently is in the head of vc list */
> +	vd = vchan_next_desc(&chan->vc);
> +	/* Remove the completed descriptor from issued list before completing */
> +	list_del(&vd->node);
> +	vchan_cookie_complete(vd);

this should be called from irq handler


> +static int dw_remove(struct platform_device *pdev)
> +{
> +	struct axi_dma_chip *chip = platform_get_drvdata(pdev);
> +	struct dw_axi_dma *dw = chip->dw;
> +	struct axi_dma_chan *chan, *_chan;
> +	u32 i;
> +
> +	axi_dma_irq_disable(chip);
> +	for (i = 0; i < dw->hdata->nr_channels; i++) {
> +		axi_chan_disable(&chip->dw->chan[i]);
> +		axi_chan_irq_disable(&chip->dw->chan[i], DWAXIDMAC_IRQ_ALL);
> +	}
> +	axi_dma_disable(chip);
> +
> +	tasklet_kill(&dw->tasklet);
> +
> +	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
> +			vc.chan.device_node) {
> +		list_del(&chan->vc.chan.device_node);
> +		tasklet_kill(&chan->vc.task);
> +	}

very nice :)

But please freeup irq as well

> +module_platform_driver(dw_driver);
> +
> +static int __init dw_init(void)
> +{
> +	return platform_driver_register(&dw_driver);
> +}
> +subsys_initcall(dw_init);

why subsys_initcall??

> +/* Common registers offset */
> +#define DMAC_ID			0x000 // R DMAC ID
> +#define DMAC_COMPVER		0x008 // R DMAC Component Version
> +#define DMAC_CFG		0x010 // R/W DMAC Configuration
> +#define DMAC_CHEN		0x018 // R/W DMAC Channel Enable
> +#define DMAC_CHEN_L		0x018 // R/W DMAC Channel Enable 00-31
> +#define DMAC_CHEN_H		0x01C // R/W DMAC Channel Enable 32-63
> +#define DMAC_INTSTATUS		0x030 // R DMAC Interrupt Status
> +#define DMAC_COMMON_INTCLEAR	0x038 // W DMAC Interrupt Clear
> +#define DMAC_COMMON_INTSTATUS_ENA 0x040 // R DMAC Interrupt Status Enable
> +#define DMAC_COMMON_INTSIGNAL_ENA 0x048 // R/W DMAC Interrupt Signal Enable
> +#define DMAC_COMMON_INTSTATUS	0x050 // R DMAC Interrupt Status
> +#define DMAC_RESET		0x058 // R DMAC Reset Register1

DMAC is a generic term, AX_DMAC and no C99 style comment, checkpatch would
have cribbed

Use BITS and GENMASK here

> +
> +/* DMA channel registers offset */
> +#define CH_SAR			0x000 // R/W Chan Source Address
> +#define CH_DAR			0x008 // R/W Chan Destination Address
> +#define CH_BLOCK_TS		0x010 // R/W Chan Block Transfer Size
> +#define CH_CTL			0x018 // R/W Chan Control
> +#define CH_CTL_L		0x018 // R/W Chan Control 00-31
> +#define CH_CTL_H		0x01C // R/W Chan Control 32-63
> +#define CH_CFG			0x020 // R/W Chan Configuration
> +#define CH_CFG_L		0x020 // R/W Chan Configuration 00-31
> +#define CH_CFG_H		0x024 // R/W Chan Configuration 32-63
> +#define CH_LLP			0x028 // R/W Chan Linked List Pointer
> +#define CH_STATUS		0x030 // R Chan Status
> +#define CH_SWHSSRC		0x038 // R/W Chan SW Handshake Source
> +#define CH_SWHSDST		0x040 // R/W Chan SW Handshake Destination
> +#define CH_BLK_TFR_RESUMEREQ	0x048 // W Chan Block Transfer Resume Req
> +#define CH_AXI_ID		0x050 // R/W Chan AXI ID
> +#define CH_AXI_QOS		0x058 // R/W Chan AXI QOS
> +#define CH_SSTAT		0x060 // R Chan Source Status
> +#define CH_DSTAT		0x068 // R Chan Destination Status
> +#define CH_SSTATAR		0x070 // R/W Chan Source Status Fetch Addr
> +#define CH_DSTATAR		0x078 // R/W Chan Destination Status Fetch Addr
> +#define CH_INTSTATUS_ENA	0x080 // R/W Chan Interrupt Status Enable
> +#define CH_INTSTATUS		0x088 // R/W Chan Interrupt Status
> +#define CH_INTSIGNAL_ENA	0x090 // R/W Chan Interrupt Signal Enable
> +#define CH_INTCLEAR		0x098 // W Chan Interrupt Clear

Same here

-- 
~Vinod

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
  2017-02-10  6:06     ` Vinod Koul
  (?)
@ 2017-02-10  8:23       ` Alexey Brodkin
  -1 siblings, 0 replies; 29+ messages in thread
From: Alexey Brodkin @ 2017-02-10  8:23 UTC (permalink / raw)
  To: vinod.koul
  Cc: linux-kernel, robh+dt, devicetree, linux-snps-arc,
	Eugeniy Paltsev, dan.j.williams, mark.rutland, dmaengine,
	andriy.shevchenko

Hi Vinod,

On Fri, 2017-02-10 at 11:36 +0530, Vinod Koul wrote:
> On Wed, Jan 25, 2017 at 06:34:17PM +0300, Eugeniy Paltsev wrote:
> > 
> > This patch adds support for the DW AXI DMAC controller.
> > 
> > DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> How different is AXI from DW Synopsys?

These are 2 completely unrelated products from Synopsys, see:
 a) https://www.synopsys.com/dw/ipdir.php?ds=amba_ahb_dma
 b) https://www.synopsys.com/dw/ipdir.php?ds=amba_axi_dma

> Is the spec publicly available?

I'm afraid not. Synopsys customers may get DW AXI DMAC databook here:
https://www.synopsys.com/dw/doc.php/iip/DW_axi_dmac/latest/doc/DW_axi_dmac_databook.pdf

Just a side note "DW" prefix stands for DesignWare and applicable
to most if not all IP libs from Synopsys so here we're talking about
AHB DMAC vs AXI DMAC. Still my understanding is different bus type is
not the one and only difference but just the one which might simplify
selection of one or another IP block.

-Alexey

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-10  8:23       ` Alexey Brodkin
  0 siblings, 0 replies; 29+ messages in thread
From: Alexey Brodkin @ 2017-02-10  8:23 UTC (permalink / raw)
  To: vinod.koul
  Cc: mark.rutland, devicetree, andriy.shevchenko, linux-kernel,
	robh+dt, dmaengine, dan.j.williams, linux-snps-arc,
	Eugeniy Paltsev

Hi Vinod,

On Fri, 2017-02-10 at 11:36 +0530, Vinod Koul wrote:
> On Wed, Jan 25, 2017 at 06:34:17PM +0300, Eugeniy Paltsev wrote:
> > 
> > This patch adds support for the DW AXI DMAC controller.
> > 
> > DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> How different is AXI from DW Synopsys?

These are 2 completely unrelated products from Synopsys, see:
 a) https://www.synopsys.com/dw/ipdir.php?ds=amba_ahb_dma
 b) https://www.synopsys.com/dw/ipdir.php?ds=amba_axi_dma

> Is the spec publicly available?

I'm afraid not. Synopsys customers may get DW AXI DMAC databook here:
https://www.synopsys.com/dw/doc.php/iip/DW_axi_dmac/latest/doc/DW_axi_dmac_databook.pdf

Just a side note "DW" prefix stands for DesignWare and applicable
to most if not all IP libs from Synopsys so here we're talking about
AHB DMAC vs AXI DMAC. Still my understanding is different bus type is
not the one and only difference but just the one which might simplify
selection of one or another IP block.

-Alexey
_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc

^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH 2/2] dmaengine: Add DW AXI DMAC driver
@ 2017-02-10  8:23       ` Alexey Brodkin
  0 siblings, 0 replies; 29+ messages in thread
From: Alexey Brodkin @ 2017-02-10  8:23 UTC (permalink / raw)
  To: linux-snps-arc

Hi Vinod,

On Fri, 2017-02-10@11:36 +0530, Vinod Koul wrote:
> On Wed, Jan 25, 2017@06:34:17PM +0300, Eugeniy Paltsev wrote:
> > 
> > This patch adds support for the DW AXI DMAC controller.
> > 
> > DW AXI DMAC is a part of upcoming development board from Synopsys.
> 
> How different is AXI from DW Synopsys?

These are 2 completely unrelated products from Synopsys, see:
?a)?https://www.synopsys.com/dw/ipdir.php?ds=amba_ahb_dma
?b)?https://www.synopsys.com/dw/ipdir.php?ds=amba_axi_dma

> Is the spec publicly available?

I'm afraid not. Synopsys customers may get DW AXI DMAC databook here:
https://www.synopsys.com/dw/doc.php/iip/DW_axi_dmac/latest/doc/DW_axi_dmac_databook.pdf

Just a side note "DW" prefix stands for DesignWare and applicable
to most if not all IP libs from Synopsys so here we're talking about
AHB DMAC vs AXI DMAC. Still my understanding is different bus type is
not the one and only difference but just the one which might simplify
selection of one or another IP block.

-Alexey

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2017-02-10  8:25 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-25 15:34 [PATCH 0/2] dmaengine: Add DW AXI DMAC driver Eugeniy Paltsev
2017-01-25 15:34 ` Eugeniy Paltsev
2017-01-25 15:34 ` [PATCH 1/2] dt-bindings: Document the Synopsys DW AXI DMA bindings Eugeniy Paltsev
2017-01-25 15:34   ` Eugeniy Paltsev
2017-01-25 15:34   ` Eugeniy Paltsev
2017-01-30 20:08   ` Rob Herring
2017-01-30 20:08     ` Rob Herring
2017-01-30 20:08     ` Rob Herring
2017-01-25 15:34 ` [PATCH 2/2] dmaengine: Add DW AXI DMAC driver Eugeniy Paltsev
2017-01-25 15:34   ` Eugeniy Paltsev
2017-01-25 15:34   ` Eugeniy Paltsev
2017-01-25 16:49   ` kbuild test robot
2017-01-25 16:49     ` kbuild test robot
2017-01-25 16:49     ` kbuild test robot
2017-01-25 17:25   ` Andy Shevchenko
2017-01-25 17:25     ` Andy Shevchenko
2017-01-25 17:25     ` Andy Shevchenko
2017-02-09 13:58     ` Eugeniy Paltsev
2017-02-09 13:58       ` Eugeniy Paltsev
2017-02-09 20:52       ` Andy Shevchenko
2017-02-09 20:52         ` Andy Shevchenko
2017-02-10  6:06   ` Vinod Koul
2017-02-10  6:06     ` Vinod Koul
2017-02-10  6:06     ` Vinod Koul
2017-02-10  8:23     ` Alexey Brodkin
2017-02-10  8:23       ` Alexey Brodkin
2017-02-10  8:23       ` Alexey Brodkin
2017-01-25 16:41 ` [PATCH 0/2] " Andy Shevchenko
2017-01-25 16:41   ` Andy Shevchenko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.