* [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
@ 2014-07-17 19:46 ` Maxime Ripard
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-17 19:46 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul
  Cc: linux-kernel, linux-arm-kernel, linux-sunxi, kevin.z.m.zh, sunny,
	shuge, zhuzhenhua, Arnd Bergmann, andriy.shevchenko, dmaengine,
	Maxime Ripard

Hi,

This patchset adds support for the DMA controller found in the
Allwinner A31 and A23 SoCs.

This has been tested using the newly introduced SPI driver on an A31
EVK. Support for DMA-driven SPI transfers will be the subject of
another patch series.

This has been around for about five months now, and for the last three
versions has received nothing but nitpick-level review, so I feel like
it could be merged fairly quickly.

Thanks,
Maxime

Changes from v10:
  - Added in 

Changes from v9:
  - Rebased on top of 3.16-rc1
  - Fixed the whitespace error in the documentation

Changes from v8:
  - Drop the select on DMA_OF
  - Depend on COMPILE_TEST to get more build tests coverage

Changes from v7:
  - select DMA_OF, since we're only relying on DT
  - Properly kill the tasklet as suggested in
    https://lwn.net/Articles/588457/
  - Split up the dt bindings documentation into a separate patch

Changes from v6:
  - Dropped the merged patches and the clock patches that are pretty
    orthogonal to this driver

Changes from v5:
  - Rebased on top of 3.15-rc1

Changes from v4:
  - Removed the packed attribute on the LLI
  - Switched to using a NULL device pointer in clk_get on PLL6 and
    AHB1 mux to make explicit that we are getting global clocks
  - Switched from spin_lock_irqsave to spin_lock in the interrupt
    handler
  - Various nitpicks from Andy Shevchenko:
    + Switched to using %p printk formats for pointers
    + Inverted some tests to lose a level of indentation
    + Dropped ifdef DEBUG protecting calls to dev_dbg

Changes from v3:
  - A few other comments made by Andy Shevchenko were fixed:
    + Used references in %pa* printk formats
    + Used is_slave_direction in prep_slave_sg to make sure we were
      actually called for something, and to avoid making assumptions
      that we were actually called with the expected directions
    + A few other minor fixes: s/pr_err/dev_err/, etc.

Changes from v2:
  - Removed the clk_put calls in the clock protection functions
  - Split out the sunxi machines into several files
  - Moved the clock protection code into these new machine files
  - Moved the PLL6 reparenting to the DMA driver
  - Addressed various comments from Andy Shevchenko: switched to using
    devm_kcalloc, used correct printk formats for physical and DMA
    addresses, etc.

Changes from v1:
  - Removed the clk_put call in the clock protection patches
  - Minor fixes here and there as suggested by Andy Shevchenko: switched
    to dmam_pool_create, switched to dev_dbg instead of pr_debug, etc.

Maxime Ripard (2):
  Documentation: dt: Add Allwinner A31 DMA controller bindings
  dmaengine: sun6i: Add driver for the Allwinner A31 DMA controller

 .../devicetree/bindings/dma/sun6i-dma.txt          |   45 +
 drivers/dma/Kconfig                                |    8 +
 drivers/dma/Makefile                               |    1 +
 drivers/dma/sun6i-dma.c                            | 1059 ++++++++++++++++++++
 4 files changed, 1113 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/sun6i-dma.txt
 create mode 100644 drivers/dma/sun6i-dma.c

-- 
2.0.1




* [PATCH v11 1/2] Documentation: dt: Add Allwinner A31 DMA controller bindings
  2014-07-17 19:46 ` Maxime Ripard
@ 2014-07-17 19:46   ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-17 19:46 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul
  Cc: linux-kernel, linux-arm-kernel, linux-sunxi, kevin.z.m.zh, sunny,
	shuge, zhuzhenhua, Arnd Bergmann, andriy.shevchenko, dmaengine,
	Maxime Ripard

The Allwinner A31 DMA controller is rather simple to describe in the DT. Add
the bindings documentation.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
---
 .../devicetree/bindings/dma/sun6i-dma.txt          | 45 ++++++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/sun6i-dma.txt

diff --git a/Documentation/devicetree/bindings/dma/sun6i-dma.txt b/Documentation/devicetree/bindings/dma/sun6i-dma.txt
new file mode 100644
index 000000000000..3e145c1675b1
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/sun6i-dma.txt
@@ -0,0 +1,45 @@
+Allwinner A31 DMA Controller
+
+This driver follows the generic DMA bindings defined in dma.txt.
+
+Required properties:
+
+- compatible:	Must be "allwinner,sun6i-a31-dma"
+- reg:		Should contain the registers base address and length
+- interrupts:	Should contain a reference to the interrupt used by this device
+- clocks:	Should contain a reference to the parent AHB clock
+- resets:	Should contain a reference to the reset controller asserting
+		this device in reset
+- #dma-cells :	Should be 1, a single cell holding a line request number
+
+Example:
+	dma: dma-controller@01c02000 {
+		compatible = "allwinner,sun6i-a31-dma";
+		reg = <0x01c02000 0x1000>;
+		interrupts = <0 50 4>;
+		clocks = <&ahb1_gates 6>;
+		resets = <&ahb1_rst 6>;
+		#dma-cells = <1>;
+	};
+
+Clients:
+
+DMA clients connected to the A31 DMA controller must use the format
+described in the dma.txt file, using a two-cell specifier for each
+channel: a phandle plus one integer cell.
+The two cells in order are:
+
+1. A phandle pointing to the DMA controller.
+2. The port ID as specified in the datasheet
+
+Example:
+spi2: spi@01c6a000 {
+	compatible = "allwinner,sun6i-a31-spi";
+	reg = <0x01c6a000 0x1000>;
+	interrupts = <0 67 4>;
+	clocks = <&ahb1_gates 22>, <&spi2_clk>;
+	clock-names = "ahb", "mod";
+	dmas = <&dma 25>, <&dma 25>;
+	dma-names = "rx", "tx";
+	resets = <&ahb1_rst 22>;
+};
-- 
2.0.1




* [PATCH v11 2/2] dmaengine: sun6i: Add driver for the Allwinner A31 DMA controller
  2014-07-17 19:46 ` Maxime Ripard
@ 2014-07-17 19:46   ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-17 19:46 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul
  Cc: linux-kernel, linux-arm-kernel, linux-sunxi, kevin.z.m.zh, sunny,
	shuge, zhuzhenhua, Arnd Bergmann, andriy.shevchenko, dmaengine,
	Maxime Ripard

The Allwinner A31 has a 16-channel DMA controller that it shares with the
newer A23. Although it shares some similarities with the DMA controllers of
the older Allwinner SoCs, it is different enough that I don't expect the
two to be able to share a driver.

The A31 controller can perform memory-to-memory or memory-to-device
transfers on its 16 channels in parallel.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/dma/Kconfig     |    8 +
 drivers/dma/Makefile    |    1 +
 drivers/dma/sun6i-dma.c | 1059 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1068 insertions(+)
 create mode 100644 drivers/dma/sun6i-dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 1eca7b9760e6..4b439270fb11 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -375,6 +375,14 @@ config XILINX_VDMA
 	  channels, Memory Mapped to Stream (MM2S) and Stream to
 	  Memory Mapped (S2MM) for the data transfers.
 
+config DMA_SUN6I
+	tristate "Allwinner A31 SoCs DMA support"
+	depends on MACH_SUN6I || COMPILE_TEST
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the DMA engine for Allwinner A31 SoCs.
+
 config DMA_ENGINE
 	bool
 
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index c779e1eb2db2..6807c50214c6 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -47,3 +47,4 @@ obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
 obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
 obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
 obj-y += xilinx/
+obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
new file mode 100644
index 000000000000..ce8d5d1b0ff4
--- /dev/null
+++ b/drivers/dma/sun6i-dma.c
@@ -0,0 +1,1059 @@
+/*
+ * Copyright (C) 2013-2014 Allwinner Tech Co., Ltd
+ * Author: Sugar <shuge@allwinnertech.com>
+ *
+ * Copyright (C) 2014 Maxime Ripard
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "virt-dma.h"
+
+/*
+ * There are 16 physical channels that can work in parallel.
+ *
+ * However, we have 30 different endpoints for our requests.
+ *
+ * Since the channels can only handle a unidirectional transfer,
+ * we need to allocate more virtual channels so that everyone can
+ * grab one.
+ *
+ * Some devices can't work in both directions (mostly because it
+ * wouldn't make sense), so we have a bit fewer virtual channels
+ * than two channels per endpoint.
+ */
+
+#define NR_MAX_CHANNELS		16
+#define NR_MAX_REQUESTS		30
+#define NR_MAX_VCHANS		53
+
+/*
+ * Common registers
+ */
+#define DMA_IRQ_EN(x)		((x) * 0x04)
+#define DMA_IRQ_HALF			BIT(0)
+#define DMA_IRQ_PKG			BIT(1)
+#define DMA_IRQ_QUEUE			BIT(2)
+
+#define DMA_IRQ_CHAN_NR			8
+#define DMA_IRQ_CHAN_WIDTH		4
+
+
+#define DMA_IRQ_STAT(x)		((x) * 0x04 + 0x10)
+
+#define DMA_STAT		0x30
+
+/*
+ * Channels specific registers
+ */
+#define DMA_CHAN_ENABLE		0x00
+#define DMA_CHAN_ENABLE_START		BIT(0)
+#define DMA_CHAN_ENABLE_STOP		0
+
+#define DMA_CHAN_PAUSE		0x04
+#define DMA_CHAN_PAUSE_PAUSE		BIT(1)
+#define DMA_CHAN_PAUSE_RESUME		0
+
+#define DMA_CHAN_LLI_ADDR	0x08
+
+#define DMA_CHAN_CUR_CFG	0x0c
+#define DMA_CHAN_CFG_SRC_DRQ(x)		((x) & 0x1f)
+#define DMA_CHAN_CFG_SRC_IO_MODE	BIT(5)
+#define DMA_CHAN_CFG_SRC_LINEAR_MODE	(0 << 5)
+#define DMA_CHAN_CFG_SRC_BURST(x)	(((x) & 0x3) << 7)
+#define DMA_CHAN_CFG_SRC_WIDTH(x)	(((x) & 0x3) << 9)
+
+#define DMA_CHAN_CFG_DST_DRQ(x)		(DMA_CHAN_CFG_SRC_DRQ(x) << 16)
+#define DMA_CHAN_CFG_DST_IO_MODE	(DMA_CHAN_CFG_SRC_IO_MODE << 16)
+#define DMA_CHAN_CFG_DST_LINEAR_MODE	(DMA_CHAN_CFG_SRC_LINEAR_MODE << 16)
+#define DMA_CHAN_CFG_DST_BURST(x)	(DMA_CHAN_CFG_SRC_BURST(x) << 16)
+#define DMA_CHAN_CFG_DST_WIDTH(x)	(DMA_CHAN_CFG_SRC_WIDTH(x) << 16)
+
+#define DMA_CHAN_CUR_SRC	0x10
+
+#define DMA_CHAN_CUR_DST	0x14
+
+#define DMA_CHAN_CUR_CNT	0x18
+
+#define DMA_CHAN_CUR_PARA	0x1c
+
+
+/*
+ * Various hardware related defines
+ */
+#define LLI_LAST_ITEM	0xfffff800
+#define NORMAL_WAIT	8
+#define DRQ_SDRAM	1
+
+/*
+ * Hardware representation of the LLI
+ *
+ * The hardware will be fed the physical address of this structure,
+ * and read its content in order to start the transfer.
+ */
+struct sun6i_dma_lli {
+	u32			cfg;
+	u32			src;
+	u32			dst;
+	u32			len;
+	u32			para;
+	u32			p_lli_next;
+
+	/*
+	 * This field is not used by the DMA controller, but will be
+	 * used by the CPU to go through the list (mostly for dumping
+	 * or freeing it).
+	 */
+	struct sun6i_dma_lli	*v_lli_next;
+};
+
+
+struct sun6i_desc {
+	struct virt_dma_desc	vd;
+	dma_addr_t		p_lli;
+	struct sun6i_dma_lli	*v_lli;
+};
+
+struct sun6i_pchan {
+	u32			idx;
+	void __iomem		*base;
+	struct sun6i_vchan	*vchan;
+	struct sun6i_desc	*desc;
+	struct sun6i_desc	*done;
+};
+
+struct sun6i_vchan {
+	struct virt_dma_chan	vc;
+	struct list_head	node;
+	struct dma_slave_config	cfg;
+	struct sun6i_pchan	*phy;
+	u8			port;
+};
+
+struct sun6i_dma_dev {
+	struct dma_device	slave;
+	void __iomem		*base;
+	struct clk		*clk;
+	int			irq;
+	spinlock_t		lock;
+	struct reset_control	*rstc;
+	struct tasklet_struct	task;
+	atomic_t		tasklet_shutdown;
+	struct list_head	pending;
+	struct dma_pool		*pool;
+	struct sun6i_pchan	*pchans;
+	struct sun6i_vchan	*vchans;
+};
+
+static struct device *chan2dev(struct dma_chan *chan)
+{
+	return &chan->dev->device;
+}
+
+static inline struct sun6i_dma_dev *to_sun6i_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct sun6i_dma_dev, slave);
+}
+
+static inline struct sun6i_vchan *to_sun6i_vchan(struct dma_chan *chan)
+{
+	return container_of(chan, struct sun6i_vchan, vc.chan);
+}
+
+static inline struct sun6i_desc *
+to_sun6i_desc(struct dma_async_tx_descriptor *tx)
+{
+	return container_of(tx, struct sun6i_desc, vd.tx);
+}
+
+static inline void sun6i_dma_dump_com_regs(struct sun6i_dma_dev *sdev)
+{
+	dev_dbg(sdev->slave.dev, "Common register:\n"
+		"\tmask0(%04x): 0x%08x\n"
+		"\tmask1(%04x): 0x%08x\n"
+		"\tpend0(%04x): 0x%08x\n"
+		"\tpend1(%04x): 0x%08x\n"
+		"\tstats(%04x): 0x%08x\n",
+		DMA_IRQ_EN(0), readl(sdev->base + DMA_IRQ_EN(0)),
+		DMA_IRQ_EN(1), readl(sdev->base + DMA_IRQ_EN(1)),
+		DMA_IRQ_STAT(0), readl(sdev->base + DMA_IRQ_STAT(0)),
+		DMA_IRQ_STAT(1), readl(sdev->base + DMA_IRQ_STAT(1)),
+		DMA_STAT, readl(sdev->base + DMA_STAT));
+}
+
+static inline void sun6i_dma_dump_chan_regs(struct sun6i_dma_dev *sdev,
+					    struct sun6i_pchan *pchan)
+{
+	phys_addr_t reg = __virt_to_phys((unsigned long)pchan->base);
+
+	dev_dbg(sdev->slave.dev, "Chan %d reg: %pa\n"
+		"\t___en(%04x): \t0x%08x\n"
+		"\tpause(%04x): \t0x%08x\n"
+		"\tstart(%04x): \t0x%08x\n"
+		"\t__cfg(%04x): \t0x%08x\n"
+		"\t__src(%04x): \t0x%08x\n"
+		"\t__dst(%04x): \t0x%08x\n"
+		"\tcount(%04x): \t0x%08x\n"
+		"\t_para(%04x): \t0x%08x\n\n",
+		pchan->idx, &reg,
+		DMA_CHAN_ENABLE,
+		readl(pchan->base + DMA_CHAN_ENABLE),
+		DMA_CHAN_PAUSE,
+		readl(pchan->base + DMA_CHAN_PAUSE),
+		DMA_CHAN_LLI_ADDR,
+		readl(pchan->base + DMA_CHAN_LLI_ADDR),
+		DMA_CHAN_CUR_CFG,
+		readl(pchan->base + DMA_CHAN_CUR_CFG),
+		DMA_CHAN_CUR_SRC,
+		readl(pchan->base + DMA_CHAN_CUR_SRC),
+		DMA_CHAN_CUR_DST,
+		readl(pchan->base + DMA_CHAN_CUR_DST),
+		DMA_CHAN_CUR_CNT,
+		readl(pchan->base + DMA_CHAN_CUR_CNT),
+		DMA_CHAN_CUR_PARA,
+		readl(pchan->base + DMA_CHAN_CUR_PARA));
+}
+
+static inline int convert_burst(u32 maxburst, u8 *burst)
+{
+	switch (maxburst) {
+	case 1:
+		*burst = 0;
+		break;
+	case 8:
+		*burst = 2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int convert_buswidth(enum dma_slave_buswidth addr_width, u8 *width)
+{
+	switch (addr_width) {
+	case DMA_SLAVE_BUSWIDTH_1_BYTE:
+		*width = 0;
+		break;
+	case DMA_SLAVE_BUSWIDTH_2_BYTES:
+		*width = 1;
+		break;
+	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+		*width = 2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev,
+			       struct sun6i_dma_lli *next,
+			       dma_addr_t next_phy,
+			       struct sun6i_desc *txd)
+{
+	if ((!prev && !txd) || !next)
+		return NULL;
+
+	if (!prev) {
+		txd->p_lli = next_phy;
+		txd->v_lli = next;
+	} else {
+		prev->p_lli_next = next_phy;
+		prev->v_lli_next = next;
+	}
+
+	next->p_lli_next = LLI_LAST_ITEM;
+	next->v_lli_next = NULL;
+
+	return next;
+}
+
+static inline int sun6i_dma_cfg_lli(struct sun6i_dma_lli *lli,
+				    dma_addr_t src,
+				    dma_addr_t dst, u32 len,
+				    struct dma_slave_config *config)
+{
+	u8 src_width, dst_width, src_burst, dst_burst;
+	int ret;
+
+	if (!config)
+		return -EINVAL;
+
+	ret = convert_burst(config->src_maxburst, &src_burst);
+	if (ret)
+		return ret;
+
+	ret = convert_burst(config->dst_maxburst, &dst_burst);
+	if (ret)
+		return ret;
+
+	ret = convert_buswidth(config->src_addr_width, &src_width);
+	if (ret)
+		return ret;
+
+	ret = convert_buswidth(config->dst_addr_width, &dst_width);
+	if (ret)
+		return ret;
+
+	lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) |
+		DMA_CHAN_CFG_SRC_WIDTH(src_width) |
+		DMA_CHAN_CFG_DST_BURST(dst_burst) |
+		DMA_CHAN_CFG_DST_WIDTH(dst_width);
+
+	lli->src = src;
+	lli->dst = dst;
+	lli->len = len;
+	lli->para = NORMAL_WAIT;
+
+	return 0;
+}
+
+static inline void sun6i_dma_dump_lli(struct sun6i_vchan *vchan,
+				      struct sun6i_dma_lli *lli)
+{
+	phys_addr_t p_lli = __virt_to_phys((unsigned long)lli);
+
+	dev_dbg(chan2dev(&vchan->vc.chan),
+		"\n\tdesc:   p - %pa v - 0x%p\n"
+		"\t\tc - 0x%08x s - 0x%08x d - 0x%08x\n"
+		"\t\tl - 0x%08x p - 0x%08x n - 0x%08x\n",
+		&p_lli, lli,
+		lli->cfg, lli->src, lli->dst,
+		lli->len, lli->para, lli->p_lli_next);
+}
+
+static void sun6i_dma_free_desc(struct virt_dma_desc *vd)
+{
+	struct sun6i_desc *txd = to_sun6i_desc(&vd->tx);
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vd->tx.chan->device);
+	struct sun6i_dma_lli *v_lli, *v_next;
+	dma_addr_t p_lli, p_next;
+
+	if (unlikely(!txd))
+		return;
+
+	p_lli = txd->p_lli;
+	v_lli = txd->v_lli;
+
+	while (v_lli) {
+		v_next = v_lli->v_lli_next;
+		p_next = v_lli->p_lli_next;
+
+		dma_pool_free(sdev->pool, v_lli, p_lli);
+
+		v_lli = v_next;
+		p_lli = p_next;
+	}
+
+	kfree(txd);
+}
+
+static int sun6i_dma_terminate_all(struct sun6i_vchan *vchan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock(&sdev->lock);
+	list_del_init(&vchan->node);
+	spin_unlock(&sdev->lock);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	vchan_get_all_descriptors(&vchan->vc, &head);
+
+	if (pchan) {
+		writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
+		writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
+
+		vchan->phy = NULL;
+		pchan->vchan = NULL;
+		pchan->desc = NULL;
+		pchan->done = NULL;
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	vchan_dma_desc_free_list(&vchan->vc, &head);
+
+	return 0;
+}
+
+static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
+	struct virt_dma_desc *desc = vchan_next_desc(&vchan->vc);
+	struct sun6i_pchan *pchan = vchan->phy;
+	u32 irq_val, irq_reg, irq_offset;
+
+	if (!pchan)
+		return -EAGAIN;
+
+	if (!desc) {
+		pchan->desc = NULL;
+		pchan->done = NULL;
+		return -EAGAIN;
+	}
+
+	list_del(&desc->node);
+
+	pchan->desc = to_sun6i_desc(&desc->tx);
+	pchan->done = NULL;
+
+	sun6i_dma_dump_lli(vchan, pchan->desc->v_lli);
+
+	irq_reg = pchan->idx / DMA_IRQ_CHAN_NR;
+	irq_offset = pchan->idx % DMA_IRQ_CHAN_NR;
+
+	irq_val = readl(sdev->base + DMA_IRQ_EN(irq_reg));
+	irq_val |= DMA_IRQ_QUEUE << (irq_offset * DMA_IRQ_CHAN_WIDTH);
+	writel(irq_val, sdev->base + DMA_IRQ_EN(irq_reg));
+
+	writel(pchan->desc->p_lli, pchan->base + DMA_CHAN_LLI_ADDR);
+	writel(DMA_CHAN_ENABLE_START, pchan->base + DMA_CHAN_ENABLE);
+
+	sun6i_dma_dump_com_regs(sdev);
+	sun6i_dma_dump_chan_regs(sdev, pchan);
+
+	return 0;
+}
+
+static void sun6i_dma_tasklet(unsigned long data)
+{
+	struct sun6i_dma_dev *sdev = (struct sun6i_dma_dev *)data;
+	struct sun6i_vchan *vchan;
+	struct sun6i_pchan *pchan;
+	unsigned int pchan_alloc = 0;
+	unsigned int pchan_idx;
+
+	list_for_each_entry(vchan, &sdev->slave.channels, vc.chan.device_node) {
+		spin_lock_irq(&vchan->vc.lock);
+
+		pchan = vchan->phy;
+
+		if (pchan && pchan->done) {
+			if (sun6i_dma_start_desc(vchan)) {
+				/*
+				 * No current txd associated with this channel
+				 */
+				dev_dbg(sdev->slave.dev, "pchan %u: free\n",
+					pchan->idx);
+
+				/* Mark this channel free */
+				vchan->phy = NULL;
+				pchan->vchan = NULL;
+			}
+		}
+		spin_unlock_irq(&vchan->vc.lock);
+	}
+
+	spin_lock_irq(&sdev->lock);
+	for (pchan_idx = 0; pchan_idx < NR_MAX_CHANNELS; pchan_idx++) {
+		pchan = &sdev->pchans[pchan_idx];
+
+		if (pchan->vchan || list_empty(&sdev->pending))
+			continue;
+
+		vchan = list_first_entry(&sdev->pending,
+					 struct sun6i_vchan, node);
+
+		/* Remove from pending channels */
+		list_del_init(&vchan->node);
+		pchan_alloc |= BIT(pchan_idx);
+
+		/* Mark this channel allocated */
+		pchan->vchan = vchan;
+		vchan->phy = pchan;
+		dev_dbg(sdev->slave.dev, "pchan %u: alloc vchan %p\n",
+			pchan->idx, &vchan->vc);
+	}
+	spin_unlock_irq(&sdev->lock);
+
+	for (pchan_idx = 0; pchan_idx < NR_MAX_CHANNELS; pchan_idx++) {
+		if (!(pchan_alloc & BIT(pchan_idx)))
+			continue;
+
+		pchan = sdev->pchans + pchan_idx;
+		vchan = pchan->vchan;
+		if (vchan) {
+			spin_lock_irq(&vchan->vc.lock);
+			sun6i_dma_start_desc(vchan);
+			spin_unlock_irq(&vchan->vc.lock);
+		}
+	}
+}
+
+static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
+{
+	struct sun6i_dma_dev *sdev = dev_id;
+	struct sun6i_vchan *vchan;
+	struct sun6i_pchan *pchan;
+	int i, j, ret = IRQ_NONE;
+	u32 status;
+
+	for (i = 0; i < 2; i++) {
+		status = readl(sdev->base + DMA_IRQ_STAT(i));
+		if (!status)
+			continue;
+
+		dev_dbg(sdev->slave.dev, "DMA irq status %s: 0x%x\n",
+			i ? "high" : "low", status);
+
+		writel(status, sdev->base + DMA_IRQ_STAT(i));
+
+		for (j = 0; (j < DMA_IRQ_CHAN_NR) && status; j++) {
+			if (status & DMA_IRQ_QUEUE) {
+				pchan = sdev->pchans + i * DMA_IRQ_CHAN_NR + j;
+				vchan = pchan->vchan;
+
+				if (vchan) {
+					spin_lock(&vchan->vc.lock);
+					vchan_cookie_complete(&pchan->desc->vd);
+					pchan->done = pchan->desc;
+					spin_unlock(&vchan->vc.lock);
+				}
+			}
+
+			status >>= DMA_IRQ_CHAN_WIDTH;
+		}
+
+		if (!atomic_read(&sdev->tasklet_shutdown))
+			tasklet_schedule(&sdev->task);
+		ret = IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *sun6i_dma_prep_dma_memcpy(
+		struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+		size_t len, unsigned long flags)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct sun6i_dma_lli *v_lli;
+	struct sun6i_desc *txd;
+	dma_addr_t p_lli;
+	int ret;
+
+	dev_dbg(chan2dev(chan),
+		"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+		__func__, vchan->vc.chan.chan_id, &dest, &src, len, flags);
+
+	if (!len)
+		return NULL;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	v_lli = dma_pool_alloc(sdev->pool, GFP_NOWAIT, &p_lli);
+	if (!v_lli) {
+		dev_err(sdev->slave.dev, "Failed to alloc lli memory\n");
+		kfree(txd);
+		return NULL;
+	}
+
+	ret = sun6i_dma_cfg_lli(v_lli, src, dest, len, sconfig);
+	if (ret)
+		goto err_dma_free;
+
+	v_lli->cfg |= DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
+		DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
+		DMA_CHAN_CFG_DST_LINEAR_MODE |
+		DMA_CHAN_CFG_SRC_LINEAR_MODE;
+
+	sun6i_dma_lli_add(NULL, v_lli, p_lli, txd);
+
+	sun6i_dma_dump_lli(vchan, v_lli);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_dma_free:
+	dma_pool_free(sdev->pool, v_lli, p_lli);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
+		struct dma_chan *chan, struct scatterlist *sgl,
+		unsigned int sg_len, enum dma_transfer_direction dir,
+		unsigned long flags, void *context)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct sun6i_dma_lli *v_lli, *prev = NULL;
+	struct sun6i_desc *txd;
+	struct scatterlist *sg;
+	dma_addr_t p_lli;
+	int i, ret;
+
+	if (!sgl)
+		return NULL;
+
+	if (!is_slave_direction(dir)) {
+		dev_err(chan2dev(chan), "Invalid DMA direction\n");
+		return NULL;
+	}
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		v_lli = dma_pool_alloc(sdev->pool, GFP_NOWAIT, &p_lli);
+		if (!v_lli) {
+			kfree(txd);
+			return NULL;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			ret = sun6i_dma_cfg_lli(v_lli, sg_dma_address(sg),
+						sconfig->dst_addr, sg_dma_len(sg),
+						sconfig);
+			if (ret)
+				goto err_dma_free;
+
+			v_lli->cfg |= DMA_CHAN_CFG_DST_IO_MODE |
+				DMA_CHAN_CFG_SRC_LINEAR_MODE |
+				DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
+				DMA_CHAN_CFG_DST_DRQ(vchan->port);
+
+			dev_dbg(chan2dev(chan),
+				"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+				__func__, vchan->vc.chan.chan_id,
+				&sconfig->dst_addr, &sg_dma_address(sg),
+				sg_dma_len(sg), flags);
+
+		} else {
+			ret = sun6i_dma_cfg_lli(v_lli, sconfig->src_addr,
+						sg_dma_address(sg), sg_dma_len(sg),
+						sconfig);
+			if (ret)
+				goto err_dma_free;
+
+			v_lli->cfg |= DMA_CHAN_CFG_DST_LINEAR_MODE |
+				DMA_CHAN_CFG_SRC_IO_MODE |
+				DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
+				DMA_CHAN_CFG_SRC_DRQ(vchan->port);
+
+			dev_dbg(chan2dev(chan),
+				"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+				__func__, vchan->vc.chan.chan_id,
+				&sg_dma_address(sg), &sconfig->src_addr,
+				sg_dma_len(sg), flags);
+		}
+
+		prev = sun6i_dma_lli_add(prev, v_lli, p_lli, txd);
+	}
+
+	dev_dbg(chan2dev(chan), "First: %pad\n", &txd->p_lli);
+	for (prev = txd->v_lli; prev; prev = prev->v_lli_next)
+		sun6i_dma_dump_lli(vchan, prev);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_dma_free:
+	dma_pool_free(sdev->pool, v_lli, p_lli);
+	return NULL;
+}
+
+static int sun6i_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
+		       unsigned long arg)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+	int ret = 0;
+
+	switch (cmd) {
+	case DMA_RESUME:
+		dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+		spin_lock_irqsave(&vchan->vc.lock, flags);
+
+		if (pchan) {
+			writel(DMA_CHAN_PAUSE_RESUME,
+			       pchan->base + DMA_CHAN_PAUSE);
+		} else if (!list_empty(&vchan->vc.desc_issued)) {
+			spin_lock(&sdev->lock);
+			list_add_tail(&vchan->node, &sdev->pending);
+			spin_unlock(&sdev->lock);
+		}
+
+		spin_unlock_irqrestore(&vchan->vc.lock, flags);
+		break;
+
+	case DMA_PAUSE:
+		dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
+
+		if (pchan) {
+			writel(DMA_CHAN_PAUSE_PAUSE,
+			       pchan->base + DMA_CHAN_PAUSE);
+		} else {
+			spin_lock(&sdev->lock);
+			list_del_init(&vchan->node);
+			spin_unlock(&sdev->lock);
+		}
+		break;
+
+	case DMA_TERMINATE_ALL:
+		ret = sun6i_dma_terminate_all(vchan);
+		break;
+	case DMA_SLAVE_CONFIG:
+		memcpy(&vchan->cfg, (void *)arg, sizeof(struct dma_slave_config));
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+	return ret;
+}
+
+static enum dma_status sun6i_dma_tx_status(struct dma_chan *chan,
+					   dma_cookie_t cookie,
+					   struct dma_tx_state *state)
+{
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	struct sun6i_dma_lli *lli;
+	struct virt_dma_desc *vd;
+	struct sun6i_desc *txd;
+	enum dma_status ret;
+	unsigned long flags;
+	size_t bytes = 0;
+
+	ret = dma_cookie_status(chan, cookie, state);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	vd = vchan_find_desc(&vchan->vc, cookie);
+
+	if (vd) {
+		txd = to_sun6i_desc(&vd->tx);
+		for (lli = txd->v_lli; lli != NULL; lli = lli->v_lli_next)
+			bytes += lli->len;
+	} else if (!pchan || !pchan->desc) {
+		bytes = 0;
+	} else {
+		bytes = readl(pchan->base + DMA_CHAN_CUR_CNT);
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	dma_set_residue(state, bytes);
+
+	return ret;
+}
+
+static void sun6i_dma_issue_pending(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	if (vchan_issue_pending(&vchan->vc)) {
+		spin_lock(&sdev->lock);
+
+		if (!vchan->phy && list_empty(&vchan->node)) {
+			list_add_tail(&vchan->node, &sdev->pending);
+			tasklet_schedule(&sdev->task);
+			dev_dbg(chan2dev(chan), "vchan %p: issued\n",
+				&vchan->vc);
+		}
+
+		spin_unlock(&sdev->lock);
+	} else {
+		dev_dbg(chan2dev(chan), "vchan %p: nothing to issue\n",
+			&vchan->vc);
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+}
+
+static int sun6i_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	return 0;
+}
+
+static void sun6i_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&sdev->lock, flags);
+	list_del_init(&vchan->node);
+	spin_unlock_irqrestore(&sdev->lock, flags);
+
+	vchan_free_chan_resources(&vchan->vc);
+}
+
+static struct dma_chan *sun6i_dma_of_xlate(struct of_phandle_args *dma_spec,
+					   struct of_dma *ofdma)
+{
+	struct sun6i_dma_dev *sdev = ofdma->of_dma_data;
+	struct sun6i_vchan *vchan;
+	struct dma_chan *chan;
+	u8 port = dma_spec->args[0];
+
+	if (port > NR_MAX_REQUESTS)
+		return NULL;
+
+	chan = dma_get_any_slave_channel(&sdev->slave);
+	if (!chan)
+		return NULL;
+
+	vchan = to_sun6i_vchan(chan);
+	vchan->port = port;
+
+	return chan;
+}
+
+static inline void sun6i_kill_tasklet(struct sun6i_dma_dev *sdev)
+{
+	/* Disable all interrupts from DMA */
+	writel(0, sdev->base + DMA_IRQ_EN(0));
+	writel(0, sdev->base + DMA_IRQ_EN(1));
+
+	/* Prevent spurious interrupts from scheduling the tasklet */
+	atomic_inc(&sdev->tasklet_shutdown);
+
+	/* Make sure all interrupts are handled */
+	synchronize_irq(sdev->irq);
+
+	/* Actually prevent the tasklet from being scheduled */
+	tasklet_kill(&sdev->task);
+}
+
+static inline void sun6i_dma_free(struct sun6i_dma_dev *sdev)
+{
+	int i;
+
+	for (i = 0; i < NR_MAX_VCHANS; i++) {
+		struct sun6i_vchan *vchan = &sdev->vchans[i];
+
+		list_del(&vchan->vc.chan.device_node);
+		tasklet_kill(&vchan->vc.task);
+	}
+}
+
+static int sun6i_dma_probe(struct platform_device *pdev)
+{
+	struct sun6i_dma_dev *sdc;
+	struct resource *res;
+	struct clk *mux, *pll6;
+	int ret, i;
+
+	sdc = devm_kzalloc(&pdev->dev, sizeof(*sdc), GFP_KERNEL);
+	if (!sdc)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	sdc->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(sdc->base))
+		return PTR_ERR(sdc->base);
+
+	sdc->irq = platform_get_irq(pdev, 0);
+	if (sdc->irq < 0) {
+		dev_err(&pdev->dev, "Cannot claim IRQ\n");
+		return sdc->irq;
+	}
+
+	sdc->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(sdc->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(sdc->clk);
+	}
+
+	mux = clk_get(NULL, "ahb1_mux");
+	if (IS_ERR(mux)) {
+		dev_err(&pdev->dev, "Couldn't get AHB1 Mux\n");
+		return PTR_ERR(mux);
+	}
+
+	pll6 = clk_get(NULL, "pll6");
+	if (IS_ERR(pll6)) {
+		dev_err(&pdev->dev, "Couldn't get PLL6\n");
+		clk_put(mux);
+		return PTR_ERR(pll6);
+	}
+
+	ret = clk_set_parent(mux, pll6);
+	clk_put(pll6);
+	clk_put(mux);
+
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't reparent AHB1 on PLL6\n");
+		return ret;
+	}
+
+	sdc->rstc = devm_reset_control_get(&pdev->dev, NULL);
+	if (IS_ERR(sdc->rstc)) {
+		dev_err(&pdev->dev, "No reset controller specified\n");
+		return PTR_ERR(sdc->rstc);
+	}
+
+	sdc->pool = dmam_pool_create(dev_name(&pdev->dev), &pdev->dev,
+				     sizeof(struct sun6i_dma_lli), 4, 0);
+	if (!sdc->pool) {
+		dev_err(&pdev->dev, "No memory for descriptors dma pool\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, sdc);
+	INIT_LIST_HEAD(&sdc->pending);
+	spin_lock_init(&sdc->lock);
+
+	dma_cap_set(DMA_PRIVATE, sdc->slave.cap_mask);
+	dma_cap_set(DMA_MEMCPY, sdc->slave.cap_mask);
+	dma_cap_set(DMA_SLAVE, sdc->slave.cap_mask);
+
+	INIT_LIST_HEAD(&sdc->slave.channels);
+	sdc->slave.device_alloc_chan_resources	= sun6i_dma_alloc_chan_resources;
+	sdc->slave.device_free_chan_resources	= sun6i_dma_free_chan_resources;
+	sdc->slave.device_tx_status		= sun6i_dma_tx_status;
+	sdc->slave.device_issue_pending		= sun6i_dma_issue_pending;
+	sdc->slave.device_prep_slave_sg		= sun6i_dma_prep_slave_sg;
+	sdc->slave.device_prep_dma_memcpy	= sun6i_dma_prep_dma_memcpy;
+	sdc->slave.device_control		= sun6i_dma_control;
+	sdc->slave.chancnt			= NR_MAX_VCHANS;
+
+	sdc->slave.dev = &pdev->dev;
+
+	sdc->pchans = devm_kcalloc(&pdev->dev, NR_MAX_CHANNELS,
+				   sizeof(struct sun6i_pchan), GFP_KERNEL);
+	if (!sdc->pchans)
+		return -ENOMEM;
+
+	sdc->vchans = devm_kcalloc(&pdev->dev, NR_MAX_VCHANS,
+				   sizeof(struct sun6i_vchan), GFP_KERNEL);
+	if (!sdc->vchans)
+		return -ENOMEM;
+
+	tasklet_init(&sdc->task, sun6i_dma_tasklet, (unsigned long)sdc);
+
+	for (i = 0; i < NR_MAX_CHANNELS; i++) {
+		struct sun6i_pchan *pchan = &sdc->pchans[i];
+
+		pchan->idx = i;
+		pchan->base = sdc->base + 0x100 + i * 0x40;
+	}
+
+	for (i = 0; i < NR_MAX_VCHANS; i++) {
+		struct sun6i_vchan *vchan = &sdc->vchans[i];
+
+		INIT_LIST_HEAD(&vchan->node);
+		vchan->vc.desc_free = sun6i_dma_free_desc;
+		vchan_init(&vchan->vc, &sdc->slave);
+	}
+
+	ret = reset_control_deassert(sdc->rstc);
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't deassert the device from reset\n");
+		goto err_chan_free;
+	}
+
+	ret = clk_prepare_enable(sdc->clk);
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't enable the clock\n");
+		goto err_reset_assert;
+	}
+
+	ret = devm_request_irq(&pdev->dev, sdc->irq, sun6i_dma_interrupt, 0,
+			       dev_name(&pdev->dev), sdc);
+	if (ret) {
+		dev_err(&pdev->dev, "Cannot request IRQ\n");
+		goto err_clk_disable;
+	}
+
+	ret = dma_async_device_register(&sdc->slave);
+	if (ret) {
+		dev_warn(&pdev->dev, "Failed to register DMA engine device\n");
+		goto err_irq_disable;
+	}
+
+	ret = of_dma_controller_register(pdev->dev.of_node, sun6i_dma_of_xlate,
+					 sdc);
+	if (ret) {
+		dev_err(&pdev->dev, "of_dma_controller_register failed\n");
+		goto err_dma_unregister;
+	}
+
+	return 0;
+
+err_dma_unregister:
+	dma_async_device_unregister(&sdc->slave);
+err_irq_disable:
+	sun6i_kill_tasklet(sdc);
+err_clk_disable:
+	clk_disable_unprepare(sdc->clk);
+err_reset_assert:
+	reset_control_assert(sdc->rstc);
+err_chan_free:
+	sun6i_dma_free(sdc);
+	return ret;
+}
+
+static int sun6i_dma_remove(struct platform_device *pdev)
+{
+	struct sun6i_dma_dev *sdc = platform_get_drvdata(pdev);
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&sdc->slave);
+
+	sun6i_kill_tasklet(sdc);
+
+	clk_disable_unprepare(sdc->clk);
+	reset_control_assert(sdc->rstc);
+
+	sun6i_dma_free(sdc);
+
+	return 0;
+}
+
+static const struct of_device_id sun6i_dma_match[] = {
+	{ .compatible = "allwinner,sun6i-a31-dma" },
+	{ /* sentinel */ }
+};
+
+static struct platform_driver sun6i_dma_driver = {
+	.probe		= sun6i_dma_probe,
+	.remove		= sun6i_dma_remove,
+	.driver = {
+		.name		= "sun6i-dma",
+		.of_match_table	= sun6i_dma_match,
+	},
+};
+module_platform_driver(sun6i_dma_driver);
+
+MODULE_DESCRIPTION("Allwinner A31 DMA Controller Driver");
+MODULE_AUTHOR("Sugar <shuge@allwinnertech.com>");
+MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>");
+MODULE_LICENSE("GPL");
-- 
2.0.1



* [PATCH v11 2/2] dmaengine: sun6i: Add driver for the Allwinner A31 DMA controller
@ 2014-07-17 19:46   ` Maxime Ripard
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-17 19:46 UTC (permalink / raw)
  To: linux-arm-kernel

The Allwinner A31 has a 16-channel DMA controller that it shares with the
newer A23. Although it shares some similarities with the DMA controller of
the older Allwinner SoCs, it is significantly different; I don't expect it
to be possible to share a driver between the two.

The A31 controller is able to perform memory-to-memory or memory-to-device
transfers on its 16 channels in parallel.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/dma/Kconfig     |    8 +
 drivers/dma/Makefile    |    1 +
 drivers/dma/sun6i-dma.c | 1059 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1068 insertions(+)
 create mode 100644 drivers/dma/sun6i-dma.c

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 1eca7b9760e6..4b439270fb11 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -375,6 +375,14 @@ config XILINX_VDMA
 	  channels, Memory Mapped to Stream (MM2S) and Stream to
 	  Memory Mapped (S2MM) for the data transfers.
 
+config DMA_SUN6I
+	tristate "Allwinner A31 SoCs DMA support"
+	depends on MACH_SUN6I || COMPILE_TEST
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Support for the DMA engine for Allwinner A31 SoCs.
+
 config DMA_ENGINE
 	bool
 
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index c779e1eb2db2..6807c50214c6 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -47,3 +47,4 @@ obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
 obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
 obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
 obj-y += xilinx/
+obj-$(CONFIG_DMA_SUN6I) += sun6i-dma.o
diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
new file mode 100644
index 000000000000..ce8d5d1b0ff4
--- /dev/null
+++ b/drivers/dma/sun6i-dma.c
@@ -0,0 +1,1059 @@
+/*
+ * Copyright (C) 2013-2014 Allwinner Tech Co., Ltd
+ * Author: Sugar <shuge@allwinnertech.com>
+ *
+ * Copyright (C) 2014 Maxime Ripard
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dmapool.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_dma.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include "virt-dma.h"
+
+/*
+ * There are 16 physical channels that can work in parallel.
+ *
+ * However, we have 30 different endpoints for our requests.
+ *
+ * Since each channel can only handle a unidirectional transfer, we
+ * need to allocate more virtual channels so that everyone can grab
+ * one.
+ *
+ * Some devices can't work in both directions (mostly because it
+ * wouldn't make sense), so we have a bit fewer virtual channels than
+ * two channels per endpoint.
+ */
+
+#define NR_MAX_CHANNELS		16
+#define NR_MAX_REQUESTS		30
+#define NR_MAX_VCHANS		53
+
+/*
+ * Common registers
+ */
+#define DMA_IRQ_EN(x)		((x) * 0x04)
+#define DMA_IRQ_HALF			BIT(0)
+#define DMA_IRQ_PKG			BIT(1)
+#define DMA_IRQ_QUEUE			BIT(2)
+
+#define DMA_IRQ_CHAN_NR			8
+#define DMA_IRQ_CHAN_WIDTH		4
+
+
+#define DMA_IRQ_STAT(x)		((x) * 0x04 + 0x10)
+
+#define DMA_STAT		0x30
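The per-channel interrupt routing implied by these defines can be sketched in plain C. This is a standalone userspace model of the register arithmetic, not kernel code (`BIT` is redefined locally and the helper names are illustrative): each IRQ_EN/IRQ_STAT register covers `DMA_IRQ_CHAN_NR` (8) channels, with `DMA_IRQ_CHAN_WIDTH` (4) bits per channel.

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)             (1U << (n))
#define DMA_IRQ_QUEUE      BIT(2)
#define DMA_IRQ_CHAN_NR    8
#define DMA_IRQ_CHAN_WIDTH 4

/* Which IRQ_EN/IRQ_STAT register index (0 or 1) serves channel idx */
unsigned int irq_reg_for_chan(unsigned int idx)
{
	return idx / DMA_IRQ_CHAN_NR;
}

/* Bit mask of the "queue end" interrupt for channel idx in that register */
uint32_t irq_queue_mask_for_chan(unsigned int idx)
{
	unsigned int offset = idx % DMA_IRQ_CHAN_NR;

	return DMA_IRQ_QUEUE << (offset * DMA_IRQ_CHAN_WIDTH);
}
```

So channel 9, for example, lives in the second register, in the 4-bit field starting at bit 4.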
+
+/*
+ * Channel-specific registers
+ */
+#define DMA_CHAN_ENABLE		0x00
+#define DMA_CHAN_ENABLE_START		BIT(0)
+#define DMA_CHAN_ENABLE_STOP		0
+
+#define DMA_CHAN_PAUSE		0x04
+#define DMA_CHAN_PAUSE_PAUSE		BIT(1)
+#define DMA_CHAN_PAUSE_RESUME		0
+
+#define DMA_CHAN_LLI_ADDR	0x08
+
+#define DMA_CHAN_CUR_CFG	0x0c
+#define DMA_CHAN_CFG_SRC_DRQ(x)		((x) & 0x1f)
+#define DMA_CHAN_CFG_SRC_IO_MODE	BIT(5)
+#define DMA_CHAN_CFG_SRC_LINEAR_MODE	(0 << 5)
+#define DMA_CHAN_CFG_SRC_BURST(x)	(((x) & 0x3) << 7)
+#define DMA_CHAN_CFG_SRC_WIDTH(x)	(((x) & 0x3) << 9)
+
+#define DMA_CHAN_CFG_DST_DRQ(x)		(DMA_CHAN_CFG_SRC_DRQ(x) << 16)
+#define DMA_CHAN_CFG_DST_IO_MODE	(DMA_CHAN_CFG_SRC_IO_MODE << 16)
+#define DMA_CHAN_CFG_DST_LINEAR_MODE	(DMA_CHAN_CFG_SRC_LINEAR_MODE << 16)
+#define DMA_CHAN_CFG_DST_BURST(x)	(DMA_CHAN_CFG_SRC_BURST(x) << 16)
+#define DMA_CHAN_CFG_DST_WIDTH(x)	(DMA_CHAN_CFG_SRC_WIDTH(x) << 16)
+
+#define DMA_CHAN_CUR_SRC	0x10
+
+#define DMA_CHAN_CUR_DST	0x14
+
+#define DMA_CHAN_CUR_CNT	0x18
+
+#define DMA_CHAN_CUR_PARA	0x1c
+
+
+/*
+ * Various hardware related defines
+ */
+#define LLI_LAST_ITEM	0xfffff800
+#define NORMAL_WAIT	8
+#define DRQ_SDRAM	1
+
+/*
+ * Hardware representation of the LLI
+ *
+ * The hardware will be fed the physical address of this structure,
+ * and read its content in order to start the transfer.
+ */
+struct sun6i_dma_lli {
+	u32			cfg;
+	u32			src;
+	u32			dst;
+	u32			len;
+	u32			para;
+	u32			p_lli_next;
+
+	/*
+	 * This field is not used by the DMA controller, but will be
+	 * used by the CPU to go through the list (mostly for dumping
+	 * or freeing it).
+	 */
+	struct sun6i_dma_lli	*v_lli_next;
+};
+
+
+struct sun6i_desc {
+	struct virt_dma_desc	vd;
+	dma_addr_t		p_lli;
+	struct sun6i_dma_lli	*v_lli;
+};
+
+struct sun6i_pchan {
+	u32			idx;
+	void __iomem		*base;
+	struct sun6i_vchan	*vchan;
+	struct sun6i_desc	*desc;
+	struct sun6i_desc	*done;
+};
+
+struct sun6i_vchan {
+	struct virt_dma_chan	vc;
+	struct list_head	node;
+	struct dma_slave_config	cfg;
+	struct sun6i_pchan	*phy;
+	u8			port;
+};
+
+struct sun6i_dma_dev {
+	struct dma_device	slave;
+	void __iomem		*base;
+	struct clk		*clk;
+	int			irq;
+	spinlock_t		lock;
+	struct reset_control	*rstc;
+	struct tasklet_struct	task;
+	atomic_t		tasklet_shutdown;
+	struct list_head	pending;
+	struct dma_pool		*pool;
+	struct sun6i_pchan	*pchans;
+	struct sun6i_vchan	*vchans;
+};
+
+static struct device *chan2dev(struct dma_chan *chan)
+{
+	return &chan->dev->device;
+}
+
+static inline struct sun6i_dma_dev *to_sun6i_dma_dev(struct dma_device *d)
+{
+	return container_of(d, struct sun6i_dma_dev, slave);
+}
+
+static inline struct sun6i_vchan *to_sun6i_vchan(struct dma_chan *chan)
+{
+	return container_of(chan, struct sun6i_vchan, vc.chan);
+}
+
+static inline struct sun6i_desc *
+to_sun6i_desc(struct dma_async_tx_descriptor *tx)
+{
+	return container_of(tx, struct sun6i_desc, vd.tx);
+}
+
+static inline void sun6i_dma_dump_com_regs(struct sun6i_dma_dev *sdev)
+{
+	dev_dbg(sdev->slave.dev, "Common register:\n"
+		"\tmask0(%04x): 0x%08x\n"
+		"\tmask1(%04x): 0x%08x\n"
+		"\tpend0(%04x): 0x%08x\n"
+		"\tpend1(%04x): 0x%08x\n"
+		"\tstats(%04x): 0x%08x\n",
+		DMA_IRQ_EN(0), readl(sdev->base + DMA_IRQ_EN(0)),
+		DMA_IRQ_EN(1), readl(sdev->base + DMA_IRQ_EN(1)),
+		DMA_IRQ_STAT(0), readl(sdev->base + DMA_IRQ_STAT(0)),
+		DMA_IRQ_STAT(1), readl(sdev->base + DMA_IRQ_STAT(1)),
+		DMA_STAT, readl(sdev->base + DMA_STAT));
+}
+
+static inline void sun6i_dma_dump_chan_regs(struct sun6i_dma_dev *sdev,
+					    struct sun6i_pchan *pchan)
+{
+	phys_addr_t reg = __virt_to_phys((unsigned long)pchan->base);
+
+	dev_dbg(sdev->slave.dev, "Chan %d reg: %pa\n"
+		"\t___en(%04x): \t0x%08x\n"
+		"\tpause(%04x): \t0x%08x\n"
+		"\tstart(%04x): \t0x%08x\n"
+		"\t__cfg(%04x): \t0x%08x\n"
+		"\t__src(%04x): \t0x%08x\n"
+		"\t__dst(%04x): \t0x%08x\n"
+		"\tcount(%04x): \t0x%08x\n"
+		"\t_para(%04x): \t0x%08x\n\n",
+		pchan->idx, &reg,
+		DMA_CHAN_ENABLE,
+		readl(pchan->base + DMA_CHAN_ENABLE),
+		DMA_CHAN_PAUSE,
+		readl(pchan->base + DMA_CHAN_PAUSE),
+		DMA_CHAN_LLI_ADDR,
+		readl(pchan->base + DMA_CHAN_LLI_ADDR),
+		DMA_CHAN_CUR_CFG,
+		readl(pchan->base + DMA_CHAN_CUR_CFG),
+		DMA_CHAN_CUR_SRC,
+		readl(pchan->base + DMA_CHAN_CUR_SRC),
+		DMA_CHAN_CUR_DST,
+		readl(pchan->base + DMA_CHAN_CUR_DST),
+		DMA_CHAN_CUR_CNT,
+		readl(pchan->base + DMA_CHAN_CUR_CNT),
+		DMA_CHAN_CUR_PARA,
+		readl(pchan->base + DMA_CHAN_CUR_PARA));
+}
+
+static inline int convert_burst(u32 maxburst, u8 *burst)
+{
+	switch (maxburst) {
+	case 1:
+		*burst = 0;
+		break;
+	case 8:
+		*burst = 2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static inline int convert_buswidth(enum dma_slave_buswidth addr_width, u8 *width)
+{
+	switch (addr_width) {
+	case DMA_SLAVE_BUSWIDTH_1_BYTE:
+		*width = 0;
+		break;
+	case DMA_SLAVE_BUSWIDTH_2_BYTES:
+		*width = 1;
+		break;
+	case DMA_SLAVE_BUSWIDTH_4_BYTES:
+		*width = 2;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
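Taken together with the DMA_CHAN_CFG_* macros above, the two converters map dmaengine slave parameters into the channel configuration word; the DST fields are simply the SRC fields shifted up by 16 bits. A standalone sketch of the encoding (userspace C mirroring the macros rather than reusing kernel headers; `demo_` names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define CFG_SRC_BURST(x) (((x) & 0x3) << 7)
#define CFG_SRC_WIDTH(x) (((x) & 0x3) << 9)
#define CFG_DST_BURST(x) (CFG_SRC_BURST(x) << 16)
#define CFG_DST_WIDTH(x) (CFG_SRC_WIDTH(x) << 16)

/* 1-beat -> 0, 8-beat -> 2; other maxburst values are rejected */
int demo_convert_burst(uint32_t maxburst, uint8_t *burst)
{
	switch (maxburst) {
	case 1: *burst = 0; return 0;
	case 8: *burst = 2; return 0;
	default: return -1;
	}
}

/* Bus width in bytes -> log2 encoding used by the controller */
int demo_convert_buswidth(uint32_t bytes, uint8_t *width)
{
	switch (bytes) {
	case 1: *width = 0; return 0;
	case 2: *width = 1; return 0;
	case 4: *width = 2; return 0;
	default: return -1;
	}
}

/* Assemble the burst/width part of a config word, as sun6i_dma_cfg_lli() does */
uint32_t demo_cfg_word(uint8_t src_burst, uint8_t src_width,
		       uint8_t dst_burst, uint8_t dst_width)
{
	return CFG_SRC_BURST(src_burst) | CFG_SRC_WIDTH(src_width) |
	       CFG_DST_BURST(dst_burst) | CFG_DST_WIDTH(dst_width);
}
```

The symmetry means the prep callbacks only need the SRC macros plus a 16-bit shift to describe both ends of a transfer.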
+
+static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev,
+			       struct sun6i_dma_lli *next,
+			       dma_addr_t next_phy,
+			       struct sun6i_desc *txd)
+{
+	if ((!prev && !txd) || !next)
+		return NULL;
+
+	if (!prev) {
+		txd->p_lli = next_phy;
+		txd->v_lli = next;
+	} else {
+		prev->p_lli_next = next_phy;
+		prev->v_lli_next = next;
+	}
+
+	next->p_lli_next = LLI_LAST_ITEM;
+	next->v_lli_next = NULL;
+
+	return next;
+}
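The chaining helper above maintains two parallel lists: the hardware follows the physical p_lli_next pointers, while the CPU walks v_lli_next for dumping and freeing. A minimal userspace model of the same bookkeeping (simplified structs with hypothetical `demo_` names; in the driver the nodes come out of a dma_pool and the physical addresses are real DMA addresses):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define LLI_LAST_ITEM 0xfffff800u

struct demo_lli {
	uint32_t p_lli_next;         /* physical address of next node (hw view) */
	struct demo_lli *v_lli_next; /* virtual pointer to next node (cpu view) */
};

struct demo_desc {
	uint32_t p_lli;              /* physical address of the first node */
	struct demo_lli *v_lli;      /* virtual pointer to the first node */
};

/*
 * Append 'next' (living at physical address next_phy) after 'prev'; the
 * first node is recorded in the descriptor instead. Returns the new tail.
 */
struct demo_lli *demo_lli_add(struct demo_lli *prev, struct demo_lli *next,
			      uint32_t next_phy, struct demo_desc *txd)
{
	if ((!prev && !txd) || !next)
		return NULL;

	if (!prev) {
		txd->p_lli = next_phy;
		txd->v_lli = next;
	} else {
		prev->p_lli_next = next_phy;
		prev->v_lli_next = next;
	}

	/* Every new tail stays terminated until another node is appended */
	next->p_lli_next = LLI_LAST_ITEM;
	next->v_lli_next = NULL;

	return next;
}
```

Feeding the head's physical address to DMA_CHAN_LLI_ADDR is then enough for the controller to walk the whole chain until it hits LLI_LAST_ITEM.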
+
+static inline int sun6i_dma_cfg_lli(struct sun6i_dma_lli *lli,
+				    dma_addr_t src,
+				    dma_addr_t dst, u32 len,
+				    struct dma_slave_config *config)
+{
+	u8 src_width, dst_width, src_burst, dst_burst;
+	int ret;
+
+	if (!config)
+		return -EINVAL;
+
+	ret = convert_burst(config->src_maxburst, &src_burst);
+	if (ret)
+		return ret;
+
+	ret = convert_burst(config->dst_maxburst, &dst_burst);
+	if (ret)
+		return ret;
+
+	ret = convert_buswidth(config->src_addr_width, &src_width);
+	if (ret)
+		return ret;
+
+	ret = convert_buswidth(config->dst_addr_width, &dst_width);
+	if (ret)
+		return ret;
+
+	lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) |
+		DMA_CHAN_CFG_SRC_WIDTH(src_width) |
+		DMA_CHAN_CFG_DST_BURST(dst_burst) |
+		DMA_CHAN_CFG_DST_WIDTH(dst_width);
+
+	lli->src = src;
+	lli->dst = dst;
+	lli->len = len;
+	lli->para = NORMAL_WAIT;
+
+	return 0;
+}
+
+static inline void sun6i_dma_dump_lli(struct sun6i_vchan *vchan,
+				      struct sun6i_dma_lli *lli)
+{
+	phys_addr_t p_lli = __virt_to_phys((unsigned long)lli);
+
+	dev_dbg(chan2dev(&vchan->vc.chan),
+		"\n\tdesc:   p - %pa v - 0x%p\n"
+		"\t\tc - 0x%08x s - 0x%08x d - 0x%08x\n"
+		"\t\tl - 0x%08x p - 0x%08x n - 0x%08x\n",
+		&p_lli, lli,
+		lli->cfg, lli->src, lli->dst,
+		lli->len, lli->para, lli->p_lli_next);
+}
+
+static void sun6i_dma_free_desc(struct virt_dma_desc *vd)
+{
+	struct sun6i_desc *txd = to_sun6i_desc(&vd->tx);
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vd->tx.chan->device);
+	struct sun6i_dma_lli *v_lli, *v_next;
+	dma_addr_t p_lli, p_next;
+
+	if (unlikely(!txd))
+		return;
+
+	p_lli = txd->p_lli;
+	v_lli = txd->v_lli;
+
+	while (v_lli) {
+		v_next = v_lli->v_lli_next;
+		p_next = v_lli->p_lli_next;
+
+		dma_pool_free(sdev->pool, v_lli, p_lli);
+
+		v_lli = v_next;
+		p_lli = p_next;
+	}
+
+	kfree(txd);
+}
+
+static int sun6i_dma_terminate_all(struct sun6i_vchan *vchan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock(&sdev->lock);
+	list_del_init(&vchan->node);
+	spin_unlock(&sdev->lock);
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	vchan_get_all_descriptors(&vchan->vc, &head);
+
+	if (pchan) {
+		writel(DMA_CHAN_ENABLE_STOP, pchan->base + DMA_CHAN_ENABLE);
+		writel(DMA_CHAN_PAUSE_RESUME, pchan->base + DMA_CHAN_PAUSE);
+
+		vchan->phy = NULL;
+		pchan->vchan = NULL;
+		pchan->desc = NULL;
+		pchan->done = NULL;
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	vchan_dma_desc_free_list(&vchan->vc, &head);
+
+	return 0;
+}
+
+static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(vchan->vc.chan.device);
+	struct virt_dma_desc *desc = vchan_next_desc(&vchan->vc);
+	struct sun6i_pchan *pchan = vchan->phy;
+	u32 irq_val, irq_reg, irq_offset;
+
+	if (!pchan)
+		return -EAGAIN;
+
+	if (!desc) {
+		pchan->desc = NULL;
+		pchan->done = NULL;
+		return -EAGAIN;
+	}
+
+	list_del(&desc->node);
+
+	pchan->desc = to_sun6i_desc(&desc->tx);
+	pchan->done = NULL;
+
+	sun6i_dma_dump_lli(vchan, pchan->desc->v_lli);
+
+	irq_reg = pchan->idx / DMA_IRQ_CHAN_NR;
+	irq_offset = pchan->idx % DMA_IRQ_CHAN_NR;
+
+	irq_val = readl(sdev->base + DMA_IRQ_EN(irq_reg));
+	irq_val |= DMA_IRQ_QUEUE << (irq_offset * DMA_IRQ_CHAN_WIDTH);
+	writel(irq_val, sdev->base + DMA_IRQ_EN(irq_reg));
+
+	writel(pchan->desc->p_lli, pchan->base + DMA_CHAN_LLI_ADDR);
+	writel(DMA_CHAN_ENABLE_START, pchan->base + DMA_CHAN_ENABLE);
+
+	sun6i_dma_dump_com_regs(sdev);
+	sun6i_dma_dump_chan_regs(sdev, pchan);
+
+	return 0;
+}
+
+static void sun6i_dma_tasklet(unsigned long data)
+{
+	struct sun6i_dma_dev *sdev = (struct sun6i_dma_dev *)data;
+	struct sun6i_vchan *vchan;
+	struct sun6i_pchan *pchan;
+	unsigned int pchan_alloc = 0;
+	unsigned int pchan_idx;
+
+	list_for_each_entry(vchan, &sdev->slave.channels, vc.chan.device_node) {
+		spin_lock_irq(&vchan->vc.lock);
+
+		pchan = vchan->phy;
+
+		if (pchan && pchan->done) {
+			if (sun6i_dma_start_desc(vchan)) {
+				/*
+				 * No current txd associated with this channel
+				 */
+				dev_dbg(sdev->slave.dev, "pchan %u: free\n",
+					pchan->idx);
+
+				/* Mark this channel free */
+				vchan->phy = NULL;
+				pchan->vchan = NULL;
+			}
+		}
+		spin_unlock_irq(&vchan->vc.lock);
+	}
+
+	spin_lock_irq(&sdev->lock);
+	for (pchan_idx = 0; pchan_idx < NR_MAX_CHANNELS; pchan_idx++) {
+		pchan = &sdev->pchans[pchan_idx];
+
+		if (pchan->vchan || list_empty(&sdev->pending))
+			continue;
+
+		vchan = list_first_entry(&sdev->pending,
+					 struct sun6i_vchan, node);
+
+		/* Remove from pending channels */
+		list_del_init(&vchan->node);
+		pchan_alloc |= BIT(pchan_idx);
+
+		/* Mark this channel allocated */
+		pchan->vchan = vchan;
+		vchan->phy = pchan;
+		dev_dbg(sdev->slave.dev, "pchan %u: alloc vchan %p\n",
+			pchan->idx, &vchan->vc);
+	}
+	spin_unlock_irq(&sdev->lock);
+
+	for (pchan_idx = 0; pchan_idx < NR_MAX_CHANNELS; pchan_idx++) {
+		if (!(pchan_alloc & BIT(pchan_idx)))
+			continue;
+
+		pchan = sdev->pchans + pchan_idx;
+		vchan = pchan->vchan;
+		if (vchan) {
+			spin_lock_irq(&vchan->vc.lock);
+			sun6i_dma_start_desc(vchan);
+			spin_unlock_irq(&vchan->vc.lock);
+		}
+	}
+}
+
+static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
+{
+	struct sun6i_dma_dev *sdev = dev_id;
+	struct sun6i_vchan *vchan;
+	struct sun6i_pchan *pchan;
+	int i, j, ret = IRQ_NONE;
+	u32 status;
+
+	for (i = 0; i < 2; i++) {
+		status = readl(sdev->base + DMA_IRQ_STAT(i));
+		if (!status)
+			continue;
+
+		dev_dbg(sdev->slave.dev, "DMA irq status %s: 0x%x\n",
+			i ? "high" : "low", status);
+
+		writel(status, sdev->base + DMA_IRQ_STAT(i));
+
+		for (j = 0; (j < DMA_IRQ_CHAN_NR) && status; j++) {
+			if (status & DMA_IRQ_QUEUE) {
+				pchan = sdev->pchans + j;
+				vchan = pchan->vchan;
+
+				if (vchan) {
+					spin_lock(&vchan->vc.lock);
+					vchan_cookie_complete(&pchan->desc->vd);
+					pchan->done = pchan->desc;
+					spin_unlock(&vchan->vc.lock);
+				}
+			}
+
+			status = status >> DMA_IRQ_CHAN_WIDTH;
+		}
+
+		if (!atomic_read(&sdev->tasklet_shutdown))
+			tasklet_schedule(&sdev->task);
+		ret = IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
+static struct dma_async_tx_descriptor *sun6i_dma_prep_dma_memcpy(
+		struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+		size_t len, unsigned long flags)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct sun6i_dma_lli *v_lli;
+	struct sun6i_desc *txd;
+	dma_addr_t p_lli;
+	int ret;
+
+	dev_dbg(chan2dev(chan),
+		"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+		__func__, vchan->vc.chan.chan_id, &dest, &src, len, flags);
+
+	if (!len)
+		return NULL;
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	v_lli = dma_pool_alloc(sdev->pool, GFP_NOWAIT, &p_lli);
+	if (!v_lli) {
+		dev_err(sdev->slave.dev, "Failed to alloc lli memory\n");
+		kfree(txd);
+		return NULL;
+	}
+
+	ret = sun6i_dma_cfg_lli(v_lli, src, dest, len, sconfig);
+	if (ret)
+		goto err_dma_free;
+
+	v_lli->cfg |= DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
+		DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
+		DMA_CHAN_CFG_DST_LINEAR_MODE |
+		DMA_CHAN_CFG_SRC_LINEAR_MODE;
+
+	sun6i_dma_lli_add(NULL, v_lli, p_lli, txd);
+
+	sun6i_dma_dump_lli(vchan, v_lli);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_dma_free:
+	dma_pool_free(sdev->pool, v_lli, p_lli);
+	return NULL;
+}
+
+static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
+		struct dma_chan *chan, struct scatterlist *sgl,
+		unsigned int sg_len, enum dma_transfer_direction dir,
+		unsigned long flags, void *context)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct dma_slave_config *sconfig = &vchan->cfg;
+	struct sun6i_dma_lli *v_lli, *prev = NULL;
+	struct sun6i_desc *txd;
+	struct scatterlist *sg;
+	dma_addr_t p_lli;
+	int i, ret;
+
+	if (!sgl)
+		return NULL;
+
+	if (!is_slave_direction(dir)) {
+		dev_err(chan2dev(chan), "Invalid DMA direction\n");
+		return NULL;
+	}
+
+	txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+	if (!txd)
+		return NULL;
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		v_lli = dma_pool_alloc(sdev->pool, GFP_NOWAIT, &p_lli);
+		if (!v_lli) {
+			kfree(txd);
+			return NULL;
+		}
+
+		if (dir == DMA_MEM_TO_DEV) {
+			ret = sun6i_dma_cfg_lli(v_lli, sg_dma_address(sg),
+						sconfig->dst_addr, sg_dma_len(sg),
+						sconfig);
+			if (ret)
+				goto err_dma_free;
+
+			v_lli->cfg |= DMA_CHAN_CFG_DST_IO_MODE |
+				DMA_CHAN_CFG_SRC_LINEAR_MODE |
+				DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
+				DMA_CHAN_CFG_DST_DRQ(vchan->port);
+
+			dev_dbg(chan2dev(chan),
+				"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+				__func__, vchan->vc.chan.chan_id,
+				&sconfig->dst_addr, &sg_dma_address(sg),
+				sg_dma_len(sg), flags);
+
+		} else {
+			ret = sun6i_dma_cfg_lli(v_lli, sconfig->src_addr,
+						sg_dma_address(sg), sg_dma_len(sg),
+						sconfig);
+			if (ret)
+				goto err_dma_free;
+
+			v_lli->cfg |= DMA_CHAN_CFG_DST_LINEAR_MODE |
+				DMA_CHAN_CFG_SRC_IO_MODE |
+				DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
+				DMA_CHAN_CFG_SRC_DRQ(vchan->port);
+
+			dev_dbg(chan2dev(chan),
+				"%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n",
+				__func__, vchan->vc.chan.chan_id,
+				&sg_dma_address(sg), &sconfig->src_addr,
+				sg_dma_len(sg), flags);
+		}
+
+		prev = sun6i_dma_lli_add(prev, v_lli, p_lli, txd);
+	}
+
+	dev_dbg(chan2dev(chan), "First: %pad\n", &txd->p_lli);
+	for (prev = txd->v_lli; prev; prev = prev->v_lli_next)
+		sun6i_dma_dump_lli(vchan, prev);
+
+	return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
+err_dma_free:
+	dma_pool_free(sdev->pool, v_lli, p_lli);
+	return NULL;
+}
+
+static int sun6i_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
+		       unsigned long arg)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	unsigned long flags;
+	int ret = 0;
+
+	switch (cmd) {
+	case DMA_RESUME:
+		dev_dbg(chan2dev(chan), "vchan %p: resume\n", &vchan->vc);
+
+		spin_lock_irqsave(&vchan->vc.lock, flags);
+
+		if (pchan) {
+			writel(DMA_CHAN_PAUSE_RESUME,
+			       pchan->base + DMA_CHAN_PAUSE);
+		} else if (!list_empty(&vchan->vc.desc_issued)) {
+			spin_lock(&sdev->lock);
+			list_add_tail(&vchan->node, &sdev->pending);
+			spin_unlock(&sdev->lock);
+		}
+
+		spin_unlock_irqrestore(&vchan->vc.lock, flags);
+		break;
+
+	case DMA_PAUSE:
+		dev_dbg(chan2dev(chan), "vchan %p: pause\n", &vchan->vc);
+
+		if (pchan) {
+			writel(DMA_CHAN_PAUSE_PAUSE,
+			       pchan->base + DMA_CHAN_PAUSE);
+		} else {
+			spin_lock(&sdev->lock);
+			list_del_init(&vchan->node);
+			spin_unlock(&sdev->lock);
+		}
+		break;
+
+	case DMA_TERMINATE_ALL:
+		ret = sun6i_dma_terminate_all(vchan);
+		break;
+	case DMA_SLAVE_CONFIG:
+		memcpy(&vchan->cfg, (void *)arg, sizeof(struct dma_slave_config));
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+	return ret;
+}
+
+static enum dma_status sun6i_dma_tx_status(struct dma_chan *chan,
+					   dma_cookie_t cookie,
+					   struct dma_tx_state *state)
+{
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	struct sun6i_pchan *pchan = vchan->phy;
+	struct sun6i_dma_lli *lli;
+	struct virt_dma_desc *vd;
+	struct sun6i_desc *txd;
+	enum dma_status ret;
+	unsigned long flags;
+	size_t bytes = 0;
+
+	ret = dma_cookie_status(chan, cookie, state);
+	if (ret == DMA_COMPLETE)
+		return ret;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	vd = vchan_find_desc(&vchan->vc, cookie);
+
+	if (vd) {
+		txd = to_sun6i_desc(&vd->tx);
+		for (lli = txd->v_lli; lli != NULL; lli = lli->v_lli_next)
+			bytes += lli->len;
+	} else if (!pchan || !pchan->desc) {
+		bytes = 0;
+	} else {
+		bytes = readl(pchan->base + DMA_CHAN_CUR_CNT);
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+
+	dma_set_residue(state, bytes);
+
+	return ret;
+}
+
+static void sun6i_dma_issue_pending(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&vchan->vc.lock, flags);
+
+	if (vchan_issue_pending(&vchan->vc)) {
+		spin_lock(&sdev->lock);
+
+		if (!vchan->phy && list_empty(&vchan->node)) {
+			list_add_tail(&vchan->node, &sdev->pending);
+			tasklet_schedule(&sdev->task);
+			dev_dbg(chan2dev(chan), "vchan %p: issued\n",
+				&vchan->vc);
+		}
+
+		spin_unlock(&sdev->lock);
+	} else {
+		dev_dbg(chan2dev(chan), "vchan %p: nothing to issue\n",
+			&vchan->vc);
+	}
+
+	spin_unlock_irqrestore(&vchan->vc.lock, flags);
+}
+
+static int sun6i_dma_alloc_chan_resources(struct dma_chan *chan)
+{
+	return 0;
+}
+
+static void sun6i_dma_free_chan_resources(struct dma_chan *chan)
+{
+	struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+	struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&sdev->lock, flags);
+	list_del_init(&vchan->node);
+	spin_unlock_irqrestore(&sdev->lock, flags);
+
+	vchan_free_chan_resources(&vchan->vc);
+}
+
+static struct dma_chan *sun6i_dma_of_xlate(struct of_phandle_args *dma_spec,
+					   struct of_dma *ofdma)
+{
+	struct sun6i_dma_dev *sdev = ofdma->of_dma_data;
+	struct sun6i_vchan *vchan;
+	struct dma_chan *chan;
+	u8 port = dma_spec->args[0];
+
+	if (port > NR_MAX_REQUESTS)
+		return NULL;
+
+	chan = dma_get_any_slave_channel(&sdev->slave);
+	if (!chan)
+		return NULL;
+
+	vchan = to_sun6i_vchan(chan);
+	vchan->port = port;
+
+	return chan;
+}
+
+static inline void sun6i_kill_tasklet(struct sun6i_dma_dev *sdev)
+{
+	/* Disable all interrupts from DMA */
+	writel(0, sdev->base + DMA_IRQ_EN(0));
+	writel(0, sdev->base + DMA_IRQ_EN(1));
+
+	/* Prevent spurious interrupts from scheduling the tasklet */
+	atomic_inc(&sdev->tasklet_shutdown);
+
+	/* Make sure all interrupts are handled */
+	synchronize_irq(sdev->irq);
+
+	/* Actually prevent the tasklet from being scheduled */
+	tasklet_kill(&sdev->task);
+}
+
+static inline void sun6i_dma_free(struct sun6i_dma_dev *sdev)
+{
+	int i;
+
+	for (i = 0; i < NR_MAX_VCHANS; i++) {
+		struct sun6i_vchan *vchan = &sdev->vchans[i];
+
+		list_del(&vchan->vc.chan.device_node);
+		tasklet_kill(&vchan->vc.task);
+	}
+}
+
+static int sun6i_dma_probe(struct platform_device *pdev)
+{
+	struct sun6i_dma_dev *sdc;
+	struct resource *res;
+	struct clk *mux, *pll6;
+	int ret, i;
+
+	sdc = devm_kzalloc(&pdev->dev, sizeof(*sdc), GFP_KERNEL);
+	if (!sdc)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	sdc->base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(sdc->base))
+		return PTR_ERR(sdc->base);
+
+	sdc->irq = platform_get_irq(pdev, 0);
+	if (sdc->irq < 0) {
+		dev_err(&pdev->dev, "Cannot claim IRQ\n");
+		return sdc->irq;
+	}
+
+	sdc->clk = devm_clk_get(&pdev->dev, NULL);
+	if (IS_ERR(sdc->clk)) {
+		dev_err(&pdev->dev, "No clock specified\n");
+		return PTR_ERR(sdc->clk);
+	}
+
+	mux = clk_get(NULL, "ahb1_mux");
+	if (IS_ERR(mux)) {
+		dev_err(&pdev->dev, "Couldn't get AHB1 Mux\n");
+		return PTR_ERR(mux);
+	}
+
+	pll6 = clk_get(NULL, "pll6");
+	if (IS_ERR(pll6)) {
+		dev_err(&pdev->dev, "Couldn't get PLL6\n");
+		clk_put(mux);
+		return PTR_ERR(pll6);
+	}
+
+	ret = clk_set_parent(mux, pll6);
+	clk_put(pll6);
+	clk_put(mux);
+
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't reparent AHB1 on PLL6\n");
+		return ret;
+	}
+
+	sdc->rstc = devm_reset_control_get(&pdev->dev, NULL);
+	if (IS_ERR(sdc->rstc)) {
+		dev_err(&pdev->dev, "No reset controller specified\n");
+		return PTR_ERR(sdc->rstc);
+	}
+
+	sdc->pool = dmam_pool_create(dev_name(&pdev->dev), &pdev->dev,
+				     sizeof(struct sun6i_dma_lli), 4, 0);
+	if (!sdc->pool) {
+		dev_err(&pdev->dev, "No memory for descriptors dma pool\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, sdc);
+	INIT_LIST_HEAD(&sdc->pending);
+	spin_lock_init(&sdc->lock);
+
+	dma_cap_set(DMA_PRIVATE, sdc->slave.cap_mask);
+	dma_cap_set(DMA_MEMCPY, sdc->slave.cap_mask);
+	dma_cap_set(DMA_SLAVE, sdc->slave.cap_mask);
+
+	INIT_LIST_HEAD(&sdc->slave.channels);
+	sdc->slave.device_alloc_chan_resources	= sun6i_dma_alloc_chan_resources;
+	sdc->slave.device_free_chan_resources	= sun6i_dma_free_chan_resources;
+	sdc->slave.device_tx_status		= sun6i_dma_tx_status;
+	sdc->slave.device_issue_pending		= sun6i_dma_issue_pending;
+	sdc->slave.device_prep_slave_sg		= sun6i_dma_prep_slave_sg;
+	sdc->slave.device_prep_dma_memcpy	= sun6i_dma_prep_dma_memcpy;
+	sdc->slave.device_control		= sun6i_dma_control;
+	sdc->slave.chancnt			= NR_MAX_VCHANS;
+
+	sdc->slave.dev = &pdev->dev;
+
+	sdc->pchans = devm_kcalloc(&pdev->dev, NR_MAX_CHANNELS,
+				   sizeof(struct sun6i_pchan), GFP_KERNEL);
+	if (!sdc->pchans)
+		return -ENOMEM;
+
+	sdc->vchans = devm_kcalloc(&pdev->dev, NR_MAX_VCHANS,
+				   sizeof(struct sun6i_vchan), GFP_KERNEL);
+	if (!sdc->vchans)
+		return -ENOMEM;
+
+	tasklet_init(&sdc->task, sun6i_dma_tasklet, (unsigned long)sdc);
+
+	for (i = 0; i < NR_MAX_CHANNELS; i++) {
+		struct sun6i_pchan *pchan = &sdc->pchans[i];
+
+		pchan->idx = i;
+		pchan->base = sdc->base + 0x100 + i * 0x40;
+	}
+
+	for (i = 0; i < NR_MAX_VCHANS; i++) {
+		struct sun6i_vchan *vchan = &sdc->vchans[i];
+
+		INIT_LIST_HEAD(&vchan->node);
+		vchan->vc.desc_free = sun6i_dma_free_desc;
+		vchan_init(&vchan->vc, &sdc->slave);
+	}
+
+	ret = reset_control_deassert(sdc->rstc);
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't deassert the device from reset\n");
+		goto err_chan_free;
+	}
+
+	ret = clk_prepare_enable(sdc->clk);
+	if (ret) {
+		dev_err(&pdev->dev, "Couldn't enable the clock\n");
+		goto err_reset_assert;
+	}
+
+	ret = devm_request_irq(&pdev->dev, sdc->irq, sun6i_dma_interrupt, 0,
+			       dev_name(&pdev->dev), sdc);
+	if (ret) {
+		dev_err(&pdev->dev, "Cannot request IRQ\n");
+		goto err_clk_disable;
+	}
+
+	ret = dma_async_device_register(&sdc->slave);
+	if (ret) {
+		dev_warn(&pdev->dev, "Failed to register DMA engine device\n");
+		goto err_irq_disable;
+	}
+
+	ret = of_dma_controller_register(pdev->dev.of_node, sun6i_dma_of_xlate,
+					 sdc);
+	if (ret) {
+		dev_err(&pdev->dev, "of_dma_controller_register failed\n");
+		goto err_dma_unregister;
+	}
+
+	return 0;
+
+err_dma_unregister:
+	dma_async_device_unregister(&sdc->slave);
+err_irq_disable:
+	sun6i_kill_tasklet(sdc);
+err_clk_disable:
+	clk_disable_unprepare(sdc->clk);
+err_reset_assert:
+	reset_control_assert(sdc->rstc);
+err_chan_free:
+	sun6i_dma_free(sdc);
+	return ret;
+}
+
+static int sun6i_dma_remove(struct platform_device *pdev)
+{
+	struct sun6i_dma_dev *sdc = platform_get_drvdata(pdev);
+
+	of_dma_controller_free(pdev->dev.of_node);
+	dma_async_device_unregister(&sdc->slave);
+
+	sun6i_kill_tasklet(sdc);
+
+	clk_disable_unprepare(sdc->clk);
+	reset_control_assert(sdc->rstc);
+
+	sun6i_dma_free(sdc);
+
+	return 0;
+}
+
+static struct of_device_id sun6i_dma_match[] = {
+	{ .compatible = "allwinner,sun6i-a31-dma" },
+	{ /* sentinel */ }
+};
+
+static struct platform_driver sun6i_dma_driver = {
+	.probe		= sun6i_dma_probe,
+	.remove		= sun6i_dma_remove,
+	.driver = {
+		.name		= "sun6i-dma",
+		.of_match_table	= sun6i_dma_match,
+	},
+};
+module_platform_driver(sun6i_dma_driver);
+
+MODULE_DESCRIPTION("Allwinner A31 DMA Controller Driver");
+MODULE_AUTHOR("Sugar <shuge@allwinnertech.com>");
+MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>");
+MODULE_LICENSE("GPL");
-- 
2.0.1

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-17 19:46 ` Maxime Ripard
@ 2014-07-24 12:13   ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-24 12:13 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul
  Cc: linux-kernel, linux-arm-kernel, linux-sunxi, kevin.z.m.zh, sunny,
	shuge, zhuzhenhua, Arnd Bergmann, andriy.shevchenko, dmaengine,
	Greg Kroah-Hartman, Andrew Morton

Vinod, Dan,

On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> Hi,
> 
> This patchset adds support for the DMA controller found in the
> Allwinner A31 and A23 SoCs.
> 
> This has been tested using the newly introduced SPI driver on an A31
> EVK. Support for DMA-driven SPI transfers will be the subject of
> another patch series.
> 
> This has been around for around 5 months now, and didn't get any
> review but nitpicks for three versions, so I feel like it could be
> merged quite quickly.

Ok, so, who should I bribe to get this merged?

The first version was posted on 2/24, we're at v11 already, and I
have only gotten a single mail from either of you.

Don't get me wrong, I'd be happy to make any change you deem necessary,
but in order to do so, I at least need to get a sign of life from you,
and so far, you've both been MIA.

We're pretty much in the same situation with the other DMA driver
Emilio has been sending for over a month now, which he is working on
as part of a GSoC project for the Linux Foundation, with not a single
maintainer review so far.

I'm getting quite tired of this honestly, and I'm not sure it sends
the right message to the hobbyists and companies that try to get
things right by contributing.

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-24 12:13   ` Maxime Ripard
@ 2014-07-24 23:44     ` Andrew Morton
  -1 siblings, 0 replies; 32+ messages in thread
From: Andrew Morton @ 2014-07-24 23:44 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Dan Williams, Vinod Koul, linux-kernel, linux-arm-kernel,
	linux-sunxi, kevin.z.m.zh, sunny, shuge, zhuzhenhua,
	Arnd Bergmann, andriy.shevchenko, dmaengine, Greg Kroah-Hartman

On Thu, 24 Jul 2014 14:13:15 +0200 Maxime Ripard <maxime.ripard@free-electrons.com> wrote:

> On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> > Hi,
> > 
> > This patchset adds support for the DMA controller found in the
> > Allwinner A31 and A23 SoCs.
> > 
> > This has been tested using the newly introduced SPI driver on an A31
> > EVK. Support for DMA-driven SPI transfers will be the subject of
> > another patch series.
> > 
> > This has been around for around 5 months now, and didn't get any
> > review but nitpicks for three versions, so I feel like it could be
> > merged quite quickly.
> 
> Ok, so, who should I bribe to get this merged?

Turns out I'm easily bribed.  The code looks pretty clean and simple
and is refreshingly free of comments, which only confuse people anyway.

I think we could do this as a single patch - is there any benefit to
splitting it apart like this?

The combinations of spin_lock()/spin_lock_irq() and spin_lock_irqsave()
are a bit scary - it's easy to get these optimisations wrong.  Has it
been thoroughly tested with lockdep enabled?
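For anyone wanting to run the test being asked about, lockdep-style lock
validation is enabled with a kernel debug config fragment along these lines
(these are standard mainline Kconfig options, not something this patch adds;
exact dependencies vary by kernel version):

```
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
```

With CONFIG_PROVE_LOCKING=y, lockdep will warn at runtime if a lock that is
taken from IRQ context is ever taken elsewhere without interrupts disabled,
which is exactly the spin_lock()/spin_lock_irqsave() mixing in question.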


* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-24 23:44     ` Andrew Morton
@ 2014-07-25  6:11       ` Vinod Koul
  -1 siblings, 0 replies; 32+ messages in thread
From: Vinod Koul @ 2014-07-25  6:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Maxime Ripard, Dan Williams, linux-kernel, linux-arm-kernel,
	linux-sunxi, kevin.z.m.zh, sunny, shuge, zhuzhenhua,
	Arnd Bergmann, andriy.shevchenko, dmaengine, Greg Kroah-Hartman

On Thu, Jul 24, 2014 at 04:44:20PM -0700, Andrew Morton wrote:
> On Thu, 24 Jul 2014 14:13:15 +0200 Maxime Ripard <maxime.ripard@free-electrons.com> wrote:
> 
> > On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> > > Hi,
> > > 
> > > This patchset adds support for the DMA controller found in the
> > > Allwinner A31 and A23 SoCs.
> > > 
> > > This has been tested using the newly introduced SPI driver on an A31
> > > EVK. Support for DMA-driven SPI transfers will be the subject of
> > > another patch series.
> > > 
> > > This has been around for around 5 months now, and didn't get any
> > > review but nitpicks for three versions, so I feel like it could be
> > > merged quite quickly.
> > 
> > Ok, so, who should I bribe to get this merged?
> 
> Turns out I'm easily bribed.  The code looks pretty clean and simple
> and is refreshingly free of comments, which only confuse people anyway.
Sorry for the delay, I will get this done today and merge to slave-dmaengine
next

> I think we could do this as a single patch - is there any benefit to
> splitting it apart like this?
Typically DT bindings are sent as a separate patch and reviewed independently
of the driver. Though not many people from the DT list have looked at them,
which slows down patches

> The combinations of spin_lock()/spin_lock_irq() and spin_lock_irqsave()
> are a bit scary - it's easy to get these optimisations wrong.  Has it
> been thoroughly tested with lockdep enabled?
Since the DMA completion is handled in the IRQ, it is actually essential to
have the irqsave versions in a dmaengine driver

Thanks
-- 
~Vinod


* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25  6:11       ` Vinod Koul
@ 2014-07-25  6:13         ` Vinod Koul
  -1 siblings, 0 replies; 32+ messages in thread
From: Vinod Koul @ 2014-07-25  6:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Maxime Ripard, Dan Williams, linux-kernel, linux-arm-kernel,
	linux-sunxi, kevin.z.m.zh, sunny, shuge, zhuzhenhua,
	Arnd Bergmann, andriy.shevchenko, dmaengine, Greg Kroah-Hartman

On Fri, Jul 25, 2014 at 11:41:30AM +0530, Vinod Koul wrote:
> On Thu, Jul 24, 2014 at 04:44:20PM -0700, Andrew Morton wrote:
> > On Thu, 24 Jul 2014 14:13:15 +0200 Maxime Ripard <maxime.ripard@free-electrons.com> wrote:
> > 
> > > On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> > > > Hi,
> > > > 
> > > > This patchset adds support for the DMA controller found in the
> > > > Allwinner A31 and A23 SoCs.
> > > > 
> > > > This has been tested using the newly introduced SPI driver on an A31
> > > > EVK. Support for DMA-driven SPI transfers will be the subject of
> > > > another patch series.
> > > > 
> > > > This has been around for around 5 months now, and didn't get any
> > > > review but nitpicks for three versions, so I feel like it could be
> > > > merged quite quickly.
> > > 
> > > Ok, so, who should I bribe to get this merged?
> > 
> > Turns out I'm easily bribed.  The code looks pretty clean and simple
> > and is refreshingly free of comments, which only confuse people anyway.
> Sorry for the delay, I will get this done today and merge to slave-dmaengine
> next
And it looks like Andrew merged them. Do you mind if I take these up through
the dmaengine tree?

-- 
~Vinod


* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25  6:13         ` Vinod Koul
@ 2014-07-25  7:14           ` Andrew Morton
  -1 siblings, 0 replies; 32+ messages in thread
From: Andrew Morton @ 2014-07-25  7:14 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Maxime Ripard, Dan Williams, linux-kernel, linux-arm-kernel,
	linux-sunxi, kevin.z.m.zh, sunny, shuge, zhuzhenhua,
	Arnd Bergmann, andriy.shevchenko, dmaengine, Greg Kroah-Hartman

On Fri, 25 Jul 2014 11:43:20 +0530 Vinod Koul <vinod.koul@intel.com> wrote:

> On Fri, Jul 25, 2014 at 11:41:30AM +0530, Vinod Koul wrote:
> > On Thu, Jul 24, 2014 at 04:44:20PM -0700, Andrew Morton wrote:
> > > On Thu, 24 Jul 2014 14:13:15 +0200 Maxime Ripard <maxime.ripard@free-electrons.com> wrote:
> > > 
> > > > On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> > > > > Hi,
> > > > > 
> > > > > This patchset adds support for the DMA controller found in the
> > > > > Allwinner A31 and A23 SoCs.
> > > > > 
> > > > > This has been tested using the newly introduced SPI driver on an A31
> > > > > EVK. Support for DMA-driven SPI transfers will be the subject of
> > > > > another patch series.
> > > > > 
> > > > > This has been around for around 5 months now, and didn't get any
> > > > > review but nitpicks for three versions, so I feel like it could be
> > > > > merged quite quickly.
> > > > 
> > > > Ok, so, who should I bribe to get this merged?
> > > 
> > > Turns out I'm easily bribed.  The code looks pretty clean and simple
> > > and is refreshingly free of comments, which only confuse people anyway.
> > Sorry for the delay, I will get this done today and merge to slave-dmaengine
> > next
> And it looks like Andrew merged them. Do you mind if I take these up through
> the dmaengine tree?

Sure.  If a patch turns up in linux-next I'll autodrop my copy.


* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-17 19:46 ` Maxime Ripard
@ 2014-07-25 13:12   ` Vinod Koul
  -1 siblings, 0 replies; 32+ messages in thread
From: Vinod Koul @ 2014-07-25 13:12 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> Hi,
> 
> This patchset adds support for the DMA controller found in the
> Allwinner A31 and A23 SoCs.
> 
> This has been tested using the newly introduced SPI driver on an A31
> EVK. Support for DMA-driven SPI transfers will be the subject of
> another patch series.
> 
> This has been around for around 5 months now, and didn't get any
> review but nitpicks for three versions, so I feel like it could be
> merged quite quickly.
I have applied this now.

Can you please send follow-up patches for these:
- I don't recall if I pointed this out earlier, but can we use direct
  conversion for calculating convert_burst() and convert_buswidth()? The
  latter at least seems doable
- don't use devm_request_irq(). You have the IRQ enabled while you kill the
  tasklet. This is too racy. You need to ensure no IRQs can be generated
  before killing the tasklet.
- use synchronize_irq() before killing tasklet

Thanks

-- 


* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25 13:12   ` Vinod Koul
@ 2014-07-25 16:37     ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-25 16:37 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

Hi Vinod,

On Fri, Jul 25, 2014 at 06:42:17PM +0530, Vinod Koul wrote:
> On Thu, Jul 17, 2014 at 09:46:14PM +0200, Maxime Ripard wrote:
> > Hi,
> > 
> > This patchset adds support for the DMA controller found in the
> > Allwinner A31 and A23 SoCs.
> > 
> > This has been tested using the newly introduced SPI driver on an A31
> > EVK. Support for DMA-driven SPI transfers will be the subject of
> > another patch series.
> > 
> > This has been around for around 5 months now, and didn't get any
> > review but nitpicks for three versions, so I feel like it could be
> > merged quite quickly.
> I have applied this now.

Thanks.

> Can you please send follow-up patches for these:
> - I don't recall if I pointed this out earlier, but can we use direct
>   conversion for calculating convert_burst() and convert_buswidth()? The
>   latter at least seems doable

Ok. Do you still want the error reporting for the invalid width and
burst size?

> - don't use devm_request_irq(). You have the IRQ enabled while you kill the
>   tasklet. This is too racy. You need to ensure no IRQs can be generated
>   before killing the tasklet.

Ok, would calling disable_irq() before killing the tasklet be an option for
you? That would allow us to keep devm_request_irq().

> - use synchronize_irq() before killing tasklet

Actually, I already do that, as you suggested previously; see
sun6i_kill_tasklet().

I'll also send patches for the various breakages and warnings spotted
by the autobuilders.

Thanks!
Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25 16:37     ` Maxime Ripard
@ 2014-07-25 16:42       ` Vinod Koul
  -1 siblings, 0 replies; 32+ messages in thread
From: Vinod Koul @ 2014-07-25 16:42 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> > Can you please send follow-up patches for these:
> > - I don't recall if I pointed this out earlier, but can we use direct
> >   conversion for calculating convert_burst() and convert_buswidth()? The
> >   latter at least seems doable
> 
> Ok. Do you still want the error reporting for the invalid width and
> burst size?
I think for convert_buswidth() we can do away with the switch case and convert
the numbers directly.

> > - don't use devm_request_irq(). You have the IRQ enabled while you kill the
> >   tasklet. This is too racy. You need to ensure no IRQs can be generated
> >   before killing the tasklet.
> 
> Ok, would calling disable_irq() before killing the tasklet be an option for
> you? That would allow us to keep devm_request_irq().
disable_irq() also synchronizes, so it might work. But then again, if the
probe fails for some other reason after registering the IRQ, you can get into
races with an uninitialized driver as well.

I think it might be easier to manage by working with plain old request_irq().

> I'll also send patches for the various breakages and warnings spotted
> by the autobuilders.
Yes please. I haven't seen the reports in detail yet, but they seem to be
mainly due to COMPILE_TEST. Do you really want that?

Yes, it did help me to compile for ARM; I didn't see any defconfig which has
this MACH in arm/configs.

-- 
~Vinod



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25 16:37     ` Maxime Ripard
@ 2014-07-25 16:45       ` Russell King - ARM Linux
  -1 siblings, 0 replies; 32+ messages in thread
From: Russell King - ARM Linux @ 2014-07-25 16:45 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Vinod Koul, andriy.shevchenko, Arnd Bergmann, linux-kernel,
	zhuzhenhua, dmaengine, linux-sunxi, kevin.z.m.zh, sunny, shuge,
	Dan Williams, linux-arm-kernel

On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> Hi Vinod,
> 
> On Fri, Jul 25, 2014 at 06:42:17PM +0530, Vinod Koul wrote:
> > - don't use devm_request_irq(). You have the irq enabled while you kill the
> >   tasklet. This is too racy. You need to ensure no irqs can be generated
> >   before killing the tasklets.
> 
> Ok, would calling disable_irq before killing the tasklet be an option
> for you? That would allow us to keep the devm_request_irq.

That's not really an acceptable approach if you can use shared interrupts.
A better alternative would be devm_free_irq(), to give a definite point at
which the interrupt is unregistered in the driver remove sequence.  That
allows you to keep the advantage of devm_request_irq() to clean up during
the initialisation side.

An alternative approach would be to ensure that the hardware is quiesced,
and interrupts are disabled.  Then call synchronize_irq() on it; at that
point, you can be certain that your interrupt handler will not process
any further interrupts for your device (though, in a shared interrupt
environment, it would still be called should a different device on the
shared line raise its interrupt).
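The two teardown orderings described above can be sketched as a rough remove path. Everything below is illustrative: the structure member names, the DMA_IRQ_EN register name, and the sun6i_dma_remove shape are invented for the example; only the kernel APIs (writel, devm_free_irq, synchronize_irq, tasklet_kill) are real.

```c
static int sun6i_dma_remove(struct platform_device *pdev)
{
	struct sun6i_dma_dev *sdev = platform_get_drvdata(pdev);

	/* Quiesce the hardware: mask every interrupt source so the
	 * controller cannot raise new interrupts. */
	writel(0, sdev->base + DMA_IRQ_EN);	/* register name assumed */

	/* Option 1: explicitly unregister the devm-managed handler,
	 * giving a definite point after which it cannot run... */
	devm_free_irq(&pdev->dev, sdev->irq, sdev);

	/* Option 2 (instead of the devm_free_irq() above): keep the
	 * handler registered but wait out any in-flight invocation:
	 *	synchronize_irq(sdev->irq);
	 * On a shared line the handler may still fire for other
	 * devices, so it must tolerate a quiesced controller. */

	/* Either way, only now is it safe to kill the tasklet. */
	tasklet_kill(&sdev->task);

	return 0;
}
```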

-- 
FTTC broadband for 0.8mile line: currently at 9.5Mbps down 400kbps up
according to speedtest.net.



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25 16:45       ` Russell King - ARM Linux
@ 2014-07-28  9:41         ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-28  9:41 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Vinod Koul, andriy.shevchenko, Arnd Bergmann, linux-kernel,
	zhuzhenhua, dmaengine, linux-sunxi, kevin.z.m.zh, sunny, shuge,
	Dan Williams, linux-arm-kernel

Hi Russell,

On Fri, Jul 25, 2014 at 05:45:19PM +0100, Russell King - ARM Linux wrote:
> On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> > Hi Vinod,
> > 
> > On Fri, Jul 25, 2014 at 06:42:17PM +0530, Vinod Koul wrote:
> > > - don't use devm_request_irq(). You have the irq enabled while you kill the
> > >   tasklet. This is too racy. You need to ensure no irqs can be generated
> > >   before killing the tasklets.
> > 
> > Ok, would calling disable_irq before killing the tasklet be an option
> > for you? That would allow us to keep the devm_request_irq.
> 
> That's not really an acceptable approach if you can use shared
> interrupts.

We don't, but yes, I see your point.

> A better alternative would be devm_free_irq(), to give a definite point at
> which the interrupt is unregistered in the driver remove sequence.  That
> allows you to keep the advantage of devm_request_irq() to clean up during
> the initialisation side.

Ah, right, thanks.

> An alternative approach would be to ensure that the hardware is quiesced,
> and interrupts are disabled.  Then call synchronize_irq() on it; at that
> point, you can be certain that your interrupt handler will not process
> any further interrupts for your device (though, in a shared interrupt
> environment, it would still be called should a different device on the
> shared line raise its interrupt).

Actually, unless I'm missing something, that's pretty much what we're
doing here.

I disable all interrupts in the DMA controller, call synchronize_irq,
and then kill the tasklet. The only interrupts I could get are spurious
ones, and we made sure that kind of interrupt couldn't schedule the
tasklet either.

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-25 16:42       ` Vinod Koul
@ 2014-07-28 10:14         ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-28 10:14 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

On Fri, Jul 25, 2014 at 10:12:18PM +0530, Vinod Koul wrote:
> On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> > > Can you please send follow-up patches for these:
> > > - I don't recall if I pointed it out earlier, but can we use direct conversion
> > >   for calculating convert_burst() and convert_buswidth()? The latter at least
> > >   seems doable
> > 
> > Ok. Do you still want the error reporting for the invalid width and
> > burst size?
> I think for convert_buswidth() we can do away with the switch case and convert
> the numbers directly.

Well, it's already using a switch. Do you want me to remove the function
completely and move the switch to where it's used? It's used twice now,
so I'd like to avoid duplicating it.

> > > - don't use devm_request_irq(). You have the irq enabled while you kill the
> > >   tasklet. This is too racy. You need to ensure no irqs can be generated
> > >   before killing the tasklets.
> > 
> > Ok, would calling disable_irq before killing the tasklet be an option
> > for you? That would allow us to keep the devm_request_irq.
> disable_irq also does a synchronize, so it might work. But then again, if the
> probe fails after registering the irq for some other reason, you can get into
> races with an uninitialized driver as well.
> 
> I think it might be easier to manage by working with plain old request_irq()

Yes, devm_free_irq, as Russell suggested, seems to address your concerns
while not having to care about it at probe time.

> > I'll also send patches for the various breakages and warnings spotted
> > by the autobuilders.
> Yes please. I haven't seen the reports in detail yet, but they seem to be
> mainly due to COMPILE_TEST. Do you really want that?

I don't know, I thought it was the new "default". And it allows us to
catch issues like the ones we've seen.

> Yes, it did help me to compile for ARM; I didn't see any defconfig which has
> this MACH in arm/configs.

I don't know what kernel you were looking at, but it was introduced
in 3.16.
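For reference, the COMPILE_TEST coverage being discussed is normally wired up in Kconfig. The entry below is a reconstruction of what the driver's option presumably looks like (based on the cover letter's note about depending on COMPILE_TEST), not a quote from the tree:

```
config DMA_SUN6I
	tristate "Allwinner A31 SoCs DMA support"
	depends on MACH_SUN6I || COMPILE_TEST
	depends on RESET_CONTROLLER
	select DMA_ENGINE
	select DMA_VIRTUAL_CHANNELS
```

With the `|| COMPILE_TEST` alternative, autobuilders on other architectures can build the driver and catch warnings even without a sunxi defconfig.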

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-28 10:14         ` Maxime Ripard
@ 2014-07-28 14:18           ` Vinod Koul
  -1 siblings, 0 replies; 32+ messages in thread
From: Vinod Koul @ 2014-07-28 14:18 UTC (permalink / raw)
  To: Maxime Ripard
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

On Mon, Jul 28, 2014 at 12:14:02PM +0200, Maxime Ripard wrote:
> On Fri, Jul 25, 2014 at 10:12:18PM +0530, Vinod Koul wrote:
> > On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> > > > Can you please send follow-up patches for these:
> > > > - I don't recall if I pointed it out earlier, but can we use direct conversion
> > > >   for calculating convert_burst() and convert_buswidth()? The latter at least
> > > >   seems doable
> > > 
> > > Ok. Do you still want the error reporting for the invalid width and
> > > burst size?
> > I think for convert_buswidth() we can do away with the switch case and convert
> > the numbers directly.
> 
> Well, it's already using a switch. Do you want me to remove the function
> completely and move the switch to where it's used? It's used twice now,
> so I'd like to avoid duplicating it.
You can probably remove the switch in these functions by converting the
values arithmetically.

> > > > - don't use devm_request_irq(). You have the irq enabled while you kill the
> > > >   tasklet. This is too racy. You need to ensure no irqs can be generated
> > > >   before killing the tasklets.
> > > 
> > > Ok, would calling disable_irq before killing the tasklet be an option
> > > for you? That would allow us to keep the devm_request_irq.
> > disable_irq also does a synchronize, so it might work. But then again, if the
> > probe fails after registering the irq for some other reason, you can get into
> > races with an uninitialized driver as well.
> > 
> > I think it might be easier to manage by working with plain old request_irq()
> 
> Yes, devm_free_irq, as Russell suggested, seems to address your concerns
> while not having to care about it at probe time.
OK, please send the fix.

-- 
~Vinod



* Re: [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller
  2014-07-28 14:18           ` Vinod Koul
@ 2014-07-29 16:03             ` Maxime Ripard
  -1 siblings, 0 replies; 32+ messages in thread
From: Maxime Ripard @ 2014-07-29 16:03 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Dan Williams, linux-kernel, linux-arm-kernel, linux-sunxi,
	kevin.z.m.zh, sunny, shuge, zhuzhenhua, Arnd Bergmann,
	andriy.shevchenko, dmaengine

On Mon, Jul 28, 2014 at 07:48:12PM +0530, Vinod Koul wrote:
> On Mon, Jul 28, 2014 at 12:14:02PM +0200, Maxime Ripard wrote:
> > On Fri, Jul 25, 2014 at 10:12:18PM +0530, Vinod Koul wrote:
> > > On Fri, Jul 25, 2014 at 06:37:46PM +0200, Maxime Ripard wrote:
> > > > > Can you please send follow-up patches for these:
> > > > > - I don't recall if I pointed it out earlier, but can we use direct conversion
> > > > >   for calculating convert_burst() and convert_buswidth()? The latter at least
> > > > >   seems doable
> > > > 
> > > > Ok. Do you still want the error reporting for the invalid width and
> > > > burst size?
> > > I think for convert_buswidth() we can do away with the switch case and convert
> > > the numbers directly.
> > 
> > Well, it's already using a switch. Do you want me to remove the function
> > completely and move the switch to where it's used? It's used twice now,
> > so I'd like to avoid duplicating it.
> You can probably remove the switch in these functions by converting the
> values arithmetically.
> 
> > > > > - don't use devm_request_irq(). You have the irq enabled while you kill the
> > > > >   tasklet. This is too racy. You need to ensure no irqs can be generated
> > > > >   before killing the tasklets.
> > > > 
> > > > Ok, would calling disable_irq before killing the tasklet be an option
> > > > for you? That would allow us to keep the devm_request_irq.
> > > disable_irq also does a synchronize, so it might work. But then again, if the
> > > probe fails after registering the irq for some other reason, you can get into
> > > races with an uninitialized driver as well.
> > > 
> > > I think it might be easier to manage by working with plain old request_irq()
> > 
> > Yes, devm_free_irq, as Russell suggested, seems to address your concerns
> > while not having to care about it at probe time.
> OK, please send the fix.

Ok, will do tomorrow.

Thanks!
Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com



end of thread, other threads:[~2014-07-29 16:05 UTC | newest]

Thread overview: 32+ messages
2014-07-17 19:46 [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller Maxime Ripard
2014-07-17 19:46 ` [PATCH v11 1/2] Documentation: dt: Add Allwinner A31 DMA controller bindings Maxime Ripard
2014-07-17 19:46 ` [PATCH v11 2/2] dmaengine: sun6i: Add driver for the Allwinner A31 DMA controller Maxime Ripard
2014-07-24 12:13 ` [PATCH v11 0/2] Add support for the Allwinner A31 DMA Controller Maxime Ripard
2014-07-24 23:44   ` Andrew Morton
2014-07-25  6:11     ` Vinod Koul
2014-07-25  6:13       ` Vinod Koul
2014-07-25  7:14         ` Andrew Morton
2014-07-25 13:12 ` Vinod Koul
2014-07-25 16:37   ` Maxime Ripard
2014-07-25 16:42     ` Vinod Koul
2014-07-28 10:14       ` Maxime Ripard
2014-07-28 14:18         ` Vinod Koul
2014-07-29 16:03           ` Maxime Ripard
2014-07-25 16:45     ` Russell King - ARM Linux
2014-07-28  9:41       ` Maxime Ripard